id (string) | meta (dict) | quality_signals (dict) | content_image (sequence) | md (string)
---|---|---|---|---
1911.03376 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 50532,
"num_imgs": 5,
"llama3_tokens_count": 14670
} | [
"content_image/1911.03376/x1.png",
"content_image/1911.03376/x2.png",
"content_image/1911.03376/x4.png",
"content_image/1911.03376/x6.png",
"content_image/1911.03376/x7.png"
] | # First principles calculation of shift current in chalcopyrite semiconductor ZnSnP\({}_{2}\)
Banasree Sadhukhan
b.sadhukhan@ifw-dresden.de
Leibniz Institute for Solid State and Materials Research IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany
Yang Zhang
Leibniz Institute for Solid State and Materials Research IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany
Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany
Rajyavardhan Ray
r.ray@ifw-dresden.de
Leibniz Institute for Solid State and Materials Research IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany
Dresden Center for Computational Materials Science, TU Dresden, D-01062 Dresden, Germany
Jeroen van den Brink
Leibniz Institute for Solid State and Materials Research IFW Dresden, Helmholtzstr. 20, 01069 Dresden, Germany
Dresden Center for Computational Materials Science, TU Dresden, D-01062 Dresden, Germany
February 17, 2024
###### Abstract
The bulk photovoltaic effect generates intrinsic photocurrents in materials without inversion symmetry. The shift current is one of the bulk photovoltaic phenomena related to the Berry phase of the constituent electronic bands: photo-excited carriers coherently shift in real space due to the difference in the Berry connection between the valence and conduction bands. Ferroelectric semiconductors and Weyl semimetals are known to exhibit such nonlinear optical phenomena. Here we consider the chalcopyrite semiconductor ZnSnP\({}_{2}\), which lacks inversion symmetry, and calculate its shift current conductivity. We find that the magnitude of the shift current is comparable to recently measured values for other ferroelectric semiconductors and an order of magnitude larger than in bismuth ferrite. The peak response for both the optical and the shift current conductivity, which mainly comes from the P-3\(p\) and Sn-5\(p\) orbitals, lies several eV above the bandgap.
## I Introduction
Ternary compounds A\({}^{II}\)B\({}^{IV}\)C\({}^{V}_{2}\) and A\({}^{I}\)B\({}^{III}\)C\({}^{VI}_{2}\) (where A and B are metals and C belongs to the nitrogen or sulfur family) having the chalcopyrite structure are of considerable interest because of their structural, mechanical, thermoelectric and nonlinear optical properties Ohmer and Pandey (1998). They are also promising materials for spintronics applications because of their ability to host ferromagnetism at room temperature Medvedkin et al. (2000); Cho et al. (2002). The chalcopyrite structures are derived from the binary analogs M\({}^{III}\)C\({}^{V}\) and M\({}^{II}\)C\({}^{VI}\) in cubic zinc-blende structures by doubling the unit cell along \(c\), leading to a body-centered tetragonal unit cell. Each cation (anion) is surrounded by four nearest-neighbor anions (cations), as in the zinc-blende structure. The A and B cations alternately occupy the Zn positions, forming two tetrahedrally bonded cation sublattices. The reduced symmetry lowers the band gap significantly in the ternary compounds compared to their binary analogs Yeh et al. (1994). This spatial symmetry reduction also plays an important role in realizing topological insulating and Weyl semimetallic phases in some ternary chalcopyrites Feng et al. (2011); Ruan et al. (2016); Juneja et al. (2018); Lau and Ortix (2019); Fu and Kane (2007); Bernevig et al. (2006); Xiao et al. (2010); Chadov et al. (2010); Lin et al. (2010).
Of particular interest is the ternary compound ZnSnP\({}_{2}\) (ZSP), of type A\({}^{II}\)B\({}^{IV}\)C\({}^{V}_{2}\), which is now recognized as an alternative photoabsorber material in solar cell applications Scanlon and Walsh (2012); Kumagai et al. (2014). It undergoes a structural transition from the ordered chalcopyrite ZnSnP\({}_{2}\) (CH-ZSP) structure to a disordered sphalerite structure (SP-ZSP) at 990 K (Shay and Wernick, 2017). In SP-ZSP, the Sn and Zn atoms are randomly distributed over the cation sub-lattice. In comparison, in the ordered CH-ZSP, the P\({}^{3-}\) anions are surrounded by two Zn\({}^{2+}\) (A-type) and two Sn\({}^{4+}\) (B-type) cations, while each cation is surrounded by four anions. Owing to the possibility of bandgap engineering and the tunability of the electronic and optical properties, the electronic structure and properties of ZSP are being investigated both theoretically and experimentally Nakatani et al. (2008); St-Jean,P. et al. (2010); Scanlon and Walsh (2012); Mishra and Ganguli (2013); Sahin et al. (2012); Xu et al. (2015); Mukherjee et al. (2018). In comparison to other well-known chalcopyrite ternary compounds, an important feature of CH-ZSP is that its ground state is a trivial insulator, yet it lacks inversion symmetry due to the displacement of the anion positions (anion shift) towards one of the cations compared to the ideal case obtained by doubling the cubic zinc-blende unit cell (see Sec. III for details). It is also anisotropic due to the presence of two types of cation bonding, which gives rise to high birefringence.
An interesting and potentially useful property of non-centrosymmetric crystals is that symmetry allows incident photons to induce a photocurrent in such materials. This is a bulk photovoltaic phenomenon and the induced current is referred to as the shift current Sipe and Shkrebtii (2000); Morimoto and Nagaosa (2016); Nastos and Sipe (2006, 2010); Zhang et al. (2018); Morimoto and Nagaosa (2016). In contrast to a conventional drift photocurrent under an electric field, the shift current originates from charge-center shifts in real space due to the difference in the Berry connection between the valence and conduction bands involved in the optical excitation process Morimoto and Nagaosa (2016, 2016). Recently, Weyl semimetals (WSMs) have been theoretically investigated for such nonlinear optical phenomena Zhang et al. (2018); Goswami et al. (2015); Ishizuka et al. (2016); Sodemann and Fu (2015); Morimoto et al. (2016); König et al. (2017); Zhang et al. (2018); Yang et al. (2017).
The shift current, being less dissipative in character, has remarkable advantages over the conventional drift photocurrent driven by a built-in potential or an external electric field (Choi et al., 2009; Yang et al., 2010; Grinberg et al., 2013). For example, it depends on the polarization direction of the incident photon field, is insensitive to the sample resistivity or to barrier formation near the electrodes Ogawa et al. (2017), and is independent of the external bias voltage Tan et al. (2016); Young and Rappe (2012).
Moreover, photocurrents induced by optical transitions obeying dipole and polarization selection rules naturally permit ultrafast manipulation. In particular, shift currents induced by properly tuned external pulsed photon sources can create coherent electromagnetic wave emission in the terahertz frequency regime, where control of the ellipticity and chirality over a broad spectral range is notoriously difficult Amer,N. et al. (2005); Gao et al. (2020).
Here we show, using a recently developed multi-band approach Zhang et al. (2018), that the shift current conductivity Zhang et al. (2018); von Baltz and Kraut (1981); Kraut and von Baltz (1979) in CH-ZSP is comparable to that in SbSI Sotome et al. (2019), and an order of magnitude larger than in the well-known multiferroic BiFeO\({}_{3}\)Young et al. (2012). The multi-band approach involves the full set of Bloch states, and the sum over all intermediate states participating in three-band transitions is taken into account. The three-band virtual transitions make the dominant contributions and are distributed uniformly in momentum space. In comparison to the widely used two-band effective models, estimates based on the multi-band approach are therefore more accurate and highly desirable for materials applications.
A key challenge for an accurate density functional theory (DFT) based description of insulating materials is the well-known underestimation of the bandgap by local and semi-local functionals. This problem can be cured by employing schemes that take into account the self-energy of the many-body electronic system, such as the GW approximation Aryasetiawan and Gunnarsson (1998) and the hybrid exchange-correlation (HSE) functional Heyd,Jochen et al. (2003). However, these are computationally very expensive. At the same time, while the HSE functionals improve the bandgap towards the experimental value, they may overestimate the lattice constants and the atomic displacements associated with structural distortion, which may eventually lead to inaccurate estimates of material properties. For example, in the non-centrosymmetric compound BaTiO\({}_{3}\), such overestimation affects the ferroelectricity and gives an inaccurate optical response Sanna et al. (2011); Gupta et al. (2004); Bagayoko et al. (1998).
Traditionally, the deficiency associated with the bandgap is addressed by applying a simple “scissors operation” Godby et al. (1988); Levine and Allan (1989) to the standard DFT [generalized gradient approximation/local density approximation (GGA/LDA)] bands, whereby the conduction bands are rigidly shifted such that the resulting electronic bandgap matches the experimental value. Within this procedure, the optical response obtained within LDA/GGA is shifted by the same amount (referred to as the scissor-shift in the following) and retains the features obtained from standard DFT Nastos et al. (2005). As an alternative, a semi-empirical DFT+\(U\) approximation might be used to improve the bandgap values. Very recently, the empirical Tran-Blaha modified Becke-Johnson (TB-mBJ) potential Tran and Blaha (2009) was shown to reach an accuracy comparable to the much more expensive hybrid-functional and GW approaches at a computational cost comparable to standard DFT calculations. Here we thus consider the latter three approaches, _viz._, the scissors operation (GGA\(+\Delta\)), DFT+\(U\) and TB-mBJ methods, and discuss their implications for the electronic and optical properties of CH-ZSP.
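To make the scissors operation concrete, the sketch below (an illustration only; the array layout and variable names are our own assumptions, not part of the original calculation) rigidly shifts all conduction-band eigenvalues by a constant \(\Delta\), which is all the GGA\(+\Delta\) correction amounts to.

```python
import numpy as np

def scissor_shift(bands, n_occ, delta):
    """Rigidly shift the conduction bands upward by `delta` (eV).

    bands : array of shape (n_kpoints, n_bands) with DFT eigenvalues in eV
    n_occ : number of occupied (valence) bands
    delta : scissors correction in eV
    """
    shifted = bands.copy()
    shifted[:, n_occ:] += delta   # only the unoccupied (conduction) bands move
    return shifted

# Example: matching the GGA gap of 0.69 eV to the experimental 1.68 eV
# would correspond to delta = 1.68 - 0.69, i.e. roughly 1 eV (see Sec. III).
```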
## II Computational details
We performed density functional theory (DFT) calculations within the Perdew-Burke-Ernzerhof (PBE) implementation Perdew et al. (1996) of the GGA functional using the full-potential local-orbital (FPLO) code Koepernik and Eschrig (1999) [1]. Self-consistent calculations employing the default scalar relativistic approximation were performed on a \(k\)-mesh with 18 \(\times\) 18 \(\times\) 18 subdivisions. Starting from the experimental structure Vaipolin et al. (1968), several crystal structures with different unit cell volumes \(V\) were considered: \(0.90V_{\rm exp}\leq V\leq 1.10V_{\rm exp}\), where \(V_{\rm exp}\) is the unit cell volume of the experimental crystal structure. For each case, the internal parameters (atomic positions) were optimized such that the net force on each atom was less than 1 meV/Å, and the ground state energy was evaluated. The optimized structure was used for the further detailed study of the electronic and optical properties. Spin-orbit effects are expected to be small and were, therefore, not considered (see Appendix A).
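The volume optimization described above can be summarized in a short sketch: total energies computed at scaled volumes between \(0.90V_{\rm exp}\) and \(1.10V_{\rm exp}\) are fitted near the minimum to locate the equilibrium volume. The energies below are synthetic placeholders for illustration only; in practice they come from the self-consistent FPLO runs.

```python
import numpy as np

# Scaled volumes V/V_exp at which total energies were computed (see text);
# the energies here are hypothetical placeholder values, for illustration only.
v = np.linspace(0.90, 1.10, 11)
e = 0.8 * (v - 0.99)**2 - 42.0        # synthetic E(V) curve in eV

# Quadratic fit around the minimum gives the equilibrium volume
c2, c1, c0 = np.polyfit(v, e, 2)
v_eq = -c1 / (2.0 * c2)
print(f"equilibrium volume ~ {v_eq:.3f} V_exp")
```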
To overcome the issue of bandgap underestimation, both DFT+\(U\) and TB-mBJ calculations were carried out. The on-site orbital-dependent electron-electron correlation (\(U\)) was applied to the Zn-3\(d\) as well as the P-3\(p\) states, and the evolution of the bandgap was studied.
The TB-mBJ calculations Tran and Blaha (2009) were carried out using the full-potential Augmented Plane Waves + local orbital (APW+lo) method as implemented in the WIEN2k code Blaha et al. (2001). Good quantitative and qualitative agreement between the two codes was obtained within the scalar relativistic GGA calculations. For the TB-mBJ potential, the self-consistent \(c\) parameter was used Tran and Blaha (2009). The energy convergence of the obtained solutions is better than \(10^{-5}\) Ryd per unit cell and the charge convergence is better than \(10^{-4}\) e/a.u.\({}^{3}\).
The optical properties within the linear response theory were obtained using the well-known relations: the imaginary part of the dielectric function is given by
\[\epsilon_{2}^{\alpha\beta}(\omega)={\rm Im}[\epsilon_{\alpha\beta}(\omega)]=-\frac{4\pi^{2}e^{2}}{m_{0}^{2}\omega^{2}}\int d{\bf k}\sum_{n,l}\left(f_{n}-f_{l}\right)\frac{\langle\mathbf{k}n|\hat{v}_{\alpha}|\mathbf{k}l\rangle\langle\mathbf{k}l|\hat{v}_{\beta}|\mathbf{k}n\rangle}{(E_{\mathbf{k}n}-E_{\mathbf{k}l}-\hbar\omega-i\delta)}\,,\] (1)
where \(\alpha,\beta=(x,y,z)\) are the Cartesian coordinates, \(\hat{v}_{\alpha}=\hat{p}_{\alpha}/m_{0}\) is the velocity operator along \(\alpha\), \(m_{0}\) is the free electron mass, \(|{\bf k}n\rangle\) are the wavefunctions corresponding to the bands with energy \(E_{{\bf k}n}\) at momentum \({\bf k}\) and band index \(n\), \(f_{n}\equiv f(E_{\mathbf{k}n})\) is the Fermi function for the state with energy \(E_{\mathbf{k}n}\), and \(\hbar\omega\) is the incident photon energy. \(\delta=\hbar/\tau_{s}\) is the broadening parameter, which depends inversely on the single-particle relaxation time \(\tau_{s}\) associated with the quantum mechanical broadening. The real part can be obtained via the Kramers-Kronig relation:
\[\epsilon_{1}^{\alpha\beta}(\omega)={\rm Re}[\epsilon_{\alpha\beta}(\omega)]=\delta_{\alpha\beta}+\frac{1}{\pi}\mathcal{P}\int_{-\infty}^{\infty}d\omega^{\prime}\,\frac{{\rm Im}[\epsilon_{\alpha\beta}(\omega^{\prime})]}{\omega-\omega^{\prime}}\,.\] (2)
All optical response functions can now be derived from these. In particular, the optical conductivity is
\[\sigma_{\alpha\beta}(\omega)=\frac{\omega\epsilon_{2}^{\alpha\beta}(\omega)}{4 \pi}\,.\] (3)
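A minimal numerical sketch of Eqs. (2) and (3) is given below, assuming \({\rm Im}[\epsilon(\omega)]\) is available on a uniform frequency grid; the principal value is handled crudely by skipping the singular point, and the prefactor follows the Gaussian-unit convention of Eq. (3). This is an illustration only, not the production implementation used here.

```python
import numpy as np

def kramers_kronig(omega, eps2):
    """Re[eps](w) from Im[eps](w) via Eq. (2), evaluated as a discrete
    principal-value sum on a uniform grid; the point w' = w is skipped."""
    dw = omega[1] - omega[0]
    eps1 = np.ones_like(omega)               # the delta_{alpha beta} term
    for i, w in enumerate(omega):
        mask = np.arange(omega.size) != i    # crude principal-value treatment
        eps1[i] += dw / np.pi * np.sum(eps2[mask] / (w - omega[mask]))
    return eps1

def optical_conductivity(omega, eps2):
    """sigma(w) = w * eps2(w) / (4 pi), Eq. (3)."""
    return omega * eps2 / (4.0 * np.pi)

# Usage: eps1 = kramers_kronig(w_grid, eps2_grid)
#        sigma = optical_conductivity(w_grid, eps2_grid)
```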
To calculate the shift current response, we used the general relation for the photoconductivity in quadratic response theory Zhang et al. (2018); von Baltz and Kraut (1981); Kraut and von Baltz (1979):
\[\sigma^{\gamma}_{\alpha\beta}=\frac{|e|^{3}}{8\pi^{3}\omega^{2}}{\rm Re}\bigg\{\phi_{\alpha\beta}\sum_{\Omega=\pm\omega}\sum_{l,m,n}\int_{BZ}d{\bf k}\,(f_{l}-f_{n})\frac{\langle{\bf k}n|\hat{v}_{\alpha}|{\bf k}l\rangle\langle{\bf k}l|\hat{v}_{\beta}|{\bf k}m\rangle\langle{\bf k}m|\hat{v}_{\gamma}|{\bf k}n\rangle}{(E_{\mathbf{k}n}-E_{\mathbf{k}m}-i\delta)(E_{\mathbf{k}n}-E_{\mathbf{k}l}+\hbar\Omega-i\delta)}\bigg\}.\] (4)
The conductivity \(\sigma_{\alpha\beta}^{\gamma}\) (\(\alpha,\beta,\gamma=x,y,z\)) is a third-rank tensor representing the photocurrent \(J_{\gamma}\) generated by an electric field via \(J_{\gamma}=\sigma_{\alpha\beta}^{\gamma}{{\mathcal{E}}^{*}_{\alpha}}{\mathcal{E}}_{\beta}\). \(\phi_{\alpha\beta}\) is the phase difference between the driving fields \({\mathcal{E}}_{\alpha}\) and \({\mathcal{E}}_{\beta}\). The real part of the integral in Eq. (4) describes the shift current response under linearly polarized light. Note that while the above equation does not explicitly reflect the topological nature of the shift current, it depends on the topological Berry connection and Berry curvature Young and Rappe (2012); Morimoto and Nagaosa (2016); Zhang et al. (2018).
The starting point for the shift current calculation is the bandstructure and the corresponding eigenstates and energies in the Brillouin zone. To this end, a tight-binding model was obtained using maximally projected Wannier functions (WFs) for the Zn-3\(d\), Sn-4\(d\), 5\(s\), 5\(p\) and P-3\(p\) orbitals in the energy range of -7.0 eV to 5.0 eV. The typical mismatch between the tight-binding model derived from such Wannier functions and the self-consistent DFT bandstructure was \(\lesssim 1\) meV. For the integral in Eq. (4), the Brillouin zone (BZ) was sampled by a \(200\times 200\times 200\)\(k\)-mesh with satisfactory convergence: the value of the conductivity changes by less than \(3-4\)% upon further refinement of the \(k\)-mesh. A typical value of the broadening parameter was used for both the linear and the non-linear response: in ordinary metals and semiconductors, the ratio of the transport relaxation time (\(\tau_{t}\)) to the single-particle relaxation time associated with the quantum mechanical broadening (\(\tau_{s}\)) is \(\tau_{t}/\tau_{s}=1\)Hong et al. (2009); Hwang and Das Sarma (2008); Das et al. (1993); Sadhukhan et al. (2017). In semiconductors, the transport relaxation time \(\tau_{t}\) is of the order of femtoseconds (10\({}^{-15}\) s) at room temperature Klick,Alwin et al. (2019); Inuzuka,Fumikazu et al. (2004), leading to \(\delta=\hbar/\tau_{s}\approx 0.1\) eV.
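For orientation, a minimal sketch of how the momentum-space sum in Eq. (4) can be evaluated on a discrete \(k\)-mesh is given below. The array layout, the toy random inputs, and the omission of the overall prefactor \(|e|^{3}/8\pi^{3}\omega^{2}\), the phase \(\phi_{\alpha\beta}\), and the \(k\)-space volume element are our own simplifications; in the actual calculation the energies, occupations, and velocity matrix elements come from the Wannier tight-binding model described above.

```python
import numpy as np

def shift_current_sum(E, v, f, omega, delta, alpha, beta, gamma):
    """Triple-band k-sum of Eq. (4) on a discrete mesh (prefactors omitted).

    E : (nk, nb)        band energies E_{kn}
    v : (nk, 3, nb, nb) velocity matrix elements <kn|v_a|km>
    f : (nk, nb)        occupations f(E_{kn})
    """
    nk, nb = E.shape
    total = 0.0 + 0.0j
    for ik in range(nk):
        Ek, vk, fk = E[ik], v[ik], f[ik]
        for Om in (+omega, -omega):
            for n in range(nb):
                for l in range(nb):
                    df = fk[l] - fk[n]
                    if df == 0.0:
                        continue
                    d2 = Ek[n] - Ek[l] + Om - 1j * delta
                    for m in range(nb):
                        d1 = Ek[n] - Ek[m] - 1j * delta
                        num = vk[alpha, n, l] * vk[beta, l, m] * vk[gamma, m, n]
                        total += df * num / (d1 * d2)
    return total.real / nk        # BZ average of the real part, as in Eq. (4)

# Toy usage with random Hermitian inputs (placeholders for the Wannier model):
rng = np.random.default_rng(0)
nk, nb = 4, 6
E = np.sort(rng.normal(size=(nk, nb)), axis=1)
v = rng.normal(size=(nk, 3, nb, nb)) + 1j * rng.normal(size=(nk, 3, nb, nb))
v = 0.5 * (v + np.conj(np.swapaxes(v, -1, -2)))   # Hermitian in the band indices
f = (E < 0).astype(float)
print(shift_current_sum(E, v, f, omega=1.0, delta=0.1, alpha=1, beta=2, gamma=0))
```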
ZnSnP\({}_{2}\) belongs to the \(D_{2d}\) (\(\bar{4}2m\)) point group in the ferroelectric phase. Therefore, its second-order photoconductivity tensor (\(\sigma^{\gamma}_{\alpha\beta}\)) has the form
\[\sigma_{\alpha\beta}^{\gamma}=\left(\begin{array}{cccccc}0&0&0&\sigma^{x}_{yz}&0&0\\ 0&0&0&0&\sigma^{y}_{xz}&0\\ 0&0&0&0&0&\sigma^{z}_{xy}\end{array}\right)\]
The second-harmonic susceptibility \(\chi_{\alpha\beta}^{\gamma}\) (\(\chi_{\alpha\beta}^{\gamma}=\sigma_{\alpha\beta}^{\gamma}/(2i\omega\epsilon)\)) is governed by the same symmetry and, therefore, has a similar form. The crystal has a mirror reflection \(M_{xy}\) in the \(x-y\) plane, which exchanges the \(x\) and \(y\) indices. In addition, the \(4_{2}\) screw rotation symmetry about the \(z\) axis gives \(\sigma^{x}_{yz}=\sigma^{y}_{xz}\), leaving only two independent nonlinear optical photoconductivity tensor elements, \(\sigma^{x}_{yz}\) and \(\sigma^{z}_{xy}\).
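This symmetry counting can be checked numerically. The short sketch below is our own illustration: the generator matrices are assumed to represent the improper fourfold rotation about \(z\) and the diagonal mirror \(M_{xy}\) described above. Averaging a random rank-3 tensor, symmetric in its last two indices, over the point group leaves exactly the components \(\sigma^{x}_{yz}=\sigma^{y}_{xz}\) and \(\sigma^{z}_{xy}\).

```python
import numpy as np
from itertools import product

# Assumed generators of the point group in the orientation used in the text
S4z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., -1.]])   # improper 4-fold about z
Mxy = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])      # diagonal mirror x <-> y

# Build the full group by closure over the generators
group = [np.eye(3)]
grew = True
while grew:
    grew = False
    for A, B in product(list(group), (S4z, Mxy)):
        C = A @ B
        if not any(np.allclose(C, G) for G in group):
            group.append(C)
            grew = True

# Average a random rank-3 tensor (symmetric in its last two indices, like
# sigma^gamma_{alpha beta}) over the group; only symmetry-allowed parts survive
T = np.random.rand(3, 3, 3)
T = 0.5 * (T + T.transpose(0, 2, 1))
P = sum(np.einsum('gi,aj,bk,ijk->gab', R, R, R, T) for R in group) / len(group)

xyz = 'xyz'
for g, a, b in product(range(3), repeat=3):
    if a <= b and abs(P[g, a, b]) > 1e-12:
        print(f"sigma^{xyz[g]}_{xyz[a]}{xyz[b]} = {P[g, a, b]:+.4f}")
```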
## III Results and Discussion
<figure><img src="content_image/1911.03376/x1.png"><figcaption>Figure 1: Unit cell of the chalcopyrite ZnSnP2 (CH-ZSP) lattice: (a) sideview, (b) top view. It has two twofold glide rotational and mirror symmetriesC2(x), C2(y), Mxy and Mx−y. (c) Brillouin zone (BZ) along with the highsymmetric points.</figcaption></figure>
Ternary ZSP crystallizes in a body-centered tetragonal structure, which in the chalcopyrite phase (CH-ZSP) has the space group I\(\bar{4}\)2d (No. 122) with eight atoms per primitive unit cell. It is essentially a superlattice of the zinc-blende structure obtained by doubling the zinc-blende unit cell along the \(z\) direction. The unit cell of ZnSnP\({}_{2}\) is shown in Fig. 1(a). In the ideal zinc-blende structure of a binary compound, each anion has four identical cations as nearest neighbors, so all four bond lengths are equal and the charge distribution is identical around each bond. Consequently, in a binary compound with the zinc-blende structure, \(u\) is 0.25 and \(\eta=c/a=1\). Therefore, the ideal case for the doubled unit cell corresponds to \(u=0.25\) and \(\eta=c/a=2\).
In CH-ZSP, each anion has two Zn and two Sn cations as nearest neighbors. Due to the dissimilar neighboring atoms, the anion acquires an equilibrium position closer to one pair of cations than to the other. The displacement of the anion position thus leads to bond alternation. In the most general case, \(u\neq\) 0.25 and \(\eta\)\(\neq\) 2. In contrast to other chalcopyrite compounds, CH-ZSP lacks tetragonal distortion (\(\eta=2\)) but exhibits a displacement of the anions towards the smaller cation (anion shift). The positions of the different types of atoms in the tetragonal unit cell are: Zn at (0, 0, 0); Sn at (0, 0, 0.5); and P at (\(u\), 0.25, 0.125), where \(u\) is the anion displacement parameter. The equilibrium lattice parameters and the optimal internal parameters for CH-ZSP, along with the corresponding experimental values, are presented in Table 1. Compared to the binary compound, the cubic symmetry is broken, and the non-centrosymmetric CH-ZSP crystal has two twofold glide rotational symmetries \(C_{2}(x)\), \(C_{2}(y)\), and two glide mirror symmetries \(M_{xy}\) and \(M_{x-y}\). It also has a twofold rotational symmetry along \(z\) [see Fig. 1].
ZnSnP2 | a (Å) | c (Å) | u
---|---|---|---
Experimental | 5.7382 | 11.4764 | 0.239
Theoretical | 5.7382 | 11.4764 | 0.2272
Table 1: The experimental and equilibrium structural parameters for CH-ZSP. Please see text for details. The experimental parameters were taken from Ref. [Vaipolin et al., 1968].
<figure><img src="content_image/1911.03376/x2.png"><figcaption>Figure 2: (a) The bandstructure from GGA and GGA+U, showing a direct bandgapat Γ. The GGA band gap is ∼0.69 eV whereas the GGA+U bandgap is ∼1.05 eV,obtained with Ud=10 eV and Up=2 eV (see text for details). (b) The total andpartial density of states within GGA+U. (c) Comparison of bandstructuresobtained within GGA+Δ and TB-mBJ calculations. For comparison, the GGAconduction bands have been scissor-shifted by Δ=0.71 eV (bottom panel) tomatch the bandgap, and Δ=0.45 eV in an energy range of 1 eV and 5 eV (toppanel), showing the qualitative and quantitative agreement between the twomethods in the description of the conduction band states.</figcaption></figure>
The bandstructure along the high-symmetry lines in the Brillouin zone and the density of states (DOS) are shown in Fig. 2. A direct bandgap is found at the \(\Gamma\) point [see Fig. 2(a)]. Within GGA, the bandgap is \(\sim 0.69\) eV, in good agreement with earlier calculations Xu et al. (2015). This is, however, merely \(\sim 41\%\) of the experimental gap of 1.68 eV St-Jean,P. et al. (2010), implying that a scissor-shift of \(\Delta\sim 1\) eV should be applied to obtain quantitative agreement with the experimental results.
Within the \(+U\) scheme, the bandgap can reach 1.05 eV upon adjusting the \(U\) parameter. Since the dominant contribution across the Fermi energy is due to P-3\(p\) states [see Fig. 2(b), and discussed below] one also needs to consider \(U_{p}\) for these states along with \(U_{d}\) for Zn-3\(d\) states. The largest bandgap is obtained for \(U_{d}=10\) eV and \(U_{p}=2\) eV. In this context it should be noted that application of \(U\)-term correlations simultaneously to cation \(d\) and anion \(p\) states is not unprecedented, the most relevant example being ZnO Bashyal et al. (2018); Oba and Kumagai (2018). At the same time, a similar large value of \(U_{d}=10\) eV was also suggested in Ref. Ma et al. (2013).
The atomic contributions to the DOS across the Fermi energy within GGA+\(U\) remain remarkably similar to the GGA results and are shown in Fig. 2(b). The valence band region up to \(-5\) eV is mainly composed of \(p\) states, with the dominant contribution from P-\(3p\), followed by Sn-\(5p\) and Zn-\(4p\). The valence band maximum is composed primarily of the P-\(3p\) states, and the Zn-\(4s\) states lie relatively deep in the valence band, between \(-4\) and \(-5\) eV. On the other hand, the conduction band region is composed of the P-3\(p\), Sn-\(5s\) and Sn-\(5p\) states, reflecting strong covalency effects in CH-ZSP [see Fig. 2(b)].
The most notable difference between GGA and GGA+\(U\) is the relative position and spread of the Zn-\(3d\) states. Arguably, the Zn-3\(d\) states within GGA are over-hybridized with the Zn-\(4p\) states, similar to other strongly covalent systems involving Zn Oba and Kumagai (2018). As a result, they are underbound and lie somewhat higher in energy, in the range of -5.1 eV to -7.6 eV, (just) below the P-\(3p\) states in the valence band. This, in turn, leads to a severe underestimation of the bandgap within GGA Bashyal et al. (2018); Oba and Kumagai (2018). The location of the Zn-3\(d\) states can, in principle, be tuned within the GGA+\(U\) functional. Within \(+U\), they shift somewhat lower in energy and are more localized (smaller bandwidth), leading to a well-defined gap in the DOS at \(\sim-5\) eV. This is accompanied by a redistribution of the Zn-\(4s\) and Zn-\(4p\) contributions. Thus, the \(+U\) method improves the bandgap over GGA but is not sufficient.
On the other hand, application of the TB-mBJ potential leads to a gap of \(\sim 1.4\) eV [see Fig. 2(c)], in good agreement with the previously reported value Xu et al. (2015). This is a significant improvement over the GGA and GGA+\(U\) values, but still amounts to only \(\sim 83\%\) of the experimentally reported value. This is not surprising, since P-\(3p\) states contribute significantly to the states across the Fermi energy Koller et al. (2011). Such a discrepancy indicates that many-body effects could be important for CH-ZSP and that an approach accounting for them, such as DFT calculations with hybrid functionals or the GW approximation, may be required to fully address the bandgap issue here.
To compare the TB-mBJ bandstructure with that of GGA, a scissor-shift of \(\Delta=0.71\) eV is required, as shown in the lower panel of Fig. 2(c). However, a somewhat smaller scissor-shift of \(\Delta=0.45\) eV yields remarkably good qualitative and quantitative agreement between the two methods [see the top panel in Fig. 2(c)]. Comparison of the atom-resolved DOS (not shown) suggests that the relative compositions of the conduction bands are similar in both approaches. Therefore, within all the considered approaches, the valence band edge remains largely unaffected, while the qualitative description of the conduction bands and their composition is nearly the same.
To summarize, within the considered methods, the description of CH-ZSP predominantly differs only in the predicted value of the bandgap. Therefore, different values of the scissor-shift are required to compare the GGA results with the others. A large value of \(\Delta\sim 1\) eV is needed to match the experimental results, whereas \(\Delta=0.36\) eV and \(\Delta=0.71\) eV are needed to compare with the GGA+\(U\) and the TB-mBJ results, respectively.
<figure><img src="content_image/1911.03376/x4.png"><figcaption>Figure 3: The real and imaginary parts of the dielectric constant as afunction of the incident photon energy obtained within (a)-(b) GGA+U, and(c)-(d) TB-mBJ scheme, showing overall agreement.</figcaption></figure>
To further ascertain the degree of agreement between the considered methods, we also compare the linear optical response. Fig. 3 shows the real and imaginary parts of the dielectric function obtained within GGA+\(U\) and their comparison with the corresponding results from TB-mBJ. The tetragonal symmetry of the crystal structure implies that the in-plane (\(\alpha\beta=xx\),\(yy\)) and the out-of-plane (\(\alpha\beta=zz\)) components are distinct. The qualitative similarity between the two schemes is evident. Within GGA+\(U\), the real part of the dielectric constant \(\epsilon_{1}\) has prominent peaks at approximately 2 eV and 3.7 eV, and its zero crossings lie between 4.5 and 5 eV. The imaginary part of the dielectric constant \(\epsilon_{2}\) is also characterized by two prominent peaks, at \(\sim 2\) eV and between 4.0 and 4.5 eV, similar to \(\epsilon_{1}\). These peak positions correspond to interband transitions between the valence and conduction band states. The dominant peak in \(\epsilon_{2}^{zz}\) (at \(\sim 4\) eV) lies slightly lower than in \(\epsilon_{2}^{xx}\), as expected from the respective zero crossings in \(\epsilon_{1}\). Considering a scissor-shift of \(\Delta\sim 0.35\) eV, required to match the bandgaps between the GGA+\(U\) and TB-mBJ methods, the peak positions in \(\epsilon_{1}\) and \(\epsilon_{2}\), as well as the zero crossing in \(\epsilon_{1}\), are in good agreement between the two approaches.
The zero frequency limit of \(\epsilon_{1}(\omega)\), \(\epsilon_{1}(0)\), is an important quantity. It represents the electronic part of the static dielectric constant and depends strongly on the bandgap. The static dielectric constant is found to be \(\epsilon_{1}(0)=10.56\) within GGA+\(U\). In comparison, the measured value is 10 Madelung (2013); Petousis et al. (2017), while the TB-mBJ calculations yield 9.7.
As the non-linear optical properties depend crucially on the wavefunctions von Baltz and Kraut (1981), reliable estimates of the optical properties, especially the magnitude of the shift current, can thus be obtained even within the scissors-operator method GGA\(+\Delta\). Therefore, in the following, we focus on the GGA\(+\Delta\) and GGA\(+U\) methods, with the understanding that an additional scissor-shift of \(\sim 0.6\) eV (\(\sim 0.4\) eV) may be required for a quantitative comparison with experiments (TB-mBJ).
Figure 4 shows the calculated optical and shift current conductivity for CH-ZSP. The GGA response has been scissor-shifted by \(\Delta=0.36\) eV to compare with GGA+\(U\); the structure and magnitude of the optical response are not very sensitive to the choice between GGA\(+\Delta\) and GGA+\(U\). We begin with a comparison of the optical conductivity obtained within GGA and GGA+\(U\), shown in Figs. 4(a) and 4(b) for \(\sigma_{xx}\) and \(\sigma_{zz}\), respectively. The optical conductivities are in very good agreement, as expected from the fact that both methods provide a qualitatively similar description of the conduction and valence bands. The main optical conductivity peak in \(\sigma_{xx}\) appears at 4.61 eV with a low-energy peak at 2.52 eV; for \(\sigma_{zz}\), the corresponding peaks appear at 4.08 eV and 2.43 eV. These peak positions are consistent with the corresponding peak positions in the imaginary part of the dielectric function \(\epsilon_{2}\).
<figure><img src="content_image/1911.03376/x6.png"><figcaption>Figure 4: The optical conductivity (a) σxx and (b) σzz from GGA+U and GGA+Δ(Δ=0.36 eV). The shift current conductivity (c) σxyz and (d) σzxy from GGA+Uand GGA+Δ. (e) Both the optical and shift current conductivity from GGA+Uplotted on the same scale.</figcaption></figure>
In the shift current conductivity [see Fig. 4(c-e)], \(\sigma^{x}_{yz}\) and \(\sigma^{z}_{xy}\) are the only nonvanishing, independent components of the third-rank tensor \(\sigma^{\gamma}_{\alpha\beta}\). Similar to the optical conductivity, the shift current response starts only above the bandgap. Interestingly, the shift current shows a strong increase at the gap edge, in contrast to the optical conductivity, which increases slowly above the gap. Figure 4(c-d) shows the calculated shift current for the \(\sigma^{x}_{yz}\) and \(\sigma^{z}_{xy}\) components. The shift current response for both \({xy}\)- and \({yz}\)-polarized light is negative, although \(\sigma^{x}_{yz}\) has a small positive contribution near the bandgap. The shift current conductivity is around 6 \({{\mu}A}/V^{2}\) near the bandgap for both \(\sigma^{x}_{yz}\) and \(\sigma^{z}_{xy}\), which is comparable to recent experimental observations on the semiconductor SbSI Sotome et al. (2019), and an order of magnitude larger than in the multiferroic compound bismuth ferrite (0.5 \({{\mu}A}/V^{2}\)) Young et al. (2012). Similar to the optical conductivity discussed before, the shift current exhibits a large increase, to 12 \({{\mu}A}/V^{2}\), at photon energies of \(\sim 3.5-4\) eV. This is due to the large real-space charge-center shift between the valence and conduction electrons, arising mainly from transitions from the 3\(p\) orbitals of the P atoms to the 5\(p\) orbitals of the Sn atoms.
## IV Conclusion and outlook
In conclusion, we investigated the non-linear photocurrent in the non-centrosymmetric chalcopyrite semiconductor ZnSnP\({}_{2}\) based on first principles calculations. Based on a detailed analysis of the electronic properties of CH-ZSP within the traditional scissors-operator (GGA+\(\Delta\)), GGA+\(U\) and TB-mBJ methods, we find that TB-mBJ leads to a much better agreement (\(\sim 83\%\)) with the reported experimental bandgap. More importantly, although the various methods rely on different approaches, the description of the electronic bands within all of them is remarkably similar. This bodes well for the reliability of our estimates of the linear and non-linear optical properties based on any of these methods. The shift current conductivity that we find is around 6 \({{\mu}A}/V^{2}\) near the bandgap and 12 \({{\mu}A}/V^{2}\) at photon energies of \(3.5-4\,\)eV. This comes mainly from the large real-space charge-center shift between the valence and conduction electrons of the P-3\(p\) and Sn-5\(p\) orbitals. Distinct from the diffusion mechanism in the p-n junction based photogalvanic effect, the generation of photocurrent under linearly polarized electromagnetic radiation in ZnSnP\({}_{2}\) is dominated by the Berry-phase-related shift current. Due to the underlying selection rules, ultrafast photo-induced currents will strongly depend on the crystal orientation and laser polarization. This can offer a promising avenue to achieve efficient generation and control of secondary terahertz radiation, which in ZnSnP\({}_{2}\) will result from the intrinsic shift current mechanism Gao et al. (2020): the magnitude of the shift current is comparable to the recent experimental value for SbSI Sotome et al. (2019) and an order of magnitude larger than in the multiferroic compound bismuth ferrite (0.5 \({{\mu}A}/V^{2}\)) Young et al. (2012).
## Acknowledgements
We thank Manuel Richter for helpful discussions and Ulrike Nitzsche for technical assistance. This work was supported by the German Research Foundation (DFG) via SFB 1143, Project No. A5 and by the DFG through the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - _ct.qmat_ (EXC 2147, Project No. 39085490).
## Appendix A Role of spin-orbit coupling (SOC)
Figure 5 shows a comparison of the bandstructures of CH-ZSP within the scalar relativistic (“no SOC”) and full relativistic GGA calculations. Sizable differences are found only around \(-3\,\)eV and \(3.5\,\)eV, where the Sn-\(5p\) contribution is dominant (see Fig. 2).
<figure><img src="content_image/1911.03376/x7.png"><figcaption>Figure 5: A comparison of the scalar relativistic (no SOC) and fullrelativistic (SOC) bandstructures for ZnSnP2.</figcaption></figure>
## References
* [1] FPLO code, https://www.fplo.de. External Links: Link Cited by: §II.
* Amer,N., Hurlbut,W. C., Norton,B. J., Lee,Yun-Shik, and Norris,T. B. (2005)Generation of terahertz pulses with arbitrary elliptical polarization. 87 (22), pp. 221111. External Links: Document, Link Cited by: §I.
* F. Aryasetiawan and O. Gunnarsson (1998)The GW method. 61 (3), pp. 237. External Links: Document Cited by: §I.
* D. Bagayoko, G. Zhao, J. Fan, and J. Wang (1998)Ab initio calculations of the electronic structure and optical properties of ferroelectric tetragonal. 10 (25), pp. 5645. External Links: Document Cited by: §I.
* K. Bashyal, C. K. Pyles, S. Afroosheh, A. Lamichhane, and A. T. Zayak (2018)Empirical optimization of dft+ u and hse for the band structure of ZnO. 30 (6), pp. 065501. External Links: Document Cited by: §III, §III.
* B. A. Bernevig, T. L. Hughes, and S. Zhang (2006)Quantum spin hall effect and topological phase transition in HgTe quantum wells. Science314 (5806), pp. 1757–1761. External Links: Document, ISSN 0036-8075, Link Cited by: §I.
* P. Blaha, K. Schwarz, G. K. Madsen, D. Kvasnicka, and J. Luitz (2001)WIEN2k: an augmented plane wave+ local orbitals program for calculating crystal properties. External Links: Link Cited by: §II.
* S. Chadov, Qi,Xiaoliang, J. Kübler, G. H. Fecher, C. Felser, and S. Zhang (2010)Tunable multifunctional topological insulators in ternary heusler compounds. Nature Materials9, pp. 541. External Links: Document, Link Cited by: §I.
* S. Cho, S. Choi, G. Cha, S. C. Hong, Y. Kim, Y. Zhao, A. J. Freeman, J. B. Ketterson, B. J. Kim, Y. C. Kim, and B. Choi (2002)Room-temperature ferromagnetism in (Zn\({}_{1-\mathit{x}}\)Mn\({}_{\mathit{x}}\))GeP\({}_{2}\) semiconductors. Phys. Rev. Lett.88, pp. 257203. External Links: Document, Link Cited by: §I.
* T. Choi, S. Lee, Y. J. Choi, V. Kiryukhin, and S.-W. Cheong (2009)Switchable ferroelectric diode and photovoltaic effect in BiFeO\({}_{3}\). 324 (5923), pp. 63–66. External Links: Document, ISSN 0036-8075, Link Cited by: §I.
* B. Das, S. Subramaniam, M. R. Melloch, and D. C. Miller (1993)Single-particle and transport scattering times in a back-gated GaAs/\({\mathrm{Al}}_{\mathit{x}}\)\({\mathrm{Ga}}_{1\mathrm{-}\mathit{x}}\)as modulation-doped heterostructure. 47, pp. 9650–9653. External Links: Document, Link Cited by: §II.
* W. Feng, D. Xiao, J. Ding, and Y. Yao (2011)Three-dimensional topological insulators in I\(-\)III\(-\)VI\({}_{2}\) and I\(-\)IV\(-\)V\({}_{2}\) chalcopyrite semiconductors. Phys. Rev. Lett.106, pp. 016402. External Links: Document, Link Cited by: §I.
* L. Fu and C. L. Kane (2007)Topological insulators with inversion symmetry. Phys. Rev. B76, pp. 045302. External Links: Document, Link Cited by: §I.
* Y. Gao, S. Kaushik, E. Philip, Z. Li, Y. Qin, Y. Liu, W. Zhang, Y. Su, X. Chen, H. Weng, et al. (2020)Chiral terahertz wave emission from the Weyl semimetal TaAs. 11 (1), pp. 720. External Links: Document Cited by: §I, §IV.
* R. W. Godby, M. Schlüter, and L. J. Sham (1988)Self-energy operators and exchange-correlation potentials in semiconductors. 37, pp. 10159–10175. External Links: Document, Link Cited by: §I.
* P. Goswami, G. Sharma, and S. Tewari (2015)Optical activity as a test for dynamic chiral magnetic effect of Weyl semimetals. 92, pp. 161110. External Links: Document, Link Cited by: §I.
* I. Grinberg, D. V. West, M. Torres, G. Gou, D. M. Stein, L. Wu, G. Chen, E. M. Gallo, A. R. Akbashev, P. K. Davies, et al. (2013)Perovskite oxides for visible-light-absorbing ferroelectric and photovoltaic materials. 503 (7477), pp. 509–512. External Links: Document Cited by: §I.
* G. Gupta, T. Nautiyal, and S. Auluck (2004)Optical properties of the compounds BaTiO\({}_{3}\) and SrTiO\({}_{3}\). 69, pp. 052101. External Links: Document, Link Cited by: §I.
* Heyd,Jochen, S. E., and Ernzerhof,Matthias (2003)Hybrid functionals based on a screened coulomb potential. 118 (18), pp. 8207. External Links: Document, Link, https://doi.org/10.1063/1.1564060 Cited by: §I.
* X. Hong, K. Zou, and J. Zhu (2009)Quantum scattering time and its implications on scattering sources in graphene. 80, pp. 241415. External Links: Document, Link Cited by: §II.
* E. H. Hwang and S. Das Sarma (2008)Single-particle relaxation time versus transport scattering time in a two-dimensional graphene layer. 77, pp. 195412. External Links: Document, Link Cited by: §II.
* Inuzuka,Fumikazu, Misawa,Kazuhiko, Nishi,Kenichi, and Lang,Roy (2004)Femtosecond time-resolved dispersion relation of complex nonlinear refractive index in a semiconductor quantum well. 85 (17), pp. 3678–3680. External Links: Document, Link Cited by: §II.
* H. Ishizuka, T. Hayata, M. Ueda, and N. Nagaosa (2016)Emergent electromagnetic induction and adiabatic charge pumping in noncentrosymmetric Weyl semimetals. 117, pp. 216601. External Links: Document, Link Cited by: §I.
* R. Juneja, R. Shinde, and A. K. Singh (2018)Pressure-induced topological phase transitions in CdGeSb\({}_{2}\) and CdSnSb\({}_{2}\). The Journal of Physical Chemistry Letters9 (9), pp. 2202–2207. External Links: Document, Link Cited by: §I.
* Klick,Alwin, Großmann,Malte, Beewen,Maria, Bittorf,Paul, Fiutowski,Jacek, Leißner,Till, Rubahn,Horst-Günter, Reinhardt,Carsten, Elmers,Hans-Joachim, and Bauer,Michael (2019)Femtosecond time-resolved photoemission electron microscopy operated at sample illumination from the rear side. 90 (5), pp. 053704. External Links: Document, Link Cited by: §II.
* K. Koepernik and H. Eschrig (1999)Full-potential nonorthogonal local-orbital minimum-basis band-structure scheme. 59, pp. 1743–1757. External Links: Document, Link Cited by: §II.
* D. Koller, F. Tran, and P. Blaha (2011)Merits and limits of the modified Becke-Johnson exchange potential. 83, pp. 195134. External Links: Document, Link Cited by: §III.
* E. J. König, H.-Y. Xie, D. A. Pesin, and A. Levchenko (2017)Photogalvanic effect in Weyl semimetals. 96, pp. 075123. External Links: Document, Link Cited by: §I.
* W. Kraut and R. von Baltz (1979)Anomalous bulk photovoltaic effect in ferroelectrics: a quadratic response theory. 19, pp. 1548–1554. External Links: Document, Link Cited by: §I, §II.
* Y. Kumagai, M. Choi, Y. Nose, and F. Oba (2014)First-principles study of point defects in chalcopyrite ZnSnP\({}_{2}\). Phys. Rev. B90, pp. 125202. External Links: Document, Link Cited by: §I.
* A. Lau and C. Ortix (2019)Topological semimetals in the SnTe material class: nodal lines and Weyl points. Phys. Rev. Lett.122, pp. 186801. External Links: Document, Link Cited by: §I.
* Z. H. Levine and D. C. Allan (1989)Linear optical response in silicon and germanium including self-energy effects. 63, pp. 1719–1722. External Links: Document, Link Cited by: §I.
* H. Lin, L. A. Wray, Y. Xia, S. Xu, S. Jia, R. J. Cava, A. Bansil, and M. Z. Hasan (2010)Half-heusler ternary compounds as new multifunctional experimental platforms for topological quantum phenomena. Nature Materials9, pp. 546. External Links: Document, Link Cited by: §I.
* X. Ma, Y. Wu, Y. Lv, and Y. Zhu (2013)Correlation effects on lattice relaxation and electronic structure of zno within the GGA+\(U\) formalism. 117 (49), pp. 26029–26039. External Links: Document Cited by: §III.
* O. Madelung (2013)Semiconductors: data handbook. Springer, Berlin. Cited by: §III.
* G. A. Medvedkin, T. Ishibashi, T. Nishi, K. Hayata, Y. Hasegawa, and K. Sato (2000)Room temperature ferromagnetism in novel diluted magnetic semiconductor Cd\({}_{1-x}\)Mn\({}_{x}\)GeP\({}_{2}\). Japanese Journal of Applied Physics39 (Part 2, No. 10A), pp. L949–L951. External Links: Document, Link Cited by: §I.
* S. Mishra and B. Ganguli (2013)Effect of \(p–d\) hybridization, structural distortion and cation electronegativity on electronic properties of ZnSnX\({}_{2}\) (X=P, As, Sb) chalcopyrite semiconductors. Journal of Solid State Chemistry200, pp. 279 – 286. External Links: ISSN 0022-4596, Document, Link Cited by: §I.
* T. Morimoto and N. Nagaosa (2016)Topological aspects of nonlinear excitonic processes in noncentrosymmetric crystals. 94, pp. 035117. External Links: Document, Link Cited by: §I, §II.
* T. Morimoto and N. Nagaosa (2016)Topological nature of nonlinear optical effects in solids. 2 (5). External Links: Document, Link Cited by: §I.
* T. Morimoto, S. Zhong, J. Orenstein, and J. E. Moore (2016)Semiclassical theory of nonlinear magneto-optical responses with applications to topological Dirac/Weyl semimetals. 94, pp. 245121. External Links: Document, Link Cited by: §I.
* S. Mukherjee, T. Maitra, A. Nayak, A. Pradhan, M. Mukhopadhyay, B. Satpati, and S. Bhunia (2018)Microstructural and light emission properties of ZnSnP\({}_{2}\) thin film absorber: study of native defects. 204, pp. 147–153. External Links: Document Cited by: §I.
* K. Nakatani, T. Minemura, K. Miyauchi, K. Fukabori, H. Nakanishi, M. Sugiyama, and S. Shirakata (2008)Photoluminescence property of ZnSnP\({}_{2}\) by solution growth and normal freezing methods. Japanese Journal of Applied Physics47 (7), pp. 5342–5344. External Links: Document, Link Cited by: §I.
* F. Nastos, B. Olejnik, K. Schwarz, and J. E. Sipe (2005)Scissors implementation within length-gauge formulations of the frequency-dependent nonlinear optical response of semiconductors. 72, pp. 045223. External Links: Document, Link Cited by: §I.
* F. Nastos and J. E. Sipe (2006)Optical rectification and shift currents in GaAs and GaP response: below and above the band gap. 74, pp. 035201. External Links: Document, Link Cited by: §I.
* F. Nastos and J. E. Sipe (2010)Optical rectification and current injection in unbiased semiconductors. 82, pp. 235204. External Links: Document, Link Cited by: §I.
* F. Oba and Y. Kumagai (2018)Design and exploration of semiconductors from first principles: a review of recent advances. 11 (), pp. 060601. External Links: Document Cited by: §III, §III.
* N. Ogawa, M. Sotome, Y. Kaneko, M. Ogino, and Y. Tokura (2017)Shift current in the ferroelectric semiconductor SbSI. 96, pp. 241203. External Links: Document, Link Cited by: §I.
* M. C. Ohmer and R. Pandey (1998)Emergence of chalcopyrites as nonlinear optical materials. MRS Bulletin23 (7), pp. 16–22. External Links: Document Cited by: §I.
* J. P. Perdew, K. Burke, and M. Ernzerhof (1996)Generalized gradient approximation made simple. 77, pp. 3865–3868. External Links: Document, Link Cited by: §II.
* I. Petousis, D. Mrdjenovich, E. Ballouz, M. Liu, D. Winston, W. Chen, T. Graf, T. D. Schladt, K. A. Persson, and F. B. Prinz (2017)High-throughput screening of inorganic compounds for the discovery of novel dielectric and optical materials. 4 (1), pp. 160134. External Links: Document Cited by: §III.
* J. Ruan, S. Jian, D. Zhang, H. Yao, H. Zhang, S. Zhang, and D. Xing (2016)Ideal Weyl semimetals in the chalcopyrites CuTlSe\({}_{2}\), AgTlTe\({}_{2}\), AuTlTe\({}_{2}\), and ZnPbAs\({}_{2}\). Phys. Rev. Lett.116, pp. 226801. External Links: Document, Link Cited by: §I.
* B. Sadhukhan, S. Bandyopadhyay, A. Nayak, and A. Mookerjee (2017)Disorder induced lifetime effects in binary disordered systems: a first principles formalism and an application to disordered graphene. 31 (29), pp. 1750218. External Links: Document Cited by: §II.
* S. Sahin, Y.O. Ciftci, K. Colakoglu, and N. Korozlu (2012)First principles studies of elastic, electronic and optical properties of chalcopyrite semiconductor ZnSnP\({}_{2}\). Journal of Alloys and Compounds529, pp. 1 – 7. External Links: ISSN 0925-8388, Document, Link Cited by: §I.
* S. Sanna, C. Thierfelder, S. Wippermann, T. P. Sinha, and W. G. Schmidt (2011)Barium titanate ground- and excited-state properties from first-principles calculations. 83, pp. 054112. External Links: Document, Link Cited by: §I.
* D. O. Scanlon and A. Walsh (2012)Bandgap engineering of ZnSnP\({}_{2}\) for high-efficiency solar cells. Appl. Phys. Lett.100, pp. 251911. External Links: Document, Link Cited by: §I.
* J. L. Shay and J. H. Wernick (2017)Ternary chalcopyrite semiconductors: growth, electronic properties, and applications: international series of monographs in the science of the solid state. Vol. 7, Elsevier, San Diego, USA. Cited by: §I.
* J. E. Sipe and A. I. Shkrebtii (2000)Second-order optical response in semiconductors. 61, pp. 5337–5352. External Links: Document, Link Cited by: §I.
* I. Sodemann and L. Fu (2015)Quantum nonlinear hall effect induced by berry curvature dipole in time-reversal invariant materials. 115, pp. 216806. External Links: Document, Link Cited by: §I.
* M. Sotome, M. Nakamura, J. Fujioka, M. Ogino, Y. Kaneko, T. Morimoto, Y. Zhang, M. Kawasaki, N. Nagaosa, Y. Tokura, and N. Ogawa (2019)Spectral dynamics of shift current in ferroelectric semiconductor SbSI. 116 (6), pp. 1929. External Links: Document Cited by: §I, §III, §IV.
* St-Jean,P., Seryogin,G. A., and Francoeur,S. (2010)Band gap of sphalerite and chalcopyrite phases of epitaxial ZnSnP\({}_{2}\). Applied Physics Letters96 (23), pp. 231913. External Links: Document, Link Cited by: §I, §III.
* L. Z. Tan, F. Zheng, S. M. Young, F. Wang, S. Liu, and A. M. Rappe (2016)Shift current bulk photovoltaic effect in polar materials—hybrid and oxide perovskites and beyond. 2 (1), pp. 1–12. External Links: Document Cited by: §I.
* F. Tran and P. Blaha (2009)Accurate band gaps of semiconductors and insulators with a semilocal exchange-correlation potential. 102, pp. 226401. External Links: Document, Link Cited by: §I, §II.
* A. Vaipolin, N. Goryunova, L. Kleshchinskii, G. Loshakova, and E. Osmanov (1968)The structure and properties of the semiconducting compound ZnSnP\({}_{2}\). 29 (1), pp. 435–442. External Links: Document Cited by: §II, Table 1.
* R. von Baltz and W. Kraut (1981)Theory of the bulk photovoltaic effect in pure crystals. 23, pp. 5590–5596. External Links: Document, Link Cited by: §I, §II, §III.
* D. Xiao, Y. Yao, W. Feng, J. Wen, W. Zhu, X. Chen, G. M. Stocks, and Z. Zhang (2010)Half-heusler compounds as a new class of three-dimensional topological insulators. Phys. Rev. Lett.105, pp. 096404. External Links: Document, Link Cited by: §I.
* Y. Xu, Z. M. Ao, D. F. Zou, G. Z. Nie, W. Sheng, and D. W. Yuan (2015)Strain effects on the electronic structure of ZnSnP\({}_{2}\) via modified Becke–Johnson exchange potential. Physics Letters A 379 (5), pp. 427–430. External Links: ISSN 0375-9601, Document, Link Cited by: §I, §III, §III.
* S. Yang, J. Seidel, S. Byrnes, P. Shafer, C. Yang, M. Rossell, P. Yu, Y. Chu, J. Scott, J. Ager, et al. (2010)Above-bandgap voltages from ferroelectric photovoltaic devices. 5 (2), pp. 143–147. External Links: Document Cited by: §I.
* X. Yang, K. Burch, and Y. Ran (2017)Divergent bulk photovoltaic effect in weyl semimetals. External Links: 1712.09363 Cited by: §I.
* C. Yeh, S. Wei, and A. Zunger (1994)Relationships between the band gaps of the zinc-blende and wurtzite modifications of semiconductors. Phys. Rev. B50, pp. 2715–2718. External Links: Document, Link Cited by: §I.
* S. M. Young and A. M. Rappe (2012)First principles calculation of the shift current photovoltaic effect in ferroelectrics. 109, pp. 116601. External Links: Document, Link Cited by: §I, §II.
* S. M. Young, F. Zheng, and A. M. Rappe (2012)First-principles calculation of the bulk photovoltaic effect in bismuth ferrite. 109, pp. 236601. External Links: Document, Link Cited by: §I, §III, §IV.
* Y. Zhang, H. Ishizuka, J. van den Brink, C. Felser, B. Yan, and N. Nagaosa (2018)Photogalvanic effect in Weyl semimetals from first principles. 97, pp. 241118(R). External Links: Document, Link Cited by: §I, §I, §II.
* Y. Zhang, Y. Sun, and B. Yan (2018)Berry curvature dipole in Weyl semimetal materials: an _ab initio_ study. 97, pp. 041101. External Links: Document, Link Cited by: §I.
|
1805.00111 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 33196,
"num_imgs": 4,
"llama3_tokens_count": 10320
} | [
"content_image/1805.00111/x1.png",
"content_image/1805.00111/x2.png",
"content_image/1805.00111/x3.png",
"content_image/1805.00111/x4.png"
] | # Perturbative traveling wave solution for a flux-limited reaction-diffusion morphogenesis equation
Waipot Ngamsaad
waipot.ng@up.ac.th
Division of Physics, School of Science, University of Phayao, Phayao 56000, Thailand
Suthep Suantai
Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
###### Abstract
In this study, we investigate a porous medium-type flux-limited reaction–diffusion equation that arises in morphogenesis modeling. This nonlinear partial differential equation is an extension of the generalized Fisher–Kolmogorov–Petrovsky–Piskunov (Fisher-KPP) equation in one-dimensional space. An approximate analytical traveling wave solution is found by using a perturbation method. We show that the morphogen concentration propagates as a sharp wave front whose speed saturates. Numerical solutions of this equation are also provided for comparison with the analytical predictions. Finally, we qualitatively compare our theoretical results with those obtained in experimental studies.
Keywords: Flux-limited Fisher-KPP equation, Morphogenesis, Traveling wave solution. PACS: 87.10.-e, 05.45.-a, 02.60.-x
## I Introduction
Reaction–diffusion models formulated using nonlinear partial differential equations have a wide range of applications in physics, chemistry, and biology (for a review, see Murray (1989)). The most widely recognized reaction–diffusion model is the Fisher–Kolmogorov–Petrovsky–Piskunov (Fisher–KPP) equation, the solution of which describes the propagation of a traveling wave that switches between equilibrium states Fisher (1937), Kolmogorov et al. (1991). Following these seminal studies, finding traveling wave solutions of reaction–diffusion equations is an attractive objective because such solutions provide insight into the underlying physical dynamics of natural processes.
In developmental biology, it has been hypothesized that the concentration gradient of secreted signaling molecules known as _morphogens_ regulates the structure and pattern formation in tissues Dessaud et al. (2007), Rogers and Schier (2011), Briscoe and Thérond (2013), Simon et al. (2016). Reaction–diffusion equations have been employed as models of morphogenesis Lander et al. (2002), Saha and Schaffer (2006), Kondo and Miura (2010) since the pioneering work of Turing Turing (1952), Crick Crick (1970), and Gierer and Meinhardt Gierer and Meinhardt (1972). However, the classical theory describes the migration of morphogens as a linear diffusion process, or random-walk motion from a microscopic perspective Turing (1952), Crick (1970), Gierer and Meinhardt (1972), Lander et al. (2002), Kondo and Miura (2010). Unfortunately, experimental studies of some specific morphogens (such as Hedgehog (Hh) molecules) have shown that the classical reaction–diffusion equations are unable to capture the actual morphogenetic patterns Verbeni et al. (2013), Sánchez et al. (2015). A model based on linear diffusion Saha and Schaffer (2006) produces a front that is not sharp, which does not agree with the experimental observations Verbeni et al. (2013), Sánchez et al. (2015). In addition, the classical diffusion model has the shortcoming that its flux grows without bound with the concentration gradient Rosenau (1992). To address these nonphysical issues, previous studies Verbeni et al. (2013), Sánchez et al. (2015) proposed flux-limited reaction–diffusion equations to model morphogen transport. This model appears to produce more realistic morphogenetic patterns, as verified using experimental data Verbeni et al. (2013), Sánchez et al. (2015).
The flux-limited diffusion equation can be derived using two different approaches: special relativistic-like mechanics Rosenau (1992) and optimal transport theory Brenier (2003). The equation was later generalized to a flux-limited porous medium-type diffusion equation Chertock et al. (2003), Caselles et al. (2013). Together with reaction processes, flux-limited reaction–diffusion equations have been studied widely Kurganov and Rosenau (2006), Andreu et al. (2010), Andreu et al. (2012), Garrione and Sanchez (23), Campos et al. (2013), Campos and Soler (2016), Calvo et al. (2015), Calvo et al. (2016), Calvo (2017), Calvo (2018). A one-dimensional model is a plausible representation of a real morphogenetic system, as exemplified by the propagation of Sonic Hedgehog (Shh) molecules in a neural tube along the dorsal-ventral axis Verbeni et al. (2013), Sánchez et al. (2015).
Motivated by this biological system, in the present study we investigate a one-dimensional porous medium-type flux-limited reaction–diffusion equation as a simplified model of morphogenesis. Variants of the flux-limited reaction–diffusion model have been studied previously Kurganov and Rosenau (2006), Andreu et al. (2010), Campos et al. (2013), Garrione and Sanchez (30), Calvo et al. (2016), Campos and Soler (2016), but to the best of our knowledge, the exact solutions have not been obtained. Therefore, in the present study we aim to find an approximate analytical traveling wave solution of this equation by using a simple perturbation method, as used in previous works Garduño and Maini (1994), Ngamsaad and Suantai (2016). This simple approximation approach is similar to asymptotic analysis Murray (1989), and it uncovers two main physical features: the morphogen concentration profile and the propagation speed of the wave front. To obtain a precise solution, we also solve this equation using a robust fully implicit numerical scheme. Finally, we qualitatively compare our solutions with previously reported experimental evidence. We hope that our solutions provide insights into the spread and formation of patterns by morphogenesis when modeled using this simple flux-limited reaction–diffusion process.
## II Model description
The one-dimensional porous medium-type flux-limited reaction–diffusion equation considered in this study was presented by Campos and Soler (2016) and it is given by
\[\rho_{t}=\mu\left(\frac{\rho\rho_{x}}{\sqrt{1+\frac{\mu^{2}}{c_{s}^{2}}\rho_{x }^{2}}}\right)_{x}+R(\rho),\] (1)
where \(\rho(x,t)\) is the morphogen concentration at position \(x\) and time \(t\), \(\mu\) is the viscosity constant, \(c_{s}\) is the speed of sound, and \(R(\rho)\) is the reaction term Campos and Soler (2016). For simplicity, and in line with previous studies Andreu et al. (2010), Campos et al. (2013), Calvo et al. (2016), Campos and Soler (2016), we choose the logistic law as the reaction term
\[R(\rho)=\alpha\rho\left(1-\frac{\rho}{\rho_{m}}\right),\] (2)
where \(\alpha\) is the rate constant and \(\rho_{m}\) is the maximum concentration. To capture the physical meaning of the viscosity constant, we define \(\mu=c_{s}^{2}/(\gamma\rho_{m})\), where \(\gamma\) is the frictional rate. From Eq. (1) without the reaction term, the diffusion is rapid when the value of \(\gamma\) is small, whereas it is slow when the value of \(\gamma\) is large. We can rewrite Eq. (1) in the general form of a reaction–diffusion equation, \(\rho_{t}=-j_{x}+R(\rho)\), where \(j(x,t)\) is the flux defined by \(j(x,t)=\rho(x,t)V(x,t)\) and \(V(x,t)\) is the velocity field. Therefore, from Eq. (1), the velocity field is given by
\[V=-\mu\frac{\rho_{x}}{\sqrt{1+\frac{\mu^{2}}{c_{s}^{2}}\rho_{x}^{2}}}.\] (3)
To facilitate further analysis, we introduce the dimensionless quantities as follows: \(u=\rho/\rho_{m}\), \(t^{\prime}=\alpha t\), and \(\epsilon=\alpha/\gamma\). Due to the constraint that \(c_{s}\) is the highest admissible speed, we choose the dimensionless velocity as \(v=V/c_{s}\). According to \(dx^{\prime}\sim vdt^{\prime}\), the dimensionless position is provided by \(x^{\prime}=(\alpha/c_{s})x\). Now, the dimensionless concentration and velocity are limited such that \(0\leq u\leq 1\) and \(0\leq|v|\leq 1\), respectively. By substituting Eq. (2) into Eq. (1) with all the defined dimensionless quantities, we obtain the flux-limited reaction–diffusion equation in dimensionless form
\[u_{t}=\left(\frac{\epsilon uu_{x}}{\sqrt{1+\epsilon^{2}u_{x}^{2}}}\right)_{x}+ u\left(1-u\right),\] (4)
where the prime symbols have been dropped. Similarly, for Eq. (3), the dimensionless velocity field is given by
\[v=-\frac{\epsilon u_{x}}{\sqrt{1+\epsilon^{2}u_{x}^{2}}}.\] (5)
Eq. (4) is the generalized Fisher–KPP equation Newman (1980) with a flux-limited diffusion extension. This equation is _degenerate_ at \(u=0\), where it changes from a second-order into a first-order differential equation. It is well understood that a degenerate reaction–diffusion equation produces a clear wave front interface provided that the concentration profile vanishes at a finite position Newman (1980), Murray (1989). This feature has been observed in experimental studies of morphogenesis Verbeni et al. (2013), Sánchez et al. (2015). The only parameter that appears in our system is \(\epsilon\), the ratio of the reaction rate to the frictional rate, which plays a crucial role in the regulation of this system. When \(\epsilon\to 0\), Eq. (4) reduces to the logistic reaction equation, \(u_{t}=u\left(1-u\right)\), which has no propagating front. As \(\epsilon\to\infty\), it converges to a reaction–convection equation, \(u_{t}\approx(uu_{x}/|u_{x}|)_{x}+u\left(1-u\right)\), the solution of which propagates with the saturated speed \(c=1\) (or \(c_{s}\) in physical units). Therefore, the flux-limited reaction–diffusion equation is expected to eliminate the shortcoming of an unbounded propagation speed over the entire parameter range, even for large concentration gradients Andreu et al. (2010), Campos et al. (2013), Calvo et al. (2016). Due to these features, the flux-limited model is more realistic than the classical theory for describing biological transport processes.
<figure><img src="content_image/1805.00111/x1.png"><figcaption>Figure 1: (Color online) Plots of the analytical wave profile Eq. (20)(straight line) and the corresponding velocity field Eq. (19) (dashed line)for ϵ=0.2.</figcaption></figure>
## III Perturbative traveling wave solution
We now assume that the solution of Eq. (4) is in the traveling wave form \(u(x,t)=\phi(\xi)\), where \(\xi=x-ct\) and \(c\) is the speed of the front. By substituting this solution into Eq. (4) and Eq. (5), respectively, we obtain
\[\left(\frac{\epsilon\phi\phi_{\xi}}{\sqrt{1+\epsilon^{2}\phi_{\xi}^{2}}}\right)_{\xi}+c\phi_{\xi}+\phi\left(1-\phi\right)=0\] (6)
and
\[v(\xi)=-\frac{\epsilon\phi_{\xi}}{\sqrt{1+\epsilon^{2}\phi_{\xi}^{2}}}.\] (7)
For simplicity, we define the rescaled variables \(z=\xi/\sqrt{\epsilon}\) and \(\nu=c/\sqrt{\epsilon}\) such that Eq. (6) becomes
\[\left(\frac{\phi\phi_{z}}{\sqrt{1+\epsilon\phi_{z}^{2}}}\right)_{z}+\nu\phi_{z }+\phi\left(1-\phi\right)=0.\] (8)
Eq. (8) is the main equation that we aim to analyze in this study. The exact solution of Eq. (8) in the general case is not available, and thus we consider a special case where \(\epsilon\ll 1\), which can occur when either the growth rate is slow, \(\alpha\to 0\), or the frictional rate is high, \(\gamma\to\infty\).
We employ a simple perturbation method to find the solution of Eq. (8), as presented in previous studies Murray (1989), Garduño and Maini (1994), Ngamsaad and Suantai (2016). By using the Taylor expansion, Eq. (8) can be written in approximate form
\[\left[\phi\left(\phi_{z}-\frac{\epsilon}{2}\phi_{z}^{3}\right)\right]_{z}+\nu \phi_{z}+\phi\left(1-\phi\right)+O(\epsilon^{2})=0.\] (9)
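The first-order truncation above is easy to verify symbolically. The following is a minimal SymPy check (ours, not part of the original analysis) that the flux term of Eq. (8), \(\phi\phi_{z}/\sqrt{1+\epsilon\phi_{z}^{2}}\), expands to the bracketed term of Eq. (9) up to \(O(\epsilon^{2})\).

```python
import sympy as sp

eps, phi, pz = sp.symbols('epsilon phi phi_z')

# Flux term of Eq. (8) and its expansion in epsilon up to first order
flux = phi * pz / sp.sqrt(1 + eps * pz**2)
expansion = sp.series(flux, eps, 0, 2).removeO()

# The bracketed term of Eq. (9): phi*(phi_z - (epsilon/2)*phi_z**3)
assert sp.simplify(expansion - phi * (pz - sp.Rational(1, 2) * eps * pz**3)) == 0
```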
Next, we define \(\phi_{z}=w(\phi)\) and then rewrite Eq. (9) as
\[\phi\left(w-\frac{3\epsilon}{2}w^{3}\right)w^{\prime}-\frac{\epsilon}{2}w^{4}+ w^{2}+\nu w+\phi\left(1-\phi\right)=0,\] (10)
where \((*)^{\prime}\equiv d(*)/d\phi\). The solution of Eq. (10) can be written in the power series of \(\epsilon\) (up to the first order)
\[w(\phi) = w_{0}(\phi)+w_{1}(\phi)\epsilon+O(\epsilon^{2}),\] (11)
\[\nu = c_{0}+c_{1}\epsilon+O(\epsilon^{2}),\] (12)
where \(w_{*}\) and \(c_{*}\) are the undetermined concentration wave gradients and wave speeds, respectively. After substituting Eq. (11) and Eq. (12) into Eq. (10), we have
\[\phi\left(w_{0}w_{0}^{\prime}+w_{0}^{\prime}w_{1}\epsilon+w_{0}w_ {1}^{\prime}\epsilon-\frac{3}{2}w_{0}^{3}w_{0}^{\prime}\epsilon\right)\] (13)
\[-\frac{1}{2}w_{0}^{4}\epsilon+w_{0}^{2}+2w_{0}w_{1}\epsilon+c_{0} w_{0}+c_{1}w_{0}\epsilon+c_{0}w_{1}\epsilon\]
\[+\phi\left(1-\phi\right)+O(\epsilon^{2})=0.\]
By comparing the coefficients of the \(\epsilon^{0}\) and \(\epsilon^{1}\) terms, respectively, we obtain
\[\phi w_{0}w_{0}^{\prime}+w_{0}^{2}+c_{0}w_{0}+\phi\left(1-\phi\right)=0\] (14)
and
\[\phi w_{0}w_{1}^{\prime}+\left(\phi w_{0}^{\prime}+2w_{0}+c_{0} \right)w_{1}\] (15)
\[-\frac{3}{2}\phi w_{0}^{3}w_{0}^{\prime}-\frac{1}{2}w_{0}^{4}+c_{ 1}w_{0}=0.\]
Eq. (14) has a known solution Newman (1980), Murray (1989), given by
\[w_{0}(\phi)=\frac{1}{\sqrt{2}}\left(\phi-1\right),\qquad c_{0}=\frac{1}{\sqrt{ 2}}.\] (16)
Using Eq. (16), Eq. (15) can be solved as shown in Appendix A. Finally, by combining all of the terms, we have the approximate solutions (up to the first-order correction)
\[w(\phi) = \frac{1}{\sqrt{2}}\left(\phi-1\right)\left[1+\frac{\epsilon}{6} \left(\phi^{2}-\frac{21}{10}\phi+\frac{6}{5}\right)\right],\] (17)
\[\nu = \frac{1}{\sqrt{2}}\left(1-\frac{\epsilon}{20}\right).\] (18)
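As a consistency check (ours, not from the paper), the following SymPy snippet substitutes Eqs. (17)-(18) into Eq. (10) and confirms that the coefficients of \(\epsilon^{0}\) and \(\epsilon^{1}\) in the residual vanish, i.e., that Eqs. (14) and (15) are satisfied.

```python
import sympy as sp

phi, eps = sp.symbols('phi epsilon')

w0 = (phi - 1) / sp.sqrt(2)
w1 = (phi - 1) * (phi**2 - sp.Rational(21, 10) * phi + sp.Rational(6, 5)) / (6 * sp.sqrt(2))
w = w0 + eps * w1                               # Eq. (17)
nu = (1 - eps / 20) / sp.sqrt(2)                # Eq. (18)
wp = sp.diff(w, phi)

# Left-hand side of Eq. (10)
residual = sp.expand(phi * (w - sp.Rational(3, 2) * eps * w**3) * wp
                     - eps * w**4 / 2 + w**2 + nu * w + phi * (1 - phi))

# The O(1) and O(epsilon) parts must vanish
assert sp.simplify(residual.coeff(eps, 0)) == 0
assert sp.simplify(residual.coeff(eps, 1)) == 0
```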
Using the transformation \(\phi_{\xi}=\phi_{z}/\sqrt{\epsilon}=w/\sqrt{\epsilon}\), from Eq. (7), we can obtain the solution for the velocity field
\[v(\phi(\xi))=-\frac{\sqrt{\epsilon}w(\phi(\xi))}{\sqrt{1+\epsilon w^{2}(\phi( \xi))}}.\] (19)
In addition, from Eq. (17), after evaluating the integral \(\sqrt{\epsilon}\int d\phi/w(\phi)=\int d\xi\), we obtain the approximate analytical solution for the wave profile
\[a\ln\frac{\left(\phi-1\right)^{2}}{1+\frac{\epsilon}{6}\left( \phi^{2}-\frac{21}{10}\phi+\frac{6}{5}\right)}\] (20)
\[+2ab\tan^{-1}\left(b\left(20\phi-21\right)\right)+\xi_{0}=\xi,\]
where \(a=\frac{30\sqrt{2\epsilon}}{60+\epsilon}\), \(b=\frac{\sqrt{\epsilon}}{\sqrt{2400+39\epsilon}}\) and \(\xi_{0}=a\left[\ln\left(1+\frac{\epsilon}{5}\right)+2b\tan^{-1}(21b)\right]\), which is determined by using the boundary condition \(\phi(0)=0\). Eq. (20) is an implicit solution, but the variables are separated explicitly, so we can plot the wave profile and the corresponding velocity field Eq. (19), as illustrated in Fig. (1). It should be noted that the solutions in Eq. (17), Eq. (19), and Eq. (20) are valid for \(0\leq\phi\leq 1\); otherwise, they are zero. The wave profile has a sharp front interface where the concentration falls to zero at a finite front position. This feature is in qualitative agreement with the experimental observations, in which the morphogen concentration profiles have a clear invading front interface Verbeni et al. (2013), Sánchez et al. (2015).
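Since Eq. (20) gives \(\xi\) explicitly as a function of \(\phi\), the profile in Fig. (1) is straightforward to reproduce numerically. The following NumPy/Matplotlib sketch (with \(\epsilon=0.2\), as in the figure) evaluates Eq. (20) and the velocity field Eq. (19); the variable names are ours.

```python
import numpy as np
import matplotlib.pyplot as plt

eps = 0.2
a = 30.0 * np.sqrt(2.0 * eps) / (60.0 + eps)
b = np.sqrt(eps) / np.sqrt(2400.0 + 39.0 * eps)
xi0 = a * (np.log(1.0 + eps / 5.0) + 2.0 * b * np.arctan(21.0 * b))

phi = np.linspace(1e-6, 1.0 - 1e-6, 400)
# Eq. (20): xi as an explicit function of phi
xi = a * np.log((phi - 1.0)**2 / (1.0 + eps / 6.0 * (phi**2 - 2.1 * phi + 1.2))) \
     + 2.0 * a * b * np.arctan(b * (20.0 * phi - 21.0)) + xi0
# Eq. (17) for the gradient w(phi) and Eq. (19) for the velocity field
w = (phi - 1.0) / np.sqrt(2.0) * (1.0 + eps / 6.0 * (phi**2 - 2.1 * phi + 1.2))
v = -np.sqrt(eps) * w / np.sqrt(1.0 + eps * w**2)

plt.plot(xi, phi, '-', label=r'$\phi(\xi)$')
plt.plot(xi, v, '--', label=r'$v(\xi)$')
plt.xlabel(r'$\xi$')
plt.legend()
plt.show()
```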
From Eq. (19), we observe that \(v(\phi)\to 1\) as \(\epsilon\to\infty\), since \(w(\phi)<0\). Thus, the velocity field \(v(\phi)\) in this regime is close to constant even though the solution for \(w(\phi)\) is only approximate. To obtain a better approximation of the wave speed as a function of \(\epsilon\), we use the fact that the front speed equals the velocity field at the leading edge, \(c=v(0)\). Thus, from Eq. (17) and Eq. (19), we have
\[c(\epsilon)=\sqrt{\frac{\epsilon}{2}}\frac{1+\epsilon/5}{\sqrt{1+\frac{ \epsilon}{2}\left(1+\epsilon/5\right)^{2}}}.\] (21)
By expanding Eq. (21), we can prove that \(c/\sqrt{\epsilon}\approx\left(1-\frac{\epsilon}{20}\right)/\sqrt{2}+O(\epsilon ^{2})=\nu\), which is consistent with the first-order approximate solution in Eq. (18). As \(\epsilon\to\infty\), from Eq. (21), the wave speed reaches the limited value at \(c=1\) (or \(c_{s}\) with a physical unit), which proves that this flux-limited reaction–diffusion equation provides the saturated wave speed, as required for biological applications Verbeni et al. (2013), Sánchez et al. (2015).
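Both claims about Eq. (21) can be checked symbolically; the short SymPy snippet below (ours, not from the paper) verifies the small-\(\epsilon\) expansion against Eq. (18) and the saturation \(c\to 1\) as \(\epsilon\to\infty\).

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
c = sp.sqrt(eps / 2) * (1 + eps / 5) / sp.sqrt(1 + eps / 2 * (1 + eps / 5)**2)  # Eq. (21)

# Small-epsilon expansion recovers Eq. (18)
expansion = sp.series(c / sp.sqrt(eps), eps, 0, 2).removeO()
assert sp.simplify(expansion - (1 - eps / 20) / sp.sqrt(2)) == 0

# Saturation of the front speed
assert sp.limit(c, eps, sp.oo) == 1
```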
<figure><img src="content_image/1805.00111/x2.png"><figcaption>Figure 2: (Color online) Numerical concentration profiles at t=15 forselected values of the rate ratio ϵ. The dash-dot line represents the initialprofile.</figcaption></figure>
<figure><img src="content_image/1805.00111/x3.png"><figcaption>Figure 3: (Color online) Front position versus time corresponding to theconcentration profiles in Fig. (2). The markers are shown for every three datapoints, and the solid lines represent the linear fitting curve for the last 50data.</figcaption></figure>
## IV Numerical solutions and discussion
To compare the analytical predictions with more accurate numerical values, we solve the dimensionless flux-limited reaction–diffusion equation (Eq. (4)) by using a nonstandard fully implicit finite-difference method Eberl and Demaret (2007), Ngamsaad and Suantai (2016). First, we rewrite Eq. (4) in the usual form of the reaction–diffusion equation
\[u_{t}=\left[M(u,u_{x})u_{x}\right]_{x}+f(u)u,\] (22)
where \(M(u,u_{x})=\epsilon u/\sqrt{1+\epsilon^{2}u_{x}^{2}}\), which plays the role of a nonlinear diffusion coefficient, and \(f(u)=1-u\) denotes the nonlinear reaction rate. Solving Eq. (22) with a standard explicit method is inefficient because the variable diffusion coefficient imposes a severe stability restriction on the time step Press et al. (1988). In addition, solving it with a standard implicit scheme is even more difficult due to the nonlinearity of the equation. The idea of the nonstandard fully implicit finite-difference method is to evaluate the nonlinear coefficients at the known time level while advancing the solution implicitly, so that the unknown appears only linearly. Thus, we define the discrete space and time as follows: \(x_{i}=i\delta x\), \(t_{n}=n\delta t\), where \(\delta x\) is the grid spacing, \(\delta t\) is the time step, \(i\in\{0,1,2,\ldots,J\}\), \(n\in\{0,1,2,\ldots,N\}\), and \(J\) and \(N\) are integers. Now, the discrete concentration reads \(u^{n}_{i}=u(x_{i},t_{n})\). Thus, Eq. (22) in discrete form is given by
\[\frac{\partial}{\partial t}u^{n+1}_{i}\approx\frac{\partial}{\partial x}\left( M^{n}_{i}\frac{\partial}{\partial x}u^{n+1}_{i}\right)+f^{n}_{i}u^{n+1}_{i},\] (23)
where \(M^{n}_{i}=M(u^{n}_{i},\partial u^{n}_{i}/\partial x)\) and \(f^{n}_{i}=1-u^{n}_{i}\). By using this approach, Eq. (23) can be evaluated as a tridiagonal matrix equation in the usual manner. It has been proved that this numerical scheme is sufficiently stable for solving this type of nonlinear partial differential equation. A complete evaluation of the algorithm and its stability analysis are presented in Appendix B.
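For concreteness, the following is a minimal sketch (not the authors' code) of a single time step of the scheme in Eq. (23), using the tridiagonal coefficients of Eqs. (32)-(33) and the zero-flux boundary modifications derived in Appendix B; the helper names `M` and `step` are ours.

```python
import numpy as np
from scipy.linalg import solve_banded

def M(u, ux, eps):
    """Nonlinear diffusion coefficient M(u, u_x) of Eq. (22)."""
    return eps * u / np.sqrt(1.0 + (eps * ux)**2)

def step(u, eps, dx, dt):
    """Advance u^n -> u^{n+1} by solving the tridiagonal system of Eq. (34)."""
    mu = dt / dx**2
    # Interface coefficients M^n_{i+1/2}, Eqs. (30)-(31)
    Mh = M(0.5 * (u[:-1] + u[1:]), (u[1:] - u[:-1]) / dx, eps)
    f = 1.0 - u                       # logistic reaction rate f(u^n_i)
    J = u.size - 1

    alpha = np.zeros(J + 1); beta = np.zeros(J + 1); theta = np.zeros(J + 1)
    alpha[1:] = -mu * Mh              # alpha_i = -mu * M_{i-1/2}
    beta[:-1] = -mu * Mh              # beta_i  = -mu * M_{i+1/2}
    theta[1:-1] = 1.0 - dt * f[1:-1] + mu * (Mh[:-1] + Mh[1:])
    # Zero-flux boundaries: doubled off-diagonal entries (Appendix B)
    beta[0], alpha[J] = -2.0 * mu * Mh[0], -2.0 * mu * Mh[-1]
    theta[0] = 1.0 - dt * f[0] + 2.0 * mu * Mh[0]
    theta[J] = 1.0 - dt * f[J] + 2.0 * mu * Mh[-1]

    # Banded storage for a tridiagonal solve (one sub- and one superdiagonal)
    ab = np.zeros((3, J + 1))
    ab[0, 1:] = beta[:-1]             # superdiagonal
    ab[1, :] = theta                  # main diagonal
    ab[2, :-1] = alpha[1:]            # subdiagonal
    return solve_banded((1, 1), ab, u)
```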
In our computations, we set the grid spacing and the time step to \(\delta x=0.01\) and \(\delta t=0.01\), respectively. All of the calculations were performed using 3,000 grid points and 1,500 time steps, which covered a spatial length of 30 and a total time of 15 in dimensionless units. The initial concentration profile, \(u_{0}(x)\), was set to a step function:
\[u_{0}(x)=\left\{\begin{array}[]{lc}1,&x<10\\ 0,&x\geq 10.\end{array}\right.\] (24)
The zero flux condition, \(u_{x}=0\), was imposed at the boundaries. Illustrations of the concentration profiles obtained with various rate ratios (\(\epsilon\)) using the numerical method are shown in Fig. (2). We found that the concentration profiles evolved with the sharp traveling wave, which decreased to zero at a finite front position \(r_{f}\), as predicted by the analytical solution. The wave profile had a smoother interface as the value of \(\epsilon\) increased because the frictional rate \(\gamma\) was small relative to the growth rate \(\alpha\), so the morphogens migrated toward the free space more rapidly. For large values of \(\epsilon\), the profiles obtained over an equal time tended to overlap, thereby demonstrating that the front speed tended to reach a saturated value.
The front positions were collected every \(t=0.1\) to compute the wave speed. Due to numerical deviations, the front position was determined by the first position where the concentration was lower than \(1\times 10^{-6}\). The last 50 data points were selected for fitting with the linear equation, \(r_{f}=ct+r_{0}\), where the wave speed was the slope of the fitted equation. Plots of the corresponding front positions versus time are shown in Fig. (3). Our calculated numerical front positions fitted well with the linear equation, thereby indicating that the concentration propagated with a constant front speed.
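The front-tracking procedure above can be sketched as follows, reusing the hypothetical `step()` helper from the previous snippet; the value \(\epsilon=0.5\) is an arbitrary illustrative choice, while the remaining parameters follow the text (\(\delta x=\delta t=0.01\), 3,000 grid points, 1,500 time steps, threshold \(10^{-6}\)).

```python
import numpy as np

eps, dx, dt = 0.5, 0.01, 0.01
x = np.arange(3000) * dx
u = np.where(x < 10.0, 1.0, 0.0)      # step-function initial profile, Eq. (24)

times, fronts = [], []
for n in range(1, 1501):
    u = step(u, eps, dx, dt)
    if n % 10 == 0:                   # record the front every t = 0.1
        idx = np.argmax(u < 1e-6)     # first position below the threshold
        times.append(n * dt)
        fronts.append(x[idx])

# Linear fit r_f = c*t + r_0 over the last 50 recorded front positions
c, r0 = np.polyfit(times[-50:], fronts[-50:], 1)
print(f"numerical front speed c = {c:.4f}")
```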
A plot of the numerical front speed \(c\) versus the rate ratio \(\epsilon\) is compared with the analytical predictions obtained using Eq. (21) in Fig. (4). Both the analytical and numerical data showed that the front speed increased with \(\epsilon\), and it reached a saturated value at \(c=1\) as \(\epsilon\) approached a large value. The analytical results agreed well with the numerical data for a small value of the rate ratio (\(\epsilon\ll 1\)) because the correction of our analytical solution was only \(O(\epsilon^{2})\).
<figure><img src="content_image/1805.00111/x4.png"><figcaption>Figure 4: (Color online) Front speed versus the rate ratio ϵ. The circularmarkers represent the numerical results and the dashed lines denote the frontspeed predicted using Eq. (21). The inset shows the results for small valuesof ϵ.</figcaption></figure>
Our analytical and numerical solutions of this simple flux-limited reaction–diffusion equation capture some physical features of morphogenesis. In particular, the wave profiles have a sharp front interface where the concentration decreases to zero at a finite front position. This feature is in qualitative agreement with experimental observations because the morphogen concentration profiles have a clear invading front interface Verbeni et al. (2013), Sánchez et al. (2015). Finally, we proved that this flux-limited reaction-diffusion equation provides the saturated wave speed, which is a more realistic model compared with the conventional theory Rosenau (1992).
## V Conclusions
In this study, we investigated a simplified morphogenesis model governed by a porous medium-type flux-limited reaction–diffusion equation. This equation is actually an extension of the generalized Fisher–KPP equation. The approximate analytical solutions of this equation were obtained using a perturbation approach. We also solved this equation by using a nonstandard fully implicit finite-difference method in order to compare the results with the analytical predictions. The results showed that the morphogen concentration propagated as a sharp traveling wave that vanished at a finite front position and it reproduced a clear front interface. The front speed increased as the ratio of the growth rate relative to the frictional rate increased, and a saturated value was reached for a larger value of this rate ratio. We found that the flux-limited reaction–diffusion model can eliminate the shortcoming of the classical models, which yield a nonphysical infinite front speed. These features are in qualitative agreement with the experimental observations.
###### Acknowledgements.
This research was supported by the Research Fund for DPST Graduate with First Placement (_Grant no._ 28/2557) funded by The Institute for the Promotion of Teaching Science and Technology (IPST).
## Appendix A Evaluation of \(w_{1}(\phi)\) and \(c_{1}\)
By substituting Eq. (16) into Eq. (15), we have
\[\phi\left(\phi-1\right)w_{1}^{\prime}+\left(3\phi-1\right)w_{1}\] (25)
\[+\left[c_{1}-\frac{1}{4\sqrt{2}}\left(4\phi-1\right)\left(\phi-1 \right)^{2}\right]\left(\phi-1\right)=0.\]
Eq. (25) is a linear first-order ordinary differential equation of the form
\[w_{1}^{\prime}+p(\phi)w_{1}=q(\phi),\] (26)
where \(p(\phi)=(3\phi-1)/[\phi(\phi-1)]\) and \(q(\phi)=\frac{1}{4\sqrt{2}}(4\phi-1)(\phi-1)^{2}/\phi-c_{1}/\phi\). The solution of Eq. (26) is given by \(w_{1}=(C+\int I(\phi)q(\phi)d\phi)/I(\phi)\) where \(I(\phi)=e^{\int p(\phi)d\phi}\) is called the integrating factor and \(C\) is the integral constant Arfken (1985). After evaluation, we find that \(I=\phi\left(\phi-1\right)^{2}\), and thus we have
\[w_{1}(\phi)=\frac{1}{\phi\left(\phi-1\right)^{2}}\left\{C+\left(\phi-1\right)^{3}\left[\frac{1}{4\sqrt{2}}\left(\frac{2}{3}\left(\phi-1\right)^{3}+\frac{3}{5}\left(\phi-1\right)^{2}\right)-\frac{c_{1}}{3}\right]\right\}.\] (27)
To remove singularities at \(\phi=0\) and \(\phi=1\), it is necessary that \(C=0\) and \(c_{1}=-\frac{1}{20\sqrt{2}}\). Thus, the first-order concentration wave gradient is obtained by
\[w_{1}(\phi)=\frac{1}{6\sqrt{2}}\left(\phi-1\right)\left(\phi^{2}-\frac{21}{10} \phi+\frac{6}{5}\right).\] (28)
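The result can be double-checked symbolically; the SymPy snippet below (ours) confirms that Eq. (28) with \(c_{1}=-\frac{1}{20\sqrt{2}}\) satisfies the linear ODE (25).

```python
import sympy as sp

phi = sp.symbols('phi')
c1 = -1 / (20 * sp.sqrt(2))
w1 = (phi - 1) * (phi**2 - sp.Rational(21, 10) * phi + sp.Rational(6, 5)) / (6 * sp.sqrt(2))

# Left-hand side of Eq. (25)
lhs = phi * (phi - 1) * sp.diff(w1, phi) + (3 * phi - 1) * w1 \
      + (c1 - (4 * phi - 1) * (phi - 1)**2 / (4 * sp.sqrt(2))) * (phi - 1)
assert sp.simplify(lhs) == 0
```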
## Appendix B Evaluation of numerical scheme and stability analysis
The differential operators in Eq. (23) were discretized further by Eberl and Demaret (2007), Ngamsaad and Suantai (2016), and thus we obtain
\[\frac{u^{n+1}_{i}-u^{n}_{i}}{\delta t}=\frac{1}{\left(\delta x \right)^{2}}\left[M^{n}_{i+1/2}\left(u^{n+1}_{i+1}-u^{n+1}_{i}\right)\right.\] (29)
\[\left.-M^{n}_{i-1/2}\left(u^{n+1}_{i}-u^{n+1}_{i-1}\right)\right] +f^{n}_{i}u^{n+1}_{i},\]
where
\[M^{n}_{i-1/2} = M(\frac{u^{n}_{i-1}+u^{n}_{i}}{2},\frac{u^{n}_{i}-u^{n}_{i-1}}{ \delta x}),\] (30)
\[M^{n}_{i+1/2} = M(\frac{u^{n}_{i}+u^{n}_{i+1}}{2},\frac{u^{n}_{i+1}-u^{n}_{i}}{ \delta x}).\] (31)
It should be noted that the truncation error of Eq. (29) is \(O(\delta{t},\left(\delta{x}\right)^{2})\). After rearranging Eq. (29), we have
\[\alpha^{n}_{i}u^{n+1}_{i-1}+\theta^{n}_{i}u^{n+1}_{i}+\beta^{n}_{i}u^{n+1}_{i+ 1}=u^{n}_{i},\] (32)
where
\[\alpha^{n}_{i} = -\mu M^{n}_{i-1/2},\]
\[\beta^{n}_{i} = -\mu M^{n}_{i+1/2},\]
\[\theta^{n}_{i} = 1-\delta tf^{n}_{i}+\mu\left(M^{n}_{i-1/2}+M^{n}_{i+1/2}\right),\]
\[\mu = \delta t/\left(\delta x\right)^{2}.\] (33)
By imposing the zero-flux condition at the boundary grid \(\Omega\), i.e., \(\left.u_{x}\right|_{\Omega}\approx\frac{u^{n}_{\Omega+1}-u^{n}_{\Omega-1}}{2 \delta x}+O((\delta x)^{2})=0\), we find that \(u^{n}_{\Omega-1}=u^{n}_{\Omega+1}\) and \(M^{n}_{\Omega-1/2}=M^{n}_{\Omega+1/2}\). According to the boundary condition, we have \(\beta^{n}_{0}=-2\mu M^{n}_{1/2}\), \(\alpha^{n}_{J}=-2\mu M^{n}_{J-1/2}\), \(\theta^{n}_{0}=1-\delta tf^{n}_{0}+2\mu M^{n}_{1/2}\) and \(\theta^{n}_{J}=1-\delta tf^{n}_{J}+2\mu M^{n}_{J-1/2}\). Eq. (32) can be written as a tridiagonal matrix equation, which can be solved numerically at each time step to obtain the numerical concentration profile \(u^{n}_{i}\)Press et al. (1988), Ngamsaad and Suantai (2016). The tridiagonal matrix equation is given by
\[\textbf{A}^{n}\cdot\textbf{U}^{n+1}=\textbf{U}^{n},\] (34)
where
\[\textbf{A}^{n}=\left[\begin{array}[]{ccccc}\theta^{n}_{0}&\beta^{n}_{0}&\cdots &\cdots&0\\ \alpha^{n}_{1}&\theta^{n}_{1}&\beta^{n}_{1}&&\vdots\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ \vdots&&\alpha^{n}_{J-1}&\theta^{n}_{J-1}&\beta^{n}_{J-1}\\ 0&\cdots&\cdots&\alpha^{n}_{J}&\theta^{n}_{J}\end{array}\right],\] (35)
and
\[\textbf{U}^{n}=\left[\begin{array}[]{cccccc}u^{n}_{0}&u^{n}_{1}&u^{n}_{2}& \cdots&u^{n}_{J}\end{array}\right]^{\textnormal{T}}.\] (36)
We analyze the stability of this numerical scheme (Eq. (32)) by using the von Neumann approach, which assumes that
\[u^{n}_{i}=\left(\lambda\right)^{n}e^{\mathbf{i}ki\delta x},\] (37)
where \(\mathbf{i}=\sqrt{-1}\), \(\lambda\) represents the amplification factor and \(k\) is the wave number Press et al. (1988). By substituting Eq. (37) into Eq. (29), we have \(\lambda^{-1}=1-\delta tf^{n}_{i}-\mu M^{n}_{i+1/2}\left(e^{\mathbf{i}k\delta x }-1\right)+\mu M^{n}_{i-1/2}\left(1-e^{-\mathbf{i}k\delta x}\right)\), which can be approximated further to obtain
\[\lambda\approx\left[1-\delta tf^{n}_{i}+4\mu M^{n}_{i}\sin^{2}\left(k\delta x/ 2\right)+O(\delta x)\right]^{-1}.\] (38)
As \(u^{n}_{i}\) increases from 0 to 1, we find that \(0\leq f^{n}_{i}\leq 1\) and \(0\leq M^{n}_{i}<\infty\). At the saturated concentration \(f^{n}_{i}(u^{n}_{i}=1)=0\), it is guaranteed that \(0<\lambda<1\). Based on Eq. (37) and Eq. (38), the numerical solution could converge to a finite value provided that \(\delta x\ll 1\) and \(\delta t\ll 1\). Therefore, the algorithm is sufficiently stable for solving this type of nonlinear partial differential equation Eberl and Demaret (2007), Ngamsaad and Suantai (2016).
## References
* Murray (1989) J. Murray, _Mathematical Biology_ (Springer-Verlag, New York, 1989).
* Fisher (1937) R. Fisher, Ann. Eugenics **7**, 355 (1937).
* Kolmogorov et al. (1991) A. Kolmogorov, I. Petrovskii, and N. Piscounov, in _Selected works of AN Kolmogorov_, edited by V. Tikhomirov (Springer, 1991), pp. 242–270.
* Dessaud et al. (2007) E. Dessaud, L. L. Yang, K. Hill, B. Cox, F. Ulloa, A. Ribeiro, A. Mynett, B. G. Novitch, and J. Briscoe, Nature **450**, 717 (2007).
* Rogers and Schier (2011) K. W. Rogers and A. F. Schier, Annu. Rev. Cell Dev. Biol. **27**, 377 (2011), pMID: 21801015.
* Briscoe and Thérond (2013) J. Briscoe and P. P. Thérond, Nat. Rev. Mol. Cell Biol. **14**, 416 (2013).
* Simon et al. (2016) E. Simon, A. Aguirre-Tamaral, G. Aguilar, and I. Guerrero, J. Dev. Biol. **4** (2016), URL http://www.mdpi.com/2221-3759/4/4/34.
* Lander et al. (2002) A. D. Lander, Q. Nie, and F. Y. Wan, Dev. Cell **2**, 785 (2002), ISSN 1534-5807, URL http://www.sciencedirect.com/science/article/pii/S153458070200179X.
* Saha and Schaffer (2006) K. Saha and D. V. Schaffer, Development **133**, 889 (2006).
* Kondo and Miura (2010) S. Kondo and T. Miura, Science **329**, 1616 (2010), ISSN 0036-8075, URL https://science.sciencemag.org/content/329/5999/1616.
* Turing (1952) A. M. Turing, Phil. Trans. Roy. Soc. B **237**, 37 (1952).
* Crick (1970) F. Crick, Nature **225**, 420 (1970).
* Gierer and Meinhardt (1972) A. Gierer and H. Meinhardt, Kybernetik **12**, 30 (1972), URL https://doi.org/10.1007/BF00289234.
* Verbeni et al. (2013) M. Verbeni, O. Sánchez, E. Mollica, I. Siegl-Cachedenier, A. Carleton, I. Guerrero, A. R. i Altaba, and J. Soler, Phys. Life Rev. **10**, 457 (2013), ISSN 1571-0645, URL http://www.sciencedirect.com/science/article/pii/S1571064513000833.
* Sánchez et al. (2015) Ó. Sánchez, J. Calvo, C. Ibáñez, I. Guerrero, and J. Soler, in _Hedgehog signaling protocols_ (Springer, 2015), pp. 19–33.
* Rosenau (1992) P. Rosenau, Phys. Rev. A **46**, R7371 (1992), URL https://link.aps.org/doi/10.1103/PhysRevA.46.R7371.
* Brenier (2003) Y. Brenier, in _Optimal transportation and applications_ (Springer, 2003), pp. 91–121.
* Chertock et al. (2003) A. Chertock, A. Kurganov, and P. Rosenau, Nonlinearity **16**, 1875 (2003), URL http://stacks.iop.org/0951-7715/16/i=6/a=301.
* Caselles et al. (2013) V. Caselles et al., Publicacions Matemàtiques **57**, 144 (2013).
* Kurganov and Rosenau (2006) A. Kurganov and P. Rosenau, Nonlinearity **19**, 171 (2006), URL http://stacks.iop.org/0951-7715/19/i=1/a=009.
* Andreu et al. (2010) F. Andreu, V. Caselles, and J. Mazón, J. Differ. Equations **248**, 2528 (2010), ISSN 0022-0396, URL http://www.sciencedirect.com/science/article/pii/S0022039610000124.
* Andreu et al. (2012) F. Andreu, J. Calvo, J. Mazón, and J. Soler, J. Differ. Equations **252**, 5763 (2012), ISSN 0022-0396, URL http://www.sciencedirect.com/science/article/pii/S0022039612000411.
* Garrione and Sanchez (2015a) M. Garrione and L. Sanchez, Bound. Value Probl. **2015**, 45 (2015a), URL https://doi.org/10.1186/s13661-015-0303-y.
* Campos et al. (2013) J. Campos, P. Guerrero, Óscar Sánchez, and J. Soler, Ann. I. H. Poincaré - NA **30**, 141 (2013), ISSN 0294-1449, URL http://www.sciencedirect.com/science/article/pii/S0294144912000637.
* Campos and Soler (2016) J. Campos and J. Soler, Nonlinear Anal. **137**, 266 (2016), ISSN 0362-546X, nonlinear Partial Differential Equations, in honor of Juan Luis Vázquez for his 70th birthday, URL http://www.sciencedirect.com/science/article/pii/S0362546X1500437X.
* Calvo et al. (2015) J. Calvo, J. Campos, V. Caselles, O. Sánchez, and J. Soler, EMS Surv. Math. Sci **2**, 131 (2015).
* Calvo et al. (2016) J. Calvo, J. Campos, V. Caselles, O. Sánchez, and J. Soler, Invent. Math. **206**, 57 (2016), URL https://doi.org/10.1007/s00222-016-0649-5.
* Calvo (2017) J. Calvo, in _Computational Mathematics, Numerical Analysis and Applications_ (Springer, 2017), pp. 189–194.
* Calvo (2018) J. Calvo, SeMA Journal **75**, 173 (2018), URL https://doi.org/10.1007/s40324-017-0128-y.
* Garrione and Sanchez (2015b) M. Garrione and L. Sanchez, Bound. Value Probl. **2015**, 45 (2015b), ISSN 1687-2770, URL https://doi.org/10.1186/s13661-015-0303-y.
* Garduño and Maini (1994) F. S. Garduño and P. Maini, Appl. Math. Lett. **7**, 47 (1994), ISSN 0893-9659, URL http://www.sciencedirect.com/science/article/pii/0893965994900515.
* Ngamsaad and Suantai (2016) W. Ngamsaad and S. Suantai, Commun. Nonlinear Sci. Numer. Simulat. **35**, 88 (2016), ISSN 1007-5704, URL http://www.sciencedirect.com/science/article/pii/S1007570415003718.
* Newman (1980) W. Newman, J. Theor. Biol. **85**, 325 (1980).
* Eberl and Demaret (2007) H. J. Eberl and L. Demaret, Electron. J. Diff. Eqns., Conference **15**, 77 (2007).
* Press et al. (1988) W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, _Numerical Recipes in C: The Art of Scientific Computing_ (Cambridge University Press, Cambridge, 1988).
* Arfken (1985) G. B. Arfken, _Mathematical Methods for Physicists_ (Academic Press, San Diego, 1985).
# Combined Top-down and Bottom-up Approach to Multilevel Supervisory Control
Jan Komenda, Tomáš Masopust, and Jan H. van Schuppen
J. Komenda is with the Institute of Mathematics, Academy of Sciences of the Czech Republic, Žižkova 22, 616 62 Brno, Czech Republic. T. Masopust is with TU Dresden, Germany, and with the Institute of Mathematics, Academy of Sciences of the Czech Republic. J. H. van Schuppen is with Van Schuppen Control Research, Gouden Leeuw 143, 1103 KB, Amsterdam, The Netherlands. komenda@math.cas.cz, masopust@ipm.cz, jan.h.van.schuppen@xs4all.nl
###### Abstract
Recently, we have proposed two complementary approaches, top-down and bottom-up, to multilevel supervisory control of discrete-event systems. In this paper, we compare and combine these approaches. The combined approach retains the strong features of both, namely, the lower complexity of the top-down approach and the generality of the bottom-up approach. We show that, for prefix-closed languages, a posteriori supervisors computed in the bottom-up manner do not alter maximal permissiveness within the three-level coordination control architecture, that is, the supremal three-level conditionally-controllable and conditionally-normal language can always be computed in a distributed way using multilevel coordination. Moreover, a general polynomial-time procedure for the non-prefix-closed case is proposed based on coordinators for nonblockingness and a posteriori supervisors.
## I Introduction
Discrete-event abstractions of complex engineering systems often have a modular structure and typically consist of either a large Petri net or a network (synchronous product) of finite automata. Supervisory control theory was introduced to provide a formal guarantee of safety and nonblockingness for these systems. Modular and decentralized supervisory control theories are especially relevant for large-scale systems, and these are often combined with hierarchical control based on abstractions. Coordination control of distributed systems with synchronous communication was developed by the authors, see [8] and the references therein, in which a coordinator restricts the behavior of two or more subsystems so that, after further control synthesis, safety and nonblockingness of the distributed system are achieved.
In order to further decrease the complexity of control synthesis, a multilevel coordination control framework was proposed in [4], where a single (central) coordinator at the top level of the standard (three-level) coordination control was replaced by group supervisors for different group systems at the lowest level. These coordinators together with their supervisors then form the middle (intermediate) level, while a (single) high-level coordinator is at the top level of the three-level coordination control. This architecture considerably limits the computational complexity due to relatively small event sets at the various levels.
Recently, we proposed two complementary approaches, called top-down [4] and bottom-up [7], to multilevel supervisory control of discrete-event systems. We have developed constructive results in the top-down approach of [4], where it was shown under which conditions the maximally permissive solution for the three-level coordination control architecture exists, that is, the supremal three-level conditionally controllable language.
In this paper, we propose a combined approach, which can be described as a top-down design followed by a bottom-up computation. The combined approach combines the strong features, namely, the lower complexity of the top-down approach with the generality of the bottom-up approach. More specifically, we propose to complete the top-down design of coordinators from the high-level to the bottom-level by computing a posteriori supervisors on these coordinator alphabets in the opposite direction, i.e., in the bottom-up manner. The role of these supervisors is to enforce the sufficient conditions for distributed computation presented in [10], which are formulated as controllability and normality on all coordinator alphabets. Note that unlike the bottom-up approach of [7], we do not need to compute supervisors at the higher level, but only supervisors for individual subsystems at the lowest level are computed.
Moreover, we show that for prefix-closed languages a posteriori supervisors do not alter maximal permissiveness within the three-level coordination control architecture, i.e., the supremal three-level conditionally-controllable and conditionally-normal languages can always be computed in the distributed way. In the general case of non-prefix-closed languages, we propose to compute coordinators for nonblockingness in the bottom-up manner in addition to the a posteriori supervisors.
This paper has the following structure. In Section II, we recall the basic elements of supervisory control theory together with basic (three-level) coordination control framework. In Section III, multilevel coordination control framework is discussed and the strong points and drawbacks of the two existing approaches are compared. The main results of the paper are presented in Sections IV and V. In the former section, it is proven that in the combined approach based on a posteriori supervisors, the supremal three-level conditionally-controllable and conditionally-normal languages can always be computed in a distributed way. Then, in the latter section concerned with the non-prefix-closed case, a formal general procedure is presented, where a posteriori supervisors are combined with coordinators for nonblockingness.
## II Preliminaries
This section recalls the basic results about coordination control of partially observed DES with a single (centralized) coordinator. First, elementary notions and notation of supervisory control theory are recalled. The reader is referred to [2] for more details.
Let \(A\) be a finite nonempty set of _events_, and let \(A^{*}\) denote the set of all finite words over \(A\). The _empty word_ is denoted by \(\varepsilon\).
A _generator_ is a quintuple \(G=(Q,A,f,q_{0},Q_{m})\), where \(Q\) is the finite nonempty set of _states_, \(A\) is the _event set_, \(f:Q\times A\to Q\) is the _partial transition function_, \(q_{0}\in Q\) is the _initial state_, and \(Q_{m}\subseteq Q\) is the set of _marked states_. In the usual way, the transition function \(f\) can be extended to the domain \(Q\times A^{*}\) by induction. The behavior of \(G\) is described in terms of languages. The language _generated_ by \(G\) is the set \(L(G)=\{s\in A^{*}\mid f(q_{0},s)\in Q\}\) and the language _marked_ by \(G\) is the set \(L_{m}(G)=\{s\in A^{*}\mid f(q_{0},s)\in Q_{m}\}\subseteq L(G)\).
A _(regular) language_\(L\) over an event set \(A\) is a set \(L\subseteq A^{*}\) such that there exists a generator \(G\) with \(L_{m}(G)=L\). The prefix closure of \(L\) is the set \(\overline{L}=\{w\in A^{*}\mid\text{there exists }u\in A^{*}\text{ such that } wu\in L\}\); \(L\) is _prefix-closed_ if \(L=\overline{L}\).
A _(natural) projection_\(P:A^{*}\to A_{o}^{*}\), for some \(A_{o}\subseteq A\), is a homomorphism defined so that \(P(a)=\varepsilon\), for \(a\in A\setminus A_{o}\), and \(P(a)=a\), for \(a\in A_{o}\). The _inverse image_ of \(P\), denoted by \(P^{-1}:A_{o}^{*}\to 2^{A^{*}}\), is defined as \(P^{-1}(s)=\{w\in A^{*}\mid P(w)=s\}\). The definitions can naturally be extended to languages. The projection of a generator \(G\) is a generator \(P(G)\) whose behavior satisfies \(L(P(G))=P(L(G))\) and \(L_{m}(P(G))=P(L_{m}(G))\).
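To make the notions above concrete, the following is a minimal Python sketch (ours, purely illustrative) of a generator and of the natural projection acting on words; the class and function names are not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Generator:
    states: set
    events: set
    delta: dict          # partial transition function: (state, event) -> state
    q0: object           # initial state
    marked: set = field(default_factory=set)

    def run(self, word):
        """Return the state reached on `word`, or None if f is undefined."""
        q = self.q0
        for a in word:
            if (q, a) not in self.delta:
                return None
            q = self.delta[(q, a)]
        return q

    def generates(self, word):   # word in L(G)
        return self.run(word) is not None

    def marks(self, word):       # word in Lm(G)
        return self.run(word) in self.marked

def project(word, Ao):
    """Natural projection P: erase events outside the observable subset Ao."""
    return tuple(a for a in word if a in Ao)
```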
A _controlled generator with partial observations_ is a structure \((G,A_{c},P,\Gamma)\), where \(G\) is a generator over \(A\), \(A_{c}\subseteq A\) is the set of _controllable events_, \(A_{u}=A\setminus A_{c}\) is the set of _uncontrollable events_, \(P:A^{*}\to A_{o}^{*}\) is the projection, and \(\Gamma=\{\gamma\subseteq A\mid A_{u}\subseteq\gamma\}\) is the _set of control patterns_.
A _supervisor_ for the controlled generator \((G,A_{c},P,\Gamma)\) is a map \(S:P(L(G))\to\Gamma\).
A _closed-loop system_ associated with the controlled generator \((G,A_{c},P,\Gamma)\) and the supervisor \(S\) is defined as the smallest language \(L(S/G)\subseteq A^{*}\) such that
1. \(\varepsilon\in L(S/G)\) and
2. if \(s\in L(S/G)\), \(sa\in L(G)\), and \(a\in S(P(s))\), then also \(sa\in L(S/G)\).
The marked behavior of the closed-loop system is defined as \(L_{m}(S/G)=L(S/G)\cap L_{m}(G)\).
Let \(G\) be a generator over \(A\), and let \(K\subseteq L_{m}(G)\) be a specification. The aim of supervisory control theory is to find a nonblocking supervisor \(S\) such that \(L_{m}(S/G)=K\). The nonblockingness means that \(\overline{L_{m}(S/G)}=L(S/G)\), hence \(L(S/G)=\overline{K}\). It is known that such a supervisor exists if and only if \(K\) is
1. _controllable_ with respect to \(L(G)\) and \(A_{u}\); that is, \(\overline{K}A_{u}\cap L(G)\subseteq\overline{K}\), and
2. _observable_ with respect to \(L(G)\), \(A_{o}\), and \(A_{c}\); that is, for all \(s\in\overline{K}\) and \(\sigma\in A_{c}\), if \(s\sigma\notin\overline{K}\) and \(s\sigma\in L(G)\), then \(P^{-1}[P(s)]\sigma\cap\overline{K}=\emptyset\), where \(P:A^{*}\to A_{o}^{*}\).
The synchronous product (parallel composition) of languages \(L_{1}\subseteq A_{1}^{*}\) and \(L_{2}\subseteq A_{2}^{*}\) is defined by
\[L_{1}\parallel L_{2}=P_{1}^{-1}(L_{1})\cap P_{2}^{-1}(L_{2})\subseteq A^{*}\,,\]
where \(P_{i}:A^{*}\to A_{i}^{*}\), for \(i=1,2\), are projections to local event sets. In terms of generators, it is known that \(L(G_{1}\|G_{2})=L(G_{1})\parallel L(G_{2})\) and \(L_{m}(G_{1}\|G_{2})=L_{m}(G_{1})\parallel L_{m}(G_{2})\), see [2] for more details.
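In terms of generators, the synchronous product is the usual product construction in which shared events must occur jointly and private events interleave. A sketch on the illustrative `Generator` class above (building only the reachable part):

```python
def sync(G1, G2):
    """Synchronous product G1 || G2 on the Generator class sketched above."""
    delta, states = {}, set()
    q0 = (G1.q0, G2.q0)
    stack = [q0]
    while stack:
        (x, y) = q = stack.pop()
        if q in states:
            continue
        states.add(q)
        for a in G1.events | G2.events:
            nx = G1.delta.get((x, a)) if a in G1.events else x
            ny = G2.delta.get((y, a)) if a in G2.events else y
            if nx is None or ny is None:   # event not enabled in some component
                continue
            delta[(q, a)] = (nx, ny)
            stack.append((nx, ny))
    marked = {(x, y) for (x, y) in states if x in G1.marked and y in G2.marked}
    return Generator(states, G1.events | G2.events, delta, q0, marked)
```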
We need the following lemma, which should be obvious.
**Lemma 1**: _For any language \(L\subseteq A^{*}\) and projections \(P_{1}:A^{*}\to B_{1}^{*}\) and \(P_{2}:A^{*}\to B_{2}^{*}\) with \(B_{2}\subseteq B_{1}\subseteq A\), it holds that \(P_{1}(L)\parallel P_{2}(L)=P_{1}(L)\)._
Let \(G\) be a generator over \(A\), and let \(Q:A^{*}\to A_{o}^{*}\) be a natural projection. A language \(K\subseteq L(G)\) is _normal_ with respect to \(L(G)\) and \(Q\) if \(\overline{K}=Q^{-1}Q(K)\cap L(G)\).
Recall that controllability is preserved by the synchronous product. It is easy to show that the same holds for normality.
**Lemma 2**: _For \(i=1,2,\dots,n\), let \(K_{i}\subseteq L_{i}\) be controllable with respect to \(L_{i}\subseteq A_{i}^{*}\) and \(A_{i,u}\), nonconflicting, and normal with respect to \(L_{i}\) and \(Q_{i}\), where \(Q_{i}:A_{i}^{*}\to A_{i,o}^{*}\) are natural projections that define partial observations in subsystems. Then \(\parallel_{i=1}^{n}K_{i}\) is controllable with respect to \(\parallel_{i=1}^{n}L_{i}\) and \(\cup_{i=1}^{n}A_{i,u}\) and normal with respect to \(\parallel_{i=1}^{n}L_{i}\) and \(Q\), where \(Q:(\cup_{i=1}^{n}A_{i})^{*}\to(\cup_{i=1}^{n}A_{i,o})^{*}\) is the natural projection that describes partial observations over the global alphabet._
Transitivity of controllability and normality is needed later.
**Lemma 3** ([4]): _Let \(K\subseteq L\subseteq M\) be languages over \(A\) such that \(K\) is controllable with respect to \(L\) and \(A_{u}\) and normal with respect to \(L\) and \(Q\), and \(L\) is controllable with respect to \(M\) and \(A_{u}\) and normal with respect to \(M\) and \(Q\). Then \(K\) is controllable with respect to \(M\) and \(A_{u}\) and normal with respect to \(M\) and \(Q\)._
Now we recall the basic notions of coordination control [8]. A language \(K\) over \(\cup_{i=1}^{n}A_{i}\) is _conditionally decomposable with respect to alphabets \((A_{i})_{i=1}^{n}\) and \(A_{k}\)_, where \(\cup_{1\leq i,j\leq n}^{i\neq j}(A_{i}\cap A_{j})\subseteq A_{k}\subseteq\cup_{j=1}^{n}A_{j}\), if
\[K=\ \parallel_{i=1}^{n}P_{i+k}(K)\,,\]
for projections \(P_{i+k}\) from \(\cup_{j=1}^{n}A_{j}\) to \(A_{i}\cup A_{k}\), for \(i=1,2,\ldots,n\). The alphabet \(A_{k}\) is a coordinator alphabet and includes all shared events:
\[A_{sh}=\cup_{1\leq i,j\leq n}^{i\neq j}(A_{i}\cap A_{j})\subseteq A_{k}\,.\]
This has the following well-known impact.
**Lemma 4** ([3]): _Let \(P_{k}:A^{*}\to A_{k}^{*}\) be a projection, and let \(L_{i}\) be a language over \(A_{i}\), for \(i=1,2,\dots,n\), and let \(A_{sh}\subseteq A_{k}\). Then \(P_{k}(\parallel_{i=1}^{n}L_{i})=\parallel_{i=1}^{n}P_{k}(L_{i})\)._
The problem of coordination control synthesis is now recalled.
**Problem 5**: _Let \(G_{i}\), for \(i=1,2,\dots,n\), be local generators over the event sets \(A_{i}\) of a modular plant \(G=\parallel_{i=1}^{n}G_{i}\), and let \(G_{k}\) be a coordinator over an alphabet \(A_{k}\). Let \(K\subseteq L(G\|G_{k})\) be a specification language. Assume that \(A_{k}\supseteq A_{sh}\) and that \(K\) is conditionally decomposable with respect to event sets \((A_{i})_{i=1}^{n}\) and \(A_{k}\)._
_The overall task \(K\) is divided into the local subtasks and the coordinator subtask, cf. [11]. The supervisor \(S_{k}\) for the coordinator will guarantee that \(L(S_{k}/G_{k})\subseteq P_{k}(K)\). Similarly, the supervisors \(S_{i}\) will guarantee that \(L(S_{i}/[G_{i}\|(S_{k}/G_{k})])\subseteq P_{i+k}(K)\), for \(i=1,2,\dots,n\)._
_The problem is to determine the supervisors \(S_{1},S_{2},\dots,S_{n}\), and \(S_{k}\) such that_
\[\parallel_{i=1}^{n}L_{m}(S_{i}/[G_{i}\|(S_{k}/G_{k})])=K\,.\]
The main existential result for a prefix-closed specification \(K\) is the special case of Theorem 13 of [9] extended to general \(n\geq 2\).
**Theorem 6**: _[_9_]_ _Consider the setting of Problem 5. There exist supervisors \(S_{1},S_{2},\ldots,S_{n}\) and \(S_{k}\) based on partial observations such that_
\[\parallel_{i=1}^{n}L(S_{i}/[G_{i}\|(S_{k}/G_{k})])=K\] (1)
_if and only if \(K\) is_
1. _conditionally controllable with respect to the generators_ \(G_{i}\) _and_ \(G_{k}\) _and the uncontrollable sets_ \(A_{i,u}\) _and_ \(A_{k,u}\)_, for_ \(i=1,2,\ldots,n\)_, and_
2. _conditionally observable with respect to the generators_ \(G_{i}\) _and_ \(G_{k}\)_, the event sets_ \(A_{i,c}\) _and_ \(A_{k,c}\)_, and the projections_ \(Q_{i+k}\) _and_ \(Q_{k}\) _from_ \(A_{i}^{*}\) _to_ \(A_{i,o}^{*}\)_, for_ \(i=1,2,\ldots,n\)_._
Recall that \(K\subseteq L(G_{1}\|G_{2}\|\dots\|G_{n}\|G_{k})\) is _conditionally controllable_ for generators \(G_{1},G_{2},\dots,G_{n}\) and a coordinator \(G_{k}\) and uncontrollable alphabets \(A_{i,u}\), \(i=1,2,\dots,n\), and \(A_{k,u}\) if \(P_{k}(K)\) is controllable with respect to \(L(G_{k})\) and \(A_{k,u}\), and \(P_{i+k}(K)\) is controllable with respect to \(L(G_{i})\parallel P_{k}(K)\) and \(A_{i+k,u}=(A_{i}\cup A_{k})\cap A_{u}\), for \(i=1,2,\dots,n\).
For coordination control with partial observations, the notion of conditional observability is of the same importance as observability for monolithic supervisory control theory with partial observations. We recall that the supervisors \(S_{i}\), \(i=1,2,\dots,n\), are supervisors based on partial observations, because they have only information about observable events from \(A_{i,o}\) and observable coordinator events \(A_{k,o}\), but do not observe events from \(A_{i+k}\setminus(A_{i,o}\cup A_{k,o})\).
A language \(K\subseteq L(G_{1}\|G_{2}\|\dots\|G_{n}\|G_{k})\) is _conditionally observable_ with respect to the generators \(G_{i}\) and \(G_{k}\), controllable sets \(A_{i,c}\) and \(A_{k,c}\), and projections \(Q_{i+k}\) and \(Q_{k}\), where \(Q_{i}:A_{i}^{*}\to A_{i,o}^{*}\), for \(i=1,2,\dots,n\), if \(P_{k}(K)\) is observable with respect to \(L(G_{k})\), \(A_{k,c}\), \(Q_{k}\), and \(P_{i+k}(K)\) is observable with respect to \(L(G_{i})\parallel P_{k}(K)\), \(A_{i+k,c}=A_{c}\cap(A_{i}\cup A_{k})\), and \(Q_{i+k}\), for \(i=1,2,\dots,n\).
The coordination control theory has been extended to the non-prefix-closed case in [8]. The extension consists in introducing coordinators for nonblockingness based on abstractions that are natural observers. We now state an important result from [8, Theorem 7] extended to general \(n\geq 2\).
**Theorem 7**: _Consider a modular plant with local marked languages \(L_{i}=L_{m}(G_{i})\subseteq A_{i}^{*}\), \(i=1,\dots,n\), and let projections \(P_{k}:A_{i}^{*}\to(A_{i}\cap A_{k})^{*}\), with shared events included in \(A_{k}\), be an \(L_{i}\)-observer, for \(i=1,\dots,n\). Define \(C_{k}\) as the nonblocking generator with \(L_{m}(C_{k})=\parallel_{i=1}^{n}P_{k}(L_{i})\) with notation \(L_{k}=L_{m}(C_{k})\), i.e., \(L(C_{k})=\overline{L_{k}}=\overline{\parallel_{i=1}^{n}P_{k}(L_{i})}\). Then the coordinated system \(G\parallel C_{k}\) is nonblocking, i.e., \(\overline{\parallel_{i=1}^{n}L_{i}\parallel L_{m}(C_{k})}=\|_{i=1}^{n} \overline{L_{i}}\parallel\overline{L_{m}(C_{k})}\)._
## III Three-level coordination control
Since too many events may need to be included in the coordinator alphabet for systems with a large number of subsystems, the top-down approach with three-level coordination control has been proposed in [4].
Given a modular system \(G=G_{1}\|G_{2}\|\dots\|G_{n}\), the three-level hierarchical structure depicted in Fig. 1 makes it possible to add coordinator events only locally (to low-level group coordinators).
<figure><img src="content_image/1502.07328/x1.png"><figcaption>Fig. 1: The multilevel control architecture under consideration.</figcaption></figure>
The event sets of low-level groups \(I_{j}\), \(j=1,2,\dots,m\), are denoted by
\[A_{I_{j}}=\bigcup\nolimits_{i\in I_{j}}A_{i}\,.\]
Recall that \(P_{I_{r}}\) denotes the projection \(P_{I_{r}}:A^{*}\to A_{I_{r}}^{*}\). Then \(P_{I_{r}+k}:A^{*}\to(A_{I_{r}}\cup A_{k})^{*}\) stands for the projection to the group alphabets extended with the high-level coordinator events. Similarly, \(P_{j+k_{r}+k}:A^{*}\to(A_{j}\cup A_{k_{r}}\cup A_{k})^{*}\) denotes the projection to the alphabet \(A_{j}\) of an automaton \(G_{j}\) belonging to the group \(I_{r}\) extended with the alphabet \(A_{k_{r}}\) of the group coordinator of the low-level group \(I_{r}\) and the high-level coordinator alphabet \(A_{k}\).
We start by constructing \(A_{k}\supseteq A_{sh}=\bigcup\nolimits_{k,\ell\in\{1,2,\dots,m\}}^{k\not=\ell}(A_{I_{k}}\cap A_{I_{\ell}})\) such that \(K=\ \parallel_{r=1}^{m}P_{I_{r}+k}(K)\). Note that \(A_{sh}\), that is, the set of events shared by the low-level groups, is much smaller than the set of all shared events. The reason is that the events shared only among subsystems belonging to a given low-level group do not count for \(A_{sh}\). An algorithm to construct \(A_{k}\) as an extension of \(A_{sh}\) making the first equation of Definition 8 below hold true is described in [5].
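For illustration, the shared-event sets appearing here (and in step 1 of Algorithm 1 below) are easy to compute; the toy alphabets in the following sketch are ours and show that events shared only within a group do not enter \(A_{sh}\).

```python
def shared_events(alphabets):
    """Events shared by at least two of the given alphabets (sets)."""
    sh = set()
    for i, Ai in enumerate(alphabets):
        for Aj in alphabets[i + 1:]:
            sh |= Ai & Aj
    return sh

# Toy alphabets of four subsystems organized into two low-level groups
A = {1: {'a', 'c'}, 2: {'b', 'c'}, 3: {'b', 'd'}, 4: {'a', 'd', 'e'}}
groups = [(1, 2), (3, 4)]

# Step 1 of Algorithm 1 for each group: events shared within the group
A_k_groups = [shared_events([A[i] for i in I]) for I in groups]          # [{'c'}, {'d'}]

# Events shared between the group alphabets, used to initialize A_k
A_sh = shared_events([set.union(*(A[i] for i in I)) for I in groups])    # {'a', 'b'}
```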
In order to simplify the notation and definitions, in [4] we have included all events of the global coordinator into the group coordinator alphabets \(A_{k_{j}}\) by defining \(A_{k_{j}}:=A_{k_{j}}\cup A_{k}\), for \(j=1,2,\dots,m\). This simplification enables us to use only the group coordinators \(G_{k_{j}}\) in all the definitions below, which is more concise than using \(G_{k_{j}}\|G_{k}\), but we have to bear in mind that from now on \(G_{k_{j}}\) may also contain high-level coordinator events from groups other than \(I_{j}\).
**Definition 8** (3-level conditional decomposability): _[_4_]_
_A language \(K\subseteq A^{*}\) is said to be three-level conditionally decomposable with respect to the alphabets \(A_{1}\), \(A_{2}\), …, \(A_{n}\), the high-level coordinator alphabet \(A_{k}\), and the low-level coordinator alphabets \(A_{k_{1}}\), \(A_{k_{2}}\), …, \(A_{k_{m}}\) if_
\[K=\ \parallel_{j=1}^{m}P_{I_{j}+k}(K)\] _and_ \[P_{I_{j}+k}(K) =\ \parallel_{i\in I_{j}}P_{i+k_{j}}(K)\]
_for \(j=1,2,\dots,m\)._
Definition 8 makes sense, because on the right-hand side of the second equation \(P_{i+k_{j}}(K)\) includes all events from both the group coordinator \(A_{k_{j}}\) and the high-level coordinator \(A_{k}\).
**Problem 9** (Three-level coordination control problem):
_Consider the modular system \(G=G_{1}\|G_{2}\|\dots\|G_{n}\) along with the three-level hierarchical structure of the subsystems organized into groups \(I_{j}\), \(j=1,2,\dots,m\leq n\), on the low level. The synchronous products \(\parallel_{i\in I_{j}}G_{i}\), \(j=1,2,\dots,m\), then represent the \(m\) high-level systems. The coordinators \(G_{k_{j}}\) are associated to groups of subsystems \(\{G_{i}\mid i\in I_{j}\}\), \(j=1,2,\dots,m\). The three-level coordination control problem consists in synthesizing the supervisor \(S_{i}\) for every low-level system \(G_{i}\), \(i=1,2,\ldots,n\), and the high-level supervisor \(S_{k_{j}}\) supervising the group coordinator \(G_{k_{j}}\), \(j=1,2,\dots,m\), such that the specification \(K=\overline{K}\subseteq L(G)\) is met by the closed-loop system, i.e.,_
\[\parallel_{j=1}^{m}\parallel_{i\in I_{j}}L(S_{i}/[G_{i}\|(S_{k_{j }}/G_{k_{j}})])=K\,. \triangleleft\]
Low level (group) coordinators \(G_{k_{j}}\), \(j=1,2,\dots,m\), are computed using Algorithm 1 below.
For a specification \(K\), the coordinator \(G_{k_{j}}\) of the \(j\)-th group of subsystems \(\{G_{i}\mid i\in I_{j}\}\) is computed as follows.
1. Set \(A_{k_{j}}=\bigcup_{i,\ell\in I_{j}}^{i\neq\ell}(A_{i}\cap A_{\ell})\) to be the set of all shared events of systems from the group \(I_{j}\).
2. Extend \(A_{k_{j}}\) so that \(P_{I_{j}+k}(K)\) is conditionally decomposable with respect to \((A_{i})_{i\in I_{j}}\) and \(A_{k_{j}}\), for instance using the method described in [5].
3. Set the coordinator equal to \(G_{k_{j}}=\|_{i=1}^{n}P_{k_{j}}(G_{i})\).
**Algorithm 1** Computation of the group coordinators.
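Step 3 of Algorithm 1 needs the projection of a generator, which can be obtained by erasing the events outside the coordinator alphabet and determinizing. Below is a sketch on the illustrative classes above, reusing the hypothetical `Generator` and `sync` helpers.

```python
from functools import reduce

def project_generator(G, Ao):
    """P(G): erase events outside Ao and determinize (subset construction)."""
    def closure(S):
        # states reachable from S via events that the projection erases
        stack, seen = list(S), set(S)
        while stack:
            q = stack.pop()
            for (p, a), r in G.delta.items():
                if p == q and a not in Ao and r not in seen:
                    seen.add(r)
                    stack.append(r)
        return frozenset(seen)

    q0 = closure({G.q0})
    states, delta, stack = {q0}, {}, [q0]
    while stack:
        S = stack.pop()
        for a in G.events & set(Ao):
            T = {G.delta[(q, a)] for q in S if (q, a) in G.delta}
            if not T:
                continue
            T = closure(T)
            delta[(S, a)] = T
            if T not in states:
                states.add(T)
                stack.append(T)
    marked = {S for S in states if S & G.marked}
    return Generator(states, G.events & set(Ao), delta, q0, marked)

def group_coordinator(subsystems, A_kj):
    """Step 3: G_{k_j} as the synchronous product of the projected subsystems."""
    return reduce(sync, (project_generator(G, A_kj) for G in subsystems))
```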
Recall that due to the extension of \(A_{k_{j}}\) by high-level coordinator events, \(A_{k}\subseteq A_{k_{j}}\), hence \(L(G_{k})\|L(G_{k_{j}})\) of [6] is reduced to \(L(G_{k_{j}})\). Indeed, by our choice of the coordinators, \(L(G_{k})\|L(G_{k_{j}})=P_{k}(L)\parallel P_{k_{j}}(L)=P_{k_{j}}(L)=L(G_{k_{j}})\), where \(L=\parallel_{i=1}^{n}L(G_{i})\) is the global plant language and the second equality holds by Lemma 1. Therefore, instead of the low-level coordinators \(G_{k_{j}}\), \(j=1,2,\dots,m\), for subsystems belonging to the individual groups \(\{G_{i}\mid i\in I_{j}\}\) and the high-level coordinators \(G_{k}\) that coordinate the different groups, we are using only the low-level (group) coordinators \(G_{k_{j}}\), but over larger alphabets compared to [6].
Since the only known condition ensuring that the projected generator is smaller than the original one is the observer property [13], we might need to further extend the alphabets \(A_{k_{j}}\) so that the projection \(P_{k_{j}}\) is an \(L(G_{i})\)-observer, for all \(i\in I_{j}\).
The key concept is the following.
**Definition 10** ([6]): _Consider the setting and notations of Problem 9, and let \(G_{k}\) be a coordinator. A language \(K\subseteq L(\parallel_{i=1}^{n}G_{i})\) is three-level conditionally controllable with respect to the generators \(G_{1}\), \(G_{2}\), …, \(G_{n}\), the local alphabets \(A_{1}\), \(A_{2}\), …, \(A_{n}\), the low-level coordinator alphabets \(A_{k_{1}}\), \(A_{k_{2}}\), …, \(A_{k_{m}}\), and the uncontrollable alphabet \(A_{u}\) if for all \(j=1,2,\dots,m\)_
1. \(P_{k_{j}}(K)\) _is controllable with respect to_ \(L(G_{k_{j}})\) _and_ \(A_{k_{j},u}\)_,_
2. \(P_{i+k_{j}}(K)\) _is controllable with respect to_ \(L(G_{i})\parallel P_{k_{j}}(K)\) _and_ \(A_{i+k_{j},u}\)_, for all_ \(i\in I_{j}\)_._ __
For the sake of brevity, \(K\) will be called three-level conditionally controllable with respect to \(G_{i}\), \(i\in I_{\ell}\), and \(G_{k_{\ell}}\), where some sets are not referenced.
For multilevel systems with partial observations, three-level conditional observability, cf. [10], is needed. Unfortunately, it is not closed under language unions and, therefore, three-level conditional normality has been proposed in [10], where it is shown that the supremal three-level conditionally normal language always exists.
**Definition 11**: _A language \(K\subseteq L(\parallel_{i=1}^{n}G_{i})\) is three-level conditionally normal with respect to the generators \(G_{1}\), \(G_{2}\), …, \(G_{n}\), the local alphabets \(A_{1}\), \(A_{2}\), …, \(A_{n}\), the low-level coordinator alphabets \(A_{k_{1}}\), \(A_{k_{2}}\), …, \(A_{k_{m}}\), and the corresponding natural projections if for all \(j=1,2,\dots,m\)_
1. \(P_{k_{j}}(K)\) _is normal with respect to_ \(L(G_{k_{j}})\) _and_ \(Q_{k_{j}}\)_,_
2. \(P_{i+k_{j}}(K)\) _is normal with respect to_ \(L(G_{i})\parallel P_{k_{j}}(K)\) _and_ \(Q_{i+k_{j}}\)_, for all_ \(i\in I_{j}\)_._ __
The computation of the supremal three-level conditionally controllable and conditionally normal sublanguage of \(K\), denoted by \(\mbox{$\sup{\rm mcCN}$}(K,L,A,Q)\), has been studied in [10]. We have shown that under some controllability and normality conditions on all coordinator alphabets it can be computed in a distributed way based on the following languages corresponding to supervisors for low-level group coordinators and local supervisors for individual subsystems, respectively. For all \(j=1,2,\dots,m\) and \(i\in I_{j}\),
\[\mbox{$\sup{\rm CN}$}_{k_{j}} =\mbox{$\sup{\rm CN}$}(P_{k_{j}}(K),L(G_{k_{j}}),A_{k_{j},u},Q_{k _{j}})\] (1)
\[\mbox{$\sup{\rm CN}$}_{i+k_{j}} =\mbox{$\sup{\rm CN}$}(P_{i+k_{j}}(K),L(G_{i})\|\mbox{$\sup{\rm CN }$}_{k_{j}},A_{i+k_{j},u},Q_{i+k_{j}})\]
where \(\mbox{$\sup{\rm CN}$}(K,L,A_{u},Q)\) denotes the supremal sublanguage of \(K\) controllable with respect to \(L\) and \(A_{u}\) and normal with respect to \(L\) and the natural projection \(Q\), see [2].
As in the centralized coordination, the following inclusion always holds true.
**Lemma 12**: _For all \(j=1,2,\dots,m\) and for all \(i\in I_{j}\), we have that \(P^{i+k_{j}}_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\subseteq\mbox{$\sup{\rm CN }$}_{k_{j}}\)._
The proof follows immediately from the definition of \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\). Indeed, we have that \(P^{i+k_{j}}_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\subseteq\mbox{$\sup{\rm CN }$}_{k_{j}}\), because \(\mbox{$\sup{\rm CN}$}_{k_{j}}\) is part of the plant language of \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\) over the alphabet \(A_{k_{j}}\). We recall the notation for the closed-loop corresponding to group \(I_{j}\), i.e. \(\mbox{$\sup{\rm cCN}$}_{j}=\ \parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$}_{i+k_ {j}}\) for \(j=1,2,\dots,m\). The main result of [10] is now recalled.
**Theorem 13** ([10]): _Consider Problem 9 and the languages defined in (1). For \(j=1,2,\dots,m\) and \(i\in I_{j}\), let the languages \(P^{i+k_{j}}_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\) be controllable with respect to \(L(G_{k_{j}})\) and \(A_{k_{j},u}\), and normal with respect to \(L(G_{k_{j}})\) and \(Q_{k_{j}}\), and let \(P_{k}^{I_{j}}(\mbox{$\sup{\rm cCN}$}_{j})\) be controllable with respect to \(L(G_{k})\) and \(A_{k,u}\), and normal with respect to \(L(G_{k})\) and \(Q_{k}\). Then_
\[\mbox{$\sup{\rm mcCN}$}(K,L,A,Q)=\ \parallel_{j=1}^{m}\parallel_{i\in I_{j}} \mbox{$\sup{\rm CN}$}_{i+k_{j}}\,.\]
## IV Combined Approach to Multilevel Coordination Control of Modular DES
Recently, we have proposed two different constructive approaches to multilevel supervisory control: bottom-up [7] and top-down [4]. The bottom-up approach relies only on the original notions of conditional decomposability and conditional controllability of the specification language, while the top-down approach requires the specification to be conditionally decomposable and conditionally controllable with respect to the multilevel architecture. In the top-down approach, the specification is decomposed a priori in the top-down manner: first with respect to the high-level coordinator alphabet and then with respect to the group coordinators for all low-level groups of subsystems. The advantage of the top-down approach is that, for prefix-closed specifications, the computation at the lowest level consists in constructing supervisors for individual subsystems and no further computation at the higher level is needed.
However, the least restrictive supervisors can only be computed under some conditions. We have presented in [4] the sufficient conditions for distributed computation of full observation supervisors yielding the maximally permissive solution in the three-level hierarchical control architecture. This condition has been generalized in [10] in two directions: to partial observations and to weaker sufficient conditions for the distributed computation of local supervisors assisted by coordinators. These weaker sufficient conditions are homogeneous, i.e., they are both formulated in terms of controllability and normality for both hierarchical interfaces: between the low level and the middle level and between the middle level and the top level.
In this section all languages are assumed to be prefix-closed. In the general case with non-prefix-closed specifications, the individual supervisors of the groups can be conflicting and also the group supervisors on the higher level might be conflicting. Therefore, additional coordinators for nonblocking should be constructed at all levels, which is presented in the next section.
To conclude, the main drawback of the top-down approach is its lack of generality: the blocking issue and the restrictive conditions required for a distributed computation of the maximally permissive solution, namely the supremal three-level conditionally controllable sublanguage.
In this paper, we propose a combined approach that can be described as a top-down decomposition followed by a bottom-up computation. The proposed approach combines the strong features of both approaches, namely the low complexity of the top-down approach and the generality of the bottom-up approach, which enables effective synthesis both of a posteriori supervisors, enforcing the sufficient conditions for a distributed computation of supervisors, and of coordinators for nonblocking.
It is then natural to impose controllability and normality of the low-level supervisors with respect to the group coordinators and also controllability and normality of the group supervisors with respect to the high-level coordinator at the very top level. In this paper, we will show that these supervisors can be synthesized in the bottom-up manner, i.e., we start with the supervisors on the coordinator alphabets of each low-level group.
In the case that controllability of the projected low-level supervisors with respect to the group coordinators and/or controllability of projected group supervisors with respect to the top coordinator from Theorem 13 do not hold, a posteriori supervisors on respective coordinator alphabets can be synthesized to make these conditions hold.
We will show that both a posteriori supervisors and coordinators for nonblocking can be computed in the bottom-up manner. This is the main message of this paper: first, we perform a top-down design of coordinators based on two-level decomposition of the specification and this top-down design is followed by a bottom-up computation of a posteriori supervisors and coordinators for nonblocking.
It is easy to show that the language \(\parallel_{j=1}^{m}\parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$}_{i+k_{j}}\) of Theorem 13 further restricted by a posteriori supervisors will always satisfy all controllability and normality conditions required in Theorem 13. It appears that the controllability and normality conditions on the low-level coordinator alphabets and on the high-level coordinator alphabet can be imposed by a posteriori supervisors defined as follows.
We first compute a posteriori supervisors on the low-level coordinator alphabets \(A_{k_{j}}\), \(j=1,2,\dots,m\), by
\[\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\cap_{i\in I_{j}}\mbox{$\sup{\rm CN} $}(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}}),L(G_{k_{j}}),A_{k_{j},u},Q_{k_{j }}).\] (2)
This supervisor will guarantee controllability and normality with respect to the group coordinators, as required in Theorem 13. It should be noticed that
\[\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\mbox{$\sup{\rm CN}$}(P_{k_{j}}( \parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$}_{i+k_{j}}),L(G_{k_{j}}),A_{k_{j},u },Q_{k_{j}}),\] (3)
but the former distributed form is more suitable for the computation of the a posteriori supervisors \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) on the group coordinator alphabets for obvious complexity reasons. Otherwise stated, the a posteriori supervisors can be distributed, and their role consists simply in replacing the local supervisors for the individual subsystems \(G_{i}\) at the lowest level, \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\), by
\[\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\cap_{i^{\prime}\in I_{j}}\mbox{$\sup{\rm CN}$}(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}}),L(G_{k_{j}}),A_{k_{j},u},Q_{k_{j}})\,.\] (4)
Moreover, we show in Theorem 14 that the restriction induced by this supervisor does not alter maximal permissiveness. Then we compute the a posteriori supervisor on the high-level coordinator alphabet by
\[\widetilde{\mbox{$\sup{\rm CN}$}_{k}}=\mbox{$\sup{\rm CN}$}(P_{k}(\parallel_{j =1}^{m}\mbox{$\sup{\rm cCN}$}_{j}),L(G_{k}),A_{k,u},Q_{k}),\]
where \(\mbox{$\sup{\rm cCN}$}_{j}=\parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$}_{i+k_{j }}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) is the resulting group supervisor. The supervisor \(\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) will guarantee controllability and normality with respect to the high-level coordinator \(L(G_{k})\).
Note that it is easy to see that \(\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) can be computed in the modular way as follows:
\[\widetilde{\mbox{$\sup{\rm CN}$}_{k}} =\parallel_{j=1}^{m}\mbox{$\sup{\rm CN}$}(P_{k}(\mbox{$\sup{\rm cCN }$}_{j}),L(G_{k}),A_{k,u},Q_{k})\] (5)
This is a very special case of modular control with multiple prefix-closed specifications [12] for a single plant \(G_{k}\). Since all languages involved are prefix-closed, the languages in the intersection are trivially nonconflicting, which is required for normality and controllability to be preserved under intersection.
It can be shown that the language of Theorem 13 further restricted by these supervisors will always satisfy all controllability and normality conditions required in Theorem 13. Somewhat surprisingly, it can be shown that these a posteriori supervisors do not alter another important property: supremality. The result below shows that the solution is still minimally restrictive with respect to our three-level coordination control architecture, which is formally shown in the second inclusion of the proof.
**Theorem 14**: _Consider the setting of Theorem 13. Then_
\[\mbox{$\sup{\rm mcCN}$}(K,L,A,Q)\\ =(\parallel_{j=1}^{m}(\parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$} _{i+k_{j}})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}})\parallel \widetilde{\mbox{$\sup{\rm CN}$}_{k}}\]
_where a posteriori supervisors \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) and \(\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) are defined in equations (2) and (5), respectively._
For simplicity, denote \(\mbox{$\sup{\rm mcCN}$}(K,L,A,Q)=\mbox{$\sup{\rm mcCN}$}\), and let us use the notation
\[M_{j}=\mbox{$\sup{\rm cCN}$}_{j}=\ \parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$} _{i+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\]
for the resulting language of the (centralized) coordination control for each group \(I_{j}\), \(j=1,2,\dots,m\). We denote
\[M=\ \parallel_{j=1}^{m}M_{j}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\,.\]
Hence, we need to show that \(\mbox{$\sup{\rm mcCN}$}=M\).
In order to show the inclusion \(M\subseteq\mbox{$\sup{\rm mcCN}$}\), it suffices to prove that \(M\) is three-level conditionally controllable and conditionally normal with respect to \(G_{i}\), \(i\in I_{\ell}\), and \(G_{k_{\ell}}\), for \(\ell=1,2,\dots,m\). Then, since both \(M\) and \(\sup{\rm mcCN}\) are sublanguages of \(K\), and \(\sup{\rm mcCN}\) is the supremal one having these properties, it will follow that \(M\subseteq\mbox{$\sup{\rm mcCN}$}\).
For item 1 of both three-level conditional controllability and conditional normality, we show that, for any \(j=1,2,\dots,m\), \(M_{j}\) is conditionally controllable and conditionally normal with respect to \(G_{i}\), \(i\in I_{j}\), \(L(G_{k_{j}})\), \(A_{k_{j},u}\), and \(Q_{k_{j}}\). First, note that
\[P_{k_{j}}(M) =P_{k_{j}}(\parallel_{\ell=1}^{m}M_{\ell}\parallel\widetilde{ \mbox{$\sup{\rm CN}$}_{k}})=\]
\[P_{k_{j}}(M_{j})\parallel\parallel_{\ell=1,2,\dots,m}^{\ell\neq j }P_{k_{j}}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\]
because \(A_{k_{j}}\supseteq A_{k}\) and \(A_{k_{j}}\) contains all shared events in the composition. Moreover, \(P_{k_{j}}(M_{j})=P_{k_{j}}(\|_{i\in I_{j}}\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}})=\cap_{i\in I_{j}}P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\cap\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\), because of Lemma 4 and the fact that \(A_{k_{j}}\) contains all shared events of the subsystems of the group \(I_{j}\).
It is then easy to see that \(P_{k_{j}}(M_{j})=\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) is controllable and normal with respect to \(L(G_{k_{j}})\), \(A_{k_{j},u}\) and \(Q_{k_{j}}\). We now show \(M_{j}\), \(j=1,2,\dots,m\), are conditionally controllable and conditionally normal with respect to their groups \(G_{i}\), \(i\in I_{j}\), and \(G_{k_{j}}\).
Since the distributivity holds due to Lemma 4, \(P_{i+k_{j}}(M_{j})=P_{i+k_{j}}(\|_{i^{\prime}\in I_{j}}\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}})=\|_{i^{\prime}\in I_{j}}P_{i+k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\|_{i^{\prime}\in I_{j}}^{i^{\prime}\neq i}P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\), where the last equality uses that \(P_{i+k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}})=P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}})\) for \(i^{\prime}\neq i\). Observe that
\[P_{i+k_{j}}(M_{j})=\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel P_{k_{j}}(M_{j})\,,\]
since \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel P_{k_{j}}(\|_{i^{\prime}\in I_{j}} \mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN }$}_{k_{j}}})=\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\|_{i^{\prime}\in I_{j}} P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j}})\parallel\widetilde{\mbox{$ \sup{\rm CN}$}_{k_{j}}}=\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\|_{i^{\prime} \in I_{j}}^{i\neq i^{\prime}}P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i^{\prime}+k_{j} })\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=P_{i+k_{j}}(M_{j})\).
Therefore, by Lemma 2, \(P_{i+k_{j}}(M_{j})\) is controllable and normal with respect to \([L(G_{i})\parallel\mbox{$\sup{\rm CN}$}_{k_{j}}]\parallel P_{k_{j}}(M_{j})=L(G _{i})\parallel P_{k_{j}}(M_{j})\), where the last equality is by the fact that \(P_{k_{j}}(M_{j})\subseteq\mbox{$\sup{\rm CN}$}_{k_{j}}\), for any \(j=1,2,\dots,m\) and \(i\in I_{j}\).
Altogether, \(M_{j}\), \(j=1,2,\dots,m\), are conditionally controllable and conditionally normal with respect to their groups \(G_{i}\), \(i\in I_{j}\), and \(G_{k_{j}}\).
Furthermore, for \(\ell=1,2,\dots,m\), \(\ell\neq j\),
\[P_{k_{j}}(M_{\ell})=P_{k}(M_{\ell})\,,\] (6)
because \(M_{\ell}\subseteq A_{I_{\ell}}^{*}\), \(A_{k}\subseteq A_{k_{j}}\subseteq A_{I_{j}}\cup A_{k}\), \(A_{I_{j}}\cap A_{I_{\ell}}\subseteq A_{k}\), whence \(A_{k_{j}}\cap A_{I_{\ell}}=A_{k}\cap A_{I_{\ell}}\).
Now, we have \(P_{k}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}=P_{k}(\parallel_{i\in I_{\ell}}\mbox{$\sup{\rm CN}$}_{i+k_{\ell}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{\ell}}})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}=\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\).
This is because
\[\widetilde{\mbox{$\sup{\rm CN}$}_{k}} =\parallel_{j=1}^{m}\mbox{$\sup{\rm CN}$}(P_{k}(\mbox{$\sup{\rm cCN }$}_{j}),L(G_{k}),A_{k,u},Q_{k})\] (7)
\[=\parallel_{j=1}^{m}\mbox{$\sup{\rm CN}$}(P_{k}(\parallel_{i\in I _{j}}\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}),L(G_{k}),A_{k,u},Q_{k})\] (8)
\[=\parallel_{j=1}^{m}\cap_{i\in I_{j}}\mbox{$\sup{\rm CN}$}(P_{k}( \widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}),L(G_{k}),A_{k,u},Q_{k})\,.\]
Therefore, \(P_{k}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}=\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) is controllable and normal with respect to \(L(G_{k})\), \(A_{k,u}\), and \(Q_{k}\), for \(\ell=1,2,\dots,m\).
Altogether, in accordance with Lemma 2, we obtain that \(P_{k_{j}}(M)=\ \parallel_{\ell=1}^{m}P_{k_{j}}(M_{\ell})\parallel P_{k_{j}}(\widetilde{\mbox{$\sup{\rm CN}$}_{k}})=P_{k_{j}}(M_{j})\parallel\parallel_{\ell=1,2,\dots,m}^{\ell\neq j}P_{k_{j}}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) is controllable and normal with respect to \(L(G_{k_{j}})\parallel L(G_{k})\). We recall that \(L(G_{k_{j}})\parallel L(G_{k})=L(G_{k_{j}})\). Therefore, \(P_{k_{j}}(M)\) is controllable with respect to \(L(G_{k_{j}})\) and \(A_{k_{j},u}\), and normal with respect to \(L(G_{k_{j}})\) and \(Q_{k_{j}}\). This shows item 1 of both three-level conditional controllability and conditional normality.
In order to show item 2 of three-level conditional controllability and conditional normality, it must be shown that \(P_{i+k_{j}}(M)=P_{i+k_{j}}(\parallel_{\ell=1}^{m}M_{\ell}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}})\) is controllable with respect to \(L(G_{i})\parallel P_{k_{j}}(M)\) and \(A_{i+k_{j},u}\), and normal with respect to \(L(G_{i})\parallel P_{k_{j}}(M)\) and \(Q_{i+k_{j}}\). Note that \(P_{k_{j}}(M)=P_{k_{j}}(M_{j})\parallel\parallel_{\ell=1,2,\dots,m}^{\ell\neq j}P_{k_{j}}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\), because \(A_{k}\subseteq A_{k_{j}}\) implies \(P_{k_{j}}(\widetilde{\mbox{$\sup{\rm CN}$}_{k}})=\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\).
In a similar way as above, we get
\[P_{i+k_{j}}(M) =P_{i+k_{j}}(M_{j})\parallel\parallel_{\ell=1,2,\dots,m}^{\ell \neq j}P_{i+k_{j}}(M_{\ell})\parallel P_{i+k_{j}}(\widetilde{\mbox{$\sup{\rm CN }$}_{k}})\]
\[=P_{i+k_{j}}(M_{j})\parallel\parallel_{\ell=1,2,\dots,m}^{\ell \neq j}P_{k_{j}}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\]
since, for \(j\neq\ell\), \(A_{I_{j}}\cap A_{I_{\ell}}\subseteq A_{k}\subseteq A_{k_{j}}\) fulfills the requirements of Lemma 4, which justifies the first equation. Moreover, it also implies that \(P_{i+k_{j}}(M_{\ell})=P_{k}(M_{\ell})=P_{k_{j}}(M_{\ell})\), see equation (6), which justifies the second equation. Furthermore, we recall from above that \(M_{j}\) are conditionally controllable and conditionally normal with respect to their groups \(G_{i}\), \(i\in I_{j}\), and the group coordinators \(L(G_{k_{j}})\), whence for all \(j=1,2,\dots,m\) and for all \(i\in I_{j}\) we have that \(P_{i+k_{j}}(M_{j})\) are controllable and normal with respect to \(L(G_{i})\parallel P_{k_{j}}(M_{j})\), \(A_{i+k_{j},u}\), and \(Q_{i+k_{j}}\). It is obvious that the languages \(P_{k_{j}}(M_{\ell})\) for \(\ell=1,2,\dots,m\), \(\ell\neq j\), are controllable and normal with respect to themselves. Finally, \(\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) is controllable and normal with respect to itself.
Therefore, according to Lemma 2, \(P_{i+k_{j}}(M)\) is controllable and normal with respect to \([L(G_{i})\parallel P_{k_{j}}(M_{j})]\parallel\|_{\ell=1,2,\dots,m}^{\ell\neq j }P_{k_{j}}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}=L(G_{i}) \parallel\|_{\ell=1}^{m}P_{k_{j}}(M_{\ell})\parallel\widetilde{\mbox{$\sup{\rm CN }$}_{k}}=L(G_{i})\parallel P_{k_{j}}(\|_{\ell=1}^{m}M_{\ell})\parallel \widetilde{\mbox{$\sup{\rm CN}$}_{k}}=L(G_{i})\parallel P_{k_{j}}(M)\), \(A_{i+k_{j},u}\), and \(Q_{i+k_{j}}\), which was to be shown. Note that distributivity \(P_{i+k_{j}}(\parallel_{\ell=1}^{m}M_{\ell})=\ \parallel_{\ell=1}^{m}P_{i+k_{j} }(M_{\ell})\) holds true in accordance with Lemma 4, because \(A_{i+k_{j}}\) contains \(A_{i+k}\) and \(A_{i+k}\) contains all shared events of languages \(P_{i+k_{j}}(M_{\ell})\) over their respective alphabets \(A_{I_{\ell}+i}\), \(\ell=1,2,\dots,m\). More precisely, for \(i\in I_{j}\) we have that
\[A_{I_{\ell}+i}=\left\{\begin{array}[]{ll}A_{I_{j}}&\hbox{if } \ell=j\\ A_{I_{\ell}}&\hbox{otherwise}\end{array}\right.\]
The converse inclusion \(\mbox{$\sup{\rm mcCN}$}\subseteq(\parallel_{j=1}^{m}~{}(\parallel_{i\in I_{j}}~{}\mbox{$\sup{\rm CN}$}_{i+k_{j}})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}})\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) will be proven by showing that, for all \(j=1,2,\dots,m\) and for all \(i\in I_{j}\),
\[P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\,.\] (9)
According to the definition of the synchronous product, Eq. (9) is equivalent to the following three separate inclusions:
* (i) \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq\mbox{$\sup{\rm CN}$}_{i+k_{j}}\)
* (ii) \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq(P_{k_{j}}^{i+k_{j}})^{-1}\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\)
* (iii) \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq(P_{k}^{i+k_{j}})^{-1}\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\)
The first inclusion is not hard to see. Indeed, from the definitions of conditional controllability and conditional normality, \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is controllable and normal with respect to \(L(G_{i})\parallel P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\), \(A_{i+k_{j},u}\), and \(Q_{i+k_{j}}\). Furthermore, \(L(G_{i})\parallel P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is controllable and normal with respect to \(L(G_{i})\parallel\mbox{$\sup{\rm CN}$}_{k_{j}}\), \(A_{i+k_{j},u}\), and \(Q_{i+k_{j}}\), because \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\) being controllable and normal with respect to \(L(G_{k_{j}})\) is also controllable and normal with respect to the smaller language \(\mbox{$\sup{\rm CN}$}_{k_{j}}\subseteq L(G_{k_{j}})\). Therefore, using transitivity of controllability and normality (Lemma 3), \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is controllable and normal with respect to \(L(G_{i})\parallel\mbox{$\sup{\rm CN}$}_{k_{j}}\), \(A_{i+k_{j},u}\) and \(Q_{i+k_{j}}\). Since, in addition, \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq P_{i+k_{j}}(K)\) and \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\) is the supremal sublanguage of \(P_{i+k_{j}}(K)\) with these properties, inclusion (i) follows.
The proof of the other two inclusions is more involved. First, note that (ii) is equivalent to the inclusion \(P^{i+k_{j}}_{k_{j}}P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\), and that \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})=P^{i+k_{j}}_{k_{j}}P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\). Hence, it is equivalent to the inclusion \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\). We recall at this point that \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\ \parallel_{i\in I_{j}}\mbox{$\sup{\rm CN}$}(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}}),L(G_{k_{j}}),A_{k_{j},u},Q_{k_{j}})\). By the definition of the three-level conditionally controllable and normal languages, \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is controllable with respect to \(L(G_{k_{j}})\) and normal with respect to \(L(G_{k_{j}})\) and \(Q_{k_{j}}\). Clearly, \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq P_{k_{j}}(K)\). Now, \(\mbox{$\sup{\rm CN}$}(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}}),L(G_{k_{j}}),A_{k_{j},u},Q_{k_{j}})\) is the supremal sublanguage of \(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\) that is controllable and normal with respect to \(L(G_{k_{j}})\) and \(Q_{k_{j}}\). Hence, we obtain that \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) provided \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is also a sublanguage of \(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\). Thus, it remains to show that \(P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}})\). However, it holds that \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\subseteq\mbox{$\sup{\rm CN}$}_{i+k_{j}}\), because \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is, by definition of the three-level conditionally controllable and normal languages, a sublanguage of \(P_{i+k_{j}}(K)\) that is controllable and normal with respect to \(L(G_{i})\parallel P_{k_{j}}(\mbox{$\sup{\rm mcCN}$})\) and \(Q_{i+k_{j}}\), i.e., it is by transitivity of Lemma 3 (and the fact that the synchronous product preserves both controllability and normality for nonconflicting languages) controllable and normal with respect to \(L(G_{i})\parallel L(G_{k_{j}})\) and \(Q_{i+k_{j}}\). Since \(\mbox{$\sup{\rm CN}$}_{k_{j}}\subseteq L(G_{k_{j}})\), we obtain that \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\) is controllable and normal with respect to \(L(G_{i})\parallel\mbox{$\sup{\rm CN}$}_{k_{j}}\) and \(Q_{i+k_{j}}\). Therefore, \(P_{i+k_{j}}(\mbox{$\sup{\rm mcCN}$})\) has to be included in \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\), which is the supremal sublanguage of \(P_{i+k_{j}}(K)\) that is controllable and normal with respect to \(L(G_{i})\parallel\mbox{$\sup{\rm CN}$}_{k_{j}}\) and \(Q_{i+k_{j}}\).
Finally, inclusion (iii) can be shown using the same arguments as in (ii).
## V General Case: A Posteriori Supervisors Combined with Coordinators for Nonblocking
In the previous section we have shown that a posteriori supervisors enable us to compute maximally permissive supervisors for our three-level coordination control architecture whenever there is no problem with blocking, e.g., in the prefix-closed case. It is clear from Theorem 14 that first the a posteriori supervisors on the group coordinator alphabets \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) are computed and then the a posteriori supervisor \(\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) on the high-level coordinator alphabet is computed. Otherwise stated, the computation of the a posteriori supervisors goes in the bottom-up way. The computation of these supervisors is necessary for obtaining the maximally permissive solution, i.e., the supremal three-level conditionally controllable and conditionally-normal sublanguage of the specification if the sufficient condition of Theorem 13 does not hold.
In the general case, local supervisors \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\), \(i\in I_{j}\), for at least one group \(I_{j}\), \(j=1,2,\dots,m\), are conflicting and/or the resulting group supervisors at the higher level are conflicting. This issue can be solved by computing coordinators for nonblockingness that we have presented in [8] for the basic coordination control architecture with a single (centralized) coordinator that can now be qualified as the two-level coordination control architecture.
It appears then natural to combine the bottom-up computation of a posteriori supervisors with the bottom-up computation of coordinators for nonblockingness, which is proposed in this section. First of all, it should be noted that, unlike the prefix-closed case, we do not have a general distributed procedure to compute the supremal conditionally controllable and normal languages. We have shown in [8] that, for the two-level coordination control architecture, the maximally permissive solutions for non-prefix-closed languages can be computed in a similar distributed way if the optimal supervisor for the coordinator is included in the optimal local supervisors projected to the coordinator alphabet: \(\mbox{$\sup{\rm C}$}_{k}\subseteq P_{k}(\mbox{$\sup{\rm C}$}_{i+k})\) for all local supervisors \(i\). We recall at this point that the opposite inclusion is always true and if the equality \(\mbox{$\sup{\rm C}$}_{k}\subseteq P_{k}(\mbox{$\sup{\rm C}$}_{i+k})\) does not hold, one may still compute local supervisors \(\mbox{$\sup{\rm C}$}_{i+k}\) as described in [8], but the maximal permissiveness cannot be guaranteed.
Moreover, the typical issue with non-prefix-closed languages is that the local supervisors \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\), \(i\in I_{j}\), after the application of the a posteriori group supervisors \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\) are in general conflicting, which corresponds to the blocking case. Let us recall that the a posteriori group supervisors for groups \(j=1,2,\dots,m\) are computed as follows, cf. Eqs. (2) and (3): \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\cap_{i\in I_{j}}\mbox{$\sup{\rm CN}$}(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}}),L(G_{k_{j}}),A_{k_{j},u},Q_{k_{j}})\). We propose to apply Theorem 7 to all groups \(j=1,2,\dots,m\) for which the languages \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\), \(i\in I_{j}\), denoted by \(\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}\), are conflicting. Namely, we need to extend the alphabets \(A_{k_{j}}\) so that the observer conditions of Theorem 7 are met, i.e., so that \(P_{k_{j}}:(A_{i+k_{j}})^{*}\to(A_{k_{j}})^{*}\) is an \(\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}\)-observer for all \(i\in I_{j}\).
The group coordinators for nonblockingness can now be computed as follows
\[C_{k_{j}} =\mbox{$\sup{\rm CN}$}(\parallel_{i\in I_{j}}P_{k_{j}}(\widetilde {\mbox{$\sup{\rm CN}$}_{i+k_{j}}}),\] (10)
\[\parallel_{i\in I_{j}}\overline{P_{k_{j}}(\widetilde{\mbox{$\sup{ \rm CN}$}_{i+k_{j}}})},A_{k_{j},u},Q_{k_{j}})\,.\]
This means that the final nonblocking supervisor for the group \(j\in\{1,\dots,m\}\) is given by \(\parallel_{i\in I_{j}}\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}\parallel C_{ k_{j}}\) and we denote it by \(N_{j}\).
Similarly to the situation within the low-level groups, it may happen for a specification \(K\) that is not prefix-closed that the languages resulting from the group supervisors \(N_{j}\subseteq A_{I_{j}}^{*}\), \(j=1,2,\dots,m\), are conflicting, thus leading to blocking. Then Theorem 7 can be used again: we extend the high-level coordinator alphabet \(A_{k}\) so that the observer conditions of Theorem 7 are satisfied. A high-level coordinator for nonblockingness is then defined by
\[C_{k}=\mbox{$\sup{\rm CN}$}(\parallel_{j=1}^{m}P_{k}(N_{j}),\parallel_{j=1}^{m }\overline{P_{k}(N_{j})},A_{k,u},Q_{k})\,,\] (11)
where \(A_{k}\) is the extension of the original (safety) high-level coordinator alphabet such that \(P_{k}:A_{I_{j}}^{*}\to(A_{I_{j}}\cap A_{k})^{*}\) is an \(N_{j}\)-observer for all \(j=1,\dots,m\).
Now we are ready to formally propose the combined approach consisting in the following top-down design of coordinators followed by the bottom-up computations of a posteriori supervisors and coordinators for nonblockingness.
The combined approach is formalized in Procedure 2 below; a schematic code sketch of the procedure is given after the listing. The organization of the subsystems into a hierarchical structure with low-level groups is assumed to be given.
1. Extend the shared alphabet \(A_{sh}\) to high-level coordinator alphabet \(A_{k}\supseteq A_{sh}\) such that \(K=\ \parallel_{r=1}^{m}P_{I_{r}+k}(K)\).
2. Construct the high-level coordinator \(G_{k}=P_{k}(\parallel_{r=1}^{m}L_{r}^{hi})\) and set \(L_{k}=L(G_{k})\).
3. For all low-level groups \(I_{j}\), \(j=1,2,\dots,m\), extend the shared event sets of groups \(A_{sh,j}\) to low-level coordinator alphabets \(A_{k_{j}}\supseteq A_{sh,j}\) so that \(P_{I_{j}+k}(K)=\ \parallel_{i\in I_{j}}P_{i+k_{j}}(K)\).
4. Construct the coordinators for low-level groups, that is, \(G_{k_{j}}=\|_{\ell\in I_{j}}P_{k_{j}}(G_{\ell})\) and set \(L_{k_{j}}=L(G_{k_{j}})\).
5. Compute the supervisors \(\mbox{$\sup{\rm CN}$}_{k_{j}}=\mbox{$\sup{\rm CN}$}(P_{k_{j}}(K),\)\(L(G_{k_{j}}),A_{k_{j},u},Q_{k_{j}})\) for group coordinators \(L(G_{k_{j}})\), \(j=1,2,\dots,m\).
6. Compute supervisors \(\mbox{$\sup{\rm CN}$}_{i+k_{j}}=\ \mbox{$\sup{\rm CN}$}(P_{i+k_{j}}(K),\)\(L(G_{i})\parallel\mbox{$\sup{\rm CN}$}_{k_{j}},A_{i+k_{j},u},Q_{i+k_{j}})\) for subsystems \(i\in I_{j}\) and for all groups \(I_{j}\), \(j=1,2,\dots,m\).
7. Compute the a posteriori supervisors \(\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}=\ \cap_{i\in I_{j}}\ \mbox{$\sup{\rm CN }$}(P_{k_{j}}(\mbox{$\sup{\rm CN}$}_{i+k_{j}}),L(G_{k_{j}}),A_{k_{j},u},Q_{k_{ j}})\) for all groups \(j=1,2,\dots,m\).
8. For all groups \(j\in\{1,2,\dots,m\}\) such that the languages \(\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}:=\mbox{$\sup{\rm CN}$}_{i+k_{j}}\|\widetilde{\mbox{$\sup{\rm CN}$}_{k_{j}}}\), for \(i\in I_{j}\), are conflicting (cf. Eq. (4)), compute the group coordinators for nonblockingness using Eq. (10), that is, \(C_{k_{j}}=\mbox{$\sup{\rm CN}$}(\parallel_{i\in I_{j}}P_{k_{j}}(\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}),\parallel_{i\in I_{j}}\overline{P_{k_{j}}(\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}})},A_{k_{j},u},Q_{k_{j}})\), and set \(C_{k_{j}}=A_{k_{j}}^{*}\) for all groups in which the languages \(\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}\) are nonconflicting. Then the language \(N_{j}=\ \parallel_{i\in I_{j}}\widetilde{\mbox{$\sup{\rm CN}$}_{i+k_{j}}}\parallel C_{k_{j}}\) is the resulting nonblocking supervisor for the group \(j\).
9. Compute the a posteriori supervisor \(\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) at the high-level (cf. Eq. 5).
10. If the languages \(N_{j}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) are conflicting, then compute the high-level coordinator for nonblocking \(C_{k}\) using Eq. (11), i.e. \(C_{k}=\mbox{$\sup{\rm CN}$}(\parallel_{j=1}^{m}P_{k}(N_{j}),\parallel_{j=1}^{m }\overline{P_{k}(N_{j})},A_{k,u},Q_{k})\) and set \(C_{k}=A_{k}^{*}\) if the languages \(N_{j}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\) are not conflicting.
11. Set \(N_{j}\parallel\widetilde{\mbox{$\sup{\rm CN}$}_{k}}\parallel C_{k}\) as the final closed-loop of the three-level coordination control based on the combined approach.
**Procedure 2** The combined approach
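The following Python skeleton mirrors the computational core of Procedure 2 (steps 5–11) to make its bottom-up control flow explicit. It is a schematic sketch only: the coordinators and their alphabets are assumed to have been designed top-down (steps 1–4), the uncontrollable-event sets and observation projections are omitted from the call signatures for readability, and all language/automaton operations are hypothetical callables collected in `ops` (assumed to be supplied by the user's supervisory-control library); none of the underlying algorithms are implemented here.

```python
def bottom_up_computation(K, groups, G_k, G_kj, A_kj, A_k, ops):
    """Schematic sketch of steps 5-11 of Procedure 2 (hypothetical `ops` API)."""
    N, group_loops = {}, {}
    for j, subsystems in groups.items():
        # Step 5: supervisor supCN_{k_j} for the group coordinator.
        supCN_kj = ops.sup_cn(ops.project(K, A_kj[j]), G_kj[j])

        # Step 6: supervisors supCN_{i+k_j} for the subsystems of the group.
        supCN_i = [ops.sup_cn(ops.project(K, ops.alphabet(G_i) | A_kj[j]),
                              ops.parallel(G_i, supCN_kj))
                   for G_i in subsystems]

        # Step 7: a posteriori group supervisor, Eq. (2).
        tilde_kj = ops.intersection(
            [ops.sup_cn(ops.project(S, A_kj[j]), G_kj[j]) for S in supCN_i])

        # Step 8: restricted local supervisors and, if they conflict, the group
        # coordinator for nonblockingness, Eq. (10); the alphabet A_{k_j} is
        # assumed to have been extended so that the observer condition holds.
        restricted = [ops.parallel(S, tilde_kj) for S in supCN_i]
        projected = [ops.project(S, A_kj[j]) for S in restricted]
        if ops.conflicting(restricted):
            C_kj = ops.sup_cn(ops.parallel_all(projected),
                              ops.parallel_all([ops.prefix_closure(L) for L in projected]))
        else:
            C_kj = ops.all_strings(A_kj[j])
        group_loops[j] = ops.parallel_all(restricted)
        N[j] = ops.parallel(group_loops[j], C_kj)

    # Step 9: a posteriori supervisor on the high-level coordinator alphabet, Eq. (5).
    tilde_k = ops.parallel_all(
        [ops.sup_cn(ops.project(group_loops[j], A_k), G_k) for j in groups])

    # Steps 10-11: high-level coordinator for nonblockingness, Eq. (11),
    # and the final closed loop of the combined approach.
    closed = [ops.parallel(N[j], tilde_k) for j in groups]
    if ops.conflicting(closed):
        proj_hi = [ops.project(N[j], A_k) for j in groups]
        C_k = ops.sup_cn(ops.parallel_all(proj_hi),
                         ops.parallel_all([ops.prefix_closure(L) for L in proj_hi]))
    else:
        C_k = ops.all_strings(A_k)
    return ops.parallel_all(closed + [C_k])
```

In the prefix-closed case the two conflict tests never trigger (prefix-closed languages are nonconflicting) and the returned language coincides with the supremal solution of Theorem 14; in the general case the returned supervisor is safe and nonblocking but possibly not maximally permissive, as discussed next.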
We have shown in the previous sections that, for prefix-closed languages, Procedure 2 yields the supremal three-level conditionally controllable and conditionally normal sublanguage of \(K\). In the general case this cannot be guaranteed; however, we still obtain a distributed and hierarchical (sometimes referred to as heterarchical) way to compute a safe (although possibly not maximally permissive) and nonblocking supervisor.
We note that the computational complexity of all steps in Procedure 2 is polynomial in fairly small parameters (the numbers of states and events of the subsystems combined with the coordinators), provided the projections to all coordinator alphabets satisfy the observer condition, in which case there is no problem with a possibly exponential size of the projected generators, and these are guaranteed to be smaller than the non-projected generators.
## VI Concluding remarks
We proposed a new general approach to coordination control of DES with partial observations. The approach combines the advantages of both the top-down and the bottom-up approaches proposed earlier. It consists in a top-down computation of coordinators (first a high-level coordinator is computed and then the group coordinators are computed) followed by the computation of supervisors at the lowest level (for individual subsystems) and, finally, the a posteriori supervisors and coordinators for nonblockingness are computed in a bottom-up manner.
The main advantage of the approach is that it combines the main advantage of the top-down approach, namely the possibility to compute local supervisors only for the individual subsystems, with the generality offered by the bottom-up approach, which in particular makes it possible to drop the restrictive conditions needed to compute maximally permissive solutions in a distributed way and to drop the nonconflictingness assumptions, owing to the bottom-up computation of coordinators for nonblockingness. In the near future, we plan to apply the combined approach to discrete-event models of large-scale systems stemming from manufacturing and traffic systems. We recall that recently a weaker condition than normality, called relative observability, was proposed for monolithic partially observed DES, cf. [1]. It is possible to introduce a distributed version of relative observability, called conditional relative observability [9], and use it in our multilevel architecture instead of normality.
## VII Acknowledgments
The authors thank S. Lafortune and F. Lin for a fruitful discussion. The research was supported by RVO 67985840, by the Czech Ministry of Education in project MUSIC (grant LH13012), and by the DFG in project DIAMOND (Emmy Noether grant KR 4381/1-1).
## References
* [1] K. Cai, R. Zhang, and W. M. Wonham. On relative observability of discrete-event systems. In _Proc. of IEEE CDC 2013_, pages 7285–7290, Florence, Italy, 2013.
* [2] C. G. Cassandras and S. Lafortune. _Introduction to discrete event systems_. Springer, second edition, 2008.
* [3] L. Feng. _Computationally Efficient Supervisor Design for Discrete-Event Systems_. PhD thesis, University of Toronto, 2007.
* [4] J. Komenda and T. Masopust. Decentralized supervisory control with communicating supervisors based on top-down coordination control. In _Proc. of IEEE CDC 2014_, pages 5149–5155, Los Angeles, CA, USA, 2014.
* [5] J. Komenda, T. Masopust, and J. H. van Schuppen. On conditional decomposability. _Systems Control Lett._, 61(12):1260–1268, 2012.
* [6] J. Komenda, T. Masopust, and J. H. van Schuppen. Multilevel coordination control of modular DES. In _Proc. of IEEE CDC 2013_, pages 6323–6328, Florence, Italy, 2013.
* [7] J. Komenda, T. Masopust, and J. H. van Schuppen. Bottom-up approach to multilevel supervisory control with coordination. In _Proc. of ECC 2014_, pages 2715–2720, Strasbourg, France, 2014.
* [8] J. Komenda, T. Masopust, and J. H. van Schuppen. Coordination control of discrete-event systems revisited. _Discrete Event Dyn. Syst._, 2014. To appear.
* [9] J. Komenda, T. Masopust, and J. H. van Schuppen. A note on relative observability in coordination control. _ArXiv.org, CoRR_, abs/1404.2195, 2014.
* [10] J. Komenda, T. Masopust, and J. H. van Schuppen. Multilevel coordination control of partially observed modular DES. In _Proc. of ACC 2015_, Chicago, USA, 2015. Accepted.
* [11] J. Komenda and J. H. van Schuppen. Coordination control of discrete-event systems. In _Proc. of WODES 2008_, pages 9–15, Göteborg, Sweden, 2008.
* [12] J. Komenda and J. H. van Schuppen. Modular control of discrete-event systems with coalgebra. _IEEE Trans. Autom. Control_, 53(2):447–460, 2008.
* [13] K. C. Wong and W. M. Wonham. Hierarchical control of discrete-event systems. _Discrete Event Dyn. Syst._, 6(3):241–273, 1996.
|
1511.01830 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 68939,
"num_imgs": 11,
"llama3_tokens_count": 16607
} | [
"content_image/1511.01830/x1.png",
"content_image/1511.01830/x3.png",
"content_image/1511.01830/x4.png",
"content_image/1511.01830/x5.png",
"content_image/1511.01830/x6.png",
"content_image/1511.01830/x7.png",
"content_image/1511.01830/stopwords.png",
"content_image/1511.01830/stopwords.png",
"content_image/1511.01830/x8.png",
"content_image/1511.01830/connected_components.png",
"content_image/1511.01830/x9.png"
] | # Prediction and Characterization of High-Activity Events in Social Media Triggered by Real-World News
Janani Kalyanam¹,*, Mauricio Quezada², Barbara Poblete², Gert Lanckriet¹
\({}^{1}\) **Department of Electrical and Computer Engineering**
**University of California, San Diego, California, U.S.A.**
\({}^{2}\) **Department of Computer Science**
**University of Chile, Santiago, Chile**
\* **jkalyana@ucsd.edu**
## Abstract
On-line social networks publish information on a high volume of real-world events almost instantly, becoming a primary source for breaking news. Some of these real-world events can end up having a very strong impact on on-line social networks. The effect of such events can be analyzed from several perspectives, one of them being the intensity and characteristics of the collective activity that they produce in the social platform.
We study 5,234 real-world news events encompassing 43 million messages discussed on the Twitter microblogging service over approximately one year. We show empirically that exogenous news events naturally create collective patterns of bursty behavior in combination with long periods of inactivity in the network. This type of behavior agrees with other patterns previously observed in other types of natural collective phenomena, as well as in individual human communications. In addition, we propose a methodology to classify news events according to the different levels of intensity in activity that they produce. In particular, we analyze the most highly active events and observe a consistent and strikingly different collective reaction from users when they are exposed to such events. This reaction is independent of an event’s reach and scope. We further observe that extremely high-activity events have characteristics that are quite distinguishable at the beginning stages of their outbreak. This allows us to predict, with high precision, the top 8% of events that will have the most impact in the social network by using just the first 5% of the information of an event’s lifetime evolution. This strongly implies that high-activity events are naturally prioritized collectively by the social network, engaging users early on, well before they are brought to the mainstream audience.
## Introduction
Social media is now a primary source of breaking news information for millions of users all over the world [13]. On-line social networks along with mobile internet devices have crowdsourced the task of disseminating real-time information. As a result, both news media and news consumers have become inundated with much more information than they can process. One possible way of handling this data overload, is to find ways to filter and prioritize information that has the potential of creating a strong collective impact. Understanding and quickly identifying the type of reaction that certain exogenous events will produce in on-line social networks, at both global and local scales, can help in the understanding of collective human behavior, as well as improve information delivery, journalistic coverage and crisis management, among other things. We address this challenge by analyzing the properties of real-world news events in on-line social networks, showing that they corroborate patterns previously identified in other case studies of human communications. In addition, we present our main findings of how news events that produce extremely high-activity can be clearly identified in the early stages of their outbreak.
The study of information propagation on the Web has sparked tremendous interest in recent years. Current literature on the subject primarily considers the process through which a _meme_, usually a piece of media (like a video, an image, or a specific Web article), gains popularity [4, 20, 14, 22, 18, 1, 15, 16]. However, a meme represents a simple information unit and its propagation behavior does not necessarily correspond to that of more complex information such as news events. News events are usually diffused in the network in many different formats, e.g., a particular news story such as an _earthquake in Japan_ can be communicated through images, URLs, tweets, videos, etc. Therefore, current research can benefit from analyzing the effects of more high-level forms of information.
Traditionally, the impact of information in on-line social networks has been measured in relation to the total amount of attention that this subject receives [3, 10, 9, 17, 8]. That is, if a content posted in the network receives votes/comments/shares above a certain threshold it is usually deemed as _viral_ or _popular_. Nevertheless, this notion of popularity or impact will favor only information that produces very large volumes of social media messages. Naturally, global breaking news that has world-wide coverage and that produces a high volume of activity in a short time should be considered as having a strong impact on the network. However, there are other types of events that can produce a similar reaction in smaller on-line communities such as, for example, on users from a particular country (e.g., the withdrawal of the main right wing presidential candidate in Chile due to psychiatric problems, just before elections [24]). Clearly, events of local scope do not produce as much social media activity as events of global scope, but they can create a strong and immediate reaction from users in local networks [5]. Conversely, there are large events which do not produce an intense reaction, such as _The Oscars_ (Fig. 0(b)), which span a long period of time and are discussed by social network users for weeks or even months, but do not spark intense user activity. Therefore, it is reasonable to consider additional dimensions, than just volume, when analyzing the impact of information in on-line communities.
Prior research has shown that certain types of individual activities, such as communications (studied in email exchanges), work patterns and entertainment, follow a behavior of bursts of rapidly occurring actions followed by long periods of inactivity [2], referred to as temporally inhomogeneous behavior [12]. This type of behavior initially observed in individual activities, has also been observed in relation to other naturally occurring types of collective phenomena in human dynamics similar to processes seen in self-organized criticality [12]. In particular, extremely high-activity bursty behavior seems to also occur in critical situations, observed from the information flow in cell phone networks during emergencies[7]. Although, there is research towards modeling this type of collective behavior [28] in on-line social networks, to the best of our knowledge, it has not yet been analyzed quantitatively.
Our work focuses on high-activity events in social media produced by real-world news, with the following contributions:
1. We introduce a methodology for modeling and classifying events in social media, based on the intensity of the activity that they produce. This methodology is independent of the size and scope of the event, and is an indicator of the impact that the event information had on the social network.
2. We show empirically that real-world news events produce collective patterns of bursty behavior in the social network, in combination with long periods of inactivity. Furthermore, we identify events for which most of their activity is concentrated into very high-activity periods, we call these events _high-activity events_.
3. We determine the existence of unique characteristics that differentiate how high-activity events propagate in the social network.
4. We show that an important portion of high-activity events can be predicted very early in their lifecycle, indicating that this type of information is spontaneously identified and filtered collectively, early on, by social network users.
## Materials and Methods
We define an event as a conglomerate of information that encompasses all of the social media content related to a real-world news occurrence. Using this specification, which considers an event as a complex unit of information, we study the type of collective reaction produced by the event on the social network. In particular, we analyze the intensity or immediacy of the social network’s response. By analyzing the levels of intensity in activity induced by different exogenous events to the network, we are implicitly studying the priority that has been collectively assigned to the event by groups of independent individuals [2, 12].
We characterize an event’s discrete activity dynamics by using _interarrival times_ between consecutive social media messages within an event (e.g., \(d_{i}=t_{i+1}-t_{i}\), where \(d_{i}\) denotes the interarrival time between two consecutive social media messages \(i\) and \(i+1\) that arrived in moments \(t_{i}\) and \(t_{i+1}\), respectively).
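For concreteness, the interarrival times of an event can be computed directly from the (chronologically sorted) timestamps of its messages, as in the following minimal Python sketch; the timestamps shown are hypothetical.

```python
from datetime import datetime

def interarrival_times(timestamps):
    """Return d_i = t_{i+1} - t_i (in seconds) for chronologically sorted messages."""
    ts = sorted(timestamps)
    return [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]

# Hypothetical timestamps of four messages belonging to one event.
example = [
    datetime(2013, 12, 5, 21, 46, 0),
    datetime(2013, 12, 5, 21, 46, 0),
    datetime(2013, 12, 5, 21, 46, 3),
    datetime(2013, 12, 5, 21, 47, 15),
]
print(interarrival_times(example))   # [0.0, 3.0, 72.0]
```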
We introduce a novel vectorial representation based on a _vector quantization of the interarrival time distribution_, which we call _“VQ-event model”_. This model is designed to filter events based on the distribution of the interarrival times between consecutive messages. This approach is inspired by the _codebook-based representation_ from the field of multimedia content analysis, which has been used in audio processing and computer vision [6, 27]. In our proposed approach, our method learns a set of the most representative interarrival times from a large training corpus of events; each one of the representative interarrival times is known as a _codeword_ and the complete learned set is known as the _codebook_ [27]. Each event is then modeled using a vector quantization (VQ) that converts the interarrival times of an event into a discrete set of values, each value corresponding to the closest codeword in the codebook (details in supplementary material). The resulting VQ-event model is then a vector in which each dimension contains the percentage of interarrival times of the event that were assigned a particular codeword in the codebook.
The VQ-event representation is relative to an event’s overall size since the model is normalized with respect to the number of messages in the event. Therefore the only criteria that are considered in the model are the interarrival times of each particular event. This model allows us to group events based on the _similarity of the distribution_ of their interarrival times. In those terms, we consider as high-activity events those events for which the distribution of interarrival times is most heavily skewed towards the smallest possible interval, zero. In other words, events for which the overall activity is extremely intense in comparison with other events.
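The exact construction of the codebook and of the VQ-event vectors is detailed in the supplementary material; the sketch below is only a plausible minimal illustration of the idea, using k-means as the codebook learner, with the codebook size and the training data chosen arbitrarily for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical training corpus: interarrival times (seconds) pooled from many events.
training_interarrivals = rng.exponential(scale=30.0, size=5000)

# Learn a codebook of representative interarrival times (codewords).
n_codewords = 16
codebook = KMeans(n_clusters=n_codewords, n_init=10, random_state=0)
codebook.fit(training_interarrivals.reshape(-1, 1))

def vq_event_model(interarrivals):
    """Quantize an event's interarrival times against the codebook and return
    the normalized histogram over codewords (the VQ-event vector)."""
    codes = codebook.predict(np.asarray(interarrivals, dtype=float).reshape(-1, 1))
    counts = np.bincount(codes, minlength=n_codewords)
    return counts / counts.sum()

# Example: a bursty event (many near-zero interarrival times).
print(vq_event_model([0, 0, 0, 1, 2, 0, 0, 45, 0, 3]))
```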
To illustrate events with different levels of intensity in activity, we present two examples taken from our analysis of Twitter data. These examples show the interarrival time histograms for the entire lifecycle of the two events. In the first example, the majority of the messages about the death of political leader Nelson Mandela (Fig. 1(a)) arrive within almost zero seconds of each other. On the contrary, the messages about The Oscars (Fig. 1(b)) are much more spread out in time.
<figure><img src="content_image/1511.01830/x1.png"><figcaption>(a) User posts about the death of Nelson Mandela arrive almost instantly.</figcaption></figure>
We note that, by using interarrival times to describe the intensity of the activity of an event, we make our analysis independent of the particular evolution of each event. By doing this, we put no restrictions on how high-activity events unfold in time, for example, they could be: (a) events that start out slowly and suddenly gain momentum, (b) events that go viral soon after they appear on social media and then decay in intensity over a long (or short) period of time, (c) events that from the beginning produce large amounts of interest and sustain that interest throughout their long (or short) lifespan, or (d) events that are a concatenation of any of the above, etc.
We study a dataset of news events gathered from news headlines from a _manually curated_ list of well-known news media accounts (e.g., @CNN, @BreakingNews, @BBCNews, etc.) in the microblogging platform Twitter [26] (a full list of all the news media accounts is provided in the supplementary material). Headlines were collected periodically every hour, over the course of approximately one year. In parallel, all the Twitter messages (called _tweets_) were extracted for each news event using the public API [25]. This process was performed by automatically extracting descriptive sets of keywords for each event using a variation of frequent itemset extraction [21] over the event’s headlines. These sets of keywords were then used to retrieve corresponding user tweets for each event. We validate the events gathered in our data collection process to ensure that each group of social media posts corresponds to a meaningful and cohesive news event. We provide a detailed description of the collection methodology and of the validation of event cohesiveness in the supplementary material. Overall, the resulting dataset contains \(43,256,261\) tweets that account for \(5,234\) events (Table 6).
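As an illustration of the keyword-extraction step, the following simplified Python sketch counts keyword combinations that co-occur in several headlines. It is only a stand-in for the frequent-itemset variant actually used (described in the supplementary material); the headlines, stopword list, and thresholds are hypothetical.

```python
from collections import Counter
from itertools import combinations
import re

# Hypothetical headlines collected within one time window.
headlines = [
    "Luis Suarez charged for biting Italian defender during World Cup match",
    "FIFA opens case against Suarez after biting incident at World Cup",
    "Suarez biting incident: FIFA charges Uruguay striker",
]

STOPWORDS = {"for", "the", "at", "after", "during", "against", "opens"}

def keyword_sets(headlines, min_support=2, max_keywords=4):
    """Simplified stand-in for frequent-itemset keyword extraction: return
    keyword combinations that co-occur in at least `min_support` headlines."""
    token_sets = [
        {t for t in re.findall(r"[a-z]+", h.lower()) if t not in STOPWORDS and len(t) > 2}
        for h in headlines
    ]
    counts = Counter()
    for tokens in token_sets:
        for r in range(2, max_keywords + 1):
            counts.update(combinations(sorted(tokens), r))
    return [set(ks) for ks, c in counts.items() if c >= min_support]

print(keyword_sets(headlines)[:5])
```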
In Figure 2 we characterize an example event from our dataset, by showing the set of keywords and a sample of tweets associated to the event. These keywords form a semantically meaningful event; they refer to the incident where soccer player Luis Suarez was charged for biting another player during the FIFA World Cup in 2014. This general collection process results in a set of social media posts associated to an event which can encompass several memes, viral tweets and pieces of information. Therefore, an event is composed of diverse information, addressing more heterogeneous content than prior work [4, 20, 14, 23, 18, 1, 19] which focus on single pieces of information (e.g., a particular meme, a viral tweet etc.).
Event Collection Statistics | Minimum | Mean | Median | Maximum
---|---|---|---|---
# of posts (per event) | 1,000 | 8,254 | 2,474 | 510,920
# of keywords (per tweet) | 2 | 3.77 | 3 | 39
Event duration (hours) | 0.12 | 20.93 | 7.46 | 190.43
Table 1: High-level description of the dataset of news events.
<figure><img src="content_image/1511.01830/x3.png"><figcaption>Figure 2: An example event, collected on 06/25/2014 with keywords (left) andsample user posts (right) obtained from the Twitter Search API. The tweets inthe event contain at least a pair of descriptive keywords and were retrievedclose to the time of the event.</figcaption></figure>
The collection of events is converted into their VQ-event model representation. Using this model, we can identify events that have produced similar levels of activity in the social network. In other words, events are considered to have similar activity if the interarrival times between their social media posts are similarly distributed, implying a very much alike collective reaction from users to the events within a group. In order to identify groups of similar events, we cluster the event models. We sort the resulting groups of events from highest to lowest activity, according to the concentration of social media posts in the bins that correspond to short interarrival times. We consider the events that fall in the top cluster to be high-activity events as most of their interarrival times are concentrated in the smallest interval of the VQ-event model. In our dataset, these correspond to roughly 8% of the events. We consider the next clusters in the sorted ranking to form medium-high activity events, and so on. Thus we end with four groups of events: high, medium-high, medium-low and low. Figure 3 shows a heatmap of the interarrival relative frequency for each cluster. This classification of events based on activity intensity is independent of event size. More details of this methodology are provided in the supplementary material.
<figure><img src="content_image/1511.01830/x4.png"><figcaption>Figure 3: Each row is the average representation of all the events in acluster. A darker cell represents a higher relative frequency value. They-axis specifies the number of events in each cluster. Clusters are (top tobottom): high-activity, medium-high medium-low and low.</figcaption></figure>
## Results and Discussion
Our main objective in this work is to analyze the characteristics of high-activity events which differentiate them from other types of events. In particular, we identify how early in an event’s lifecycle we can determine whether an event is going to produce high activity in the on-line social network.
Tables 2 and 3 show examples of events from the high-activity and low-activity categories. We recall that the high-activity events are those which were in the top 8% of the ranking obtained by sorting the event clusters according to concentration of interarrival times of social media posts in the shortest interarrival time of the VQ-event model. Table 2 shows two events of different sizes (large and small) and different scopes (one global and the other of more local scope) categorized as high activity in our dataset. The first event, the death of Nelson Mandela, is one of the largest events in the dataset, with \(\approx 134,000\) tweets. The histogram representation of this event, shown in Figure 1(a), suggests that more than \(80\%\) of the activity of the event was produced in high-activity periods. This is an event of international, political, and social importance that produced an overwhelming flood of messages on social media. Hence, it makes sense for such an example to be a high-activity event. The second event, on the other hand, about the 2013 Mumbai Gang Rape is of much smaller scale, with a total of \(\approx 1,700\) tweets. However, this event caused a considerable amount of immediate reaction on social media, with close to \(50\%\) of its activity concentrated within high-activity periods. Despite its smaller size, in comparison to the previous event, this event displays a similar reaction to that of other high-activity events, but at a smaller scale.
Table 3 shows events that have been classified by our methodology in the category of low activity. The first event, about a teen surviving after hiding in the wheel of a airplane, had only a little more than \(25\%\) of its messages arriving with high-activity bursts although it had over \(18,000\) messages. The second event, about the damages caused by a tornado in Canada, did not garner much immediacy in attention of Twitter users, with only \(7\%\) of its messages produced with short interarrival times. Most of the messages of this event were well spaced out in time. Even though we cannot say whether or not this event had significant implications in the real-world, we can say that it did not have considerable impact on the Twitter network. The lack of interest could be due to several factors that are currently beyond the scope of this work, ranging from the lack of Twitter users in the locality of the real-world event, to it not being considered urgent by Twitter users. We intend to research the relation between the real-world impact of an event and the network reaction in future work.
Fig. 4 shows the average histograms for events that belong to the high activity, medium-high activity, medium-low activity and low-activity clusters (displayed from left to right and top to bottom). All histograms show a quick decay in average relative frequency (resembling a distribution from the exponential family). In particular, the high-activity group concentrates most of its activity in the shortest interarrival rate, with lower activity groups mostly concentrating their activity in the second bin with slower decay. Fig. 5 further characterizes the differences in behavior of the high and low-activity groups, showing that high-activity events concentrate on average \(70\%\) of their activity in the smallest bin (\(0\) sec.), against \(8\%\) for low-activity events. In addition, Fig. 6 (left) shows the cumulative distribution function (CDF) for each group of events, and Fig. 6 (right) shows \(\log{(1-\mathrm{CDF})}\). Visual inspection shows a clear difference in how interarrival rates are distributed within each group; however, these figures indicate neither a power-law nor an exponential distribution.
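The empirical CDF and the \(\log(1-\mathrm{CDF})\) tail shown in Fig. 6 can be computed along the following lines (synthetic interarrival times; plotting omitted).

```python
import numpy as np

def empirical_cdf(samples):
    """Return sorted values x and the empirical CDF evaluated at x."""
    x = np.sort(np.asarray(samples, dtype=float))
    cdf = np.arange(1, len(x) + 1) / len(x)
    return x, cdf

# Hypothetical interarrival times (seconds) of one group of events.
rng = np.random.default_rng(2)
interarrivals = rng.exponential(scale=5.0, size=10_000)

x, cdf = empirical_cdf(interarrivals)
# Tail in log scale, log(1 - CDF); drop the last point where 1 - CDF == 0.
log_tail = np.log(1.0 - cdf[:-1])
print(x[:3], cdf[:3], log_tail[:3])
```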
Event | Sample Tweets
---|---
**Description:** Death of South African politician Nelson Mandela. <br> **Keywords:** [nelson, mandela] <br> **Date:** 2013-12-05 <br> **Size:** 134,637 tweets | @DaniellePeazer: RIP Nelson Mandela….. what a truly phenomenal and inspirational man xx <br> @iansomerhalder: Im in tears.The world has lost one of its greatest shepherds of peace. Thank you Mr.Mandela for the love you radiated. <http://t.co/u39MVVEKe8> <br> @FootballFunnys: This is so true. RIP Nelson Mandela. <http://t.co/vF9xri8LdP> <br> @David_Cameron: I’ve spoken to the Speaker and there will be statements and tributes to Nelson Mandela in the House on Monday.
**Description:** 2013 Mumbai Gang Rape <br> **Keywords:** [rape, mumbai] <br> **Date:** 2013-08-24 <br> **Size:** 1,705 tweets | @TheNewsRoundup: Mumbai gang-rape: Second accused confesses to crime: Mumbai Police - Daily News Analysis <http://t.co/KnabwhqH66> <br> @vijayarumugam: An interesting take on the Mumbai rape: <http://t.co/ylBmW4l8sA> <br> @LondonStephanie: Two arrested over gang rape of Mumbai photojournalist that sparked renewed protests in India <http://t.co/McYfLNDvaE> <br> @GanapathyI: Most brutal rapist of Delhi gang-rape was 17. Most brutal rapist of Mumbai gang-rape is 18. Worst Young generation I have seen in my life.
Table 2: Examples of high-activity news events. The events shown were taken from the “high” category according to Fig. 4.
Event | Sample Tweets
---|---
**Description:** Teen survives hiding in a plane wheel. <br> **Keywords:** [teen, survives, old, well, skydivers, plane, wheel, flight] <br> **Date:** 2014-04-21 <br> **Size:** 18,519 | @ToniWoemmel: 16-year-old somehow survives flight from California to Hawaii stowed away in planes wheel well: <http://t.co/IGiJa60SiK> <br> @iOver_think: 38,000 feet at -80F: Teen stowaway survives five-hour California-to-Hawaii flight in wheel well <http://t.co/ejXQH9VZyT> <br> @TruEntModels: GOD IS GOOD…runaway TEEN hid in plane’s wheel for 5 HOUR flight during FREEZING temps and survived <http://t.co/6g6Cqhs9Ib> <br> @DvdVill: A 16-year-old kid, who was mad at his parents, hid inside a jet wheel and survived flight to Hawaii. <http://t.co/c82GbjrfUH>
**Description:** Surveying the damages of recent tornado in Canada. <br> **Keywords:** [canada, tornado] <br> **Date:** 2014-06-21 <br> **Size:** 1,033 | @Kathleen_Wynne: Visited #Angus today to survey the damage. Thankfully no fatalities or major injuries from recent tornado. <http://t.co/xRQyRWg5Vw> <br> @SunNewsNetwork: PHOTOS & VIDEO: Hundreds displaced after tornado hits Ontario town, destroying homes <http://t.co/L38rG6N1a6> <br> @CBCToronto: Kathleen Wynne is speaking at site of tornado damage in Angus, Ont. now. Watch live here: <http://t.co/EDKNUiZo0X> #cbcto <br> @InsuranceBureau: @CTVBarrieNews: Insurance Bureau of Canada is setting up a mobile unit in #Angus today to help residents affected by #Tornado
Table 3: Examples of events with low activity. The events shown were taken from the “low” category according to Fig. 4.
<figure><img src="content_image/1511.01830/x5.png"><figcaption>Figure 4: Average histograms of the high activity, medium-high activity,medium-low activity and low activity clusters in our dataset (from left toright and top to bottom). All histograms include standard deviation bars andwere cut-off at 60 second length for better visibility.</figcaption></figure>
<figure><img src="content_image/1511.01830/x6.png"><figcaption>Figure 5: Scatter plots of the average relative frequencies of interarrivaltimes for the high-activity and low-activity clusters of events (i.e., scatterplots of the histograms in Fig. 4 in log-log scale). y-axis represents theaverage relative frequency of social media messages and x-axis theinterarrival time.</figcaption></figure>
<figure><img src="content_image/1511.01830/x7.png"><figcaption>Figure 6: (Left) Average cumulative distribution function (CDF) for the highactivity, medium-high activity, medium-low activity and low activity clustersin our dataset. (Right) log(1−CDF) for the same clusters.</figcaption></figure>
Further analysis of the high-activity events shows significant differences from other events, in the following aspects: (i) how the information about these events is propagated, (ii) the characteristics of the conversations that they generate, and (iii) how focused users are on the news topic. In detail, high-activity events have a higher fraction of _retweets_ (or shares) relative to their overall message volume. On average, a tweet from a high-activity event is retweeted 2.36 times more than a tweet from a low activity event. The most retweeted message in high-activity events is retweeted 7 times more than the most retweeted message in a medium or low activity event. We find that a small set of initial social media posts are propagated quickly and extensively through the network without any rephrasing by the user (just plain forwarding). Intuitively, this seems justified given the general topic urgency of high-activity events. Events that are not high-activity did not exhibit these characteristics.
Our research also revealed that high-activity events tend to spark more conversation between users, 33.4% more than other events. This is reflected in the number of _replies_ to social media posts. The number of different users that engage with high-activity events is 32.7% higher than in events that are not high-activity. Posts about high-activity events are much more topic focused than in other events. The vocabulary of unique words as well as _hashtags_ used in high-activity events is much more narrow than for other events. Medium and low activity events have over 7 times more unique hashtags than high-activity events. This is intuitive, given that if a news item is sensational, people will seldom deviate from the main conversation topic.
In a real-world scenario, in order to predict if an early breaking news story will have a considerable impact in the social network, we will not have enough data to create its activity-based model, i.e., we will not yet know the distribution of the speed at which the social media posts will arrive for the event. For instance, an event can start slowly and later produce an explosive reaction, or start explosively and decay quickly to an overall slower message arrival rate. Still, reliable early prediction of very high-activity news is important in many respects, from decisions about mass media coverage to natural disaster management, brand and political image monitoring, and so on.
For the task of early prediction of high-activity events we use features that are independent of our activity-based model, such as the retweets, the sentiment of the posts about the event, etc. These features are computed on the early 5% of messages about the event. The results are an average from a 5-fold cross validation with randomly selected 60% training, 20% validation and 20% test splits. The high-activity events are identified with a precision of 82% using only the earliest 5% of the data of each event (Table 13). Additionally, we were able to identify with high accuracy a considerable percentage of all high-activity events (\(\approx 46\%\)) at an early stage, with very few false positives (Tables 13 and 12).
| Early 5% Tweets | All Tweets
---|---|---
| FP-Rate | Precision | Recall | ROC-area | FP-Rate | Precision | Recall | ROC-area
high-activity | 0.009 | 0.819 | 0.455 | 0.900 | 0.01 | 0.830 | 0.540 | 0.945
non-high-activity | 0.545 | 0.954 | 0.991 | 0.900 | 0.460 | 0.960 | 0.990 | 0.945
Table 4: Classification of high-activity events.
| Early 5% Tweets | All Tweets
---|---|---
high-activity | non-high-activity | high-activity | non-high-activity
high-activity | 194 | 232 | 230 | 196
non-high-activity | 43 | 4,765 | 47 | 4,761
Table 5: Confusion matrix for high-activity events prediction.
The precision using only the early tweets is almost as good as using all tweets in the event (0.819 to 0.830). This suggests that the social network somehow acts as a natural filter in separating out the high-activity events fairly early on. The recall goes from 0.455 to 0.540. This indicates that there are some high-activity events which require more data in order to determine what kind of activity they will produce, or events for which activity occurs due to random conditions. A detailed description of the features and different classification settings are provided in the supplementary material.
## Conclusion
We study the characteristics of the activity that real-world news produces in the Twitter social network. In particular, we propose to measure the impact of a real-world news event on the on-line social network by modeling the user activity related to the event using the distribution of the interarrival times between consecutive messages. In our research we observe that the activity triggered by real-world news events follows a similar pattern to that observed in other types of collective reactions to events, that is, it displays periods of intense activity as well as long periods of inactivity. We further extend this analysis by identifying groups of events that produce a much higher concentration of activity than other events. We show that there are several specific properties that distinguish how high-activity events evolve in Twitter, when comparing them to other events. We design a model for events, based on the codebook approach, that allows us to do unambiguous classification of high-activity events based on the impact displayed by the social network. Some notable characteristics of high-activity events are that they are forwarded more often by users, and generate a greater amount of conversation than other events. Social media posts from high-activity news events are much more focused on the news topic. Our experiments show that there are several properties that can suggest early on if an event will have high activity on the on-line community. We can predict a high number of high-activity events _before_ the network has shown any type of explosive reaction to them. This suggests that users are collectively quick at deciding whether an event should receive priority or not. However, there does exist a fraction of events which will create high activity despite not presenting the patterns of other high-activity events during their early stages. These events are likely to be affected by other factors, such as the conditions found in the social network at the moment, and require further investigation.
## References and Notes
* 1. Mohamed Ahmed, Stella Spagna, Felipe Huici, and Saverio Niccolini. A peek into the future: Predicting the evolution of popularity in user generated content. In _Proceedings of the Sixth ACM International Conference on Web Search and Data Mining_, WSDM ’13, pages 607–616, New York, NY, USA, 2013. ACM.
* 2. Albert-Laszlo Barabasi. The origin of bursts and heavy tails in human dynamics. _Nature_, 435(7039):207–211, 2005.
* 3. Jonah Berger and Katherine L Milkman. What makes online content viral? _Journal of Marketing Research_, 49(2):192–205, 2012.
* 4. Carlos Castillo, Mohammed El-Haddad, Jürgen Pfeffer, and Matt Stempeck. Characterizing the life cycle of online news stories using social media reactions. In _Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work and Social Computing_, CSCW ’14, pages 211–223, New York, NY, USA, 2014. ACM.
* 5. Júlio Cesar dos Reis, Fabrício Benevenuto, Pedro Olmo S. Vaz de Melo, Raquel O. Prates, Haewoon Kwak, and Jisun An. Breaking the news: First impressions matter on online news. _CoRR_, abs/1503.07921, 2015.
* 6. L. Fei-Fei and P. Perona. A bayesian hierarchical model for learning natural scene categories. In _Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on_, volume 2, pages 524–531 vol. 2, June 2005.
* 7. Liang Gao, Chaoming Song, Ziyou Gao, Albert-László Barabási, James P Bagrow, and Dashun Wang. Quantifying information flow during emergencies. _arXiv preprint arXiv:1401.1274_, 2014.
* 8. Julien Gaugaz, Patrick Siehndel, Gianluca Demartini, Tereza Iofciu, Mihai Georgescu, and Nicola Henze. Predicting the future impact of news events. In _Advances in Information Retrieval_, pages 50–62. Springer, 2012.
* 9. Marco Guerini, Carlo Strapparava, and Gözde Özbal. Exploring text virality in social networks. In _ICWSM_, 2011.
* 10. José Luis Iribarren and Esteban Moro. Branching dynamics of viral information spreading. _Physical Review E_, 84(4):046116, 2011.
* 11. Karen Spärck Jones. A statistical interpretation of term specificity and its application in retrieval. _Journal of Documentation_, 28:11–21, 1972.
* 12. Márton Karsai, Kimmo Kaski, Albert-László Barabási, and János Kertész. Universal features of correlated bursty behaviour. _Scientific reports_, 2, 2012.
* 13. Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. What is twitter, a social network or a news media? In _Proceedings of the 19th International Conference on World Wide Web_, WWW ’10, pages 591–600, New York, NY, USA, 2010. ACM.
* 14. Kristina Lerman and Tad Hogg. Using a model of social dynamics to predict popularity of news. In _Proceedings of the 19th International Conference on World Wide Web_, WWW ’10, pages 621–630, New York, NY, USA, 2010. ACM.
* 15. Cheng-Te Li, Man-Kwan Shan, Shih-Hong Jheng, and Kuan-Ching Chou. Exploiting concept drift to predict popularity of social multimedia in microblogs. _Inf. Sci._, 339(C):310–331, April 2016.
* 16. Qian Liu, Mi Zhou, and Xin Zhao. Understanding news 2.0. _Inf. Manage._, 52(7):764–776, November 2015.
* 17. Adam J Mills. Virality in social media: the spin framework. _Journal of public affairs_, 12(2):162–169, 2012.
* 18. Henrique Pinto, Jussara M. Almeida, and Marcos A. Gonçalves. Using early view patterns to predict the popularity of youtube videos. In _Proceedings of the Sixth ACM International Conference on Web Search and Data Mining_, WSDM ’13, pages 365–374, New York, NY, USA, 2013. ACM.
* 19. Bongwon Suh, Lichan Hong, Peter Pirolli, and Ed H Chi. Want to be retweeted? large scale analytics on factors impacting retweet in twitter network. In _Social computing (socialcom), 2010 ieee second international conference on_, pages 177–184. IEEE, 2010.
* 20. Gabor Szabo and Bernardo A. Huberman. Predicting the popularity of online content. _Commun. ACM_, 53(8):80–88, August 2010.
* 21. Pang-Ning Tan, Michael Steinbach, and Vipin Kumar. _Introduction to Data Mining, (First Edition)_. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2005.
* 22. Alexandru Tatar, Marcelo Dias de Amorim, Serge Fdida, and Panayotis Antoniadis. A survey on predicting the popularity of web content. _Journal of Internet Services and Applications_, 5(1):1–20, 2014.
* 23. Alexandru Tatar, Jérémie Leguay, Panayotis Antoniadis, Arnaud Limbourg, Marcelo Dias de Amorim, and Serge Fdida. Predicting the popularity of online articles based on user comments. In _Proceedings of the International Conference on Web Intelligence, Mining and Semantics_, WIMS ’11, pages 67:1–67:8, New York, NY, USA, 2011. ACM.
* 24. Telegraph. Chile news. http://www.telegraph.co.uk/news/worldnews/southamerica/chile/.
* 25. Twitter API. https://dev.twitter.com.
* 26. Twitter Inc. https://www.twitter.com.
* 27. Yonatan Vaizman, Brian McFee, and Gert Lanckriet. Codebook-based audio feature representation for music information retrieval. _IEEE/ACM Trans. Audio, Speech and Lang. Proc._, 22(10):1483–1493, October 2014.
* 28. Qiang Yan, Lianren Wu, Chao Liu, and Xiye Li. Information propagation in online social network based on human dynamics. In _Abstract and Applied Analysis_, volume 2013. Hindawi Publishing Corporation, 2013.
## 1 Data Collection Methodology
The Twitter Search API¹ was used to obtain tweets about \(5,234\) news events. This encompasses a total of \(43,256,261\) tweets. Table 6 shows a high-level description of the dataset. The full dataset is available at http://dcc.uchile.cl/~mquezada/breakingnews/.
[FOOTNOTE:1][ENDFOOTNOTE]
News events’ property | Minimum | Mean | Median | Maximum
---|---|---|---|---
# of tweets | 1,000 | 8,254 | 2,474 | 510,920
# of keywords | 2 | 3.77 | 3 | 39
Event duration (hours) | 0.12 | 20.93 | 7.46 | 190.43
Table 6: High-level description of the dataset of news events.
### Collecting the Tweets
The data collection process entails detecting pairs of keywords from the most recent hourly batch of news headlines (the pairs of keywords are meant to describe the events succinctly), and then searching for tweets using the pairs of keywords as queries. We merge the search results of ‘similar’ queries every 24 hours and form the tweet set for an event. We obtained the hourly batch of headlines from the news media accounts on Twitter listed in Table 7. Figure 7 represents the high-level flowchart of the data collection process. A summary of this process is described in Algorithm 1. The accounts are verified accounts on Twitter².
[FOOTNOTE:2][ENDFOOTNOTE]
Twitter Account | Name | Location
---|---|---
breakingnews | Breaking News | Global
cnnbrk | CNN Breaking News | Everywhere
cnn | CNN |
nytimes | The New York Times | New York City
bbcbreaking | BBC Breaking News | London, UK
theeconomist | The Economist | London
skynewsbreak | Sky News Newsdesk | London, UK
reuters | Reuters Top News | Around the world
wsjbreakingnews | WSJ Breaking News | New York, NY
foxnews | Fox News | U.S.A.
msnbc_breaking | msnbc.com Breaking |
skynews | Sky News | London, UK
nbcnews | NBC News | New York, NY
cbsnews | CBS News | New York, NY
bbcworld | BBC News (World) | London, UK
abc | ABC News | New York, NY
bbcnews | BBC News (UK) | London
ap | The Associated Press | Global
telegraphnews | Telegraph News | London, UK
breakingnewsuk | Breaking News UK | London
channel4news | Channel 4 News | Weekdays at 7 on Channel 4
twcbreaking | TWC Breaking | Atlanta, GA
washingtonpost | Washington Post | Washington, D.C.
yahoonews | Yahoo News | Santa Monica, Calif.
breakingpol | Breaking Politics | Global
nydailynews | New York Daily News | New York City
ajenglish | Al Jazeera English | Doha, Qatar
usatoday | USA TODAY | USA TODAY HQ, McLean, Va.
wsj | Wall Street Journal | New York, NY
guardiannews | Guardian news | London
bloombergnews | Bloomberg News | New York and the World
abcworldnews | ABC World News | New York
nypost | New York Post | New York, NY
msnbc | msnbc |
nbcnightlynews | NBC Nightly News | New York
huffingtonpost | Huffington Post |
rt_com | RT |
abcnews | ABC News | Australia
latimes | Los Angeles Times | Los Angeles, CA
googlenews | Google News | Mountain View, CA
cnnlive | CNN Live | Everywhere
newshour | NewsHour | Arlington, VA
guardian | The Guardian | London
afp | Agence France-Presse | France
independent | The Independent | London, United Kingdom
ndtv | NDTV | India
cp24 | CP24 | Toronto
reuterslive | Reuters Live | Global
bostonglobe | The Boston Globe | Boston, MA
foxnewsalert | Fox News Alert | New York, NY
ft | Financial Times | London
jerusalem_post | The Jerusalem Post | Israel
bbcnewsus | BBC News US | Washington DC
foxheadlines | Fox News | New York, NY
forbes | Forbes | New York, NY
thetimes | The Times of London | London
usnews | U.S. News | Washington, DC
Table 7: List of news accounts. The first column is the Twitter account. It can
be accessed in a browser at <http://twitter.com/accountname>. The second and
third columns were obtained from each account’s page.
In Algorithm 1, the goal of the detect_keywords() module is to produce pairs of keywords that coherently and succinctly describe an event. Inspired by the data mining concept of mining frequent itemsets [21], we develop an algorithm which identifies the most commonly occurring keyword groups (or item sets) in the headlines. From the item sets, we pick the most common keyword pairs. The algorithm is described in Algorithm 2. This algorithm finds word intersections between headlines (Line 5 computes the set of words present in both \(H_{a}\) and \(H_{b}\)). If the common set of words has sufficient Jaccard similarity to any of the existing item sets, then the common set of words is added to that item set. If not, a new item set is created (Line 11). During the process of identifying the most commonly occurring item sets, we also track how many times each keyword has been added to an item set, namely, the score of the keyword. The score of each item set is the average of the scores of its keywords. Once the item sets have been identified, we select the top 2 keywords from each of the top six item sets and use them for searches. We preprocess the headlines to remove duplicates, stopwords and punctuation, convert everything to lower case, and apply stemming.
We made the choice of selecting 2 keywords since a single keyword may not define an event accurately. For example, the keyword {obama} could retrieve tweets about any event related to Obama. However, a keyword _pair_ like {obama, syria} describes the event more accurately³.
[FOOTNOTE:3][ENDFOOTNOTE]
The Twitter Search API imposes several restrictions on the number of searches that can be performed in a given time duration. We produce six search threads to perform searches, one for each keyword pair. All in all, with \(\tau=60\) minutes in Figure 7, six new pairs of keywords are discovered from the most recent batch of headlines, and then we query for tweets in the Twitter Search API using these keywords over the next one hour.
We make some notes about the data collection methodology. Firstly, there is a temporal sensitivity to the data collection. For example, one of the keyword pairs obtained as soon as the Malaysian airlines jet disappeared was {plane, missing}. Although this keyword pair does not specifically refer to the Malaysian airlines jet, it is likely that the tweets retrieved from searching for this pair will indeed be about the Malaysian airlines plane that went missing, since the search is performed as and when the event breaks out. Secondly, Algorithm 2 may return multiple pairs of keywords (possibly different pairs) describing the same event. Some examples of keyword pairs produced when there was a bomb threat at Harvard University in December 2013 were {harvard, evacuated}, {harvard, explosives}, etc. How do we merge the keyword pairs which belong to the same event? In order to address this, we collect all the pairs obtained in the past \(24\) hours, and build a graph with keywords as nodes, and keyword pairs (as obtained from Algorithm 2) as edges. We then discover the connected components of this graph, and treat each connected component as an ‘‘event”⁴. The set of tweets obtained by merging the tweets from each of the keyword pairs is the set of messages associated with the event. Figure 10 is an example component formed on December 16, 2013. It illustrates how smaller keyword pairs are merged into larger components for two events. One was the bomb threat at Harvard University, and the other was about the attack on police in the Xinjiang province in China.
[FOOTNOTE:4][ENDFOOTNOTE]
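As a concrete sketch of this merging step (not the authors' code; the keyword pairs below are the hypothetical examples from the text, and networkx is assumed to be available):

```
import networkx as nx

def merge_keyword_pairs(keyword_pairs):
    # Build a graph with keywords as nodes and keyword pairs as edges,
    # then treat each connected component as one event.
    graph = nx.Graph()
    graph.add_edges_from(keyword_pairs)
    return [set(component) for component in nx.connected_components(graph)]

# Hypothetical pairs collected over a 24-hour window (the Harvard bomb threat
# and Xinjiang attack examples mentioned above).
pairs = [("harvard", "evacuated"), ("harvard", "explosives"),
         ("xinjiang", "police"), ("xinjiang", "attack")]
events = merge_keyword_pairs(pairs)
# events -> [{"harvard", "evacuated", "explosives"}, {"xinjiang", "police", "attack"}]
```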
<figure><img src="content_image/1511.01830/stopwords.png"><figcaption>Figure 8: Stopwords detection. Normalized 1−maxtf-idf score for data fromAugust 27th (left) and August 28th (right) of 2013. The top score words forboth plots are “says” and “live”. We used the top score words to disconnectconnected components of events.</figcaption></figure>
```
0: stream of headlines.
0: data structures \(\{\mathcal{H}_{1},\mathcal{H}_{2},\ldots\}\), with \(\mathcal{H}.keywords\) = keyword pair, and \(\mathcal{H}.tweets\) = set of tweets
1: \(i\gets 0\), \(j\gets 0\)
2: loop
3: \(\mathcal{S}\leftarrow\) headlines for hour-\(i\)
4: \(keyPairs\leftarrow\)detect_keywords\((\mathcal{S})\) {\(keyPairs\) is a list of keyword pairs.}
5: for \(k=0\) to len(\(keyPairs\))\(-1\) do
6: \(\mathcal{H}_{j}.keywords\gets keyPairs[k]\)
7: \(\mathcal{H}_{j}.tweets\leftarrow\)search\((\mathcal{H}_{j}.keywords)\) {using Twitter Search API}
8: \(j\gets j+1\)
9: end for
10: \(i\gets i+1\)
11: end loop
```
**Algorithm 1** data_collection()
```
0: A set of \(M\) sets of words, \(\mathcal{S}=\{H_{1},H_{2},\ldots,H_{M}\}\), positive integers \(k,\eta\)
0: \(k\) sets of keywords, \(G=(\mathcal{I}_{1},\mathcal{I}_{2},\ldots,\mathcal{I}_{k})\)
1: \(\mathcal{I}_{i}\leftarrow\emptyset\) for \(i=1,2,\ldots,k\)
2: \(score_{i}\leftarrow\) empty dictionary for \(i=1,2,\ldots,k\)
3: \(i\gets 1\)
4: for every pair of headlines \(\{H_{a},H_{b}\}\in\mathcal{S}\) such that \(|H_{a}\cap H_{b}|\geq\eta\) do
5: \(\mathcal{G}\gets H_{a}\cap H_{b}\)
6: \(j\leftarrow\operatorname{arg\,max}_{j}|\mathcal{I}_{j}\cap\mathcal{G}|\)
7: if \(|\mathcal{I}_{j}\cap\mathcal{G}|\geq\eta\) then
8: \(\mathcal{I}_{j}\leftarrow\mathcal{I}_{j}\cap\mathcal{G}\)
9: \(score_{j}[w]\gets score_{j}[w]+1\) for all \(w\in\mathcal{I}_{j}\)
10: else
11: \(\mathcal{I}_{i}\leftarrow\mathcal{G}\)
12: \(score_{i}[w]\gets 1\) for all \(w\in\mathcal{I}_{i}\)
13: \(i\gets i+1\)
14: end if
15: end for
16: \(total\_score_{i}\leftarrow\sum_{w\in\mathcal{I}_{i}}score_{i}[w]\) for \(i=1,2,\ldots,k\)
17: return \(G\leftarrow(\mathcal{I}_{i}\) sorted by \(total\_score_{i})\)
```
**Algorithm 2** detect_keywords()
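For illustration, a direct Python transcription of Algorithm 2 could look as follows; headlines are assumed to be already preprocessed into sets of words, the update in Line 8 is kept as written (set intersection), and the parameter names are ours, not the authors':

```
from itertools import combinations

def detect_keywords(headlines, eta=2, top_sets=6, top_words=2):
    # headlines: list of sets of preprocessed words (one set per headline).
    item_sets, scores = [], []
    for h_a, h_b in combinations(headlines, 2):
        common = h_a & h_b
        if len(common) < eta:
            continue
        # Line 6: find the item set with the largest overlap with the common words.
        best, best_overlap = None, 0
        for idx, item in enumerate(item_sets):
            overlap = len(item & common)
            if overlap > best_overlap:
                best, best_overlap = idx, overlap
        if best is not None and best_overlap >= eta:
            item_sets[best] &= common                      # Line 8
            for w in item_sets[best]:                      # Line 9: update keyword scores
                scores[best][w] = scores[best].get(w, 0) + 1
        else:
            item_sets.append(set(common))                  # Line 11: new item set
            scores.append({w: 1 for w in common})          # Line 12
    # Lines 16-17: rank item sets by total score and keep the best ones.
    ranked = sorted(range(len(item_sets)),
                    key=lambda i: sum(scores[i].values()), reverse=True)
    return [sorted(item_sets[i], key=lambda w: -scores[i][w])[:top_words]
            for i in ranked[:top_sets]]
```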
### Cleaning the Data
The data was preprocessed to reduce the amount of noisy and irrelevant tweets.
#### 1.2.1 Special Stopwords: Articulation Words
During the data collection process, unrelated events were sometimes joined together by keywords that were common to both events.
Typical stopwords such as ‘‘the” and ‘‘a” were removed while preprocessing the news headlines. However, there are other words which occur quite commonly in news headlines. For example, words like ‘‘watch’’, ‘‘live’’, or ‘‘update’’ are common to express things like ‘‘watch this video’’, ‘‘we are live on TV’’, or to update a previous headline with more information. Such words could possibly incorrectly connect two or more very different events as one. Example: ‘‘Watch Jim Harbaugh’s press conference live’’⁵ and ‘‘WATCH LIVE: Of the 48 people being monitored for contact with Dallas patient, no one is showing any symptoms’’⁶. We call such words _articulation words_. We now delve into understanding how and when these words occur, and how to subsequently identify and remove them in the preprocessing step, just as we would a stopword.
[FOOTNOTE:5][ENDFOOTNOTE]
[FOOTNOTE:6][ENDFOOTNOTE]
It is well known that _tf-idf_[11] is a statistic of a word that indicates how important that word is in a given document. Intuitively, if a word appears in all the documents, then its statistic is generally low in all the documents. However, if the word appears in very few documents, its statistic in those documents is fairly high, indicating that the word is somehow representative of the content of the document. It turns out the articulation words do not occur often enough for them to be detected by regular _tf-idf_, but do occur enough times for them to falsely relate several unrelated events together. To identify a group of those keywords, we used a modified _tf-idf_ to detect them from the headlines.
The modified version of _tf-idf_, which we refer to as _maxtf-idf_, is meant to assign more weight to the terms that are frequent in any document. For instance, _tf-idf_ of a term in a document tries to assign a weight related to how “rare” that term is in the whole collection, and how frequent the term is in that document, thus indicating how representative the term is of the document. On the other hand, we want to place a higher weight on a term if its frequency is higher in any other document, relative to the frequency in the current document. With that in mind, we want to identify terms that might be “adding noise” to the corpus and hence merging unrelated events together.
The definition of _maxtf_ is as follows:
\[\text{maxtf}(t,d,D)=0.5+\frac{0.5+\max\{f(t,d^{\prime}):d^{\prime}\in D\}}{\max\{f(w,d):w\in d\}}\] (1)
and for _idf_, the usual formula:
\[\text{idf}(t,D)=\log\frac{N}{|\{d\in D:t\in d\}|}\] (2)
where \(t\) is a term, \(d\) is a document, \(D\) is the corpus of all documents, and \(N\) is the total number of documents in \(D\). In this case, we set \(t\) as a keyword, \(d\) as the set of keywords of one hour of a given day, and \(D\) the set of documents of that day.
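As a minimal sketch, the two statistics can be computed directly from term counts; the code below is a literal transcription of Eqs. (1) and (2), assuming \(f(t,d)\) denotes the raw count of term \(t\) in document \(d\) and that each document is a list of keywords:

```
import math
from collections import Counter

def maxtf_idf(term, doc, corpus):
    # doc: list of keywords for one hour; corpus: list of such documents for the day.
    counts = Counter(doc)
    max_f_any_doc = max(Counter(d)[term] for d in corpus)   # max{f(t, d') : d' in D}
    max_f_this_doc = max(counts.values())                   # max{f(w, d) : w in d}
    maxtf = 0.5 + (0.5 + max_f_any_doc) / max_f_this_doc    # Eq. (1), as written
    df = sum(1 for d in corpus if term in d)                # |{d in D : t in d}|
    idf = math.log(len(corpus) / df) if df else 0.0         # Eq. (2)
    return maxtf * idf
```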
<figure><img src="content_image/1511.01830/stopwords.png"><figcaption>Figure 8: Stopwords detection. Normalized 1−maxtf-idf score for data fromAugust 27th (left) and August 28th (right) of 2013. The top score words forboth plots are “says” and “live”. We used the top score words to disconnectconnected components of events.</figcaption></figure>
After identifying such words, the idea is to disconnect the components connected by those words. The process is to repeatedly disconnect each component by the word with the top normalized \(1-\text{maxtf-idf}\) score until the component cannot be disconnected further. We add the top scoring words to our list of stopwords. These words are hence ignored in the subsequent runs of the data collection methodology. Figure 8 shows two examples of this process.
#### 1.2.2 Discarding Irrelevant Tweets
Because the REST API can return tweets that predate the detection of an event, some of the collected tweets can be very old and not relevant to the event itself. This may lead to inaccuracies in predictions when using the early features.
This problem is illustrated in Figure 9. Note that the first 5% of the tweets take an unusually large portion of the duration of the entire event. This suggests that we are collecting tweets which existed much before the event broke out, and hence are possibly irrelevant. Once we discard the first 5% of tweets, we observe that each segment of the event (the first 5%, the next 5%, etc.) occupies roughly the same duration of the entire event.
<figure><img src="content_image/1511.01830/x8.png"><figcaption>Figure 9: Duration differences of events. The x-axis represents the categoriesof datasets: the first one (t5%-t0) represents the difference of time betweenthe timestamp of the oldest tweet and the newest tweet in the first 5% of thetweets. The next one (t10%-t5%) corresponds to the difference between thenewest tweet in the first 10% and the newest tweet in the first 5% of data,etc. After removing the first 5% of data, the time differences are roughly thesame across all datasets.</figcaption></figure>
### Validation of Data Collection
We performed experiments validating that merging keywords by forming connected components indeed produced meaningful groups of keywords representing an event. As a baseline, we used components obtained by merging random keyword pairs together. We evaluated how well a cluster is formed from the set of tweets obtained from connected components, comparing the cluster to the set of tweets obtained from random components. Connected components are expected to merge keyword pairs that belong to the same event, and hence would make better clusters when compared to merging random keyword pairs. The results are displayed in Figure 11. In this figure, each plot depicts a different metric that evaluates the quality of a cluster. These clustering metrics are summarized in Table 8. For better interpretation and visual clarity, in each of the plots, we sorted the clustering metrics obtained via connected components. We then rearranged the clustering metrics for the baseline according to the sorting order obtained from connected components. (This is the reason why the blue line is monotonically increasing.) This experiment was performed on one month of data (there are approximately 30 data points in each plot) between August 2013 and September 2013. We took all the keyword pairs obtained in a day and found the connected components as in Figure 10. For random components, we merged the keyword pairs randomly. We took precautions to make sure that the sizes of the connected components and random components per day were comparable. That is, if we had connected components of sizes 6, 6, and 5 formed from keyword pairs on a particular day, we made sure that similarly sized random components were also formed from the keyword pairs of the same day. Also, to make sure that tweets from any one keyword pair do not dominate the tweet set, we sampled an equal number of tweets from each keyword pair, and the _same_ sample of tweets is used to calculate the clustering metrics in both the connected components approach and the random components approach. The random baseline has been averaged over 3 different rounds of experimentation.
<figure><img src="content_image/1511.01830/connected_components.png"><figcaption>Figure 10: This figure illustrates how we merge keyword pairs which representthe same event into larger components.</figcaption></figure>
<figure><img src="content_image/1511.01830/x9.png"><figcaption>(a) I1</figcaption></figure>
Name | Metric | Meaning
---|---|---
I1 | \(\sum_{i=1}^{k}\frac{1}{n_{i}}\sum_{(u,v)\in S_{i}}\mathrm{sim}(u,v)\) | Higher value is better
I2 | \(\sum_{i=1}^{k}\sqrt{\sum_{(u,v)\in S_{i}}\mathrm{sim}(u,v)}\) | Higher value is better
E1 | \(\sum_{i=1}^{k}n_{i}\,\frac{\sum_{v\in S_{i},u\in S}\mathrm{sim}(u,v)}{\sqrt{\sum_{(u,v)\in S_{i}}\mathrm{sim}(u,v)}}\) | Lower value is better
G1 | \(\sum_{i=1}^{k}\frac{\sum_{v\in S_{i},u\in S}\mathrm{sim}(u,v)}{\sum_{(v,u)\in S_{i}}\mathrm{sim}(v,u)}\) | Lower value is better
G′1 | \(\sum_{i=1}^{k}n_{i}^{2}\,\frac{\sum_{v\in S_{i},u\in S}\mathrm{sim}(v,u)}{\sum_{(u,v)\in S_{i}}\mathrm{sim}(u,v)}\) | Lower value is better
H1 | \(I_{1}/E_{1}\) | Higher value is better
H2 | \(I_{2}/E_{1}\) | Higher value is better
Table 8: This table lists the clustering metrics used in Figure 11.
## 2 VQ Event Model
We introduce a novel vectorial representation based on a vector quantization of the interarrival time distribution, which we call “VQ-event model”. The most representative interarrival times are learned from a large training corpus. Each of the learned interarrival times is called a _codeword_, and the entire set of the learned interarrival times, the _codebook_.
We represent an event \(e\), belonging to a collection of events \(\mathcal{E}\), as a tuple \((\mathcal{K}_{e},\mathcal{M}_{e})\), where \(\mathcal{K}_{e}\) is a set of _keywords_ and \(\mathcal{M}_{e}\) is a set of _social media messages_. Both the keywords and the messages are related to a real-world occurrence. As explained in Section 1, the keywords are extracted in order to succinctly describe the occurrence, and the messages are posts from users about the event.
To learn the most representative interarrival times we perform the following: for each \(e\in\mathcal{E}\) with messages \(\mathcal{M}_{e}=[m_{1}^{e},m_{2}^{e},\ldots m_{n}^{e}]\) and their corresponding time-stamps \([t_{1}^{e},t_{2}^{e},\ldots t_{n}^{e}]\) where \(t_{i}\leq t_{i+1}\ \forall i\in[1,n-1]\), we compute all the interarrival times \(d_{i}^{e}=t_{i}^{e}-t_{i-1}^{e}\) (the value of \(t_{0}\) is considered equal to \(t_{1}\) for initialization purposes). Then, the values of \(d_{i}^{e}\) for all events in \(\mathcal{E}\) are clustered to identify the _most representative_ interarrival times.
Once the most representative interarrival times have been learned, the vector quantization for each event is produced as follows: for each event, obtain all the interarrival times, and quantize each of them to the closest codeword in the codebook. This process is summarized in Algorithm 3. Line 1 collects all of the interarrival times for all the events in \(\mathcal{E}\) in **f**. Line 2 is a clustering algorithm which takes **f** and the number of clusters \(k\) as inputs and returns the centroids of the clusters as the output in **c**. The centroids can be thought of as the most representative interarrival times for the event set \(\mathcal{E}\). After that, the interarrival times of each event \(e\) are vector quantized in terms of the centroids to obtain a \(k\)-dimensional real valued representation of the event (Line 4). In this representation, each entry is the fraction of messages with that particular codeword as the interarrival time.
```
0: Event set \(\mathcal{E}\), and number of codewords \(k\) in the codebook.
0: A representation in \(\mathbb{R}^{k}\) of each event \(e=(\mathcal{K}_{e},\mathcal{M}_{e})\in\mathcal{E}\).
1: \(\textbf{f}\leftarrow\{d_{i}^{e}|m_{i}^{e}\in\mathcal{M}_{e},e\in\mathcal{E}\}\)
2: c\(\leftarrow\)cluster(\(\textbf{f},k\))
3: for \(e\in\mathcal{E}\) do
4: e\(\leftarrow\)vq(\(d_{i}^{e},\textbf{c}\))
5: end for
```
**Algorithm 3** learn_representation()
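A minimal realization of Algorithm 3, using k-means as the (otherwise unspecified) clustering step and scikit-learn purely as an assumption, might look like this:

```
import numpy as np
from sklearn.cluster import KMeans

def learn_representation(event_timestamps, k=10):
    # event_timestamps: one sorted list of message timestamps (in seconds) per event.
    # Interarrival times; prepending t_1 makes d_1 = 0, matching the initialization above.
    inter = [np.diff(np.asarray(ts, dtype=float), prepend=ts[0])
             for ts in event_timestamps]
    # Line 2: learn the codebook (cluster centroids) from all interarrival times.
    codebook = KMeans(n_clusters=k, n_init=10).fit(
        np.concatenate(inter).reshape(-1, 1))
    representations = []
    for d in inter:
        codes = codebook.predict(d.reshape(-1, 1))          # Line 4: quantize each message
        representations.append(np.bincount(codes, minlength=k) / len(codes))
    return np.vstack(representations), codebook.cluster_centers_.ravel()
```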
## 3 High Activity Vs Low Activity Events
Once the collection of events is converted into their VQ-event model representation, we can identify events that have produced similar levels of activity in the social network. In other words, events are considered to have similar activity if the interarrival times between their social media posts are similarly distributed, implying a very similar collective reaction from users to the events within a group. In order to identify groups of similar events, we cluster the event models. We sort the resulting groups of events from highest to lowest activity, according to the concentration of social media posts in the bins that correspond to short interarrival times. We consider the events that fall in the top cluster to be high-activity events as most of their interarrival times are concentrated in the smallest interval of the VQ-event model. Thus we end up with four groups of events: high, medium-high, medium-low and low; a heatmap of the interarrival-time relative frequencies for each cluster illustrates this grouping.
Throughout this section, we analyze different features for each of the event categories and compare them both qualitatively and quantitatively. We performed two-tailed \(t\)-tests for a variety of features for events in the high-activity category, and compared them with the average values for the remaining events.
### Information Forwarding Characteristics
We found that the high-activity events possess more information forwarding characteristics than other events. The features which support this argument, together with their descriptions and values, are listed in Table 9.
The retweet_count is generally higher for high-activity events. This feature is the fraction of retweets present in the event, log-normalized by the total amount of tweets in the event. A higher value suggests that people have a greater tendency to spread the occurrence of these events, and forward this information to their followers.
The tweets_retweeted value is lower for high-activity events than for the rest. This feature is the number of tweets which have been retweeted, log-normalized by the total number of tweets in the event. This suggests that the high amount of retweets for the high-activity events actually originates from fewer tweets: a small number of tweets become popular and are retweeted many times.
The retweets_most_retweeted is the total number of retweets of the tweet that has been retweeted the most. This number is much higher for high-activity events than for low-activity events, suggesting that the most popular tweet indeed becomes very popular when the event is of high-impact.
Feature Name | Description | high-activity, others | Hypothesis, p-value
---|---|---|---
retweet_count | log(total retweet count in the event divided by total tweets in the event) | 2.205, 1.473 | 1, p=0
tweets_retweeted | log(number of tweets retweeted divided by total tweets in the event) | −1.091, −0.964 | 1, p=2.7×10−5
retweets_most_retweeted | number of retweets of the most retweeted tweet | 284.491, 40.261 | 1, p=0
Table 9: (Refer to Section 3.1.) This table lists all the features which characterize the information forwarding aspect of an event. In general, high-activity events tend to have higher values for information forwarding features than other events.
### Conversational Characteristics
We found that high-activity events in general tend to generate more conversation between users than the events in other categories. We observe this behavior through several features. Refer to Table 10.
Feature Name | Description | high-activity, others | Hypothesis, p-value
---|---|---|---
replies | log(total replies divided by total tweets) | −1.4016, −1.6474 | 1, p=10−4
norm_replies | log(number of replies divided by total number of unique users) | −1.5796, −1.9294 | 1, p=6.7×10−4
tweets_replied | log(number of tweets which generated replies divided by total tweets) | −1.7784, −2.0668 | 1, p=0.001
uniq_users_replied | log(unique users who have written a reply divided by total tweets) | −1.7524, −2.0352 | 1, p=0.001
Table 10: (Refer to Section 3.2.) This table lists all the features which characterize the conversational aspect of high-activity and remaining events. Using these features, we argue in Section 3.2 that high-activity events tend to invoke more conversation amongst users than their counterparts.
The features replies and norm_replies both count the number of replies, but have been normalized slightly differently. Both have a higher value for high-activity events, suggesting that high-activity events in general tend to spark more conversation between the users. The tweets_replied feature counts the number of tweets which have generated replies (it has been log-normalized by the total number of tweets in the event). This is also higher for high-activity events, indicating that such events on average have more tweets which invoke a reply from people. The uniq_users_replied feature counts the number of unique users who have participated in a conversation. Again, this number is found to be higher for high-activity events than for others, suggesting that more users tend to engage in a conversation about these events. All these features collectively suggest that high-activity events tend to have a _conversational characteristic_ associated with them.
### Topical Focus Characteristics
We find that high-activity events have considerably more topical focus than the remaining events. This possibly suggests that when a news item is sensational, people seldom deviate from the topic of the news to other things.
We used four features listed in Table 11 to study the topic focus characteristics of high-impact events.
Feature Name | Description | high-activity, others | Hypothesis, p-value
---|---|---|---
uniq_words | log(total unique words divided by total tweets) | −0.1982, 0.1651 | 1, p=0
uniq_chars | log(total unique characters divided by total tweets) | 2.0009, 2.0456 | 1, p=0
uniq_hashtags | log(number of unique hashtags divided by total tweets) | −1.1126, 0.8761 | 1, p=0
uniq_urls | log(number of unique urls divided by total tweets) | −0.7194, −0.4951 | 1, p=0
Table 11: (Refer to Section 3.3.) This table summarizes all the features that were used to study the topical focus characteristics of high-activity events.
The number of unique words (uniq_words) and characters (uniq_chars) for high-activity events is lower than the remaining events suggesting that the information content for high-activity events is more focused than for the remaining events (as they do not need a diverse vocabulary). Hashtags on Twitter are a sequence of characters that follow the # symbol. Conventionally, their purpose is to indicate the topic of the tweet. Again, this number (uniq_hashtags; log-normalized by the total number of tweets) is lower for the high-activity events than for the remaining events. The number of unique URLs (uniq_urls; which can be taken to interpret similar semantics as the hashtags) is also lower for high-activity events than for the rest.
### Early prediction of high-activity events
The results from sections 3.1, 3.2 and 3.3 suggest that high-activity events differ considerably from other events in terms of how they are received by the users and in terms of the response they invoke from the network.
In the next phase, our goal is to use supervised machine learning on only the early tweets of an event to predict whether the event will generate high activity or not. A list of all the features used for classification is shown in Table 14. The classification was carried out using logistic regression as provided by the Weka package. The data was split approximately into \(60-20-20\) training, test and validation sets and the results were averaged over 5 runs of experiments.
Table 13 illustrates the prediction results obtained from the earliest 5% of the tweets, and from using all the tweets. We find that the false positive rate using only the early tweets is almost as good as the false positive rate using all the tweets. The same observation holds for the precision and ROC-area metrics as well. However, we observe an \(18\%\) increase in the recall (0.455 to 0.540). This suggests that some high-activity events perhaps do not start displaying their unique characteristics well enough in their early stages.
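As a rough sketch of this setup (the paper used Weka's logistic regression; scikit-learn is substituted here, and `X_early`, `y` are assumed to hold the normalized Table 14 features computed on the earliest 5% of each event's tweets and the high-activity labels):

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score

def run_once(X, y, seed):
    # Approximate 60-20-20 split; the validation part is unused in this sketch.
    X_tr, X_rest, y_tr, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=seed, stratify=y)
    X_val, X_te, y_val, y_te = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    prec, rec, _, _ = precision_recall_fscore_support(
        y_te, clf.predict(X_te), average="binary")
    return prec, rec, roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# results = np.mean([run_once(X_early, y, s) for s in range(5)], axis=0)
```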
| Early 5% Tweets | All Tweets
---|---|---
high-activity | others | high-activity | others
high-impact | 194 | 232 | 230 | 196
non-high-impact | 43 | 4765 | 47 | 4761
Table 12: Confusion matrix while predicting the top 8% of events as high-
activity. The predictions were made using the early 5% of the tweets, and by
using all the tweets from the event.
| Early 5% Tweets | All Tweets
---|---|---
| FP-Rate | Precision | Recall | ROC-area | FP-Rate | Precision | Recall | ROC-area
high-activity | 0.009 | 0.819 | 0.455 | 0.900 | 0.01 | 0.830 | 0.540 | 0.945
others | 0.545 | 0.954 | 0.991 | 0.900 | 0.460 | 0.960 | 0.990 | 0.945
Table 13: Classification results of detecting whether an event from the top 8%
is high-impact or not while predicting from features extracted from the
earliest 5% of the tweets and from all the tweets belonging to the event.
Feature Name | Normalized By | Normalization Method
---|---|---
component_size | None |
total_seconds | total_tweets | log(x)−log(y)
total_tweets | None |
total_retweets | total_tweets | log(x)−log(y)
total_tweets_retweeted | total_tweets | log(x)−log(y)
retweets_most_retweeted | total_retweets | log(x)−log(y)
total_mentions | total_tweets | log(x)−log(y)
total_unique_mentions | total_mentions | log(x)−log(y)
total_tweets_with_mention | total_tweets | log(x)−log(y)
total_tweets_with_mostfrequent_mention | total_tweets_with_mention | log(x)−log(y)
total_hashtags | total_tweets | log(x)−log(y)
total_unique_hashtags | total_hashtags | log(x)−log(y)
total_tweets_with_hashtag | total_tweets | log(x)−log(y)
total_tweets_with_mostfrequent_hashtag | total_tweets_with_hashtag | log(x)−log(y)
total_urls | total_tweets | log(x)−log(y)
total_unique_urls | total_urls | log(x)−log(y)
total_tweets_with_url | total_tweets | log(x)−log(y)
total_tweets_with_mostfrequent_url | total_tweets_with_url | log(x)−log(y)
total_unique_verified_users | total_verified_users | log(x)−log(y)
total_verified_users | total_tweets | log(x)−log(y)
total_unique_users | total_tweets | log(x)−log(y)
total_replies | total_unique_users | log(x)−log(y)
total_tweets_first_replied | total_tweets | log(x)−log(y)
total_unique_users_replied | total_unique_users | log(x)−log(y)
total_tweets_replied | total_tweets | log(x)−log(y)
total_words | total_tweets | log(x)−log(y)
total_unique_words | total_words | log(x)−log(y)
total_characters | total_tweets | log(x)−log(y)
total_rt_count | total_tweets | log(x)−log(y)
total_fav_count | total_tweets | log(x)−log(y)
total_positive_sentiment | total_tweets | x/y
total_negative_sentiment | total_tweets | x/y
Table 14: List of features used for characterization and classification. The
“Normalization Method” column corresponds to the method used to normalize the
value of the first column using the value of the second column. For example,
the total number of retweets was normalized dividing it by the total number of
tweets, and then taking the logarithm. Zero values were replaced by 10−8.
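The normalization in Table 14 amounts to the small helper below (the \(10^{-8}\) replacement value comes from the caption; everything else is a direct restatement):

```
import math

EPS = 1e-8  # zero values are replaced by 10^-8, per the caption of Table 14

def log_ratio(x, y):
    # log(x) - log(y), used for most features in Table 14.
    return math.log(x if x else EPS) - math.log(y if y else EPS)

def ratio(x, y):
    # Plain x / y, used for the sentiment features.
    return x / (y if y else EPS)
```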
|
1509.08101 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 19606,
"num_imgs": 2,
"llama3_tokens_count": 6291
} | [
"content_image/1509.08101/x1.png",
"content_image/1509.08101/x2.png"
] | # Representation Benefits of Deep Feedforward Networks
Matus Telgarsky
###### Abstract
This note provides a family of classification problems, indexed by a positive integer \(k\), where all shallow networks with fewer than exponentially (in \(k\)) many nodes exhibit error at least \(1/6\), whereas a deep network with 2 nodes in each of \(2k\) layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated \(k\) times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.
## 1 Overview
A _neural network_ is a function whose evaluation is defined by a graph as follows. Root nodes compute \(x\mapsto\sigma(w_{0}+\left\langle w,x\right\rangle)\), where \(x\) is the input to the network, and \(\sigma:\mathbb{R}\to\mathbb{R}\) is typically a nonlinear function, for instance the ReLU (Rectified Linear Unit) \(\sigma_{\textsc{r}}(z)=\max\{0,z\}\). Internal nodes perform a similar computation, but now their input vector is the collective output of their parents. The choices of \(w_{0}\) and \(w\) may vary from node to node, and the possible set of functions obtained by varying these parameters gives the function class \(\mathfrak{N}(\sigma;m,l)\), which has \(l\) layers each with at most \(m\) nodes.
The representation power of \(\mathfrak{N}(\sigma;m,l)\) will be measured via the _classification error_\(\mathcal{R}_{\textup{z}}\). Namely, given a function \(f:\mathbb{R}^{d}\to\mathbb{R}\), let \(\tilde{f}:\mathbb{R}^{d}\to\{0,1\}\) denote the corresponding classifier \(\tilde{f}(x):=\mathds{1}[f(x)\geq 1/2]\), and additionally given a sequence of points \(((x_{i},y_{i}))_{i=1}^{n}\) with \(x_{i}\in\mathbb{R}^{d}\) and \(y_{i}\in\{0,1\}\), define \(\mathcal{R}_{\textup{z}}(f):=n^{-1}\sum_{i}\mathds{1}[\tilde{f}(x_{i})\neq y_{ i}]\).
**Theorem 1.1**.: _Let positive integer \(k\), number of layers \(l\), and number of nodes per layer \(m\) be given with \(m\leq 2^{(k-3)/l-1}\). Then there exists a collection of \(n:=2^{k}\) points \(((x_{i},y_{i}))_{i=1}^{n}\) with \(x_{i}\in[0,1]\) and \(y_{i}\in\{0,1\}\) such that_
\[\min_{f\in\mathfrak{N}(\sigma_{\textsc{r}};2,2k)}\mathcal{R}_{\textup{z}}(f)=0 \qquad\textup{and}\qquad\min_{g\in\mathfrak{N}(\sigma_{\textsc{r}};m,l)} \mathcal{R}_{\textup{z}}(g)\geq\frac{1}{6}.\]
For example, approaching the error of the \(2k\)-layer network (which has \(\mathcal{O}(k)\) nodes and weights) with \(2\) layers requires at least \(2^{(k-3)/2-1}\) nodes, and with \(\sqrt{k-3}\) layers needs at least \(2^{\sqrt{k-3}-1}\) nodes.
The purpose of this note is to provide an elementary proof of Theorem 1.1 and its refinement Theorem 1.2, which amongst other improvements will use a _recurrent neural network_ in the upper bound. Section 2 will present the proof, and Section 3 will tie these results to the literature on neural network expressive power and circuit complexity, which by contrast makes use of product nodes rather than standard feedforward networks when showing the benefits of depth.
### Refined bounds
<figure><img src="content_image/1509.08101/x1.png"><figcaption>Figure 1: The 3-ap.</figcaption></figure>
There are three refinements to make: the classification problem will be specified, the perfect network will be an even simpler _recurrent_ network, and \(\sigma\) need not be \(\sigma_{\textsc{r}}\).
Let \(n\)-ap (the _\(n\)-alternating-point_ problem) denote the set of \(n\) uniformly spaced points within \([0,1-2^{-n}]\) with alternating labels, as depicted in Figure 1; that is, the points \(((x_{i},y_{i}))_{i=1}^{n}\) with \(x_{i}=i2^{-n}\), and \(y_{i}=0\) when \(i\) is even, and otherwise \(y_{i}=1\). As the \(x\) values pass from left to right, the labels change as often as possible; the key is that adding a constant number of nodes in a flat network only corrects predictions on a constant number of points, whereas adding a constant number of nodes in a deep network can correct predictions on a constant _fraction_ of the points.
Let \(\mathfrak{R}(\sigma;m,l;k)\) denote \(k\) iterations of a _recurrent network_ with \(l\) layers of at most \(m\) nodes each, defined as follows. Every \(f\in\mathfrak{R}(\sigma;m,l;k)\) consists of some fixed network \(g\in\mathfrak{N}(\sigma;m,l)\) applied \(k\) times:
\[f=\underbrace{g\circ g\circ\cdots\circ g}_{k\ \text{times}}.\]
Consequently, \(\mathfrak{R}(\sigma;m,l;k)\subseteq\mathfrak{N}(\sigma;m,lk)\), but the former has \(\mathcal{O}(ml)\) parameters whereas the latter has \(\mathcal{O}(mlk)\) parameters.
Lastly, say that \(\sigma:\mathbb{R}\to\mathbb{R}\) is _\(t\)-sawtooth_ if it is piecewise affine with \(t\) pieces, meaning \(\mathbb{R}\) is partitioned into \(t\) consecutive intervals, and \(\sigma\) is affine within each interval. Consequently, \(\sigma_{\textsc{r}}\) is 2-sawtooth, but this class also includes many other functions, for instance the decision stumps used in boosting are 2-sawtooth, and decision trees with \(t-1\) nodes correspond to \(t\)-sawtooths.
**Theorem 1.2**.: _Let positive integer \(k\), number of layers \(l\), and number of nodes per layer \(m\) be given. Given a \(t\)-sawtooth \(\sigma:\mathbb{R}\to\mathbb{R}\) and \(n:=2^{k}\) points as specified by the \(n\)-ap, then_
\[\min_{f\in\mathfrak{R}(\sigma_{\textsc{r}};2,2;k)}\mathcal{R}_{\textup{z}}(f)= 0\qquad\textup{and}\qquad\min_{g\in\mathfrak{N}(\sigma;m,l)}\mathcal{R}_{ \textup{z}}(g)\geq\frac{n-4(tm)^{l}}{3n}.\]
This more refined result can thus say, for example, that on the \(2^{k}\)-ap one needs exponentially (in \(k\)) many parameters when boosting decision stumps, linearly many parameters with a deep network, and constantly many parameters with a recurrent network.
## 2 Analysis
This section will first prove the lower bound via a counting argument, simply tracking the number of times a function within \(\mathfrak{N}(\sigma;m,l)\) can cross 1/2. The upper bound will exhibit a network in \(\mathfrak{N}(\sigma_{\textsc{r}};2,2)\) which can be composed with itself \(k\) times to exactly fit the \(n\)-ap. These bounds together prove Theorem 1.2, which in turn implies Theorem 1.1.
### Lower bound
The lower bound is proved in two stages. First, composing and summing sawtooth functions must also yield a sawtooth function, thus elements of \(\mathfrak{N}(\sigma;m,l)\) are sawtooth whenever \(\sigma\) is. Secondly, a sawtooth function can not cross \(1/2\) very often, meaning it can’t hope to match the quickly changing labels of the \(n\)-ap.
To start, \(\mathfrak{N}(\sigma;m,l)\) is sawtooth as follows.
**Lemma 2.1**.: _If \(\sigma\) is \(t\)-sawtooth, then every \(f\in\mathfrak{N}(\sigma;m,l)\) with \(f:\mathbb{R}\to\mathbb{R}\) is \((tm)^{l}\)-sawtooth._
The proof is straightforward and deferred momentarily. The key observation is that adding together sawtooth functions grows the number of regions very slowly, whereas composition grows the number very quickly, an early sign of the benefits of depth.
Given a sawtooth function, its classification error on the \(n\)-ap may be lower bounded as follows.
**Lemma 2.2**.: _Let \(((x_{i},y_{i}))_{i=1}^{n}\) be given according to the \(n\)-ap. Then every \(t\)-sawtooth function \(f:\mathbb{R}\to\mathbb{R}\) satisfies \(\mathcal{R}_{\textup{z}}(f)\geq(n-4t)/(3n)\)._
Proof.: Recall the notation \(\tilde{f}(x):=\mathds{1}[f(x)\geq 1/2]\), whereby \(\mathcal{R}_{\textup{z}}(f):=n^{-1}\sum_{i}\mathds{1}[y_{i}\neq\tilde{f}(x_{i})]\). Since \(f\) is piecewise monotonic with a corresponding partition of \(\mathbb{R}\) having at most \(t\) pieces, \(f\) has at most \(2t-1\) crossings of 1/2: at most one within each interval of the partition, and at most 1 at the right endpoint of all but the last interval. Consequently, \(\tilde{f}\) is piecewise _constant_, where the corresponding partition of \(\mathbb{R}\) is into at most \(2t\) intervals. This means \(n\) points with alternating labels must land in \(2t\) buckets, thus the total number of points landing in buckets with at least three points is at least \(n-4t\). Since buckets are intervals and signs must alternate within any such interval, at least a third of the points in any of these buckets are labeled incorrectly by \(\tilde{f}\). ∎
To close, the proof of Lemma 2.1 proceeds as follows. First note how adding and composing sawtooths grows their complexity.
**Lemma 2.3**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) and \(g:\mathbb{R}\to\mathbb{R}\) be respectively \(k\)- and \(l\)-sawtooth. Then \(f+g\) is \((k+l)\)-sawtooth, and \(f\circ g\) is \(kl\)-sawtooth._
Proof of Lemma 2.3.: Let \(\mathcal{I}_{f}\) denote the partition of \(\mathbb{R}\) corresponding to \(f\), and \(\mathcal{I}_{g}\) denote the partition of \(\mathbb{R}\) corresponding to \(g\).
First consider \(f+g\), and moreover any intervals \(U_{f}\in\mathcal{I}_{f}\) and \(U_{g}\in\mathcal{I}_{g}\). Necessarily, \(f+g\) has a single slope along \(U_{f}\cap U_{g}\). Consequently, \(f+g\) is \(|\mathcal{I}|\)-sawtooth, where \(\mathcal{I}\) is the set of all intersections of intervals from \(\mathcal{I}_{f}\) and \(\mathcal{I}_{g}\), meaning \(\mathcal{I}:=\{U_{f}\cap U_{g}:U_{f}\in\mathcal{I}_{f},U_{g}\in\mathcal{I}_{g}\}\). By sorting the left endpoints of elements of \(\mathcal{I}_{f}\) and \(\mathcal{I}_{g}\), it follows that \(|\mathcal{I}|\leq k+l\) (the other intersections are empty).
Now consider \(f\circ g\), and in particular consider the image \(f(g(U_{g}))\) for some interval \(U_{g}\in\mathcal{I}_{g}\). \(g\) is affine with a single slope along \(U_{g}\), therefore \(f\) is being considered along a single unbroken interval \(g(U_{g})\). However, nothing prevents \(g(U_{g})\) from hitting all the elements of \(\mathcal{I}_{f}\); since \(U_{g}\) was arbitrary, it holds that \(f\circ g\) is \((|\mathcal{I}_{f}|\cdot|\mathcal{I}_{g}|)\)-sawtooth. ∎
The proof of Lemma 2.1 follows by induction over the layers of \(\mathfrak{N}(\sigma;m,l)\).
Proof of Lemma 2.1.: The proof proceeds by induction over layers, showing the output of each node in layer \(i\) is \((tm)^{i}\)-sawtooth as a function of the neural network input. For the first layer, each node starts by computing \(x\mapsto w_{0}+\left\langle w,x\right\rangle\), which is itself affine and thus 1-sawtooth, so the full node computation \(x\mapsto\sigma(w_{0}+\left\langle w,x\right\rangle)\) is \(t\)-sawtooth by Lemma 2.3. Thereafter, the input to layer \(i\) with \(i>1\) is a collection of functions \((g_{1},\ldots,g_{m^{\prime}})\) with \(m^{\prime}\leq m\) and \(g_{j}\) being \((tm)^{i-1}\)-sawtooth by the inductive hypothesis; consequently, \(x\mapsto w_{0}+\sum_{j}w_{j}g_{j}(x)\) is \(m(tm)^{i-1}\)-sawtooth by Lemma 2.3, whereby applying \(\sigma\) yields a \((tm)^{i}\)-sawtooth function (once again by Lemma 2.3). ∎
### Upper bound
<figure><img src="content_image/1509.08101/x2.png"><figcaption>Figure 2: fm, f2m, and f3m.</figcaption></figure>
Consider the _mirror map_ \(f_{\textup{m}}:\mathbb{R}\to\mathbb{R}\), depicted in Figure 2, and defined as
\[f_{\textup{m}}(x):=\begin{cases}2x&\textup{when $0\leq x\leq 1/2$},\\ 2(1-x)&\textup{when $1/2<x\leq 1$},\\ 0&\textup{otherwise}.\end{cases}\]
Note that \(f_{\textup{m}}\in\mathfrak{N}(\sigma_{\textsc{r}};2,2)\); for instance, \(f_{\textup{m}}(x)=\sigma_{\textsc{r}}(2\sigma_{\textsc{r}}(x)-4\sigma_{\textsc{r}}(x-1/2))\). The upper bounds will use \(f_{\textup{m}}^{k}\), the \(k\)-fold composition of \(f_{\textup{m}}\) with itself.
To assess the effect of the _post-composition_ \(f_{\textup{m}}\circ g\) for any \(g:\mathbb{R}\to\mathbb{R}\), note that \(f_{\textup{m}}\circ g\) is \(2g(x)\) whenever \(g(x)\in[0,1/2]\), and \(2(1-g(x))\) whenever \(g(x)\in(1/2,1]\). Visually, this has the effect of reflecting (or folding) the graph of \(g\) around the horizontal line through 1/2 and then rescaling by 2. Applying this reasoning to \(f_{\textup{m}}^{k}\) leads to \(f_{\textup{m}}^{2}\) and \(f_{\textup{m}}^{3}\) in Figure 2, whose peaks and troughs match the \(2^{2}\)-ap and \(2^{3}\)-ap, and moreover have the form of piecewise affine approximations to sinusoids; indeed, it was suggested before, by Bengio and LeCun (2007), that Fourier transforms are efficiently represented with deep networks.
These compositions may be written as follows.
**Lemma 2.4**.: _Let real \(x\in[0,1]\) and positive integer \(k\) be given, and choose the unique nonnegative integer \(i_{k}\in\{0,\ldots,2^{k-1}\}\) and real \(x_{k}\in[0,1)\) so that \(x=(i_{k}+x_{k})2^{1-k}\). Then_
\[f_{\textup{m}}^{k}(x)=\begin{cases}2x_{k}&\textup{when $0\leq x_{k}\leq 1/2$}, \\ 2(1-x_{k})&\textup{when $1/2<x_{k}<1$}.\end{cases}\]
In order to prove this form and develop a better understanding of \(f_{\textup{m}}\), consider its _pre-composition_ behavior \(g\circ f_{\textup{m}}\) for any \(g:\mathbb{R}\to\mathbb{R}\). Now, \((g\circ f_{\textup{m}})(x)=g(2x)\) whenever \(x\in[0,1/2]\), but \((g\circ f_{\textup{m}})(x)=g(2-2x)\) when \(x\in(1/2,1]\); whereas post-composition reflects around the horizontal line at 1/2 and then scales vertically by 2, pre-composition first scales horizontally by \(1/2\) and then reflects around the vertical line at 1/2, providing a condensed mirror image and motivating the name _mirror map_.
Proof of Section 2.2.: The proof proceeds by induction on the number of compositions \(l\). When \(l=1\), there is nothing to show. For the inductive step, the mirroring property of pre-composition with \(f_{\textup{m}}\) combined with the symmetry of \(f_{\textup{m}}^{l}\) (by the inductive hypothesis) implies that every \(x\in[0,1/2]\) satisfies
\[(f_{\textup{m}}^{l}\circ f_{\textup{m}})(x)=(f_{\textup{m}}^{l}\circ f_{\textup{m}})(1-x)=(f_{\textup{m}}^{l}\circ f_{\textup{m}})(x+1/2).\]
Consequently, it suffices to consider \(x\in[0,1/2]\), which by the mirroring property means \((f_{\textup{m}}^{l}\circ f_{\textup{m}})(x)=f_{\textup{m}}^{l}(2x)\). Since the unique nonnegative integer \(i_{l+1}\) and real \(x_{l+1}\in[0,1)\) satisfy \(x=(i_{l+1}+x_{l+1})2^{-l}\), and hence \(2x=(i_{l+1}+x_{l+1})2^{1-l}\), the inductive hypothesis applied to \(2x\) grants
\[(f_{\textup{m}}^{l}\circ f_{\textup{m}})(x)=f_{\textup{m}}^{l}(2x)=\begin{cases}2x_{l+1}&\textup{when $0\leq x_{l+1}\leq 1/2$},\\ 2(1-x_{l+1})&\textup{when $1/2<x_{l+1}<1$},\end{cases}\]
which completes the proof. ∎
Before closing this subsection, it is interesting to view \(f_{\textup{m}}^{k}\) in one more way, namely its effect on \(((x_{i},y_{i}))_{i=1}^{n}\) provided by the \(n\)-ap with \(n:=2^{k}\). Observe that \(((f_{\textup{m}}(x_{i}),y_{i}))_{i=1}^{n}\) is an \((n/2)\)-ap with all points duplicated except \(x_{1}=0\), and an additional point with \(x\)-coordinate \(1\).
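The closed form above is easy to test against the literal \(k\)-fold composition; the sketch below (illustrative only, with ad hoc helper names) does so on a dyadic grid, using the fact that the case split in the lemma is the triangle wave \(2\min(x_{k},1-x_{k})\) of the fractional part \(x_{k}\) of \(x\cdot 2^{k-1}\):

```python
import numpy as np

def f_m(x):
    """Mirror map on [0, 1] (zero outside)."""
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 0.5), 2 * x,
           np.where((x > 0.5) & (x <= 1), 2 * (1 - x), 0.0))

def f_m_iterated(x, k):
    y = np.asarray(x, dtype=float)
    for _ in range(k):
        y = f_m(y)
    return y

def f_m_closed_form(x, k):
    """x = (i_k + x_k) 2^(1-k) with x_k in [0, 1); the two cases of the lemma
    collapse to 2 * min(x_k, 1 - x_k)."""
    x_k = (np.asarray(x, dtype=float) * 2.0 ** (k - 1)) % 1.0
    return 2.0 * np.minimum(x_k, 1.0 - x_k)

x = np.linspace(0.0, 1.0, 4097)          # dyadic grid, exact in floating point
for k in range(1, 8):
    assert np.allclose(f_m_iterated(x, k), f_m_closed_form(x, k))
print("closed form agrees with the iterated composition for k = 1, ..., 7")
```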
### Proof of Theorems 1.1 and 1.2
It suffices to prove Theorem 1.2, which yields Theorem 1.1 since \(\sigma_{\textsc{r}}\) is 2-sawtooth, whereby the condition \(m\leq 2^{(k-3)/l-1}\) implies
\[\frac{n-4(2m)^{l}}{3n}=\frac{1}{3}-(2m)^{l}2^{-k}\left(\frac{4}{3}\right)\geq \frac{1}{3}-2^{k-3}2^{-k}\left(\frac{4}{3}\right)=\frac{1}{3}-\frac{1}{6},\]
and the upper bound transfers since \(\mathfrak{R}(\sigma_{\textsc{r}};2,2;k)\subseteq\mathfrak{N}(\sigma_{\textsc{r }};2,2k)\).
Continuing with Theorem 1.2, any \(f\in\mathfrak{N}(\sigma;m,l)\) is \((tm)^{l}\)-sawtooth by Section 2.1, whereby Section 2.1 gives the lower bound. For the upper bound, note that \(f_{\textup{m}}^{k}\in\mathfrak{R}(\sigma_{\textsc{r}};2,2;k)\) by construction, and moreover \(f_{\textup{m}}^{k}(x_{i})=\tilde{f_{\textup{m}}^{k}}(x_{i})=y_{i}\) on every \((x_{i},y_{i})\) in the \(n\)-ap by Section 2.2.
## 3 Related work
The standard classical result on the representation power of neural networks is due to Cybenko (1989), who proved that neural networks can approximate continuous functions over \([0,1]^{d}\) arbitrarily well. This result, however, is for flat (single-hidden-layer) networks.
An early result showing the benefits of depth is due to Håstad (1986), who established, via an incredible proof, that boolean circuits consisting only of AND gates and OR gates require exponential size in order to approximate the parity function well. These gates correspond to multiplication and addition over the boolean domain, and moreover the parity function is the Fourier basis over the boolean domain; as mentioned above, \(f_{\textup{m}}^{k}\) as used here is a piecewise affine approximation of a Fourier basis, and it was suggested previously by Bengio and LeCun (2007) that Fourier transforms admit efficient representations with deep networks. Lastly, note that Håstad (1986)’s work has one of the same weaknesses as the present result, namely that it only controls a countable family of functions which is in no sense dense.
More generally, networks consisting of sum and product nodes, but now over the reals, have been studied in the machine learning literature, where it was shown by Bengio and Delalleau (2011) that again there is an exponential benefit to depth. While this result was again for a countable class of functions, more recent work by Cohen et al. (2015) aims to give a broader characterization.
Still on the topic of representation results, there is a far more classical result which deserves mention. Namely, the surreal result of Kolmogorov (1957) states that a continuous function \(f:[0,1]^{d}\to\mathbb{R}\) can be _exactly_ represented by a network with \(\mathcal{O}(d^{2})\) nodes in 3 layers; this network needs multiple distinct nonlinearities and therefore is not an element of \(\mathfrak{N}\left(\sigma;\mathcal{O}(d^{2}),3\right)\) for a fixed \(\sigma\); however, one can treat these specialized nonlinearities as goalposts for other representation results. Indeed, similarly to the \(f_{\textup{m}}^{k}\) used here, Kolmogorov (1957)’s nonlinearities have fractal structure.
Lastly, while this note was only concerned with finite sets of points, it is worthwhile to mention the relevance of representation power to statistical questions. Namely, by the seminal result of Anthony and Bartlett (1999, Theorem 8.14), the VC dimension of \(\mathfrak{N}(\sigma_{\textsc{r}};m,l)\) is at most \(\mathcal{O}(m^{8}l^{2})\), indicating that these exponential representation benefits directly translate into statistical savings. Interestingly, note that \(f_{\textup{m}}^{k}\) has an exponentially large Lipschitz constant (exactly \(2^{k}\)), and thus an elementary statistical analysis via Lipschitz constants and Rademacher complexity (Bartlett and Mendelson, 2002) can inadvertently erase the benefits of depth as presented here.
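To make the last observation concrete, a tiny sketch (illustrative only) measures the empirical Lipschitz constant of \(f_{\textup{m}}^{k}\) on a dyadic grid finer than its \(2^{-k}\)-wide affine pieces; it matches \(2^{k}\) up to floating-point error:

```python
import numpy as np

def f_m(x):
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 0.5), 2 * x,
           np.where((x > 0.5) & (x <= 1), 2 * (1 - x), 0.0))

x = np.linspace(0.0, 1.0, 2 ** 16 + 1)    # spacing 2^-16, breakpoints lie on the grid
y = x.copy()
for k in range(1, 9):
    y = f_m(y)                             # y now holds f_m^k on the grid
    lip = np.max(np.abs(np.diff(y)) / np.diff(x))
    print(f"k = {k}:  empirical Lipschitz constant = {lip:.0f},  2^k = {2 ** k}")
```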
## References
* Anthony and Bartlett (1999) Martin Anthony and Peter L. Bartlett. _Neural Network Learning: Theoretical Foundations_. Cambridge University Press, 1999.
* Bartlett and Mendelson (2002) Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. _JMLR_, 3:463–482, Nov 2002.
* Bengio and Delalleau (2011) Yoshua Bengio and Olivier Delalleau. Shallow vs. deep sum-product networks. In _NIPS_, 2011.
* Bengio and LeCun (2007) Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. In Léon Bottou, Olivier Chapelle, D. DeCoste, and J. Weston, editors, _Large Scale Kernel Machines_. MIT Press, 2007.
* Cohen et al. (2015) Nadav Cohen, Or Sharir, and Amnon Shashua. On the expressive power of deep learning: A tensor analysis. 2015. arXiv:1509.05009 [cs.NE].
* Cybenko (1989) George Cybenko. Approximation by superpositions of a sigmoidal function. _Mathematics of Control, Signals and Systems_, 2(4):303–314, 1989.
* Håstad (1986) Johan Håstad. _Computational Limitations of Small Depth Circuits_. PhD thesis, Massachusetts Institute of Technology, 1986.
* Kolmogorov (1957) Andrey Nikolaevich Kolmogorov. On the representation of continuous functions of several variables by superpositions of continuous functions of one variable and addition. _Doklady Akademii Nauk SSSR_, 114:953–956, 1957.
|
1312.5094 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 19438,
"num_imgs": 4,
"llama3_tokens_count": 5976
} | [
"content_image/1312.5094/x1.png",
"content_image/1312.5094/x2.png",
"content_image/1312.5094/x3.png",
"content_image/1312.5094/x4.png"
] | # Anomalous Quark Chromomagnetic Moment and Dynamics of Elastic Scattering
Nikolai Kochelev¹ and Nikolai Korchagin²
_Bogoliubov Laboratory of Theoretical Physics, Institute for Nuclear Research,_
_Dubna, Moscow region, 141980 Russia_
**Abstract**
We estimate the contribution of nonperturbative quark-gluon chromomagnetic interaction to the high energy elastic proton-proton cross section at large momentum transfer. It is shown that this contribution is very large in the accessible kinematic region of the present experiments. We argue that the Odderon, which is the \(P=C=-1\) partner of the Pomeron, is governed by the spin-flip component related to the nonperturbative three-gluon exchange induced by the anomalous quark chromomagnetic moment. We discuss the possible spin effects in the elastic proton-proton and proton-antiproton scattering coming from the interference of spin-flip nonperturbative Odderon and nonspin-flip Pomeron exchanges.
## 1 Introduction
High energy elastic proton-proton and proton-antiproton cross sections reveal very complicated dynamics which is rather difficult to explain within the framework of Quantum Chromodynamics (QCD) (see the discussion in [1, 2, 3, 4, 5, 6, 7, 8, 9]). In a conventional approach, at small transfer momentum, experimental data can be described quite well by diffractive scattering induced by Pomeron exchange between hadrons. At large \(-t\gg 1\) GeV\({}^{2}\), in the popular Donnachie-Landshoff (DL) model the dominant contribution comes from the exchange of the Odderon, which is the \(P=C=-1\) partner of the Pomeron. It was suggested that this effective exchange originates from perturbative three-gluon exchange in proton-proton and proton-antiproton scattering [10]. The experimental support for the existence of such an exchange comes from high energy ISR data on the difference in the dip structure around \(\mid t\mid\approx 1.4\) GeV\({}^{2}\) in the proton-proton and proton-antiproton differential cross sections at \(\sqrt{s}=53\) GeV [11]. However, there is no signal for the Odderon at very small transfer momentum. We would like to emphasize that one cannot expect the perturbative QCD DL approach to be valid even at the largest transfer momentum \(-t\sim 14\) GeV\({}^{2}\) accessible at ISR energies. This is related to the fact that in the three-gluon exchange model, which is applied to describe elastic cross sections in the interval \(-t=3-14\) GeV\({}^{2}\), the average virtuality of the exchanged gluons, \(\hat{t}\approx t/9\), is quite small, \(-\hat{t}=0.3-1.6\) GeV\({}^{2}\). Therefore, in this kinematic region nonperturbative QCD effects should be taken into account.
The attempt to include some of the nonperturbative effects into the DL model was made in [12]. In that paper the strength of three-gluon exchange with perturbative quark-gluon vertices was considered as a free parameter and its value was found from the fit of the data. Therefore, a good description of the large \(-t\) cross sections in the paper is not the result of calculation but rather of the fine tuning to experimental data.
One of the successful models of nonperturbative effects is the instanton liquid model for the QCD vacuum [13, 14]. Instantons describe nontrivial topological gluon field excitations in the vacuum, and their existence leads to spontaneous chiral symmetry breaking in QCD. One of the manifestations of this phenomenon is the appearance of a dynamical quark mass and a nonperturbative helicity-flip quark-gluon interaction [15, 14]. Such a new interaction can be treated as a nonperturbative anomalous quark chromomagnetic moment (AQCM). It was shown that the AQCM gives a very important contribution to quark-quark scattering at large energies in both the polarized and unpolarized cases [15, 14, 16, 17, 18]. One of the applications of these results is a new model for the Pomeron based on the AQCM and nonperturbative two-gluon exchange between hadrons, suggested in [14, 17].
In this paper, we extend this model to the case of the three gluon colorless exchange between nucleons. It will be shown that a nonperturbative version of the Donnachie-Landshoff Odderon model based on AQCM describes well high energy data for the elastic proton-proton, proton-antiproton cross sections at large transfer momentum. The spin effects in elastic scattering are also under discussion.
## 2 Anomalous quark chromomagnetic moment and Odderon exchange
The interaction vertex of a massive quark with a gluon can be written in the following form:
\[V_{\mu}(k_{1}^{2},k_{2}^{2},q^{2})t^{a}=-g_{s}t^{a}[\gamma_{\mu}F_{1}(k_{1}^{2 },k_{2}^{2},q^{2})-\frac{\sigma_{\mu\nu}q_{\nu}}{2M_{q}}F_{2}(k_{1}^{2},k_{2}^ {2},q^{2})],\] (1)
where the form factors \(F_{1,2}\) describe nonlocality of the interaction, \(k_{1,2}\) is the momentum of incoming and outgoing quarks, respectively, \(q=k_{1}-k_{2}\), \(M_{q}\) is the quark mass, and \(\sigma_{\mu\nu}=(\gamma_{\mu}\gamma_{\nu}-\gamma_{\nu}\gamma_{\mu})/2\). Within the instanton model the shape of the form factor \(F_{2}(k_{1}^{2},k_{2}^{2},q^{2})\) is
\[F_{2}(k_{1}^{2},k_{2}^{2},q^{2})=\mu_{a}\Phi_{q}(\mid k_{1}\mid\rho/2)\Phi_{q} (\mid k_{2}\mid\rho/2)F_{g}(\mid q\mid\rho)\ ,\] (2)
where
\[\Phi_{q}(z) = -z\frac{d}{dz}(I_{0}(z)K_{0}(z)-I_{1}(z)K_{1}(z)),\]
\[F_{g}(z) = \frac{4}{z^{2}}-2K_{2}(z)\] (3)
are the Fourier-transformed quark zero-mode and instanton fields, respectively, \(I_{\nu}(z)\) and \(K_{\nu}(z)\) are the modified Bessel functions and \(\rho\) is the instanton size.
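For orientation, the following sketch (not from the paper; it assumes SciPy and evaluates the derivative in \(\Phi_{q}\) by a central finite difference rather than via Bessel-function identities) tabulates the two form factors and shows that both approach 1 as \(z\to 0\), so that \(F_{2}(0,0,0)=\mu_{a}\), consistent with the definition of the AQCM below:

```python
import numpy as np
from scipy.special import iv, kv   # modified Bessel functions I_nu and K_nu

def phi_q(z, h=1e-6):
    """Quark zero-mode form factor Phi_q(z) = -z d/dz [I0 K0 - I1 K1](z),
    with the derivative taken numerically."""
    g = lambda s: iv(0, s) * kv(0, s) - iv(1, s) * kv(1, s)
    return -z * (g(z + h) - g(z - h)) / (2 * h)

def F_g(z):
    """Instanton-field form factor F_g(z) = 4/z^2 - 2 K_2(z)."""
    return 4.0 / z**2 - 2.0 * kv(2, z)

# both form factors are normalised to 1 at zero momentum
for z in (1e-3, 0.5, 1.0, 2.0, 4.0):
    print(f"z = {z:5.3f}   Phi_q(z) = {phi_q(z):7.4f}   F_g(z) = {F_g(z):7.4f}")
```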
The AQCM is defined by the formula
\[\mu_{a}=F_{2}(0,0,0).\]
For our estimates below we will use the AQCM value \(\mu_{a}=-1\), which is within the interval \(-\mu_{a}\sim 0.4-1.6\) given by the instanton model [17]. This value is also supported by hadron spectroscopy (see [19] and references therein). Recently, a similar value of the AQCM was also obtained within the Dyson-Schwinger equation approach with nonperturbative quark and gluon propagators [20]. In Fig. 1, the Donnachie-Landshoff perturbative QCD (pQCD) and the nonperturbative AQCM-induced three-gluon exchange between two nucleons are presented.
<figure><img src="content_image/1312.5094/x1.png"><figcaption>Figure 1: The left panel is the Donnachie-Landshoff mechanism for the large −t proton-proton scattering. The right panels are an example of the AQCM contribution induced by the second term in Eq. 1.</figcaption></figure>
Within the DL model the differential cross-section of the proton-proton and proton-antiproton scattering is given by the formula
\[\frac{d\sigma}{dt}\approx\frac{244P^{4}}{s^{6}t^{2}R^{12}}\mid M_{qq}(\theta) \mid^{6}\] (4)
where \(M_{qq}\) is the matrix element at the quark level, \(\theta\) is the scattering angle in the center of mass, \(P\) is the probability of the three quark configuration in a proton, and \(R\) is the proton radius. In the pQCD DL approach at the quark level
\[\mid M^{pQCD}_{qq}(\theta)\mid^{2}=\frac{128\pi^{2}\alpha_{s}^{2}}{9}\frac{ \hat{s}^{2}}{\hat{t}^{2}},\] (5)
where \(\hat{s}\approx s/9\) and, at \(\hat{s}\gg-\hat{t}\), \(\hat{t}/\hat{s}\sim-\sin^{2}\theta/4\); the following values of the parameters were taken _ad hoc_:
\[P=1/10,\ \ \alpha_{s}=0.3,\ \ R=0.3fm.\] (6)
We should emphasize that DL assumed a very small proton radius, far from the real proton size \(R\approx 1\) fm. For the more suitable values \(P=1\) and \(R=1\) fm, we obtain \(d\sigma/dt\sim 8\cdot 10^{-4}/t^{8}\) mb/GeV\({}^{2}\). This is about two orders of magnitude below the high energy data, \(d\sigma/dt\approx 9\cdot 10^{-2}/t^{8}\) mb/GeV\({}^{2}\) at large \(-t\), Fig. 2.
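This order-of-magnitude estimate is easy to reproduce. The sketch below is illustrative only: it uses the standard conversion constants \(\hbar c=0.19733\) GeV fm and \((\hbar c)^{2}=0.3894\) GeV\(^{2}\) mb, takes everything else from Eqs. 4-6, and evaluates the coefficient of \(1/t^{8}\); small deviations from the numbers quoted above reflect rounding of the input constants.

```python
import numpy as np

HBARC = 0.19733      # GeV * fm
GEV2_MB = 0.3894     # (hbar c)^2 in GeV^2 * mb; converts GeV^-2 into mb

def dl_coefficient(P, R_fm, alpha_s=0.3):
    """Coefficient C in d sigma/dt = C / t^8 (mb/GeV^2) for the pQCD DL
    estimate of Eqs. (4)-(5).  The s-dependence cancels because
    |M_qq|^2 ~ s_hat^2 / t_hat^2 with s_hat/t_hat = s/t."""
    R = R_fm / HBARC                          # proton radius in GeV^-1
    amp2 = 128 * np.pi**2 * alpha_s**2 / 9    # |M_qq|^2 stripped of s_hat^2/t_hat^2
    return 244 * P**4 * amp2**3 / R**12 * GEV2_MB

print(f"P = 1,   R = 1 fm   : {dl_coefficient(1.0, 1.0):.1e} / t^8  mb/GeV^2")
print(f"P = 0.1, R = 0.3 fm : {dl_coefficient(0.1, 0.3):.1e} / t^8  mb/GeV^2")
```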
<figure><img src="content_image/1312.5094/x2.png"><figcaption>Figure 2: The contribution of pQCD exchange (dashed line) and AQCM contribution (solid line) to the elastic proton-proton scattering at large energy and large momentum transfer in comparison with data [21].</figcaption></figure>
For the AQCM contribution at the quark level we have
\[\mid M^{AQCM}_{qq}(\hat{s},\hat{t})\mid^{2} = \frac{16\pi^{3}}{3}\alpha_{s}(\mid\hat{t}\mid)\mid\mu_{a}\mid\rho _{c}^{2}F_{g}^{2}(\sqrt{\mid\hat{t}\mid}\rho_{c})\frac{\hat{s}^{2}}{\mid\hat{t }\mid}+\frac{\pi^{4}}{2}\mu_{a}^{2}\rho_{c}^{4}F_{g}^{4}(\sqrt{\mid\hat{t}\mid }\rho_{c})\hat{s}^{2}.\] (7)
For estimation, we use \(R=1\) fm, \(P=1\)¹, dynamical quark mass \(M_{q}=280\) MeV, average instanton size \(\rho_{c}=1/3\) fm and the strong coupling constant
\[\alpha_{s}(q^{2})=\frac{4\pi}{9\ln((q^{2}+m_{g}^{2})/\Lambda_{QCD}^{2})},\] (8)
with \(\Lambda_{QCD}=0.280\) GeV and \(m_{g}=0.88\) GeV [17]. To obtain Eq. 7, the approximation \(F_{1}(k_{1}^{2},k_{2}^{2},q^{2})\approx 1\) was used and the nonzero virtuality of the quarks in the proton was neglected. The final result for the AQCM contribution to the proton-proton and proton-antiproton cross section is shown by the solid line in Fig. 2. We should mention that the AQCM contribution asymptotically decays as \(1/t^{11}\) due to the form factor in Eq. 3. Therefore, at asymptotically large transfer momentum the perturbative \(1/t^{8}\) behaviour should give the dominant contribution. However, in the kinematic region accessible in present experiments, \(-t\leq 14\) GeV\({}^{2}\), the nonperturbative AQCM contribution describes the available large \(-t\) data very well, Fig. 2.
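To make the comparison explicit, the sketch below (illustrative only; it adopts \(\mu_{a}=-1\), \(\rho_{c}=1/3\) fm, the coupling of Eq. 8, and strips the common factor \(\hat{s}^{2}\)) evaluates the ratio of the AQCM amplitude of Eq. 7 to the pQCD amplitude of Eq. 5 at a few gluon virtualities in the range \(-\hat{t}=0.3\)-\(1.6\) GeV\(^{2}\) discussed in the introduction:

```python
import numpy as np
from scipy.special import kv

HBARC = 0.19733                      # GeV * fm
MU_A = -1.0                          # AQCM value adopted in the text
RHO_C = (1.0 / 3.0) / HBARC          # average instanton size, 1/3 fm, in GeV^-1
LAMBDA_QCD, M_G = 0.280, 0.88        # parameters of alpha_s in Eq. (8), GeV

def F_g(z):
    return 4.0 / z**2 - 2.0 * kv(2, z)

def alpha_s(q2):
    """Frozen one-loop coupling of Eq. (8)."""
    return 4 * np.pi / (9 * np.log((q2 + M_G**2) / LAMBDA_QCD**2))

def m2_aqcm(that):
    """|M_qq^AQCM|^2 / s_hat^2 from Eq. (7); `that` stands for |t_hat| in GeV^2."""
    z = np.sqrt(that) * RHO_C
    term1 = (16 * np.pi**3 / 3) * alpha_s(that) * abs(MU_A) * RHO_C**2 * F_g(z)**2 / that
    term2 = (np.pi**4 / 2) * MU_A**2 * RHO_C**4 * F_g(z)**4
    return term1 + term2

def m2_pqcd(that, a_s=0.3):
    """|M_qq^pQCD|^2 / s_hat^2 from Eq. (5), with the fixed alpha_s of Eq. (6)."""
    return 128 * np.pi**2 * a_s**2 / (9 * that**2)

for that in (0.3, 0.8, 1.6):
    print(f"|t_hat| = {that:.1f} GeV^2 :  |M_AQCM|^2 / |M_pQCD|^2 = "
          f"{m2_aqcm(that) / m2_pqcd(that):.1f}")
```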
<figure><img src="content_image/1312.5094/x3.png"><figcaption>Figure 3: The interference between a) DL-type AQCM diagram and b) Pomeron spin-flip induced by AQCM.</figcaption></figure>
Finally, part of the difference in the structure of the dip around \(-t\approx 1-2\) GeV\({}^{2}\) between proton-proton and proton-antiproton elastic scattering at ISR energies might be related to the difference in the sign of the interference between the AQCM Odderon and Pomeron spin-flip amplitudes, Fig. 3.
In our approach the spin-flip component, which is proportional to \(t\), gives the dominating contribution to the negative charge parity Odderon amplitude. In the region of small transfer momentum this contribution to the amplitude of the \(PP\) and \(P\bar{P}\) scattering has the dependence
\[M\sim\frac{\sqrt{-t}}{(m_{g}^{2}-t)^{3}},\] (9)
due to quark spin-flip induced by AQCM. In Eq.9, \(m_{g}\approx 0.4\) GeV is the dynamical gluon mass [22]. Therefore, the difference in the \(PP\) and \(P\bar{P}\) differential cross sections at small \(-t\) and the difference in the total \(PP\) and \(P\bar{P}\) cross sections should be very small at high energies. This is in agreement with experimental data.
Of course, one can describe the \(PP\) and \(\bar{P}P\) data at large \(-t>3.5\) GeV\({}^{2}\) by assuming a specific \(t\) dependence of the Pomeron trajectory (see, for example, [23]). In any case, however, it is necessary to introduce an additional \(C=-1\) exchange with a high intercept to describe the difference in the \(PP\) and \(\bar{P}P\) elastic cross sections at \(\sqrt{s}=53\) GeV. A natural candidate for such an exchange is the nonperturbative three-gluon DL-type exchange. We would like to mention that a sizable contribution from the conventional Pomeron exchange at large \(-t>3.5\) GeV\({}^{2}\) is not expected due to the huge suppression factor at large energies, \((s/s_{0})^{2\alpha^{\prime}_{P}t}\), which comes from the nonzero slope of the Pomeron trajectory, \(\alpha^{\prime}_{P}\approx 0.25\) GeV\({}^{-2}\).
In the estimate above we assume, as in the DL model, that the momenta of the exchanged gluons are approximately equal. The justification of this assumption is quite clear: to keep the proton as a bound state of three quarks at large transfer momentum, all quarks in the proton should scatter through approximately the same angle. In fact, one can also consider more complicated multigluon contributions to elastic scattering, but we believe that such contributions will be suppressed either by additional factors of \(\alpha_{s}\) or by extra factors of \(1/t^{n}\) coming from the gluon propagators and/or from the form factors in the quark-gluon vertices.
## 3 Single-spin asymmetry \(A_{n}\) in \(Pp\) and \(P\bar{P}\) elastic scattering
One of the long-standing problems of QCD is the understanding of the large spin effects observed in different high energy reactions [1, 24]. Recently, we have shown that the AQCM contribution leads to a very large single-spin asymmetry (SSA) in quark-quark scattering [16] and can therefore be considered as a fundamental mechanism for explaining the anomalously large SSA observed in various inclusive and exclusive reactions at high energy. In elastic scattering, a large SSA was found in proton-proton scattering at AGS energies at large transfer momentum, Fig. 4.
<figure><img src="content_image/1312.5094/x4.png"><figcaption>Figure 4: Single-spin asymmetry in the elastic PP→PP scattering at large momentum transfer at AGS [25].</figcaption></figure>
In the basis of the c.m. helicity amplitudes, the SSA is given by the formula
\[A_{N}=-\frac{2Im[\Phi_{5}^{*}(\Phi_{1}+\Phi_{2}+\Phi_{3}-\Phi_{4})]}{\mid\Phi_ {1}\mid^{2}+\mid\Phi_{2}\mid^{2}+\mid\Phi_{3}\mid^{2}+\mid\Phi_{4}\mid^{2}+4 \mid\Phi_{5}\mid^{2}},\] (10)
where the helicity amplitudes are \(\Phi_{1}=<++\mid++>\), \(\Phi_{2}=<++\mid-->\), \(\Phi_{3}=<+-\mid+->\), \(\Phi_{4}=<+-\mid-+>\) and \(\Phi_{5}=<++\mid-+>\). It is evident that, due to the negative charge parity of the Odderon contribution, the helicity-flip amplitude \(\Phi_{5}\) should have a different sign for proton-proton and proton-antiproton scattering. Therefore, the SSA in elastic proton-antiproton scattering flips sign in comparison with proton-proton scattering. This prediction can be tested by the PAX Collaboration at HESR [26]. Due to the dominance of spin-one \(t\)-channel gluon exchanges in the structure of the Pomeron and the Odderon, we can also expect that the single-spin asymmetry at large \(-t\) should have a weak energy dependence. This prediction can be checked in polarized proton-proton elastic scattering in the pp2pp experiment at RHIC if its kinematics is extended to the large transfer momentum region [29]². However, the calculation of the absolute value of the SSA in elastic \(PP\) and \(P\bar{P}\) scattering at large transfer momenta is a very difficult task, because one needs to know the spin-flip and non-spin-flip components of both the Odderon and Pomeron exchanges. Furthermore, in the region of small transfer momenta and low energies, the effects of secondary Reggeon exchanges need to be included as well.
## 4 Conclusion
In summary, it is shown that the nonperturbative anomalous quark-gluon vertex gives a large contribution to elastic proton-proton and proton-antiproton scattering at large momentum transfer. One can treat the three-gluon exchange induced by this vertex as an effective Odderon exchange with spin-flip dominance in its amplitude. We should mention that the anomalous quark chromomagnetic moment is proportional to \(1/\alpha_{s}\) [15]. Therefore, the non-spin-flip component of the Odderon due to the perturbative vertex should be suppressed by a factor of \(\alpha_{s}\). We argue that the strong spin dependence of the Odderon amplitude might lead to large spin effects in proton-proton and proton-antiproton scattering at large momentum transfer.
Our approach is based on the existence of two quite different scales in hadron physics. One of them is related to the confinement radius \(R\approx 1\) fm and is also consistent with the average distance between an instanton and an anti-instanton within the instanton liquid model, \(R_{I\bar{I}}\approx 1\) fm [13, 14]. This scale is responsible for diffractive-type scattering at small momentum transfer. The other is fixed by the scale of spontaneous chiral symmetry breaking. Within the instanton model it is given by the average instanton size in the QCD vacuum, \(\rho_{c}\approx 1/3\) fm. This scale leads to the appearance of a large dynamical quark mass and a large anomalous quark chromomagnetic moment and is responsible for the dynamics of hadron-hadron elastic scattering at large momentum transfer. We would like to mention that the two-scale model for hadron structure was discussed in various aspects in [27, 28].
## 5 Acknowledgments
The authors are very grateful to A.E. Dorokhov, A.V. Efremov, S. V. Goloskokov, E.A. Kuraev, N.N. Nikolaev, L.N. Lipatov, O.V. Selyugin and V.V. Uzhinsky for useful discussions. The work of N.K. was supported in part by a visiting scientist grant from the University of Valencia and by the MEST of the Korean Government (Brain Pool Program No. 121S-1-3-0318). We also acknowledge that this work was initiated through the series of APCTP-BLTP JINR Joint Workshops.
## References
* [1] A. D. Krisch, arXiv:1001.0790 [hep-ex].
* [2] R. Fiore, L. L. Jenkovszky, R. Orava, E. Predazzi, A. Prokudin and O. Selyugin, Int. J. Mod. Phys. A **24**, 2551 (2009) [arXiv:0810.2902 [hep-ph]].
* [3] I. M. Dremin, arXiv:1311.4159 [hep-ph].
* [4] V. Uzhinsky and A. Galoyan, arXiv:1111.4984 [hep-ph].
* [5] C. Bourrely, J. M. Myers, J. Soffer and T. T. Wu, Phys. Rev. D **85**, 096009 (2012) [arXiv:1202.3611 [hep-ph]].
* [6] O. V. Selyugin, arXiv:1303.5553 [hep-ph].
* [7] E. Martynov, Phys. Rev. D **87**, 114018 (2013) [arXiv:1305.3093 [hep-ph]].
* [8] A. Donnachie and P. V. Landshoff, Phys. Lett. B **727**, 500 (2013) [arXiv:1309.1292 [hep-ph]].
* [9] V. A. Khoze, A. D. Martin and M. G. Ryskin, arXiv:1312.3851 [hep-ph].
* [10] A. Donnachie and P. V. Landshoff, Z. Phys. C **2**, 55 (1979) [Erratum-ibid. C **2**, 372 (1979)].
* [11] A. Donnachie and P. V. Landshoff, Nucl. Phys. B **231**, 189 (1984); A. Donnachie and P. V. Landshoff, Nucl. Phys. B **267**, 690 (1986).
* [12] A. Donnachie and P. V. Landshoff, Phys. Lett. B **387**, 637 (1996) [hep-ph/9607377].
* [13] T. Schäfer and E.V. Shuryak, Rev. Mod. Phys. **70**, 1323 (1998).
* [14] D. Diakonov, Prog. Par. Nucl. Phys. **51**, 173 (2003).
* [15] N. I. Kochelev, Phys. Lett. B **426**, 149 (1998) [hep-ph/9610551].
* [16] N. Kochelev and N. Korchagin, Phys. Lett. B **729**, 117 (2014) arXiv:1308.4857 [hep-ph].
* [17] N. Kochelev, Phys. Part. Nucl. Lett. **7**, 326 (2010) [arXiv:0907.3555 [hep-ph]].
* [18] N. Kochelev, JETP Lett. **83**, 527 (2006) [hep-ph/0606091].
* [19] D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Rev. D **79**, 114029 (2009) [arXiv:0903.5183 [hep-ph]].
* [20] I. C. Cloet and C. D. Roberts, arXiv:1310.2651 [nucl-th].
* [21] E. Nagy, R. S. Orr, W. Schmidt-Parzefall, K. Winter, A. Brandt, F. W. Busser, G. Flugge and F. Niebergall _et al._, Nucl. Phys. B **150**, 221 (1979); W.Faissler et al., Phys. Rev. **D23**, 33 (1981).
* [22] A. C. Aguilar, D. Binosi and J. Papavassiliou, Phys. Rev. D **88**, 074010 (2013) [arXiv:1304.5936 [hep-ph]].
* [23] A. I. Bugrii, Z. E. Chikovani, L. L. Jenkovszky and M. Z. Maksimov, Z. Phys. C **4**, 45 (1980).
* [24] E. Leader, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol. **15**, 1 (2001).
* [25] D. G. Crabb, W. A. Kaufman, A. D. Krisch, A. M. T. Lin, D. C. Peaslee, R. A. Phelps, R. S. Raymond and T. Roser _et al._, Phys. Rev. Lett. **65**, 3241 (1990).
* [26] V. Barone _et al._ [PAX Collaboration], hep-ex/0505054.
* [27] A. E. Dorokhov and N. I. Kochelev, Phys. Lett. B **304**, 167 (1993).
* [28] P. Schweitzer, M. Strikman and C. Weiss, JHEP **1301**, 163 (2013) [arXiv:1210.1267 [hep-ph]].
* [29] http://www.rhic.bnl.gov/pp2pp/
|
0802.0423 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 52339,
"num_imgs": 0,
"llama3_tokens_count": 17649
} | [] | # Approximability Distance in the Space of \(H\)-Colourability Problems
Tommy Färnqvist
Department of Computer and Information Science
Linköpings universitet
SE-581 83 Linköping, Sweden
¹
Peter Jonsson
Department of Computer and Information Science
Linköpings universitet
SE-581 83 Linköping, Sweden
¹
Johan Thapper
Department of Computer and Information Science
Linköpings universitet
SE-581 83 Linköping, Sweden
¹
###### Abstract
A graph homomorphism is a vertex map which carries edges from a source graph to edges in a target graph. We study the approximability properties of the _Weighted Maximum \(H\)-Colourable Subgraph_ problem (Max \(H\)-Col). The instances of this problem are edge-weighted graphs \(G\) and the objective is to find a subgraph of \(G\) that has maximal total edge weight, under the condition that the subgraph has a homomorphism to \(H\); note that for \(H=K_{k}\) this problem is equivalent to Max \(k\)-cut. To this end, we introduce a metric structure on the space of graphs which allows us to extend previously known approximability results to larger classes of graphs. Specifically, the approximation algorithms for Max cut by Goemans and Williamson and Max \(k\)-cut by Frieze and Jerrum can be used to yield non-trivial approximation results for Max \(H\)-Col. For a variety of graphs, we show near-optimality results under the Unique Games Conjecture. We also use our method for comparing the performance of Frieze & Jerrum’s algorithm with Håstad’s approximation algorithm for general Max 2-Csp. This comparison is, in most cases, favourable to Frieze & Jerrum.
**Keywords**: optimisation, approximability, graph homomorphism, graph \(H\)-colouring, computational complexity
## 1 Introduction
Let \(G\) be a simple, undirected and finite graph. Given a subset \(S\subseteq V(G)\), a _cut_ in \(G\) with respect to \(S\) is the set of edges from a vertex in \(S\) to a vertex in \(V(G)\setminus S\). The Max cut-problem asks for the size of a largest cut in \(G\). More generally, a \(k\)-cut in \(G\) is the set of edges going from \(S_{i}\) to \(S_{j}\), \(i\neq j\), where \(S_{1},\ldots,S_{k}\) is a partitioning of \(V(G)\), and the Max \(k\)-cut-problem asks for the size of a largest \(k\)-cut. The problem is readily seen to be identical to finding a largest \(k\)-colourable subgraph of \(G\). Furthermore, Max \(k\)-cut is known to be **APX**-complete for every \(k\geq 2\) and consequently does not admit a _polynomial-time approximation scheme_ (Ptas).
In the absence of a Ptas, it is interesting to determine the best possible approximation ratio \(c\) within which a problem can be approximated or, alternatively the smallest \(c\) for which it can be proved that no polynomial-time approximation algorithm exists (typically under some complexity-theoretic assumption such as \(\mbox{\bf P}\neq\mbox{\bf NP}\)). An approximation ratio of \(.878567\) for Max cut was obtained in 1995 by Goemans and Williamson [15] using semidefinite programming. Frieze and Jerrum [14] determined lower bounds on the approximation ratios for Max \(k\)-cut using similar techniques. Sharpened results for small values of \(k\) have later been obtained by de Klerk et al. [9]. Under the assumption that the _Unique Games Conjecture_ holds, Khot et al. [25] showed the approximation ratio for \(k=2\) to be essentially optimal and also provided upper bounds on the approximation ratio for \(k>2\). Håstad [20] has shown that semidefinite programming is a universal tool for solving the general Max 2-Csp problem over any domain, in the sense that it establishes non-trivial approximation results for all of those problems.
In this paper, we study approximability properties of a generalised version of Max \(k\)-cut called Max \(H\)-Col for undirected graphs \(H\). Jonsson et al. [21] have shown that, when \(H\) is loop-free, Max \(H\)-Col does not admit a Ptas. Note that if \(H\) contains a loop, then Max \(H\)-Col is a trivial problem. We present approximability results for Max \(H\)-Col where \(H\) is taken from different families of graphs. Many of these results turn out to be close to optimal under the Unique Games Conjecture. Our approach is based on analysing approximation algorithms applied to problems which they were not originally intended to solve. This vague idea will be clarified below.
Denote by \({\cal G}\) the set of all simple, undirected and finite graphs. A _graph homomorphism_\(h\) from \(G\) to \(H\) is a vertex map which carries the edges in \(G\) to edges in \(H\). The existence of such a map will be denoted by \(G\to H\). If both \(G\to H\) and \(H\to G\), the graphs \(G\) and \(H\) are said to be _homomorphically equivalent_. This equivalence will be denoted by \(G\equiv H\). For a graph \(G\in{\cal G}\), let \({\cal W}(G)\) be the set of _weight functions_\(w:E(G)\rightarrow{\mathbb{Q}}^{+}\) assigning weights to edges of \(G\). For a \(w\in{\cal W}(G)\), we let \(\|w\|=\sum_{e\in E(G)}w(e)\) denote the total weight of \(G\). Now, _Weighted Maximum \(H\)-Colourable Subgraph_ (Max \(H\)-Col) is the maximisation problem with
**Instance:**: An edge-weighted graph \((G,w)\), where \(G\in{\cal G}\) and \(w\in{\cal W}(G)\).
**Solution:**: A subgraph \(G^{\prime}\) of \(G\) such that \(G^{\prime}\to H\).
**Measure:**: The weight of \(G^{\prime}\) with respect to \(w\).
Given an edge-weighted graph \((G,w)\), denote by \(mc_{H}(G,w)\) the measure of the optimal solution to the problem Max \(H\)-Col. Denote by \(mc_{k}(G,w)\) the (weighted) size of the largest \(k\)-cut in \((G,w)\). This notation is justified by the fact that \(mc_{k}(G,w)=mc_{K_{k}}(G,w)\). In this sense, Max \(H\)-Col generalises Max \(k\)-cut. The decision version of Max \(H\)-Col, the \(H\)-_colouring_ problem, has been extensively studied (see [17] and its many references), and Hell and Nešetřil [16] have shown that the problem is in **P** if \(H\) contains a loop or is bipartite, and **NP**-complete otherwise. Langberg et al. [27] have studied the approximability of Max \(H\)-Col when \(H\) is part of the input. We also note that Max \(H\)-Col is a specialisation of the Max Csp problem.
The homomorphism relation \(\rightarrow\) defines a quasi-order, but not a partial order on the set \({\cal G}\). The failing axiom is that of antisymmetry, since \(G\equiv H\) does not necessarily imply \(G=H\). To remedy this, let \({\cal G_{\equiv}}\) denote the set of equivalence classes of \({\cal G}\) under homomorphic equivalence. The relation \(\rightarrow\) is defined on \({\cal G_{\equiv}}\) in the obvious way and on this set it is a partial order. In fact, \(\rightarrow\) provides a lattice structure on \({\cal G_{\equiv}}\) and this lattice will be denoted by \({\cal C}_{S}\). For a more in-depth treatment of graph homomorphisms and the lattice \({\cal C}_{S}\), see [17]. Here, we endow \({\cal G_{\equiv}}\) with a metric \(d\) defined in the following way: for \(M,N\in{\cal G}\), let
\[d(M,N)=1-\inf_{\genfrac{}{}{0.0pt}{}{G\in{\cal G}}{w\in{\cal W}(G)}}\frac{mc_{ M}(G,w)}{mc_{N}(G,w)}\cdot\inf_{\genfrac{}{}{0.0pt}{}{G\in{\cal G}}{w\in{\cal W }(G)}}\frac{mc_{N}(G,w)}{mc_{M}(G,w)}.\] (1)
In Lemma 3 we will show that \(d\) satisfies the following property:
* Let \(M,N\in{\cal G}\) and assume that \(mc_{M}\) can be approximated within \(\alpha\). Then, \(mc_{N}\) can be approximated within \((1-d(M,N))\cdot\alpha\).
Hence, we can use \(d\) for extending previously known approximability bounds on Max \(H\)-Col to new and larger classes of graphs. For instance, we can apply Goemans and Williamson’s algorithm (which is intended for solving Max \(K_{2}\)-Col) to Max \(C_{11}\)-Col (i.e. the cycle on 11 vertices) and analyse how well the problem is approximated (we will see later on that Goemans and Williamson’s algorithm approximates Max \(C_{11}\)-Col within 0.79869).
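As a small numerical illustration of this transfer (a sketch, not part of the formal development; it uses the closed-form expression for \(\alpha_{GW}\) quoted later in Theorem 3.1 and the value \(s(K_{2},C_{11})=10/11\) established in Section 3.1):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# alpha_GW = min over 0 < theta < pi of (theta/pi) / ((1 - cos theta)/2)
res = minimize_scalar(lambda th: (th / np.pi) / ((1 - np.cos(th)) / 2),
                      bounds=(1e-6, np.pi - 1e-6), method="bounded")
alpha_gw = res.fun
print(f"alpha_GW  ~ {alpha_gw:.6f}")                                  # ~ 0.878567

k = 11                                                                # s(K_2, C_11) = 10/11
print(f"guarantee for Max C_{k}-Col: {(k - 1) / k * alpha_gw:.6f}")   # ~ 0.798697
```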
In certain cases, the metric \(d\) is related to a well-studied graph parameter known as _bipartite density_\(b(H)\) [1, 3, 6, 18, 28]: if \(H^{\prime}\) is bipartite subgraph of \(H\) with maximum number of edges, then
\[b(H)=\frac{e(H^{\prime})}{e(H)}.\]
At the end of Section 2 we will see that \(b(H)=1-d(K_{2},H)\) for all edge-transitive graphs \(H\). We note that while \(d\) is invariant under homomorphic equivalence, this is not in general true for bipartite density.
The paper is divided into two main parts. Section 2 is used for proving the basic properties of \(d\), showing that it is well-defined on \({\cal G_{\equiv}}\), and that it is a metric. After that, we show that \(d\) is computable by linear programming and that the computation of \(d(M,N)\) can be simplified whenever \(M\) or \(N\) is edge-transitive. We conclude this part by providing some examples.
The second part of the paper uses \(d\) for studying the approximability of Max \(H\)-Col. For several classes of graphs, we investigate optimality issues by exploiting inapproximability bounds that are consequences of the Unique Games Conjecture. Comparisons are also made to the bounds achieved by the general Max 2-Csp-algorithm by Håstad [20]. Our investigation covers a spectrum of graphs, ranging from graphs with few edges and/or large girth to graphs containing \(\Theta(n^{2})\) edges. Dense graphs are considered from two perspectives: firstly as graphs having a number of edges close to maximal, and secondly as graphs from the \({\cal G}(n,p)\) model of random graphs, pioneered by Erdős and Rényi [13].
The techniques used in this paper seem to generalise naturally to larger sets of problems. This and other questions are discussed in Section 4 which concludes our paper.
## 2 Approximation via the Metric \(d\)
In this section we start out by proving basic properties of the metric \(d\), that \(({\cal G_{\equiv}},d)\) is a metric space, and that proximity of graphs \(M,N\) in this space lets us interrelate the approximability of Max \(M\)-Col and Max \(N\)-Col. Sections 2.2 and 2.3 are devoted to showing how to compute \(d\).
### The Space \(({\cal G_{\equiv}},d)\)
We begin by introducing a function \(s:{\cal G}\times{\cal G}\rightarrow\mathbb{R}\) which enables us to express \(d\) in a natural way and simplify forthcoming proofs. Let \(M,N\in{\cal G}\) and define
\[s(M,N)=\inf_{\genfrac{}{}{0.0pt}{}{G\in{\cal G}}{w\in{\cal W(G)}}}\frac{mc_{M} (G,w)}{mc_{N}(G,w)}.\] (2)
The definition of \(d\) from \((\ref{def:metric1})\) can then be written as follows:
\[d(M,N)=1-s(N,M)\cdot s(M,N).\] (3)
A consequence of (2) is that the relation \(mc_{M}(G,w)\geq s(M,N)\cdot mc_{N}(G,w)\) holds for all \(G\in{\cal G}\) and \(w\in{\cal W}(G)\). Using this observation, we show that \(s(M,N)\), and thereby \(d(M,N)\), behaves well under graph homomorphisms and homomorphic equivalence.
Lemma 1: _Let \(M,N\in{\cal G}\) and \(M\to N\). Then, for every \(G\in{\cal G}\) and every weight function \(w\in{\cal W}(G)\),_
\[mc_{M}(G,w)\leq mc_{N}(G,w).\]
Proof: If \(G^{\prime}\to M\) for some subgraph \(G^{\prime}\) of \(G\), then \(G^{\prime}\to N\) as well. The lemma immediately follows. ∎
Corollary 1: _If \(M\) and \(N\) are homomorphically equivalent, then \(mc_{M}(G,w)=mc_{N}(G,w)\)._
Corollary 2: _Let \(M_{1}\equiv M_{2}\) and \(N_{1}\equiv N_{2}\) be two pairs of homomorphically equivalent graphs. Then, for \(i,j,k,l\in\{1,2\}\),_
\[s(N_{i},M_{j})=s(N_{k},M_{l}).\]
Proof: Corollary 1 shows that for all \(G\in{\cal G}\) and \(w\in{\cal W}(G)\), we have
\[\frac{mc_{M_{j}}(G,w)}{mc_{N_{i}}(G,w)}=\frac{mc_{M_{l}}(G,w)}{mc_{N_{k}}(G,w)}.\]
Now, take the infimum over graphs \(G\) and weight functions \(w\) on both sides. ∎
Corollary 2 shows that \(s\) and \(d\) are well-defined as functions on the set \({\cal G_{\equiv}}\). We can now show that \(d\) is indeed a metric on this space.
Lemma 2: _The pair \(({\cal G_{\equiv}},d)\) forms a metric space._
Proof: Positivity and symmetry follow immediately from the definition and the fact that \(s(M,N)\leq 1\) for all \(M\) and \(N\). Since \(s(M,N)=1\) if and only if \(N\to M\), it also holds that \(d(M,N)=0\) if and only if \(M\) and \(N\) are homomorphically equivalent. That is, \(d(M,N)=0\) if and only if \(M\) and \(N\) represent the same member of \({\cal G_{\equiv}}\). Furthermore, for graphs \(M,N\) and \(K\in{\cal G}\):
\[s(M,K)=\inf_{\genfrac{}{}{0.0pt}{}{G\in{\cal G}}{w\in{\cal W}(G)}}\frac{mc_{M}(G,w)}{mc_{N}(G,w)}\cdot\frac{mc_{N}(G,w)}{mc_{K}(G,w)}\geq s(M,N)\cdot s(N,K),\qquad s(K,M)\geq s(K,N)\cdot s(N,M).\]
Therefore, with \(a=s(M,N)\cdot s(N,M)\), \(b=s(N,K)\cdot s(K,N)\) and \(c=s(M,K)\cdot s(K,M)\geq a\cdot b\),
\[d(M,N)+d(N,K)-d(M,K)=1-a+1-b-(1-c)\geq\\ \geq 1-a-b+a\cdot b=(1-a)\cdot(1-b)\geq 0,\]
which shows that \(d\) satisfies the triangle inequality. ∎
We say that a maximisation problem \(\Pi\) can be approximated within \(c<1\) if there exists a randomised polynomial-time algorithm \(A\) such that \(c\cdot Opt(x)\leq{\bf E}(A(x))\leq Opt(x)\) for all instances \(x\) of \(\Pi\). Our next result shows that proximity of graphs \(G\) and \(H\) in \(d\) allows us to determine bounds on the approximability of Max \(H\)-Col from known bounds on the approximability of Max \(G\)-Col.
Lemma 3: _Let \(M,N,K\) be graphs. If \(mc_{M}\) can be approximated within \(\alpha\), then \(mc_{N}\) can be approximated within \(\alpha\cdot\left(1-d(M,N)\right)\). If it is **NP**-hard to approximate \(mc_{K}\) within \(\beta\), then \(mc_{N}\) is not approximable within \(\beta/\left(1-d(N,K)\right)\) unless **P**= **NP**._
Proof: Let \(A(G,w)\) be the measure of the solution returned by an algorithm which approximates \(mc_{M}\) within \(\alpha\). We know that for all \(G\in{\cal G}\) and \(w\in{\cal W}(G)\) we have the inequalities \(mc_{N}(G,w)\geq s(N,M)\cdot mc_{M}(G,w)\) and \(mc_{M}(G,w)\geq s(M,N)\cdot mc_{N}(G,w)\). Consequently,
\[mc_{N}(G,w)\geq mc_{M}(G,w)\cdot s(N,M)\geq A(G,w)\cdot s(N,M)\\ \geq mc_{M}(G,w)\cdot\alpha\cdot s(N,M)\geq mc_{N}(G,w)\cdot \alpha\cdot s(N,M)\cdot s(M,N)\\ =mc_{N}(G,w)\cdot\alpha\cdot(1-d(M,N)).\]
For the second part, assume to the contrary that there exists a polynomial-time algorithm \(B\) that approximates \(mc_{N}\) within \(\beta/(1-d(N,K))\). According to the first part \(mc_{K}\) can then be approximated within \((1-d(N,K))\cdot\beta/(1-d(N,K))=\beta\). This is a contradiction unless \(\mbox{\bf P}=\mbox{\bf NP}\). ∎
### Exploiting Symmetries
We have seen that the metric \(d(M,N)\) can be defined in terms of \(s(M,N)\). In fact, when \(M\to N\) we have \(1-d(M,N)=s(M,N)\). It is therefore of interest to find an expression for \(s\) which can be calculated easily. After Lemma 4 (which shows how \(mc_{M}(G,w)\) depends on \(w\)) we introduce a different way of describing the solutions to Max \(M\)-Col which makes the proofs of the following results more natural. In Lemma 5, we show that a particular type of weight function provides a lower bound on \(mc_{M}(G,w)/mc_{N}(G,w)\). Finally, in Lemma 6, we provide a simpler expression for \(s(M,N)\) which depends directly on the automorphism group and thereby the symmetries of \(N\). This expression becomes particularly simple when \(N\) is edge-transitive. An immediate consequence of this is that \(s(K_{2},H)=b(H)\) for edge-transitive graphs \(H\).
The optimum \(mc_{H}(G,w)\) is sub-linear with respect to the weight function, as is shown by the following lemma.
Lemma 4: _Let \(G,H\in{\cal G}\), \(\alpha\in{\mathbb{Q}}^{+}\) and let \(w,w_{1},\ldots,w_{r}\in{\cal W}(G)\) be weight functions on \(G\). Then,_
* \(mc_{H}(G,\alpha\cdot w)=\alpha\cdot mc_{H}(G,w)\)_,_
* \(mc_{H}(G,\sum_{i=1}^{r}w_{i})\leq\sum_{i=1}^{r}mc_{H}(G,w_{i})\)_._
Proof: The first part is trivial. For the second part, let \(G^{\prime}\) be an optimal solution to the instance \((G,\sum_{i=1}^{r}w_{i})\) of Max \(H\)-Col. Then, the measure of this solution equals the sum of the measures of \(G^{\prime}\) as a (possibly suboptimal) solution to each of the instances \((G,w_{i})\). ∎
An alternative description of the solutions to Max \(H\)-Col is as follows: let \(G\) and \(H\in{\cal G}\), and for any vertex map \(f:V(G)\to V(H)\), let \(f^{\#}:E(G)\to E(H)\) be the (partial) edge map induced by \(f\). In this notation \(h:V(G)\to V(H)\) is a graph homomorphism precisely when \((h^{\#})^{-1}(E(H))=E(G)\) or, alternatively when \(h^{\#}\) is a total function. The set of solutions to an instance \((G,w)\) of Max \(H\)-Col can then be taken to be the set of vertex maps \(f:V(G)\to V(H)\) with the measure
\[w(f)=\sum_{e\in(f^{\#})^{-1}(E(H))}w(e).\]
In the remaining part of this section, we will use this description of a solution. Let \(f:V(G)\to V(H)\) be an optimal solution to the instance \((G,w)\) of Max \(H\)-Col. Define the weight \(w_{f}\in{\cal W}(H)\) as follows: for each \(e\in E(H)\), let
\[w_{f}(e)=\sum_{e^{\prime}\in(f^{\#})^{-1}(e)}\frac{w(e^{\prime})}{mc_{H}(G,w)}.\]
We now prove the following result:
Lemma 5: _Let \(M,N\in{\cal G}\) be two graphs. Then, for every \(G\in{\cal G}\), every \(w\in{\cal W}(G)\), and any optimal solution \(f\) to \((G,w)\) of Max \(N\)-Col, it holds that_
\[\frac{mc_{M}(G,w)}{mc_{N}(G,w)}\geq mc_{M}(N,w_{f}).\]
Proof: Arbitrarily choose an optimal solution \(g:V(N)\to V(M)\) to the instance \((N,w_{f})\) of Max \(M\)-Col. Then, \(g\circ f\) is a solution to \((G,w)\) as an instance of Max \(M\)-Col. The weight of this solution is \(mc_{M}(N,w_{f})\cdot mc_{N}(G,w)\), which implies that
\[mc_{M}(G,w)\geq mc_{M}(N,w_{f})\cdot mc_{N}(G,w),\]
and the result follows after division by \(mc_{N}(G,w)\). ∎
Let \(M\) and \(N\in{\cal G}\) be graphs and let \(A=\mbox{\rm Aut}(N)\) be the automorphism group of \(N\). We will let \(\pi\in A\) act on \(\{u,v\}\in E(N)\) by \(\pi\cdot\{u,v\}=\{\pi(u),\pi(v)\}\). The graph \(N\) is edge-transitive if and only if \(A\) acts transitively on the edges of \(N\). Let \({\cal{\hat{W}}}(N)\) be the set of weight functions \(w\in{\cal W}(N)\) which satisfy \(\|w\|=1\) and for which \(w(e)=w(\pi\cdot e)\) for all \(e\in E(N)\) and \(\pi\in\mbox{\rm Aut}(N)\).
Lemma 6: _Let \(M,N\in{\cal G}\). Then,_
\[s(M,N)=\inf_{w\in{\cal{\hat{W}}}(N)}mc_{M}(N,w).\]
_In particular, when \(N\) is edge-transitive,_
\[s(M,N)=mc_{M}(N,1/e(N)).\]
Proof: The easy direction goes through as follows:
\[s(M,N)\leq\inf_{w\in{\cal{\hat{W}}}(N)}\frac{mc_{M}(N,w)}{mc_{N}(N,w)}=\inf_{w \in{\cal{\hat{W}}}(N)}mc_{M}(N,w).\]
For the first part of the lemma, it will be sufficient to prove that the following inequality holds for some \(w^{\prime}\in{\cal{\hat{W}}}(N)\).
\[\frac{mc_{M}(G,w)}{mc_{N}(G,w)}\geq mc_{M}(N,w^{\prime})\] (4)
Taking the infimum over graphs \(G\) and weight functions \(w\in{\cal W}(G)\) in the left-hand side of this inequality will then show that
\[s(M,N)\geq mc_{M}(N,w^{\prime})\geq\inf_{w\in{\cal{\hat{W}}}(N)}mc_{M}(N,w).\]
Let \(A=\mbox{\rm Aut}(N)\) be the automorphism group of \(N\). Let \(\pi\in A\) be an arbitrary automorphism of \(N\). If \(f\) is an optimal solution to \((G,w)\) as an instance of Max \(N\)-Col, then so is \(f_{\pi}=\pi\circ f\). Let \(w_{\pi}=w_{\pi\circ f}\). By Lemma 5, inequality (4) is satisfied by \(w_{\pi}\). Summing \(\pi\) in this inequality over \(A\) gives
\[|A|\cdot\frac{mc_{M}(G,w)}{mc_{N}(G,w)}\geq\sum_{\pi\in A}mc_{M}(N,w_{\pi}) \geq mc_{M}(N,\sum_{\pi\in A}w_{\pi}),\]
where the last inequality follows from Lemma 4. The weight function \(\sum_{\pi\in A}w_{\pi}\) can be determined as follows.
\[\sum_{\pi\in A}w_{\pi}(e)=\sum_{\pi\in A}\frac{\sum_{e^{\prime}\in(f^{\#})^{-1 }(\pi\cdot e)}w(e^{\prime})}{mc_{N}(G,w)}=\frac{|A|}{|Ae|}\cdot\frac{\sum_{e^{ \prime}\in(f^{\#})^{-1}(Ae)}w(e^{\prime})}{mc_{N}(G,w)},\]
where \(Ae\) denotes the orbit of \(e\) under \(A\). Thus, \(w^{\prime}=\sum_{\pi\in A}w_{\pi}/|A|\in{\cal{\hat{W}}}(N)\) and \(w^{\prime}\) satisfies (4), so the first part follows.
For the second part, note that when the automorphism group \(A\) acts transitively on \(E(N)\), there is only one orbit \(Ae=E(N)\). Then, the weight function \(w^{\prime}\) is given by
\[w^{\prime}(e)=\frac{1}{e(N)}\cdot\frac{\sum_{e^{\prime}\in(f^{\#})^{-1}(E(N))} w(e^{\prime})}{mc_{N}(G,w)}=\frac{1}{e(N)}\cdot\frac{mc_{N}(G,w)}{mc_{N}(G,w)}.\]
∎
### Tools for Computing Distances
From Lemma 6 it follows that in order to determine \(s(M,N)\), it is sufficient to minimise \(mc_{M}(N,w)\) over \({\cal{\hat{W}}}(N)\). We will now use this observation to describe a linear program for computing \(s(M,N)\). For \(i\in\{1,\ldots,r\}\), let \(A_{i}\) be the orbits of \(\mbox{\rm Aut}(N)\) acting on \(E(N)\). The measure of a solution \(f\) when \(w\in{\cal{\hat{W}}}(N)\) is equal to \(\sum_{i=1}^{r}w_{i}\cdot f_{i}\), where \(w_{i}\) is the weight of an edge in \(A_{i}\) and \(f_{i}\) is the number of edges in \(A_{i}\) which are mapped to an edge in \(M\) by \(f\). Note that given a \(w\), the measure of a solution \(f\) depends only on the vector \((f_{1},\ldots,f_{r})\in{\mathbb{N}}^{r}\). Therefore, take the solution space to be the set of such vectors:
\[F=\{\,(f_{1},\ldots,f_{r})\;|\;\mbox{\rm\,$f$ is a solution to $(N,w)$ of \mbox{\sc Max $M$-Col}\,}\}\]
Let the variables of the linear program be \(w_{1},\ldots,w_{r}\) and \(s\), where \(w_{i}\) represents the weight of each element in the orbit \(A_{i}\) and \(s\) is an upper bound on the solutions.
\[\begin{array}[]{lll}\min s\\ \sum_{i}f_{i}\cdot w_{i}\leq s&\mbox{\rm\, for each $(f_{1},\ldots,f_{r})\in F$\,}\\ \sum_{i}|A_{i}|\cdot w_{i}=1\\ w_{i},s\geq 0\\ \end{array}\]
Given a solution \(w_{i},s\) to this program, \(w(e)=w_{i}\) when \(e\in A_{i}\) is a weight function which minimises \(mc_{M}(G,w)\). The value of this solution is \(s=s(M,N)\).
Example 1: The _wheel graph_ on \(k\) vertices, \(W_{k}\), is a graph that contains a cycle of length \(k-1\) plus a vertex \(v\) not in the cycle such that \(v\) is connected to every other vertex. We call the edges of the \(k-1\)-cycle _outer edges_ and the remaining \(k-1\) edges _spokes_. It is easy to see that \(W_{k}\) contains a maximum clique of size 4 if \(k=4\) (in fact, \(W_{4}=K_{4}\)) and a maximum clique of size 3 in all other cases. Furthermore, \(W_{k}\) is 3-colourable if and only if \(k\) is odd, and 4-colourable otherwise. This implies that for odd \(k\), the wheel graphs are homomorphically equivalent to \(K_{3}\).
We will determine \(s(K_{3},W_{k})\) for even \(k\geq 6\) using the previously described construction of a linear program. Note that the group action of \(\mbox{\rm Aut}(W_{k})\) on \(E(W_{k})\) has two orbits, one which consists of all outer edges and one which consists of all the spokes. If we remove one outer edge or one spoke from \(W_{k}\), then the resulting graph can be mapped homomorphically onto \(K_{3}\). Therefore, it suffices to choose \(F=\{f,g\}\) with \(f=(k-1,k-2)\) and \(g=(k-2,k-1)\), since all other solutions will have a smaller measure than at least one of these. The program for \(W_{k}\) looks like this:
\[\begin{array}[]{lll}\min s\\ (k-1)\cdot w_{1}+(k-2)\cdot w_{2}\leq s\\ (k-2)\cdot w_{1}+(k-1)\cdot w_{2}\leq s\\ (k-1)\cdot w_{1}+(k-1)\cdot w_{2}=1\\ w_{i},s\geq 0\\ \end{array}\]
The solution is \(w_{1}=w_{2}=1/(2k-2)\) with \(s(K_{3},W_{k})=s=(2k-3)/(2k-2)\).
Example 2: An example where the weights in the optimal solution to the linear program are different for different orbits is given by \(s(K_{2},K_{8/3})\). The _rational complete graph_\(K_{8/3}\) has vertex set \(\{0,1,\ldots,7\}\), which is thought of as placed on a circle with \(0\) following \(7\). There is an edge between any two vertices which are at a distance at least 3 from each other. Each vertex has distance 4 to exactly one other vertex, which means there are 4 such edges. These edges form one orbit \(A_{1}\) and the remaining 8 edges form the other orbit \(A_{2}\). There are two maximal solutions, \(f=(0,8)\) and \(g=(4,6)\) which gives the program:
\[\begin{array}[]{lll}\min s\\ 0\cdot w_{1}+8\cdot w_{2}\leq s\\ 4\cdot w_{1}+6\cdot w_{2}\leq s\\ 4\cdot w_{1}+8\cdot w_{2}=1\\ w_{i},s\geq 0\\ \end{array}\]
The solution to this program is \(w_{1}=1/20\) and \(w_{2}=1/10\) with the optimum being \(4/5\).
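Both examples can be verified mechanically. The sketch below (illustrative; it assumes `scipy.optimize.linprog`, and `s_value` is an ad hoc helper) encodes the linear program exactly as described above, with variables \(w_{1},\ldots,w_{r},s\), and solves it for \(W_{6}\) and \(K_{8/3}\):

```python
import numpy as np
from scipy.optimize import linprog

def s_value(solutions, orbit_sizes):
    """Solve:  min s  s.t.  sum_i f_i w_i <= s for every f in F,
    sum_i |A_i| w_i = 1,  w_i, s >= 0.  Returns (s, w)."""
    F = np.asarray(solutions, dtype=float)
    sizes = np.asarray(orbit_sizes, dtype=float)
    r = F.shape[1]
    c = np.zeros(r + 1); c[-1] = 1.0                      # objective: minimise s
    A_ub = np.hstack([F, -np.ones((F.shape[0], 1))])      # f.w - s <= 0
    b_ub = np.zeros(F.shape[0])
    A_eq = np.hstack([sizes, [0.0]]).reshape(1, -1)       # total weight equals 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (r + 1))
    return res.x[-1], res.x[:-1]

# Example 1: the wheel W_k (even k); orbits are the k-1 outer edges and k-1 spokes
k = 6
s, w = s_value([[k - 1, k - 2], [k - 2, k - 1]], [k - 1, k - 1])
print(f"s(K_3, W_{k}) = {s:.4f}   (expected {(2 * k - 3) / (2 * k - 2):.4f})")

# Example 2: K_{8/3}; orbit sizes 4 and 8, maximal solutions (0, 8) and (4, 6)
s, w = s_value([[0, 8], [4, 6]], [4, 8])
print(f"s(K_2, K_8/3) = {s:.4f},  weights = {np.round(w, 4)}")   # 0.8, (0.05, 0.1)
```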
In some cases, it may be hard to determine a desired distance between \(H\) and \(M\) or \(N\). If we know that \(H\) is homomorphically sandwiched between \(M\) and \(N\) so that \(M\to H\to N\), then we can provide an upper bound on the distance of \(H\) to \(M\) or \(N\) by using the distance between \(M\) and \(N\). Formally, we have:
Lemma 7: _Let \(M\to H\to N\). Then,_
\[s(M,H)\geq s(M,N)\qquad\textrm{and}\qquad s(H,N)\geq s(M,N).\]
Proof: Since \(H\to N\), it follows from Lemma 1 that \(mc_{H}(G,w)\leq mc_{N}(G,w)\). Thus,
\[s(M,H)=\inf_{\genfrac{}{}{0.0pt}{}{G\in{\cal G}}{w\in{\cal W}(G)}}\frac{mc_{M} (G,w)}{mc_{H}(G,w)}\geq\inf_{\genfrac{}{}{0.0pt}{}{G\in{\cal G}}{w\in{\cal W}( G)}}\frac{mc_{M}(G,w)}{mc_{N}(G,w)}=s(M,N).\]
The second part follows similarly. ∎
We will see several applications of this lemma in Sections 3.1 and 3.2.
## 3 Approximability of Max _H_-Col
Let \(A\) be an approximation algorithm for Max \(H\)-Col. Our method basically allows us to measure how well \(A\) performs on other problems Max \(H^{\prime}\)-Col. In this section, we will apply the method to various algorithms and various graphs. We do two things for each kind of graph under consideration: compare the performance of our method with that of some existing, leading, approximation algorithm and investigate how close to optimality we can get. Our main algorithmic tools will be the following:
Theorem 3.1 (Goemans and Williamson [15]): \(mc_{2}\) _can be approximated within_
\[\alpha_{GW}=\min_{0<\theta<\pi}\frac{\theta/\pi}{(1-\cos\theta)/2}\approx.878567.\]
Theorem 3.2 (Frieze and Jerrum [14]): \(mc_{k}\) _can be approximated within_
\[\alpha_{k}\sim 1-\frac{1}{k}+\frac{2\ln k}{k^{2}}.\]
Here, the relation \(\sim\) indicates two expressions whose ratio tends to \(1\) as \(k\rightarrow\infty\). We note that de Klerk et al. [9] have presented the sharpest known bounds on \(\alpha_{k}\) for small values of \(k\); for instance, \(\alpha_{3}\geq 0.836008\).
Let \(v(G),e(G)\) denote the number of vertices and edges in \(G\), respectively. Håstad has shown the following:
Theorem 3.3 (Håstad [20]): _Let \(H\) be a graph. There is an absolute constant \(c>0\) such that \(mc_{H}\) can be approximated within_
\[1-\frac{t(H)}{d^{2}}\cdot(1-\frac{c}{d^{2}\log d})\]
_where \(d=v(H)\) and \(t(H)=d^{2}-2\cdot e(H)\)._
We will compare the performance of this algorithm on Max \(H\)-Col with the performance of the algorithms in Theorems 3.1 and 3.2 analysed using Lemma 3 and estimates of the distance \(d\). This comparison is not entirely fair since Håstad’s algorithm was probably not designed with the goal of providing optimal results; the goal was to beat random assignments. However, it is currently the best algorithm that can approximate Max \(H\)-Col for arbitrary \(H\in{\cal G}\). For this purpose, we introduce two functions, \(FJ_{k}\) and _Hå_, such that, if \(H\) is a graph, \(FJ_{k}(H)\) denotes the best bound on the approximation guarantee when Frieze and Jerrum’s algorithm for Max \(k\)-cut is applied to the problem \(mc_{H}\), while \(\textrm{{H\aa}}(H)\) is the guarantee when Håstad’s algorithm is used to approximate \(mc_{H}\).
To be able to investigate the possible near-optimality of our approximation method, we will rely on the Unique Games Conjecture (UGC). Khot [24] suggested this conjecture as a possible direction for proving inapproximability properties of some important constraint satisfaction problems over two variables. We need the following problem only for stating the conjecture:
Definition 1: The Unique Label Cover problem \(\mathcal{L}(V,W,E,[M],\{\sigma_{v,w}\}_{(v,w)\in E})\) is the following problem: Given is a bipartite graph with left side vertices \(V\), right side vertices \(W\), and a set of edges \(E\). The goal is to assign one ‘label’ to every vertex of the graph, where \([M]\) is the set of allowed labels. The labelling is supposed to satisfy certain constraints given by bijective maps \(\sigma_{v,w}:[M]\rightarrow[M]\). There is one such map for every edge \((v,w)\in E\). A labelling ‘satisfies’ an edge \((v,w)\) if \(\sigma_{v,w}(\mathrm{label}(w))=\mathrm{label}(v)\). The optimum of the unique label cover problem is defined to be the maximum fraction of edges satisfied by any labelling.
Now, UGC is the following:
Conjecture 1 (Unique Games Conjecture): For any \(\eta,\gamma>0\), there exists a constant \(M=M(\eta,\gamma)\) such that it is **NP**-hard to distinguish whether the Unique Label Cover problem with label set of size \(M\) has optimum at least \(1-\eta\) or at most \(\gamma\).
From hereon we assume that UGC is true, which gives us the following inapproximability results:
Theorem 3.4 (Khot et al. [25]):
* _For every_ \(\varepsilon>0\)_, it is NP-hard to approximate_ \(mc_{2}\) _within_ \(\alpha_{GW}+\varepsilon\)_._
* _It is NP-hard to approximate_ \(mc_{k}\) _within_ \((1-1/k+(2\ln k)/k^{2}+O((\ln\ln k)/k^{2}))\)_._
### Sparse Graphs
In this section, we investigate the performance of our method on graphs which have relatively few edges, and we see that the _girth_ of the graphs plays a central role. The girth of a graph is the length of a shortest cycle contained in the graph. Similarly, the odd girth of a graph gives the length of a shortest odd cycle in the graph.
Before we proceed we need some facts about cycle graphs. Note that the odd cycles form a chain in the lattice \({\cal C}_{S}\) between \(K_{2}\) and \(C_{3}=K_{3}\) in the following way:
\[K_{2}\rightarrow\cdots\to C_{2i+1}\to C_{2i-1}\to \cdots\to C_{3}=K_{3}.\]
The following lemma gives the values of \(s(M,N)\) for pairs of graphs in this chain. The value depends only on the target graph of the homomorphism.
Lemma 8: _Let \(k<m\) be positive, odd integers. Then,_
\[s(K_{2},C_{k})=s(C_{m},C_{k})=\frac{k-1}{k}.\]
Proof: Note that \(C_{k}\not\to K_{2}\) and \(C_{k}\not\to C_{m}\) since \(k<m\). However, after removing one edge from \(C_{k}\), the remaining subgraph is isomorphic to the path \(P_{k}\), which in turn is embeddable in both \(K_{2}\) and \(C_{m}\). Since \(C_{k}\) is edge-transitive, the result follows from Lemma 6. ∎
With Lemma 8 at hand, we can prove the following:
Proposition 1: _Let \(k\geq 3\) be odd. Then, \(FJ_{2}(C_{k})\geq\frac{k-1}{k}\cdot\alpha_{GW}\) and \(\textrm{{H\aa}}(C_{k})=\frac{2}{k}+\frac{c}{k^{2}\log k}-\frac{2c}{k^{3}\log k}\). Furthermore, \(mc_{C_{k}}\) cannot be approximated within \(\frac{k}{k-1}\cdot\alpha_{GW}+\varepsilon\) for any \(\varepsilon>0\)._
Proof: From Lemma 8 we see that \(s(K_{2},C_{k})=\frac{k-1}{k}\) which implies (using Lemma 3) that \(FJ_{2}(C_{k})\geq\frac{k-1}{k}\cdot\alpha_{GW}\). Furthermore, \(mc_{2}\) cannot be approximated within \(\alpha_{GW}+\varepsilon^{\prime}\) for any \(\varepsilon^{\prime}>0\). From the second part of Lemma 3, we get that \(mc_{C_{k}}\) cannot be approximated within \(\frac{k}{k-1}\cdot(\alpha_{GW}+\varepsilon^{\prime})\) for any \(\varepsilon^{\prime}\). With \(\varepsilon^{\prime}=\varepsilon\cdot\frac{k-1}{k}\) the result follows.
Finally, we see that
\[\textrm{{H\aa}}(C_{k})=1-\frac{k^{2}-2k}{k^{2}}\cdot\left(1-\frac{c}{k^{2}\log k}\right)=\frac{ck+2k^{2}\log k-2c}{k^{3}\log k}=\frac{2}{k}+\frac{c}{k^{2}\log k}-\frac{2c}{k^{3}\log k}.\]
∎
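The algebra behind this closed form is easy to spot-check numerically. The snippet below compares the two expressions for an arbitrary placeholder value of Håstad's constant \(c\) (the identity holds for any \(c\) and any base of the logarithm).

```python
import math

c = 0.1  # placeholder value; the actual constant comes from Håstad's analysis
# Check that the closed form of Proposition 1 equals Håstad's guarantee
#   1 - (k^2 - 2k)/k^2 * (1 - c/(k^2 log k))   for cycle graphs C_k.
for k in (3, 5, 11, 101):
    lhs = 1 - (k**2 - 2 * k) / k**2 * (1 - c / (k**2 * math.log(k)))
    rhs = 2 / k + c / (k**2 * math.log(k)) - 2 * c / (k**3 * math.log(k))
    assert abs(lhs - rhs) < 1e-12
    print(f"k = {k:4d}: Ha(C_k) = {lhs:.6f}")
```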
Håstad’s algorithm does not perform particularly well on sparse graphs; this is reflected by its performance on cycle graphs \(C_{k}\) where the approximation guarantee tends to zero when \(k\rightarrow\infty\). We will see that this trend is apparent for all graph types studied in this section.
Now we can continue with a result on a class of graphs with large girth:
Proposition 2: _Let \(m>k\geq 4\). If \(H\) is a graph with odd girth \(g\geq 2k+1\) and minimum degree \(\geq\frac{2m-1}{2(k+1)}\), then \(FJ_{2}(H)\geq\frac{2k}{2k+1}\cdot\alpha_{GW}\) and \(mc_{H}\) cannot be approximated within \(\frac{2k+1}{2k}\cdot\alpha_{GW}+\varepsilon\) for any \(\varepsilon>0\). Asymptotically, \(\textrm{{H\aa}}(H)\) is bounded by \(\frac{c}{n^{2}\log n}+\frac{2(n^{g/(g-1)})^{3}}{n^{4}n^{1/(g-1)}}-\frac{2n^{g/ (g-1)}n^{1/(g-1)}c}{n^{4}\log n}\), where \(n=v(H)\)._
Proof: Lai & Liu [26] have proved that if \(H\) is a graph with odd girth \(\geq 2k+1\) and minimum degree \(\geq\frac{2m-1}{2(k+1)}\), then there exists a homomorphism from \(H\) to \(C_{2k+1}\). Thus, \(K_{2}\to H\to C_{2k+1}\) which implies that \(1-d(K_{2},H)\geq 1-d(K_{2},C_{2k+1})=\frac{2k}{2k+1}\). By Lemma 3, \(FJ_{2}(H)\geq\frac{2k}{2k+1}\cdot\alpha_{GW}\), but \(mc_{H}\) cannot be approximated within \(\frac{2k+1}{2k}\cdot\alpha_{GW}+\varepsilon\) for any \(\varepsilon>0\).
Dutton and Brigham [10] show that \(e(H)\) admits an upper bound of asymptotic order \(n^{1+2/(g-1)}\). This lets us say that
\[\textrm{{H\aa}}(H)\sim 1-\frac{n^{2}-2\cdot n^{1+2/(g-1)}}{n^{2}}\cdot\left(1- \frac{c}{n^{2}\log n}\right)=\]
\[=\frac{cn^{2}+2n^{(3g-1)/(g-1)}\log n-2n^{(g+1)/(g-1)}c}{n^{4}\log n}=\]
\[=\frac{c}{n^{2}\log n}+\frac{2(n^{g/(g-1)})^{3}}{n^{4}n^{1/(g-1)}}-\frac{2n^{g /(g-1)}n^{1/(g-1)}c}{n^{4}\log n}.\]
∎
If we restrict ourselves to planar graphs, then it is possible to show the following:
Proposition 3: _Let \(H\) be a planar graph with girth at least \(g=\frac{20k-2}{3}\). If \(v(H)=n\), then \(FJ_{2}(H)\geq\frac{2k}{2k+1}\cdot\alpha_{GW}\) and \(\textrm{{H\aa}}(H)\leq\frac{6}{n}-\frac{12}{n^{2}}+\frac{c}{n^{2}\log n}-\frac {6c}{n^{3}\log n}+\frac{12c}{n^{4}\log n}\). \(mc_{H}\) cannot be approximated within \(\frac{2k+1}{2k}\cdot\alpha_{GW}+\varepsilon\) for any \(\varepsilon>0\)._
Proof: Borodin et al. [7] have proved that \(H\) is circularly \((2+\frac{1}{k})\)-colourable, which is equivalent to saying that there exists a homomorphism from \(H\) to \(C_{2k+1}\). The proof then proceeds as for Proposition 2.
The planar graph \(H\) cannot have more than \(3n-6\) edges so \(\textrm{{H\aa}}(H)\) is bounded from above by
\[1-\frac{n^{2}-2(3n-6)}{n^{2}}\cdot\left(1-\frac{c}{n^{2}\log n}\right)=\]
\[=\frac{cn^{2}-6nc+12c+6n^{3}\log n-12n^{2}\log n}{n^{4}\log n}=\]
\[=\frac{6}{n}-\frac{12}{n^{2}}+\frac{c}{n^{2}\log n}-\frac{6c}{n^{3}\log n}+ \frac{12c}{n^{4}\log n}.\]
(In fact, \(H\) contains no more than \(\max\{g(n-2)/(g-2),n-1\}\) edges, but using this only makes for a more convoluted expression to study.) ∎
Proposition 3 can be strengthened and extended in different ways: one is to consider a result by Dvořák et al. [11]. They have proved that every planar graph \(H\) of odd-girth at least 9 is homomorphic to the Petersen graph \(P\). The Petersen graph is edge-transitive and it is known (cf. [3]) that the bipartite density of \(P\) is \(4/5\) or, in other words, \(s(K_{2},P)=4/5\). Consequently, \(mc_{H}\) can be approximated within \(\frac{4}{5}\cdot\alpha_{GW}\) but not within \(\frac{4}{5}\cdot\alpha_{GW}+\varepsilon\) for any \(\varepsilon>0\). This is better than Proposition 3 for planar graphs with girth strictly less than 13.
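The comparison in the last sentence can be made concrete with a small calculation: for a planar graph of girth \(g\) (and odd girth at least 9), take the largest \(k\) allowed by Proposition 3, i.e. \(\lfloor(3g+2)/20\rfloor\), and compare the resulting guarantee \(\frac{2k}{2k+1}\alpha_{GW}\) with the Petersen-based \(\frac{4}{5}\alpha_{GW}\). The value of \(\alpha_{GW}\) used below is only a numerical approximation of the Goemans-Williamson constant.

```python
ALPHA_GW = 0.87856  # numerical approximation of the Goemans-Williamson constant

# Proposition 3 (via odd cycles) versus the Petersen-graph route, as a function
# of the girth g of a planar graph with odd girth at least 9.
for g in (9, 11, 13, 15, 21, 27):
    k = (3 * g + 2) // 20                     # largest k with (20k - 2)/3 <= g
    prop3 = 2 * k / (2 * k + 1) * ALPHA_GW
    petersen = 4 / 5 * ALPHA_GW
    better = "Petersen" if petersen > prop3 else "Proposition 3" if prop3 > petersen else "tie"
    print(f"girth {g:2d}: Prop. 3 -> {prop3:.4f}, Petersen -> {petersen:.4f} ({better})")
```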
Another way of extending Proposition 3 is to consider graphs embeddable on higher-genus surfaces. For instance, the proposition holds for graphs embeddable on the projective plane, and it also holds for graphs of girth _strictly_ greater than \(\frac{20k-2}{3}\) whenever the graphs are embeddable on the torus or the Klein bottle. These bounds are direct consequences of results in Borodin et al. [7].
We conclude the section by looking at a class of graphs that have small girth. Let \(0<\beta<1\) be the approximation threshold for \(mc_{3}\), i.e. \(mc_{3}\) is approximable within \(\beta\) but not within \(\beta+\varepsilon\) for any \(\varepsilon>0\). Currently, we know that \(\alpha_{3}\leq 0.836008\leq\beta\leq\frac{102}{103}\) [9, 22]. The wheel graphs \(W_{k}\) from Example 1 are homomorphically equivalent to \(K_{3}\) for odd \(k\), and we conclude (by Lemma 3) that \(mc_{W_{k}}\) has the same approximability properties as \(mc_{3}\) in this case. For even \(k\geq 6\), however, \(W_{k}\) is not homomorphically equivalent to \(K_{3}\).
Proposition 4: _For \(k\geq 6\) and even, \(FJ_{3}(W_{k})\geq\alpha_{3}\cdot\frac{2k-3}{2k-2}\) but \(mc_{W_{k}}\) is not approximable within \(\beta\cdot\frac{2k-2}{2k-3}\). \(\textrm{{H\aa}}(W_{k})=\frac{4}{k}-\frac{4}{k^{2}}+\frac{c}{k^{2}\log k}-\frac {4c}{k^{3}\log k}+\frac{4c}{k^{4}\log k}\)._
Proof: We know from Example 1 that \(K_{3}\to W_{k}\) and \(s(K_{3},W_{k})=\frac{2k-3}{2k-2}\). The first part of the result follows by an application of Lemma 3.
Substituting \(d=k\) and \(e(W_{k})=2(k-1)\) into Håstad's guarantee gives
\[\textrm{{H\aa}}(W_{k})=1-\frac{t(W_{k})}{d^{2}}\cdot\left(1-\frac{c}{d^{2}\log d}\right)=1-\frac{k^{2}-4(k-1)}{k^{2}}\cdot\left(1-\frac{c}{k^{2}\log k}\right)\]
\[=\frac{k^{2}c+4k^{3}\log k-4kc-4k^{2}\log k+4c}{k^{4}\log k}=\frac{4}{k}-\frac{4}{k^{2}}+\frac{c}{k^{2}\log k}-\frac{4c}{k^{3}\log k}+\frac{4c}{k^{4}\log k}.\]
∎
We see that \(FJ_{3}(W_{k})\rightarrow\alpha_{3}\) when \(k\rightarrow\infty\), while \(\textrm{{H\aa}}(W_{k})\) tends to 0.
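This limiting behaviour is easy to illustrate numerically; the sketch below uses the value \(0.836008\) quoted earlier as a stand-in for \(\alpha_{3}\) and a placeholder value for Håstad's constant \(c\).

```python
import math

ALPHA_3 = 0.836008  # stand-in for alpha_3, taken from the bound quoted in the text
c = 0.1             # placeholder for the constant in Håstad's guarantee

# FJ_3(W_k) lower bound versus Ha(W_k) (Proposition 4) for even wheels W_k.
for k in (6, 10, 50, 1000):
    fj = ALPHA_3 * (2 * k - 3) / (2 * k - 2)
    ha = (4 / k - 4 / k**2 + c / (k**2 * math.log(k))
          - 4 * c / (k**3 * math.log(k)) + 4 * c / (k**4 * math.log(k)))
    print(f"W_{k}: FJ_3 >= {fj:.4f}, Ha = {ha:.5f}")
```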
### Dense and Random Graphs
We will now study _dense_ graphs, i.e. graphs \(H\) containing \(\Theta(v(H)^{2})\) edges. For a graph \(H\) on \(n\) vertices, we obviously have \(H\to K_{n}\). If we assume that \(\omega(H)\geq r\), then we also have \(K_{r}\to H\). Thus, if we could determine \(s(K_{r},K_{n})\), then we could use Lemma 7 to calculate a bound on \(FJ_{n}(H)\).
Let \(\omega(G)\) denote the size of the largest clique in \(G\) and \(\chi(G)\) denote the chromatic number of \(G\). The Turán graph \(T(n,r)\) is a graph formed by partitioning a set of \(n\) vertices into \(r\) subsets, with sizes as equal as possible, and connecting two vertices whenever they belong to different subsets. Turán graphs have the following properties [31]:
* \(e(T(n,r))=\lfloor\left(1-\frac{1}{r}\right)\cdot\frac{n^{2}}{2}\rfloor\);
* \(\omega(T(n,r))=\chi(T(n,r))=r\);
* if \(G\) is a graph such that \(e(G)>e(T(v(G),r))\), then \(\omega(G)>r\).
Lemma 9: _Let \(r\) and \(n\) be positive integers. Then,_
\[s(K_{r},K_{n})=e(T(n,r))/e(K_{n})\]
Proof: Since \(K_{n}\) is edge-transitive, it suffices to show that \(mc_{r}(K_{n},1/e(K_{n}))=e(T(n,r))/e(K_{n})\). Assume \(mc_{r}(K_{n},1/e(K_{n}))\cdot e(K_{n})>e(T(n,r))\). This implies that there exists an \(r\)-partite subgraph \(G\) of \(K_{n}\), on \(n\) vertices, with strictly more than \(e(T(n,r))\) edges. This is impossible: it would force \(\omega(G)>r\) and, consequently, \(\chi(G)>r\), contradicting that \(G\) is \(r\)-partite. Thus, \(mc_{r}(K_{n},1/e(K_{n}))\cdot e(K_{n})=e(T(n,r))\), because \(T(n,r)\) is an \(r\)-partite subgraph of \(K_{n}\). ∎
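Lemma 9 can be illustrated numerically: a balanced partition of the vertices of \(K_{n}\) into \(r\) parts realises \(T(n,r)\), so its edge count can be compared directly with the closed form \(\lfloor(1-\frac{1}{r})\frac{n^{2}}{2}\rfloor\) and turned into the fraction \(s(K_{r},K_{n})\). The sketch below (the helper `turan_edges_direct` is mine) does this for a few small cases; maximality over all \(r\)-partite subgraphs is exactly the content of Turán's theorem and is not re-verified here.

```python
import math
from itertools import combinations

def turan_edges_direct(n, r):
    """Edge count of the Turán graph built from a near-balanced r-partition."""
    parts = [range(i, n, r) for i in range(r)]
    part_of = {v: i for i, p in enumerate(parts) for v in p}
    return sum(1 for u, v in combinations(range(n), 2) if part_of[u] != part_of[v])

for n, r in [(5, 2), (7, 3), (10, 4)]:
    direct = turan_edges_direct(n, r)
    closed = math.floor((1 - 1 / r) * n**2 / 2)
    s = direct / (n * (n - 1) / 2)
    print(f"n={n}, r={r}: e(T(n,r)) = {direct} (closed form {closed}), s(K_{r},K_{n}) = {s:.3f}")
```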
Now, we are ready to prove the following:
Proposition 5: _Let \(v(H)=n\) and pick \(r\in\mathbb{N}\), \(\sigma\in\mathbb{R}\) such that_
\[\left\lfloor\left(1-\frac{1}{r}\right)\cdot\frac{n^{2}}{2}\right\rfloor\leq \sigma\cdot n^{2}=e(H)\leq\frac{n(n-1)}{2}.\]
_Then,_
\[FJ_{n}(H)\geq\alpha_{n}\cdot\frac{2\left\lfloor\left(1-\frac{1}{r}\right)\cdot \frac{n^{2}}{2}\right\rfloor}{n\cdot(n-1)}\sim 1-\frac{1}{r}-\frac{1}{n}+\frac {2\ln n}{n(n-1)}\]
\[\textrm{{H\aa}}(H)=2\sigma+\frac{c}{n^{2}\log n}-\frac{2\sigma\cdot c}{n^{2} \log n}.\]
Proof: We have \(K_{r}\to H\) by Turán's theorem, and \(H\to K_{n}\) holds trivially since \(v(H)=n\). By Lemma 9,
\[s(K_{r},K_{n})=\frac{2\left\lfloor\left(1-\frac{1}{r}\right)\cdot\frac{n^{2}}{ 2}\right\rfloor}{n\cdot(n-1)}.\]
The first part of the result follows from Lemma 3 since \(d(H,K_{n})\leq d(K_{r},K_{n})=1-s(K_{r},K_{n})\) and some straightforward calculations.
\[\textrm{{H\aa}}(H)=1-\frac{n^{2}-2\sigma\cdot n^{2}}{n^{2}}\cdot\left(1-\frac{c}{n^{2}\log n}\right)=\]
\[=\frac{c+2\sigma\cdot n^{2}\log n-2\sigma\cdot c}{n^{2}\log n}=\frac{c}{n^{2}\log n}+2\sigma-\frac{2\sigma\cdot c}{n^{2}\log n}.\]
∎
Note that when \(r\) and \(n\) grow, \(FJ_{n}(H)\) tends to \(1\). This means that, asymptotically, we cannot do much better. If we compare the expression for \(FJ_{n}(H)\) with the inapproximability bound for \(mc_{n}\) (Theorem 3.4), we see that all we could hope for is a faster convergence towards \(1\). As \(\sigma\) satisfies \(\left(1-\frac{1}{r}\right)\cdot\frac{1}{2}\leq\sigma\leq\left(1-\frac{1}{n} \right)\cdot\frac{1}{2}\), we conclude that \(\textrm{{H\aa}}(H)\) also tends to \(1\) as \(r\) and \(n\) grow. To get a better grip on how \(\textrm{{H\aa}}(H)\) behaves we look at two extreme cases.
For a maximal \(\sigma=\left(1-\frac{1}{n}\right)\cdot\frac{1}{2}\), \(\textrm{{H\aa}}(H)\) becomes
\[1-\frac{1}{n}+\frac{c}{n^{3}\log n}.\]
On the other hand, for a minimal \(\sigma=\left(1-\frac{1}{r}\right)\cdot\frac{1}{2}\), this guarantee is
\[1-\frac{1}{r}+\frac{c}{rn^{2}\log n}.\]
At the same time, it is easy to see that Frieze and Jerrum’s algorithm approximates these two extreme cases within \(\alpha_{n}\) (since, in this case, \(H\equiv K_{n}\)) and \(\alpha_{r}\) (since Turán’s theorem tells us that \(H\to K_{r}\) holds in this case), respectively. Our conclusion is that Frieze and Jerrum’s and Håstad’s algorithms perform almost equally well on these graphs asymptotically.
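For a concrete feel of this asymptotic comparison, the sketch below uses the asymptotic form \(1-1/n+2\ln n/n^{2}\) as a stand-in for \(\alpha_{n}\) and a placeholder value for Håstad's constant, and evaluates the two guarantees of Proposition 5 at the minimal density \(\sigma=(1-1/r)/2\).

```python
import math

c = 0.1  # placeholder for the constant in Håstad's guarantee

def alpha_n(n):
    # Asymptotic stand-in 1 - 1/n + 2 ln n / n^2 for Frieze and Jerrum's ratio.
    return 1 - 1 / n + 2 * math.log(n) / n**2

# Proposition 5 at the minimal admissible density sigma = (1 - 1/r)/2.
for n, r in [(100, 5), (1000, 20)]:
    sigma = (1 - 1 / r) / 2
    fj = alpha_n(n) * 2 * math.floor((1 - 1 / r) * n**2 / 2) / (n * (n - 1))
    ha = 2 * sigma + c / (n**2 * math.log(n)) - 2 * sigma * c / (n**2 * math.log(n))
    print(f"n = {n:4d}, r = {r:2d}: FJ_n >= {fj:.4f}, Ha = {ha:.4f}")
```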
Another way to study dense graphs is via random graphs. Let \({\cal G}(n,p)\) denote the random graph on \(n\) vertices in which every edge is chosen randomly and independently with probability \(p=p(n)\). We say that \({\cal G}(n,p)\) has a property \(A\) _asymptotically almost surely_ (a.a.s.) if the probability that it satisfies \(A\) tends to \(1\) as \(n\) tends to infinity. Here, we let \(p\) be a constant with \(0<p<1\).
For \(G\in{\cal G}(n,p)\) it is well known that a.a.s. \(\omega(G)\) assumes one of at most two values around \(\frac{2\ln n}{\ln(1/p)}\) [5, 30]. It is also known that, almost surely \(\chi(G)\sim\frac{n}{2\ln(np)}\ln\left(\frac{1}{1-p}\right)\), as \(np\rightarrow\infty\) [4, 29]. Let us say that \(\chi(G)\) is concentrated in width \(s\) if there exists \(u=u(n,p)\) such that a.a.s. \(u\leq\chi(G)\leq u+s\). Alon and Krivelevich [2] have shown that for every constant \(\delta>0\), if \(p=n^{-1/2-\delta}\) then \(\chi(G)\) is concentrated in width \(s=1\). That is, almost surely, the chromatic number takes one of two values.
Proposition 6: _Let \(H\in{\cal G}(n,p)\). When \(np\rightarrow\infty\), \(FJ_{m}(H)\sim 1-\frac{2}{m}+\frac{2\ln m}{m^{2}}+\frac{1}{m^{2}}-\frac{2\ln m} {m^{3}}\), where \(m=\omega(H)\). \(\textrm{{H\aa}}(H)=p-\frac{p}{n}+(1-p)\cdot\frac{c}{n^{2}\log n}+\frac{pc}{n^{ 3}\log n}\)._
Proof: Let \(k=\chi(H)\).
\[FJ_{m}(H)\geq\alpha_{m}\cdot s(K_{m},K_{k})\sim\left(1-\frac{1}{m}+\frac{2\ln m }{m^{2}}\right)\cdot\frac{2\left\lfloor\left(1-\frac{1}{m}\right)\cdot\frac{k^ {2}}{2}\right\rfloor}{k(k-1)}\sim\]
\[\sim\frac{(m^{2}-m+2\ln m)(m-1)k}{m^{3}(k-1)}=\]
\[=\frac{k}{k-1}-\frac{2k}{m(k-1)}+\frac{k}{m^{2}(k-1)}+\frac{2k\ln m}{m^{2}(k-1 )}-\frac{2k\ln m}{m^{3}(k-1)}\ (**)\]
If \(n\) is large, then \(k\gg m\) and
\[(**)\sim 1-\frac{2}{m}+\frac{2\ln m}{m^{2}}+\frac{1}{m^{2}}-\frac{2\ln m}{m^{3 }}.\]
The expected number of edges of a graph \(H\in{\cal G}(n,p)\) is \(\binom{n}{2}p\). Substituting \(d=n\) and \(e(H)=\binom{n}{2}p\) into Håstad's guarantee gives
\[\textrm{{H\aa}}(H)=1-\frac{t(H)}{d^{2}}\cdot\left(1-\frac{c}{d^{2}\log d}\right)=1-\frac{n^{2}-(n^{2}-n)p}{n^{2}}\cdot\left(1-\frac{c}{n^{2}\log n}\right)\]
\[=1-\left(1-p+\frac{p}{n}\right)\cdot\left(1-\frac{c}{n^{2}\log n}\right)=\frac{pn^{3}\log n+nc-pnc-pn^{2}\log n+pc}{n^{3}\log n}\]
\[=p-\frac{p}{n}+(1-p)\cdot\frac{c}{n^{2}\log n}+\frac{pc}{n^{3}\log n}.\]
∎
We see that, in the limiting case, \(\textrm{{H\aa}}(H)\) tends to \(p\), while \(FJ_{m}(H)\) tends to \(1\). Again, this means that, for large enough graphs, we cannot do much better. With a better analysis, one could possibly reach an expression for \(FJ_{m}(H)\) that has a faster convergence rate.
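A numerical reading of Proposition 6 (with \(p=1/2\), a placeholder Håstad constant, and the typical clique number \(m\approx 2\ln n/\ln(1/p)\) from the discussion above) makes the gap between the two guarantees visible:

```python
import math

c = 0.1  # placeholder for the constant in Håstad's guarantee
p = 0.5  # constant edge probability

# Proposition 6 in numbers: typical clique number m, asymptotic FJ_m(H), and Ha(H).
for n in (10**3, 10**6, 10**9):
    m = 2 * math.log(n) / math.log(1 / p)        # typical clique number of G(n, p)
    fj = 1 - 2 / m + 2 * math.log(m) / m**2 + 1 / m**2 - 2 * math.log(m) / m**3
    ha = p - p / n + (1 - p) * c / (n**2 * math.log(n)) + p * c / (n**3 * math.log(n))
    print(f"n = {n:>10d}: m ~ {m:5.1f}, FJ_m(H) ~ {fj:.4f}, Ha(H) ~ {ha:.4f}")
```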
Of course, it is interesting to look at what happens for graphs \(H\in{\cal G}(n,p)\) where \(np\) does not tend to \(\infty\) when \(n\rightarrow\infty\). The following theorem lets us do this.
Theorem 3.5 (Erdős and Rényi [13]): _Let \(c\) be a positive constant and \(p=\frac{c}{n}\). If \(c<1\), then a.a.s. no component in \({\cal G}(n,p)\) contains more than one cycle, and no component has more than \(\frac{\ln n}{c-1-\ln c}\) vertices._
Now we see that if \(np\rightarrow\varepsilon\) when \(n\rightarrow\infty\) and \(0<\varepsilon<1\), then \({\cal G}(n,p)\) almost surely consists of components with at most one cycle. Thus, each component resembles a cycle with, possibly, trees attached to certain cycle vertices, and each component containing a cycle is homomorphically equivalent to that cycle (tree components are homomorphically equivalent to \(K_{2}\)). Since we know from Section 3.1 that Frieze and Jerrum’s algorithm performs better than Håstad’s algorithm on cycle graphs, it follows that the same relationship holds in this part of the \({\cal G}(n,p)\) spectrum.
## 4 Conclusions and Open Problems
We have seen that applying Frieze and Jerrum’s algorithm to Max \(H\)-Col gives results comparable to, or better than, those obtained by applying Håstad’s Max 2-Csp algorithm for the classes of graphs we have considered. One possible explanation for this is that the analysis of the Max 2-Csp algorithm only aims to prove that it beats a random solution in expectation, which may leave room for strengthening the approximation guarantee. At the same time, we are probably overestimating the distance between the graphs. It is likely that both results can be improved.
Kaporis et al. [23] have shown that \(mc_{2}\) is approximable within \(0.952\) for any given average degree \(d\) and asymptotically almost all random graphs \(G\) in \({\cal G}(n,m=\left\lfloor\frac{d}{2}n\right\rfloor)\), where \({\cal G}(n,m)\) is the probability space of random graphs on \(n\) vertices and \(m\) edges selected uniformly at random. In a similar vein, Coja-Oghlan et al. [8] give an algorithm that approximates \(mc_{k}\) within \(1-O(1/\sqrt{np})\) in expected polynomial time, for graphs from \({\cal G}(n,p)\). It would be interesting to know if these results could be carried further, to other graphs \(G\), so that better approximability bounds on Max \(H\)-Col, for \(H\) such that \(G\to H\), could be achieved.
Erdős [12] has proved that for any positive integers \(k\) and \(l\) there exists a graph of chromatic number \(k\) and girth at least \(l\). It is obvious that such graphs cannot be sandwiched between \(K_{2}\) and a cycle as was the case of the graphs of high girth in Section 3.1. A different idea is thus required to deal with these graphs. In general, to apply our method more precisely, we need a better understanding of the structure of \({\cal C}_{S}\) and how this interacts with our metric \(d\).
The idea of defining a metric on a space of problems which relates their approximability can be extended to more general cases. It should not prove too difficult to generalise the framework introduced in this paper to Max CSP over directed graphs or even languages consisting of a single, finitary relation. How far can this generalisation be carried out? Could it provide any insight into the approximability of Max CSP on arbitrary constraint languages?
When considering inapproximability, we have relied strongly on the Unique Games Conjecture; hence, we are part of the growing body of researchers interested in seeing the UGC settled. We note, though, that weaker inapproximability results exist for both Max cut [19] and Max \(k\)-cut [22] and that they are applicable in our setting. We want to emphasise that our method is not _per se_ dependent on the truth of the UGC.
## References
* [1] N. Alon. Bipartite subgraphs. _Combinatorica_, 16:301–311, 1996.
* [2] N. Alon and M. Krivelevich. The concentration of the chromatic number of random graphs. _Combinatorica_, 17:303–313, 1997.
* [3] A. Berman and X.-D. Zhang. Bipartite density of cubic graphs. _Discrete Mathematics_, 260:27–35, 2003.
* [4] B. Bollobás. The chromatic number of random graphs. _Combinatorica_, 8(1):49–55, 1988.
* [5] B. Bollobás and P. Erdős. Cliques in random graphs. _Mathematical Proceedings of the Cambridge Philosophical Society_, 80:419–427, 1976.
* [6] J. Bondy and S. Locke. Largest bipartite subgraphs in triangle-free graphs with maximum degree three. _Journal of Graph Theory_, 10:477–504, 1986.
* [7] O. Borodin, S.-J. Kim, A. Kostochka, and D. West. Homomorphisms from sparse graphs with large girth. _Journal of Combinatorial Theory, ser. B_, 90:147–159, 2004.
* [8] A. Coja-Oghlan, C. Moore, and V. Sanwalani. _MAX_ k-_CUT_ and approximating the chromatic number of random graphs. _Random Structures and Algorithms_, 28:289–322, 2005.
* [9] E. de Klerk, D. Pasechnik, and J. Warners. Approximate graph colouring and MAX-\(k\)-CUT algorithms based on the \(\theta\) function. _Journal of Combinatorial Optimization_, 8:267–294, 2004.
* [10] R. Dutton and R. Brigham. Edges in graphs with large girth. _Graphs and Combinatorics_, 7(4):315–321, 1991.
* [11] Z. Dvořák, R. Škrekovski, and T. Valla. Planar graphs of odd-girth at least 9 are homomorphic to the Petersen graph. To appear in SIAM Journal on Discrete Mathematics.
* [12] P. Erdős. Graph theory and probability. _Canadian Journal of Mathematics_, 11:34–38, 1959.
* [13] P. Erdős and A. Rényi. On the evolution of random graphs. _Publications of the Mathematical Institute of the Hungarian Academy of Sciences_, 5:17–61, 1960.
* [14] A. Frieze and M. Jerrum. Improved approximation algorithms for MAX \(k\)-CUT and MAX BISECTION. _Algorithmica_, 18(1):67–81, 1997.
* [15] M. Goemans and D. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. _Journal of the ACM_, 42:1115–1145, 1995.
* [16] P. Hell and J. Nešetřil. On the complexity of \(H\)-coloring. _Journal of Combinatorial Theory, Series B_, 48:92–110, 1990.
* [17] P. Hell and J. Nešetřil. _Graphs and Homomorphisms (Oxford Lecture Series in Mathematics and Its Applications)_. Oxford Lecture Series in Mathematics and Its Applications. Oxford University Press, 2004.
* [18] G. Hopkins and W. Staton. Extremal bipartite subgraphs of cubic triangle-free graphs. _Journal of Graph Theory_, 6:115–121, 1982.
* [19] J. Håstad. Some optimal inapproximability results. _Journal of the ACM_, 48:798–869, 2001.
* [20] J. Håstad. Every 2-CSP allows nontrivial approximation. In _Proceedings of the 37th Annual ACM Symposium on the Theory of Computing (STOC-2005)_, pages 740–746, 2005.
* [21] P. Jonsson, A. Krokhin, and F. Kuivinen. Ruling out polynomial-time approximation schemes for hard constraint satisfaction problems. In _Proceedings of the 2nd International Computer Science Symposium in Russia (CSR-2007)_, pages 182–193, 2007. Full version available at http://www.dur.ac.uk/andrei.krokhin/papers/hardgap.pdf.
* [22] V. Kann, S. Khanna, J. Lagergren, and A. Panconesi. On the hardness of approximating MAX \(k\)-CUT and its dual. _Chicago Journal of Theoretical Computer Science_, 1997(2), 1997.
* [23] A. Kaporis, L. Kirousis, and E. Stavropoulos. Approximating almost all instances of Max-cut within a ratio above the Håstad threshold. In _Proceedings of the 14th Annual European Symposium on Algorithms (ESA-2006)_, pages 432–443, 2006.
* [24] S. Khot. On the power of unique 2-prover 1-round games. In _Proceedings of the 34th Annual ACM Symposium on the Theory of Computing (STOC-2002)_, pages 767–775, 2002.
* [25] S. Khot, G. Kindler, E. Mossel, and R. O’Donnell. Optimal inapproximability results for MAX-CUT and other two-variable CSPs? _SIAM Journal on Computing_, 37(1):319–357, 2007.
* [26] H.-J. Lai and B. Liu. Graph homomorphism into an odd cycle. _Bulletin of the Institute of Combinatorics and its Applications_, 28:19–24, 2000.
* [27] M. Langberg, Y. Rabani, and C. Swamy. Approximation algorithms for graph homomorphism problems. In _Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques_, pages 176–187. Springer, 2006.
* [28] S. Locke. A note on bipartite subgraphs of triangle-free regular graphs. _Journal of Graph Theory_, 14:181–185, 1990.
* [29] T. Łuczak. The chromatic number of random graphs. _Combinatorica_, 11(1):45–54, 1991.
* [30] D. Matula. The employee party problem. _Notices of the American Mathematical Society_, 19, 1972. A – 382.
* [31] P. Turán. On an extremal problem in graph theory. _Matematikai és Fizikai Lapok_, 48:436–452, 1941.
# Wind braking of magnetars
H. Tong¹², R. X. Xu³, L. M. Song² and G. J. Qiao³
###### Abstract
Considering recent observations challenging the traditional magnetar model, we explore the wind braking of magnetars. There is evidence for strong multipole magnetic fields in active magnetars, but the dipole field inferred from spin down measurements may be strongly biased by a particle wind. Recent challenging observations of magnetars may be explained naturally in the wind braking scenario: (1) The supernova energies of magnetars are of normal value; (2) The non-detection in _Fermi_ observations of magnetars; (3) The problem posed by the low-magnetic field soft gamma-ray repeaters; (4) The relation between magnetars and high magnetic field pulsars; (5) A decreasing period derivative during magnetar outbursts. Transient magnetars with \(L_{\rm x}<-\dot{E}_{\rm rot}\) may still be magnetic dipole braking. This may explain why low luminosity magnetars are more likely to have radio emissions. A strong reduction of dipole magnetic field is possible only when the particle wind is very collimated at the star surface. A small reduction of dipole magnetic field may result from detailed considerations of magnetar wind luminosity. In the wind braking scenario, magnetars are neutron stars with strong multipole field. For some sources, a strong dipole field may be no longer needed. A magnetism-powered pulsar wind nebula will be one of the consequences of wind braking. For a magnetism-powered pulsar wind nebula, we should see a correlation between the nebula luminosity and the magnetar luminosity. Under the wind braking scenario, a braking index smaller than three is expected. Future braking index measurement of a magnetar may tell us whether magnetars are wind braking or magnetic dipole braking.
pulsars: general—stars: magnetars—stars: neutron
## 1 Introduction
Anomalous X-ray pulsars (AXPs) and soft gamma-ray repeaters (SGRs) are magnetar candidates, i.e., neutron stars powered by strong magnetic field decay (Thompson & Duncan 1995, 1996). In studying them, the assumption of magnetic dipole braking is often employed (Duncan & Thompson 1992; Kouveliotou et al. 1998). However, the magnetic dipole braking mechanism is originally designed for rotation-powered pulsars. Since both the persistent and burst emissions of magnetars are from a different energy reservoir (magnetic energy instead of rotational energy), it is possible that they have a different braking mechanism, e.g., wind braking (Harding et al. 1999; Thompson et al. 2000).
A strong dipole magnetic field obtained by assuming magnetic dipole braking is often taken as confirmation of a neutron star’s magnetar nature (\(B_{\rm dip}>B_{\rm QED}=4.4\times 10^{13}\,\rm G\), Kouveliotou et al. 1998). However, the magnetic dipole braking assumption will also result in several problems challenging the magnetar model (Mereghetti 2008; Tong & Xu 2011).
1. The spin down time scale of a newly born magnetar will be less than the shock breakout time due to the presence of a strong dipole magnetic field. This will make the supernovae associated with magnetars more energetic than canonical supernovae (Duncan & Thompson 1992). However, observations of supernova remnants associated with AXPs and SGRs show that the corresponding supernova energies are of canonical value (Vink & Kuiper 2006). This failed prediction of the magnetar model may be circumvented if the initial rotational energy of magnetars is carried away in non-electromagnetic form, e.g., gravitational waves (Dall’Osso et al. 2009). However, in Dall’Osso et al. (2009), a relatively low dipole magnetic field is also required (\(B_{\rm dip}\leq 10^{14}\,\rm G\)). If magnetars have a different braking mechanism and consequently their dipole magnetic field is much lower, this may explain their supernova energy problem.
2. If AXPs and SGRs are neutron stars with strong dipole field, then although they rotate rather slowly (periods: \(2\)–\(12\) seconds), they will also accelerate particles to very high energy. In the outer magnetosphere, these particles will emit high-energy gamma-rays which are detectable by \(\it Fermi\)-LAT (Cheng & Zhang 2001). This may be viewed as an independent measurement of a strong dipole magnetic field, i.e., through the unipolar induction effect. However, \(\it Fermi\)-LAT observations of all AXPs and SGRs show no significant detection (Sasmaz Mus & Gogus 2010; Abdo et al. 2010). Therefore, there are conflicts between the outer gap model in the case of magnetars and _Fermi_-LAT observations (Tong et al. 2010a, 2011). It is possible that magnetars have a different braking mechanism and their dipole magnetic field is not so strong.
3. In the traditional picture of the magnetar model, magnetars are young neutron stars with both strong dipole field and strong multipole field (Thompson & Duncan 1995, 1996; Thompson et al. 2002). The observation of the low magnetic field soft gamma-ray repeater SGR 0418+5729 has challenged the traditional magnetar prescription (Rea et al. 2010). This source tells us that magnetar-like activities (an anomalous X-ray luminosity or SGR-type bursts) do not require a strong dipole magnetic field. The timing of SGR Swift J1822.3–1606 further strengthens this point (Rea et al. 2012a). The originally required strong dipole magnetic field in most AXPs and SGRs mainly provides the braking torque. It is possible that not only SGR 0418+5729 but also many other AXPs and SGRs do not have a strong dipole magnetic field if they have a different braking mechanism.
4. There are high magnetic field rotation-powered pulsars (HBPSRs) along with magnetars (Ng & Kaspi 2011). Although they are close to each other on the \(P\)–\(\dot{P}\) diagram, they show very different timing behavior. The timing behaviors of HBPSRs are similar to that of normal pulsars (Ng & Kaspi 2011). Therefore, it is reasonable that they have the same braking mechanism as that of normal pulsars. However, magnetars are very noisy (Gavriil & Kaspi 2002; Woods et al. 2002; Archibald et al. 2008), and the period derivatives of magnetars can vary significantly (up to a factor of 10, Gavriil & Kaspi 2004; Camilo et al. 2007; Woods et al. 2007). Therefore, it is possible that magnetars have a different braking mechanism, e.g. wind braking. The variation of wind luminosity will cause the variation of their period derivatives.
All these issues are related to the dipole magnetic field and the braking mechanism of magnetars. A different braking mechanism of magnetars may help to solve these problems. Electrodynamics of magnetars show that they may have globally twisted magnetospheres (Thompson et al. 2002; Beloborodov & Thompson 2007). The twisted magnetosphere will enhance their spin-down torque and also modify their persistent emissions. Changes in their global magnetospheric structure will result in changes in their spin-down rate and persistent flux (Beloborodov 2009). In the case of wind braking, the large scale dipole field is unchanged. It is the change of the particle wind luminosity that causes the change of the spin-down rate. Since both their persistent emissions and the particle wind are magnetism-powered, it is natural that their spin-down behavior and persistent emissions are correlated. Based on previous studies (Harding et al. 1999; Thompson et al. 2000), we explore the wind braking of magnetars in more detail and apply it to all AXPs and SGRs. A comparison with up-to-date observations is also given.
Observations supporting the existence of a particle wind in magnetars are given in Section 2. Rotational energy loss rate due to a particle wind is calculated in Section 3. Several aspects of wind braking of magnetars are given in Section 4. Discussions and conclusions are presented in Section 5 and 6, respectively.
## 2 Existence of a particle wind
### Qualitative description of wind braking of magnetars
In the magnetic dipole braking scenario of normal pulsars, the star’s rotational energy is carried away by magnetic dipole radiation plus a rotation-powered particle wind (Michel 1969; Xu & Qiao 2001; Spitkovsky 2006). The rotational energy loss rate is quantitatively similar to the magnetic dipole radiation in vacuum (Xu & Qiao 2001; Spitkovsky 2006). The surface dipole magnetic field is almost the same as that of magnetic dipole braking in vacuum. A particle wind mainly causes higher order modifications of pulsar timing, e.g. braking index (Michel 1969; Manchester et al. 1985; Xu & Qiao 2001; Contopoulos & Spitkovsky 2006; Wang et al. 2012) and timing noise (Lyne et al. 2010; Liu et al. 2011). Around young neutron stars, we may see a rotation-powered pulsar wind nebula (Gaensler & Slane 2006).
In the case of magnetars, the star’s persistent X-ray luminosity is much higher than its rotational energy loss rate. Since the persistent X-ray luminosity is from magnetic field decay, it is possible that a particle flow (i.e., a magnetism-powered particle wind) is also produced during the decay of the star’s magnetic field. The luminosity of this particle wind¹ can be as high as the star’s persistent X-ray luminosity, and therefore it can also be much higher than the star’s rotational energy loss rate (Duncan 2000; Section 2.3 below). This particle wind will “comb out” the magnetic field lines in the closed field line regions (Harding et al. 1999). The net result is an enhanced rotational energy loss rate for a given dipole magnetic field (Harding et al. 1999; Thompson et al. 2000; Section 3 below). In this “wind aided” spin down scenario, the corresponding dipole magnetic field will be much lower than in the magnetic dipole braking case (Harding et al. 1999; Section 4 below). Wind braking of magnetars will also help us to explain recent observations challenging the traditional magnetar model (Sections 4 and 5 below).
Below, we assume that the star’s dipole magnetic field is constant during its life time. It is the evolution and variation of particle wind luminosity that cause the evolution and variation of AXPs/SGRs’ timing properties. For example, a short term variation of particle wind will cause a variation of the star’s period derivative and also contribute to its timing noise.
### Observational clues for the existence of a particle wind
The existence of a (rotation-powered) particle wind in normal pulsars is well established. The observations of intermittent pulsars give direct support for the existence of a particle wind (Kramer et al. 2006; Camilo et al. 2012). However, the existence of a magnetism-powered particle wind in magnetars is still unknown. Below we will give several observations of AXPs and SGRs which may provide some hints for the existence of a particle wind.
1. The AXP 1E 2259+586 experiences an enhanced period of spin down during outburst (Kaspi et al. 2003). Variations of period derivative are also seen in AXP 1E 1048.1–5937 (Gavriil & Kaspi 2004), AXP XTE J1810–197 (Camilo et al. 2007), SGR 1806–20 (Woods et al. 2007) and AXP 1E 1547.0–5408 (Camilo et al. 2008) etc. A decreasing period derivative is also observed in the radio-loud magnetar, accompanied by a decaying X-ray luminosity and radio luminosity (Levin et al. 2012; Anderson et al. 2012). This may be due to a decaying particle wind during outbursts. In the absence of a strong particle wind, an untwisting magnetosphere of a magnetar may explain the decreasing period derivative (Beloborodov 2009). However, since the dipole field is of large scale compared with the multipole field, a varying dipole field (especially on short time scales) is hard to accomplish (Camilo et al. 2007; Levin et al. 2012). In the case of wind braking of magnetars, the global dipole field is unchanged. Since the particle wind may be the consequence of small amplitude seismic activities (Thompson & Duncan 1996), it can vary dramatically even on short time scales. A varying particle wind will cause a varying period derivative. The long term decay of particle wind luminosity during outburst can account for the decreasing period derivative (e.g., AXP XTE J1810–197, Camilo et al. 2007; the radio-loud magnetar, Levin et al. 2012). During an outburst, we should expect the wind luminosity to first increase and then decrease. This will cause the period derivative to first increase and then decrease, which may be the case for SGR 1806–20 (Woods et al. 2007) and AXP 1E 1547.0–5408 (before outburst, Camilo et al. 2008).
2. AXPs and SGRs have a higher level of timing noise than normal pulsars (Gavriil & Kaspi 2002; Woods et al. 2002; Archibald et al. 2008). The timing noise may be correlated with period derivatives. The timing noise of normal pulsars may be the result of a varying (rotation-powered) particle wind (Lyne et al. 2010; Liu et al. 2011). Then it is possible that AXPs and SGRs are also braked down by a particle wind. Since AXPs and SGRs are magnetism-powered, the particle wind may also come from magnetic field decay, i.e., a magnetism-powered particle wind. This magnetism-powered particle wind may vary significantly with time, similar to the magnetar’s persistent X-ray luminosity. Then it may cause a higher level of timing noise in magnetars than that in normal pulsars and HBPSRs.
3. If AXPs and SGRs harbor a strong enough particle wind (either rotation-powered or magnetism-powered), then we should see a pulsar wind nebula around the putative star. If the particle wind is magnetism-powered, as is their persistent X-ray emission, then we should see some correlation between the nebula luminosity and the stellar luminosity. A possible extended emission is found around AXP 1E 1547.0–5408 (Vink & Bamba 2009). The luminosity of the extended emission is correlated with the star’s luminosity (Olausen et al. 2011). Therefore, the extended emission around AXP 1E 1547.0–5408 may be a magnetism-powered pulsar wind nebula instead of a dust scattering halo. If this is confirmed in the future, it will be strong evidence for the existence of a magnetism-powered particle wind in magnetars. A magnetism-powered pulsar wind nebula may also accelerate particles to very high energy and radiate high energy photons. An extended emission and a TeV source are both seen in the case of SGR Swift J1834.9–0846 (Kargaltsev et al. 2012). If the extended emission is found to be a pulsar wind nebula and the association with the TeV source is confirmed, then it is also likely to be a magnetism-powered pulsar wind nebula². A candidate pulsar wind nebula which may contain a magnetic energy contribution is seen around RRAT J1819–1458 (Rea et al. 2009).
In summary, there are many uncertainties and ambiguities if we attribute the above observations to a particle wind in magnetars. However, we do not know whether AXPs and SGRs have a (magnetism-powered) particle wind or not. The possibility of such a particle wind can not be ruled out by present observations either. A magnetism-powered particle wind in magnetars is helpful to our understanding of the different observational aspects stated above. Therefore, the above observational facts may give us some clues for the existence of a particle wind in magnetars. Whether a particle wind really exists or not can be tested by future studies.
### Estimation of wind luminosity
In the magnetar model, the bursts and outbursts are related to the magnetar’s seismic activities (Thompson & Duncan 1995, 1996). If the observable bursts are associated with large amplitude seismic activities, then the low amplitude seismic activities may mainly result in a particle wind (Thompson & Duncan 1996). According to Thompson & Duncan (1996, eq.(71) there), the particle wind luminosity is
\[L_{\rm p}\simeq 2\times 10^{35}\left(\frac{B_{\rm c}}{10^{15}\,\rm G}\right)^{ 2}\left(\frac{t}{10^{4}\,\rm yr}\right)^{-1}\left(\frac{\Delta R_{\rm c}}{1\, \rm km}\right)\,\rm erg\,s^{-1},\] (1)
where \(B_{\rm c}\) is crustal field strength, \(t\) is the star’s age, and \(\Delta R_{\rm c}\) is the crustal thickness. The above equation is only valid for crustal field strength less than \(6\times 10^{15}\,\rm G\), above which the crust may undergo plastic deformations.
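For orientation, the sketch below evaluates equation (1) for a few assumed combinations of crustal field, age and crustal thickness; the parameter values and the helper function name are illustrative choices only, and all inputs respect the validity limit \(B_{\rm c}<6\times 10^{15}\,\rm G\) just mentioned.

```python
def wind_luminosity(B_c=1e15, t_yr=1e4, dR_km=1.0):
    """Particle wind luminosity L_p in erg/s following eq. (1);
    inputs are the crustal field in Gauss, age in years, crust thickness in km."""
    return 2e35 * (B_c / 1e15)**2 * (t_yr / 1e4)**-1 * (dR_km / 1.0)

for B_c, t_yr in [(1e15, 1e4), (5e14, 1e3), (2e15, 1e5)]:
    print(f"B_c = {B_c:.0e} G, t = {t_yr:.0e} yr: L_p ~ {wind_luminosity(B_c, t_yr):.1e} erg/s")
```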
The persistent X-ray luminosity of AXPs and SGRs is from magnetic field decay, e.g., internal heating (Thompson & Duncan 1996) or magnetospheric current heating (Thompson et al. 2002; Beloborodov & Thompson 2007). A particle wind may also be produced during this process. Since the particle wind and the persistent X-ray luminosity are from the same energy reservoir, a natural estimation of the particle wind luminosity is (Duncan 2000)
\[L_{\rm p}\sim L_{\rm x}\sim 10^{35}\,\rm erg\,s^{-1},\] (2)
which is valid for most AXPs and SGRs. Transient magnetars have a lower quiescent X-ray luminosity, and their particle wind luminosity may also be correspondingly lower.
In the wind braking scenario, magnetars are neutron stars with strong multipole fields. The strong twisted magnetic field in the vicinity of magnetars will accelerate particles to very high energy. Thus, a corona of high energy particles will be formed (Beloborodov & Thompson 2007). The footprints of the magnetic field lines are anchored to the stellar crust. In the presence of frequent low amplitude seismic activities, the corona of magnetars will be disturbed continuously. The excitation of such a particle wind in magnetars may be due to their seismic activities, especially small amplitude seismic activities (Thompson & Duncan 1996; Thompson & Duncan 2001; Timokhin et al. 2008). The particles in the magnetar magnetosphere can flow out in two ways. (1) During bursts and giant flares. This burst component of particle wind has its duty cycles (Thompson & Duncan 1995; see numerical simulations of Parfrey et al. 2012; Section 4.5 below). (2) During the persistent state. The long term average of many small amplitude seismic activities may result in a persistent particle outflow of magnetars (Thompson & Duncan 1996; Duncan 2000). We will mainly focus on the persistent component of the particle wind.
In conclusion, we already have some observational clues for the existence of a particle wind in magnetars. Their luminosities can also be estimated, although the underlying mechanism is still lacking. Since both the magnetar’s persistent X-ray luminosity and the particle wind are from magnetic field decay, the particle wind luminosity may be as high as their persistent X-ray luminosities. Therefore, the particle wind luminosity in magnetars can be much higher than their rotational energy loss rate. The existence of such a strong particle wind will modify the spin-down behavior of magnetars qualitatively.
## 3 Rotational energy loss rate due to a particle wind
### Description of the global magnetospheric structure
The magnetospheres of pulsars and magnetars contain regions of open and closed magnetic field. The closed field lines extend to the light cylinder radius in the case of normal pulsars (Contopoulos & Spitkovsky 2006). In the case of magnetars, the closed field line region may be smaller. In the presence of a strong particle wind, the natural radial extension of closed field line regions is the radius where the kinetic energy density of the particle wind equals the magnetic energy density (Harding et al. 1999; Thompson et al. 2000). Particle flows in the closed field line regions belong to the domain of closed field line region electrodynamics of magnetars (Thompson et al. 2002; Beloborodov & Thompson 2007; Tong et al. 2010b). Particle flow collimated around the polar cap may dominate the spin down of the central star. The opening angle of the polar cap region is determined by the coupling between the magnetar crust and its magnetosphere. The total particle luminosity \(L_{\rm p}\) is determined by the decay of magnetic field energy. Only a fraction of this particle wind can flow out to infinity and contribute to the spin down of the magnetar. The escaping particle luminosity is denoted as \(L_{\rm w}\), i.e., wind luminosity. Then it is natural that \(L_{\rm w}\leq L_{\rm p}\). For a given particle luminosity, the maximum braking case is accomplished when the wind luminosity equals the total particle luminosity.
### The simplest case: \(L_{\rm w}=L_{\rm p}\)
For a neutron star with angular velocity \(\Omega=2\pi/P\) (\(P\) is rotation period), its light cylinder radius \(R_{\rm{lc}}\) is (the radius where the rotational velocity equals the speed of light)
\[R_{\rm{lc}}=\frac{c}{\Omega}=\frac{Pc}{2\pi}=4.8\times 10^{10}\left(\frac{P}{1 0\,\rm s}\right)\,\rm cm,\] (3)
where \(c\) is the speed of light. In the case of magnetars, with the aid of a particle wind, the magnetic field lines are combed out at a radius \(r_{\rm{open}}\) (where the particle energy density equals the magnetic energy density, Harding et al. 1999)
\[r_{\rm{open}}=r_{0}\left(\frac{B_{0}^{2}r_{0}^{2}c}{2L_{\rm w}}\right)^{1/4}=r _{0}\left(\frac{B_{0}^{2}r_{0}^{2}c}{2L_{\rm p}}\right)^{1/4}=4.1\times 10^{9} b_{0}^{1/2}L_{\rm p,35}^{-1/4}\,\rm cm,\] (4)
where \(r_{0}=10^{6}\,\rm cm\) is neutron star radius, \(B_{0}=b_{0}\times B_{\rm QED}\) is dipole magnetic field at the magnetic pole, and \(L_{\rm w}=L_{\rm p}=L_{\rm p,35}\times 10^{35}\,\rm erg\,s^{-1}\) is the particle wind luminosity (assuming³\(L_{\rm w}=L_{\rm p}\), and assuming the escaping particle wind becomes near isotropic at \(r_{\rm open}\)). The polar cap radius now is
\[R_{\rm{pc}}=r_{0}(r_{0}/r_{\rm{open}})^{1/2}=1.6\times 10^{4}b_{0}^{-1/4}L_{ \rm p,35}^{1/8}\,\rm cm.\] (5)
The corresponding polar cap opening angle is
\[\theta_{\rm open}^{2}=r_{0}/r_{\rm open}=2.4\times 10^{-4}\,b_{0}^{-1/2}\,L_{ \rm p,35}^{1/4}.\] (6)
Typically, \(\theta_{\rm open}=1.6\times 10^{-2}\,b_{0}^{-1/4}\,L_{\rm p,35}^{1/8}\). The polar cap opening angle \(\theta_{\rm open}\) depends on the wind luminosity \(L_{\rm w}\).
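A compact numerical summary of this wind-loaded geometry, using the fiducial scalings of equations (3)-(6), is sketched below; the parameter combinations (and the helper function name) are illustrative, not fits to particular sources.

```python
def geometry(P=10.0, b0=1.0, Lp35=1.0):
    """P in s, b0 = B_0/B_QED, Lp35 = L_p / 1e35 erg/s; lengths in cm (eqs. 3-6)."""
    R_lc = 4.8e10 * (P / 10.0)                      # light cylinder radius
    r_open = 4.1e9 * b0**0.5 * Lp35**-0.25          # radius where field lines are combed out
    R_pc = 1.6e4 * b0**-0.25 * Lp35**0.125          # polar cap radius
    theta_open = 1.6e-2 * b0**-0.25 * Lp35**0.125   # polar cap opening angle
    return R_lc, r_open, R_pc, theta_open

for P, b0, Lp35 in [(10.0, 1.0, 1.0), (6.0, 10.0, 1.0), (2.0, 1.0, 0.1)]:
    R_lc, r_open, R_pc, th = geometry(P, b0, Lp35)
    print(f"P={P:4.1f} s, b0={b0:4.1f}, Lp35={Lp35:4.1f}: "
          f"R_lc={R_lc:.1e} cm, r_open={r_open:.1e} cm, R_pc={R_pc:.1e} cm, theta_open={th:.3f}")
```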
This forms the basic structure of a wind-loaded magnetosphere. The star may form a current circuit in the open field line regions. The rotational energy loss rate due to this particle wind is (Harding et al. 1999)
\[\dot{E}_{\rm{w}}=\frac{B_{0}^{2}r_{0}^{6}\Omega^{4}}{3c^{3}}\left(\frac{R_{\rm {lc}}}{r_{\rm{open}}}\right)^{2}.\] (7)
For the traditional magnetic dipole braking, the corresponding rotational energy loss rate is⁴ (Shapiro & Teukolsky 1983)
\[\dot{E}_{\rm{d}}=\frac{B_{0}^{2}r_{0}^{6}\Omega^{4}}{6c^{3}}.\] (8)
Therefore, eq.(7) can be rewritten as
\[\dot{E}_{\rm{w}}=\frac{2}{\sqrt{3}}\dot{E}_{\rm{d}}\left(\frac{L_{\rm p}}{\dot {E}_{\rm{d}}}\right)^{1/2}.\] (9)
A second way to calculate the rotational energy loss rate due to a particle wind is provided by Thompson et al. (2000)⁵. The outflowing particles will corotate with the star up to the radius \(r_{\rm open}\). For relativistic (also mildly relativistic) particles, the rotational energy carried away by this particle wind is (Thompson et al. 2000)
\[\dot{E}_{\rm w}=\frac{2}{3}\frac{L_{\rm p}}{c^{2}}\Omega^{2}r_{\rm open}^{2}= \frac{2}{\sqrt{3}}\dot{E}_{\rm{d}}\left(\frac{L_{\rm p}}{\dot{E}_{\rm{d}}} \right)^{1/2}.\] (10)
A third way to calculate the rotational energy loss rate due to a particle wind can be done in analogy with that of Xu & Qiao (2001). The electric current in the two polar caps will carry away the rotational energy of the star in the presence of an acceleration potential. This acceleration potential is due to unipolar induction. Assuming maximum acceleration potential, the rotational energy loss rate is
\[\dot{E}_{\rm{w}}=2I_{\rm pc}\Phi_{\rm max}=\frac{3}{\sqrt{3}}\dot{E}_{\rm{d}} \left(\frac{L_{\rm p}}{\dot{E}_{\rm{d}}}\right)^{1/2},\] (11)
where \(I_{\rm pc}=\pi R_{\rm pc}^{2}\rho_{\rm GJ}c\) is the polar cap current (for one polar cap), \(\rho_{\rm GJ}\) is the Goldreich-Julian density, and \(\Phi_{\rm max}\) is the maximum acceleration potential due to unipolar induction (Ruderman & Sutherland 1975)
\[\Phi_{\rm max}=\frac{B_{0}r_{0}^{2}\Omega}{2c}\left(\frac{R_{\rm pc}}{r_{0}} \right)^{2}.\] (12)
Therefore, irrespective of the details of particle wind, accurate within a factor of two, the rotational energy loss rate due to a particle wind can be written as (Harding et al. 1999)
\[\dot{E}_{\rm{w}}=\dot{E}_{\rm{d}}\left(\frac{L_{\rm w}}{\dot{E}_{\rm{d}}} \right)^{1/2}=\dot{E}_{\rm{d}}\left(\frac{L_{\rm p}}{\dot{E}_{\rm{d}}}\right)^ {1/2}\] (13)
The second identity is obtained by assuming \(L_{\rm w}=L_{\rm p}\).
From equation (13), we see that
1. For a rotation-powered particle wind \(L_{\rm p}\sim-\dot{E}_{\rm rot}\), \(\dot{E}_{\rm w}\sim\dot{E}_{\rm d}\sim-\dot{E}_{\rm rot}\), wind braking is quantitatively similar to the case of magnetic dipole braking in vacuum. The effects of particle wind will mainly cause higher order modifications, e.g., a different braking index etc. This is the case of normal pulsars.
2. For magnetars, there may be a magnetism-powered particle wind \(L_{\rm p}\gg-\dot{E}_{\rm rot}\). Wind braking of magnetars will result in \(\dot{E}_{\rm w}=-\dot{E}_{\rm rot}\gg\dot{E}_{\rm d}\). Therefore, the magnetic dipole braking is enhanced due to the presence of a magnetism-powered particle wind (Harding et al. 1999). This will cause a strong reduction of the magnetar’s dipole field. Meanwhile, higher order effects will also exist, e.g., a different braking index, larger timing noise, a magnetism-powered pulsar wind nebula, etc., as illustrated numerically below.
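The second point can be made quantitative with the fiducial numbers used throughout this section. The sketch below (function name and parameter choices are mine) evaluates equation (13) for \(b_{0}=1\), \(P=10\,\rm s\) and an assumed \(L_{\rm p}=10^{35}\,\rm erg\,s^{-1}\); the wind-aided rate then exceeds the vacuum dipole rate by more than two orders of magnitude.

```python
import math

c_light = 3e10          # speed of light in cm/s
r0 = 1e6                # neutron star radius in cm

def edot_dipole(B0, P):
    """Vacuum magnetic dipole rotational energy loss rate, eq. (8), in erg/s."""
    Omega = 2 * math.pi / P
    return B0**2 * r0**6 * Omega**4 / (6 * c_light**3)

B0, P, Lp = 4.4e13, 10.0, 1e35          # fiducial values (b0 = 1, L_p,35 = 1)
Ed = edot_dipole(B0, P)
Ew = math.sqrt(Ed * Lp)                  # eq. (13) with L_w = L_p
print(f"Edot_d = {Ed:.2e} erg/s, Edot_w = {Ew:.2e} erg/s, ratio = {Ew / Ed:.0f}")
```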
### Detailed considerations of wind luminosity
The above simplest case assumes the escaping wind luminosity is equal to the total particle luminosity. From equation (6), the polar opening angle depends on the escaping wind luminosity. This means that the polar cap opening angle (at the star surface) is affected by the physics happening at \(r_{\rm open}\). It is not known how this is accomplished. Alternatively, the polar cap opening angle of the particle wind may be an independent parameter. The total particle luminosity may involve a particular angular distribution. This angular distribution may result from coupling between the magnetar crust and its magnetosphere. The typical time scale of this coupling may be estimated from quasi-periodic oscillations in magnetars (Timokhin et al. 2008; Watts 2011). The fundamental frequency is about \(\nu\sim 20\,\rm Hz\). The length scale of coupling between the neutron star and its magnetosphere is
\[r_{\rm max}\sim\frac{c}{3\nu}\sim 5\times 10^{8}\left(\frac{20\,\rm Hz}{\nu} \right)\,\rm cm.\] (14)
The corresponding polar cap opening angle is
\[\theta_{\rm s}^{2}=\frac{r_{0}}{r_{\rm max}}\sim 2\times 10^{-3}\left(\frac{ \nu}{20\,\rm Hz}\right).\] (15)
Typically, \(\theta_{\rm s}\sim 0.05\left(\frac{\nu}{20\,\rm Hz}\right)^{1/2}\). The particles will mainly flow through the polar cap area with opening angle \(\theta_{\rm s}\). In the following calculations, we will take \(\theta_{\rm s}\) as the fundamental input parameter. \(r_{\rm max}\) etc will be functions of \(\theta_{\rm s}\).
The particles from the two polar cap regions can flow out to radii larger than \(r_{\rm max}\). Considering the presence of the strong magnetic field, a significant amount of the outflowing particles may be trapped in the closed field line regions in the magnetosphere⁶. Only a fraction of them can flow out to infinity and therefore carry away the star’s rotational energy. The Alfvén radius characterizes the effect of the magnetic field quantitatively. We denote it as \(r_{\rm open}\) in accordance with equation (4). In the present case, it is also defined as the radius where the particle energy density equals the magnetic energy density
\[\gamma\rho(r)c^{2}\sim\frac{B(r)^{2}}{8\pi},\] (16)
where \(\gamma\) and \(\rho(r)\) are the Lorentz factor and mass density, respectively. When particles move along magnetic field lines, their kinetic energy is conserved (not considering radiation losses). The mass density may scale with the local Goldreich-Julian charge density \(\rho(r)\propto\rho_{\rm GJ}\propto 1/r^{3}\). Therefore
\[\gamma\rho_{\rm s}c^{2}\left(\frac{r_{0}}{r_{\rm open}}\right)^{3}\sim\frac{B_ {0}^{2}}{8\pi}\left(\frac{r_{0}}{r_{\rm open}}\right)^{6},\] (17)
where \(\rho_{\rm s}\) is the mass density at the star surface. According to the definition of particle luminosity and assuming uniform distribution across the polar cap region
\[L_{\rm p}=2\pi(r_{0}\theta_{\rm s})^{2}\gamma\rho_{\rm s}c^{2}\,c,\] (18)
then \(r_{\rm open}\) is
\[r_{\rm open} = r_{0}\left(\frac{B_{0}^{2}}{8\pi}\frac{2\pi(r_{0}\theta_{\rm s}) ^{2}\,c}{L_{\rm p}}\right)^{1/3}\] (19)
\[= 7\times 10^{9}\,b_{0}^{2/3}L_{\rm p,35}^{-1/3}\left(\theta_{\rm s }/0.05\right)^{2/3}\,\rm cm.\] (20)
Only the escaping wind particles can carry away the star’s rotational energy. From the definition of wind luminosity, \(L_{\rm w}\propto\theta_{\rm open}^{2}\propto 1/r_{\rm open}\). At the same time, the total particle luminosity is \(L_{\rm p}\propto\theta_{\rm s}^{2}\propto 1/r_{\rm max}\). The wind luminosity is related to the total particle luminosity
\[L_{\rm w}=L_{\rm p}\frac{r_{\rm max}}{r_{\rm open}}.\] (21)
Taking the polar cap opening angle as the fundamental parameter, \(r_{\rm max}\) will be \(r_{\rm max}=r_{0}/\theta_{\rm s}^{2}\). Therefore, the wind luminosity is
\[L_{\rm w}=6\times 10^{33}\,b_{0}^{-2/3}L_{\rm p,35}^{4/3}(\theta_{\rm s}/0.05) ^{-8/3}\,\rm erg\,s^{-1}.\] (22)
The wind luminosity depends strongly on the polar cap opening angle, i.e., how the neutron star couples with the magnetosphere. In the present case, the wind luminosity is a fraction of the total particle luminosity. Then, it must be that \(L_{\rm w}\leq L_{\rm p}\). In terms of \(r_{\rm max}\) and \(r_{\rm open}\), it must be that \(r_{\rm max}\leq r_{\rm open}\).
The calculation of rotational energy loss rate is the same as the previous section. From equation (13), the rotational energy loss rate due to a particle wind in the present case is
\[\dot{E}_{\rm{w}}=\dot{E}_{\rm{d}}\left(\frac{L_{\rm w}}{\dot{E}_{\rm{d}}} \right)^{1/2},\] (23)
where \(L_{\rm w}\) is determined by equation (22). The neutron star’s dipole magnetic field is obtained by equating \(-\dot{E}_{\rm rot}=\dot{E}_{\rm w}\),
\[B_{0} = 3.3\times 10^{32}\left(\frac{\dot{P}}{P}\right)^{3/2}L_{\rm p,35 }^{-1}(\theta_{\rm s}/0.05)^{2}\rm\,G\] (24)
\[= 3.3\times 10^{14}\left(\frac{\dot{P}/10^{-11}}{P/10\,\rm s} \right)^{3/2}L_{\rm p,35}^{-1}(\theta_{\rm s}/0.05)^{2}\rm\,G.\]
The dipole magnetic field is determined by four parameters: the period and its derivative, the total particle luminosity, and the polar cap opening angle. If the polar cap opening angle is three times smaller, the dipole magnetic field will be ten times lower.
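As an illustration of this scaling, the sketch below evaluates equation (24) at the fiducial period and period derivative while varying only \(\theta_{\rm s}\); the chosen values of \(\theta_{\rm s}\) and the function name are illustrative assumptions.

```python
def b_dipole_wind(P=10.0, Pdot=1e-11, Lp35=1.0, theta_s=0.05):
    """Polar dipole field in Gauss following eq. (24) of the detailed wind model."""
    return 3.3e14 * ((Pdot / 1e-11) / (P / 10.0))**1.5 * Lp35**-1 * (theta_s / 0.05)**2

# A three-times-smaller opening angle gives a roughly ten-times-lower dipole field.
for theta_s in (0.05, 0.05 / 3, 0.01):
    print(f"theta_s = {theta_s:.4f}: B_0 ~ {b_dipole_wind(theta_s=theta_s):.2e} G")
```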
In conclusion, considering detailed modeling of the wind luminosity, the rotational energy loss rate is reduced compared with the simplest case. The model parameter space is larger with the addition of another variable, \(\theta_{\rm s}\). There is parameter space in which the corresponding dipole magnetic field is only slightly lower than in the magnetic dipole braking case. At the same time, there is also parameter space in which the dipole magnetic field is much lower than in the magnetic dipole braking case. The following calculations in Section 4 are mainly done in the simplest case. This corresponds to maximum braking for a given particle luminosity. In this way, we want to demonstrate to which extent wind braking of magnetars can help to explain the current observations. For the calculations in Sections 4.2, 4.4, and 4.5, the conclusions are unaffected by different assumptions. For the calculations in Sections 4.1 and 4.3, the results may only change quantitatively.
## 4 Wind braking of magnetars
Wind braking of magnetars had been considered previously by Marsden et al. (1999, for the case of SGR 1900+14), Harding et al. (1999, for the case of SGR 1806–20), and Thompson et al. (2000, for the case of SGR 1900+14). They mainly discussed wind braking during outbursts, although some of the formulae of the long term wind-aided spin down are also given by Thompson et al. (2000, Section 4.1 there). We explore the wind braking in more detail and apply it to all magnetars. A comparison with up-to-date observations is also presented.
### Dipole magnetic field
From eq. (4) and eq. (13), the presence of particle wind amplifies the magnetic dipole braking rotational energy loss rate. Therefore, wind braking is valid only when
\[L_{\rm p}\geq\dot{E}_{\rm{d}}.\] (25)
Equating the rotational energy loss rate \(-\dot{E}_{\rm rot}\) (\(=-I\Omega\dot{\Omega}\)) and eq. (13), we get
\[-\dot{E}_{\rm rot}=\dot{E}_{\rm{w}}\leq L_{\rm p}.\] (26)
Wind braking of magnetars is valid only when the wind luminosity is greater than the star’s rotational energy loss rate. Equation (26) can be rewritten as
\[-\dot{E}_{\rm rot}=\dot{E}_{\rm w}\geq\dot{E}_{\rm{d}}.\] (27)
The characteristic magnetic field obtained by assuming magnetic dipole braking is only an upper limit on the star’s true dipole magnetic field.
Assuming magnetic dipole braking
\[-\dot{E}_{\rm rot}=\dot{E}_{\rm{d}}=\frac{B_{0}^{2}r_{0}^{6}\Omega^{4}}{6c^{3}},\] (28)
the dipole magnetic field (at the magnetic pole) is
\[B_{0}=6.4\times 10^{19}\sqrt{P\dot{P}}\,{\rm G}=6.4\times 10^{14}\left(\frac{P }{10\,\rm{s}}\frac{\dot{P}}{10^{-11}}\right)^{1/2}\,{\rm G}.\] (29)
It is two times larger than usually reported since the polar magnetic field is two times larger than the equatorial magnetic field (eq. (5.17) in Lyne & Graham-Smith 2012 and corresponding discussions). However, the above magnetic dipole braking is originally designed for rotation-powered pulsars. Magnetars may be wind braking instead of magnetic dipole braking, as discussed above. In the case of wind braking
\[-\dot{E}_{\rm rot}=\dot{E}_{\rm{w}}=\dot{E}_{\rm{d}}\left(\frac{L_{\rm p}}{ \dot{E}_{\rm{d}}}\right)^{1/2}.\] (30)
The corresponding dipole magnetic field is
\[B_{0}=4.0\times 10^{25}\frac{\dot{P}}{P}\,L_{\rm p,35}^{-1/2}\,{\rm G}=4.0 \times 10^{13}\frac{\dot{P}/10^{-11}}{P/10\,\rm{s}}\,L_{\rm p,35}^{-1/2}\,{\rm G}.\] (31)
For typical AXPs and SGRs, the dipole magnetic field in the case of wind braking is about ten times lower than that of magnetic dipole braking. Therefore, AXPs and SGRs may be magnetars without a strong dipole field. Only a strong multipole field (\(\sim 10^{14}-10^{15}\,\rm{G}\)) is required to power their bursts, persistent emissions, and braking.
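A minimal side-by-side evaluation of equations (29) and (31), for the fiducial values \(P=10\,\rm s\), \(\dot{P}=10^{-11}\) and an assumed \(L_{\rm p}=10^{35}\,\rm erg\,s^{-1}\), reproduces the roughly order-of-magnitude reduction quoted above; the function names below are mine.

```python
import math

def b_dipole_braking(P, Pdot):
    """Polar dipole field in Gauss assuming magnetic dipole braking, eq. (29)."""
    return 6.4e19 * math.sqrt(P * Pdot)

def b_wind_braking(P, Pdot, Lp35=1.0):
    """Polar dipole field in Gauss assuming wind braking, eq. (31)."""
    return 4.0e25 * (Pdot / P) * Lp35**-0.5

P, Pdot = 10.0, 1e-11
print(f"magnetic dipole braking: B_0 = {b_dipole_braking(P, Pdot):.1e} G")
print(f"wind braking           : B_0 = {b_wind_braking(P, Pdot):.1e} G")
```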
At the time when Harding et al. wrote their wind braking paper (Harding et al. 1999), they did not realize that there are two kinds of magnetic fields in magnetars: dipole field and multipole field. When they saw that a strong dipole field is not needed in the case of wind braking, Harding et al. said that “the magnetar model must be abandoned” as the penalty of wind braking. With the presence of multipole field, AXPs and SGRs can also show magnetar-like activities without a strong dipole field. This point is demonstrated clearly by the observation of SGR 0418+5729 (Rea et al. 2010). The timing of SGR Swift J1822.3\(-\)1606 further strengthens this point (Rea et al. 2012a).
Table 1 summarizes the observed parameters and deduced quantities for all AXPs and SGRs (17 in total) which have period, period derivative, and persistent X-ray luminosity measured. Figure 1 shows the magnetar persistent X-ray luminosity versus the star’s rotational energy loss rate. We employ the following two ways to model the particle luminosity of magnetars.
1. All AXPs and SGRs must have a strong multipole field (\(\sim 10^{14}-10^{15}\,\rm G\)) in order to show magnetar-like activities. This is also true for low magnetic field SGRs (Rea et al. 2010, 2012a). Therefore, if the total field strength determines the particle luminosity, the particle luminosity will be more or less the same for all magnetars. In this case, we assume a particle luminosity \(L_{\rm p}=10^{35}\,\rm erg\,s^{-1}\) for all sources. From Figure 1, we see that all AXPs and SGRs are braked by a particle wind except AXP 1E 1547.0--5408. For AXP 1E 1547.0--5408, the effect of a particle wind will mainly result in higher order spin down behaviors, e.g. a magnetism-powered particle wind surrounding the putative magnetar⁷. [FOOTNOTE:7][ENDFOOTNOTE]
2. On the other hand, different sources may have different evolution histories. Irrespective of the detailed wind mechanism, the magnetars’ particle luminosities may follow their persistent X-ray luminosities. In this case, we assume that the particle luminosities are the same as their persistent X-ray luminosities. From Figure 1, except for the five sources with \(L_{\rm x}<-\dot{E}_{\rm rot}\), the rest of the AXPs and SGRs are all braked down by a particle wind⁸. [FOOTNOTE:8][ENDFOOTNOTE]
At present, we do not know the detailed mechanism of magnetar wind. The actual case may lie between these two extremes.
Figures 2 and 3 show the dipole magnetic field in the case of wind braking versus the dipole magnetic field in the case of magnetic dipole braking. From Figures 2 and 3, we see that
1. For most AXPs and SGRs, the dipole magnetic fields obtained by assuming wind braking are about ten times lower than those obtained from magnetic dipole braking. This may help us to understand why the magnetar supernova energies are of canonical value (Vink & Kuiper 2006; Dall’Osso et al. 2009). Numerical simulations of the particle wind during magnetar bursts also suggest that the long term averaged period derivative may be greatly amplified (Parfrey et al. 2012). The actual dipole magnetic field may therefore be significantly lower than in the magnetic dipole braking case. This is consistent with our considerations here.
2. The corresponding dipole magnetic field \(B_{\rm dip,w}\) ranges from \(10^{12}\,\rm G\) to \(10^{15}\,\rm G\). A strong dipole magnetic field (Kouveliotou et al. 1998, \(>B_{\rm QED}=4.4\times 10^{13}\,\rm G\)) is no longer a necessary input. In the wind braking scenario, magnetars are neutron stars with strong multipole fields. For most sources, the dipole field may or may not be as strong as the multipole field.
3. For several sources, \(B_{\rm dip,w}\) lies in the range \(10^{13}\,\rm{G}-10^{14}\,\rm G\). This is similar to that of X-ray dim isolated neutron stars (Kaplan & van Kerkwijk 2011; Tong et al. 2010b). Therefore, when the magnetar activities of these sources calm down, they will naturally become X-ray dim isolated neutron stars.
4. There are now more low magnetic field magnetars, with \(B_{\rm dip,w}<4.4\times 10^{13}\,\rm G\). Therefore, in the case of wind braking, SGR 0418+5729 (Rea et al. 2010) is not as peculiar as before.
Furthermore, from eq.(30) and eq. (31) we see that for a given magnetar
1. The variation of the particle wind will result in a variation of \(\dot{P}\), with \(\dot{P}\propto L_{\rm p}^{1/2}\). This may explain the variation of \(\dot{P}\) of many AXPs and SGRs (see Section 2.2 above). A decreasing particle wind will result in a decreasing period derivative during magnetar outbursts (Camilo et al. 2007; Levin et al. 2012; Anderson et al. 2012).
2. Although magnetars and high magnetic field pulsars (HBPSRs) are close to each other on the \(P-\dot{P}\) diagram, they may be totally different from each other. In the case of wind braking, magnetars are neutron stars with strong multipole fields, while HBPSRs may be neutron stars with only a strong dipole field. Observationally, AXPs and SGRs have a larger level of timing noise (Gavriil & Kaspi 2002; Woods et al. 2002; Archibald et al. 2008). This may be because they are wind braking instead of magnetic dipole braking. Meanwhile, most HBPSRs do not show magnetar-like activities, which may be because most of them do not have multipole fields as strong as those of magnetars (Ng & Kaspi 2011; Pons & Perna 2011).⁹ [FOOTNOTE:9][ENDFOOTNOTE]
The above calculations are done by assuming \(L_{\rm w}=L_{\rm p}\). As discussed in Section 3.3, a strong reduction of the dipole magnetic field is possible only when \(L_{\rm w}\) is comparable to \(L_{\rm p}\). This corresponds to a very collimated particle wind at the star surface. Detailed modeling of the wind luminosity results in only a small reduction of the dipole magnetic field. Assuming a constant polar cap opening angle \(\theta_{\rm s}=0.05\) and \(L_{\rm p}=L_{\rm x}\), the corresponding dipole magnetic field is also shown in Table 1 and Figure 4 (the case is similar when assuming \(\theta_{\rm s}=0.05\) and \(L_{\rm p}=10^{35}\,\rm erg\,s^{-1}\)). As pointed out at the beginning of this section, if the wind luminosity is lower than the rotational energy loss rate, the dipole magnetic field will be the same as that of magnetic dipole braking. Then the effect of a particle wind will mainly be reflected in higher order modifications, e.g. the braking index, timing noise, a magnetism-powered pulsar wind nebula, etc. With this detailed modeling of the wind luminosity, the dipole magnetic field is the same as in the magnetic dipole braking case for most sources. Only for four sources are the dipole magnetic fields lower than in the magnetic dipole braking case.
source name | P (second) | ˙P (10−11) | Lx (1035 erg s−1) | Bdip,d (1014 G) | Bdip,w (1014 G, Lp=1035 erg s−1) | Bdip,w (1014 G, Lp=Lx) | Bdip,w (1014 G, θs=0.05, Lp=Lx)
---|---|---|---|---|---|---|---
SGR 0526–66 | 8.05 | 3.8 | 1.4 | 11.2 | 1.9 | 1.6 | Bdip,d
SGR 1900+14 | 5.2 | 9.2 | 0.83-1.3a | 14.0 | 7.1 | 6.9 | Bdip,d
SGR 1806–20 | 7.6 | 75 | 1.6 | 48.3 | 39.5 | 31.2 | Bdip,d
SGR 1627–41 | 2.59 | 1.9 | 0.025 | 4.5 | 2.9 | Bdip,d | Bdip,d
SGR 0418+5729 | 9.08 | <0.0006b | <0.00062b | 0.15 | 0.00026 | 0.011 | 0.09
Swift J1822.3-1606 | 8.44 | 0.0092 | 0.004 | 0.56 | 0.0044 | 0.069 | Bdip,d
4U 0142+61 | 8.69 | 0.203 | 1.1 | 2.7 | 0.093 | 0.089 | 0.34
1E 1048.1–5937 | 6.46 | 2.25 | 0.059 | 7.7 | 1.4 | 5.7 | Bdip,d
1E 2259+586 | 6.98 | 0.0484 | 0.34 | 1.2 | 0.028 | 0.048 | 0.18
1E 1841–045 | 11.78 | 3.93 | 1.9 | 13.8 | 1.3 | 0.97 | 10.6
1E 1547.0–5408 | 2.07 | 2.318 | 0.0058 | 4.4 | B∗dip,d | Bdip,d | Bdip,d
1RXS J170849.0–400910 | 11.0 | 1.91 | 0.59 | 9.3 | 0.69 | 0.9 | Bdip,d
XTE J1810–197 | 5.54 | 0.777 | 0.00031 | 4.2 | 0.56 | Bdip,d | Bdip,d
CXOU J010043.1–721134 | 8.02 | 1.88 | 0.61 | 7.9 | 0.94 | 1.2 | Bdip,d
CXO J164710.2–455216 | 10.61 | 0.083 | 0.0044 | 1.9 | 0.031 | 0.47 | Bdip,d
CXOU J171405.7–381031 | 3.83 | 6.40 | 0.22 | 10.0 | 6.7 | Bdip,d | Bdip,d
PSR J1622–4950 | 4.33 | 1.7 | 0.0063 | 5.5 | 1.6 | Bdip,d | Bdip,d
Table 1: Measured quantities and inferred dipole magnetic field of magnetars. The three Bdip,w columns assume Lp=1035 erg s−1, Lp=Lx, and θs=0.05 with Lp=Lx, respectively (cf. Figures 2–4); an entry Bdip,d means that wind braking does not apply and the inferred field equals the magnetic dipole braking value.
<figure><img src="content_image/1205.1626/x1.png"><figcaption>Figure 1: Persistent X-ray luminosities of magnetars versus their spin downluminosities. The solid line is Lx=−˙Erot. See table 1 and text for details.</figcaption></figure>
<figure><img src="content_image/1205.1626/x2.png"><figcaption>Figure 2: Dipole magnetic field in the case of wind braking versus dipolemagnetic field in the case of magnetic dipole braking. A wind luminosityLp=1035ergs−1 is assumed for all sources. The solid, dashed, and dotted linesare for Bdip,w=Bdip,d, 0.1Bdip,d, 0.01Bdip,d, respectively. The dot-dashedline marks the position of quantum critical magnetic field BQED=4.4×1013G.</figcaption></figure>
<figure><img src="content_image/1205.1626/x3.png"><figcaption>Figure 3: The same as Figure 2. The wind luminosities are assumed to be thesame as their persistent X-ray luminosities. See text for details.</figcaption></figure>
<figure><img src="content_image/1205.1626/x4.png"><figcaption>Figure 4: The same as Figure 2, assuming θs=0.05 and Lp=Lx. See text fordetails.</figcaption></figure>
### Acceleration potential
Most of the electromagnetic emission of magnetars is thought to originate in the closed field line region (Thompson et al. 2002; Beloborodov & Thompson 2007; Tong et al. 2010b). Meanwhile, since rotational energy is always present, we should also see some rotation-powered activities in magnetars (Zhang 2003). Rotation-powered activities are almost inevitable, especially if AXPs and SGRs are assumed to be magnetic dipole braking like rotation-powered pulsars (Tong et al. 2011). The acceleration potential in the open field line region characterizes this point quantitatively.
The maximum acceleration potential in pulsar open field line regions is (Ruderman & Sutherland 1975)
\[\Phi_{\rm max}=\frac{B_{0}r_{0}^{2}\Omega}{2c}\left(\frac{R_{\rm pc}}{r_{0}} \right)^{2}.\] (32)
In the case of magnetic dipole braking \(R_{\rm pc}=r_{0}(r_{0}/R_{\rm lc})^{1/2}\), the corresponding acceleration potential is
\[\Phi_{\rm max}=\left(\frac{3}{2}\frac{-\dot{E}_{\rm rot}}{c}\right)^{1/2}.\] (33)
In the case of wind braking, the polar cap radius is given by eq.(5). Although the polar cap radius is larger than in the magnetic dipole braking case, the dipole magnetic field is lower. These two effects nearly cancel. The corresponding acceleration potential is
\[\Phi_{\rm max}=\left(\frac{\sqrt{3}}{2}\frac{-\dot{E}_{\rm rot}}{c}\right)^{1/ 2}.\] (34)
The maximum acceleration potential is the same (within a factor of two) in the wind braking case and magnetic dipole braking case.
Although the maximum acceleration potential is the same, the detailed acceleration mechanism will be qualitatively different. In the presence of a particle wind, vacuum gaps (e.g., the outer gap) may not form. This may explain the conflicts between the outer gap model applied to magnetars and _Fermi_ observations (Tong et al. 2010a, 2011). Meanwhile, space charge limited flow type acceleration mechanisms may still exist (Xu 2007). In a wind loaded magnetosphere, detailed calculations of space charge limited flows are needed in the future.
In calculating Figure 3, we show that only those sources with \(L_{\rm x}>-\dot{E}_{\rm rot}\) are braked down by the wind, while sources with \(L_{\rm x}<-\dot{E}_{\rm rot}\) are still magnetic dipole braking, the same as rotation-powered pulsars. A magnetosphere similar to that of rotation-powered pulsars is prepared during the persistent state. This may be taken as the initial state. An outburst may then trigger the radio emission of magnetars, as observed. Then it is natural that only sources with \(L_{\rm x}<-\dot{E}_{\rm rot}\) can have radio emissions. This may explain the “fundamental plane” of magnetar radio emissions found by Rea et al. (2012b). More detailed investigations are needed.
### Spin down evolution and age
A given magnetar, with dipole magnetic field \(B_{0}=b_{0}\times B_{\rm QED}=b_{0}\times 4.4\times 10^{13}\,{\rm G}\) (\(b_{0}\sim 1\) from eq.(31)) and a wind luminosity \(L_{\rm w}=L_{\rm p}=L_{\rm p,35}\times 10^{35}\,{\rm erg\,s^{-1}}\), will evolve from magnetic dipole braking at an early stage to wind braking at a later stage. For now, we assume that \(B_{0}\) and \(L_{\rm p}\) are both constant (a decaying particle wind will be considered in Section 5.1 below). From eq.(25) and eq.(8), at the early stage the star rotates very fast and \(\dot{E}_{\rm{d}}\) is larger than \(L_{\rm p}\), so the star is braked down by magnetic dipole radiation. At a later stage the star has been slowed down and \(\dot{E}_{\rm{d}}\) becomes smaller than \(L_{\rm p}\), so the star becomes wind braking. The transition from magnetic dipole braking to wind braking happens at
\[L_{\rm p}=\dot{E}_{\rm{d}}=\frac{B_{0}^{2}r_{0}^{6}\Omega^{4}}{6c^{3}}.\] (35)
The corresponding rotation period is
\[P_{1}=0.66\,b_{0}^{1/2}L_{\rm p,35}^{-1/4}\,{\rm s}.\] (36)
\(P_{1}\) can also be obtained by requiring \(r_{\rm open}\leq R_{\rm lc}\) (Thompson et al. 2000). When the star’s rotation period is less than \(P_{1}\) it will be braked down by magnetic dipole radiation. The corresponding period derivative at the transition point is
\[\dot{P}_{1}=7.2\times 10^{-13}\,b_{0}^{3/2}L_{\rm p,35}^{1/4}.\] (37)
If the magnetar rotation period at birth is much less than \(P_{1}\), then the star age at \(P_{1}\) is
\[t_{1}=\tau_{\rm c,1}\equiv\frac{P_{1}}{2\dot{P}_{1}}=1.4\times 10^{4}\,b_{0}^{ -1}L_{\rm p,35}^{-1/2}\,{\rm yr}.\] (38)
The transition age \(t_{1}\) is similar to the supernova remnant ages associated with AXPs (Vink & Kuiper 2006). Beginning from \(t_{1}\), \(L_{\rm p}>-\dot{E}_{\rm rot}\) (eq.(26)), and the star will be braked down by a particle wind. Furthermore, the particle wind of magnetars comes from magnetic energy decay, \(L_{\rm p}\sim-\dot{E}_{\rm B}\), where \(E_{\rm B}\) is the star’s magnetic energy, stored mainly in the form of the multipole field. Therefore, during the wind braking phase, \(-\dot{E}_{\rm B}>-\dot{E}_{\rm rot}\). The star’s activities will be dominated by magnetic energy output rather than rotational energy output. AXP/SGR-like activities may appear, i.e., the pulsar becomes a magnetar.
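A minimal sketch of the transition quantities in eqs. (36)–(38); the function name and default arguments are illustrative assumptions, not part of the original text.

```python
def transition_point(b0=1.0, Lp35=1.0):
    """Transition period, period derivative and age, eqs. (36)-(38)."""
    P1 = 0.66 * b0 ** 0.5 * Lp35 ** (-0.25)       # seconds, eq. (36)
    Pdot1 = 7.2e-13 * b0 ** 1.5 * Lp35 ** 0.25    # eq. (37)
    t1 = 1.4e4 / (b0 * Lp35 ** 0.5)               # years, eq. (38): t1 = P1 / (2 Pdot1)
    return P1, Pdot1, t1

print(transition_point())    # (0.66 s, 7.2e-13, 1.4e4 yr) for b0 = Lp,35 = 1
```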
We now consider how a magnetar evolves from \((P_{1},\dot{P}_{1})\) to \((P_{2},\dot{P}_{2})\) (Thompson et al. 2000). If we assume that \(B_{0}\) and \(L_{\rm p}\) are both constant, then from eq.(30), in the wind braking phase
\[\frac{\dot{P}}{P}=\frac{\dot{P}_{1}}{P_{1}}=\frac{\dot{P}_{2}}{P_{2}}={\rm constant}.\] (39)
The period will evolve with time as
\[P_{2}=P_{1}\exp\{\frac{t_{2}-t_{1}}{2\tau_{\rm c,1}}\},\] (40)
where \(t_{2}\) and \(t_{1}\) are the star’s true age at \(P_{2}\) and \(P_{1}\), respectively. \(\tau_{\rm c,1}\) is the characteristic age at \(P_{1}\). For transition from magnetic dipole braking to wind braking, \(t_{1}=\tau_{\rm c,1}\). However, in the general case, \(t_{1}\) is not always equal to \(\tau_{\rm c,1}\). The star’s age at a given period \(P_{2}\) is
\[t_{2}=t_{1}+2\tau_{\rm c,1}\log\frac{P_{2}}{P_{1}}.\] (41)
After \(t_{1}\), the star’s period increases exponentially. For \(P_{2}\) not much larger than \(P_{1}\), we have \(t_{2}\sim t_{1}=\tau_{\rm c,1}=\tau_{\rm c,2}\), where \(\tau_{\rm c,2}\) is the star’s characteristic age at \(P_{2}\).
### Braking index
The braking index of a pulsar is defined as (Shapiro & Teukolsky 1983)
\[\dot{\Omega}=-({\rm constant})\Omega^{n},\] (42)
where \(n\) is called the braking index. \(n=3\) for magnetic dipole braking. For wind braking, from eq.(30), we have
\[-I\Omega\dot{\Omega}=\left(\frac{B_{0}^{2}r_{0}^{6}\Omega^{4}}{6c^{3}}\right)^ {1/2}L_{\rm p}^{1/2}.\] (43)
Therefore, \(n=1\) for wind braking (assuming \(B_{0}\) and \(L_{\rm p}\) are both constant). The braking index of PSR J1734-3333, \(n=0.9\pm 0.2\), may imply that it is wind braking (Espinoza et al. 2011; a rotation-powered particle wind). Future braking index measurements of magnetars will help us make clear whether magnetars are magnetic dipole braking or wind braking. Because the braking index will deviate from one if \(B_{0}\) and/or \(L_{\rm p}\) change with time, the braking index of a magnetar may also tell us about the evolution of its particle wind.
### Duty cycles of particle wind
Harding et al. (1999) considered the duty cycle of a particle wind whose luminosity is \(L_{\rm p}\sim 10^{37}\,{\rm erg\,s^{-1}}\). It is shown that, due to the duty cycle of the particle wind, the dipole magnetic field and age vary continuously from the dipole braking case to the wind braking case (Figure 1 in Harding et al. 1999). However, the particle luminosity considered by Harding et al. (1999) is much stronger than the one considered here, \(L_{\rm p}\sim 10^{35}\,{\rm erg\,s^{-1}}\). It is possible that there are two components of the particle wind:
1. A persistent component associated with the magnetar’s persistent emissions. The particle luminosity is \(L_{\rm pp}\sim L_{\rm x}\sim 10^{35}\,{\rm erg\,s^{-1}}\).
2. A burst component associated with outbursts of magnetars. The corresponding particle luminosity may be about \(L_{\rm pb}\sim L_{\rm burst}\sim 10^{37}\,{\rm erg\,s^{-1}}\).
The burst component of particle wind may contribute to the enhanced spindown of magnetars after glitches (Kaspi et al. 2003) and the possible “radiation braking” during giant flares of SGR 1900+14 (Thompson et al. 2000; Parfrey et al. 2012).
The long term averaged spindown of magnetars can be modeled similarly to that of Harding et al. (1999)
\[-\langle\dot{E}_{\rm rot}\rangle=\dot{E}_{\rm w,burst}D_{\rm p}+\dot{E}_{\rm w ,persistent}(1-D_{\rm p}),\] (44)
where \(D_{\rm p}\) is the duty cycle of the burst component of particle wind. From eq.(13), the above equation can be rewritten as
\[-\langle\dot{E}_{\rm rot}\rangle=\dot{E}_{\rm{d}}^{1/2}L_{\rm pb}^{1/2}D_{\rm p }+\dot{E}_{\rm{d}}^{1/2}L_{\rm pp}^{1/2}(1-D_{\rm p})=\dot{E}_{\rm{d}}^{1/2}L_ {\rm eff}^{1/2},\] (45)
where \(L_{\rm eff}^{1/2}=L_{\rm pb}^{1/2}D_{\rm p}+L_{\rm pp}^{1/2}(1-D_{\rm p})\) is the effective particle luminosity. For typical parameters, the effective particle luminosity is
\[L_{\rm eff,35}^{1/2}=10\,L_{\rm pb,37}^{1/2}D_{\rm p}+L_{\rm pp,35}^{1/2}(1-D_ {\rm p}).\] (46)
For \(D_{\rm p}=0\), this is just the case we considered above. For \(D_{\rm p}=1\), this corresponds to the strong wind case (\(L_{\rm p}=10^{37}\,{\rm erg\,s^{-1}}\)) considered by Harding et al. (1999). The duty cycle can be estimated from the observations of the transient magnetar SGR 1627-41 (Mereghetti et al. 2009). The duration between two outbursts is about ten years. Therefore, the maximum value of the duty cycle is about \(0.1\). The corresponding effective particle luminosity is \(L_{\rm eff,35}^{1/2}=L_{\rm pb,37}^{1/2}+0.9L_{\rm pp,35}^{1/2}\), about two times larger than that of the persistent component of the particle wind. In conclusion, the previous discussions remain valid when the possible existence of a burst component of the particle wind is taken into account.
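A short sketch of eq. (46); the helper name and default values are ours and purely illustrative.

```python
def l_eff_35(Lpb37=1.0, Lpp35=1.0, Dp=0.1):
    """Effective particle luminosity in units of 1e35 erg/s, eq. (46)."""
    sqrt_l_eff = 10.0 * Lpb37 ** 0.5 * Dp + Lpp35 ** 0.5 * (1.0 - Dp)
    return sqrt_l_eff ** 2

print(l_eff_35(Dp=0.0))   # 1.0: persistent wind only
print(l_eff_35(Dp=0.1))   # ~3.6, i.e. sqrt(L_eff,35) = 1.9, about twice the persistent value
```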
## 5 Discussions
### A decaying particle wind
In the magnetar model, both the persistent and burst emissions of AXPs and SGRs are powered by magnetic field decay. The total magnetic field will decay with time, and consequently the photon luminosity and the particle luminosity will eventually decay with time as well (Turolla et al. 2011). In the case of a decaying particle wind, the spin down evolution of magnetars will differ from the previous considerations. Considering different avenues for magnetic field decay, the total magnetic field may decay with time in a power law form (Heyl & Kulkarni 1998). The consequent magnetic energy decay rate \(-\dot{E}_{\rm B}\) will also be of power law form. Since the particle luminosity comes from the magnetic energy decay, we may assume a power law form for the particle luminosity
\[L_{\rm p}(t)=L_{\rm p,0}\left(\frac{t}{t_{\rm D}}\right)^{-\alpha},\ 0\leq \alpha\leq 2,\] (47)
where \(L_{\rm p,0}\) and \(\alpha\) are constants, and \(t_{\rm D}\) is the time when the magnetic field starts to decay significantly. \(t_{\rm D}\) may be of the same order as \(t_{1}\), when wind braking starts to operate. For \(\alpha\) larger than two, \(L_{\rm p}(t)\) decays more rapidly than \(\dot{E}_{\rm{d}}\). In this case, the wind braking criterion is not fulfilled (eq. (25)). In the case of a decaying particle wind, by integrating eq. (30), we can obtain the spin down evolution of magnetars. For \(0\leq\alpha<2\), the period evolves with time as
\[P_{2}=P_{1}\,\exp\{\frac{t_{2}(t_{2}/t_{1})^{-\alpha/2}-t_{1}}{(2-\alpha)\tau_ {\rm c,1}}\},\] (48)
and the star age at a given period is
\[t_{2}\left(\frac{t_{2}}{t_{1}}\right)^{-\alpha/2}=t_{1}+(2-\alpha)\tau_{\rm c, 1}\log\frac{P_{2}}{P_{1}}.\] (49)
For the special case of \(\alpha=2\), the corresponding expressions for period and age are
\[P_{2}=P_{1}\,\left(\frac{t_{2}}{t_{1}}\right)^{t_{1}/2\tau_{\rm c,1}},\] (50)
\[t_{2}=t_{1}\left(\frac{P_{2}}{P_{1}}\right)^{2\tau_{\rm c,1}/t_{1}}.\] (51)
Equation (50) is then the same as in the magnetic dipole braking case upon setting \(t_{1}=\tau_{\rm c,1}\) (eq. (5.18) in Lyne & Graham-Smith 2012).
#### 5.1.1 Calculation of braking index
The braking index predicted for the most luminous AXP (4U 0142+61, Dib et al. 2007) is shown as a function of age in Figure 5. \(L_{\rm p,0}=10^{37}\,\rm erg\,s^{-1}\) is assumed. For a constant particle wind, a braking index \(n=1\) is obtained, as previously discussed. For the critical case \(\alpha=2\), a braking index \(n=3\) is obtained, the same as in the magnetic dipole braking case, as can be seen from eq. (50). For the intermediate case \(0<\alpha<2\), a braking index between 1 and 3 is obtained. Future braking index measurements of this source may tell us whether it is wind braking or magnetic dipole braking.
<figure><img src="content_image/1205.1626/x5.png"><figcaption>Figure 5: Braking index in the case of wind braking as a function of age. Theparameters of AXP 4U 0142+61 are used. The thick solid, dashed, dotted, dot-dashed, and thin solid lines are for α=0, 0.5, 1, 1.5, 2, respectively.</figcaption></figure>
### The presence of a fallback disk
In the case of wind braking, the star’s true age is of the same order as the characteristic age, \(t\sim\tau_{\rm c}\). For those magnetars whose supernova remnant age satisfies \(t_{\rm snr}\sim\tau_{\rm c}\), it is understandable that they are wind braking. However, for AXP 1E 2259+586, the supernova remnant age \(t_{\rm snr}\approx 10^{4}\,{\rm yr}\ll\tau_{\rm c}=23\times 10^{4}\,{\rm yr}\) (Vink & Kuiper 2006). For a decaying particle wind, the star’s true age can be less than \(\tau_{\rm c}\). However, \(L_{\rm p}(t_{\rm snr})\) would then be larger than \(L_{\rm x}\sim 10^{35}\,\rm erg\,s^{-1}\). Therefore, an additional torque may be needed for AXP 1E 2259+586.
The presence of a fallback disk may help to solve this age discrepancy (Shi & Xu 2003). At the early phase, a fallback disk provides the braking torque of the magnetar. At the end of disk braking, the star has been slowed down significantly, e.g. \(t_{1}=2\times 10^{3}\,{\rm yr}\), \(P_{1}=6.7\,\rm s\). For a particle luminosity \(L_{\rm p}=10^{35}\,\rm erg\,s^{-1}\), the evolution of the rotation period is shown in Figure 6.
Observationally, there may be a debris disk around 1E 2259+586 (Kaplan et al. 2009). If we assume that SGR 0418+5729 is also a young magnetar, then a fallback disk is also needed (in the early stage) to spin it down to the present period (Alpar et al. 2011). For the disk torque to operate effectively, the dipole magnetic field cannot be too high, e.g., \(B_{\rm dip}=10^{12}-10^{13}\,\rm{G}\) is required (Shi & Xu 2003; Alpar et al. 2011). This is consistent with the dipole magnetic field obtained by assuming wind braking (see eq.(31)).
In conclusion, there may be a fallback disk in the early stage of a magnetar, and this fallback disk may help to solve the age discrepancy. At present, such magnetars have been slowed down significantly and have become wind braking.
<figure><img src="content_image/1205.1626/x6.png"><figcaption>Figure 6: Evolution of rotation period as a function of age, calculations forAXP 1E 2259+586. The star is AXP 1E 2259+586, tsnr is taken as the true age.</figcaption></figure>
### Spin down evolution of newly born magnetars
Magnetars are thought to be descendants of rapidly rotating proto-neutron stars, with rotation period \(\sim 1\,\rm ms\) (Duncan & Thompson 1992). A strong dipole field (\(B_{\rm dip}\sim 10^{15}\,\rm G\)) would make the spin down time scale of the magnetar shorter than the supernova shock breakout time. This would make the supernova associated with the magnetar birth more energetic (Duncan & Thompson 1992). However, studies of supernova remnants associated with AXPs and SGRs show that the putative supernova energies are of canonical value (Vink & Kuiper 2006). This challenges the traditional magnetar model. If magnetars are wind braking instead of magnetic dipole braking, then they will have much weaker dipole fields. The corresponding spin down time scale will be much longer than the shock breakout time, \(\tau_{\rm sd}\sim 60B_{14}^{-2}(P_{\rm i}/1\,{\rm ms})^{2}\,\rm hr\). This may explain the observations of Vink & Kuiper (2006).
Moreover, in the presence of a strong multipole field, magnetars are of prolate shape. This may cause them to emit strong gravitational waves after birth (Dall’Osso et al. 2009). The gravitational waves will also carry away some of the initial rotational energy. For gravitational wave emission to operate effectively, its competing process (i.e., magnetic dipole braking) cannot be too strong. Therefore, a weaker magnetic dipole field, \(B_{\rm dip}\lesssim 10^{14}\,\rm G\), is required. This is also consistent with the result of wind braking. In the actual case, a combination of these two processes, i.e., a longer spin down time scale and gravitational wave emission, may account for the observations. Their contributions depend on the dipole and multipole field strengths of the star, which may vary from source to source.
### Magnetism-powered pulsar wind nebula
A particle wind with luminosity \(10^{35}\,\rm erg\,s^{-1}\) may produce a visible nebula around the central magnetar (the putative nebula may also contain contributions from a rotation-powered particle wind). This pulsar wind nebula is magnetism-powered in nature, since the particle wind originates from magnetic field decay. There may be a pulsar wind nebula around AXP 1E 1547.0-5408 (Vink & Bamba 2009). Since both the particle wind and the persistent X-ray luminosity of a magnetar come from magnetic field decay, there will be a strong correlation between them. Then, for a magnetism-powered pulsar wind nebula, we should see a correlation between the nebula luminosity and the magnetar luminosity. This is just what is seen in Figure 2 of Olausen et al. (2011), who found a strong correlation between the extended emission of AXP 1E 1547.0-5408 and its source flux. Therefore, Olausen et al. concluded that a pulsar wind nebula origin for the extended emission is ruled out and that it is a dust scattering halo. However, a strong correlation between the extended emission and the source flux only rules out the rotation-powered pulsar wind nebula hypothesis. Such a correlation is a natural result if the pulsar wind nebula is magnetism-powered. Future multiband observations of this source may tell us whether it is a magnetism-powered pulsar wind nebula or a dust scattering halo.
For a magnetism-powered pulsar wind nebula, an extreme case is that the nebula luminosity exceeds the star’s rotational energy loss rate, \(L_{\rm pwn}>-\dot{E}_{\rm rot}\). However, young magnetars also have very high rotational energy loss rates, so this extreme case may be very hard to achieve. A more likely case is that we see a high conversion efficiency of the putative nebula. The possible pulsar wind nebula seen around RRAT J1819\(-\)1458 has a relatively high conversion efficiency (Rea et al. 2009). It may contain contributions from a magnetism-powered particle wind.
## 6 Conclusions
Considering recent observations challenging the traditional magnetar model (neutron stars with both a strong dipole field and a strong multipole field), we explore the wind braking of magnetars. There are some observational clues for the existence of a magnetism-powered particle wind. The total particle luminosity is estimated to be \(\sim 10^{35}\,\rm erg\,s^{-1}\), comparable to the persistent X-ray luminosities. Such a particle wind will amplify the star’s rotational energy loss rate. The consequent dipole magnetic field is about ten times smaller than that of magnetic dipole braking, if the particle flow is strongly collimated at the star surface. In the wind braking scenario, magnetars are neutron stars with strong multipole fields. For some sources, a strong dipole field may no longer be necessary. Wind braking may help us to explain some challenging observations of magnetars.
A magnetism-powered pulsar wind nebula and a braking index smaller than three are the two predictions of the wind braking model¹⁰. Future studies will tell us whether magnetars are wind braking or magnetic dipole braking.
[FOOTNOTE:10][ENDFOOTNOTE]
The authors would like to thank the Referee for detailed and thoughtful comments, and B. Zhang for helpful discussions. H. Tong would like to thank KIAA at PKU for support of visiting. This work is supported by National Basic Research Program of China (2012CB821800, 2009CB824800), National Natural Science Foundation of China (11103021, 11225314, 10935001, 10833003), West Light Foundation of CAS (LHXZ201201), and the John Templeton Foundation.
## References
* (1) Abdo, A. A., Ackermann, M., Ajello, M., et al. 2010, ApJ, 725, L73
* (2) Alpar, M. A., Ertan, U., & Kaliskan, S. 2011, ApJ, 732, L4
* (3) Anderson, G. E., Gaensler, B. M., Slane, P. O., et al. 2012, ApJ, 751, 53
* (4) Archibald, A. M., Dib, R., Livingstone, M. A., et al. 2008, AIP Conference Proceedings, 983, 265
* (5) Beloborodov, A. M., & Thompson, C. 2007, ApJ, 657, 967
* (6) Beloborodov, A. M. 2009, ApJ, 703, 1044
* (7) Camilo, F., Cognard, I., Ransom, S. M., et al. 2007, ApJ, 663, 497
* (8) Camilo, F., Reynolds, J., Johnston, S., et al. 2008, ApJ, 679, 681
* (9) Camilo, F., Ransom, S. M., Chatterjee, S., et al. 2012, ApJ, 746, 63
* (10) Cheng, K. S., & Zhang, L. 2001, ApJ, 562, 918
* (11) Contopoulos, I., & Spitkovsky, A. 2006, ApJ, 643, 1139
* (12) Dall’Osso, S., Shore, S. N., & Stella, L. 2009, MNRAS, 398, 1869
* (13) Dib, R., Kaspi, V. M., & Gavriil, F. P. 2007, ApJ, 666, 1152
* (14) Duncan, R. C., & Thompson, C. 1992, ApJ, 392, L9
* (15) Duncan, R. C. 2000, AIP conference proceedings, 526, 830
* (16) Espinoza, C. M., Lyne, A. G., Kramer, M., et al. 2011, ApJ, 741, L13
* (17) Gaensler, B. M., & Slane, P. O. 2006, ARAA, 44, 17
* (18) Gavriil, F. P., & Kaspi, V. M. 2002, ApJ, 567, 1067
* (19) Gavriil, F. P., & Kaspi, V. M. 2004, ApJ, 609, L67
* (20) Gavriil, F. P., Gonzalez, M. E., Gotthelf, E. V., et al. 2008, Science, 319, 1802
* (21) Harding , A. K., Contopoulos, I., & Kazanas, D. 1999, ApJ, 525, L125
* (22) Heyl, J., & Kulkarni, S. R. 1998, ApJ, 506, L61
* (23) Kaplan, D. L., Chakrabarty, D., Wang, Z., Wachter, S. 2009, ApJ, 700, 149
* (24) Kaplan, D. L., & van Kerkwijk, M. H. 2011, ApJ, 740, L30
* (25) Kargaltsev, O., Kouveliotou, C., Pavlov, G. G., et al. 2012, ApJ, 748, 26
* (26) Kaspi, V. M., Gavriil, F. P., & Woods, P. M. 2003, ApJ, 588, L93
* (27) Kouveliotou, C., Dieters, S., Strohmayer, T., et al. 1998, Nature, 393, 235
* (28) Kramer, M., Lyne, A. G., O’Brien, et al. 2006, Science, 312, 549
* (29) Levin, L., Bailes, M., Bates, S. D., et al. 2012, MNRAS, 442, 2489
* (30) Liu, X. W., Na, X. S., Xu, R. X., & Qiao, G. J. 2011, Chinese physics letters, 28, 019701
* (31) Lyne, A. G., & Graham-Smith, F. 2012, Pulsar astronomy (4th ed.), Cambridge University Press, Cambridge
* (32) Lyne, A. G., Hobbs, G., Kramer, M., et al. 2010, Science, 329, 408
* (33) Manchester, R. N., Durdin, J. M., & Newton, L. M. 1985, Nature, 313, 374
* (34) Marsden, D., Rothschild, R. E., & Lingenfelter, R. E. 1999, ApJ, 520, L107
* (35) Medin, Z., & Lai, D. 2010, MNRAS, 406, 1379
* (36) Mereghetti, S. 2008, A&ARv, 15, 225
* (37) Mereghetti, S., Tiengo, A., Esposito, P., et al. 2009, arXiv:0908.0414 (proceeding of Neutron Stars and Gamma Ray Bursts: Recent Developments and Future Directions)
* (38) Michel, F. C. 1969, ApJ, 158, 727
* (39) Ng, C. Y., & Kaspi, V. M. 2011, AIP Conference Proceedings, 1379, 60
* (40) Olausen, S. A., Kaspi, V. M., Ng, C. Y., et al. 2011, ApJ, 742, 4
* (41) Parfrey, K., Beloborodov, A. M., & Hui, L. 2012, ApJ, 754, L12
* (42) Pons, J. A., & Perna, R. 2011, ApJ, 741, 123
* (43) Rea, N., McLaughlin, M. A., Gaensler, B. M., et al. 2009, ApJ, 703, L41
* (44) Rea, N., Esposito, P., Turolla, R., et al. 2010, Science, 330, 944
* (45) Rea, N., Israel, G. L., Esposito. P., et al. 2012a, ApJ, 754, 27
* (46) Rea, N., Pons, J. A., Torres, D. F., et al. 2012b, ApJ, 748, L12
* (47) Ruderman, M. A., & Sutherland, P. G. 1975, ApJ, 196, 51
* (48) Sasmaz Mus, S., & Gogus, E. 2010, ApJ, 723, 100
* (49) Shapiro, S. L., & Teukolsky, S. A. 1983, Black Holes, White Dwarfs, and Neutron Stars, John Wiley & Sons, New York
* (50) Shi, Y., & Xu, R. X. 2003, ApJ, 596, L75
* (51) Spitkovsky, A. 2006, ApJ, 648, L51
* (52) Thompson, C., & Duncan, R. C. 1995, MNRAS, 275, 255
* (53) Thompson, C., & Duncan, R. C. 1996, ApJ, 473, 322
* (54) Thompson, C., Duncan, R. C., Woods, P. M., et al. 2000, ApJ, 543, 340
* (55) Thompson, C., & Duncan, R. C. 2001, ApJ, 561, 980
* (56) Thompson, C., Lyutikov, M., & Kulkarni, S. R. 2002, ApJ, 574, 332
* (57) Timokhin, A. N., Eichler, D., Lyubarsky, Yu. 2008, ApJ, 680, 1398
* (58) Tong, H., Song, L. M., & Xu, R. X. 2010a, ApJ, 725, L196
* (59) Tong, H., Xu, R. X., Peng, Q. H., & Song, L. M. 2010b, RAA, 10, 553
* (60) Tong, H., Song L. M., & Xu, R. X. 2011, ApJ, 738, 31
* (61) Tong, H., & Xu, R. X. 2011, Int. Jour. Mod. Phys. E, 20, 15
* (62) Turolla, R., Zane, S., Pons, J. A., et al. 2011, ApJ, 740, 105
* (63) Vink, J., & Kuiper, L. 2006, MNRAS, 370, L14
* (64) Vink, J., & Bamba, A. 2009, ApJ, 707, L148
* (65) Wang, J., Wang, N., Tong, H., & Yuan, J. 2012, Ap&SS, 340, 307
* (66) Watts, A. L. 2011, arXiv:1111.0514
* (67) Woods, P. M., Kouveliotou, C., Gogus, E., et al. 2002, ApJ, 567, 381
* (68) Woods, P. M., Kouveliotou, C., Finger, M. H., et al. 2007, ApJ, 654, 470
* (69) Xu, R. X., & Qiao, G. J. 2001, ApJ, 561, L85
* (70) Xu, R. X. 2007, Advances in space research, 40, 1453
* (71) Zhang, B. 2003, Astrophysics and Space Science Library, 298, 27
|
1604.06980 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 41464,
"num_imgs": 2,
"llama3_tokens_count": 14445
} | [
"content_image/1604.06980/x1.png",
"content_image/1604.06980/x4.png"
] | # On exact and optimal recovering of missing values for sequences
Nikolai Dokuchaev
Department of Mathematics & Statistics, Curtin University, GPO Box U1987, Perth 6845,
Western Australia, Australia
Submitted April 24, 2016. Revised January 4, 2017
###### Abstract
The paper studies the recoverability of missing values for sequences in a pathwise setting, without probabilistic assumptions. This setting is oriented towards a situation where the underlying sequence is treated as a sole sequence rather than a member of an ensemble with known statistical properties. Sufficient conditions for recoverability are obtained; it is shown that sequences are recoverable if there is a certain degree of degeneracy of their Z-transforms. We found that, in some cases, this degree can be measured as the number of derivatives of the Z-transform vanishing at a point. For processes with a non-degenerate Z-transform, an optimal recovery based on the projection onto a set of recoverable sequences is suggested. Some robustness of the solution with respect to noise contamination and truncation is established.
**Key words**: data recovery, discrete time, sampling theorem, band-limited interpolation.
†
[FOOTNOTE:†][ENDFOOTNOTE]
## 1 Introduction
The paper studies optimal recovery of missing values for sequences, or discrete time deterministic processes. This important problem has been studied intensively. The classical result for stationary stochastic processes with spectral density \(\phi\) is that a single missing value is recoverable with zero error if and only if
\[\int_{-\pi}^{\pi}\phi(\omega)^{-1}d\omega=+\infty.\] (1)
(Kolmogorov [12], Theorem 24). Stationary Gaussian stochastic processes without this property are called _minimal_ [12]. In particular, a process is recoverable if it is “band-limited”, meaning that the spectral density vanishes on an arc of the unit circle \({\mathbb{T}}=\{z\in{\bf C}:\ |z|=1\}\). This illustrates the relationship of recoverability with the notion of band-limitedness or its relaxed versions such as (1). In particular, criterion (1) was extended to stable processes [14] and vector Gaussian processes [15].
In theory, a process can be converted into a band-limited and recoverable process with a low-pass filter. However, an ideal low-pass filter cannot be applied if there are missing values. This leads to approximation and optimal estimation of missing values. For forecasting and other applications, it is common to use band-limited approximations of non-bandlimited underlying processes. There are many works devoted to smoothing and sampling based on frequency properties; see e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 17].
The present paper also considers band-limited approximations. We consider approximation of an observed sequence in \(\ell_{r}\)-norms rather than matching the values at selected points. The solution is not error-free; the error can be significant if the underlying process is not band-limited. This is different from the setting in [2, 3, 4, 11, 13], where error-free recovery was considered. Our setting is closer to the setting from [18, 20]. In [18], optimization was considered as minimization of the total energy of an approximating bandlimited process within a given distance from the original process smoothed by an ideal low-pass filter. In [20], extrapolation of a band-limited process matching a finite number of points was considered, using a special Slepian-type basis in the frequency domain.
The present paper considers optimal recovery of missing values of sequences (discrete time processes) based on intrinsic properties of the sequences, in a pathwise setting, without using probabilistic assumptions on the ensemble. This setting targets a scenario where a sole underlying sequence is deemed to be unique, so that one cannot rely on statistics collected from observations of other similar samples. To address this, we use a pathwise optimality criterion that does not involve an expectation on a probability space. For this setting, we obtain explicit optimal estimates for missing values of processes of a general type (Theorems 1 and 2). We identify some classes of processes with degenerate Z-transforms allowing error-free recoverability (Corollaries 1 and 3). For the special case of a single missing value, this gives a condition of error-free recoverability of sequences reminiscent of the classical criterion (1) for stochastic processes, but based on intrinsic properties of sequences in the pathwise setting (Corollary 3). In addition, we establish numerical stability and robustness of the method with respect to input errors and data truncation (Section 5).
## 2 Some definitions and background
Let \({\mathbb{Z}}\) be the set of all integers. For a set \(G\subset{\mathbb{Z}}\) and \(r\in[1,\infty]\), we denote by \(\ell_{r}(G)\) the Banach space of complex valued sequences \(\{x(t)\}_{t\in G}\) such that \(\|x\|_{\ell_{r}(G)}\stackrel{{{\Delta}}}{{=}}\left(\sum_{t\in G}|x(t)|^{r}\right)^{1/r}<+\infty\) for \(r\in[1,+\infty)\), and \(\|x\|_{\ell_{\infty}(G)}\stackrel{{{\Delta}}}{{=}}\sup_{t\in G}|x(t)|<+\infty\) for \(r=\infty\).
For \(x\in\ell_{2}({\mathbb{Z}})\), we denote by \(X={\cal Z}x\) the Z-transform
\[X(z)=\sum_{t=-\infty}^{\infty}x(t)z^{-t},\]
defined for \(z\in{\bf C}\) such that the series converge. For \(x\in\ell_{2}({\mathbb{Z}})\), the function \(X\left(e^{i\omega}\right)|_{\omega\in(-\pi,\pi]}\) is defined as an element of \(L_{2}(-\pi,\pi)\). For \(x\in\ell_{1}({\mathbb{Z}})\), the function \(X\left(e^{i\omega}\right)\) is defined for all \(\omega\in(-\pi,\pi]\) and is continuous in \(\omega\).
Let \(m\in{\mathbb{Z}}\) be given, \(m\geq 0\). For \(s\in{\mathbb{Z}}\), let \(M_{s}=\{s,s+1,s+2,...,s+m\}\).
We consider the data recovery problem for input processes \(x\in\ell_{r}\) such that the trace \(\{x(t)\}_{t\in{{\mathbb{Z}}\setminus M_{s}}}\) represents the available observations; the values \(\{x(t)\}_{t\in M_{s}}\) are missing.
**Definition 1**.: _Let \({\cal Y}\subset\ell_{r}\) be a class of sequences. We say that this class is recoverable if, for any \(s\in{\mathbb{Z}}\), there exists a mapping \(F:\ell_{r}({{\mathbb{Z}}\setminus M_{s}})\to{\bf C}^{m+1}\) such that \(x|_{M_{s}}=F\left(x|_{{{\mathbb{Z}}\setminus M_{s}}}\right)\) for all \(x\in{\cal Y}\)._
For a sequence that does not belong to a recoverable class, it is natural to accept, as an approximate solution, the corresponding values of the closest process from a preselected recoverable class. More precisely, given observations \(x|_{{{\mathbb{Z}}\setminus M_{s}}}\) and a recoverable class \({\cal Y}\subset\ell_{r}\), we suggest finding an optimal solution \(\widehat{x}\in{\cal Y}\) of the minimization problem
\[\hbox{Minimize}\quad\sum_{t\in{{\mathbb{Z}}\setminus M_{s}}}| \widehat{x}(t)-x(t)|^{2}\]
\[\hbox{over}\quad\widehat{x}\in{\cal Y},\] (2)
and accept the trace \(\widehat{x}|_{M_{s}}\) as the recovered missing values \(x|_{M_{s}}\).
## 3 Recovering based on band-limited smoothing
We assume that we are given \(\Omega\in(0,\pi)\). Let \(\ell_{2}^{{ BL},{\Omega}}\) be the set of all \(x\in\ell_{2}({\mathbb{Z}})\) such that \(X\left(e^{i\omega}\right)=0\) for \(|\omega|>\Omega\) for \(X={\cal Z}x\). We will call sequences \(x\in\ell_{2}^{{ BL},{\Omega}}\)_band-limited_. Let \(\ell_{2}^{{ BL},{\Omega}}({{\mathbb{Z}} \setminus M_{s}})\) be the subset of \(\ell_{2}({{\mathbb{Z}}\setminus M_{s}})\) consisting of traces \(x|_{{{\mathbb{Z}}\setminus M_{s}}}\) for all sequences \(x\in\ell_{2}^{{ BL},{\Omega}}\).
**Proposition 1**.: _For any \(x\in\ell_{2}^{{ BL},{\Omega}}({{\mathbb{Z} }\setminus M_{s}})\), there exists a unique \(\widehat{x}\in\ell_{2}^{{ BL},{\Omega}}\) such that \(\widehat{x}(t)=x(t)\) for \(t\in{{\mathbb{Z}}\setminus M_{s}}\)._
In the general case, where the sequence of observations \(x|_{{\mathbb{Z}}\setminus M_{s}}\) does not necessarily represent a trace of a band-limited process, we will use the approximation described in the following lemma.
**Lemma 1**.: _There exists a unique optimal solution \(\widehat{x}\in\ell_{2}^{{ BL},{\Omega}}\) of the minimization problem (2) with \(r=2\) and \({\cal Y}=\ell_{2}^{{ BL},{\Omega}}\)._
Under the assumptions of Lemma 1, there exists a unique band-limited process \(\widehat{x}\) such that the trace \(\widehat{x}|_{{{\mathbb{Z}}\setminus M_{s}}}\) provides an optimal approximation of its observable trace \(x|_{{{\mathbb{Z}}\setminus M_{s}}}\). The corresponding trace \(\widehat{x}|_{M_{s}}\) is uniquely defined and can be interpreted as the solution of the problem of optimal recovering of the missing values \(x|_{M_{s}}\) (optimal in the sense of problem (2) given \(\Omega\)). In this setting, the process \(\widehat{x}\) is deemed to be a smoothed version of \(x\), and the process \(\eta=x-\widehat{x}\) is deemed to be an irregular noise. This justifies acceptance of \(\widehat{x}|_{M_{s}}\) as an estimate of missing values. It can be noted that the recovered values depend on the choice of \(\Omega\); the selection of \(\Omega\) has to be based on some presumptions about cut-off frequencies suitable for particular applications.
Let \(H(z)\) be the transfer function for an ideal low-pass filter such that \(H\left(e^{i\omega}\right)={\mathbb{I}}_{[-\Omega,\Omega]}(\omega)\), where \({\mathbb{I}}\) denotes the indicator function. Let \(h={\cal Z}^{-1}H\); it is known that \(h(t)=\Omega\,{\rm sinc\,}(\Omega t)/\pi\); we use the notation \({\rm sinc\,}(x)=\sin(x)/x\), and we use notation \(\circ\) for the convolution in \(\ell_{2}({\mathbb{Z}})\). The definitions imply that \(h\circ x\in\ell_{2}^{{ BL},{\Omega}}\) for any \(x\in\ell_{2}({\mathbb{Z}})\).
Consider a matrix \({\rm A}=\{h(k-p)\}_{k=0,p=0}^{m,m}\in{\bf R}^{(m+1)\times(m+1)}\). Let \(I_{m+1}\) be the unit matrix in \({\bf R}^{(m+1)\times(m+1)}\).
**Lemma 2**.: _The matrix \(I_{m+1}-{\rm A}\) is non-degenerate._
**Theorem 1**.: _Let \(x\in\ell_{2}({\mathbb{Z}})\) and \(\Omega\in(0,\pi)\). Given observations \(x|_{{{\mathbb{Z}}\setminus M_{s}}}\), the problem (2) with \(r=2\) and \({\cal Y}=\ell_{2}^{{ BL},{\Omega}}\) has a unique optimal solution \(\widehat{x}\in\ell_{2}^{{ BL},{\Omega}}\) which yields an estimate of \(x|_{M_{s}}\) defined as_
\[\widehat{x}(s+p)=y_{p},\quad p=0,1,...,m,\] (3)
_where \(y=\{y_{p}\}_{p=0}^{m}\in{\bf C}^{m+1}\) is defined as_
\[y=(I_{m+1}-{\rm A})^{-1}z,\] (4)
_with \(z=\{z_{p}\}_{p=0}^{m}\in{\bf C}^{m+1}\) defined as_
\[z_{p}=\sum_{t\in{{\mathbb{Z}}\setminus M_{s}}}h(s+p-t)x(t).\] (5)
**Corollary 1**.: _For any \(\Omega\in(0,\pi)\), the class \(\ell_{2}^{{ BL},{\Omega}}\) is recoverable in the sense of Definition 1._
**Remark 1**.: _Equations (3)-(5) applied to a band-limited process \(x\in\ell_{2}^{{ BL},{\Omega}}\) represent a special case of the result of [9, 10]. The difference is that \(x\) in Theorem 1 and (3)-(5) is not necessarily band-limited._
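A minimal sketch of the estimator (3)-(5), assuming the observations are available on a finite symmetric window so that the infinite sum in (5) has to be truncated; the function name `recover_missing` and the test signal are illustrative assumptions, not part of the paper.

```python
import numpy as np

def recover_missing(x_obs, t_obs, s, m, Omega):
    """Estimate x(s), ..., x(s+m) from observations x_obs at integer times t_obs."""
    t_obs = np.asarray(t_obs, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)
    # h(t) = Omega*sinc(Omega*t)/pi with sinc(x) = sin(x)/x; np.sinc uses sin(pi x)/(pi x)
    h = lambda u: (Omega / np.pi) * np.sinc(Omega * np.asarray(u, dtype=float) / np.pi)
    ks = np.arange(m + 1)
    A = h(ks[:, None] - ks[None, :])                                  # A_{kp} = h(k - p)
    z = np.array([np.sum(h(s + p - t_obs) * x_obs) for p in ks])      # eq. (5), truncated
    return np.linalg.solve(np.eye(m + 1) - A, z)                      # eqs. (3)-(4)

# illustrative test: a band-limited signal observed on |t| <= 200, x(0) and x(1) withheld
Omega, s, m = 0.4 * np.pi, 0, 1
t_all = np.arange(-200, 201)
x_true = np.sinc(0.3 * t_all) + 0.5 * np.sinc(0.3 * (t_all - 7))      # cut-off 0.3*pi < Omega
observed = (t_all < s) | (t_all > s + m)
y = recover_missing(x_true[observed], t_all[observed], s, m, Omega)
print(y, x_true[~observed])   # close, up to the truncation of the infinite sums
```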
### The case of a single missing value
It appears that the solution for the special case of a single missing value (i.e. where \(m=0\)) allows a convenient explicit formula.
**Corollary 2**.: _Let \(\Omega\in(0,\pi)\) and \(x\in\ell_{2}({\mathbb{Z}})\) be given. Given observations \(x|_{{\mathbb{Z}}\setminus\{s\}}\), the problem (2) with \(r=2\) and \({\cal Y}=\ell_{2}^{{ BL},{\Omega}}\) has a unique solution \(\widehat{x}\in\ell_{2}^{{ BL},{\Omega}}\) which yields an estimate of \(x(s)\) defined as_
\[\widehat{x}(s)=\frac{\Omega}{\pi-\Omega}\sum_{t\in{{\mathbb{Z}} \setminus M_{s}}}x(t){\rm sinc\,}[\Omega(s-t)].\] (6)
_This solution is optimal in the sense of problem (2) with \(m=0\), \(M_{s}=\{s\}\), \(r=2\), and \({\cal Y}=\ell_{2}^{{ BL},{\Omega}}\), given \(\Omega\in(0,\pi)\)._
**Remark 2**.: _Corollary 2 applied to a band-limited process \(x_{ BL}\in\ell_{2}^{{ BL}, {\Omega}}\) gives a formula_
\[x_{ BL}(s)=\frac{\Omega}{\pi-\Omega}\sum_{t\in {{\mathbb{Z}}\setminus M_{s}}}x_{ BL}(t){\rm sinc\,}[\Omega( s-t)].\]
_This formula is known [9, 10]; however, equation (6) in Corollary 2 is different since \(x\) in (6) is not necessarily band-limited._
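A corresponding one-line sketch of formula (6) for a single missing value (for \(m=0\) it reproduces the matrix solution sketched above); the function name is ours and the truncation of the sum is an assumption.

```python
import numpy as np

def recover_single(x_obs, t_obs, s, Omega):
    """Formula (6), with the sum truncated to the observed times t_obs (t = s excluded)."""
    d = s - np.asarray(t_obs, dtype=float)
    # np.sinc(x) = sin(pi x)/(pi x), so np.sinc(Omega*d/pi) = sinc(Omega*d) in the sin(x)/x sense
    return Omega / (np.pi - Omega) * np.sum(np.asarray(x_obs, dtype=float) * np.sinc(Omega * d / np.pi))
```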
## 4 Recovering without smoothing
Theorem 1 suggests replacing missing values by the corresponding values of a smoothed band-limited process. This process is actually different from the underlying input process, which could cause a loss of some information contained in high-frequency components. Besides, it could be difficult to justify a particular choice of \(\Omega\) in (6) defining the degree of smoothing. To overcome this, we consider below the limit case where \(\Omega\to\pi-0\).
Again, we consider input sequences \(\{x(t)\}_{t\in{{\mathbb{Z}}\setminus M_{s}}}\) representing the observations available; the values for \(t\in M_{s}\) are missing.
Without loss of generality, we assume that either \(s=0\) or \(m=0\).
Let \(\omega_{0}\in(0,\pi]\) be given.
For \(\sigma=(\sigma_{0},\sigma_{1}...,\sigma_{m})\in{\bf R}^{m+1}\) such that \(\sigma_{k}\geq 0\), \(k=0,1,...,m\), let
\[{\cal X}_{\sigma}\stackrel{{{\Delta }}}{{=}}\Bigl{\{}x\in\ell_{1}:\ \sum_{t\in{\mathbb{Z}}}|t|^{m}|x(t)|<+\infty, \quad\left|\frac{d^{k}X}{d\omega^{k}}\left(e^{i\omega_{0}}\right)\right|\leq \sigma_{k},\]
\[k=0,1,...,m,\quad X={\cal Z}x\Bigr{\}}.\]
Here and below we assume, as usual, that \(d^{k}X/d\omega^{k}=X\) for \(k=0\).
It can be shown that, for \(x\in{\cal X}_{\sigma}\) and \(X={\cal Z}x\), we have that the functions \(\frac{d^{k}X\left(e^{i\omega}\right)}{d\omega^{k}}\) are continuous in \(\omega\) for \(k=0,1,...,m\).
**Definition 2**.: _Let \({\cal X}_{0}\) be the corresponding set \({\cal X}_{\sigma}\) with \(\sigma=0\), i.e. with \(\sigma_{p}=0\) for \(p=0,1,...,m\). We will call a sequence \(x\in{\cal X}_{0}\) degenerate of order \(m\)._
Let us introduce a matrix \({\rm B}(\omega)=\{b_{pk}(\omega)\}_{k=0,p=0}^{m,m}\in{\bf C}^{(m+1)\times(m+1)}\) such that
\[b_{pk}(\omega)=[-i(s+k)]^{p}e^{-i\omega(s+k)},\quad\omega\in(- \pi,\pi].\]
In particular, if \(m=0\), then \({\rm B}(\omega)=e^{-i\omega s}\). If \(m>0\), then, by the assumptions, \(s=0\) and \(b_{pk}(\omega)=(-ik)^{p}e^{-i\omega k}\).
**Lemma 3**.: _For any \(\omega\in(-\pi,\pi]\), the matrix \({\rm B}(\omega)\) is non-degenerate._
**Theorem 2**.: _Let \(x\in\ell_{1}({\mathbb{Z}})\) be given such that \(\sum_{t\in{\mathbb{Z}}}|t|^{m}|x(t)|<+\infty\). Given observations \(x|_{{{\mathbb{Z}}\setminus M_{s}}}\), the problem (2) with \(r=1\) and \({\cal Y}={\cal X}_{0}\) has a unique solution \(\widehat{x}\in{\cal X}_{0}\) which yields an estimate of \(x|_{M_{s}}\) defined as_
\[\widehat{x}(s+p)=y_{p}(\omega_{0}),\quad p=0,1,...,m,\] (7)
_where \(y(\omega)=\{y_{p}(\omega)\}_{p=0}^{m}\in{\bf C}^{m+1}\) is defined as_
\[y(\omega)={\rm B}(\omega)^{-1}z(\omega),\] (8)
_with \(z(\omega)=\{z_{p}(\omega)\}_{p=0}^{m}\in{\bf C}^{m+1}\) defined as_
\[z_{p}(\omega)=-\sum_{t\in{{\mathbb{Z}}\setminus M_{s}}}(-it)^{p} e^{-i\omega t}x(t).\] (9)
Under the assumptions of Theorem 2, there exists a unique recoverable process \(\widehat{x}\in{\cal X}_{0}\) such that \(\widehat{x}|_{t\in{{\mathbb{Z}}\setminus M_{s}}}=x|_{t\in{{\mathbb{Z}}\setminus M_{s}}}\). The corresponding trace \(\widehat{x}|_{M_{s}}\) is uniquely defined and can be interpreted as the solution of the problem of optimal recovery of the missing values \(x|_{M_{s}}\) (optimal in the sense of problem (2) for \({\cal Y}={\cal X}_{0}\)). In addition, Theorem 2 implies that \({\cal X}_{0}\neq\emptyset\) for any \(m\geq 0\); this follows from the theorem's implication that a sequence from \(\ell_{1}\) can be transformed into a sequence in \({\cal X}_{0}\) by changing its values at the \(m+1\) points of \(M_{s}\).
**Corollary 3**.: _The class \({\cal X}_{0}\) is recoverable in the sense of Definition 1 with \(r=1\) and \({\cal Y}={\cal X}_{0}\)._
**Remark 3**.: _By Corollary 3 applied with \(m=0\), a single missing value of a process \(x\in\ell_{1}\) is recoverable if \(X\left(e^{i\omega_{0}}\right)=0\) for \(X={\cal Z}x\); this is reminiscent of condition (1) for the spectral density of minimal Gaussian processes [12]._
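A sketch of the estimator (7)-(9), with the sum in (9) truncated to the available observations; the function name and the degenerate test sequence (constructed by hand so that \(X\left(e^{i\pi}\right)=0\)) are illustrative assumptions.

```python
import numpy as np

def recover_degenerate(x_obs, t_obs, s, m, omega0):
    """Estimate x(s), ..., x(s+m) via eqs. (7)-(9), truncating the sum in (9) to t_obs."""
    t_obs = np.asarray(t_obs, dtype=float)
    x_obs = np.asarray(x_obs, dtype=complex)
    ks = s + np.arange(m + 1)                    # the missing positions s, ..., s+m
    ps = np.arange(m + 1)
    B = (-1j * ks[None, :]) ** ps[:, None] * np.exp(-1j * omega0 * ks[None, :])   # b_pk
    z = np.array([-np.sum((-1j * t_obs) ** p * np.exp(-1j * omega0 * t_obs) * x_obs)
                  for p in ps])                                                    # eq. (9)
    return np.linalg.solve(B, z)                 # eq. (8); real up to rounding for real data

# illustrative check for m = 0, omega_0 = pi: enforce X(exp(i*pi)) = 0 by hand
t = np.arange(-50, 51)
x = np.exp(-np.abs(t) / 5.0)
sign = np.where(t % 2 == 0, 1.0, -1.0)           # (-1)**t for integer t
x[t == 0] = -np.sum(sign[t != 0] * x[t != 0])    # now sum_t (-1)**t x(t) = 0
est = recover_degenerate(x[t != 0], t[t != 0], s=0, m=0, omega0=np.pi)
print(est[0].real, x[t == 0][0])                 # the two values should coincide
```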
### The case of a single missing value
Again, the solution for the special case of a single missing value (i.e. where \(m=0\) and \(M_{s}=\{s\}\)) allows a simple explicit formula.
**Corollary 4**.: _Let \(s\in{\mathbb{Z}}\) and \(x\in\ell_{1}({\mathbb{Z}})\) be given. Given observations \(x|_{{\mathbb{Z}}\setminus\{s\}}\), the problem (2) with \(r=1\) and \({\cal Y}={\cal X}_{0}\) has a unique solution \(\widehat{x}\in{\cal X}_{0}\) which yields an estimate of \(x(s)\) defined as_
\[\widehat{x}(s)=-\sum_{t\neq s}e^{i\omega_{0}(s-t)}x(t),\] (10)
_where the optimality is understood in the sense of problem (2) with \(m=0\), \(M_{s}=\{s\}\), \(r=1\), and \({\cal Y}={\cal X}_{0}\)._
**Remark 4**.: _Formula (10) with \(\omega_{0}=\pi\) has the form_
\[\widehat{x}(s)=-\sum_{t\in{{\mathbb{Z}}\setminus M_{s}}}(-1)^{t-s }x(t).\] (11)
_This represents the limit case of formula (6), since_
\[\frac{\Omega}{\pi-\Omega}{\rm sinc\,}[\Omega(s-t)]\to-(-1)^{t-s} \quad\hbox{as}\quad\Omega\to\pi-0\]
_for all \(t\neq s\)._
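A short numerical check of this limit (the weights of (6) approaching \(-(-1)^{t-s}\) as \(\Omega\to\pi-0\)); the helper name is ours and the values are purely illustrative.

```python
import numpy as np

def weight(Omega, d):                        # d = s - t, d != 0
    return Omega / (np.pi - Omega) * np.sinc(Omega * d / np.pi)

for d in (1, 2, 5):
    for Omega in (0.9 * np.pi, 0.99 * np.pi, 0.999 * np.pi):
        print(d, round(weight(Omega, d), 4), "->", -(-1) ** d)   # converges to -(-1)**d
```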
### Optimality in the minimax sense
It will be convenient to use mappings \(\delta_{p}:{\bf C}^{m+1}\to{\bf C}\), where \(p\in\{0,1,...,m\}\), such that \(\delta_{p}(y)=y_{p}\) for a vector \(y=(y_{0},y_{1},...,y_{m})\in{\bf C}^{m+1}\).
**Proposition 2**.: _In addition to the optimality in the sense of problem (2) with \({\cal Y}={\cal X}_{0}\), the solutions obtained in Theorem 2 and Corollary 4 are also optimal in the following sense._
1. _If_ \(m=0\)_, then solution (_10_) is optimal in the minimax sense such that_ \[\sup_{x\in{\cal X}_{\sigma}}|\widehat{x}(s)-x(s)|\leq\sigma_{0}\leq\sup_{x\in{\cal X}_{\sigma}}|\widetilde{x}(s)-x(s)|\] (12) _for any estimator_ \(\widetilde{x}(s)=F\left(x|_{{\mathbb{Z}}\setminus\{s\}}\right)\)_, where_ \(F:\ell_{1}({\mathbb{Z}}\setminus\{s\})\to{\bf C}\) _is a mapping._
2. _If_ \(m\geq 0\) _and_ \(s=0\)_, then solution (_7_)-(_9_) is optimal in the minimax sense such that_ \[\sup_{x\in{\cal X}_{\sigma}}|\delta_{p}({\rm B}(\omega_{0})\widehat{\eta})|\leq\sigma_{p}\leq\sup_{x\in{\cal X}_{\sigma}}|\delta_{p}({\rm B}(\omega_{0})\widetilde{\eta})|,\] \[p=0,1,...,m,\] (13) _for any estimator_ \(\widetilde{x}|_{M_{s}}=F\left(x|_{{\mathbb{Z}}\setminus M_{s}}\right)\)_, where_ \(F:\ell_{1}({{\mathbb{Z}}\setminus M_{s}})\to{\bf C}^{m+1}\) _is a mapping,_ \(\widehat{\eta}=\{\widehat{x}(t)-x(t)\}_{t=s}^{s+m}\in{\bf C}^{m+1}\)_,_ \(\widetilde{\eta}=\{\widetilde{x}(t)-x(t)\}_{t=s}^{s+m}\in{\bf C}^{m+1}\)_._
## 5 Robustness with respect to noise contamination and data truncation
Let us consider a situation where an input process \(x|_{{\mathbb{Z}}\setminus M_{s}}\) is observed with an error. In other words, assume that we observe a process \(x_{\eta}|_{{\mathbb{Z}}\setminus M_{s}}=x|_{{\mathbb{Z}}\setminus M_{s}}+\eta| _{{\mathbb{Z}}\setminus M_{s}}\), where \(\eta\) is a noise.
For a matrix \(S\in{\bf C}^{(m+1)\times(m+1)}\) and \(r_{1},r_{2}\in[1,+\infty]\), we denote by \(\|S\|_{r_{1},r_{2}}\) the operator norm of this matrix considered as an operator \(S:{\bf C}^{m+1}_{r_{1}}\to{\bf C}^{m+1}_{r_{2}}\), where \({\bf C}^{m+1}_{r}\) denotes the linear normed space formed by \({\bf C}^{m+1}\) equipped with the \(\ell_{r}\)-norm.
**Proposition 3**.: _In the notations of Theorem 1,_
\[\|\widehat{x}|_{M_{s}}\|_{\ell_{\theta}(M_{s})}\leq\left\|(I_{m+1 }-{\rm A})^{-1}\right\|_{2,\theta}\|x|_{{{\mathbb{Z}}\setminus M_{s}}}\|_{\ell _{2}({{\mathbb{Z}}\setminus M_{s}})}.\]
_for any \(\theta\in[1,+\infty]\). In particular, under the assumption of Corollary 2,_
\[|\widehat{x}(s)|\leq\frac{\Omega}{\pi-\Omega}\|x\|_{\ell_{2}({{ \mathbb{Z}}\setminus M_{s}})}.\]
**Proposition 4**.: _In the notations of Theorem 2,_
\[\|\widehat{x}|_{M_{s}}\|_{\ell_{\theta}(M_{s})}\leq\left\|{\rm B} (\omega_{0})^{-1}\right\|_{\infty,\theta}\sum_{t\in{{\mathbb{Z}}\setminus M_{s }}}|t|^{m}|x(t)|\]
_for any \(\theta\in[1,+\infty]\). In particular, under the assumption of Corollary 4,_
\[|\widehat{x}(s)|\leq\|x\|_{\ell_{1}({{\mathbb{Z}}\setminus M_{s}})}.\]
Propositions 3 and 4 ensure robustness of the data recovery with respect to noise contamination and truncation. This can be shown as follows.
Let \(\widehat{x}_{\eta}|_{M_{s}}\) be the sequence of corresponding values defined by (3)-(5) or (7)-(9) with \(x_{\eta}|_{{\mathbb{Z}}\setminus M_{s}}\) as an input, and let \(\widehat{x}|_{M_{s}}\) be the corresponding values defined by (3)-(5) or (7)-(9) with \(x|_{{\mathbb{Z}}\setminus M_{s}}\) as an input. By Proposition 3,
\[\|(\widehat{x}-\widehat{x}_{\eta})|_{M_{s}}\|_{\ell_{\theta}(M_{s})}\leq\|(I_{m+1}-{\rm A})^{-1}\|_{2,\theta}\|\eta\|_{\ell_{2}({{\mathbb{Z}}\setminus M_{s}})}\] (14)
for all \(\eta|_{{\mathbb{Z}}\setminus M_{s}}\in\ell_{2}({{\mathbb{Z}}\setminus M_{s}})\). In particular, under the assumption of Corollary 2, i.e. for \(m=0\) and \(M_{s}=\{s\}\), it follows that, in the notations of Theorem 1,
\[|\widehat{x}(s)-\widehat{x}_{\eta}(s)|\leq\frac{\Omega}{\pi-\Omega}\|\eta\|_{\ell_{2}({{\mathbb{Z}}\setminus M_{s}})}.\] (15)
Similarly, Proposition 4 implies that
\[|\widehat{x}(s)-\widehat{x}_{\eta}(s)|\leq\|z_{\eta}(\omega_{0}) \|_{\ell_{1}({{\mathbb{Z}}\setminus M_{s}})}\] (16)
for all \(\eta|_{{\mathbb{Z}}\setminus M_{s}}\in\ell_{1}({{\mathbb{Z}}\setminus M_{s}})\), under the assumptions of Theorem 2, with \(z_{\eta}(\omega)=\{z_{\eta}(p,\omega)\}_{p=0}^{m}\in{\bf C}^{m+1}\) defined as
\[z_{\eta}(p,\omega)=-\sum_{t\in{{\mathbb{Z}}\setminus M_{s}}}(-it )^{p}e^{-i\omega t}\eta(t).\]
This demonstrates some robustness of the method with respect to noise in the observations. In particular, this ensures robustness of the estimate with respect to truncation of the input processes, where infinite sequences \(x\in\ell_{r}({{\mathbb{Z}}\setminus M_{s}})\), \(r\in\{1,2\}\), are replaced by truncated sequences \(x_{\eta}(t)=x(t){\mathbb{I}}_{\{|t|\leq q\}}\) for \(q>0\); in this case \(\eta(t)={\mathbb{I}}_{|t|>q}x(t)\). Clearly, \(\|\eta\|_{\ell_{r}({{\mathbb{Z}}\setminus M_{s}})}\to 0\) as \(q\to+\infty\). This overcomes the principal impossibility of accessing infinite sequences of observations.
Experiments with sequences generated by Monte Carlo simulation demonstrated good numerical stability of the method; the results were quite robust with respect to perturbations of the input processes and to truncation.
### On a choice between recovering formulae (6) and (10)
It can be seen from (14) and (16) that recovering formula (10) is less robust with respect to data truncation and noise contamination than recovering formula (6). In addition, recovering formula (10) is not applicable to \(x\in\ell_{2}({\mathbb{Z}})\setminus\ell_{1}({\mathbb{Z}})\). On the other hand, application of (10) does not require selecting \(\Omega\). In practice, numerical implementation requires replacing a sequence \(\{x(t)\}\) by a truncated sequence \(x(t){\mathbb{I}}_{\{t:\ |t|\leq q\}}\); technically, this means that both formulas can be applied. The choice between (6) and (10), and of a particular \(\Omega\) for (6), should be made based on the purpose of the model. In general, a more numerically robust result can be achieved with the choice of a smaller \(\Omega\).
This can be illustrated with the following example for a case of a single missing value. Consider a band-limited input \(x\in\ell_{2}^{{ BL},{\Omega}}\) with a missing value \(x(0)\) (i.e, \(m=0\) and \(s=0\), in the notations above). In theory, application of (6) with \(\Omega\) replaced by \(\Omega_{1}\in(\Omega,\pi]\) produces error-free recovering, i.e. \(\widehat{x}(0)=x(0)\). However, application of (6) with \(\Omega\) replaced by \(\Omega_{2}\in(0,\Omega_{1})\) may lead to a large error \(\widehat{x}(0)-x(0)\).
On the other hand, application of (10), where \(\Omega\) is not used, performs better than (6) with too small miscalculated \(\Omega_{1}\). This is illustrated by Figure 1 that shows an example of a process \(x(t)\in\ell_{2}^{{ BL},{\Omega}}\) with \(\Omega=0.1\pi\) and recovered values \(\widehat{x}(0)\) corresponding to band-limited extensions obtained from (6) with \(\Omega=0.1\pi\) and \(\Omega=0.05\pi\). In addition, this figure shows \(\widehat{x}(0)\) calculated by (10).
<figure><img src="content_image/1604.06980/x1.png"><figcaption>Figure 1: Example of a path x∈ℓBL,Ω2 with Ω=0.1π and the recovered valuesˆx(0) calculated using 100 observations: (i) calculated by (6) for Ω=0.1π(top); (ii) calculated by (6) with Ω=0.05π (middle); (iii) calculated by (10)(bottom).</figcaption></figure>
On the hand, the presence of a noise in processes that are nor recoverable without error may lead to a larger error for estimate (10). This is illustrated by Figure 2 that shows an example of a noisy process \(x\) and recovered values \(\widehat{x}(0)\) corresponding to band-limited extensions obtained from (6) with \(\Omega=0.1\pi\) and \(\Omega=0.05\pi\). In addition, this figure shows \(\widehat{x}(0)\) calculated by (10).
<figure><img src="content_image/1604.06980/x4.png"><figcaption>Figure 2: Example of a path x∈ℓ2(Z∖Ms) and the recovered values ˆx(0)calculated using 100 observations: (i) calculated by (6) for Ω=0.1π (top);(ii) calculated by (6) with Ω=0.05π (middle); (iii) calculated by (10)(bottom).</figcaption></figure>
In these experiments, we used \(M_{s}=\{0\}\) and truncated sums (6) and (10) with 100 members.
## 6 Proofs
_Proof of Proposition 1_. It is known [9, 10, 11] that a continuous time bandlimited function can be recovered without error from an oversampling sequence where a finite number of sample values is unknown. This implies that if \(x\in\ell_{2}^{{ BL},{\Omega}}\) is such that \(x(t)=0\) for \(t\in{{\mathbb{Z}}\setminus M_{s}}\), then \(x\equiv 0\). Then the proof of Proposition 1 follows. \(\Box\)
_Proof of Lemma 1._ It suffices to prove that \(\ell_{2}^{{ BL},{\Omega}}({{\mathbb{Z}} \setminus M_{s}})\) is a closed linear subspace of \(\ell_{2}({{\mathbb{Z}}\setminus M_{s}})\). In this case, there exists a unique projection \(\widehat{x}|_{{{\mathbb{Z}}\setminus M_{s}}}\) of \(x|_{{{\mathbb{Z}}\setminus M_{s}}}\) on \(\ell_{2}^{{ BL},{\Omega}}({{\mathbb{Z}} \setminus M_{s}})\), and the proof will be completed.
Let \({\mathbb{B}}\) be the set of all mappings \(X:{\mathbb{T}}\to{\bf C}\) such that \(X\left(e^{i\omega}\right)\in L_{2}(-\pi,\pi)\) and such that \(X\left(e^{i\omega}\right)=0\) for \(|\omega|>\Omega\) for \(X=\ Zx\).
Consider the mapping \(\zeta:{\mathbb{B}}\to\ell_{2}^{{ BL},{ \Omega}}({{\mathbb{Z}}\setminus M_{s}})\) such that
\[x(t)=(\zeta(X))(t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}X\left(e^{i \omega}\right)e^{i\omega t}d\omega,\quad t\in{{{\mathbb{Z}}\setminus M_{s}}}.\]
It is a linear continuous operator. By Proposition 1, it is a bijection.
Since the mapping \(\zeta:{\mathbb{B}}\to\ell_{2}^{{ BL},{ \Omega}}({{\mathbb{Z}}\setminus M_{s}})\) is continuous, it follows that the inverse mapping \(\zeta^{-1}:\ell_{2}^{{ BL},{\Omega}}({{ \mathbb{Z}}\setminus M_{s}})\to{\mathbb{B}}\) is also continuous; see e.g. Corollary in Ch.II.5 [19], p. 77. Since the set \({\mathbb{B}}\) is a closed linear subspace of \(L_{2}(-\pi,\pi)\), it follows that \(\ell_{2}^{{ BL},{\Omega}}({{\mathbb{Z}} \setminus M_{s}})\) is a closed linear subspace of \(\ell_{2}({{\mathbb{Z}}\setminus M_{s}})\). Then a solution \(\widehat{x}\) of problem (2) is such that \(\widehat{x}|_{D}\) is a projection of \(x|_{D}\) on \(\ell_{2}^{{ BL},{\Omega}}({{\mathbb{Z}} \setminus M_{s}})\) which is unique. Then the proof of Lemma 1 follows. \(\Box\)
_Proof of Lemma 2._ Let \(\bar{y}=\{\bar{y}_{k}\}_{k=0}^{m}\in{\bf C}^{m+1}\) be arbitrarily selected such that \(\|\bar{y}\|_{\ell_{2}}\neq 0\). Let \(y\in\ell_{2}({\mathbb{Z}})\) be such that \(y|_{{{\mathbb{Z}}\setminus M_{s}}}=0\) and that \(\bar{y}=y|_{M}\). In this case, \(y\notin\ell_{2}^{{ BL},{\Omega}}\); it follows, for instance, from Proposition 1. Let \(Y={\cal Z}y\). We have that \({\cal Z}(h\circ y)=H\left(e^{i\omega}\right)Y\left(e^{i\omega}\right)\). Hence \(\|H\left(e^{i\omega}\right)Y\left(e^{i\omega}\right)\|_{L_{2}(-\pi,\pi)}<\|Y \left(e^{i\omega}\right)\|_{L_{2}(-\pi,\pi)}\). This implies that \(\|h\circ y\|_{\ell_{2}}<\|y\|_{\ell_{2}}\). Hence
\[\|{\rm A}\bar{y}\|_{\ell_{2}}=\|{\mathbb{I}}_{M}(h\circ y)\|_{ \ell_{2}}\leq\|h\circ y\|_{\ell_{2}}<\|y\|_{\ell_{2}}=\|\bar{y}\|_{\ell_{2}}.\]
Since the space \(\ell_{2}(M)\) is finite dimensional, it follows that \(\|{\rm A}\|_{2,2}<1\). Then the statement of Lemma 2 follows. \(\Box\)
_Proof of Theorem 1_. Assume that the input sequences \(\{x(t)\}_{t\in{{\mathbb{Z}}\setminus M_{s}}}\) are extended on \(M_{s}\) such that \(x|_{M_{s}}=\widehat{x}|_{M_{s}}\), where \(\widehat{x}\) is the optimal process that exists according to Lemma 1. Then \(\widehat{x}\) is a unique solution of the minimization problem
\[\hbox{Minimize}\quad\sum_{t\in{\mathbb{Z}}}|x_{ BL }(t)-x(t)|^{2}\]
\[\hbox{over}\quad x_{ BL}\in\ell_{2}^{{ BL},{\Omega}}.\] (17)
By the property of the low-pass filters, \(\widehat{x}=h\circ x\). Hence the optimal process \(\widehat{x}\in\ell_{2}^{{ BL},{\Omega}}\) from Lemma 1 is such that
\[\widehat{x}=h\circ\left(x{\mathbb{I}}_{{{\mathbb{Z}}\setminus M_{ s}}}+\widehat{x}{\mathbb{I}}_{M_{s}}\right).\]
Hence
\[\widehat{x}(t)=\sum_{s\in{{\mathbb{Z}}\setminus M_{s}}}h(t-s)x(s) +\sum_{s\in M_{s}}h(t-s)\widehat{x}(s).\] (18)
This gives that
\[x(t)-\sum_{s\in M_{s}}{\rm A}_{t,s}x(s)=z_{t}.\]
This gives (3)-(5). \(\Box\)
_Proof of Corollary 1_. If \(x\in\ell_{2}^{{ BL},{\Omega}}\), then \(\widehat{x}=x\), since it is a solution of (2). By Theorem 1, \(\widehat{x}\) is obtained as is required in Definition 1 with \(r=2\) and \({\cal Y}=\ell_{2}^{{ BL},{\Omega}}\). \(\Box\)
_Proof of Lemma 3_. The case where \(m=0\) is trivial, since \({\rm B}(\omega)=e^{-\omega s}\) in this case. Let us consider the case where \(m>0\); by the assumptions, \(s=0\) in this case. Suppose that there exists \(\omega\in(-\pi,\pi]\) such that the matrix \({\rm B}(\omega)\) is degenerate. In this case, there exists \(q=\{q(k)\}_{k=0}^{m}\in{\bf C}^{m+1}\) such that \(q\neq 0\) and \({\rm B}(\omega)y=0\). Let \(Q(z)\stackrel{{{\Delta}}}{{=}}\sum_{k=s}^{s+m}q( k)z^{k}=\sum_{k=0}^{m}q(k)z^{k}\), \(z\in{\bf C}\). By the definition of \({\rm B}(\omega)\), it follows that \(\frac{d^{p}Q}{d\omega^{p}}\left(e^{i\omega}\right)=0\) for \(p=0,1,...,m\). Hence \(\frac{d^{p}Q}{dz^{p}}(z_{0})=0\) at \(z_{0}=e^{i\omega}\) for \(p=0,1,...,m\). Hence \(Q\equiv 0\). Therefore, the vector \(q\) cannot be non-zero. This completes the proof. \(\Box\)
_Proof of Theorem 2_. Let \(y\in\ell_{1}\) be selected such that \(y(t)=x(t)\) for \(t\notin M_{s}\) and \(y|_{M_{s}}=0\). Let \(Y={\cal Z}y\), and let \(\widehat{x}\in\ell_{1}\) be selected such that \(\widehat{x}(t)=x(t)\) for \(t\notin M_{s}\), with some choice of \(\widehat{x}|_{M_{s}}\). Let \(\widehat{X}={\cal Z}\widehat{x}\). It follows from the definitions that
\[\frac{d^{p}\widehat{X}}{d\omega^{p}}\left(e^{i\omega}\right)= \frac{d^{p}Y}{d\omega^{p}}\left(e^{i\omega}\right)+\sum_{t=s}^{s+m}(-i\omega t )^{p}e^{-i\omega t}\widehat{x}(t)\]
\[=-z_{p}(\omega)+\delta_{p}({\rm B}(\omega)y(\omega)),\quad p=0,1, ...,m.\]
For \(\omega=\omega_{0}\), this gives \({\rm B}(\omega_{0})y(\omega_{0})=z(\omega_{0})\). Hence there is a unique choice that ensures that \(\widehat{x}\in{\cal X}_{0}\) and \(\widehat{x}|_{{\mathbb{Z}}\setminus M_{s}}=x|_{{\mathbb{Z}}\setminus M_{s}}\); this choice is defined by equations (7)-(9). Clearly, this is a unique optimal solution of the minimization problem (13) with \(r=1\) and \({\cal Y}={\cal X}_{0}\). This completes the proof of Theorem 2. \(\Box\)
_Proof of Proposition 2_. It suffices to prove statement (ii) only, since statment (i) is its special case. Let \(x\in{\cal X}_{\sigma}\) for some \(\sigma\neq 0\), and let \(Y\left(e^{i\omega}\right)=\sum_{k\in{{\mathbb{Z}}\setminus M_{s}}}e^{-i\omega k }x(k)\), \(\omega\in(-\pi,\pi]\); this function is observable. By the definitions, it follows that
\[X\left(e^{i\omega}\right)=Y\left(e^{i\omega}\right)+\sum_{t\in M _{s}}e^{-i\omega k}x(t)\]
and
\[\frac{d^{p}X}{d\omega^{p}}\left(e^{i\omega}\right)=\frac{d^{p}Y}{ d\omega^{p}}\left(e^{i\omega}\right)+\delta_{p}({\rm B}(\omega)y(\omega)), \quad p=0,1,...,m.\]
For \(\omega=\omega_{0}\), it gives
\[\xi=-z(\omega_{0})+{\rm B}(\omega_{0})y(\omega_{0}),\]
where \(\xi=\{\xi_{p}\}_{p=0}^{m}\in{\bf C}^{m+1}\) has components \(\xi_{p}=\frac{d^{p}X}{d\omega^{p}}\left(e^{i\omega_{0}}\right)\) such that \(|\xi_{p}|\leq\sigma_{p}\). Using the estimator from Theorem 2, we accept the value \(\widehat{y}(\omega_{0})={\rm B}(\omega_{0})^{-1}z(\omega_{0})\) as the estimate of \(y(\omega_{0})=\{x(s+p)\}_{p=0}^{m}\). We have that \({\rm B}(\omega_{0})y(\omega_{0})-{\rm B}(\omega_{0})\widehat{y}(\omega_{0})=\xi\). It follows that the first inequality in (13) holds. If \(\sigma=0\) then the estimator is error-free.
Let us show that the second inequality in (13) holds. Suppose that we use another estimator \(\widetilde{x}(s)=\widetilde{F}\left(x|_{{\mathbb{Z}}\setminus M_{s}}\right)\), where \(\widetilde{F}:\ell_{2}({{\mathbb{Z}}\setminus M_{s}})\to{\bf C}\) is some mapping. Let \(p\in\{0,1,...,m\}\), and let \(X_{\pm}\left(e^{i\omega}\right)\) be such that \(\delta_{k}({\rm B}(\omega)y(\omega))=\pm\sigma_{k}{\mathbb{I}}_{\{k=p\}}\), \(k\in\{0,1,...,m\}\), and \(x_{\pm}(t)=0\) for \(t\in{{\mathbb{Z}}\setminus M_{s}}\) for \(x_{\pm}={\cal Z}^{-1}X_{\pm}\). By the definition of \({\rm B}(\omega)\), it follows . Clearly, \(x_{\pm}\in{\cal X}_{\sigma}\). Moreover, we have that \(\widetilde{x}_{-}|_{M_{s}}=\widetilde{x}_{+}|_{M_{s}}\) for \(\widetilde{x}_{\pm}=\widetilde{F}\left(x_{\pm}|_{{\mathbb{Z}}\setminus M_{s}}\right)\), for any choice of \(\widetilde{F}\), and
\[\max(|\delta_{p}({\rm B}(\omega_{0})\eta_{-})|,|\delta_{p}({\rm B }(\omega_{0})\eta_{+})|)\geq\sigma_{p},\]
\[\quad p=0,1,...,m,\]
where \(\eta_{-}=\{\widetilde{x}_{-}(t)-x_{-}(t)\}_{t=s}^{s+m}\in{\bf C}^{m+1}\), \(\eta_{+}=\{\widetilde{x}_{+}(t)-x_{+}(t)\}_{t=s}^{s+m}\in{\bf C}^{m+1}\). Then the second inequality in (13) and the proof of Proposition 2 follow. \(\Box\)
_Proof of Corollary 3._ If \(x\in{\cal X}_{0}\), then \(\widehat{x}=x\) since it is a solution of (2). By Theorem 2, \(\widehat{x}\) is obtained as is required in Definition 1 with \(r=1\) and \({\cal Y}={\cal X}_{0}\). \(\Box\)
_Proof of Proposition 3_. By Theorem 1,
\[\|\widehat{x}|_{M_{s}}\|_{\ell_{\theta}(M_{s})}\leq\|(I_{m+1}-{ \rm A})^{-1}\|_{2,\theta}\|z\|_{\ell_{2}(M_{s})}.\]
In addition,
\[\|z\|_{\ell_{2}(M_{s})}\leq\|{\mathbb{I}}_{M_{s}}(h\circ x{ \mathbb{I}}_{{{\mathbb{Z}}\setminus M_{s}}})\|_{\ell_{2}({\mathbb{Z}})}\leq\|x |_{{{\mathbb{Z}}\setminus M_{s}}}\|_{\ell_{2}({{\mathbb{Z}}\setminus M_{s}})}.\]
Then the proof of Proposition 3 follows. \(\Box\)
_Proof of Proposition 4_. By Theorem 2,
\[\|\widehat{x}|_{M_{s}}\|_{\ell_{\theta}(M_{s})}\leq\|{\rm B}( \omega_{0})^{-1}\|_{\rho,\theta}\|z(\omega_{0})\|_{\ell_{\rho}({{\mathbb{Z}} \setminus M_{s}})}.\]
Further,
\[|z_{p}(\omega_{0})|\leq\sum_{t\in{{\mathbb{Z}}\setminus M_{s}}}|t |^{m}|x(t)|.\]
Then the proof of Proposition 3 follows. \(\Box\)
## 7 Discussion and possible modifications
The present paper is focused on theoretical aspects of possibility to recover missing values. The paper suggests frequency criteria of error-free recoverability of a single missing value in pathwise deterministic setting. In particular, \(m\) missing values can be recovered for processes that are degenerate of order \(m\) (Definition 2). Corollary 3 gives a recoverability criterion reminding the classical Kolmogorov’s criterion (1) for the spectral densities [12]. However, the degree of similarity is quite limited. For instance, if a stationary Gaussian process has the spectral density \(\phi(\omega)\geq{\rm const\,}\cdot(\pi^{2}-\omega^{2})^{\nu}\) for \(\nu\in(0,1)\), then, according to criterion (1), this process is not minimal [12], i.e. this process is non-recoverable. On the other hand, Corollary 3 imply that single values of processes \(x\in\ell_{1}\) are recoverable if \(X(-1)=0\) for \(X={\cal Z}x\). In particular, this class includes sequences \(x\) such that \(|X\left(e^{i\omega}\right)|\leq{\rm const\,}\cdot(\pi^{2}-\omega^{2})^{\nu}\) for \(\nu\in(0,1)\). Nevertheless, this similarity still could be used for analysis of the properties of pathwise Z-transforms for stochastic Gaussian processes. In particular, assume that \(y=\{y(t)\}_{t\in{\mathbb{Z}}}\) is a stochastic stationary Gaussian process with spectral density \(\phi\) such that (1) does not hold. It follows that adjusted paths \(\{(1+\delta t^{2})^{-1}y(t)\}_{t\in{\mathbb{Z}}}\), where \(\delta>0\), cannot belong to \(\ell_{2}^{{ BL},{\Omega}}\) or \({\cal X}_{0}\). We leave this analysis for the future research.
There are some other open questions. The most challenging problem is to obtain pathwise necessary conditions of recoverability that are close enough to sufficient conditions. In addition, there are more technical questions. In particular, it is unclear if it possible to relax conditions of recoverability described as weighted \(\ell_{1}\)-summarability presented in the definition for \({\cal X}_{\sigma}\). It is also unclear if it is possible to replace the restrictions on the derivatives of Z-transform imposed at one common point for the processes from \({\cal X}_{0}\) by conditions at different points. We leave this for the future research.
### Acknowledgment
This work was supported by ARC grant of Australia DP120100928 to the author. In addition, the author thanks the anonymous referees for their valuable suggestions which helped to improve the paper.
## References
* Alem eta al [2014] Y. Alem, Z. Khalid, R.A. Kennedy. (2014). Band-limited extrapolation on the sphere for signal reconstruction in the presence of noise, _Proc. IEEE Int. Conf. ICASSP’2014_, pp. 4141-4145.
* Cai [2009] T. Cai, G. Xu, and J. Zhang. (2009), On recovery of sparse signals via \(\ell_{1}\) minimization, _IEEE Trans. Inf. Theory_, vol. 55, no. 7, pp. 3388-3397.
* Candes et al [2006a] E. Candés, T. Tao. (2006), Near optimal signal recovery from random projections: Universal encoding strategies? _IEEE Transactions on Information Theory_ 52(12) (2006), 5406-5425.
* Candes et al [2006b] E.J. Candes, J. Romberg, T. Tao. (2006). Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. _IEEE Transactions on Information Theory_**52** (2), 489–509.
* Dokuchaev [2010a] N. Dokuchaev. (2012). On predictors for band-limited and high-frequency time series. _Signal Processing_**92**, iss. 10, 2571-2575.
* Dokuchaev [2012b] N. Dokuchaev. (2012). Predictors for discrete time processes with energy decay on higher frequencies. _IEEE Transactions on Signal Processing_**60**, No. 11, 6027-6030.
* Dokuchaev [2016] N. Dokuchaev. (2016). Near-ideal causal smoothing filters for the real sequences. _Signal Processing_**118**, iss. 1, pp. 285-293.
* Donoho and Stark [1989] D. L. Donoho and P. B. Stark. (1989). Uncertainty principles and signal recovery. _SIAM J. Appl. Math._, vol. 49, no. 3, pp. 906–931.
* Ferreira [1992] P.J.S.G. Ferreira. (1992). Incomplete sampling series and the recovery of missing samples from oversampled band-limited signals, IEEE Transactions on signal processing, 40(1), pp.225–227.
* Ferreira [1994a] P.J.S.G. Ferreira. (1994). The stability of a procedure for the recovery of lost samples in band-limited signals, Signal Processing, 40(2-3), pp.195-205.
* Ferreira [1994] P. G. S. G. Ferreira (1994). Interpolation and the discrete Papoulis-Gerchberg algorithm. _IEEE Transactions on Signal Processing_**42** (10), 2596–2606.
* Kolmogorov [1941] A.N. Kolmogorov. (1941). Interpolation and extrapolation of stationary stochastic series. _Izv. Akad. Nauk SSSR Ser. Mat.,_ 5:1, 3–14.
* Lee and Ferreira [2014] D.G. Lee, P.J.S.G. Ferreira. (2014). Direct construction of superoscillations. _IEEE Transactions on Signal processing_, V. 62, No. 12,3125-3134.
* Peller [2000] V. V. Peller (2000). Regularity conditions for vectorial stationary processes. _In: Complex Analysis, Operators, and Related Topics. The S.A. Vinogradov Memorial Volume._ Ed. V. P. Khavin and N. K. Nikol’skii. Birkhauser Verlag, pp 287-301.
* Pourahmadi [1984] M. Pourahmadi (1984). On minimality and interpolation of harmonizable stable processes. _SIAM Journal on Applied Mathematics_, Vol. 44, No. 5, pp. 1023–1030.
* Pourahmadi [1989] M. Pourahmadi. Estimation and interpolation of missing values of a stationary time series. _J. Time Ser. Anal._, 10 (1989), pp. 149-169
* Tropp [2007] J. Tropp and A. Gilbert. (2007). Signal recovery from partial information via orthogonal matching pursuit, _IEEE Trans. Inf. Theory_, vol. 53, no. 12, pp. 4655–4666.
* Tzschoppe and Huber [2009] R. Tzschoppe, J.B. Huber. (2009). Causal discrete-time system approximation of non-bandlimited continuous-time systems by means of discrete prolate spheroidal wave functions. _Eur. Trans. Telecomm._**20**, 604–616.
* Yosida [1965] K. Yosida. (1965). _Functional Analysis._ Springer, Berlin Heilderberg New York.
* [20] H. Zhao, R. Wang, D. Song, T. Zhang, D. Wu. (2014). Extrapolation of discrete bandlimited signals in linear canonical transform domain _Signal Processing_**94**, 212–218.
|
1804.02505 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 45705,
"num_imgs": 6,
"llama3_tokens_count": 11797
} | [
"content_image/1804.02505/x1.png",
"content_image/1804.02505/x2.png",
"content_image/1804.02505/x3.png",
"content_image/1804.02505/x4.png",
"content_image/1804.02505/x5.png",
"content_image/1804.02505/x6.png"
] | # MVSNet: Depth Inference for
Unstructured Multi-view Stereo
Yao Yao\({}^{1}\)
\({}^{1}\) The Hong Kong University of Science and Technology,
¹
\({}^{2}\) Shenzhen Zhuke Innovation Technology (Altizure),
²
Zixin Luo\({}^{1}\)
\({}^{1}\) The Hong Kong University of Science and Technology,
¹
\({}^{2}\) Shenzhen Zhuke Innovation Technology (Altizure),
²
Shiwei Li\({}^{1}\)
\({}^{1}\) The Hong Kong University of Science and Technology,
¹
\({}^{2}\) Shenzhen Zhuke Innovation Technology (Altizure),
²
Tian Fang\({}^{2}\)
\({}^{1}\) The Hong Kong University of Science and Technology,
¹
\({}^{2}\) Shenzhen Zhuke Innovation Technology (Altizure),
²
Long Quan\({}^{1}\)
\({}^{1}\) The Hong Kong University of Science and Technology,
¹
\({}^{2}\) Shenzhen Zhuke Innovation Technology (Altizure),
²
[FOOTNOTE:1][ENDFOOTNOTE]
[FOOTNOTE:1][ENDFOOTNOTE]
###### Abstract
We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via the differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor _DTU_ dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-arts, but also is several times faster in runtime. We also evaluate MVSNet on the complex outdoor _Tanks and Temples_ dataset, where our method ranks first before April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.
Keywords:Multi-view Stereo, Depth Map, Deep Learning
## 1 Introduction
Multi-view stereo (MVS) estimates the dense representation from overlapping images, which is a core problem of computer vision extensively studied for decades. Traditional methods use hand-crafted similarity metrics and engineered regularizations (e.g., normalized cross correlation and semi-global matching [12]) to compute dense correspondences and recover 3D points. While these methods have shown great results under ideal Lambertian scenarios, they suffer from some common limitations. For example, low-textured, specular and reflective regions of the scene make dense matching intractable and thus lead to incomplete reconstructions. It is reported in recent MVS benchmarks [1, 18] that, although current state-of-the-art algorithms [7, 36, 8, 32] perform very well on the _accuracy_, the reconstruction _completeness_ still has large room for improvement.
Recent success on convolutional neural networks (CNNs) research has also triggered the interest to improve the stereo reconstruction. Conceptually, the learning-based method can introduce global semantic information such as specular and reflective priors for more robust matching. There are some attempts on the two-view stereo matching, by replacing either hand-crafted similarity metrics [39, 10, 23, 11] or engineered regularizations [34, 19, 17] with the learned ones. They have shown promising results and gradually surpassed traditional methods in stereo benchmarks [9, 25]. In fact, the stereo matching task is perfectly suitable for applying CNN-based methods, as image pairs are rectified in advance and thus the problem becomes the horizontal pixel-wise disparity estimation without bothering with camera parameters.
However, directly extending the learned two-view stereo to multi-view scenarios is non-trivial. Although one can simply pre-rectify all selected image pairs for stereo matching, and then merge all pairwise reconstructions to a global point cloud, this approach fails to fully utilize the multi-view information and leads to less accurate result. Unlike stereo matching, input images to MVS could be of arbitrary camera geometries, which poses a tricky issue to the usage of learning methods. Only few works acknowledge this problem and try to apply CNN to the MVS reconstruction: SurfaceNet [14] constructs the Colored Voxel Cubes (CVC) in advance, which combines all image pixel color and camera information to a single volume as the input of the network. In contrast, the Learned Stereo Machine (LSM) [15] directly leverages the differentiable projection/unprojection to enable the end-to-end training/inference. However, both the two methods exploit the volumetric representation of regular grids. As restricted by the huge memory consumption of 3D volumes, their networks can hardly be scaled up: LSM only handles synthetic objects in low volume resolution, and SurfaceNet applies a heuristic divide-and-conquer strategy and takes a long time for large-scale reconstructions. For the moment, the leading boards of modern MVS benchmarks are still occupied by traditional methods [7, 8, 32].
To this end, we propose an end-to-end deep learning architecture for depth map inference, which computes one depth map at each time, rather than the whole 3D scene at once. Similar to other depth map based MVS methods [35, 3, 8, 32], the proposed network, MVSNet, takes one reference image and several source images as input, and infers the depth map for the reference image. The key insight here is the differentiable homography warping operation, which implicitly encodes camera geometries in the network to build the 3D cost volumes from 2D image features and enables the end-to-end training. To adapt arbitrary number of source images in the input, we propose a variance-based metric that maps multiple features into one cost feature in the volume. This cost volume then undergoes multi-scale 3D convolutions and regress an initial depth map. Finally, the depth map is refined with the reference image to improve the accuracy of boundary areas. There are two major differences between our method and previous learned approaches [15, 14]. First, for the purpose of depth map inference, our 3D cost volume is built upon the camera frustum instead of the regular Euclidean space. Second, our method decouples the MVS reconstruction to smaller problems of per-view depth map estimation, which makes large-scale reconstruction possible.
We train and evaluate the proposed MVSNet on the large-scale _DTU_ dataset [1]. Extensive experiments show that with simple post-processing, MVSNet outperforms all competing methods in terms of completeness and overall quality. Besides, we demonstrate the generalization power of the network on the outdoor _Tanks and Temples_ benchmark [18], where MVSNet ranks first (before April. 18, 2018) over all submissions including the open-source MVS methods (e.g., COLMAP [32] and OpenMVS [29]) and commercial software (Pix4D [30]) without any fine-tuning. It is also noteworthy that the runtime of MVSNet is several times or even several orders of magnitude faster than previous state-of-the-arts.
## 2 Related work
**MVS Reconstruction.** According to output representations, MVS methods can be categorized into 1) direct point cloud reconstructions [22, 7], 2) volumetric reconstructions [20, 33, 14, 15] and 3) depth map reconstructions [35, 3, 8, 32, 38]. Point cloud based methods operate directly on 3D points, usually relying on the propagation strategy to gradually densify the reconstruction [22, 7]. As the propagation of point clouds is proceeded sequentially, these methods are difficult to be fully parallelized and usually take a long time in processing. Volumetric based methods divide the 3D space into regular grids and then estimate if each voxel is adhere to the surface. The downsides for this representation are the space discretization error and the high memory consumption. In contrast, depth map is the most flexible representation among all. It decouples the complex MVS problem into relatively small problems of per-view depth map estimation, which focuses on only one reference and a few source images at a time. Also, depth maps can be easily fused to the point cloud [26] or the volumetric reconstructions [28]. According to the recent MVS benchmarks [1, 18], current best MVS algorithms [8, 32] are both depth map based approaches.
**Learned Stereo.** Rather than using traditional handcrafted image features and matching metrics [13], recent studies on stereo apply the deep learning technique for better pair-wise patch matching. Han _et al._ [10] first propose a deep network to match two image patches. Zbontar _et al._ [39] and Luo _et al._ [23] use the learned features for stereo matching and semi-global matching (SGM) [12] for post-processing. Beyond the pair-wise matching cost, the learning technique is also applied in cost regularization. SGMNet [34] learns to adjust the parameters used in SGM, while CNN-CRF [19] integrates the conditional random field optimization in the network for the end-to-end stereo learning. The recent state-of-the-art method is GCNet [17], which applies 3D CNN to regularize the cost volume and regress the disparity by the soft argmin operation. It has been reported in KITTI banchmark [25] that, learning-based stereos, especially those end-to-end learning algorithms [24, 19, 17], significantly outperform the traditional stereo approaches.
**Learned MVS.** There are fewer attempts on learned MVS approaches. Hartmann _et al._ propose the learned multi-patch similarity [11] to replace the traditional cost metric for MVS reconstruction. The first learning based pipeline for MVS problem is SurfaceNet [14], which pre-computes the cost volume with sophisticated voxel-wise view selection, and uses 3D CNN to regularize and infer the surface voxels. The most related approach to ours is the LSM [15], where camera parameters are encoded in the network as the projection operation to form the cost volume, and 3D CNN is used to classify if a voxel belongs to the surface. However, due to the common drawback of the volumetric representation, networks of SurfaceNet and LSM are restricted to only small-scale reconstructions. They either apply the divide-and-conquer strategy [14] or is only applicable to synthetic data with low resolution inputs [15]. In contrast, our network focus on producing the depth map for one reference image at each time, which allows us to adaptively reconstruct a large scene directly.
<figure><img src="content_image/1804.02505/x1.png"><figcaption>Figure 1: The network design of MVSNet. Input images will go through the 2Dfeature extraction network and the differentiable homograph warping togenerate the cost volume. The final depth map output is regressed from theregularized probability volume and refined with the reference image</figcaption></figure>
## 3 MVSNet
This section describes the detailed architecture of the proposed network. The design of MVSNet strongly follows the rules of camera geometry and borrows the insights from previous MVS approaches. In following sections, we will compare each step of our network to the traditional MVS methods, and demonstrate the advantages of our learning-based MVS system. The full architecture of MVSNet is visualized in Fig. 1.
### Image Features
The first step of MVSNet is to extract the deep features \(\{\mathbf{F}_{i}\}_{i=1}^{N}\) of the \(N\) input images \(\{\mathbf{I}_{i}\}_{i=1}^{N}\) for dense matching. An eight-layer 2D CNN is applied, where the strides of layer 3 and 6 are set to two to divide the feature towers into three scales. Within each scale, two convolutional layers are applied to extract the higher-level image representation. Each convolutional layer is followed by a batch-normalization (BN) layer and a rectified linear unit (ReLU) except for the last layer. Also, similar to common matching tasks, parameters are shared among all feature towers for efficient learning.
The outputs of the 2D network are \(N\)\(32\)-channel feature maps downsized by four in each dimension compared with input images. It is noteworthy that though the image frame is downsized after feature extraction, the original neighboring information of each remaining pixel has already been encoded into the 32-channel pixel descriptor, which prevents dense matching from losing useful context information. Compared with simply performing dense matching on original images, the extracted feature maps significantly boost the reconstruction quality (see Sec. 5.3).
### Cost Volume
The next step is to build a 3D cost volume from the extracted feature maps and input cameras. While previous works [14, 15] divide the space using regular grids, for our task of depth map inference, we construct the cost volume upon the reference camera frustum. For simplicity, in the following we denote \(\mathbf{I}_{1}\) as the reference image, \(\{\mathbf{I}_{i}\}_{i=2}^{N}\) the source images, and \(\{\mathbf{K}_{i},\mathbf{R}_{i},\mathbf{t}_{i}\}_{i=1}^{N}\) the camera intrinsics, rotations and translations that correspond to the feature maps.
#### 3.2.1 Differentiable Homography
All feature maps are warped into different fronto-parallel planes of the reference camera to form \(N\) feature volumes \(\{\mathbf{V}_{i}\}_{i=1}^{N}\). The coordinate mapping from the warped feature map \(\mathbf{V}_{i}(d)\) to \(\mathbf{F}_{i}\) at depth \(d\) is determined by the planar transformation \(\mathbf{x^{\prime}}\sim\mathbf{H}_{i}(d)\cdot\mathbf{x}\), where ‘\(\sim\)’ denotes the projective equality and \(\mathbf{H}_{i}(d)\) the homography between the \(i^{th}\) feature map and the reference feature map at depth \(d\). Let \(\mathbf{n}_{1}\) be the principle axis of the reference camera, the homography is expressed by a \(3\times 3\) matrix:
\[\mathbf{H}_{i}(d)=\mathbf{K}_{i}\cdot\mathbf{R}_{i}\cdot\Big{(}\mathbf{I}- \frac{(\mathbf{t}_{1}-\mathbf{t}_{i})\cdot\mathbf{n}_{1}^{T}}{d}\Big{)}\cdot \mathbf{R}_{1}^{T}\cdot\mathbf{K}_{1}^{T}.\] (1)
Without loss of generality, the homography for reference feature map \(\mathbf{F}_{1}\) itself is an \(3\times 3\) identity matrix. The warping process is similar to that of the classical plane sweeping stereo [5], except that the differentiable bilinear interpolation is used to sample pixels from feature maps \(\{\mathbf{F}_{i}\}_{i=1}^{N}\) rather than images \(\{\mathbf{I}_{i}\}_{i=1}^{N}\). As the core step to bridge the 2D feature extraction and the 3D regularization networks, the warping operation is implemented in differentiable manner, which enables end-to-end training of depth map inference.
#### 3.2.2 Cost Metric
Next, we aggregate multiple feature volumes \(\{\mathbf{V}_{i}\}_{i=1}^{N}\) to one cost volume \(\mathbf{C}\). To adapt arbitrary number of input views, we propose a variance-based cost metric \(\mathcal{M}\) for N-view similarity measurement. Let \(W,H,D,F\) be the input image width, height, depth sample number and the channel number of the feature map, and \(V=\frac{W}{4}\cdot\frac{H}{4}\cdot D\cdot F\) the feature volume size, our cost metric defines the mapping \(\mathcal{M}:\underbrace{\mathbb{R}^{V}\times\cdots\times\mathbb{R}^{V}}_{N}\to \mathbb{R}^{V}\) that:
\[\mathbf{C}=\mathcal{M}(\mathbf{V}_{1},\cdots,\mathbf{V}_{N})=\frac{\sum\limits _{i=1}^{N}{(\mathbf{V}_{i}-\mskip 1.5mu \overline{\mskip-1.5mu \mathbf{V}_{i} \mskip-1.5mu }\mskip 1.5mu )^{2}}}{N}\] (2)
Where \(\mskip 1.5mu \overline{\mskip-1.5mu \mathbf{V}_{i}\mskip-1.5mu }\mskip 1.5mu\) is the average volume among all feature volumes, and all operations above are element-wise.
Most traditional MVS methods aggregate pairwise costs between the reference image and all source images in a heuristic way. Instead, our metric design follows the philosophy that all views should contribute equally to the matching cost and gives no preference to the reference image [11]. We notice that recent work [11] applies the mean operation with multiple CNN layers to infer the multi-patch similarity. Here we choose the ‘variance’ operation instead because the ‘mean’ operation itself provides no information about the feature differences, and their network requires pre- and post- CNN layers to help infer the similarity. In contrast, our variance-based cost metric explicitly measures the multi-view feature difference. In later experiments, we will show that such explicit difference measurement improves the validation accuracy.
#### 3.2.3 Cost Volume Regularization
The raw cost volume computed from image features could be noise-contaminated (e.g., due to the existence of non-Lambertian surfaces or object occlusions) and should be incorporated with smoothness constraints to infer the depth map. Our regularization step is designed for refining the above cost volume \(\mathbf{C}\) to generate a probability volume \(\mathbf{P}\) for depth inference. Inspired by recent learning-based stereo [17] and MVS [14, 15] methods, we apply the multi-scale 3D CNN for cost volume regularization. The four-scale network here is similar to a 3D version UNet [31], which uses the encoder-decoder structure to aggregate neighboring information from a large receptive field with relatively low memory and computation cost. To further lessen the computational requirement, we reduce the 32-channel cost volume to 8-channel after the first 3D convolutional layer, and change the convolutions within each scale from 3 layers to 2 layers. The last convolutional layer outputs a 1-channel volume. We finally apply the _softmax_ operation along the depth direction for probability normalization.
The resulting probability volume is highly desirable in depth map inference that it can not only be used for per-pixel depth estimation, but also for measuring the estimation confidence. We will show in Sec. 3.3.2 that one can easily determine the depth reconstruction quality by analyzing its probability distribution, which leads to a very concise yet effective outlier filtering strategy in Sec. 4.2.1.
### Depth Map
#### 3.3.1 Initial Estimation
The simplest way to retrieve depth map \(\mathbf{D}\) from the probability volume \(\mathbf{P}\) is the pixel-wise winner-take-all [5] (i.e., _argmax_). However, the _argmax_ operation is unable to produce sub-pixel estimation, and cannot be trained with back-propagation due to its indifferentiability. Instead, we compute the _expectation_ value along the depth direction, i.e., the probability weighted sum over all hypotheses:
\[\mathbf{D}=\sum\limits_{d=d_{min}}^{d_{max}}d\times\mathbf{P}(d)\] (3)
Where \(\mathbf{P}(d)\) is the probability estimation for all pixels at depth \(d\). Note that this operation is also referred to as the _soft argmin_ operation in [17]. It is fully differentiable and able to approximate the argmax result. While the depth hypotheses are uniformly sampled within range \([d_{min},d_{max}]\) during cost volume construction, the expectation value here is able to produce a continuous depth estimation. The output depth map (Fig. 2 (b)) is of the same size to 2D image feature maps, which is downsized by four in each dimension compared to input images.
<figure><img src="content_image/1804.02505/x2.png"><figcaption>Figure 2: Illustrations on inferred depth map, probability distributions andprobability map. (a) One reference image of scan 114, DTU dataset [1]; (b) theinferred depth map; (c) the probability distributions of an inlier pixel (top)and an outlier pixel (bottom), where the x-axis is the index of depthhypothesis, y-axis the probability and red lines the soft argmin results; (d)the probability map. As shown in (c), the outlier’s distribution is scatteredand results in a low probability estimation in (d)</figcaption></figure>
#### 3.3.2 Probability Map
The probability distribution along the depth direction also reflects the depth estimation quality. Although the multi-scale 3D CNN has very strong ability to regularize the probability to the single modal distribution, we notice that for those falsely matched pixels, their probability distributions are scattered and cannot be concentrated to one peak (see Fig. 2 (c)). Based on this observation, we define the quality of a depth estimation \(\hat{d}\) as the probability that the ground truth depth is within a small range near the estimation. As depth hypotheses are discretely sampled along the camera frustum, we simply take the probability sum over the four nearest depth hypotheses to measure the estimation quality. Notice that other statistical measurements, such as standard deviation or entropy can also be used here, but in our experiments we observe no significant improvement from these measurements for depth map filtering. Moreover, our probability sum formulation leads to a better control of thresholding parameter for outliers filtering.
#### 3.3.3 Depth Map Refinement
While the depth map retrieved from the probability volume is a qualified output, the reconstruction boundaries may suffer from oversmoothing due to the large receptive field involved in the regularization, which is similar to the problems in semantic segmentation [4] and image matting [37]. Notice that the reference image in natural contains boundary information, we thus use the reference image as a guidance to refine the depth map. Inspired by the recent image matting algorithm [37], we apply a depth residual learning network at the end of MVSNet. The initial depth map and the resized reference image are concatenated as a 4-channel input, which is then passed through three 32-channel 2D convolutional layers followed by one 1-channel convolutional layer to learn the depth residual. The initial depth map is then added back to generate the refined depth map. The last layer does not contain the BN layer and the ReLU unit as to learn the negative residual. Also, to prevent being biased at a certain depth scale, we pre-scale the initial depth magnitude to range [0, 1], and convert it back after the refinement.
### Loss
Losses for both the initial depth map and the refined depth map are considered. We use the mean absolute difference between the ground truth depth map and the estimated depth map as our training loss. As ground truth depth maps are not always complete in the whole image (see Sec. 4.1), we only consider those pixels with valid ground truth labels:
\[Loss=\sum\limits_{p\in\mathbf{p}_{valid}}\underbrace{\|d(p)-\hat{d_{i}}(p)\|_{ 1}}_{Loss0}+\lambda\cdot\underbrace{\|d(p)-\hat{d_{r}}(p)\|_{1}}_{Loss1}\] (4)
Where \(\mathbf{p}_{valid}\) denotes the set of valid ground truth pixels, \(d(p)\) the ground truth depth value of pixel \(p\), \(\hat{d_{i}}(p)\) the initial depth estimation and \(\hat{d_{r}}(p)\) the refined depth estimation. The parameter \(\lambda\) is set to \(1.0\) in experiments.
## 4 Implementations
### Training
#### 4.1.1 Data Preparation
Current MVS datasets provide ground truth data in either point cloud or mesh formats, so we need to generate the ground truth depth maps ourselves. The _DTU_ dataset [1] is a large-scale MVS dataset containing more than 100 scenes with different lighting conditions. As it provides the ground truth point cloud with normal information, we use the screened Poisson surface reconstruction (SPSR) [16] to generate the mesh surface, and then render the mesh to each viewpoint to generate the depth maps for our training. The parameter, _depth-of-tree_ is set to 11 in SPSR to acquire the high quality mesh result. Also, we set the mesh _trimming-factor_ to 9.5 to alleviate mesh artifacts in surface edge areas. To fairly compare MVSNet with other learning based methods, we choose the same training, validation and evaluation sets as in SurfaceNet [14]¹. Considering each scan contains 49 images with 7 different lighting conditions, by setting each image as the reference, _DTU_ dataset provides 27097 training samples in total.
[FOOTNOTE:1][ENDFOOTNOTE]
#### 4.1.2 View Selection
A reference image and two source images (\(N=3\)) are used in our training. We calculate a score \(s(i,j)=\sum_{\mathbf{p}}\mathcal{G}(\theta_{ij}(\mathbf{p}))\) for each image pair according to the sparse points, where \(\mathbf{p}\) is a common track in both view \(i\) and \(j\), \(\theta_{ij}(\mathbf{p})=(180/\pi)\arccos((\mathbf{c}_{i}-\mathbf{p})\cdot( \mathbf{c}_{j}-\mathbf{p}))\) is \(\mathbf{p}\)’s baseline angle and \(\mathbf{c}\) is the camera center. \(\mathcal{G}\) is a piecewise Gaussian function [40] that favors a certain baseline angle \(\theta_{0}\):
\[\mathcal{G}(\theta)=\left\{\begin{array}[]{ll}\exp(-\frac{(\theta-\theta_{0})^ {2}}{2\sigma_{1}^{2}}),\theta\leq\theta_{0}\\ \exp(-\frac{(\theta-\theta_{0})^{2}}{2\sigma_{2}^{2}}),\theta>\theta_{0}\\ \end{array}\right.\]
In the experiments, \(\theta_{0}\), \(\sigma_{1}\) and \(\sigma_{2}\) are set to 5, 1 and 10 respectively.
Notice that images will be downsized in feature extraction, plus the four-scale encoder-decoder structure in 3D regularization part, the input image size must be divisible by a factor of 32. Considering this requirement also the limited GPU memories, we downsize the image resolution from \(1600\times 1200\) to \(800\times 600\), and then crop the image patch with \(W=640\) and \(H=512\) from the center as the training input. The input camera parameters are changed accordingly. The depth hypotheses are uniformly sampled from \(425mm\) to \(935mm\) with a \(2mm\) resolution (\(D=256\)). We use TensorFlow [2] to implement MVSNet, and the network is trained on one Tesla P100 graphics card for around \(100,000\) iterations.
<figure><img src="content_image/1804.02505/x3.png"><figcaption>Figure 3: Reconstructions of scan 9, DTU dataset [1]. From top left to bottomright: (a) the inferred depth map from MVSNet; (b) the filtered depth mapafter photometric and geometric filtering; (c) the depth map rendered from theground truth mesh; (d) the reference image; (e) the final fused point cloud;(f) the ground truth point cloud</figcaption></figure>
### Post-processing
#### 4.2.1 Depth Map Filter
The above network estimates a depth value for every pixel. Before converting the result to dense point clouds, it is necessary to filter out outliers at those background and occluded areas. We propose two criteria, namely _photometric_ and _geometric_ consistencies for the robust depth map filtering.
The photometric consistency measures the matching quality. As discussed in Sec. 3.3.2, we compute the probability map to measure the depth estimation quality. In our experiments, we regard pixels with probability lower than 0.8 as outliers. The geometric constraint measures the depth consistency among multiple views. Similar to the left-right disparity check for stereo, we project a reference pixel \(p_{1}\) through its depth \(d_{1}\) to pixel \(p_{i}\) in another view, and then reproject \(p_{i}\) back to the reference image by \(p_{i}\)’s depth estimation \(d_{i}\). If the reprojected coordinate \(p_{reproj}\) and and the reprojected depth \(d_{reproj}\) satisfy \(|p_{reproj}-p_{1}|<1\) and \(|d_{reproj}-d_{1}|/d_{1}<0.01\), we say the depth estimation \(d_{1}\) of \(p_{1}\) is two-view consistent. In our experiments, all depths should be at least three view consistent. This simple two-step filtering strategy shows strong robustness for filtering different kinds of outliers.
#### 4.2.2 Depth Map Fusion
Similar to other multi-view stereo methods [8, 32], we apply a depth map fusion step to integrate depth maps from different views to a unified point cloud representation. The visibility-based fusion algorithm [26] is used in our reconstruction, where depth occlusions and violations across different viewpoints are minimized. To further suppress reconstruction noises, we determine the visible views for each pixel as in the filtering step, and take the average over all reprojected depths \(\mskip 1.5mu \overline{\mskip-1.5mu d_{reproj}\mskip-1.5mu }\mskip 1.5mu\) as the pixel’s final depth estimation. The fused depth maps are then directly reprojected to space to generate the 3D point cloud. The illustration of our MVS reconstruction is shown in Fig. 3.
## 5 Experiments
### Benchmarking on _Dtu_ dataset
We first evaluate our method on the 22 evaluation scans of the _DTU_ dataset [1]. The input view number, image width, height and depth sample number are set to \(N=5\), \(W=1600\), \(H=1184\) and \(D=256\) respectively. For quantitative evaluation, we calculate the _accuracy_ and the _completeness_ of both the distance metric [1] and the percentage metric [18]. While the matlab code for the distance metric is given by _DTU_ dataset, we implement the percentage evaluation ourselves. Notice that the percentage metric also measures the overall performance of accuracy and completeness as the _f-score_. To give a similar measurement for the distance metric, we define the _overall score_, and take the average of mean accuracy and mean completeness as the reconstruction quality.
Quantitative results are shown in Table 1. While Gipuma [35] performs best in the accuracy, our MVSNet outperforms all methods in both the completeness and the overall quality **with a significant margin**. As shown in Fig. 4, MVSNet produces the most complete point clouds especially in those textureless and reflected areas, which are commonly considered as the most difficult parts to recover in MVS reconstruction.
| Mean Distance (mm) | Percentage (<1mm) | Percentage (<2mm)
---|---|---|---
| Acc. Comp. overall | Acc. Comp. f-score | Acc. Comp. f-score
Camp [3] | 0.835 | 0.554 | 0.695 | 71.75 | 64.94 | 66.31 | 84.83 | 67.82 | 73.02
Furu [7] | 0.613 | 0.941 | 0.777 | 69.55 | 61.52 | 63.26 | 78.99 | 67.88 | 70.93
Tola [35] | 0.342 | 1.190 | 0.766 | 90.49 | 57.83 | 68.07 | 93.94 | 63.88 | 73.61
Gipuma [8] | 0.283 | 0.873 | 0.578 | 94.65 | 59.93 | 70.64 | 96.42 | 63.81 | 74.16
SurfaceNet[14] | 0.450 | 1.04 | 0.745 | 83.8 | 63.38 | 69.95 | 87.15 | 67.99 | 74.4
MVSNet (Ours) | 0.396 | 0.527 | 0.462 | 86.46 | 71.13 | 75.69 | 91.06 | 75.31 | 80.25
Table 1: Quantitative results on the DTU’s evaluation set [1]. We evaluate all
methods using both the distance metric [1] (lower is better), and the
percentage metric [18] (higher is better) with respectively 1mm and 2mm
thresholds
<figure><img src="content_image/1804.02505/x4.png"><figcaption>Figure 4: Qualitative results of scans 9, 11 and 75 of DTU dataset [1]. OurMVSNet generates the most complete point clouds especially in thosetextureless and reflective areas. Best viewed on screen</figcaption></figure>
Method | Rank | Mean | Family | Francis | Horse | Lighthouse | M60 | Panther | Playground | Train
---|---|---|---|---|---|---|---|---|---|---
MVSNet (Ours) | 3.00 | 43.48 | 55.99 | 28.55 | 25.07 | 50.79 | 53.96 | 50.86 | 47.90 | 34.69
Pix4D [30] | 3.12 | 43.24 | 64.45 | 31.91 | 26.43 | 54.41 | 50.58 | 35.37 | 47.78 | 34.96
COLMAP [32] | 3.50 | 42.14 | 50.41 | 22.25 | 25.63 | 56.43 | 44.83 | 46.97 | 48.53 | 42.04
OpenMVG [27] \+ OpenMVS [29] | 3.62 | 41.71 | 58.86 | 32.59 | 26.25 | 43.12 | 44.73 | 46.85 | 45.97 | 35.27
OpenMVG [27] \+ MVE [6] | 6.00 | 38.00 | 49.91 | 28.19 | 20.75 | 43.35 | 44.51 | 44.76 | 36.58 | 35.95
OpenMVG [27] \+ SMVS [21] | 10.38 | 30.67 | 31.93 | 19.92 | 15.02 | 39.38 | 36.51 | 41.61 | 35.89 | 25.12
OpenMVG-G [27] \+ OpenMVS [29] | 10.88 | 22.86 | 56.50 | 29.63 | 21.69 | 6.55 | 39.54 | 28.48 | 0.00 | 0.53
MVE [6] | 11.25 | 25.37 | 48.59 | 23.84 | 12.70 | 5.07 | 39.62 | 38.16 | 5.81 | 29.19
OpenMVG [27] \+ PMVS [7] | 11.88 | 29.66 | 41.03 | 17.70 | 12.83 | 36.68 | 35.93 | 33.20 | 31.78 | 28.10
Table 2: Quantitative results on Tanks and Temples benchmark [18]. MVSNet
achieves best f-score result among all submissions without any fine-tuning
<figure><img src="content_image/1804.02505/x5.png"><figcaption>Figure 5: Point cloud results of the intermediate set of Tanks and Temples[18] dataset, which demonstrates the generalization power of MVSNet on complexoutdoor scenes</figcaption></figure>
### Generalization on _Tanks and Temples_ dataset
The _DTU_ scans are taken under well-controlled indoor environment with fixed camera trajectory. To further demonstrate the generalization ability of MVSNet, we test the proposed method on the more complex outdoor _Tanks and Temples_ dataset [18], using the model trained on _DTU_**without any fine-tuning**. While we choose \(N=5\), \(W=1920\), \(H=1056\) and \(D=256\) for all reconstructions, the depth range and the source image set for the reference image are determined according to sparse point cloud and camera positions, which are recovered by the open source SfM software OpenMVG [27].
Our method ranks first before April 18, 2018 among all submissions of the _intermediate set_[18] according to the online benchmark (Table 2). Although the model is trained on the very different _DTU_ indoor dataset, MVSNet is still able to produce the best reconstructions on these outdoor scenes, demonstrating the strong generalization ability of the proposed network. The qualitative point cloud results of the _intermediate set_ are visualized in Fig. 5.
### Ablations
This section analyzes several components in MVSNet. For all following studies, we use the validation loss to measure the reconstruction quality. The 18 validation scans (see Sec. 4.1) are pre-processed as the training set that we set \(N=3\), \(W=640\), \(H=512\) and \(D=256\) for the validation loss computation.
<figure><img src="content_image/1804.02505/x6.png"><figcaption>Figure 6: Ablation studies. (a) Validation losses of different input viewnumbers. (b) Ablations on 2D image feature, cost metric and depth maprefinement</figcaption></figure>
#### 5.3.1 View Number
We first study the influence of the input view number \(N\) and demonstrate that our model can be applied to arbitrary views of input. While the model in Sec. 4.1 is trained using \(N=3\) views, we test the model using \(N=2,3,5\) respectively. As expected, it is shown in Fig. 6 (a) that adding input views can lower the validation loss, which is consistent with our knowledge about MVS reconstructions. It is noteworthy that testing with \(N=5\) performs better than with \(N=3\), even though the model is trained with the 3 views setting. This highly desirable property makes MVSNet flexible enough to be applied the different input settings.
#### 5.3.2 Image Features
We demonstrate in this study that the learning based image feature could significantly boost the MVS reconstruction quality. To model the traditional patch-based image feature in MVSNet, we replace the original 2D feature extraction network with a single 32-channel convolutional layer. The filter kernel is set to a large number of \(7\times 7\) and the stride is set to 4. As shown in Fig. 6 (b), network with the 2D feature extraction significantly outperforms the single layer one on validation loss.
#### 5.3.3 Cost Metric
We also compare our variance operation based cost metric with the mean operation based metric [11]. The element-wise variance operation in Eq. 2 is replaced with the mean operation to train the new model. It can be found in Fig. 6 (b) that our cost metric results in a faster convergence with lower validation loss, which demonstrates that it is more reasonable to use the explicit difference measurement to compute the multi-view feature similarity.
#### 5.3.4 Depth Refinement
Lastly, we train MVSNet with and without the depth map refinement network. The models are also tested on _DTU_ evaluation set as in Sec. 5.1, and we use the percentage metric [18] to quantitatively compare the two models. While Fig. 6 (b) shows that the refinement does not affect the validation loss too much, the refinement network improves the evaluation results from 75.58 to 75.69 (\(<1mm\)_f-score_) and from 79.98 to 80.25 (\(<2mm\)_f-score_).
### Discussions
#### 5.4.1 Running Time
We compare the running speed of MVSNet to Gipuma [8], COLMAP [32] and SurfaceNet [14] using the \(DTU\) evaluation set. The other methods are compiled from their source codes and all methods are tested in the same machine. MVSNet is much more efficient that it takes around 230 seconds to reconstruct one scan (**4.7 seconds** per view). The running speed is \(\sim 5\times\) faster than Gipuma, \(\sim 100\times\) than COLMAP and \(\sim 160\times\) than SurfaceNet.
#### 5.4.2 GPU Memory
The GPU memory required by MVSNet is related to the input image size and the depth sample number. In order to test on the _Tanks and Temples_ with the original image resolution and sufficient depth hypotheses, we choose the Tesla P100 graphics card (16 GB) to implement our method. It is noteworthy that the training and validation on _DTU_ dataset could be done using one consumer level GTX 1080ti graphics card (11 GB).
#### 5.4.3 Training Data
As mentioned in Sec. 4.1, _DTU_ provides ground truth point clouds with normal information so that we can convert them into mesh surfaces for depth maps rendering. However, currently _Tanks and Temples_ dataset does not provide the normal information or mesh surfaces, so we are unable to fine-tune MVSNet on _Tanks and Temples_ for better performance.
Although using such rendered depth maps have already achieved satisfactory results, some limitations still exist: 1) the provided ground truth meshes are not \(100\%\) complete, so some triangles behind the foreground will be falsely rendered to the depth map as the valid pixels, which may deteriorate the training process. 2) If a pixel is occluded in all other views, it should not be used for training. However, without the complete mesh surfaces we cannot correctly identify the occluded pixels. We hope future MVS datasets could provide ground truth depth maps with complete occlusion and background information.
## 6 Conclusion
We have presented a deep learning architecture for MVS reconstruction. The proposed MVSNet takes unstructured images as input, and infers the depth map for the reference image in an end-to-end fashion. The core contribution of MVSNet is to encode the camera parameters as the differentiable homography to build the cost volume upon the camera frustum, which bridges the 2D feature extraction and 3D cost regularization networks. It has been demonstrated on _DTU_ dataset that MVSNet not only significantly outperforms previous methods, but also is more efficient in speed by several times. Also, MVSNet have produced the state-of-the-art results on _Tanks and Temples_ dataset without any fine-tuning, which demonstrates its strong generalization ability.
## References
* [1] Aanæs, H., Jensen, R.R., Vogiatzis, G., Tola, E., Dahl, A.B.: Large-scale data for multiple-view stereopsis. International Journal of Computer Vision (IJCV) (2016)
* [2] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015), https://www.tensorflow.org/, software available from tensorflow.org
* [3] Campbell, N.D., Vogiatzis, G., Hernández, C., Cipolla, R.: Using multiple hypotheses to improve depth-maps for multi-view stereo. European Conference on Computer Vision (ECCV) (2008)
* [4] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2017)
* [5] Collins, R.T.: A space-sweep approach to true multi-image matching. Computer Vision and Pattern Recognition (CVPR) (1996)
* [6] Fuhrmann, S., Langguth, F., Goesele, M.: Mve-a multi-view reconstruction environment. Eurographics Workshop on Graphics and Cultural Heritage (GCH) (2014)
* [7] Furukawa, Y., Ponce, J.: Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2010)
* [8] Galliani, S., Lasinger, K., Schindler, K.: Massively parallel multiview stereopsis by surface normal diffusion. International Conference on Computer Vision (ICCV) (2015)
* [9] Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. Computer Vision and Pattern Recognition (CVPR) (2012)
* [10] Han, X., Leung, T., Jia, Y., Sukthankar, R., Berg, A.C.: Matchnet: Unifying feature and metric learning for patch-based matching. Computer Vision and Pattern Recognition (CVPR) (2015)
* [11] Hartmann, W., Galliani, S., Havlena, M., Van Gool, L., Schindler, K.: Learned multi-patch similarity. International Conference on Computer Vision (ICCV) (2017)
* [12] Hirschmuller, H.: Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2008)
* [13] Hirschmuller, H., Scharstein, D.: Evaluation of cost functions for stereo matching. Computer Vision and Pattern Recognition (CVPR) (2007)
* [14] Ji, M., Gall, J., Zheng, H., Liu, Y., Fang, L.: Surfacenet: An end-to-end 3d neural network for multiview stereopsis. International Conference on Computer Vision (ICCV) (2017)
* [15] Kar, A., Häne, C., Malik, J.: Learning a multi-view stereo machine. Advances in Neural Information Processing Systems (NIPS) (2017)
* [16] Kazhdan, M., Hoppe, H.: Screened poisson surface reconstruction. ACM Transactions on Graphics (TOG) (2013)
* [17] Kendall, A., Martirosyan, H., Dasgupta, S., Henry, P.: End-to-end learning of geometry and context for deep stereo regression. Computer Vision and Pattern Recognition (CVPR) (2017)
* [18] Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (TOG) (2017)
* [19] Knöbelreiter, P., Reinbacher, C., Shekhovtsov, A., Pock, T.: End-to-end training of hybrid cnn-crf models for stereo. Computer Vision and Pattern Recognition (CVPR) (2017)
* [20] Kutulakos, K.N., Seitz, S.M.: A theory of shape by space carving. International Journal of Computer Vision (IJCV) (2000)
* [21] Langguth, F., Sunkavalli, K., Hadap, S., Goesele, M.: Shading-aware multi-view stereo. European Conference on Computer Vision (ECCV) (2016)
* [22] Lhuillier, M., Quan, L.: A quasi-dense approach to surface reconstruction from uncalibrated images. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2005)
* [23] Luo, W., Schwing, A.G., Urtasun, R.: Efficient deep learning for stereo matching. Computer Vision and Pattern Recognition (CVPR) (2016)
* [24] Mayer, N., Ilg, E., Hausser, P., Fischer, P., Cremers, D., Dosovitskiy, A., Brox, T.: A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. Computer Vision and Pattern Recognition (CVPR) (2016)
* [25] Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. Computer Vision and Pattern Recognition (CVPR) (2015)
* [26] Merrell, P., Akbarzadeh, A., Wang, L., Mordohai, P., Frahm, J.M., Yang, R., Nistér, D., Pollefeys, M.: Real-time visibility-based fusion of depth maps. International Conference on Computer Vision (ICCV) (2007)
* [27] Moulon, P., Monasse, P., Marlet, R., Others: Openmvg. an open multiple view geometry library. https://github.com/openMVG/openMVG
* [28] Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohi, P., Shotton, J., Hodges, S., Fitzgibbon, A.: Kinectfusion: Real-time dense surface mapping and tracking. IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (2011)
* [29] OpenMVS: open multi-view stereo reconstruction library. https://github.com/cdcseacave/openMVS
* [30] Pix4D: https://pix4d.com/
* [31] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) (2015)
* [32] Schönberger, J.L., Zheng, E., Frahm, J.M., Pollefeys, M.: Pixelwise view selection for unstructured multi-view stereo. European Conference on Computer Vision (ECCV) (2016)
* [33] Seitz, S.M., Dyer, C.R.: Photorealistic scene reconstruction by voxel coloring. International Journal of Computer Vision (IJCV) (1999)
* [34] Seki, A., Pollefeys, M.: Sgm-nets: Semi-global matching with neural networks. Computer Vision and Pattern Recognition Workshops (CVPRW) (2017)
* [35] Tola, E., Strecha, C., Fua, P.: Efficient large-scale multi-view stereo for ultra high-resolution image sets. Machine Vision and Applications (MVA) (2012)
* [36] Vu, H.H., Labatut, P., Pons, J.P., Keriven, R.: High accuracy and visibility-consistent dense multiview stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (2012)
* [37] Xu, N., Price, B., Cohen, S., Huang, T.: Deep image matting. Computer Vision and Pattern Recognition (CVPR) (2017)
* [38] Yao, Y., Li, S., Zhu, S., Deng, H., Fang, T., Quan, L.: Relative camera refinement for accurate dense reconstruction. 3D Vision (3DV) (2017)
* [39] Zbontar, J., LeCun, Y.: Stereo matching by training a convolutional neural network to compare image patches. Journal of Machine Learning Research (JMLR) (2016)
* [40] Zhang, R., Li, S., Fang, T., Zhu, S., Quan, L.: Joint camera clustering and surface segmentation for large-scale multi-view stereo. International Conference on Computer Vision (ICCV) (2015)
# Constraints on the magnetic field within a stratified outer core
Colin M. Hardy
EPSRC Centre for Doctoral Training in Fluid Dynamics, University of Leeds, Leeds, LS2 9JT, UK
Philip W. Livermore
School of Earth and Environment, University of Leeds, Leeds, LS2 9JT, UK
Jitse Niesen
School of Mathematics, University of Leeds, Leeds, LS2 9JT, UK
###### Abstract
Mounting evidence from both seismology and experiments on core composition suggests the existence of a layer of stably stratified fluid at the top of Earth’s outer core. In this work we examine the structure of the geomagnetic field within such a layer, building on the important but little known work of Malkus (1979). We assume (i) an idealised magnetostrophic spherical model of the geodynamo neglecting inertia, viscosity and the solid inner core, and (ii) a strongly stratified layer of constant depth immediately below the outer boundary within which there is no spherically radial flow. Due to the restricted dynamics, Malkus showed that the geomagnetic field must obey a certain condition, a refined and more restrictive version of the well-known condition of Taylor (1963), which holds on an infinite set of azimuthal rings within the stratified layer. By adopting a spectral representation with truncation \(N\) in each direction, we show that this infinite class collapses to a discrete set of \(O(N^{2})\) Malkus constraints. Although fewer than the \(N^{3}\) degrees of freedom of the magnetic field, their nonlinear nature makes finding a magnetic field that obeys such constraints, here termed a _Malkus state_, a challenging task. Nevertheless, such Malkus states, when constrained further by geomagnetic observations, have the potential to probe the interior of the core.
By focusing on a particular class of magnetic fields for which the Malkus constraints are linear, we describe a constructive method that turns any purely-poloidal field into an exact Malkus state by adding a suitable toroidal field. We consider poloidal fields following a prescribed smooth profile within the core that match a degree-13 observation-derived model of the magnetic field in epoch 2015 or a degree-10 model of the 10000-yr time averaged magnetic field. Despite the restrictions of the Malkus constraints, a significant number of degrees of freedom remain for the unknown toroidal field and we seek extremal examples. The Malkus state with the least toroidal energy has in both cases a strong azimuthal toroidal field, about double the magnitude of that observed from the poloidal field at the core-mantle boundary. For the 2015 field for a layer of depth 300 km, we estimate a root mean squared azimuthal toroidal field of \(3\) mT with a pointwise maximum of 8 mT occurring at a depth of about 70 km.
## 1 Introduction
The question of whether or not Earth’s liquid outer core contains a stratified layer just below its outer boundary has long been debated (Whaler, 1980; Braginsky, 1967, 1987; Hardy and Wong, 2019; Gubbins, 2007). A stratified layer may result from the pooling of buoyant elements released from the freezing of the solid inner core (Braginsky, 2006; Bouffard et al., 2019), diffusion from the mantle above (Jeanloz, 1990; Buffett and Seagle, 2010) or sub-adiabatic thermal effects (Pozzo et al., 2012). Within a strongly stratified layer, the dynamics would be very different to the remainder of the convecting core because spherical radial motion would be suppressed (Braginsky, 1999; Davies et al., 2015; Cox et al., 2019). In terms of using observations of the changing internal geomagnetic field as a window on the dynamics within the core, the existence of a stratified layer is crucial because motion confined to the stratified layer such as waves may have a pronounced geomagnetic signature, which may be falsely interpreted as emanating from the large-scale dynamo process ongoing beneath.
Observational constraints on the stratified layer are largely from seismology, where analysis of a specific ‘SmKS’ class of waves has revealed a localised decrease in wave velocities in the outermost \(100-300\text{km}\) of the core (Helffrich and Kaneshima, 2013; Lay and Young, 1990; Helffrich and Kaneshima, 2010), suggesting that the outermost part of the core has a different density and/or elasticity than the rest of the core. However, this evidence is far from conclusive because not all studies agree that a stratified layer is necessary to explain seismic measurements (Irving et al., 2018), and there are inherent uncertainties due to the remoteness of the core (Alexandrakis and Eaton, 2010). So far, observational geomagnetism has offered equivocal evidence for stratified layers. Time dependent observational models can be explained by simple core flow structures on the core-mantle boundary (CMB) which have either no layer (Holme, 2015; Amit, 2014) (upwelling at the CMB is permitted), or a strongly stratified layer (in which all radial motion is suppressed), (Lesur et al., 2015).
A complementary approach to understanding the observational signature of a stratified layer is by numerical simulation of a stratified geodynamo model (Nakagawa, 2011). Models of outer core dynamics have demonstrated that dynamo action can be sensitive to variations in the assumed background state of a fully convective outer core, and that the presence of stably stratified layers can significantly alter the dynamics and morphology of the resultant magnetic field (Glane and Buffett, 2018; Christensen, 2018; Olson et al., 2018). Hence comparisons of the magnetic fields from stratified models with the geomagnetic field can be used to infer compatibility with the presence of a stratified layer. This has been used to constrain the possible thickness of a stratified layer such that it is consistent with geomagnetic observations. Yan and Stanley (2018) find that unstratified dynamo simulations significantly underpredict the octupolar component of the geomagnetic field. Their model endorses the presence of a thin stably stratified layer, as the resultant magnetic field can be rendered Earth-like by the inclusion of a 60-130 km layer. However, the results are rather sensitive to both the strength of stratification and layer depth, with a thicker layer of 350 km resulting in an incompatible octupole field. Similarly, Olson et al. (2017) find that stratified model results compare favorably with the time-averaged geomagnetic field for partial stratification in a thin layer of less than 400 km, but unfavorably for stratification in a thick 1000 km layer beneath the CMB. Additionally, in terms of dynamics, Braginsky (1993) and Buffett (2014) show that MAC (Magnetic, buoyancy (Archimedean) and Coriolis forces) waves in the _hidden ocean_ at the top of the core provide a mechanism for the 60 year period oscillations detected in the geomagnetic field (Roberts et al., 2007). The model of Buffett et al. (2016) suggests that MAC waves underneath the CMB are also able to account for a significant part of the fluctuations in length of day (LOD) (Gross, 2001; Holme and De Viron, 2005) through explaining the dipole variation, but are contingent on the existence of a stratified layer at the top of the core with a thickness of at least 100 km. However, not all stratified dynamo model results champion this scenario for the Earth. It has been found that the inclusion of a thin stable layer in numerical models can act to destabilise the dynamo, through generating a thermal wind which creates a different differential rotation pattern in the core (Stanley and Mohammadi, 2008). Additionally, many distinctive features of the geomagnetic field are not reproduced, as strong stratification leads to the disappearance of reverse flux patches and suppression of all non-axisymmetric magnetic field components (Mound et al., 2019; Christensen and Wicht, 2008).
One reason why there is no clear message from existing geodynamo models is perhaps that they all have been run in parameter regimes very far from Earth’s core (Roberts and Aurnou, 2011). Two important parameters, the Ekman and Rossby numbers, quantify the ratio of viscous to rotational (Coriolis) forces, \(E\sim 10^{-15}\), and the ratio of inertial to rotational forces, \(R_{o}\sim 10^{-7}\), respectively (Christensen and Wicht, 2015). These parameters being so small causes difficulties when attempting to numerically simulate the geodynamo, because they lead to small spatial and temporal scales that must be resolved in any direct numerical simulation, which is extremely computationally expensive. Despite this challenge, numerical models have been used with great success to simulate aspects of the geodynamo, reproducing features such as torsional oscillations (Wicht and Christensen, 2010) that are consistent with observational models (Gillet et al., 2010), reproducing geomagnetic jerks (Aubert and Finlay, 2019) and allowing predictions of the Earth’s magnetic field strength (Christensen et al., 2009). Recent simulations have been able to probe more Earth-like parameter regimes than previously possible, achieving very low Ekman numbers of \(E=10^{-7}-10^{-8}\) (Schaeffer et al., 2017; Aubert, 2019). However, despite this progress, these simulations remain in parameter regimes vastly different to that of the Earth (Christensen and Wicht, 2015), posing the inescapable question of how representative of the Earth they really are, as force balances can still vary significantly between the simulation regime and the correct regime of the Earth (Wicht and Sanchez, 2019), with the ability to simultaneously reproduce Earth-like field morphology and reversal frequency still beyond current capabilities (Christensen et al., 2010). The assessments conducted by Sprain et al. (2019) highlight that present geodynamo models are unable to satisfactorily reproduce all aspects of Earth’s long term field behaviour.
In this paper we consider the approach proposed by Taylor (1963), based on the assumption that the inertia-free and viscosity-free asymptotic limit is more faithful to Earth’s dynamo than adopting numerically-expedient but nevertheless inflated parameter values. This amounts to setting the values of \(R_{o}\) and \(E\) to zero, which simplifies the governing equations significantly, enabling numerical solutions at less computational expense and importantly for us, analytic progress to be made. The resulting dimensionless magnetostrophic regime then involves an exact balance between the Coriolis force, pressure, buoyancy and the Lorentz force associated with the magnetic field \({\bf B}\) itself:
\[\bm{\hat{z}}\times\bm{u}=-\bm{\nabla}p+F_{B}\bm{\hat{r}}+({\bm{\nabla}}\times \bm{B})\times\bm{B},\] (1)
where \(F_{B}\) is a buoyancy term that acts in the unit radial direction \(\bm{\hat{r}}\) (Fearn, 1998).
Throughout this paper we consider the magnetostrophic balance of equation 1. Taylor (1963) showed that, as a consequence of this magnetostrophic balance, the magnetic field must obey at all times \(t\) the well-known condition
\[T(s,t)\equiv\int_{C(s)}(({\bm{\nabla}}\times{\bf B})\times{\bf B})_{\phi}~{}s \text{d}\phi\text{d}z=0,\] (2)
for any geostrophic cylinder \(C(s)\) of radius \(s\), aligned with the rotation axis, where \((s,\phi,z)\) are cylindrical coordinates. This constraint applies in the general case for fluids independent of stratification. It was first shown by Malkus (1979) how (2) can be refined within a stratified layer of constant depth, which in the limit of zero radial flow leads to a stricter constraint. This constraint now applies on every axisymmetric ring coaxial with the rotation axis that lies within the layer and is known as the _Malkus constraint_
\[M(s,z,t)\equiv\int_{0}^{2\pi}((\bm{\nabla}\times\bm{B})\times\bm{B})_{\phi}~{} \text{d}\phi=0,\]
for any \(s\) and \(z\) within the layer. Magnetic fields that satisfy the Taylor or Malkus constraints respectively are termed Taylor or Malkus states.
The associated timescale over which the dominant force balance described by the magnetostrophic equations evolves is \(\sim 10^{4}\) years. However observations show changes in the geomagnetic field on much shorter timescale of years to decades (Jackson and Finlay, 2015). This vast discrepancy in timescales motivates distinguishing between the slowly evolving background state and perturbations from it and considering these two features separately. The theoretically predicted magnetostrophic timescale, represented by Taylor or Malkus states, describes the slow evolution of the magnetic field, and may explain dynamics such as geomagnetic reversals and also the longstanding dominance of the axially symmetric dipolar component of the field. Although rapid dynamics such as MHD waves occur on a much shorter timescale, they cannot be considered in isolation as their structure depends critically upon the background state that they perturb. Thus although insightful models of perturbations can be based upon simple states (e.g. Malkus, 1967), ultimately a close fit to the observed geomagnetic field requires accurate knowledge of the background state. It is the search for such a state that is explored in this paper.
Dynamical models of a non-stratified background state, produced by evolving the magnetic field subject to Taylor’s constraint, have appeared very recently (Wu and Roberts, 2015; Roberts and Wu, 2018; Li et al., 2018) and are currently restricted to axisymmetry, although the model of Li et al. (2018) can be simply extended to a three dimensional system. These models can additionally be used to probe the effect of incorporating inertia driven torsional waves within this framework (Roberts and Wu, 2014).
In this paper we adopt a different strategy and explore the use of both the Taylor and Malkus constraints as a tool for analytically constraining instantaneous structures of the magnetic field throughout Earth’s core. This method ignores any dynamics and asks simply whether we can find a set of magnetic fields which satisfy the necessary constraints: Taylor’s constraint in the interior and Malkus’s constraint in the stratified layer, which will provide plausible background geomagnetic states. However, constructing Malkus states is a non-trivial task. Firstly we need to establish whether such fields can even exist, and if so how numerous they are, before we are able to construct examples of Malkus states. Since we are geophysically motivated, we also wish to determine whether such fields can be compatible with geomagnetic observations.
Our task is a challenging one: even finding magnetic fields that exactly satisfy the comparatively simple case of Taylor’s constraint has proven to be difficult in the 55 years since the seminal paper of Taylor (1963), although notable progress has been made in axisymmetry (Hollerbach and Ierley, 1991; Soward and Jones, 1983) and in 3D (Jault and Cardin, 1999) subject to imposing a specific symmetry. Recently, significant progress has been made in this regard by presenting a more general understanding of the mathematical structure of Taylor’s constraint in three dimensions (Livermore et al., 2008). This method was implemented by Livermore et al. (2009) to construct simple, large scale magnetic fields compatible with geomagnetic observations. It is this which provides the foundation for the work presented here.
The remainder of this paper is structured as follows. In section 2 we present a new, more general derivation of the condition required to be satisfied within a stratified layer of fluid, which under an idealised limit reduces to what is known as Malkus’ constraint. In section 3 we summarise the method for discretising and constructing a Taylor state before extending this to Malkus states in section 4. In section 5 we prove that an arbitrary poloidal field can be transformed into a Malkus state through the addition of an appropriate toroidal field and show how this is a useful approach due to the resultant equations being linear. In section 6 we present our results for an Earth-like magnetic field satisfying all relevant constraints, within the linear framework. In section 7 we discuss these results with regard to Earth’s internal field, specifically our estimate of toroidal field strength, before concluding in section 8.
## 2 Derivation of Malkus’ constraint
Within stably stratified fluids radial flows are suppressed, hence in the limit of strong stratification radial fluid velocities are negligibly small (Braginsky, 1999; Davies et al., 2015). We proceed within this idealised limit and require that \(u_{r}=0\) within a region of stratified fluid that is a volume of revolution: we represent the proposed stratified layer within Earth’s core as a spherically symmetric layer of constant depth. We assume further that the system is in magnetostrophic balance; that is, rapidly rotating with negligible inertia and viscosity. The resulting constraint was first derived by Malkus (1979); however, here we present an alternative and more straightforward derivation courtesy of Dominique Jault (personal communication).
We use the condition for incompressible flow that \(\bm{\nabla}\cdot\bm{u}=0\) and the standard toroidal poloidal decomposition within spherical coordinates \((r,\theta,\phi)\). From the condition that there is no spherically-radial component of velocity then \(\bm{u}\) must be purely toroidal and hence can be written as
\[\bm{u}=\bm{\nabla}\times(\mathcal{T}(r,\theta,\phi)\bm{\hat{r}})=\frac{1}{r \sin\theta}\dfrac{\partial\mathcal{T}}{\partial\phi}\bm{\hat{\theta}}-\frac{1} {r}\dfrac{\partial\mathcal{T}}{\partial\theta}\bm{\hat{\phi}}.\]
Therefore the cylindrically-radial velocity, written in spherical coordinates, is
\[u_{s}=\sin\theta u_{r}+\cos\theta u_{\theta}=\frac{\cos\theta}{r\sin\theta} \dfrac{\partial\mathcal{T}}{\partial\phi}\]
and so
\[\int_{0}^{2\pi}u_{s}~{}\text{d}\phi=\frac{\cos\theta}{r\sin\theta}\int_{0}^{2 \pi}\dfrac{\partial\mathcal{T}}{\partial\phi}~{}\text{d}\phi=0.\]
Now, since \(\bm{\hat{\phi}}\cdot(\hat{\bf z}\times\bm{u})=u_{s}\) then, from the azimuthal component of the magnetostrophic equation 1, we have
\[u_{s}=-\dfrac{\partial p}{\partial\phi}+((\bm{\nabla}\times\bm{B})\times\bm{B} )_{\phi}.\]
Integrating this around any circle of constant \(r\) and \(\theta\) centred on the rotation axis (as illustrated by the red rings in figure 0(b)), and using the single-valued nature of pressure, gives Malkus’ constraint,
\[\underbrace{\int_{0}^{2\pi}u_{s}~{}\text{d}\phi}_{=0}=-\underbrace{\int_{0}^{2 \pi}\dfrac{\partial p}{\partial\phi}~{}\text{d}\phi}_{=0}+\int_{0}^{2\pi}((\bm {\nabla}\times\bm{B})\times\bm{B})_{\phi}~{}d\phi=0,\]
or equivalently requiring that the Malkus integral \(M\) is zero:
\[M(s,z,t)\equiv\int_{0}^{2\pi}((\bm{\nabla}\times\bm{B})\times\bm{B})_{\phi}~{} \text{d}\phi=0.\] (3)
We can also generalise this constraint from the idealised limit of requiring \(u_{r}=0\) within the stratified fluid to the more general situation of permitting \(u_{r}\neq 0\), in which case we express the Malkus integral in terms of the radial flow. Now the flow \({\bf u}\) is no longer purely toroidal and hence
\[M(s,z,t)=\int_{0}^{2\pi}u_{s}~{}\text{d}\phi=\int_{0}^{2\pi}u_{ \theta}\cos\theta\text{d}\phi+\int_{0}^{2\pi}u_{r}\sin\theta~{}\text{d}\phi.\] (4)
We now use the condition for incompressible flow that \(\bm{\nabla}\cdot\bm{u}=0,\)
\[0=\bm{\nabla}\cdot{\bf u}=\frac{1}{r^{2}}\dfrac{\partial(r^{2}u_{r})}{\partial r }+\frac{1}{r\sin\theta}\dfrac{\partial(u_{\theta}\sin\theta)}{\partial\theta}+ \frac{1}{r\sin\theta}\dfrac{\partial u_{\phi}}{\partial\phi},\]
\[\Rightarrow\int_{0}^{2\pi}\left(\frac{\sin\theta}{r}\dfrac{\partial(r^{2}u_{r} )}{\partial r}+\dfrac{\partial(u_{\theta}\sin\theta)}{\partial\theta}\right) \text{d}\phi=-\int_{0}^{2\pi}\dfrac{\partial u_{\phi}}{\partial\phi}\text{d} \phi=0.\]
Now integrating over \([0,\theta]\) we find
\[\int_{0}^{2\pi}u_{\theta}\text{d}\phi=\frac{1}{\sin\theta}\int_{0}^{\theta} \frac{\sin\theta^{\prime}}{r}\int_{0}^{2\pi}\dfrac{\partial(r^{2}u_{r})}{ \partial r}\text{d}\phi\text{d}\theta^{\prime}=-\frac{1}{r\sin\theta}\int_{0}^ {\theta}\sin\theta^{\prime}\dfrac{\partial}{\partial r}\left(r^{2}\int_{0}^{2 \pi}u_{r}d\phi\right)~{}\text{d}\theta^{\prime}\]
\[\Rightarrow M=-\frac{1}{r\tan\theta}\int_{0}^{\theta}\dfrac{\partial}{\partial r }\left(r^{2}\int_{0}^{2\pi}u_{r}\sin\theta^{\prime}\text{d}\phi\right)\text{d} \theta^{\prime}+\int_{0}^{2\pi}u_{r}\sin\theta~{}\text{d}\phi.\]
In the above derivation, no assumption has been made about stratification and this equation holds as an identity in the magnetostrophic regime independent of stratification. In the case considered by Malkus, \(M=0\) is recovered in the limit of \(u_{r}\to 0\).
It is clear that Malkus’ constraint is similar to Taylor’s constraint except now not only does the azimuthal component of the Lorentz force need to have zero average over fluid cylinders, it needs to be zero for the infinite set of constant-\(z\) slices of these cylinders (here termed _Malkus rings_, see figure 0(b)) that lie within the stratified region. In terms of the flow, the increased restriction of the Malkus constraint arises because it requires zero azimuthally-averaged \(u_{s}\) at any given value of \(z\), whereas Taylor’s constraint requires only that the cylindrically averaged \(u_{s}\) vanishes and allows outward flow at a given height to be compensated by inward flow at another. We note that all Malkus states are Taylor states, but the converse is not true.
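As a practical check of whether a candidate field respects these conditions, both integrals can be evaluated numerically on a grid of Malkus rings. The following is only a schematic quadrature, written under the assumption that the azimuthal Lorentz force \(((\bm{\nabla}\times{\bf B})\times{\bf B})_{\phi}\) is available as a callable of \((s,z,\phi)\); the function names and resolutions are illustrative rather than those of any production code.

```python
import numpy as np

def malkus_integral(lorentz_phi, s, z, n_phi=256):
    """M(s, z): integral of the azimuthal Lorentz force over the Malkus ring
    of cylindrical radius s at height z (equation 3)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    return 2.0 * np.pi * np.mean(lorentz_phi(s, z, phi))

def taylor_integral(lorentz_phi, s, R=1.0, n_z=129, **kw):
    """T(s): z-integral of s * M(s, z) over the geostrophic cylinder C(s)."""
    z = np.linspace(-np.sqrt(R**2 - s**2), np.sqrt(R**2 - s**2), n_z)
    m = np.array([s * malkus_integral(lorentz_phi, s, zi, **kw) for zi in z])
    return float(np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(z)))  # trapezoid rule
```

A Taylor state makes the second function vanish for every \(s\); a Malkus state additionally makes the first vanish on every ring within the layer.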
<figure><img src="content_image/1912.02213/Tay_Malk_domain_dash.png"><figcaption>(a) Earth-like spherical shell with radius rSL=0.9R. A Malkus state defined in a stratified layer surrounds an interiorTaylor state.</figcaption></figure>
## 3 Geometry and representation of a stratified magnetostrophic model
The physical motivation for applying Malkus’ constraint arises from seeking to represent a realistic model for the magnetic field in the proposed stratified layer within Earth’s outer core. Hence we compute solutions for the magnetic field in the Earth-like configuration illustrated in figure 0(a), consisting of a spherical region in which Taylor’s constraint applies, representing the convective region of Earth’s core, surrounded by a spherical shell in which Malkus’ constraint applies, representing the stratified layer immediately beneath the CMB. Our method allows a free choice of inner radius \(r_{SL}\), so in order to agree with the bulk of seismic evidence (Helffrich and Kaneshima, 2010, 2013; Lay and Young, 1990), the value \(r_{SL}=0.9R\) is chosen for the majority of our solutions, where \(R\) is the full radius of the core (3485 km). However, due to the uncertainty in the thickness of Earth’s stratified layer (Kaneshima, 2017), we also probe how sensitive our results are to layer thickness, considering \(r_{SL}=0.85R\) and \(r_{SL}=0.95R\) as well. The Earth’s inner core is neglected throughout, since incorporating it would lead to additional intricacies due to the cylindrical nature of Taylor’s constraint, which leads to a distinction between regions inside and outside the tangent cylinder (Livermore and Hollerbach, 2012; Livermore et al., 2008). Since the focus here is on the outermost reaches of the core, we avoid such complications.
The method used to construct the total solution for the magnetic field throughout Earth’s core that is consistent with the Taylor and Malkus constraints is sequential. Firstly, we use a regular representation of the form shown in equation 7 to construct a Malkus state in the stratified layer. Secondly, we construct a Taylor state which matches to the Malkus state at \(r=r_{SL}\); overall the magnetic field is continuous but may have discontinuous derivatives on \(r=r_{SL}\). We note that any flow driven by this magnetic field through the magnetostrophic balance may also be discontinuous at \(r=r_{SL}\) because in general \(u_{r}\neq 0\) in the inner region but \(u_{r}=0\) is assumed in the stratified region. Considerations of such dynamics lie outside the scope of the present study, which is focussed only on the magnetic constraints, but imposing continuity of \(u_{r}\) for example would clearly require additional constraints.
As a pedagogical exercise we also construct some Malkus states within a fully stratified sphere (\(r_{SL}=0\)), as detailed in appendix A. Without the complications of matching to a Taylor state, the equations take a simpler form and we present some first examples in section A.2. Dynamically, sustenance of a magnetic field within a fully stratified sphere is of course ruled out by the theory of Busse (1975), which provides a strictly positive lower bound for the radial flow as a condition on the existence of a dynamo. Nonetheless it can be insightful to first consider the full sphere case, as it facilitates the consideration of fundamental principles of the magnetic field and Malkus constraint structure, and allows direct comparisons to be made with similar full sphere Taylor states.
In what follows we represent a magnetic field by a sum of toroidal and poloidal modes with specific coefficients
\[\bm{B}=\sum_{l=1}^{L_{max}}\sum_{m=-l}^{l}\sum_{n=1}^{N_{max}}a_{l,n}^{m}{\bm{ \mathcal{T}}}_{l,n}^{m}+b_{l,n}^{m}{\bm{\mathcal{S}}}_{l,n}^{m}\] (5)
where \(\bm{\mathcal{T}}_{l,n}^{m}={\bm{\nabla}}\times(T_{l,n}(r)Y_{l}^{m}\hat{\bf r})\), \(\bm{\mathcal{S}}_{l,n}^{m}={\bm{\nabla}}\times{\bm{\nabla}}\times(S_{l,n}(r)Y_{l}^{m}\hat{\bf r})\), and \(N_{max}\) is the radial truncation of the poloidal and toroidal fields. In the above, \(Y_{l}^{m}\) is a spherical harmonic of degree \(l\) and order \(m\), normalised to unity by its squared integral over solid angle. Positive or negative values of \(m\) indicate respectively a \(\cos m\phi\) or \(\sin m\phi\) dependence in azimuth. The scalar functions \({{T}}_{l,n}^{m}\) and \({{S}}_{l,n}^{m}\), \(n\geq 1\), are respectively chosen to be the functions \(\chi_{l,n}\) and \(\psi_{l,n}\) composed of Jacobi polynomials (Li et al., 2010, 2011). They are orthogonal, and obey regularity conditions at the origin and the electrically insulating boundary condition at \(r=R\)
\[\frac{d\mathcal{S}_{l}^{m}}{dr}+l\mathcal{S}_{l}^{m}/R=\mathcal{T}_{l}^{m}=0.\] (6)
We note that this description is convenient but incomplete when used within the spherical shell, for which the magnetic field does not need to obey regularity at the origin. For simplicity, we nevertheless use this representation in both layers, although restricting the domain of the radial representation to \([0,r_{SL}]\) for the inner region.
## 4 Discretisation of the Taylor constraint
Since the Malkus constraint forms a more restrictive constraint which encompasses the Taylor constraint, it is useful for us to first summarise the structure of the Taylor constraint in a full sphere. The integral given in equation 2, which Taylor’s constraint requires to be zero, is known as the Taylor integral. Although applied on an infinite set of surfaces, Livermore et al. (2008) showed that Taylor’s constraint reduces to a finite number of constraint equations for a suitably truncated magnetic field expansion
\[\mathcal{S}_{l}^{m}(r)=r^{l+1}\sum_{j=0}^{N_{max}}c_{j}r^{2j}~{}~{}~{}~{}\text {and}~{}~{}~{}~{}\mathcal{T}_{l}^{m}(r)=r^{l+1}\sum_{j=0}^{N_{max}}d_{j}r^{2j},\] (7)
which is an expanded version of (5) for some \(c_{j}\) and \(d_{j}\). The Taylor integral itself then collapses to a polynomial of finite degree which depends upon \(s^{2}\) (Lewis and Bellan, 1990) and the coefficients \(a_{l,n}^{m},b_{l,n}^{m}\), and takes the form
\[T(s)=s^{2}\sqrt{R^{2}-s^{2}}\,Q_{D_{T}}(s^{2})=0,\] (8)
for some polynomial \(Q_{D_{T}}\) of maximum degree \(D_{T}\).
Taylor’s constraint is now equivalent to enforcing that the coefficients of all powers of \(s\) in the polynomial \(Q_{D_{T}}\) equal zero, as this ensures \(T(s)\) vanishes identically on every geostrophic contour. This reduces the infinite number of constraints to a finite number of simultaneous, coupled, quadratic, homogeneous equations. This reduction is vital as it gives a procedure for enforcing Taylor’s constraint in general, and allows the implementation of a method to construct magnetic fields which exactly satisfy this constraint, known as Taylor states, as demonstrated by Livermore et al. (2009). In the next section we see how, with some relatively simple alterations this procedure can be extended to the construction of exact Malkus states.
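This reduction can be mimicked symbolically: once the Taylor integral has been reduced to the polynomial \(Q_{D_{T}}(s^{2})\), its coefficients are collected and each is set to zero. The sketch below uses a toy \(Q\) whose quadratic dependence on a handful of illustrative spectral coefficients \(a_{i},b_{i}\) is invented purely for demonstration and is not derived from any actual field.

```python
import sympy as sp

s, a1, a2, b1, b2 = sp.symbols('s a1 a2 b1 b2')

# toy polynomial Q_{D_T}(s^2), quadratic in the spectral coefficients
Q = (a1*b1 + 2*a2*b1) + (a1*b2 - a2*b2)*s**2 + (a2*b2)*s**4

# Taylor's constraint <=> every coefficient of Q vanishes, turning one
# continuous condition into finitely many coupled quadratic equations
constraints = sp.Poly(Q, s).coeffs()
print(constraints)   # [a2*b2, a1*b2 - a2*b2, a1*b1 + 2*a2*b1]
```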
## 5 Malkus states
This section outlines some general properties of the mathematical structure of Malkus’ constraints and provides the methodology for constructing the first known Malkus states; we also address the questions of existence and uniqueness of solutions and the dimension of the resultant solution space.
Along similar lines to those shown for Taylor’s constraints in section 4, on adopting the representation (5) the Malkus integral reduces to a multinomial in \(s^{2}\) and \(z\) (Lewis and Bellan, 1990) and we require
\[M(s,z)=Q_{D_{M}}(s^{2},z)=0\]
for some finite degree multinomial \(Q_{D_{M}}\) in \(s\) and \(z\). Note that the Taylor integral (8) is simply a z-integrated form of \(Q_{D_{M}}\). Equating every multinomial term in \(Q_{D_{M}}(s^{2},z)\) to zero results in a finite set of constraints that are nonlinear in the coefficients \(a_{l,n}^{m}\) and \(b_{l,n}^{m}\).
The number of constraints can be quantified for a given truncation following a similar approach to that employed by Livermore et al. (2008) for Taylor’s constraint, by tracking the greatest exponent of the dimension of length. This analysis is conducted in appendix B and results in the number of Malkus constraints given by
\[C_{M}={C_{T}}^{2}+3C_{T}+2,\] (9)
where the number of Taylor constraints for an equivalent magnetic field is \(C_{T}=L_{max}+2N_{max}-2\) (after the single degeneracy due to the electrically insulating boundary condition is removed) (Livermore et al., 2008). Therefore we find that, as expected, the Malkus constraints are more numerous than Taylor’s constraints. It is significant to notice that \(C_{M}\gg C_{T}\) and, in particular, for high degree/resolution systems \(C_{M}\approx{C_{T}}^{2}\).
In order to satisfy these constraints, the magnetic field has \(2L_{max}N_{max}(L_{max}+2)\) degrees of freedom (this being the number of unknown spectral coefficients within the truncation of \((L_{max},N_{max})\)). In axisymmetry the number of degrees of freedom reduces to \(2N_{max}L_{max}\).
If we truncate the magnetic field quasi uniformly as \(N=\mathcal{O}(L_{max})\approx\mathcal{O}(N_{max})\), then we observe that at high \(N\) the number of constraints (\(O(N^{2})\) Malkus constraints; \(O(N)\) Taylor constraints) is exceeded by the number of degrees of freedom of \(N^{3}\). A simple argument based on linear algebra suggests that many solutions exist at high \(N\), however this may be misleading because the constraints are nonlinear and it is not obvious _a priori_ whether any solutions exist, or if they do, how numerous they might be.
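These counting formulae are simple enough to tabulate directly. The short sketch below, which is our own illustration rather than part of the method, compares the constraint counts with the number of spectral degrees of freedom for a quasi-uniform truncation \(L_{max}=N_{max}=N\).

```python
def taylor_count(L, N):
    """C_T = L_max + 2*N_max - 2 (boundary-condition degeneracy removed)."""
    return L + 2 * N - 2

def malkus_count(L, N):
    """C_M = C_T**2 + 3*C_T + 2 (equation 9)."""
    c_t = taylor_count(L, N)
    return c_t**2 + 3 * c_t + 2

def degrees_of_freedom(L, N):
    """Number of unknown spectral coefficients: 2*L_max*N_max*(L_max + 2)."""
    return 2 * L * N * (L + 2)

for n in (5, 10, 13, 20):
    print(n, taylor_count(n, n), malkus_count(n, n), degrees_of_freedom(n, n))
```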
We consider a simple example in section A.1, which shows the structure of constraint equations that arise. The example highlights that degeneracy of the constraint equations plays a far more significant role for the Malkus constraints compared with the Taylor constraints, which only have a single weak degeneracy due to the electrically insulating boundary condition (Livermore et al., 2008). However, due to the complex nature of these nonlinear equations, at present we have no theory to predict which constraints will be degenerate and hence the total number of independent constraints.
Because of the apparent uncertainty of the existence of Malkus states, it is instructive to identify whether imposing strong symmetry is useful to identify very simple examples. Owing to symmetries inherent in the spherical harmonics, many classes of simple Taylor states exist, as outlined by Livermore et al. (2009): for example any field that is either symmetric or anti-symmetric with respect to a rotation of \(\pi\) radians about the \(x\)-axis is a Taylor state. Due to the absence of averaging in \(z\), such symmetric magnetic fields do not automatically satisfy the Malkus constraints. However some simple classes of field are guaranteed to be Malkus states, such as single spherical harmonic modes, axisymmetric purely toroidal or poloidal fields since the integrand itself \((({\bm{\nabla}}\times{\bf B})\times{\bf B})_{\phi}\) is zero. Also equatorially symmetric purely toroidal or poloidal fields comprising either only cosine or only sine dependence in azimuth are Malkus states as the resultant integrand is anti-symmetric with respect to a rotation of \(\pi\) radians and hence the azimuthal average over \([0,2\pi]\) causes the Malkus integral to vanish.
## 6 Finding a Malkus state
Owing to the nonlinear albeit finite nature of the Malkus constraints, it is far from obvious whether any solutions exist beyond those of the simple structure explored above. In the next section, we demonstrate the existence of a class of solutions with arbitrarily complex lateral structure.
### A special class of Malkus states
Here we demonstrate that within the class of magnetic fields that all contain a known poloidal component (but whose toroidal component is unknown) then there exists systems where all the Malkus constraints are linear in the unknown spectral parameters. A formal statement of this fact is given in the theorem given below.
**Theorem**.: _Any arbitrary, prescribed, polynomial poloidal field can be transformed into a Malkus state through the addition of an appropriate polynomial toroidal field._
Proof. We prove below that, by considering an arbitrary, prescribed, truncated polynomial poloidal field, the addition of a specific choice of toroidal modes renders the Malkus constraints linear in the unknown toroidal coefficients. By taking sufficiently many such modes that the degrees of freedom exceed the number of constraints, it follows that in the general case (barring specific degenerate cases) solving the linear system yields a magnetic field that is a Malkus state.
To show this, because the Malkus constraint is quadratic in the magnetic field, we introduce the concept of a magnetic field interaction. In general there are three possible field interactions within the Malkus integral, toroidal-toroidal, poloidal-poloidal and toroidal-poloidal, respectively
\[M=\sum_{l_{1},l_{2}}^{L_{max}}\sum_{m}^{L_{max}}\left([\bm{T}_{l_{1}}^{m},\bm{T}_{l_{2}}^{m}]+[\bm{S}_{l_{1}}^{m},\bm{S}_{l_{2}}^{m}]+[\bm{T}_{l_{1}}^{m},\bm{S}_{l_{2}}^{m}]\right)\]
where
\[[\bm{T}_{l_{1}}^{m},\bm{T}_{l_{2}}^{m}]=\int_{0}^{2\pi}\frac{l_{1}(l_{1}+1)\mathcal{T}_{l_{1}}^{m}\mathcal{T}_{l_{2}}^{m}}{r^{3}\sin\theta}\left({Y}_{l_{1}}^{m}\dfrac{\partial{Y}_{l_{2}}^{m}}{\partial\phi}\right)s~{}\text{d}\phi+sc,\] (10)
\[[\bm{S}_{l_{1}}^{m},\bm{S}_{l_{2}}^{m}]=\int_{0}^{2\pi}\frac{l_{1}(l_{1}+1)\mathcal{S}_{l_{1}}^{m}\left(\frac{\text{d}^{2}}{\text{d}r^{2}}-l_{2}(l_{2}+1)/r^{2}\right)\mathcal{S}_{l_{2}}^{m}}{r^{3}\sin\theta}\left({Y}_{l_{1}}^{m}\dfrac{\partial{Y}_{l_{2}}^{m}}{\partial\phi}\right)s~{}\text{d}\phi+sc,\]
\[[\bm{T}_{l_{1}}^{m},\bm{S}_{l_{2}}^{m}]=\int_{0}^{2\pi}\frac{1}{r^{3}}\left(l_{1}(l_{1}+1)\mathcal{T}_{l_{1}}^{m}\frac{\text{d}\mathcal{S}_{l_{2}}^{m}}{\text{d}r}Y_{l_{1}}^{m}\dfrac{\partial{Y}_{l_{2}}^{m}}{\partial\theta}-l_{2}(l_{2}+1)\mathcal{S}_{l_{2}}^{m}\frac{\text{d}\mathcal{T}_{l_{1}}^{m}}{\text{d}r}Y_{l_{2}}^{m}\dfrac{\partial{Y}_{l_{1}}^{m}}{\partial\theta}\right)s~{}\text{d}\phi,\]
where \(sc\) is the symmetric counterpart given by interchanging the vector harmonics and hence the positions of \(l_{1}\) and \(l_{2}\) (Livermore et al., 2008). Note that there is no poloidal-toroidal interaction since the curl of a poloidal vector is toroidal and \((\bm{\mathcal{T}_{1}}\times\bm{\mathcal{T}_{2}})_{\phi}=0\) for any two toroidal vectors \(\bm{\mathcal{T}_{1}}\) and \(\bm{\mathcal{T}_{2}}\).
For the situation we consider of a given poloidal field, the only non-linearity within the unspecified coefficients arises from the toroidal-toroidal interactions, which results in quadratic dependence, just as for the general case with an unprescribed poloidal field. However, by restricting attention to toroidal fields that result in no toroidal-toroidal interaction, the unknown toroidal coefficients appear only in a linear form through the toroidal-poloidal interactions. Axisymmetric modes are the simplest set of toroidal modes which are non-self-interacting; however, there are too few of them (within the truncation) to solve the resulting linear system, which is over-constrained (see figure 1).
Therefore we require additional non-axisymmetric toroidal modes, which we choose such that the total set of toroidal modes remains non-self-interacting. This is achieved by exploiting the previously noted observations that any single harmonic is a Malkus state and that the set of equatorially symmetric toroidal modes \(T_{l}^{l}\) is a Malkus state (and therefore has no self-interaction). Owing additionally to azimuthal symmetry, the modes
\[{T_{1}^{0}},T_{2}^{0},\cdots,T_{1}^{-1},T_{1}^{1},T_{2}^{-2},T_{2}^{2},\dots,\]
that is, the modes \(T^{m}_{l}\) with \(m=0\) or \(m=\pm l\), have no self-interactions. Each harmonic mode may be expanded in radial modes up to the truncation \(N_{max}\). The non-interacting nature of the modes may be confirmed from equation 10.
The addition of these nonaxisymmetric modes increases the number of degrees of freedom from the axisymmetric case by a factor of three, such that it is now larger than the number of constraints (which are now all linear). This can be shown in general: for a toroidal field truncated at \(L_{1},N_{1}\) and a poloidal field truncated at \(L_{2},N_{2}\), the number of Taylor constraints is equal to half of the maximum degree of the polynomial in \(s\) (i.e. \(C_{T}=\frac{1}{2}(L_{1}+L_{2}+2N_{1}+2N_{2})-2\)) (Livermore et al., 2008), and the maximum number of Malkus constraints is given in terms of \(C_{T}\) by equation 9. Therefore, if the poloidal field is fixed at a chosen resolution and the toroidal field is truncated quasi-uniformly as \(N=\mathcal{O}(L_{max})\approx\mathcal{O}(N_{max})\), the number of Malkus constraints scales as \(\frac{9}{4}N^{2}\), which importantly grows more slowly than the number of degrees of freedom of the non-axisymmetric linear system, which scales as \(3N^{2}\). Hence it is guaranteed that for a toroidal field representation of sufficiently high resolution there will be more degrees of freedom than constraints.
Therefore, barring degenerate cases, Malkus states exist. Compared with the case of a purely axisymmetric toroidal field, the number (but not the specific form) of linear constraints remains unaltered by the addition of these extra non-axisymmetric modes.
It is worth noting that the depth of the stratified layer does not enter into the above derivation. The magnetic field solution in fact satisfies the Malkus constraints everywhere within its region of definition: in our case, this is the full sphere \(0\leq r\leq R\). ∎
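A sketch of how the non-self-interacting toroidal basis used in the proof could be enumerated is given below; the function name and layout are our own, but the counting (three azimuthal choices per degree, each with \(N_{max}\) radial modes) reproduces the factor-of-three increase over the axisymmetric case noted above.

```python
def non_self_interacting_modes(L_max, N_max):
    """Toroidal harmonics T_l^m with m = 0 or m = +/- l, each expanded in
    N_max radial modes, chosen so that all toroidal-toroidal interactions
    vanish and the Malkus constraints stay linear in their coefficients."""
    return [(l, m, n)
            for l in range(1, L_max + 1)
            for m in (0, -l, l)
            for n in range(1, N_max + 1)]

modes = non_self_interacting_modes(13, 13)
print(len(modes))   # 3 * L_max * N_max = 507
```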
Figure 1 provides a specific example of the number of constraints given a poloidal field of degree \(13\). It demonstrates two important things. Firstly, that due to degeneracy (for which we have no explanation) the independent linear constraints (red triangles) are much fewer than the full set of linear constraints (red squares). Secondly, that the number of degrees of freedom exceeds the number of independent constraints at \(L_{max}=N_{max}\geq 10\) if we consider the non-axisymmetric toroidal basis (blue circles), but is not exceeded at any truncation if we adopt the axisymmetric toroidal basis (blue stars). In particular, taking a non-axisymmetric toroidal field with truncation \(L_{max}=N_{max}=13\) gives an infinite set of Malkus states.
We note that the above derivation is based upon a polynomial representation, which is sufficient for our purposes here. However, we know that any continuous function defined on a closed interval can be uniformly approximated as closely as desired by a polynomial function, and hence it can be extended to include an arbitrary magnetic field structure by expressing the relevant scalars in a polynomial basis of suitably large truncation.
We need to match the Malkus state (physically defined within the stratified layer) to a Taylor state in the region beneath. One way of proceeding is to simply evaluate the Malkus state beneath the stratified layer (where it also satisfies Taylor’s constraint); however, this effectively imposes additional constraints on the inner region and is overly restrictive. Instead, we impose the same profile of poloidal field and expand the toroidal component of the Taylor state in the same set of spherical harmonic modes as used for the Malkus state. Such a choice also renders the Taylor constraints linear in the unknown toroidal coefficients.
<figure><img src="content_image/1912.02213/x1.png"><figcaption>Figure 1: This graph compares the number of constraints to degrees of freedom(DOF) as a function of toroidal field spherical harmonic resolution withLmax=Mmax=Nmax, given a fixed poloidal field of Lmax=Mmax=13. This illustratesthat for the non-axisymmetric linear system we construct then the number ofdegrees of freedom (red) exceeds the number of independent constraints (redtriangles) for a toroidal field of resolution Lmax=Nmax≥10.</figcaption></figure>
### Further geophysical constraints
In order to construct a Malkus state according to the above procedure, we need to completely specify the poloidal field. Following Livermore et al. (2009), we downward-continue observation-derived models inside the core \(r\leq R\) by assuming a profile for each poloidal harmonic of degree \(l\) that minimises the Ohmic dissipation within the modelled core
\[(2l+3)r^{l+1}-(2l+1)r^{l+3}.\] (11)
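As a quick consistency check (our own sketch, with the radius normalised so that \(R=1\)), one can verify symbolically that this minimal-dissipation profile satisfies the electrically insulating boundary condition of equation 6 at \(r=R\):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
l = sp.symbols('l', integer=True, positive=True)
R = 1  # radius measured in units of the core radius

# minimal Ohmic dissipation radial profile for a poloidal harmonic of degree l
S = (2*l + 3) * r**(l + 1) - (2*l + 1) * r**(l + 3)

# insulating boundary condition (equation 6): dS/dr + l*S/R = 0 at r = R
bc = sp.diff(S, r) + l * S / R
print(sp.simplify(bc.subs(r, R)))   # -> 0
```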
We adopt two choices of observation-derived model. First, we use the CHAOS-6 model (Finlay et al., 2016) at epoch 2015 evaluated to degree 13, the maximum obtainable from geomagnetic observations without significant interference due to crustal magnetism (Kono, 2015). Second, we use the time-averaged field over the last 10000 years from the CALS10k.2 model (Constable et al., 2016), which, although defined to degree 10, has power concentrated mostly at degrees 1–4 because of strong regularisation of sparsely-observed ancient magnetic field structures. Recalling that the magnetostrophic state that we seek is defined over millennial timescales, this longer average provides on the one hand a better approximation to the background state, but on the other a much lower resolution.
Even within these geomagnetically consistent Malkus states, there are nevertheless multiple degrees of freedom remaining. This raises the question of which of the multiple possible solutions are most realistic for the Earth, and motivates us to incorporate additional conditions to distinguish ‘Earth-like’ solutions.
We determine specific solutions by optimising the toroidal field \(\bf T\) with respect to either its Ohmic dissipation or its energy, respectively
\[Q=\frac{\eta}{\mu_{0}}\int_{V}({\bm{\nabla}}\times{\bf T})^{2}dV,\qquad \mathcal{E}=\frac{1}{2\mu_{0}}\int_{V}{\bf T}^{2}dV,\]
where \(\eta\approx 1\) m\({}^{2}\)s\({}^{-1}\) is the magnetic diffusivity and \(\mu_{0}=4\pi\times 10^{-7}~{}\text{NA}^{-2}\) is the permeability of free space. Both of these target functions are quadratic in the magnetic field, and so seeking a minimal value subject to the now linear constraints is straightforward. In our sequential method to find a matched Malkus-Taylor state, we first optimise the Malkus state, and subsequently find an optimal matching Taylor state.
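To make this concrete, the following sketch minimises a quadratic form \(\mathbf{t}^{T}W\mathbf{t}\), standing in for the toroidal energy or Ohmic dissipation, subject to linear constraints \(A\mathbf{t}=\mathbf{b}\). It is entirely illustrative: the constraint matrix, right-hand side and weight matrix are random placeholders rather than quantities assembled from any actual field.

```python
import numpy as np

def minimise_quadratic(A, b, W):
    """Minimise t^T W t subject to A t = b, for symmetric positive-definite W
    (the Gram matrix of the chosen energy or dissipation norm)."""
    L = np.linalg.cholesky(W)                      # W = L L^T
    # substitute y = L^T t: objective becomes |y|^2, constraint (A L^-T) y = b
    y, *_ = np.linalg.lstsq(A @ np.linalg.inv(L).T, b, rcond=None)
    return np.linalg.solve(L.T, y)                 # recover t from y

# placeholder problem: 40 linear constraints on 120 toroidal coefficients
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 120))
b = rng.standard_normal(40)
M = rng.standard_normal((120, 120))
W = M @ M.T + 120 * np.eye(120)                    # SPD placeholder weight
t = minimise_quadratic(A, b, W)
assert np.allclose(A @ t, b)
```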
Of the dissipation mechanisms in the core: Ohmic, thermal and viscous, the Ohmic losses are believed to dominate. On these grounds, the most efficient arrangement of the geomagnetic field would be such that Ohmic dissipation \(Q\) is minimised. It is worth noting that in general our procedure is not guaranteed to provide the Malkus state field with least dissipation, but only an approximation to it, since we effectively separately optimise for the poloidal and toroidal component with least dissipation. In terms of finding a Malkus state with minimum toroidal field energy, this is useful in allowing us to determine the weakest toroidal field which is required in order to transform the imposed poloidal field into a Malkus state.
In section A.2 we compare the weakest toroidal field required to make a Malkus state when using only the selected toroidal modes with that obtained when using all toroidal modes (resulting in a nonlinear system). For low truncation, minimisation of the toroidal energy subject to these nonlinear constraints is computationally solvable, and the two approaches produce comparable results. This suggests that estimates for the lower bound of Earth’s toroidal field strength obtained using our linearised approach will not differ greatly from the related full non-linear optimisation (which is computationally infeasible).
## 7 An Earth-like example
We now present some visualisations of the specific class of Malkus states discussed above with minimal toroidal field energy, for which the system of equations enforcing the constraints is linear. The geometry assumed here is as illustrated in figure 0(a), with a Malkus state in the stratified layer in the region \(0.9R<r\leq R\), matching to an inner Taylor state. We shall show the adjustment of the imposed poloidal field structure to a Malkus state by the required additive toroidal field. The strength of this toroidal field will be shown by contour plots of its azimuthal component. We note that the radial component of the magnetic field is defined everywhere by the imposed poloidal field, with the smooth degree 2 radial profile defined in equation 11.
### Magnetic field at 2015
We begin by showing in figure 2 both the radial and azimuthal structure, \(B_{r}\) and \(B_{\phi}\), of the CHAOS-6 model at epoch 2015 on the CMB, \(r=R\). Of note is that at the truncation to degree 13, the azimuthal component is about half as strong as the radial component.
<figure><img src="content_image/1912.02213/x2.png"><figcaption>(a) Br at the CMB, (max value = 0.91 mT).</figcaption></figure>
Figure 3 summarises the strength of the toroidal field (in terms of its azimuthal root mean squared value over solid angle) as a function of radius, for different toroidal truncations \(L_{max}=N_{max}\) (shown in different colours). The toroidal field is required to be four orders of magnitude stronger in the stratified layer in order to satisfy the more restrictive Malkus constraints, compared with the inner region in which the weaker Taylor constraint applies, and adopts a profile that is converged by degree 13. The strong toroidal field throughout the stratified layer occurs despite the electrically insulating boundary condition at the outer boundary that requires the toroidal field to vanish. Within the stratified layer, the azimuthal toroidal field strength attains a maximum rms value of 2.5 mT at a radius of about \(0.98R\) or a depth of about 70 km, about double the observed value at the CMB, and locally exceeds the imposed azimuthal poloidal magnetic field (of rms \(0.28\) mT at this radius).
<figure><img src="content_image/1912.02213/x4.png"><figcaption>Figure 3: The root mean squared azimuthal field strength (defined over solidangle) as a function of radius, comparing the strengths of the poloidal field(red) and toroidal field (blue, green, magneta and cyan) for toroidal fieldswith maximum spherical harmonic degree, order and radial resolution, 13 – 16respectively. The poloidal field is the degree 13 field of minimum Ohmicdissipation compatible with the CHAOS-6 model at epoch 2015 (Finlay et al.,2016).</figcaption></figure>
Figure 4 shows \(B_{\phi}\) for both the total field and the toroidal component in isolation, using a toroidal truncation of 13 (corresponding to the blue line in figure 3). The top row shows the structure at the radius of maximum rms toroidal field (\(r=0.98R\)), demonstrating that the additive toroidal field component (of maximum 8 mT) dominates the total azimuthal field. The bottom row shows a comparable figure at \(r=0.7R\), in the inner region where only Taylor’s constraint applies. Plotted on the same scale, the required additive toroidal component is tiny compared with the imposed poloidal field. This highlights again that the Malkus constraint is much more restrictive than the Taylor constraint.
<figure><img src="content_image/1912.02213/x5.png"><figcaption>(a) Toroidal field Bϕ at r=0.98R, (max value = 7.74 mT, RMS = 2.50 mT).</figcaption></figure>
For comparison, figure 5 shows an equivalent solution to figure 4(a,b) but in the absence of stratification (where the magnetic field satisfies only Taylor’s constraint). The toroidal contribution to the azimuthal field is very weak (note the colourbar range is reduced from that of figure 4(a,b), from 8 to 0.04 mT) and is of very large scale. This further highlights the weakness of the Taylor constraints compared with the Malkus constraints.
<figure><img src="content_image/1912.02213/x9.png"><figcaption>(a) Toroidal field Bϕ at r=0.98R, (max value = 0.034 mT)</figcaption></figure>
### Time averaged field over the past ten millennia
Here we show results for a poloidal field that is derived from the 10000-year time averaged field from the CALS10k.2 model (Constable et al., 2016). The model is only available up to spherical harmonic degree 10, hence we adopt a truncation of \(L_{max}=N_{max}=10\) for the toroidal field. Due to the absence of small-scale features in the field (caused by regularisation) the maximum value of \(B_{r}\) is reduced to about \(1/2\) of the comparable value from the degree-13 CHAOS-6 model from epoch 2015, and similarly the azimuthal field to about \(1/6\) of its value. We note that over a long enough time span, the magnetic field is generally assumed to average to an axial dipole: a field configuration that is both a Malkus state and one in which the azimuthal component vanishes. Thus a small azimuthal component is consistent with such an assumption.
<figure><img src="content_image/1912.02213/x11.png"><figcaption>(a) Br, max value = 0.50 mT.</figcaption></figure>
Contours of the azimuthal field within the stratified layer (at \(r=0.97R\)) are shown in figure 7, which is approximately the radius at which the maximum rms azimuthal toroidal field occurs. As before, the toroidal field dominates the azimuthal component, and its rms (1.66 mT) is about double that on the CMB (0.085 mT). Although its maximum absolute value is about 3 mT, less than the 8 mT found in the 2015 example above, this is consistent with the overall reduction in structure of the imposed poloidal field.
<figure><img src="content_image/1912.02213/x13.png"><figcaption>(a) Toroidal field Bϕ at r=0.97R, (max value = 1.60 mT)</figcaption></figure>
## 8 Discussion
### Estimates of the internal magnetic field strength
Estimating the magnetic field strength inside the core is challenging: observations made on Earth’s surface, using a potential-field extrapolation, only constrain the poloidal magnetic structure down to the CMB and not beneath, and even this structure is visible only to about spherical harmonic degree 13. Furthermore, within the framework of such an extrapolation, the toroidal field is zero on the CMB. Estimating the field within the core beyond these surface values requires insight from numerical models or observations of physical mechanisms that are sensitive to the interior field.
Based on numerical models, Zhang and Fearn (1993) suggest that a criterion for stability of the geomagnetic field is a toroidal field no more than 10 times that of the poloidal field, resulting in an approximate upper bound of 5 mT. In a more recent and conflicting study, Sreenivasan and Narasimhan (2017) suggest that the mean toroidal field is approximately 10 mT or higher, since this intensity is required for the slow magnetostrophic waves present to be able to originate from small-scale motions in the core. Observation-based studies using electric field measurements (Shimizu et al., 1998) suggest a toroidal field strength at the CMB of anywhere in the range of 1-100 times that of the poloidal magnetic field there (i.e. up to about 100 mT). Buffett (2010) calculated a core averaged field strength of 2.5 mT from measurements of tidal dissipation; Gubbins (2007) estimated a toroidal field strength of 1 mT as compatible with patches of reversed magnetic flux. Lastly, the magnetic signature of both torsional and Rossby waves has led to respective estimates of at least 2 mT for \(B_{s}\) within the core and therefore an RMS strength of \(4\) mT assuming isotropy (Gillet et al., 2010), and an RMS estimate of \(B_{\phi}\) of 12 mT (Hori et al., 2015).
The strong toroidal field within our 2015 models of up to 8 mT (and rms \(B_{\phi}\) of \(2.5\) mT) within the stratified layer (at radius \(r=0.98R\) or a depth of about 70 km) is in agreement with the majority of these estimates. This maximum value is notably about 8 times stronger than the observed radial field on the CMB. In both the 2015 and the 10000-yr averaged model, the rms toroidal field within the stratified layer was about double the radial field on the CMB. Interestingly, the azimuthal component of our solution within the inner unstratified region is about 100 times weaker, demonstrating the extent to which Malkus’ constraint is far more restrictive than Taylor’s constraint.
### Limitations of our model
Our model does not produce a formal lower bound on the azimuthal component of a magnetic field that (a) satisfies both the Malkus and Taylor constraints in their relevant regions along with (b) constraints on the radial field at the CMB. Instead, our results give only an upper bound on the lower bound (e.g. Jackson et al., 2011) because we have made a variety of simplifying assumptions, the most notable of which are that (i) we have restricted ourselves to a subspace of Malkus states for which the constraints are linear, (ii) we have imposed the entire poloidal profile, and (iii) we have used a regular basis set for all magnetic fields even within the stratified layer when this is not strictly necessary. However, we show for the example considered in section A.2 that, in this case, assumption (i) does not have a significant impact and our estimate is close to the full nonlinear lower bound. It may be that the other assumptions also do not cause our azimuthal field estimates to deviate significantly from the actual lower bound.
Leaving aside the minimum toroidal field suggested by our model, our analysis allows two statements to be made on the weakness of the Malkus constraints, and on the ability of magnetic structures assumed on the CMB to probe the magnetic structure within the stratified layer. Firstly, our method can find a toroidal field that converts any poloidal field into a Malkus state within a stratified layer of any depth. This means that we cannot use consistency of observation-derived models of the radial field with the Malkus constraints as a discriminant to probe the existence (and depth) of a stratified layer: all such models are consistent. Second, even if a stratified layer is assumed, the lateral radial magnetic field structure at the bottom of the layer is unconstrained by its structure at the top, because we can find a Malkus state assuming any poloidal profile within the layer. Thus, using considerations of the Malkus constraints, models of the surface magnetic field, such as CHAOS-6, cannot be downwards-continued further than the CMB into a stratified layer beneath.
### Model robustness
There remains much uncertainty over the depth of any stably stratified layer at the top of the Earth’s core (Hardy and Wong, 2019). Hence it is natural to consider how our results may change if the layer were of a different thickness to the \(10\%\) of core radius used; as such, we also calculated minimum toroidal-energy solutions matched to CHAOS-6 at epoch 2015 for layer thicknesses of \(5\%\) and \(15\%\). We find very little dependence of the field strengths internal to the layer on the depth of the layer itself, with our root mean square azimuthal field taking peak values of 2.7, 2.5 and 2.4 mT for thicknesses of \(5\%\), \(10\%\) and \(15\%\) respectively.
The resolution of the poloidal field also significantly impacts our optimal solutions. This has already been identified in the comparison between the degree-13 2015 model and the degree-10 10,000-yr time-averaged model, which respectively resulted in rms azimuthal field estimates of 2.5 and 1.2 mT. Interestingly, for very long time-averaging windows the magnetic field is widely supposed to converge to an axial dipole, which, assuming a simple poloidal profile, is itself an exact Malkus state with zero azimuthal field strength. We can further test the effect of resolution by considering maximum poloidal degrees of 6 and 10 for the 2015 model to compare with our solution at degree 13. We find that our estimates for the root mean square azimuthal field (taken over their peak spherical surface) were 1.6 and 2.2 mT respectively. In all these calculations, the spherical harmonic degree representing the toroidal field was taken high enough to ensure convergence. Thus stronger toroidal fields are apparently needed to convert more complex poloidal fields into a Malkus state. This has important implications for the Earth, for which we only know the poloidal field to about degree \(13\) due to crustal magnetism. Our estimates of the azimuthal field strength would likely increase if a full representation of the poloidal field were known.
### Ohmic dissipation
Our method can be readily amended to minimise the toroidal Ohmic dissipation, rather than the toroidal energy. In so doing, we provide a new estimate of the lower bound of Ohmic dissipation within the core. Such lower bounds are useful geophysically as they are linked to the rate of entropy increase within the core, which has direct implications for: core evolution, the sustainability of the geodynamo, the age of the inner core and the heat flow into the mantle (Jackson et al., 2011).
The poloidal field with maximum spherical harmonic degree 13 that we use, based on CHAOS-6 (Finlay et al., 2016) and the minimum Ohmic dissipation radial profile (Backus et al., 1996) has by itself an Ohmic dissipation of 0.2 GW. Jackson and Livermore (2009) showed that by adding additional constraints for the magnetic field, a formal lower bound on the dissipation could be raised to 10 GW, and even higher to 100 GW with the addition of further assumptions about the geomagnetic spectrum. This latter bound is close to typical estimates of 1 - 15 TW Jackson and Livermore (2009); Jackson et al. (2011).
The addition of extra conditions derived from the assumed dynamical balance, namely the Taylor constraints, was considered by Jackson et al. (2011), who adopted a very specific magnetic field representation. These constraints alone raised their estimate of the lower bound from 0.2 to 10 GW, that is, by a factor of 20. In view of the much stronger Malkus constraints (compared to the Taylor constraints), we briefly investigate their impact here.
We follow our methodology and find an additive toroidal field of minimal dissipation (rather than energy) that renders the total field a Malkus state. The dissipation is altered from \(0.2\) to \(0.7\) GW. That this increase is small (a factor of about 3) is disappointing, but it is not in contradiction with our other results. It is generally true that the Malkus constraints are more restrictive than the Taylor constraint, but this comparison can only be made when the same representation is used for both. The method of Jackson et al. (2011) assumed a highly restrictive form, so that their Taylor states were apparently more tightly constrained than our Malkus states and thus produced a higher estimate of the lower bound. Despite our low estimate here, additional considerations of the Malkus constraints may increase the highest estimates of Jackson and Livermore (2009) well into the geophysically interesting regime.
### Further extensions
The Malkus states we have computed, which match to field observations, provide a plausible background state at the top of the core. It may be interesting for future work to investigate how waves thought to exist within such a stratified layer (Buffett, 2014) may behave when considered as perturbations from such a background state, and whether they remain valid suggestions for explaining secular variation in the geomagnetic field. Similarly, combining our analysis with constraints on \(B_{s}\) from torsional wave models (Gillet et al., 2010) may be insightful, and would combine aspects of both long and short-term dynamics.
It is worth noting, though, that we have investigated only static Malkus states without consideration of dynamics: we do not require the magnetic field to be either steady or stable, both of which would impose additional important conditions. An obvious extension to this work is therefore to investigate the fluid flows which are generated by the Lorentz force associated with these fields. This would then allow a consideration of how such flows would modify the field through the induction equation. These dynamics are, however, still relatively poorly known even for the much simpler problem of Taylor states. Recent progress by Hardy et al. (2018) now allows a full calculation of the flow driven by a Taylor state. A general way to discover stationary and stable Taylor states compatible with geomagnetic observations is still out of reach, and currently the only way to find a stable Taylor state is by time-stepping (e.g. Li et al., 2018).
The well-established test used to determine whether the appropriate magnetostrophic force balance is achieved within numerical dynamo simulations is ‘Taylorisation’, which represents a normalised measure of the magnitude of the Taylor integral, equation 2, and hence the departure from the geophysically relevant, magnetostrophic regime (e.g. Takahashi et al., 2005).
We propose an analogous quantity termed ‘Malkusisation’ defined in the same way, in terms of the Malkus integral:
\[\text{Malkusisation}=\frac{\left|\int_{0}^{2\pi}([{\bm{\nabla}}\times{\bf B}]\times{\bf B})_{\phi}\,d\phi\right|}{\int_{0}^{2\pi}\left|([{\bm{\nabla}}\times{\bf B}]\times{\bf B})_{\phi}\right|\,d\phi}\]
This quantity is expected to be very small within a stratified layer adjacent to a magnetostrophic dynamo, provided stratification is sufficiently strong. The recently developed dynamo simulations of Olson et al. (2018); Stanley and Mohammadi (2008), which incorporate the presence of a stratified layer, can utilise the computation of this quantity to assess the simulated regime.
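As a sketch of how this diagnostic could be evaluated in practice, the snippet below computes the ratio defining the Malkusisation from samples of the azimuthal Lorentz force component on a uniform \(\phi\) grid at fixed \((s,z)\). The sampled signals here are arbitrary illustrations; in a dynamo code one would instead evaluate \(([{\bm{\nabla}}\times{\bf B}]\times{\bf B})_{\phi}\) on the simulation’s own grid.

```python
import numpy as np

def malkusisation(F_phi, dphi):
    """Ratio |int F_phi dphi| / int |F_phi| dphi for samples on a uniform phi
    grid at fixed (s, z); values near zero indicate a near-Malkus state."""
    return abs(np.sum(F_phi) * dphi) / (np.sum(np.abs(F_phi)) * dphi)

nphi = 256
phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
dphi = 2.0 * np.pi / nphi

print(malkusisation(np.sin(phi) + 0.01, dphi))  # small: nearly cancels around the circle
print(malkusisation(np.sin(phi) ** 2, dphi))    # ~1: single-signed integrand, far from a Malkus state
```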
Finally, we note that the appropriate description of a stratified layer may in fact need to be more complex than a single uniform layer that we assume. Numerical simulations of core flow with heterogeneous CMB heat flux by Mound et al. (2019) find that localised subadiabatic regions that are stratified are possible amid the remaining actively convecting liquid. If indeed local rather than global stratification is the more appropriate model for the Earth’s outermost core then the condition of requiring an exact Malkus state would not apply, and the constraints on the magnetic field would be weakened by the existence of regions of non-zero radial flow.
## 9 Conclusion
In this paper we have shown how to construct magnetic fields that are consistent with geomagnetic observations, a strongly stratified layer and the exact magnetostrophic balance thought to exist within Earth’s core.
To do this, we derived the Malkus constraints that must be satisfied by such a magnetic field, whose structure gives insight into the nature of the Earth’s magnetic field immediately beneath the CMB, where a layer of stratified fluid may be present.
For a fixed magnetic field resolution, although the Malkus constraints are more numerous than the Taylor constraints, many solutions compatible with geomagnetic observations still exist. By making further assumptions about the field structure, we estimate that the toroidal field within the stratified layer is about 8 mT, significantly stronger than the 1 mT of the radial field inferred from degree-13 observations.
## 10 Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) Centre for Doctoral Training in Fluid Dynamics at the University of Leeds under Grant No. EP/L01615X/1. P.W.L. acknowledges partial support from NERC grant NE/G014043/1. The authors would also like to thank Dominique Jault and Emmanuel Dormy for helpful discussions, as well as the Leeds Deep Earth group for useful comments. Figures were produced using matplotlib (Hunter, 2007).
## Appendix A Full sphere Malkus states
### Simple Example
Here we consider a simple example of an axisymmetric magnetic field in a full sphere of radius \(R\), consisting of four modes: a toroidal \(l=1\), \(n=1\) mode, a toroidal \(l=1\), \(n=2\) mode, a poloidal \(l=1\), \(n=1\) mode and a poloidal \(l=1\), \(n=2\) mode, each of which has an unspecified corresponding coefficient \(\alpha_{l,n}\) and \(\beta_{l,n}\) for toroidal and poloidal modes respectively. Through this we demonstrate the form of the linear constraints which arise from Malkus’ constraint in this case. It is significant to note the vital role of degeneracy within these constraints in permitting a solution.
Through computing the Malkus integral and enforcing that it is zero for all values of \(s\) and \(z\), by requiring that the coefficients of all powers of \(s\) and \(z\) vanish, we obtain a series of simultaneous equations:
\[\left(-\frac{11}{8}\beta_{1,2}+2\beta_{1,1}\right)\alpha_{1,2}-\frac{253}{140} \left(\frac{77}{69}\beta_{1,2}+\frac{56}{759}\beta_{1,1}\right)\alpha_{1,1}=0,\]
\[\left(\frac{319}{84}\beta_{1,2}-\frac{10}{3}\beta_{1,1}\right)\alpha_{1,2}+\frac{253}{70}\beta_{1,2}\alpha_{1,1}=0,\]
\[\left(-\frac{165}{56}\beta_{1,2}+\beta_{1,1}\right)\alpha_{1,2}-\frac{253}{140 }\beta_{1,2}\alpha_{1,1}=0,\]
\[\left(\frac{319}{84}\beta_{1,2}-\frac{10}{3}\beta_{1,1}\right)\alpha_{1,2}+\frac{253}{70}\beta_{1,2}\alpha_{1,1}=0,\]
\[\left(-\frac{165}{28}\beta_{1,2}+2\beta_{1,1}\right)\alpha_{1,2}-\frac{253}{70 }\beta_{1,2}\alpha_{1,1}=0,\]
\[\left(-\frac{165}{56}\beta_{1,2}+\beta_{1,1}\right)\alpha_{1,2}-\frac{253}{140 }\beta_{1,2}\alpha_{1,1}=0.\]
Although there are 6 equations here, there are only two independent conditions:
\[\alpha_{1,1}\beta_{1,2}+\frac{5}{2}\alpha_{1,2}\beta_{1,2}=0,~{}~{}~{}~{}\text {and}~{}~{}~{}~{}\alpha_{1,2}\beta_{1,1}+\frac{11}{7}\alpha_{1,2}\beta_{1,2}=0.\]
If both \(\beta_{1,2}\) and \(\alpha_{1,2}\) are nonzero, then these become linear constraints.
Hence, in this case we can see that there are 4 degrees of freedom, 6 constraint equations but only 2 independent constraints. This means that while on first inspection the system appears to be overconstrained with no solution, there are in fact multiple Malkus state solutions, with the solution space being spanned by two degrees of freedom (\(\beta_{1,2},\alpha_{1,2}\)) with the other coefficients determined in terms of these by the relationships:
\[\alpha_{1,1}=-\frac{5}{2}\alpha_{1,2}~{}~{}~{}~{}\text{and}~{}~{}~{}~{}\beta_{ 1,1}=-\frac{11}{7}\beta_{1,2}.\]
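This reduction is straightforward to check symbolically. The sketch below (assuming the sympy library is available) solves the two independent conditions for \(\alpha_{1,1}\) and \(\beta_{1,1}\), treating \(\alpha_{1,2}\) and \(\beta_{1,2}\) as the nonzero free parameters, and recovers the relationships above.

```python
import sympy as sp

a11, a12, b11, b12 = sp.symbols('alpha_11 alpha_12 beta_11 beta_12')

# The two independent conditions quoted above.
cond1 = sp.Eq(a11 * b12 + sp.Rational(5, 2) * a12 * b12, 0)
cond2 = sp.Eq(a12 * b11 + sp.Rational(11, 7) * a12 * b12, 0)

# Treat alpha_12 and beta_12 as the free parameters (assumed nonzero).
sol = sp.solve([cond1, cond2], [a11, b11], dict=True)[0]
print({k: sp.cancel(v) for k, v in sol.items()})
# expect alpha_11 = -5*alpha_12/2 and beta_11 = -11*beta_12/7
```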
Despite the significant degeneracy in the Malkus constraints, they are notably more restrictive than the Taylor constraints for this truncation of \(L_{max}=1\), \(N_{max}=2\), for which the Taylor integral is identically zero and so provides no restriction.
### Solution of a low-resolution system
We now present the first known non-trivial solution of a Malkus state. Here we consider a full sphere magnetic field truncated at \(L_{max}=3,~{}N_{max}=3,~{}M_{max}=3\), and impose a minimum Ohmic dissipation poloidal profile that matches the CHAOS-6 model (to degree 3) at \(r=R\). We seek a toroidal field using all spherical harmonic modes within the truncation \(L_{max}=3,~{}N_{max}=3\) (described by 45 degrees of freedom) that, when added to this poloidal field, satisfies the Malkus constraints. Of the 72 nonlinear constraint equations, only 42 are independent. Thus the number of degrees of freedom exceeds the number of independent constraints, although since the constraints are nonlinear it is not immediate that a solution exists. However, using the computer algebra software Maple, we find the solution that minimises toroidal field strength as well as satisfying all the constraints, which is visualised in figure 8. We cannot generalise this procedure to higher resolutions because of the numerical difficulty in finding optimal solutions in such a nonlinear problem.
<figure><img src="content_image/1912.02213/x15.png"><figcaption>(a) Br at r=0.9R</figcaption></figure>
For comparison, we also compute the solution using the method described in section 6.1, which, owing to the specific choice of toroidal spherical harmonic modes, results in a linear system. The qualitative similarity between these solutions is important, as it gives insight into the consequences of the (necessary) choice, made for our higher resolution Earth-like solutions, of only including modes that result in a linear system. Quantitatively this holds too, with rms values of \(B_{\phi}\) of 0.21 and 0.23 mT for the non-linear and linear solutions respectively. Hence we suggest that the estimates for the lower bound of Earth’s toroidal field strength we have calculated would not be significantly different were it possible to solve the full non-linear system.
<figure><img src="content_image/1912.02213/x19.png"><figcaption>Figure 9: Linear solution for Bϕ at r=0.9R, using the method outlined insection 6.1 and used for the Earth-like solutions</figcaption></figure>
## Appendix B Enumeration of constraints
In order to determine the number of Malkus constraints, we calculate the maximum possible exponent in dimension of length within the Malkus integral. Since each constraint equation arises from ensuring a coefficient of a different exponent vanishes, enumerating all possibilities gives the maximum number of constraints.
There are three possible non-zero interactions whose sum comprises the Malkus integral: toroidal-toroidal, toroidal-poloidal and poloidal-poloidal, as defined in equation 10. Since the poloidal field definition contains two curls whereas the toroidal field contains only one, this extra derivative reduces the maximum exponent by one for interactions involving a poloidal field as opposed to a toroidal one. This means that the maximal case is determined by the toroidal-toroidal interaction, \([\bm{\mathcal{T}_{1}},\bm{\mathcal{T}}_{2}]\). Since the Malkus integrand is identical to the Taylor integrand, we observe that the maximum radial exponent in the Malkus integrand \((({\bm{\nabla}}\times\bm{\mathcal{T}_{1}})\times\bm{\mathcal{T}_{2}})_{\phi}\) is \(2L_{max}+4N_{max}-1\), as derived by Livermore et al. (2008). This is then reduced by two due to the property that the interaction of two toroidal harmonics with identical spherical harmonic degrees and orders is zero (Livermore et al., 2008). This requires that one of the two modes has an \(L_{max}\) at least one smaller than the other, hence resulting in a maximum possible degree in \(r\) of \(2L_{max}+4N_{max}-3\).
Now, under a transformation of coordinate systems, we note that \(r^{n}\) in spherical coordinates can be expressed as \(s^{j}z^{k}\) in cylindrical coordinates, where \(n=j+k\). Since only even values of \(j\) are present, this results in \(L_{max}+2N_{max}-2=C_{T}\) non-trivial constraint equations in this dimension. There is no such restriction on \(k\), which can take all values up to the maximum of \(2L_{max}+4N_{max}-3=2C_{T}+1\).
Each one of the constraints arises from a coefficient of a term with a different combination of exponents in \(s\) and \(z\); explicitly, these terms have the following form:
\[(A_{C_{T},0}z^{0}+A_{C_{T},1}z)s^{2C_{T}}+(A_{C_{T}-1,0}z^{0}+A_{ C_{T}-1,1}z+A_{C_{T}-1,2}z^{2}+A_{C_{T}-1,3}z^{3})s^{2(C_{T}-1)}\]
\[+(A_{C_{T}-2,0}z^{0}+\dots+A_{C_{T}-2,5}z^{5})s^{2(C_{T}-2)}+\dots\]
\[+(A_{1,0}+\dots+A_{1,2C_{T}-1}z^{2C_{T}-1})s^{2}+(A_{0,0}+\dots+A _{0,2C_{T}+1}z^{2C_{T}+1}).\] (12)
Hence from the summation of the total number of these terms for every combination of \(j\) and \(k\), with \(j\) even, such that \(j+k\leq 2C_{T}+1\) we have the following expression for the maximum number of Malkus constraints,
\[C_{M}=2\sum_{n=0}^{C_{T}}(n+1)=(C_{T}+1)(C_{T}+2)=C_{T}^{2}+3C_{T}+2.\]
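This count is easy to confirm by brute force. The sketch below (the truncations chosen are arbitrary illustrations) enumerates the admissible exponent pairs \((j,k)\), with \(j\) even and \(j+k\leq 2C_{T}+1\), and compares the total with the closed form for \(C_{M}\).

```python
def malkus_constraint_count(L_max, N_max):
    """Enumerate exponent pairs (j, k) with j even and j + k <= 2*C_T + 1,
    and compare the count with the closed form C_M = (C_T + 1)(C_T + 2)."""
    C_T = L_max + 2 * N_max - 2
    brute = sum(1 for j in range(0, 2 * C_T + 2, 2)
                  for k in range(0, 2 * C_T + 2 - j))
    closed = (C_T + 1) * (C_T + 2)
    return brute, closed

for L_max, N_max in [(1, 2), (3, 3), (13, 13)]:
    brute, closed = malkus_constraint_count(L_max, N_max)
    assert brute == closed
    print(L_max, N_max, brute)
```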
## References
* Alexandrakis and Eaton (2010) C. Alexandrakis and D.W. Eaton. Precise seismic-wave velocity atop Earth’s core: No evidence for outer-core stratification. _Physics of the Earth and Planetary Interiors_, 180(1):59–65, 2010.
* Amit (2014) H. Amit. Can downwelling at the top of the Earth’s core be detected in the geomagnetic secular variation? _Phys Earth Planet In_, 229(C):110–121, April 2014.
* Aubert (2019) J. Aubert. Approaching Earth’s core conditions in high-resolution geodynamo simulations. _Geophysical Journal International_, 2019.
* Aubert and Finlay (2019) J. Aubert and C.C. Finlay. Geomagnetic jerks and rapid hydromagnetic waves focusing at Earth’s core surface. _Nature Geoscience_, 12(5):393, 2019.
* Backus et al. (1996) G. Backus, R. Parker, and C. Constable. _Foundations of Geomagnetism_. CUP, 1996.
* Bouffard et al. (2019) M. Bouffard, G. Choblet, S. Labrosse, and J. Wicht. Chemical convection and stratification in the Earth’s outer core. _Frontiers in Earth Science_, 7:99, 2019.
* Braginsky (1967) S.I. Braginsky. Magnetic waves in the Earth’s core. _Geomagnetism and Aeronomy_, 7:851–859, 1967.
* Braginsky (1987) S.I. Braginsky. Waves in a stably stratified layer on the surface of the terrestrial core. _Geomagn. Aeron._, 27:410–414, 1987.
* Braginsky (1993) S.I. Braginsky. Mac-oscillations of the hidden ocean of the core. _Journal of geomagnetism and geoelectricity_, 45(11-12):1517–1538, 1993.
* Braginsky (1999) S.I. Braginsky. Dynamics of the stably stratified ocean at the top of the core. _Physics of the earth and planetary interiors_, 111(1):21–34, 1999.
* Braginsky (2006) S.I. Braginsky. Formation of the stratified ocean of the core. _Earth and Planetary Science Letters_, 243(3-4):650–656, 2006.
* Buffett (2010) B.A. Buffett. Tidal dissipation and the strength of the Earth’s internal magnetic field. _Nature_, 468(7326):952–954, Dec 2010. URL http://dx.doi.org/10.1038/nature09643.
* Buffett (2014) B.A. Buffett. Geomagnetic fluctuations reveal stable stratification at the top of the Earth’s core. _Nature_, 507(7493):484–487, March 2014.
* Buffett and Seagle (2010) B.A. Buffett and C.T. Seagle. Stratification of the top of the core due to chemical interactions with the mantle. _Journal of Geophysical Research: Solid Earth_, 115(B4), 2010.
* Buffett et al. (2016) B.A. Buffett, N. Knezek, and R. Holme. Evidence for mac waves at the top of Earth’s core and implications for variations in length of day. _Geophysical Journal International_, 204(3):1789–1800, 2016.
* Busse (1975) F. H. Busse. A necessary condition for the Geodynamo. _J. Geophys. Res._, 80:278–280, 1975.
* Christensen (2018) U.R. Christensen. Geodynamo models with a stable layer and heterogeneous heat flow at the top of the core. _Geophysical Journal International_, 215(2):1338–1351, 2018.
* Christensen and Wicht (2008) U.R. Christensen and J. Wicht. Models of magnetic field generation in partly stable planetary cores: Applications to mercury and saturn. _Icarus_, 196(1):16–34, 2008.
* Christensen and Wicht (2015) U.R. Christensen and J. Wicht. Numerical dynamo simulations. _Treatise on Geophysics, Elsevier_, 8(1):245–277, 2015.
* Christensen et al. (2009) U.R. Christensen, V. Holzwarth, and A. Reiners. Energy flux determines magnetic field strength of planets and stars. _Nature_, 457(7226):167, 2009.
* Christensen et al. (2010) U.R. Christensen, J. Aubert, and G. Hulot. Conditions for Earth-like geodynamo models. _Earth and Planetary Science Letters_, 296(3-4):487–496, 2010.
* Constable et al. (2016) C.G. Constable, M. Korte, and S. Panovska. Persistent high paleosecular variation activity in southern hemisphere for at least 10 000 years. _Earth and Planetary Science Letters_, 453:78–86, 2016.
* Cox et al. (2019) G.A. Cox, C.J. Davies, P.W. Livermore, and J. Singleton. Penetration of boundary-driven flows into a rotating spherical thermally stratified fluid. _Journal of Fluid Mechanics_, 864:519–553, 2019.
* Davies et al. (2015) C.J. Davies, M. Pozzo, D. Gubbins, and D. Alfè. Constraints from material properties on the dynamics and evolution of Earth’s core. _Nature Geoscience_, 8(9):678–685, 2015.
* Fearn (1998) D.R. Fearn. Hydromagnetic flow in planetary cores. _Rep. Prog. Phys._, 61:175–235, 1998.
* Finlay et al. (2016) C.C. Finlay, N. Olsen, S. Kotsiaros, N. Gillet, and L. Toeffner-Clausen. Recent geomagnetic secular variation from _Swarm_ and ground observatories as estimated in the CHAOS-6 geomagnetic field model. _Earth Planets Space_, 68(1):1–18, 2016.
* Gillet et al. (2010) N. Gillet, D. Jault, E. Canet, and A. Fournier. Fast torsional waves and strong magnetic field within the Earth’s core. _Nature_, 465(7294):74–77, 2010.
* Glane and Buffett (2018) S. Glane and B.A. Buffett. Enhanced core-mantle coupling due to stratification at the top of the core. 2018.
* Gross (2001) R.S. Gross. A combined length-of-day series spanning 1832–1997: Lunar97. _Physics of the Earth and Planetary Interiors_, 123(1):65–76, 2001.
* Gubbins (2007) D. Gubbins. Geomagnetic constraints on stratification at the top of Earth’s core. _Earth, planets and space_, 59(7):661–664, 2007.
* Hardy and Wong (2019) C.M. Hardy and J. Wong. Stably stratified layers within Earth’s core. _Astronomy & Geophysics_, 60(3):3–30, 2019.
* Hardy et al. (2018) C.M. Hardy, P.W. Livermore, J. Niesen, J. Luo, and K. Li. Determination of the instantaneous geostrophic flow within the three-dimensional magnetostrophic regime. _Proc. R. Soc. A_, 474(2218):20180412, 2018.
* Helffrich and Kaneshima (2010) G. Helffrich and S. Kaneshima. Outer-core compositional stratification from observed core wave speed profiles. _Nature_, 468(7325):807–810, 2010.
* Helffrich and Kaneshima (2013) G. Helffrich and S. Kaneshima. Causes and consequences of outer core stratification. _Physics of the Earth and Planetary Interiors_, 223:2–7, 2013.
* Hollerbach and Ierley (1991) R. Hollerbach and G. Ierley. A modal \(\alpha^{2}\)-dynamo in the limit of asymptotically small viscosity. _Geophys. Astrophys. Fluid Dyn._, 56:133–158, 1991.
* Holme (2015) R. Holme. Large-scale flow in the core. _Treatise on Geophysics, Ed. M. Kono_, 8:91–113, 2015.
* Holme and De Viron (2005) R. Holme and O. De Viron. Geomagnetic jerks and a high-resolution length-of-day profile for core studies. _Geophysical Journal International_, 160(2):435–439, 2005.
* Hori et al. (2015) K. Hori, C.A. Jones, and R.J. Teed. Slow magnetic Rossby waves in the Earth’s core. _Geophysical Research Letters_, 42(16):6622–6629, 2015.
* Hunter (2007) J.D. Hunter. Matplotlib: A 2D graphics environment. _Computing In Science & Engineering_, 9(3):90–95, 2007. doi: 10.1109/MCSE.2007.55.
* Irving et al. (2018) J.C.E. Irving, S. Cottaar, and V. Lekić. Seismically determined elastic parameters for Earth’s outer core. _Science advances_, 4(6):eaar2538, 2018.
* Jackson and Finlay (2015) A. Jackson and C.C. Finlay. Geomagnetic secular variation and its applications to the core. _Treatise on geophysics_, 5:137–184, 2015.
* Jackson and Livermore (2009) A. Jackson and P.W. Livermore. On Ohmic heating in the Earth’s core I: nutation constraints. _Geophys. J. Int._, 179(2):923–928, 2009.
* Jackson et al. (2011) A. Jackson, P.W. Livermore, and G. Ierley. On ohmic heating in the Earth’s core II: Poloidal magnetic fields obeying taylor’s constraint. _Physics of the Earth and Planetary Interiors_, 187(3-4):322–327, 2011.
* Jault and Cardin (1999) D. Jault and P. Cardin. On dynamic geodynamo models with imposed velocity as energy source. _Phys. Earth Planet. Int._, 111:75–81, 1999.
* Jeanloz (1990) R. Jeanloz. The nature of the Earth’s core. _Annual Review of Earth and Planetary Sciences_, 18(1):357–386, 1990.
* Kaneshima (2017) S. Kaneshima. Array analyses of SmKS waves and the stratification of Earth’s outermost core. _Physics of the Earth and Planetary Interiors_, In press, 2017.
* Kono (2015) M. Kono. Geomagnetism - an introduction and overview. _Treatise on geophysics_, 5:1–30, 2015.
* Lay and Young (1990) T. Lay and C.J. Young. The stably-stratified outermost core revisited. _Geophysical Research Letters_, 17(11):2001–2004, 1990.
* Lesur et al. (2015) V. Lesur, K. Whaler, and I. Wardinski. Are geomagnetic data consistent with stably stratified flow at the core-mantle boundary? _Geophys. J. Int._, 201(2):929–946, March 2015.
* Lewis and Bellan (1990) H.R. Lewis and P.M. Bellan. Physical constraints on the coefficients of fourier expansions in cylindrical coordinates. _Journal of Mathematical Physics_, 31(11):2592–2596, 1990.
* Li et al. (2010) K. Li, P. W. Livermore, and A. Jackson. An optimal Galerkin scheme to solve the kinematic dynamo eigenvalue problem in a full sphere. _Journal of computational physics_, 229(23):8666–8683, 2010.
* Li et al. (2011) K. Li, A. Jackson, and P.W. Livermore. Variational data assimilation for the initial value dynamo problem. _Phys. Rev. E_, 84(5):056321, 2011.
* Li et al. (2018) K. Li, A. Jackson, and P.W. Livermore. Taylor state dynamos found by optimal control: axisymmetric examples. _Journal of Fluid Mechanics_, 853:647–697, 2018.
* Livermore and Hollerbach (2012) P.W. Livermore and R. Hollerbach. Successive elimination of shear layers by a hierarchy of constraints in inviscid spherical-shell flows. _J. Math. Phys._, 53(7):073104, 2012.
* Livermore et al. (2008) P.W. Livermore, G. Ierley, and A. Jackson. The structure of Taylor’s constraint in three dimensions. In _Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences_, volume 464, pages 3149–3174. The Royal Society, 2008.
* Livermore et al. (2009) P.W. Livermore, G. Ierley, and A. Jackson. The construction of exact Taylor states. I: The full sphere. _Geophysical Journal International_, 179(2):923–928, 2009.
* Malkus (1967) W.V.R. Malkus. Hydromagnetic planetary waves. _J. Fluid Mech._, 28(4):793–802, 1967.
* Malkus (1979) W.V.R. Malkus. Dynamo macrodynamics in rotating stratified fluids. _Physics of the Earth and Planetary Interiors_, 20(2-4):181–184, 1979.
* Mound et al. (2019) J.E. Mound, C.J. Davies, S. Rost, and J. Aurnou. Regional stratification at the top of Earth’s core due to core-mantle boundary heat flux variations. _Nature Geoscience_, 2019.
* Nakagawa (2011) T. Nakagawa. Effect of a stably stratified layer near the outer boundary in numerical simulations of a magnetohydrodynamic dynamo in a rotating spherical shell and its implications for Earth’s core. _Physics of the Earth and Planetary Interiors_, 187(3-4):342–352, 2011.
* Olson et al. (2017) P. Olson, M. Landeau, and E. Reynolds. Dynamo tests for stratification below the core-mantle boundary. _Physics of the Earth and Planetary Interiors_, 271:1–18, 2017.
* Olson et al. (2018) P. Olson, M. Landeau, and E. Reynolds. Outer core stratification from the high latitude structure of the geomagnetic field. _Frontiers in Earth Science_, 6:140, 2018.
* Pozzo et al. (2012) M. Pozzo, C. Davies, D. Gubbins, and D. Alfè. Thermal and electrical conductivity of iron at Earth’s core conditions. _Nature_, 485:355–358, 2012.
* Roberts and Aurnou (2011) P.H. Roberts and J.M. Aurnou. On the theory of core-mantle coupling. _Geophys. Astrophys. Fl. Dyn._, 106(2):157–230, 2011.
* Roberts and Wu (2014) P.H. Roberts and C.C. Wu. On the modified Taylor constraint. _Geophysical & Astrophysical Fluid Dynamics_, 108(6):696–715, 2014.
* Roberts and Wu (2018) P.H. Roberts and C.C. Wu. On magnetostrophic mean-field solutions of the geodynamo equations. part 2. _Journal of Plasma Physics_, 84(4), 2018.
* Roberts et al. (2007) P.H. Roberts, Z.J. Yu, and C.T. Russell. On the 60-year signal from the core. _Geophysical and Astrophysical Fluid Dynamics_, 101(1):11–35, 2007.
* Schaeffer et al. (2017) N. Schaeffer, D. Jault, H. C. Nataf, and A. Fournier. Turbulent geodynamo simulations: a leap towards Earth’s core. _Geophysical Journal International_, 211(1):1–29, 2017.
* Shimizu et al. (1998) H. Shimizu, T. Koyama, and H. Utada. An observational constraint on the strength of the toroidal magnetic field at the CMB by time variation of submarine cable voltages. _Geophysical research letters_, 25(21):4023–4026, 1998.
* Soward and Jones (1983) A.M. Soward and C.A. Jones. \(\alpha^{2}\)-Dynamos and Taylor’s Constraint. _Geophys. Astrophys. Fluid Dyn._, 27:87–122, 1983.
* Sprain et al. (2019) C.J. Sprain, A.J. Biggin, C.J. Davies, R.K. Bono, and D.G. Meduri. An assessment of long duration geodynamo simulations using new paleomagnetic modeling criteria (QPM). _Earth and Planetary Science Letters_, 526:115758, 2019.
* Sreenivasan and Narasimhan (2017) B. Sreenivasan and G. Narasimhan. Damping of magnetohydrodynamic waves in a rotating fluid. _Journal of Fluid Mechanics_, 828:867–905, 2017.
* Stanley and Mohammadi (2008) S. Stanley and A. Mohammadi. Effects of an outer thin stably stratified layer on planetary dynamos. _Physics of the Earth and Planetary Interiors_, 168(3-4):179–190, 2008.
* Takahashi et al. (2005) F. Takahashi, M. Matsushima, and Y. Honkura. Simulations of a Quasi-Taylor State Geomagnetic Field Including Polarity Reversals on the Earth Simulator. _Science_, 309:459–461, 2005.
* Taylor (1963) J.B. Taylor. The magneto-hydrodynamics of a rotating fluid and the Earth’s dynamo problem. _Proc. R. Soc. A_, 9:274–283, 1963.
* Whaler (1980) K.A. Whaler. Does the whole of the Earth’s core convect? _Nature_, 287(5782):528, 1980.
* Wicht and Christensen (2010) J. Wicht and U.R. Christensen. Torsional oscillations in dynamo simulations. _Geophys. J. Int._, 181:1367–1380, 2010.
* Wicht and Sanchez (2019) J. Wicht and S. Sanchez. Advances in geodynamo modelling. _Geophysical & Astrophysical Fluid Dynamics_, 113(1-2):2–50, 2019.
* Wu and Roberts (2015) C.C. Wu and P.H. Roberts. On magnetostrophic mean-field solutions of the geodynamo equations. _Geophysical & Astrophysical Fluid Dynamics_, 109(1):84–110, 2015.
* Yan and Stanley (2018) C. Yan and S. Stanley. Sensitivity of the geomagnetic octupole to a stably stratified layer in the Earth’s core. _Geophysical Research Letters_, 45(20):11–005, 2018.
* Zhang and Fearn (1993) K. Zhang and D. R. Fearn. How strong is the invisible component of the magnetic field in the Earth’s core. _Geophys. Res. Lett._, 20(19):2083–2086, 1993.
|
1804.06857 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 81678,
"num_imgs": 0,
"llama3_tokens_count": 27379
} | [] | # Hamiltonian surgery: Cheeger-type inequalities for nonpositive (stoquastic), real, and Hermitian matrices
Michael Jarret
mjarret@pitp.ca
February 26, 2024
###### Abstract
Cheeger inequalities bound the spectral gap \(\gamma\) of a space by isoperimetric properties of that space and vice versa. In this paper, I derive Cheeger-type inequalities for nonpositive matrices (aka stoquastic Hamiltonians), real matrices, and Hermitian matrices. For matrices written \(H=L+W\), where \(L\) is either a combinatorial or normalized graph Laplacian, I show that,
1. when \(W\) is diagonal and \(L\) has maximum degree \(d_{\max}\), \(2h\geq\gamma\geq\sqrt{h^{2}+d_{\max}^{2}}-d_{\max}\);
2. when \(W\) is real, we can often route negative-weighted edges along positive-weighted edges such that the Cheeger constant of the resulting graph obeys an inequality similar to that above; and
3. when \(W\) is Hermitian, the weighted Cheeger constant obeys \(2h\geq\gamma\)
where \(h\) is the weighted Cheeger constant of \(H\). This constant reduces bounds on \(\gamma\) to information contained in the underlying graph and the Hamiltonian’s ground-state.
If efficiently computable, the constant opens up a very clear path towards adaptive quantum adiabatic algorithms, those that adjust the adiabatic path based on spectral structure. I sketch a bashful adiabatic algorithm that aborts the adiabatic process early, uses the resulting state to approximate the weighted Cheeger constant, and restarts the process using the updated information. Should this approach work, it would provide more rigorous foundations for adiabatic quantum computing without _a priori_ knowledge of the spectral gap.
## 1 Introduction
### Motivation
An \(n\times n\) Hermitian matrix \(H\) has eigenvalues \(\lambda_{0}\leq\lambda_{1}\leq\dots\leq\lambda_{n-1}\). We call the difference in the two lowest eigenvalues of \(H\), \(\gamma=\lambda_{1}-\lambda_{0}\), its spectral gap. Bounding the spectral gap is a problem that could be motivated any number of ways. In quantum theory, the spectral gap determines the runtime of adiabatic algorithms and processes [27, 3, 22] and relates to quantum phase transitions [40]. The spectral gap is also intimately related to the rate at which heat diffuses on a manifold [45, 4] and the rate at which substochastic processes approach their quasistationary distributions [18, 19]. At the computational level, it determines the runtime of various well-known randomized algorithms [42] as well as Fleming-Viot type algorithms for approximating marginals [28, 31, 17, 16]. Each of these is an independently interesting topic, which would motivate its own study of the spectral gap.
Here, I abstract away the context and seek to understand the spectral structure of \(H\) by decomposing it as \(H=L+W\), the sum of a graph Laplacian \(L\) and some other Hermitian matrix \(W\). All Hermitian matrices can be decomposed this way and, as we will see, the decomposition proves fruitful. If \(W\) is diagonal, \(H\) is frequently called a “stoquastic” Hamiltonian or “stoquastic” matrix. A diagonal \(W\) also implies that \(H\) is an infinitesimal generator of a substochastic process and the resulting matrix \(I-\epsilon H\) is a substochastic matrix. When \(W\) is not diagonal, but instead real, the matrix \(H\) may have a _sign problem_, or all off-diagonal terms may not have the same sign. The “problem” is that such Hamiltonians can be difficult to study with Monte Carlo methods [44]. Finally, when \(W\) is a general Hermitian matrix, then \(H\) has no special name; Hamiltonian is special enough.
In this paper, I look to formalize the relationship between \(\gamma\) and some geometrical properties of the ground-state \(\phi_{0}\) of \(H\), or its lowest eigenvector. I always assume that \(H\) is represented in such a way that \(H:\mathbb{C}^{\abs{V}}\longrightarrow\mathbb{C}^{\abs{V}}\) for some graph \(G=(V,E)\). In our representation, \(L\) is the graph Laplacian of \(G\). Correspondingly, we consider functions \(\phi:V\longrightarrow\mathbb{C}\). I often assume that \(H\) has been rotated by a diagonal unitary transformation such that \(\phi_{0}\geq 0\) and will define a weighted Cheeger constant \(h\)[15], capturing the relevant geometric properties of \(\phi_{0}\). It remains unclear how difficult approximating \(h\) is, however in the event that \(W=0\), it reduces to the Cheeger constant of \(G\) and can be efficiently approximated. If and when one can approximate \(h\) remains a very important open question beyond the scope of this paper, though I discuss some related ideas in Section7.
The conceptual lesson of this paper is quite concrete. For any Hermitian matrix \(H\), if \(H\) has a large spectral gap, then \(\phi_{0}\)_has no bottlenecks_. That is to say, that \(\phi_{0}\) is a somewhat smooth distribution over \(G\). Prior results, discussed below, suggest that we should already believe this, but leave open the possibility that there exist cases that betray our intuition. Provided that \(H\) is not diagonal, I show that our intuition is always correct. (In the case that \(H\) is diagonal, our intuition is trivially correct.) I do not, however, show the converse. That is, I leave open the question of whether a small spectral gap implies a bottlenecked \(\phi_{0}\). I show that this is indeed implied in the stoquastic and some real cases, but when this is implied by the general Hermitian case is left open. Adapting these techniques to more general cases appears possible and I will discuss some potential approaches as we progress through the proof. Furthermore, in Section7, we will see that understanding the precise relationship might have far-reaching implications for quantum adiabatic algorithms.
### Previous Work
In this paper, we study isoperimetric inequalities of discrete systems. Such inequalities enjoy a rich history. Within the context of randomized algorithms, the Cheeger constant often provides a means of determining the mixing time of a Markov chain and, thus, the efficiency of certain approximation algorithms [42]. Standard Cheeger inequalities relate the spectral gap \(\gamma\) of the Laplacian \(L\) corresponding to a graph \(G\) and the Cheeger constant \(h\) of that graph. They usually appear in a form similar to
\[2h\geq\gamma\geq\frac{h^{2}}{2}\] (1)
and provide a very useful, intuitive significance to the spectral gap. Although the Cheeger constant is a useful quantity, computing it exactly for an arbitrary graph is NP-hard [24, 35, 32]. Despite this hardness, the Cheeger constant can indeed be efficiently approximated [42, 33].
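For concreteness, the sketch below checks Eq.1 numerically for a small unweighted graph, computing \(h\) by brute force over all vertex subsets (exponential in the number of vertices, so feasible only for tiny examples) and \(\gamma\) as the second-smallest eigenvalue of the normalized Laplacian. The choice of a 6-cycle is an arbitrary illustration.

```python
import itertools
import numpy as np

def normalized_laplacian(A):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

def cheeger_constant(A):
    """Brute-force h = min_S cut(S, S^c) / min(vol S, vol S^c)."""
    n = len(A)
    d = A.sum(axis=1)
    vol_V = d.sum()
    best = np.inf
    for size in range(1, n):
        for S in itertools.combinations(range(n), size):
            S = set(S)
            vol_S = sum(d[u] for u in S)
            cut = sum(A[u, v] for u in S for v in range(n) if v not in S)
            best = min(best, cut / min(vol_S, vol_V - vol_S))
    return best

n = 6
A = np.zeros((n, n))
for i in range(n):                      # unweighted 6-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

h = cheeger_constant(A)                 # 1/3 for the 6-cycle
# Spectral gap lambda_1 - lambda_0, with lambda_0 = 0 for a connected graph.
gamma = np.sort(np.linalg.eigvalsh(normalized_laplacian(A)))[1]
assert 2 * h >= gamma >= h**2 / 2
```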
The spectral gap, and hence Cheeger constant, is also of primary interest in spectral graph theory, where it is often explored in connection with graph Laplacians [13]. In [15], the authors adapted Cheeger inequalities to apply to the gap in the _Dirichlet eigenvalues_ of a graph. The distinguishing characteristic of the Dirichlet eigenvalues is that they arise by imposing a Dirichlet boundary constraint. This constraint requires that, for some subset of vertices \(\delta V\subseteq V\), all eigenfunctions must satisfy \(f|_{\delta V}=0\). These eigenvalues are also studied quite a bit and numerous bounds appear in the literature. Unfortunately for us, these studies typically focus on the easier problem of bounding eigenvalues, not their differences. Additionally, the few gap inequalities that exist, like those in [15], are not easily applied to most situations we are presently interested in. Thus, we require a new inequality.
To this end, various authors (including me) have pursued Cheeger-type inequalities in the stoquastic case [2, 29] and more general Hermitian matrices [20]. In either case, this problem is actually equivalent to that of determining the differences in the Dirichlet eigenvalues of an appropriate host graph. These inequalities all assume an unfortunate form that looks something like
\[2\norm{H}h\geq\gamma\geq\frac{h^{2}}{2\norm{H}}\] (2)
where \(h\) is an appropriately defined Cheeger constant. We can easily see the weakness of this expression: unlike in the case of graph Laplacians, it is entirely possible that \(h^{2}\sim\norm{H}\sim e^{n}\). Thus, the lower bound from Eq.2 scales like a constant, whereas we would expect from Eq.1 that \(\gamma\gtrsim e^{n}\). A similar argument illuminates the weakness of the upper bound. Suppose the very common situation that \(\norm{H}\sim e^{n}\) and \(h\sim e^{-n}\). Then, the upper bound on \(\gamma\) scales as a constant whereas we expect that \(\gamma\lesssim e^{-n}\). This latter issue leaves open the possibility that one might have a large spectral gap in the presence of a bottleneck. In this work, I will correct these defects.
### Results
Consider a graph \(G=(V,E)\) with edge weights assigned by \(w:V\times V\longrightarrow\mathbb{R}\). Then, for the corresponding graph Laplacian \(L\) and any real diagonal matrix \(W\), \(H=L+W\) admits a weighted Cheeger constant \(h\), defined in [15] and again in Section3. In particular, I prove that for any stoquastic matrix with spectral gap \(\gamma\)
\[{2h\geq\gamma\geq\sqrt{h^{2}+Q^{2}}-Q}\] (3)
where, if \(L\) is a combinatorial Laplacian, \(Q\) is the maximum degree of a vertex of \(G\). If \(L\) is a normalized Laplacian, \(Q=1\).
For any real matrix, we can identify positive off-diagonal terms with negative edge weights (\(E^{-}=\{\{u,v\}\in E|\allowbreak w(u,v)\allowbreak<0\allowbreak\}\)) and show that
\[{2h\geq\gamma\geq\sqrt{k^{2}+Q^{2}}-Q}.\]
if \(\phi_{0}\) is uniform up to phase and
\[{2h\geq\gamma\geq(Q+\rho)^{2}-\sqrt{(Q+\rho)^{2}-k^{2}}}\]
where \(\rho=\lambda_{\abs{V}-1}-\lambda_{0}\) otherwise. Above, \(h\) is the weighted Cheeger constant corresponding to the graph \(G^{+}=(V,E\setminus E^{-})\) under the original weight function and \(k\) the weighted Cheeger constant of \(G^{+}\) with a redistributed weight function \(w^{+}\) to be defined in Section4. In Section6, we will see that these equations can often be relaxed to
\[{2h\geq\gamma\geq\epsilon\left(\sqrt{h^{2}+Q^{2}}-Q\right)}\]
for a constant \(\epsilon\), which may be easier to apply and retains appropriate scaling behavior. In other words, at least asymptotically, I reduce the problem of bounding the gap of a signed graph \(G\) to that of determining the appropriate Cheeger constant of \(G^{+}\).
Finally, I provide the upper bound
\[2h\geq\gamma\] (4)
for any Hermitian matrix.
Not only does this expression correct the problems mentioned in Section1.2, but the improvement over these statements can be quite drastic and firmly establishes some conceptual points. Note that in cases where \(h\) is large compared to the maximum degree \(Q\), which often happens when \(\norm{W}\) is sufficiently large, the lower bound in Eq.2 becomes weak whereas Eq.3 remains tight. Furthermore, the form of the expression guarantees that the inequality scales appropriately for all relative sizes of \(L\) and \(W\) and, hence, all Hermitian matrices. Although establishing the lower bound in Eq.2 is unlikely in general, expanding around \(h\approx 0\) does yield a similar expression. Furthermore, when \(h\) is large relative to \(Q\), Eq.3 guarantees that \(\gamma\sim h\).
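To make both regimes explicit, a one-line expansion of the lower bound (included here only for clarity) gives
\[\sqrt{h^{2}+Q^{2}}-Q=Q\left(\sqrt{1+h^{2}/Q^{2}}-1\right)\approx\frac{h^{2}}{2Q}\quad\text{for }h\ll Q,\qquad\sqrt{h^{2}+Q^{2}}-Q\approx h-Q\quad\text{for }h\gg Q,\]
so the bound interpolates between a quadratic, Eq.1-like form at small \(h\) and linear scaling in \(h\) at large \(h\).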
The efficiency with which one can classically approximate \(h\) remains unclear, but the quantity only depends upon information about the ground-state distribution of \(H\) and the corresponding graph \(G\). This opens up the possibility that an adiabatic algorithm may be able to efficiently approximate \(h\), even if a classical method remains elusive. This ability would be a great advantage to the field of adiabatic optimization, as it could be used to determine the appropriate time dependence of an adiabatic evolution without _a priori_ knowledge of the spectral gap. Such an evolution can be necessary to produce quantum speedups, like those achieved in adiabatic Grover search [39]. This idea will be discussed in detail in Section7, but conclusive results, should they exist, are left for future work.
## 2 Preliminaries
### The Rayleigh Quotient
For an \(n\times n\) Hermitian operator \(H\) acting on the space \(\mathcal{S}=\{f:\intrange{1}{n}\longrightarrow\mathbb{C}^{n}\}\), one defines the Rayleigh quotient corresponding to a function \(f\in\mathcal{S}\) as
\[R(H,f)=\frac{\langle f,Hf\rangle}{\langle f,f\rangle}.\] (5)
Thus, the eigenvalues \(\lambda_{0}(H)\leq\lambda_{1}(H)\leq\dots\leq\lambda_{n-1}(H)\) of \(H\) can be written as
\[\lambda_{i}(H)=\inf_{f\perp T_{i-1}}\frac{\langle f,Hf\rangle}{\langle f,f\rangle}\] (6)
where \(T_{i}\) is the space spanned by the functions \(f_{j}\) achieving \(\lambda_{j}(H)\) for each \(0\leq j\leq i\). We call \(f\) achieving \(\lambda_{0}(H)\) the _ground-state_. Of particular interest in this paper is the spectral gap \(\gamma(H)\) of \(H\), or the difference in its two lowest eigenvalues, \(\gamma(H)=\lambda_{1}(H)-\lambda_{0}(H)\). Usually, we will just write \(\gamma=\gamma(H)\) and \(\lambda_{i}=\lambda_{i}(H)\) and reserve the argument for when it is necessary to distinguish the eigenvalues of two matrices.
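As a concrete numerical illustration of these definitions (a minimal sketch; the random matrix, its size, and the seed are arbitrary), the eigendecomposition of a Hermitian matrix recovers the variational characterization of Eq.6 and the spectral gap.

```python
import numpy as np

def rayleigh(H, f):
    return (np.vdot(f, H @ f) / np.vdot(f, f)).real

rng = np.random.default_rng(1)
n = 6
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (X + X.conj().T) / 2                      # a random Hermitian matrix

evals, evecs = np.linalg.eigh(H)              # eigenvalues in ascending order
gamma = evals[1] - evals[0]                   # spectral gap

# The ground-state minimises the Rayleigh quotient ...
assert np.isclose(rayleigh(H, evecs[:, 0]), evals[0])
# ... and any function orthogonal to it has R(H, f) >= lambda_1.
f = rng.normal(size=n) + 1j * rng.normal(size=n)
f -= np.vdot(evecs[:, 0], f) * evecs[:, 0]
assert rayleigh(H, f) >= evals[1] - 1e-10
print(gamma)
```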
Our first goal is to rewrite \(H\) in a form useful for the current work. Presently, we only seek lower bounds for real matrices, so we can prove a quick comparison theorem between \(\gamma(H)\) and \(\gamma(\Re(H))\) where \(\Re(H)\) is the real part of \(H\). One can immediately obtain a useful upper bound on the spectral gap of \(H\) by considering the function \(\phi_{0}\) obtaining \(\lambda_{0}(H)\) in Eq.6.

Lemma. For a Hermitian matrix \(H\) with spectral gap \(\gamma(H)\), ground-state \(\phi_{0}\), and \(U=\mathrm{diag}(\phi_{0}/\abs{\phi_{0}})\) where the ratio and absolute value are taken pointwise,
\[\gamma(H)\leq\gamma(\Re(U^{\dagger}HU)).\]
Proof.: This proof is very straightforward. First, suppose \(H\) has ground-state \(\phi_{0}\). Then, let \(U=\mathrm{diag}(\phi_{0}/\abs{\phi_{0}})\) where the ratio and absolute value are taken pointwise. Obviously, \(U\) is unitary and \(U^{\dagger}\phi_{0}\geq 0\). Now, write \(\Im(U^{\dagger}HU)=iS\), where \(S\in\mathbb{R}^{n\times n}\) is skew-symmetric. Thus, \(\lambda_{0}\) satisfies
\[\lambda_{0} =\inf_{f\in\mathbb{C}^{n}}\frac{\langle f,Hf\rangle}{\langle f,f\rangle}\]
\[=\inf_{U^{\dagger}f\in\mathbb{C}^{n}}\frac{\langle Uf,HUf\rangle} {\langle Uf,Uf\rangle}\]
\[=\inf_{f\geq 0}\frac{\langle f,U^{\dagger}HUf\rangle}{\langle f,f\rangle}\]
\[=\inf_{f\geq 0}\frac{\langle f,\Re(U^{\dagger}HU)f\rangle+\langle f ,iSf\rangle}{\langle f,f\rangle}\]
\[=\inf_{f\geq 0}\frac{\langle f,\Re(U^{\dagger}HU)f\rangle}{ \langle f,f\rangle}\]
where the second equality follows from our choice of \(U\) and the final equality from the skew-symmetry of \(S\). Now, the Rayleigh quotient for \(\lambda_{1}\) becomes
\[\lambda_{1} =\inf_{\begin{subarray}{c}f\perp\phi_{0}\\ f\in\mathbb{C}^{n}\end{subarray}}\frac{\langle f,Hf\rangle}{\langle f,f\rangle}\]
\[=\inf_{\begin{subarray}{c}Uf\perp\phi_{0}\\ f\in\mathbb{C}^{n}\end{subarray}}\frac{\langle Uf,HUf\rangle}{\langle Uf,Uf\rangle}\]
\[=\inf_{\begin{subarray}{c}f\perp U^{\dagger}\phi_{0}\\ f\in\mathbb{C}^{n}\end{subarray}}\frac{\langle f,U^{\dagger}HUf\rangle}{ \langle f,f\rangle}\]
\[=\inf_{\begin{subarray}{c}f\perp U^{\dagger}\phi_{0}\\ f\in\mathbb{C}^{n}\end{subarray}}\frac{\langle f,\Re(U^{\dagger}HU)f\rangle+ \langle f,iSf\rangle}{\langle f,f\rangle}\]
\[\leq\inf_{\begin{subarray}{c}f\perp U^{\dagger}\phi_{0}\\ f\in\mathbb{R}^{n}\end{subarray}}\frac{\langle f,\Re(U^{\dagger}HU)f\rangle+ \langle f,iSf\rangle}{\langle f,f\rangle}\]
\[=\inf_{\begin{subarray}{c}f\perp U^{\dagger}\phi_{0}\\ f\in\mathbb{R}^{n}\end{subarray}}\frac{\langle f,\Re(U^{\dagger}HU)f\rangle}{ \langle f,f\rangle}.\]
Above, the inequality follows from introducing the additional constraint on the infimum. Thus, \(\gamma(H)\leq\gamma(\Re(U^{\dagger}HU))\). ∎
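A numerical spot-check of this lemma is straightforward (a sketch with an arbitrary random Hermitian matrix; it assumes the ground-state has no exactly-zero entries, so that the diagonal phases are well defined).

```python
import numpy as np

def gap(M):
    evals = np.linalg.eigvalsh(M)
    return evals[1] - evals[0]

rng = np.random.default_rng(0)
n = 8
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (X + X.conj().T) / 2

phi0 = np.linalg.eigh(H)[1][:, 0]            # ground-state of H
U = np.diag(phi0 / np.abs(phi0))             # diagonal phase rotation from the lemma

H_rot = U.conj().T @ H @ U
assert gap(H) <= gap(H_rot.real) + 1e-10
```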
### Stoquastic Hamiltonians
Section2.1 guarantees us that, at least in the case of upper bounds, we hereafter need only consider \(\Re(U^{\dagger}HU)\). Hence, we no longer address the issue of upper bounding the gap of a Hermitian matrix, since the bound is implied by any bounds on real matrices. Although determining an appropriate \(U\) to actually perform the rotation in Section2.1 might be a hard problem in general,¹ there exist certain cases where this becomes relatively easy. One convenient way to describe these situations is through the _frustration index_ of the matrix \(\Theta=H/\abs{H}\) where, again, the ratio and absolute value are taken pointwise.
If we view \(\Theta\) as an adjacency matrix, as will be made precise in the following section, we can consider a cycle cover of \(\Theta\) given by the successor function \(\sigma:\intrange{1}{n}\longrightarrow\intrange{1}{n}\). Here, \(\sigma\) is just a permutation of \(\intrange{1}{n}\). Then, the sequence \(i\rightarrow\sigma(i)\rightarrow\sigma\cdot\sigma(i)\rightarrow\dots\to i\) is a cycle through \(\Theta\), which we refer to as \(c_{\sigma}(i)\). We call the set of all successor functions \(C=\{c_{\sigma}\}\).
For any \(1\leq i\leq n\), we define the signature of the cycle \(c_{\sigma}(i)\) as
\[{\mathrm{sig}(c_{\sigma}(i))=\prod_{k\in c_{\sigma}(i)}\left[-\Theta_{k,\sigma (k)}\right]=(-1)^{\abs{c_{\sigma}(i)}}\prod_{k\in c_{\sigma}(i)}\Theta_{k, \sigma(k)}.}\]
In analogy to the standard definition, we somewhat carelessly define the _frustration index_ of \(\Theta\) as the minimum number of elements of \(\Theta\) that need to be removed such that \(\mathrm{sig}(c_{\sigma}(i))\in\{0,1\}\) for all \(c_{\sigma}\in C\) and \(i\in\intrange{1}{n}\)[5, 36, 34]. This particular definition is clearly far from ideal, since complex phases imply that this is not a strict question of combinatorics, and we should prefer a functional definition similar to that of [34] in the future. Despite its failings, we can use this definition to define _stoquastic_ matrices. We call a matrix _stoquastic_ if it has frustration index \(0\). This definition of stoquastic diverges from much of the literature on the subject. (See, e.g. [9].) Nonetheless, it is a bit more descriptive and (potentially) avoids redefining well-known mathematical concepts.² We introduce this definition for two reasons: (1) because frustration index has been used to obtain better isoperimetric inequalities [36, 34], setting the stage for future work, and (2) because it extends our results to a broader class of matrices. Importantly, this property can be efficiently checked (at least in the dimension of the matrix), so that one can determine whether or not stoquastic spectral bounds apply even if one is unsure that a matrix is stoquastic. Thus, this definition makes the methods presented below easier to apply in many cases.
The unitary \(U\) that transforms \(H\) such that all off-diagonal terms of \(U^{\dagger}HU\) are nonpositive is immediate. First, for any cycle, we can decompose \(\sigma\) into paths \(\sigma_{1}\) and \(\sigma_{2}\):
\[1 =\prod_{k\in c_{\sigma}(i)}\left[-\Theta_{k,\sigma(k)}\right]\]
\[=\left(\prod_{k\in c_{\sigma_{1}}(i)}\left[-\Theta_{k,\sigma(k)} \right]\right)\left(\prod_{k\in c_{\sigma_{2}}(i)}\left[-\Theta_{k,\sigma(k)} \right]\right)\]
\[=\left(\prod_{k\in c_{\sigma_{1}}(i)}\left[-\Theta_{k,\sigma_{1}( k)}\right]\right)\left(\prod_{k\in c_{\sigma_{2}}^{-1}(i)}\left[-\Theta^{ \dagger}_{k,\sigma_{2}(k)}\right]\right)\]
where the final line follows because, since \(\Theta\) is Hermitian, every point in a cycle forms its own cycle. In other words, beginning at \(i\), the partial product \(\prod_{k\in c_{\sigma}(i)}^{\sigma^{j}(i)}(-\Theta_{k,\sigma(k)})\) is entirely independent of the particular path chosen. This immediately implies the well-known fact that the frustration index of a real, nonnegative \(\Theta\) is \(0\) if and only if \(\Theta\) describes a bipartite graph. The path-independence above also implies that one can explicitly construct the appropriate unitary \(U\) by choosing a vertex, say \(i\), and then, for all \(j\) in some cycle with \(i\), setting \(U_{jj}=\prod_{k=i}^{\sigma^{-1}(j)}(-\Theta_{k,\sigma(k)})U_{ii}\). Since every pair of vertices forms a simple cycle, this reduces to the constraint that, provided \(\Theta_{ij}\neq 0\), \(U_{jj}=-\Theta_{ij}U_{ii}\). Thus, we know that this definition of \(U\) is consistent and unique up to a global phase. Furthermore, it clearly performs the appropriate transformation. Thus, if we satisfy stoquasticity, we know _a priori_ that \(U^{\dagger}HU\) has all nonpositive off-diagonal elements. More importantly, because \(U\) is diagonal, we do not need to do the unitary transformation; we can simply replace each off-diagonal term \(w_{uv}\) with \(-\abs{w_{uv}}\) and obtain the resulting matrix.
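As an illustration of this construction, the following Python sketch (not part of the formal development; the function name and example matrix are mine) propagates signs by breadth-first search to build a diagonal \(\pm 1\) matrix \(U\) for a real symmetric \(H\), and reports failure when it encounters a frustrated cycle.

```
import numpy as np
from collections import deque

def gauge_to_nonpositive(H, tol=1e-12):
    # Attempt to find a diagonal +/-1 matrix U such that U @ H @ U has
    # nonpositive off-diagonal entries, for real symmetric H.  Fixing one
    # vertex per connected component determines U by path-independence.
    n = H.shape[0]
    s = np.zeros(n)                       # diagonal of U; 0 means unassigned
    for root in range(n):
        if s[root] != 0:
            continue
        s[root] = 1.0
        queue = deque([root])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if i == j or abs(H[i, j]) < tol:
                    continue
                want = -np.sign(H[i, j]) * s[i]   # forces s_i s_j H_ij <= 0
                if s[j] == 0:
                    s[j] = want
                    queue.append(j)
                elif s[j] != want:                # frustrated cycle
                    return None, False
    return np.diag(s), True

# A 3-cycle whose signs can be gauged away:
H = np.array([[0., 1., -1.],
              [1., 0.,  1.],
              [-1., 1., 0.]])
U, ok = gauge_to_nonpositive(H)
Ht = U @ H @ U
print(np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(Ht)))   # spectrum preserved
print(bool((Ht - np.diag(np.diag(Ht)) <= 1e-12).all()))             # off-diagonals nonpositive
```

Because \(U\) is diagonal, the transformation amounts to flipping the signs of selected rows and columns, which is why the procedure above never needs to form a dense unitary.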
Despite the utility of this condition in producing bounds for a larger class of matrices, in what follows we assume the problem has been reduced such that \(H\mapsto U^{\dagger}HU\), guaranteeing that all off-diagonal terms are nonpositive and the ground-state \(\phi_{0}\geq 0\). This allows for a simpler presentation.
### Graph Laplacians
We wish to characterize \(\Re(U^{\dagger}HU)\) in terms of graph Laplacians. Although the standard combinatorial and normalized graph Laplacian are defined such that all diagonal elements are nonnegative and all off-diagonal elements are nonpositive, we can relax the latter constraint and consider _signed_ Laplacians. For our purposes, the only difference between a signed and standard Laplacian is that signed Laplacians have no constraint on the non-positivity of their off-diagonal terms; however, our definitions are somewhat atypical [6].³
We begin by considering a connected weighted graph \(G=(V,E)\) with weight function \(w:V\times V\longrightarrow\mathbb{R}\), where we require \(w(u,v)=0\) whenever \((u,v)\notin E\). Additionally, we require that \(w(u,v)=w(v,u)\), i.e., that \(G\) is undirected.⁴ For ease of presentation, we will also write the arguments of \(w\) as subscripts, so that \(w_{uv}=w(u,v)\). Since we are allowing the possibility of negative edge weights, we introduce the notation \(E^{+}=\left\{\{u,v\}\in E|w_{uv}>0\right\}\) for the set of all positive-weighted edges and \(E^{-}\) for the set of all negative-weighted edges. We also define \(G^{\pm}=(V,E^{\pm})\) and note that \(G^{+}\subseteq G\). Now we can include some standard definitions for the combinatorial and normalized Laplacians, keeping in mind that edge weights may be negative.
#### 2.3.1 The combinatorial Laplacian
To define the combinatorial Laplacian for a graph \(G\), we first let the degree of a vertex \(u\in V\) be \(d_{u}=\sum_{v}w_{uv}\). Then, the combinatorial graph Laplacian \(L\) is
\[L(u,v)=\begin{cases}d_{u}&u=v\\ -w_{uv}&u\neq v\end{cases}\]
For any function \(f:V\longrightarrow\mathbb{R}\) (or \(f:V\longrightarrow\mathbb{C}\)), one can easily see that
\[Lf(u)=\sum_{v}w_{uv}[f(u)-f(v)]\]
where we have adopted the standard convention that \(Lf(u)=[Lf](u)\). (This is just to say that \(Lf\neq L\circ f\), since \(L:\mathbb{R}^{\lvert V\rvert}\longrightarrow\mathbb{R}^{\lvert V\rvert}\).) One can easily argue that if \(f\) is an eigenfunction of \(L\), then \(f\) satisfies
\[\lambda f(u)=\sum_{v}w_{uv}[f(u)-f(v)].\]
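As a quick numerical sanity check of the identity above, the following Python sketch (my own, purely illustrative) builds \(L=D-W\) for a random weighted graph and verifies \(Lf(u)=\sum_{v}w_{uv}[f(u)-f(v)]\) entrywise.

```
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.triu(rng.random((n, n)), 1)
W = A + A.T                                   # random symmetric edge weights
L = np.diag(W.sum(axis=1)) - W                # combinatorial Laplacian L = D - W

f = rng.standard_normal(n)
lhs = L @ f
rhs = np.array([sum(W[u, v] * (f[u] - f[v]) for v in range(n)) for u in range(n)])
print(np.allclose(lhs, rhs))                  # True
```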
Now, let \(W:V\longrightarrow\mathbb{R}\). We can represent \(W\) as an \(n\times n\) diagonal matrix and write \(W_{u}\equiv W_{uu}\). Then, if \(f\) is an eigenfunction of \(L+W\), \(f\) satisfies
\[(\lambda-W_{u})f(u)=\sum_{v}w_{uv}[f(u)-f(v)].\] (7)
Recalling the definition of the Rayleigh quotient, \(R(L+W,f)\), we have that the eigenvalues of \(L+W\) satisfy
\[\lambda_{i}=\inf_{\begin{subarray}{c}\tiny{f\perp T_{i-1}}\end{subarray}}\frac {\sum_{\{u,v\}\in E(G)}w_{uv}[f(u)-f(v)]^{2}+\sum_{u}W_{u}f^{2}(u)}{\sum_{u}f^ {2}(u)}\] (8)
where \(T_{i}\) is the subspace spanned by the functions \(f_{j}\) achieving \(\lambda_{j}\) for \(0\leq j\leq i\). This equation actually defines the _Dirichlet eigenvalues_ of the graph \(G\) embedded in an appropriate host graph. In the following subsection, I will make this mapping precise.
#### 2.3.2 Dirichlet eigenvalues
For a given subgraph \(S\subseteq G\), we can consider eigenfunctions of \(S\) under boundary constraints and their corresponding eigenvalues. To proceed, we define the edge and vertex boundary sets
1. \(\partial S=\left\{\{u,v\}\in E(G)\;|\;u\in V(S),v\notin V(S)\right\}\) and
2. \(\delta S=\left\{u\in V(G\setminus S)\;|\;\{u,v\}\in\partial S\;\text{for some} \;v\in V\right\}\).
Any function \(f:S\longrightarrow\mathbb{R}\) can be extended to a function \(f:S\cup\delta S\longrightarrow\mathbb{R}\) with the Dirichlet boundary condition \(f(u\in\delta S)=0\) or \(\restr{f}{\delta S}=0\). _Dirichlet eigenvalues_ are the eigenvalues of \(S\) under this boundary constraint. To be precise,
\[\lambda_{i}^{(D)}=\inf_{\tiny{\begin{subarray}{c}f\perp T^{(D)}_{i-1}\\ \restr{f}{\delta S}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(S)\cup\partial S }w_{uv}[f(u)-f(v)]^{2}}{\sum_{u\in V(S)}f^{2}(u)}\] (9)
where \(T^{(D)}_{i}\) is the subspace spanned by the functions \(f_{j}\) achieving \(\lambda^{(D)}_{j}\) for \(0\leq j\leq i\).
Now, recall that in the previous section we had a graph \(G\) with weight function \(w:E(G)\longrightarrow\mathbb{R}\). We embed this graph in a host graph \(G^{\prime}\supseteq G\) and extend the function \(w:E(G)\cup\partial G\longrightarrow\mathbb{R}\) by requiring that \(W_{u}=\sum_{v\in\delta G}w_{uv}\). That is, if the degree of vertex \(u\) in \(G\) is \(d_{u}\), then the degree of vertex \(u\) in \(G^{\prime}\) is \(d_{u}+W_{u}\). (See Figure1.) Now, one can explicitly impose the Dirichlet constraint on Eq.9 and recover Eq.8:
\[\lambda_{i}^{(D)} =\inf_{\tiny{\begin{subarray}{c}f\perp T^{(D)}_{i-1}\\ \restr{f}{\delta G}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(G)\cup\partial G }w_{uv}[f(u)-f(v)]^{2}}{\sum_{u\in V(G)}f^{2}(u)}\]
\[=\inf_{\tiny{f\perp T_{i-1}}}\frac{\sum_{\{u,v\}\in E(G)}w_{uv}[f (u)-f(v)]^{2}+\sum_{u\in V(G)}W_{u}f^{2}(u)}{\sum_{u\in V(G)}f^{2}(u)}.\]
This embedding identity is often a useful way to geometrize a physical potential and both descriptions can be useful depending upon one’s goals.
[Figure 1]
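The embedding can also be checked numerically. The Python sketch below is one hypothetical realization of mine, using a single boundary vertex \(b\) that absorbs all of the potential via edge weights \(w_{ub}=W_{u}\); it confirms that the Dirichlet spectrum of \(G\) inside the host graph coincides with the spectrum of \(L+W\).

```
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = np.triu(rng.random((n, n)), 1)
Wg = A + A.T                                   # edge weights of G
Wpot = rng.random(n)                           # the potential W_u >= 0
L = np.diag(Wg.sum(axis=1)) - Wg

# Host graph G': one boundary vertex b with w(u, b) = W_u
Whost = np.zeros((n + 1, n + 1))
Whost[:n, :n] = Wg
Whost[:n, n] = Wpot
Whost[n, :n] = Wpot
Lhost = np.diag(Whost.sum(axis=1)) - Whost

# The Dirichlet condition f(b) = 0 restricts the quadratic form to the interior block
dirichlet_spectrum = np.linalg.eigvalsh(Lhost[:n, :n])
perturbed_spectrum = np.linalg.eigvalsh(L + np.diag(Wpot))
print(np.allclose(dirichlet_spectrum, perturbed_spectrum))   # True
```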
#### 2.3.3 The normalized Laplacian
Although the expressions in Section2.3.2 are sufficient to completely characterize all real matrices, we can derive a more elegant bound by perturbing the normalized Laplacian rather than the combinatorial Laplacian. We let \(D=\diag{(d_{u})_{u}}\) and define the symmetric normalized Laplacian as \(\mathcal{L}=D^{-1/2}LD^{-1/2}\). Explicitly, this can be written
\[\mathcal{L}(u,v)=\begin{cases}1&u=v\\ -\frac{w_{uv}}{\sqrt{d_{u}d_{v}}}&u\neq v.\end{cases}\]
Similar to the combinatorial Laplacian, for any function \(f:V\longrightarrow\mathbb{R}\), the operator \(\mathcal{L}\) satisfies
\[\mathcal{L}f(u)=\frac{1}{\sqrt{d_{u}}}\sum_{v}w_{uv}\left[\frac{f(u)}{\sqrt{d_ {u}}}-\frac{f(v)}{\sqrt{d_{v}}}\right]\]
and eigenfunctions \(f\) of \(\mathcal{L}+W\) satisfy
\[(\lambda-W_{u})f(u) =\frac{1}{\sqrt{d_{u}}}\sum_{v}w_{uv}\left[\frac{f(u)}{\sqrt{d_{u }}}-\frac{f(v)}{\sqrt{d_{v}}}\right].\]
Letting \(\phi=f/\sqrt{d}\),
\[(\lambda-W_{u})\phi(u)d_{u} =\sum_{v}w_{uv}\left[\phi(u)-\phi(v)\right].\] (10)
Our treatment of Eqs.10 and 7 can be unified by considering equations of the form
\[L_{q}\phi(u)=(\lambda-W_{u})q_{u}\phi(u)=\sum_{v}w_{uv}\left[\phi(u)-\phi(v)\right]\] (11)
where taking \(q_{u}=d_{u}\) reproduces Eq.10 and \(q_{u}=1\) reproduces Eq.7.
Hence, eigenvalues of either Laplacian are given by their respective Rayleigh quotients,
\[\lambda_{i}=\inf_{\tiny{f\perp qT_{i-1}}}\frac{\sum_{\{u,v\}}w_{uv}[f(u)-f(v)] ^{2}}{\sum_{u}q_{u}f^{2}(u)}\] (12)
where \(T_{i}\) is the subspace spanned by the functions \(f_{j}\) achieving \(\lambda_{j}\) for \(0\leq j\leq i\). Similarly, for either Laplacian perturbed by a diagonal matrix \(W\), the eigenvalues are given by
\[\lambda_{i}=\inf_{\tiny{f\perp qT_{i-1}}}\frac{\sum_{\{u,v\}}w_{uv}[f(u)-f(v)] ^{2}+\sum_{u}q_{u}W_{u}f^{2}(u)}{\sum_{u}q_{u}f^{2}(u)}.\] (13)
This can once again be seen as Dirichlet eigenvalues as in Section2.3.2; however, one must be careful, as the expression arising from Eq.13 for normalized Laplacians diverges from the correct expression for Dirichlet eigenvalues of the host graph.
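To see the unified form in action, the following Python sketch (an illustrative check of mine, with \(W=0\)) verifies that every eigenpair \((\lambda,f)\) of \(\mathcal{L}\) yields, via \(\phi=f/\sqrt{d}\), a solution of Eq.11 with \(q_{u}=d_{u}\).

```
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = np.triu(rng.random((n, n)), 1)
Wg = A + A.T
d = Wg.sum(axis=1)
L = np.diag(d) - Wg
Lnorm = np.diag(d**-0.5) @ L @ np.diag(d**-0.5)        # normalized Laplacian

lam, F = np.linalg.eigh(Lnorm)
Phi = np.diag(d**-0.5) @ F                              # phi = f / sqrt(d), columnwise

# Eq. (11) with q_u = d_u and W = 0:  L phi = lambda * D * phi
print(np.allclose(L @ Phi, (np.diag(d) @ Phi) * lam))   # True
```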
### The spectral gap
Now that we have a characterization of the Dirichlet eigenvalues, we are prepared to handle the spectral gap of the operator \(L_{q}+W\). Suppose that \(\lambda_{0}\) has eigenfunction \(\phi\geq 0\). Then, we can characterize the spectral gap of \(L_{q}+W\) as follows.
\[\gamma=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}}w_{uv}\phi(u)\phi(v)[ g(u)-g(v)]^{2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)}\]
Proof.: Before proceeding, we need the standard fact that for any \(g:V\longrightarrow\mathbb{R}\),
\[\sum_{\{u,v\}}w_{uv}\left[g(u)\phi(u)-g(v)\phi(v)\right]^{2}=\sum_{u}(\lambda_ {0}-W_{u})q_{u}g^{2}(u)\phi^{2}(u)+\sum_{\{u,v\}}w_{uv}\left[g(u)-g(v)\right]^ {2}\phi(u)\phi(v).\] (14)
To see this, begin with Eq.7 and write
\[\sum_{u}(\lambda_{0}-W_{u})q_{u}g^{2}(u)\phi^{2}(u)=\sum_{u}g^{2}(u)\sum_{v}w_ {uv}\phi(u)[\phi(u)-\phi(v)]\]
\[=\sum_{u}\left[d_{u}g^{2}(u)\phi^{2}(u)-\sum_{v}w_{uv}g^{2}(u)\phi(u)\phi(v)\right]\]
\[=\sum_{\{u,v\}}w_{uv}\left(g^{2}(u)\phi^{2}(u)+g^{2}(v)\phi^{2}(v)-\left[g^{2} (u)+g^{2}(v)\right]\phi(u)\phi(v)\right)\]
\[=\sum_{\{u,v\}}w_{uv}\left(\left[g(u)\phi(u)-g(v)\phi(v)\right]^{2}-\left[g^{2 }(u)+g^{2}(v)-2g(u)g(v)\right]\phi(u)\phi(v)\right)\]
\[=\sum_{\{u,v\}}w_{uv}\left(\left[g(u)\phi(u)-g(v)\phi(v)\right]^{2}-\left[g(u) -g(v)\right]^{2}\phi(u)\phi(v)\right).\]
With this in hand, we turn to \(\lambda_{1}\).
\[\lambda_{1} =\inf_{\tiny{f\perp q\phi}}\frac{\sum_{\{u,v\}}w_{uv}[f(u)-f(v)]^ {2}+\sum_{u}q_{u}W_{u}f^{2}(u)}{\sum_{u}q_{u}f^{2}(u)}\]
\[=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}}w_{uv}[g(u) \phi(u)-g(v)\phi(v)]^{2}+\sum_{u}q_{u}W_{u}g^{2}(u)\phi^{2}(u)}{\sum_{u}q_{u}g ^{2}(u)\phi^{2}(u)}\]
\[=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}}w_{uv}\phi(u) \phi(v)[g(u)-g(v)]^{2}+\lambda_{0}\sum_{u}q_{u}\phi^{2}(u)g^{2}(u)}{\sum_{u}q_ {u}g^{2}(u)\phi^{2}(u)}\]
\[=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}}w_{uv}\phi(u) \phi(v)[g(u)-g(v)]^{2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)}+\lambda_{0}.\]
Thus, we have that
\[\gamma=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}}w_{uv} \phi(u)\phi(v)[g(u)-g(v)]^{2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)}.\]
∎
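The characterization above is easy to verify numerically for small stoquastic matrices. In the Python sketch below (my own check, with \(q_{u}=1\)), the change of variables \(f=g\phi\) is realized by conjugating the \(\omega\)-weighted Laplacian with \(\mathrm{diag}(1/\phi)\); the second-lowest eigenvalue of the resulting matrix should reproduce the gap.

```
import numpy as np

rng = np.random.default_rng(3)
n = 7
A = np.triu(rng.random((n, n)), 1)
Wg = A + A.T                                                # positive edge weights
H = np.diag(Wg.sum(axis=1)) - Wg + np.diag(rng.random(n))   # stoquastic L + W

lam, vec = np.linalg.eigh(H)
gap = lam[1] - lam[0]
phi = np.abs(vec[:, 0])                                     # Perron ground state, phi > 0

omega = Wg * np.outer(phi, phi)                             # omega_uv = w_uv phi(u) phi(v)
Lomega = np.diag(omega.sum(axis=1)) - omega
M = np.diag(1.0 / phi) @ Lomega @ np.diag(1.0 / phi)        # Rayleigh quotient w.r.t. diag(phi^2)

mu = np.linalg.eigvalsh(M)
print(np.isclose(mu[0], 0.0, atol=1e-9), np.isclose(mu[1], gap))   # True True
```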
## 3 Warm-up: Cheeger upper bounds
### The Cheeger constant
The Cheeger constant of a graph describes the graph’s isoperimetric ratio, or the surface area to volume ratio of any subgraph. Noting that Section2.4 gives an expression for the gap that is equivalent to the Rayleigh quotient of a weighted graph with weights \(\omega_{uv}=w_{uv}\phi(u)\phi(v)\), we use \(\omega\) as a modified weight function for defining both area and volume. That is, for a subgraph \(S\subseteq G\) we let
1. \(\overline{S}=G\setminus S\),
2. the boundary vertices \(\delta S=\{u\in\overline{S}\;|\;u\sim v\in S\},\)
3. the surface area \(\abs{\partial S}=\sum_{u\in V(S),v\in\delta S}w_{uv}\phi(u)\phi(v)\), and
4. the volume \(\vol(S)=\sum_{u\in S}q_{u}\phi^{2}(u)\).
Then, we reproduce the weighted Cheeger constant of [15]
\[h=\min_{S\subset G}\frac{\abs{\partial S}}{\min_{S^{\prime}\in\{S,\overline{S} \}}\vol(S^{\prime})}.\] (15)
Note that when \(w_{uv}=1\) for all \(\{u,v\}\in E(G)\), \(q_{u}=1\), and \(\phi\) is a nonzero constant function, this reproduces the ratio
\[\frac{\text{\# edges in $\partial S$}}{\min\left\{\text{\# vertices in $S$},\text{\# vertices in $\overline{S}$}\right\}}\]
which is the standard Cheeger constant of an unweighted graph.
### The upper bound
Section2.4 instructs us that we can use any function \(g\perp q\phi^{2}\) to upper bound the gap and Section2.1 allows us to ignore the case that \(H\) is not real. Thus, the upper bound derives from simply choosing an appropriate trial function in Section2.4. For any \(H=L+W\) with ground-state \(\phi\) corresponding to weighted Cheeger constant \(h\)
\[\gamma\leq 2h.\]
Proof.: For \(S\) achieving the infimum in Eq.15, we put the function
\[g(u)=\begin{cases}\vol(\overline{S})&u\in S\\ -\vol(S)&u\notin S.\end{cases}\]
into Section2.4. Without loss of generality, we assume that \(\vol(S)\leq\vol(\overline{S})\) and find that
\[\gamma \leq\frac{\sum_{\{u,v\}\in E(G)}w_{uv}\phi(u)\phi(v)[g(u)-g(v)]^{ 2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)}\]
\[=\frac{\left(\sum_{\{u,v\}\in\partial S}w_{uv}\phi(u)\phi(v) \right)[\vol(S)+\vol(\overline{S})]^{2}}{\vol(\overline{S})^{2}\sum_{u\in V(S) }q_{u}\phi^{2}(u)+\vol(S)^{2}\sum_{u\in V(\overline{S})}q_{u}\phi^{2}(u)}\]
\[\leq\frac{(h\vol(S))[\vol(S)+\vol(\overline{S})]^{2}}{\vol(S)[ \vol(S)^{2}+\vol(\overline{S})^{2}]}\]
\[=h\frac{[\vol(S)+\vol(\overline{S})]^{2}}{\vol(S)^{2}+\vol( \overline{S})^{2}}\]
\[\leq 2h.\]
∎
Section3.2 also holds for all Hermitian matrices by Section2.1.
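A brute-force check of the upper bound, in Python (my own illustrative sketch): enumerate every cut of a small stoquastic \(H=L+W\), compute the weighted Cheeger constant of Eq.15, and confirm \(\gamma\leq 2h\). The exhaustive enumeration is purely illustrative; it scales exponentially.

```
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n = 8
A = np.triu(rng.random((n, n)), 1)
Wg = A + A.T                                               # positive edge weights
H = np.diag(Wg.sum(axis=1)) - Wg + np.diag(rng.random(n))  # stoquastic L + W

lam, vec = np.linalg.eigh(H)
gap = lam[1] - lam[0]
phi = np.abs(vec[:, 0])                                    # ground state

omega = Wg * np.outer(phi, phi)                            # omega_uv = w_uv phi(u) phi(v)
vol = phi**2                                               # q_u = 1

h = np.inf
for k in range(1, n):
    for S in combinations(range(n), k):
        S = list(S)
        Sbar = [u for u in range(n) if u not in S]
        area = omega[np.ix_(S, Sbar)].sum()
        h = min(h, area / min(vol[S].sum(), vol[Sbar].sum()))

print(gap <= 2 * h + 1e-12)                                # True (Section 3.2)
```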
## 4 Removing negative edge weights
In this section, I provide a theorem relating the spectrum of the graph \(G=(V,E)\) with edge weights \(w:E\longrightarrow\mathbb{R}\) to the graph \(G^{+}=(V,E\setminus E^{-})\). For edges with negative edge weights and endpoints \((x,y)\), we consider the set of paths from \(x\) to \(y\) through \(G^{+}\), denoted \(P(x,y)\). Thus, a path from \(x\) to \(y\) is a member of the set \(P(x,y)\). The strategy behind this theorem is to consider an edge \(\{u,v\}\) with weight \(w_{uv}<0\) as in Fig.1(a). Then, we find some path connecting \(u\) and \(v\) that traverses \(G^{+}\) and route the negative weights along this path. Routing is not an uncommon approach (see, e.g. [23]) and has a lot in common with the method of proving Poincaré inequalities [13].
[Figure 2]
Suppose that for a graph \(S\subseteq G\), \(S^{+}\) is connected and there exists an \(\alpha:\bigcup_{(x,y)\in E(S)}P(x,y)\longrightarrow[0,1]\) such that for any \(\{u,v\}\in E(S)\),
1. \(\sum_{\tiny{p\in P(u,v)}}\alpha_{p}=1\);
2. and \(0<\omega_{uv}=w_{uv}-\sum_{\tiny{w_{xy}<0}}\sum_{\tiny{\begin{subarray}{c}p\in P(x,y)\\ (u,v)\in p\end{subarray}}}\abs{w_{xy}}\ell_{p}\alpha_{p}\), where \(\ell_{p}=\abs{p}\) denotes the length of the path \(p\),
then, for each \(i\), there exists an \(\widetilde{\omega}\geq\omega\) such that
\[\lambda_{i}^{(D)} =\inf_{\tiny{\begin{subarray}{c}f\perp qT^{(D)}_{i-1}\\ \restr{f}{\delta S}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(S^{+})\cup \partial S}\widetilde{\omega}_{uv}[f(u)-f(v)]^{2}}{\sum_{u\in V(S)}q_{u}f^{2}( u)}.\]
and \(\lambda^{(D)}_{0}\) is unique.
Proof.: Consider
\[\lambda_{i}^{(D)} =\inf_{\tiny{\begin{subarray}{c}f\perp qT^{(D)}_{i-1}\\ \restr{f}{\delta S}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(S)\cup\partial S }w_{uv}[f(u)-f(v)]^{2}}{\sum_{u\in V(S)}q_{u}f^{2}(u)}\]
\[\geq\inf_{\tiny{\begin{subarray}{c}f\perp qT^{(D)}_{i-1}\\ \restr{f}{\delta S}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(S^{+})\cup \partial S}\left(w_{uv}-\sum_{\tiny{w_{xy}<0}}\sum_{\tiny{\begin{subarray}{c}p \in P(x,y)\\ (u,v)\in p\end{subarray}}}\abs{w_{xy}}\ell_{p}\alpha_{p}\right)[f(u)-f(v)]^{2} }{\sum_{u\in V(S)}q_{u}f^{2}(u)}\]
\[=\inf_{\tiny{\begin{subarray}{c}f\perp qT^{(D)}_{i-1}\\ \restr{f}{\delta S}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(S^{+})\cup \partial S}\omega_{uv}[f(u)-f(v)]^{2}}{\sum_{u\in V(S)}q_{u}f^{2}(u)}\]
where we have applied Jensen’s inequality. Thus, there exists some \(\widetilde{\omega}\geq\omega\) such that
\[\lambda_{i}^{(D)} =\inf_{\tiny{\begin{subarray}{c}f\perp qT^{(D)}_{i-1}\\ \restr{f}{\delta S}=0\end{subarray}}}\frac{\sum_{\{u,v\}\in E(S^{+})\cup \partial S}\widetilde{\omega}_{uv}[f(u)-f(v)]^{2}}{\sum_{u}q_{u}f^{2}(u)}.\]
∎
Furthermore, since this is just the Rayleigh quotient corresponding to the Dirichlet eigenvalues of a connected graph, the Perron-Frobenius theorem applies and we also have that \(\lambda_{0}^{(D)}\) is unique. Section4 also applies to the characterization of \(\gamma\) in Section2.4: Suppose that for a graph \(G=(V,E)\), \(G^{+}\) is connected and there exists an \(\alpha:\bigcup_{(x,y)\in E}P(x,y)\longrightarrow[0,1]\) such that for any \((u,v)\in E\)
1. \(\sum_{\tiny{p\in P(u,v)}}\alpha_{p}=1\) and
2. \(\widetilde{\omega}_{uv}>w_{uv}\phi(u)\phi(v)-\sum_{\tiny{w_{xy}<0 }}\sum_{\tiny{\begin{subarray}{c}p\in P(x,y)\\ \{u,v\}\in p\end{subarray}}}\abs{w_{xy}}\phi(x)\phi(y)\ell_{p}\alpha_{p}\),
where \(\ell_{p}=\abs{p}\) is the length of path \(p\). Then,
\[\gamma=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}\in E(G^{+})} \widetilde{\omega}_{uv}[g(u)-g(v)]^{2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)}.\] (16)
The unsightliness of \(\widetilde{\omega}\) is not lost on me. Nonetheless, the expression is quite intuitive: a potentially useful Cheeger-type bound exists whenever we can redistribute negatively weighted edges along paths through \(G^{+}\) connecting them. I present this general form because, in practical situations, it is unlikely that we will be faced with something that can be easily routed along a single path. A simpler statement is easy to derive by choosing unique paths satisfying the constraints of Section4; Section6.1 provides one such simplification.
Although there exist graphs with cuts for which condition 2 above cannot be achieved, in many cases this is handled by the unitary rotation considered in Section2.1.
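A minimal Python sketch of the routing idea (my own example, with no potential, so the Dirichlet eigenvalues reduce to the ordinary Laplacian spectrum): a triangle carries one negative edge, whose weight is routed along the unique positive path of length two, and the spectrum of the original graph dominates that of the rerouted one.

```
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# Triangle with one negative edge {0,1}; G+ is the path 0-2-1.
W = np.array([[0., -0.2, 1.],
              [-0.2, 0., 1.],
              [1., 1., 0.]])

# Route |w_01| along p = (0,2),(2,1): ell_p = 2, alpha_p = 1, so each path
# edge keeps omega = 1 - 0.2 * 2 = 0.6 > 0, as required by condition 2.
Womega = np.array([[0., 0., 0.6],
                   [0., 0., 0.6],
                   [0.6, 0.6, 0.]])

lam_G = np.linalg.eigvalsh(laplacian(W))
lam_routed = np.linalg.eigvalsh(laplacian(Womega))
print(bool(np.all(lam_G >= lam_routed - 1e-12)))   # True: eigenvalue-by-eigenvalue domination
```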
## 5 Two Dirichlet Cheeger inequalities
In this section, I present Cheeger inequalities using a technique similar to [15]. Unlike [15], we wish to construct an inequality for as broad a class of matrices as possible. A theorem similar to Theorem 1 was originally derived and presented by me in [1]; however, at that time, I did not realize that it could be significantly strengthened to the more useful one below. First, we need to bound the contribution of the term \(W\) to the eigenvalues \(\lambda_{i}\). Because of the results of Section4, we only need to consider the case of nonnegative edge weights.
For a graph \(G=(V,E)\), suppose \(\phi:V\longrightarrow\mathbb{R}\) satisfies
\[\left(\lambda-W_{u}\right)q_{u}\phi(u)=\sum_{v\sim u}w_{uv}\left[\phi(u)-\phi( v)\right].\] (17)
for \(w>0\). Then,
\[\lambda \geq\max_{S^{\prime}\in\{S,V\setminus S\}}\left(\frac{\sum_{u\in S ^{\prime}}\left(W_{u}+\lambda_{0}^{D}(S^{\prime})\right)q_{u}\phi(u)^{2}}{\sum _{u\in S^{\prime}}q_{u}\phi(u)^{2}}\right)\]
for \(S=\left\{u\in V\;|\;\phi(u)\geq 0\right\}\) and \(\lambda_{0}^{D}(S^{\prime})\) the lowest Dirichlet eigenvalue of \(S^{\prime}\subseteq G\).
Proof.: Let \(S^{\prime}\in\{S,V\setminus S\}\) be the set achieving the maximum above. Now,
\[\sum_{u\in S^{\prime}}(\lambda-W_{u})q_{u}\phi(u)^{2}=\sum_{u\in S^{\prime}} \sum_{v\sim u}w_{uv}(\phi(u)-\phi(v))\phi(u)\]
\[=\sum_{\{u,v\}\in E(S^{\prime})}w_{uv}(\phi(u)-\phi(v))^{2}+\sum_{ \begin{subarray}{c}\{u,v\}\in\partial S^{\prime}\\ u\in S^{\prime}\end{subarray}}w_{uv}(\phi(u)-\phi(v))\phi(u)\]
\[\geq\lambda_{0}^{D}(S^{\prime})\sum_{u\in S^{\prime}}q_{u}\phi^{2}(u)-\sum_{ \begin{subarray}{c}\{u,v\}\in\partial S^{\prime}\\ u\in S^{\prime}\end{subarray}}w_{uv}\phi(v)\phi(u)\]
\[\geq\lambda_{0}^{D}(S^{\prime})\sum_{u\in S^{\prime}}q_{u}\phi^{2}(u).\]
Above, the first inequality follows from the definition of the Dirichlet eigenvalues and the second because \({\phi(S^{\prime})\phi(\overline{S^{\prime}})\leq 0}\). ∎
For a graph \(G=(V,E)\), suppose \(\phi:V\longrightarrow\mathbb{R}\) satisfies
\[\left(\lambda-W_{u}\right)q_{u}\phi(u)=\sum_{v\sim u}w_{uv}\left[\phi(u)-\phi( v)\right].\] (18)
for \(w>0\). Then,
\[\lambda \geq\max_{S^{\prime}\in\{S,V\setminus S\}}\left(\frac{\sum_{u\in S ^{\prime}}W_{u}q_{u}\phi(u)^{2}}{\sum_{u\in S^{\prime}}q_{u}\phi(u)^{2}}\right)\]
for \(S=\left\{u\in V\;|\;\phi(u)\geq 0\right\}\).
Section5 allows us to derive our primary Cheeger inequality, which generalizes that of [15]:
Suppose \(\phi_{i}:V\longrightarrow\mathbb{R}\) satisfy
\[\left[\lambda_{i}-W_{u}\right]q_{u}\phi_{i}(u)=\sum_{v\sim u}w_{uv}\left[\phi_ {i}(u)-\phi_{i}(v)\right]\] (19)
and let \(\gamma=\lambda_{1}-\lambda_{0}\). Then,
\[\gamma\geq\sqrt{h^{2}+Q^{2}}-Q\]
where
\[Q=\frac{\sum_{u\in S}d_{u}\phi_{1}^{2}(u)}{\sum_{u\in S}q_{u}\phi_{1}^{2}(u)}\]
and \(S=\left\{u\in V\;|\;\phi_{1}(u)\geq 0\right\}\).
Proof.: For a particular vertex \(u_{0}\), we begin by considering the one-parameter family
\[f_{\epsilon}(u)=\begin{cases}f(u_{0})+\epsilon\vol\left(G\setminus\{u_{0}\} \right)&u=u_{0}\\ f(u)-\epsilon q_{u_{0}}\phi_{0}^{2}(u_{0})&\text{otherwise}\end{cases}\]
where \(f\) achieves the infimum in Eq.16. Clearly, \(f_{\epsilon}\) satisfies \(f_{\epsilon}\perp q\phi_{0}^{2}\). Then, we introduce this into the Rayleigh quotient \(R(f_{\epsilon})\) and note that \(\frac{d}{d\epsilon}R(f_{\epsilon})|_{\epsilon=0}=0\)⁵
\[0=\restr{\frac{dR(f_{\epsilon})}{d\epsilon}}{\epsilon=0}=\frac{d}{d\epsilon}\left[\frac{\sum_{\{u,v\}}w_{uv}\phi_{0}(u)\phi_{0}(v)[f_{\epsilon}(u)-f_{\epsilon}(v)]^{2}}{\sum_{u}q_{u}f_{\epsilon}^{2}(u)\phi_{0}^{2}(u)}\right]_{\epsilon=0}\]
\[=\frac{d}{d\epsilon}\left[\frac{\sum_{\begin{subarray}{c}\{u,v\}\\ u,v\neq u_{0}\end{subarray}}\omega_{uv}[f(u)-f(v)]^{2}+\sum_{u\neq u_{0}}\omega_{u_{0}u}\left(f(u_{0})-f(u)+\epsilon\vol(G)\right)^{2}}{\sum_{u\neq u_{0}}q_{u}\left(f(u)-\epsilon q_{u_{0}}\phi_{0}^{2}(u_{0})\right)^{2}\phi_{0}^{2}(u)+q_{u_{0}}\left(f(u_{0})+\epsilon\vol(G\setminus\{u_{0}\})\right)^{2}\phi_{0}^{2}(u_{0})}\right]_{\epsilon=0}\]
\[=\frac{2\sum_{u\neq u_{0}}\omega_{u_{0}u}\left(f(u_{0})-f(u)\right)\vol(G)}{\sum_{u}q_{u}f^{2}(u)\phi_{0}^{2}(u)}-2R(f)q_{u_{0}}\phi_{0}^{2}(u_{0})\left(\frac{f(u_{0})\vol\left(G\setminus\{u_{0}\}\right)-\sum_{u\neq u_{0}}q_{u}f(u)\phi_{0}^{2}(u)}{\sum_{u}q_{u}f^{2}(u)\phi_{0}^{2}(u)}\right).\]
Dropping the overall positive factor of \(2/\sum_{u}q_{u}f^{2}(u)\phi_{0}^{2}(u)\) and using \(R(f)=\gamma\) together with \(f\perp q\phi_{0}^{2}\), this becomes
\[0=\sum_{u\neq u_{0}}\omega_{u_{0}u}\left(f(u_{0})-f(u)\right)\vol(G)-\gamma q_{u_{0}}\phi_{0}^{2}(u_{0})\left(f(u_{0})\vol(G\setminus\{u_{0}\})-\sum_{u\neq u_{0}}q_{u}f(u)\phi_{0}^{2}(u)\right)\]
\[=\sum_{u\neq u_{0}}\omega_{u_{0}u}\left(f(u_{0})-f(u)\right)\vol(G)-\gamma q_{u_{0}}\phi_{0}^{2}(u_{0})\left(f(u_{0})\vol(G\setminus\{u_{0}\})+q_{u_{0}}f(u_{0})\phi_{0}^{2}(u_{0})\right)\]
\[=\sum_{u\neq u_{0}}\omega_{u_{0}u}\left(f(u_{0})-f(u)\right)\vol(G)-\gamma q_{u_{0}}f(u_{0})\phi_{0}^{2}(u_{0})\vol(G)\]
\[=\vol(G)\left(\sum_{u\neq u_{0}}\omega_{u_{0}u}\left(f(u_{0})-f(u)\right)-\gamma q_{u_{0}}f(u_{0})\phi_{0}^{2}(u_{0})\right),\]
where \(\omega_{uv}=w_{uv}\phi_{0}(u)\phi_{0}(v)\).
Thus, for any \(u\), \(f(u)\) satisfies
\[\gamma q_{u}f(u)\phi_{0}^{2}(u) =\sum_{v\sim u}w_{uv}\phi_{0}(v)\phi_{0}(u)[f(u)-f(v)]\]
\[\gamma q_{u}f^{2}(u)\phi_{0}^{2}(u) =\sum_{v\sim u}w_{uv}\phi_{0}(v)\phi_{0}(u)[f(u)-f(v)]f(u).\]
Let \(S\subseteq G\) be the subgraph of \(G\) induced by the vertex set \(V(S)=\left\{v|\phi_{1}(v)\geq 0\right\}\) and let \(\omega_{uv}=w_{uv}\phi_{0}(u)\phi_{0}(v)\). Without loss of generality, we assume that \(\sum_{u\in S}q_{u}\phi_{0}^{2}(u)\leq\sum_{u\notin S}q_{u}\phi_{0}^{2}(u)\). (If this is not the case, simply take \(f\mapsto-f\).) Then, for any region \(S^{\prime}\subseteq G\) such that either \(S^{\prime}\subseteq S\) or \(\overline{S^{\prime}}\subseteq S\), we define the Cheeger ratio as
\[h_{S^{\prime}} \equiv\frac{\abs{\partial S^{\prime}}}{\min\{\vol(S^{\prime}),\vol(\overline{S^{\prime}})\}}\]
\[=\begin{cases}\frac{\abs{\partial S^{\prime}}}{\sum_{u\in V(S^{ \prime})}q_{u}\phi_{0}^{2}(u)}&S^{\prime}\subseteq S\\ \frac{\abs{\partial S^{\prime}}}{\sum_{u\in V(\overline{S^{\prime}})}q_{u}\phi _{0}^{2}(u)}&\text{$\overline{S^{\prime}}\subseteq S$}\end{cases}\]
\[\geq h.\] (20)
Now, summing over \(u\in V(S)\),
\[\gamma\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u) =\sum_{u\in V(S)}\sum_{v\sim u}\omega_{uv}[f(u)-f(v)]f(u)\]
\[=\sum_{\{v,u\}\in E(S)}\omega_{uv}[f(u)-f(v)]^{2}+\sum_{ \begin{subarray}{c}\{u,v\}\in\partial S\\ u\in V(S)\end{subarray}}\omega_{uv}[f(u)-f(v)]f(u)\]
\[\geq\sum_{\{v,u\}\in E(S)}\omega_{uv}[f(u)-f(v)]^{2}+\sum_{ \begin{subarray}{c}\{u,v\}\in\partial S\\ u\in V(S)\end{subarray}}\omega_{uv}f^{2}(u)\]
since \(f(u)f(v)\leq 0\) whenever \(\{u,v\}\in\partial S\).
Introducing the function
\[g(u)=\begin{cases}f(u)&f(u)\geq 0\\ 0&\text{otherwise},\end{cases}\]
we have that
\[{\gamma\geq\Phi}=\frac{\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}}{ \sum_{u}q_{u}g^{2}(u)\phi_{0}^{2}(u)}\]
\[=\frac{\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}}{ \sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)}\cdot\frac{\sum_{\{v ,u\}}\omega_{uv}[g(u)+g(v)]^{2}}{\sum_{\{v,u\}}\omega_{uv}[g(u)+g (v)]^{2}}\\\]
\[\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u) \right)\left(\sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^{2}\right)}\\\]
\[=\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u) \right)\left( 2\sum_{u\in V(S)}f^{2}(u)\phi_{0}(u)\sum_{v\sim u}w _{uv}\phi_{0}(v)-\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}\right)}\]
\[=\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}g^{2}(u)\phi_{0}^{2}(u) \right)\left( 2\sum_{u\in V(S)}f^{2}(u)\phi_{0}^{2}(u)q_{u}\left( W_{u}+\frac{d_{u}}{q_{u}}-\lambda_{0}\right)-\sum_{\{v,u\}}\omega_{uv}[g(u)-g( v)]^{2}\right)}\]
\[=\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}g^{2}(u)\phi_{0}^{2}(u) \right)^{2}\left(\frac{ 2\sum_{u\in V(S)}q_{u}\phi_{1}^{2}(u) \left(W_{u}+\frac{d_{u}}{q_{u}}-\lambda_{0}\right)}{\sum_{u\in V( S)}q_{u}\phi_{1}^{2}(u)}-\Phi\right)}\\\]
\[\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)}\right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)^{2}\left(2\gamma+2Q-\Phi\right)},\]
where the first inequality follows from Cauchy-Schwarz and the final inequality follows from Section5. Now, suppose that we label our vertices \(u_{i}\) with integers \(i\geq 1\) such that \(f(u_{i+1})\geq f(u_{i})\). Then, clearly, for any \(j<i\)
\[g(u_{i})-g(u_{j}) =\sum_{k=j}^{i-1}(g(u_{k+1})-g(u_{k})).\]
Now, consider the cut \(S_{k}=\left\{u_{j}\;|\;j\leq k\right\}\),
\[\omega_{u_{i}u_{j}}\abs*{g^{2}(u_{i})-g^{2}(u_{j})} =\omega_{u_{i}u_{j}}\sum_{k=j}^{i-1}\abs*{g^{2}(u_{k+1})-g^{2}(u_ {k})}\]
\[\sum_{j<i}\omega_{u_{i}u_{j}}\abs*{g^{2}(u_{i})-g^{2}(u_{j})} =\sum_{j<i}\sum_{k=j}^{i-1}\omega_{u_{i}u_{j}}\abs*{g^{2}(u_{k+1} )-g^{2}(u_{k})}\]
\[=\sum_{k\leq\abs*{V}-1}\abs*{g^{2}(u_{k+1})-g^{2}(u_{k})}\sum_{j \leq k<i}\omega_{u_{i}u_{j}}\]
\[\geq h\sum_{k\leq\abs*{V}}q_{u_{k}}g^{2}(u_{k})\phi_{0}^{2}(u_{k})\]
\[=h\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u).\]
Above, both inequalities follow from Eq.20, where the second also utilizes summation by parts. Introducing this into Section5,
\[\Phi\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs*{g^{2}(u)-g^{2} (v)}\right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}( u)\right)^{2}\left(2\gamma+2Q-\Phi\right)}\]
\[\geq h^{2}\frac{\left(\sum_{u\in V(S)}q_{u}f^{2}(u_{u})\phi_{0}^{2}(u_{u}) \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)^{2} \left(2\gamma+2Q-\Phi\right)}\]
\[=\frac{h^{2}}{2\gamma+2Q-\Phi}.\]
Now,
\[h^{2}\leq(2\gamma+2Q-\Phi)\Phi\]
\[=2(\gamma+Q)\Phi-\Phi^{2}\]
\[\leq 2Q\gamma-(\Phi-\gamma)^{2}+\gamma^{2}\]
\[\leq 2Q\gamma+\gamma^{2},\]
so that \(\gamma\geq\sqrt{h^{2}+Q^{2}}-Q\). ∎
At this point, it is worth pausing to recognize just how much tighter the bound one finds from Section5 is than its expansion around \(h=0\). Had we simply assumed \(h\) was small, we would have arrived at the inequality \(\gamma\geq h^{2}/2Q-h^{4}/8Q^{3}\). For \(W=0\), one would expect the inequality \(\gamma\geq h^{2}/(2Q)\), so our result is only slightly weaker than anticipated. At first glance, one might expect this to be our desired bound. Unlike the \(W=0\) case, however, we _do not_ expect that \(h\) will usually be small. In fact, for strongly peaked distributions, we expect that \(h\) can be rather large. Thus, retaining the expression of Section5 can be essential to using this bound for most choices of \(W\).
The following inequality looks more like the standard Cheeger inequality and does not turn negative; however, it is weak for large \(h\). It follows immediately from the inequality \(2(\sqrt{x+1}-\sqrt{x})>1/\sqrt{x+1}\) when \(x>0\). Suppose \(\phi_{i}:V\longrightarrow\mathbb{R}\) satisfy
\[\left[\lambda_{i}-W_{u}\right]q_{u}\phi_{i}(u)=\sum_{v\sim u}w_{uv}\left[\phi_ {i}(u)-\phi_{i}(v)\right]\] (21)
and let \(\gamma=\lambda_{1}-\lambda_{0}\). Then,
\[\gamma\geq\frac{h^{2}}{2\sqrt{h^{2}+Q^{2}}}\]
where \(Q\) is as in Section5.
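The lower bound can also be checked directly on small instances. The Python sketch below (mine, illustrative only) matches the quantities to the statement of Section5 with \(q_{u}=1\): \(h\) is the weighted Cheeger constant built from \(\phi_{0}\), and \(Q\) is computed over \(S=\{u\,|\,\phi_{1}(u)\geq 0\}\) with the sign of \(\phi_{1}\) fixed as in the proof; on this random instance the bound is expected to hold.

```
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n = 8
A = np.triu(rng.random((n, n)), 1)
Wg = A + A.T                                               # positive edge weights
H = np.diag(Wg.sum(axis=1)) - Wg + np.diag(rng.random(n))  # stoquastic L + W, q_u = 1

lam, vec = np.linalg.eigh(H)
gap = lam[1] - lam[0]
phi0 = np.abs(vec[:, 0])
phi1 = vec[:, 1]
vol = phi0**2

# Fix the sign of phi1 so that vol(S) <= vol(S-bar) for S = {phi1 >= 0}
S = phi1 >= 0
if vol[S].sum() > vol[~S].sum():
    phi1 = -phi1
    S = phi1 >= 0

d = Wg.sum(axis=1)
Q = (d[S] * phi1[S]**2).sum() / (phi1[S]**2).sum()

# Brute-force weighted Cheeger constant (Eq. 15)
omega = Wg * np.outer(phi0, phi0)
h = np.inf
for k in range(1, n):
    for T in combinations(range(n), k):
        T = list(T)
        Tbar = [u for u in range(n) if u not in T]
        area = omega[np.ix_(T, Tbar)].sum()
        h = min(h, area / min(vol[T].sum(), vol[Tbar].sum()))

print(gap >= np.sqrt(h**2 + Q**2) - Q - 1e-10)             # expected True
```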
We can now adapt AppendixA to provide a Cheeger inequality for the case considered in Section4. For a graph \(G=(V,E)\), suppose
\[\gamma=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}\in E(G^{+})} \widetilde{\omega}_{uv}[g(u)-g(v)]^{2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)},\]
then
\[\gamma\geq(Q+\rho)-\sqrt{(Q+\rho)^{2}-h^{2}}\]
where \(\rho=\lambda_{\abs{V}-1}-\lambda_{0}\). For a proof, see AppendixA. Now, Section5 with the appropriate choice of \(Q\) yields the following corollaries: Suppose \(H=L+W\) is an \(N\times N\) real symmetric matrix with eigenvalues \(\lambda_{0}\leq\lambda_{1}\leq\dots\leq\lambda_{N-1}\) and corresponding ground-state \(\phi\), where \(L\) is the combinatorial Laplacian of \(G\). Then, if \(G^{+}\) has degree at most \(d_{\max}\),
\[\lambda_{1}-\lambda_{0}\geq(d_{\max}+\rho)-\sqrt{(d_{\max}+\rho)^{2}-h^{2}}\]
for \(\rho=\lambda_{N-1}-\lambda_{0}\) and
\[h=\sup_{\begin{subarray}{c}\alpha>0\\ \sum_{p\in P(u,v)}\alpha_{p}=1\end{subarray}}h_{\alpha},\]
\[h_{\alpha}=\min_{S}\max_{S^{\prime}\in\{S,\overline{S}\}}\frac{\sum_{\{u,v\} \in\partial S}\omega_{uv}(\alpha)}{\sum_{u\in S^{\prime}}\phi^{2}(u)}\]
where
\[\omega_{uv}(\alpha)=\left(w_{uv}\phi(u)\phi(v)-\sum_{w_{xy}<0}\sum_{\tiny{ \begin{subarray}{c}p\in P(x,y)\\ \{u,v\}\in p\end{subarray}}}\abs*{w_{xy}}\ell_{p}\alpha_{p}\phi(x)\phi(y)\right)\]
as in Section4.
Suppose \(H=\mathcal{L}+W\) is a real symmetric matrix with eigenvalues \(\lambda_{0}<\lambda_{1}\leq\dots\leq\lambda_{N-1}\) and corresponding ground-state \(\phi\) where \(\mathcal{L}\) is the normalized Laplacian of \(G\). Then,
\[\lambda_{1}-\lambda_{0}\geq(1+\rho)-\sqrt{(1+\rho)^{2}-h^{2}}\]
for \(\rho=\lambda_{N-1}-\lambda_{0}\) and the distributed Cheeger constant
\[h=\sup_{\begin{subarray}{c}\alpha>0\\ \sum_{p}\alpha_{p}=1\end{subarray}}h_{\alpha},\]
\[h_{\alpha}=\min_{S}\max_{S^{\prime}\in\{S,\overline{S}\}}\frac{\sum_{\{u,v\} \in\partial S}\omega_{uv}(\alpha)}{\sum_{u\in S^{\prime}}d_{u}\phi^{2}(u)}\]
where
\[\omega_{uv}(\alpha)=\left(w_{uv}\phi(u)\phi(v)-\sum_{w_{xy}<0}\sum_{\tiny{\begin{subarray}{c}p\in P(x,y)\\ \{u,v\}\in p\end{subarray}}}\abs{w_{xy}}\ell_{p}\alpha_{p}\phi(x)\phi(y)\right)\]
as in Section4. The forms of these two corollaries are not as elegant as that of Section5, but we shouldn’t be turned off so easily; each corollary has a pleasing interpretation. Begin by taking the negative edge weights and redistributing them along positive paths as best you can. The Cheeger constant of the resulting graph then always yields a lower bound for the gap.
## 6 Applications
### Some simple reductions
The approach of Section4 is more general than one might desire; all we have effectively done in that section is apply Jensen’s inequality. Restricting to unique paths, we have the following corollary to Section4. Suppose that for the graph \(G=(V,E)\), \(\gamma(G)\) is the spectral gap of the combinatorial Laplacian of \(G\), that \(G^{+}\) is connected, and that there exists a set of non-overlapping paths
\[\mathcal{P}=\{P(u,v)\in E^{+}\;|\;\{u,v\}\in E^{-}\text{ and }w(e\in P(u,v))- \abs{w_{uv}}\abs{P(u,v)}\geq 0\}.\]
Then, \(\gamma(G)\geq\gamma(G\setminus\mathcal{P})\), where all constants are as in Section3. Comparison theorems like this are rather easy to derive by choosing an appropriate set of paths through \(G\). One can also use this to derive a Cheeger inequality that uses the Cheeger constant \(h\) of \(G^{+}\) on both sides. Note that we obtain a tighter bound than that of Section5, since \(\lambda_{0}(G)=\lambda_{0}(G^{+})=0\); we only need to bound \(\lambda_{1}(G)\geq\lambda_{1}(G\setminus\mathcal{P})\), and we can then apply Section5 directly. Suppose that for the graph \(G=(V,E)\), \(\gamma(G)\) is the spectral gap of the combinatorial Laplacian of \(G\). Then, if \(G^{+}\) is connected and there exists a set of non-overlapping paths such that
\[\mathcal{P}=\{P(u,v)\in E^{+}\;|\;\{u,v\}\in E^{-}\text{ and }w(e\in P(u,v))- \abs{w_{uv}}\abs{P(u,v)}\geq\epsilon\}.\]
Then,
\[2h\geq\gamma\geq\epsilon(\sqrt{h^{2}+Q^{2}}-Q)\]
where all constants are as in Section3.
Proof.: This follows readily from Section5. First, it is obvious that the degree \(Q^{\prime}\) resulting from routing negative edge weights must satisfy \(Q^{\prime}\geq\epsilon Q\). Thus, one must only show that \(k\), the weighted Cheeger constant of \(G^{+}\) after routing, satisfies \(k\geq\epsilon h\). If we let \(\omega\) be the edge-weights after appropriately routing the original weight function \(w\),
\[k=\frac{\sum_{\begin{subarray}{c}u\in S\\ v\notin S\end{subarray}}\omega_{uv}}{\sum_{u\in V(S)}q_{u}}\geq\epsilon\frac{ \sum_{\begin{subarray}{c}u\in S\\ v\notin S\end{subarray}}w_{uv}}{\sum_{u\in V(S)}q_{u}}=\epsilon h.\]
∎
### Cheeger comparison theorems
To obtain a somewhat useful comparison theorem, we require the following characterization of the weighted Cheeger constant \(h\), which derives from a lengthy calculation beyond the scope of this paper. For a derivation that generalizes easily, see [15]. Specifically,
\[h=\inf_{f\not\equiv 0}\sup_{C}\frac{\sum_{\{u,v\}\in E(G)}w_{uv}\phi_{0}(u) \phi_{0}(v)\abs{f(u)-f(v)}}{\sum_{u}q_{u}\phi_{0}^{2}(u)\abs{f(u)-C}}\] (22)
where \(\phi_{0}\) is the ground-state of the corresponding Hamiltonian (Laplacian). Note that when \(H\) is just a Laplacian, or \(W=0\), \(h\) is the standard Cheeger constant of the corresponding graph. With this, we can prove the following theorem: Suppose that \(g\) is the Cheeger constant of \(L_{q}\) corresponding to \(G=(V,E)\) with weight function \(w:V\times V\longrightarrow\mathbb{R}\). Further, suppose \(h\) is the weighted Cheeger constant of \(H=L_{q}+W\) resulting from imposing the Dirichlet condition as in Section2.3.2. Then, if \(H\) has ground-state \(\phi\) satisfying the curvature inequality,
\[\sum_{v\sim u}w_{uv}\abs{\phi(u)-\phi(v)}\leq\frac{\epsilon}{2}d_{u}\phi(u)\]
the Cheeger constants \(h\) and \(g\) satisfy
\[g\leq h+\lambda_{0}(H)+\epsilon Q\]
where \(Q\) is the maximum degree of \(G\) if \(L_{q}\) is the combinatorial Laplacian and \(1\) if \(L_{q}\) is the normalized Laplacian.
Proof.: Note that \(g\) corresponds to a case where \(\phi_{0}(u\in V(G))=1\) in Eq.22. Thus,
\[g=\inf_{f\not\equiv 0}\sup_{C}\frac{\sum_{\{u,v\}\in E(G)\cup\partial G}w_{uv} \abs{f(u)-f(v)}}{\sum_{u}q_{u}\abs{f(u)-C}}.\]
Now, let \(S\subseteq G\) be the subset of \(G\) that achieves \(h\). We introduce
\[f(u)-C=\begin{cases}\phi^{2}(u)&u\in S\\ -\phi^{2}(u)&u\notin S,\end{cases}\]
where \(\phi\) is the ground-state of \(H\). Now,
\[g\leq\frac{\sum_{\{u,v\}\in\partial S}w_{uv}\left(\phi^{2}(u)+ \phi^{2}(v)\right)+\sum_{\{u,v\}\notin\partial S}w_{uv}\abs{\phi^{2}(u)-\phi^{ 2}(v)}}{\sum_{u}q_{u}\phi^{2}(u)}\]
\[=\frac{\sum_{\{u,v\}\in\partial S}w_{uv}\left[\left(\phi(u)-\phi( v)\right)^{2}+2\phi(u)\phi(v)\right]+\sum_{\{u,v\}\notin\partial S}w_{uv}\abs{ \phi^{2}(u)-\phi^{2}(v)}}{\sum_{u}q_{u}\phi^{2}(u)}\]
\[\leq 2h\frac{\sum_{u\in V(S)}q_{u}\phi^{2}(u)}{\sum_{u}q_{u}\phi^{2}(u)}+\frac {\sum_{\{u,v\}\in\partial S}w_{uv}\left(\phi(u)-\phi(v)\right)^{2 }+\sum_{\{u,v\}\notin\partial S}w_{uv}\abs{\phi^{2}(u)-\phi^{2}(v)}}{\sum_{u}q _{u}\phi^{2}(u)}\]
\[\leq h+\frac{\sum_{\{u,v\}\in E(G)}w_{uv}\left(\phi(u)-\phi(v) \right)^{2}+\sum_{\{u,v\}\notin\partial S}w_{uv}\left(\abs{\phi^{2}(u)-\phi^{2 }(v)}-\left(\phi(u)-\phi(v)\right)^{2}\right)}{\sum_{u}q_{u}\phi^{2}(u)}\]
\[\leq h+\lambda_{0}(H)+\frac{\sum_{\{u,v\}}w_{uv}\bigg{[}2\min\{ \phi(u),\phi(v)\}\;\abs{\phi(u)-\phi(v)}\bigg{]}}{\sum_{u}q_{u}\phi^{2}(u)}\]
\[\leq h+\lambda_{0}(H)+2\frac{\sum_{u}\phi(u)\sum_{v\sim u}w_{uv} \abs{\phi(u)-\phi(v)}}{\sum_{u}q_{u}\phi^{2}(u)}\]
\[\leq h+\lambda_{0}(H)+\epsilon\frac{\sum_{u}d_{u}\phi^{2}(u)}{ \sum_{u}q_{u}\phi^{2}(u)}\]
\[\leq h+\lambda_{0}(H)+\epsilon Q.\]
Thus,
\[g\leq h+\lambda_{0}(H)+\epsilon Q.\]
∎
The above theorem is not as tight as we would ideally like. In the future, it would be advantageous to derive a better analogue of the results in [12]. Although those results are for the continuous setting, they suggest that one could derive a comparison theorem such that \(ch\geq g\) for some constant \(c\) that depends only upon the structure of the space. Additionally, it seems likely that in the case that \(\phi\) is unimodal, the weighted Cheeger constant is proportional to the Cheeger constant of the host graph. Nonetheless, a proof remains elusive.
#### 6.2.1 Subgraph Comparison
We can prove something a bit better by comparing subgraphs of our Hamiltonian and applying Section5. For any \(S\), let \(h_{S}\) be as in Eq.20. That is,
\[h_{S}=\frac{\abs{\partial S}}{\min\{\vol(S),\vol(\overline{S})\}}\] (23)
where all quantities are as in Section3.
If we again restrict to the case that \(H\) is stoquastic, then we can apply the technique of Section5 to prove a theorem which makes clear the significance of the Cheeger constant for any particular cut \(S\subset G\). In the following theorem, we make use of the Dirichlet representation of Section2.3.2. In other words,
\[\lambda_{0}(H)=\inf_{\begin{subarray}{c}f\\ f|_{\delta G}=0\end{subarray}}\frac{\sum_{\{u,v\}\in E(G)\cup\partial G}w_{uv} (f(u)-f(v))^{2}}{\sum_{u\in V(G)}q_{u}f^{2}(u)}.\]
Thus, for any subgraph \(S\subseteq G\), we can consider \(\delta G\subseteq\delta S\). Another way of stating this is that
\[\lambda_{0}^{D}(H,S)=\inf_{\begin{subarray}{c}f\\ f|_{\delta S}=0\end{subarray}}\frac{\sum_{\{u,v\}\in E(S)\cup\partial S}w_{uv} (f(u)-f(v))^{2}}{\sum_{u\in V(S)}q_{u}f^{2}(u)}\]
\[=\inf_{\begin{subarray}{c}f\\ f|_{\delta S}=0\end{subarray}}\frac{\sum_{\{u,v\}\in E(S)\cup(\partial S \setminus\partial G)}w_{uv}(f(u)-f(v))^{2}+\sum_{u\in V(S)}q_{u}W_{u}\phi^{2}( u)}{\sum_{u\in V(S)}q_{u}f^{2}(u)}.\]
The following theorem compares the Dirichlet eigenvalues of the subgraph \(S\) to those of \(G\).
Suppose that \(H\) is a stoquastic Hamiltonian with ground state \(\phi>0\), corresponding to a graph \(G\) with subgraph \(S\subset G\). Then,
\[h_{S}\geq\lambda_{0}^{D}(H,S)-\lambda_{0}(H).\]
Above, \(h_{S}\) is as in Eq.23 and \(\lambda_{0}^{D}(H,S)\) is the Dirichlet eigenvalue of the subgraph \(S\) of the host graph \(G\subseteq G^{\prime}\), defined by
\[\lambda_{0}^{D}(H,S)=\inf_{\begin{subarray}{c}f\\ f|_{\delta S}=0\end{subarray}}\frac{\sum_{\{u,v\}\in E(S)\cup\partial S}w_{uv} (f(u)-f(v))^{2}}{\sum_{u\in V(S)}q_{u}f^{2}(u)}.\]
Proof.: With the above definition of the Dirichlet eigenvalues of a subgraph in hand, we begin as in Section5. Without loss of generality, assume that \(\vol(S)\leq\vol(\overline{S})\). Now,
\[\sum_{u\in V(S)}(\lambda_{0}(H)-W_{u})q_{u}\phi^{2}(u)=\sum_{u\in S}\sum_{\{v, u\}\in E(G)}w_{uv}(\phi(u)-\phi(v))\phi(u)\]
\[\lambda_{0}(H)\sum_{u\in V(S)}q_{u}\phi^{2}(u)=\sum_{\{u,v\}\in E(S)}w_{uv}( \phi(u)-\phi(v))^{2}+\sum_{u\in V(S)}q_{u}W_{u}\phi^{2}(u)+\sum_{\{v,u\}\in \partial S\setminus\partial G}w_{uv}\left(\phi(u)-\phi(v)\right)\phi(u)\]
\[\geq\lambda_{0}^{D}(H,S)\sum_{u\in V(S)}q_{u}\phi^{2}(u)-\sum_{\{v,u\}\in \partial S}w_{uv}\phi(v)\phi(u)\]
\[=\left(\lambda_{0}^{D}(H,S)-h_{S}\right)\sum_{u\in V(S)}q_{u}\phi^{2}(u).\]
Since we know that \(\sum_{u\in V(S)}q_{u}\phi^{2}(u)>0\),
\[h_{S}\geq\lambda_{0}^{D}(H,S)-\lambda_{0}(H).\]
∎
In other words, whenever \(h_{S}\) is exponentially small, there exists some subgraph whose lowest Dirichlet eigenvalue approximates the ground-state eigenvalue of \(H\). This is equivalent to saying that there exists some block of \(H\) that has approximately the same ground-state eigenvalue as \(H\) itself.
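Since, by the embedding of Section2.3.2, \(\lambda_{0}^{D}(H,S)\) is simply the lowest eigenvalue of the principal block \(H[S,S]\), the theorem is easy to check numerically; a Python sketch of mine with an arbitrary cut:

```
import numpy as np

rng = np.random.default_rng(5)
n = 8
A = np.triu(rng.random((n, n)), 1)
Wg = A + A.T
H = np.diag(Wg.sum(axis=1)) - Wg + np.diag(rng.random(n))  # stoquastic L + W

lam, vec = np.linalg.eigh(H)
phi = np.abs(vec[:, 0])                                    # ground state

S = np.array([0, 1, 2])                                    # an arbitrary subgraph
Sbar = np.setdiff1d(np.arange(n), S)

omega = Wg * np.outer(phi, phi)
area = omega[np.ix_(S, Sbar)].sum()
vol = phi**2
h_S = area / min(vol[S].sum(), vol[Sbar].sum())

lam0_D = np.linalg.eigvalsh(H[np.ix_(S, S)]).min()         # Dirichlet eigenvalue of S
print(h_S >= lam0_D - lam[0] - 1e-10)                      # True
```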
## 7 Physical implications
These results lead to a very concrete understanding of the nature of the spectral gap in most quantum systems. In a very strong sense, the presence of a spectral gap implies that the ground-state wave function _must not_ contain bottlenecks. Although this may be unsurprising, all prior results fail to confirm the intuition when \(\norm{W}\) is sufficiently large. In this paper, I have eliminated the possibility that the physics behaves unexpectedly in such situations. That is, we now know that gapped Hamiltonians must not contain strong bottlenecks in their ground-states, and we also know the appropriate scaling of this claim. Equivalently, the presence of a bottleneck guarantees a small spectral gap.
This conceptual point does not yet hold in reverse. That is, we have not shown that a small gap implies a strong bottleneck. It is possible that there exist Hamiltonians with ground-states without bottlenecks that nonetheless have small spectral gaps. This particular point may be of some physical interest and worth exploring; however, in the context that inspired this work, it is somewhat less interesting.
Probably the major advantage of this characterization is that we can now definitively say that, for the standard adiabatic theorem to guarantee an efficient adiabatic process, \(H\) must not have a bottlenecked ground-state at any point in the evolution. Some results suggest that, at least with existing Monte Carlo techniques, states without bottlenecks can still be hard to simulate [26, 30, 10]. Nonetheless, a guaranteed lack of bottlenecks reaffirms my agnosticism about whether one might be able to classically and efficiently sample from ground-state distributions arising from large-gap stoquastic Hamiltonians. Shifting the dialogue away from spectral gaps and towards bottlenecked distributions, as also suggested in [29, 20], will hopefully shed light on this question one way or the other.
## 8 The Bashful Adiabatic Algorithm
In this section, I show how one might be able to exploit the weighted Cheeger constant to improve quantum adiabatic algorithms. A quantum process solves the Schrödinger equation
\[\begin{cases}i\frac{\partial\phi(t)}{\partial t}=H(t/T)\phi(t)\\ \phi(0)=\phi_{0}(0)\end{cases}\]
where \(\phi_{0}(t)\) is the ground-state of \(H(t/T)\). An adiabatic algorithm seeks to produce the distribution \(\phi(T)\approx\phi_{0}(T)\) and the adiabatic theorem guarantees that this can be done provided that a quantity like \(\gamma^{-2}(H(t/T))\norm{\frac{dH(t/T)}{dt}}\) is never too large [27]. Abusively, for this section, we call the Hamiltonian \(H(t/T)\) the “schedule”. At least in the case of real Hamiltonians, our inequality opens up the possibility of adaptive adiabatic algorithms, or those where we adjust the rate of variation of \(H\) in response to the size of the gap.
In many cases, \(h\) reduces the problem of bounding the spectral gap to determining information about the ground-state. This allows one to stop an evolution early, say at \(t<T\), and bound the gap at that point. That is, suppose we know \(\phi(t)\approx\phi_{0}(t)\) for some \(t\). Then, if we can use \(\phi(t)\) to approximate \(h\), we can assume that we know \(\gamma(H(t/T))\). One can use Weyl’s inequality or another perturbative argument to then guarantee that \(\gamma(H(\tau/T))\geq c\) for some choice of \(c\) and \(\tau>t\). Thus, we can restart the adiabatic evolution from \(t=0\) and choose an appropriate \(dH(t/T)/dt\) such that \(\phi(\tau)\approx\phi_{0}(\tau)\). Repeating this until \(\tau=T\) would give us the entire adiabatic path with, potentially, only polynomial overhead. This algorithm, which I am calling the Bashful Adiabatic Algorithm (BAA), is sketched below:⁶
```
1:Assume \(H_{\tau}(1)=H_{0}(1)\) for all choices of \(\tau\).
2:Choose a schedule \(H_{\tau}\) with \(\min_{t<\tau}\gamma(H_{\tau}(t/T))>\gamma_{\min}\).
3:Prepare the state \(\phi_{0}(0)\) of \(H_{0}(0)\).
4:while \(\tau<T\) do
5: Generate \(N\) copies of \(\phi(\tau)\) from \(\phi(0)\) using the schedule \(H_{\tau}(t/T)\).
6: Sample \(\{\phi(\tau)\}\) and (if possible) approximate the weighted Cheeger constant of \(H_{\tau}(\tau)\).
7: Use the result to bound \(\min_{t<\tau+\delta\tau}\gamma(H_{\tau+\delta\tau}(t/T))\) for some new schedule \(H_{\tau+\delta\tau}(t/T)\).
8: \(\tau\leftarrow\tau+\delta\tau\).
9:end while
10:return \(\phi(T)\) using the schedule \(H_{T}(t/T)\).
```
Bashful Adiabatic Algorithm
This algorithm would run in time \(\bigO{(T/\delta\tau)^{2}(X+N\delta\tau)}\), where \(\delta\tau\) is the smallest timestep taken, \(N\) is the number of copies needed, and \(X\) the longest time it takes to compute \(h\). The reader should note that even if \(\delta\tau\) must get very small (because \(\gamma\) gets very small), so long as it is only small for a sufficiently short period of time, we should be able to locally decrease \(\norm{\frac{dH}{dt}}\) and obtain much tighter scaling than that proposed above. Furthermore, we can ensure that our \(\norm{\frac{dH}{dt}}\) is taken as large as possible while remaining consistent with the adiabatic theorem, or that our path (through time) is chosen optimally. The ability to compute \(h\) may allow one to predict when an adiabatic path needs to be changed, as suggested in [21].
Even given the ability to sample \(\phi_{0}\), we would still require an efficient method for approximating \(h\). Although I do not expect this to be possible for an arbitrary graph and \(\phi_{0}\), this may indeed be possible for some classes of graphs and reasonable assumptions about \(\phi_{0}\). It is likely that a statement like Section5 will be useful in this regard. Additionally, while there will clearly be distributions where an approximation strategy for \(h\) should fail, it is quite possible that these same instances correspond to otherwise intractable optimization problems.
As an example, one can think of the graph \(G=(V,E)\) with \(V=\{u_{i}\;|\;i\in\intrange{1}{n}\}\) and \(E=\{\{u_{i},u_{i+1}\}\;|\;i\in\intrange{1}{n-1}\}\). Suppose that for some \(j\notin\{i,i+1\}\), the Hamiltonian has ground-state
\[\phi_{0}(u_{i},\tau)=\begin{cases}c_{1}&i=1\\ c_{j}&i=j\\ C&i\notin\{1,j\}.\end{cases}\]
Choosing \(C\sim e^{-n}\), if \(c_{1}>c_{j}\sim\mathrm{poly}(n)\), then there exists a cut such that \(h\) is exponentially small in \(n\). Using \(L\) as the graph Laplacian for this graph, this is achieved by, for example, the ground-state of \(H=L+W\) with diagonal matrix \(W\equiv\diag{(W_{u})_{u\in V}}\)
\[W_{u_{i}}=\begin{cases}cx^{-1}&i=1\\ xc^{-1}&i=2\\ c^{-1}&i=\abs{V}-1\\ c&i=\abs{V}\\ 1&\text{otherwise}\end{cases}\]
and an appropriate choice of \(c\) and \(x\). (Take \(c\) to be small and choose \(x\) to produce the desired ratio of \(c_{1}/c_{\abs{V}}\).)
Distinguishing this from the case where \(c_{j}\sim e^{-n}\), which implies that \(h\) is only polynomially small in \(n\) (see [29]), seems to be close to efficiently solving unstructured search. Thus, if one were to investigate an algorithm for approximating \(h\), one might need to consider a divide-and-conquer approach that considers separate adiabatic processes constrained to different subgraphs for sufficiently concentrated \(\phi_{0}\). Another possibility would be to attempt to adapt existing algorithms for approximating the Cheeger constant in large networks [43]. Exploring this question is well beyond the scope of the present work, but would nonetheless be very interesting.
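For intuition, the following Python sketch builds a related (but not identical) instance of mine: a path graph with a flat potential barrier across its middle half. The ground state is bimodal, the Cheeger ratio of the cut through the barrier is tiny, and Section3.2 then forces the gap to be tiny as well.

```
import numpy as np

n = 24
Wg = np.zeros((n, n))
for i in range(n - 1):
    Wg[i, i + 1] = Wg[i + 1, i] = 1.0          # unweighted path graph
L = np.diag(Wg.sum(axis=1)) - Wg

barrier = np.zeros(n)
barrier[n // 4: 3 * n // 4] = 1.0              # potential W_u = 1 on the middle half
H = L + np.diag(barrier)

lam, vec = np.linalg.eigh(H)
phi = np.abs(vec[:, 0])
gap = lam[1] - lam[0]

cut = n // 2                                    # cut through the middle of the barrier
area = Wg[cut - 1, cut] * phi[cut - 1] * phi[cut]
vol = phi**2
h_cut = area / min(vol[:cut].sum(), vol[cut:].sum())

print(h_cut, gap, gap <= 2 * h_cut + 1e-12)     # both orders of magnitude below ||H||
```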
## 9 Open questions and future work
These inequalities lead to quite a few open questions.
* First and foremost, I think, is the question of whether one can ever efficiently approximate the weighted Cheeger constant and what information/constraints would be necessary to do so. The standard combinatorial Cheeger constant has been the object of extensive study, and determining it is known to be NP-hard [38]. Nonetheless, one can efficiently approximate the Cheeger constant; however, the scaling of such estimates is probably insufficient for quantum systems. Additionally, given that the weighted Cheeger constant depends on more information than the combinatorial Cheeger constant, estimating the weighted Cheeger constant might be considerably harder. Nonetheless, it is possible that in sparse graphs, such as those that would naturally arise from physical systems of interest, this quantity might not be too difficult to approximate, especially if one is willing to take a poor estimate. If one can approximate \(h\) efficiently enough in a large enough number of cases, one might potentially use this information to choose an adiabatic path for adiabatic quantum computation as discussed in the previous section [21].
* Also, because this work demonstrates the deficiencies in gap analysis, it would be interesting if one could prove a version of the adiabatic theorem specific to bottlenecked states. In particular, an adiabatic theorem that stresses Dirichlet eigenfunctions would probably be able to capture the “relevant” portion of the wavefunction. One can imagine a situation where the solution to some optimization problem is in a subgraph \(S\subseteq G\) where there exists no bottleneck and \(\phi\) is large and, yet, \(\overline{S}\) contains a strong bottleneck somewhere. It would be interesting to see if such situations arise frequently, infrequently, or never. I suspect they arise frequently, and thus deriving adiabatic theorems that restrict to the subgraph \(S\) that we wish to explore would have a hope of providing much better runtime bounds.
* Another question is whether one can derive useful comparison theorems between the gap of the host graph and the gap of Hamiltonian, as alluded to in Section6. Desirable forms for comparison theorems can be found in many places, such as [15, 13]. (The interested reader should beware, however, as [15, Theorem 3] is incorrect due to a sign error and the result is carried through to two of the main corollaries of the paper. Theorem 4 of that paper also appears to be incorrect, and the best one can hope for is a statement like the present Section6.2.) It seems likely that, at least for unimodal ground-states on strongly convex subgraphs of homogeneous graphs (see [13]), one should be able to show that the gap of the Hamiltonian scales with the gap of the graph. Additionally, [29] shows that a condition like log-concavity is not enough to guarantee unimodality. In that paper, a seemingly bimodal distribution can satisfy log-concavity due to the nature of the boundary, whereas the continuous definition of log-concavity would imply unimodality.
* Finally, one might consider what useful information the frustration index can provide about the spectral gap. In [36], the author derives isoperimetric inequalities that utilize the frustration index. It is entirely possible that a suitably defined index can yield tighter bounds than those derived through our reductions here. It also seems likely that this concept might be a key component to obtaining gap lower bounds in the general Hermitian case.
## 10 Acknowledgements
The idea for using \(h\) to adjust the adiabatic path was arrived at during exchanges with Antonio Martinez. Kianna Wan pointed out many small errors that would have otherwise gone unnoticed, helping me greatly improve my presentation. I thank Elizabeth Crosson, Stephen Jordan, Tsz Chiu Kwok, Brad Lackey, Lap Chi Lau, and Adrian Lupascu for helpful discussions.
## Appendix A Proof of Section5
First, we note that in the proof of Section5, we had the following corollary. For a graph \(G=(V,E)\), suppose
\[\gamma=\inf_{\tiny{g\perp q\phi^{2}}}\frac{\sum_{\{u,v\}\in E(G^{+})} \widetilde{\omega}_{uv}[g(u)-g(v)]^{2}}{\sum_{u}q_{u}g^{2}(u)\phi^{2}(u)}.\]
Then, for \(f\) achieving the infimum above and
\[g(u)=\begin{cases}f(u)&f(u)\geq 0\\ 0&\text{otherwise},\end{cases}\]
we have
\[{\gamma\geq\frac{\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}}{ \sum_{u}q_{u}g^{2}(u)\phi_{0}^{2}(u)}\geq\frac{\left( \sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)}\right)^{2}}{ \left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)\left( \sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^{2}\right)}}.\]
Now, we can prove the claim by adapting the proof of Section5. First, we note that by AppendixA,
\[{\gamma\geq\Phi}=\frac{\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}}{ \sum_{u}q_{u}g^{2}(u)\phi_{0}^{2}(u)}\geq\frac{\left( \sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)}\right)^{2}}{ \left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)\left( \sum_{\{v,u\}}\omega_{uv}[g(u)+g(v)]^{2}\right)}\]
\[=\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u) \right)\left( 2\sum_{\{v,u\}}\omega_{uv}[g^{2}(u)+g^{2}(v)]-\sum_ {\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}\right)}\]
\[=\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u) \right)\left( 2\sum_{u}g^{2}(u)\sum_{v\sim u}\omega_{uv}-\sum_{\{ v,u\}}\omega_{uv}[g(u)-g(v)]^{2}\right)}\]
\[\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)} \right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u) \right)\left( 2\sum_{u}g^{2}(u)\sum_{v\sim u}\phi(u)\phi(v)w_{uv} -\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}\right)}\]
\[\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)}\right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)\left(2\sum_{u}q_{u}f^{2}(u)\phi^{2}(u)\left(W_{u}+\frac{d_{u}}{q_{u}}-\lambda_{0}\right)-\sum_{\{v,u\}}\omega_{uv}[g(u)-g(v)]^{2}\right)}\]
\[\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)}\right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)^{2}\left(2Q+2\left(\lambda_{\abs{V}-1}-\lambda_{0}\right)-\Phi\right)}\]
\[\geq\frac{\left(\sum_{\{v,u\}}\omega_{uv}\abs{g^{2}(u)-g^{2}(v)}\right)^{2}}{\left(\sum_{u\in V(S)}q_{u}f^{2}(u)\phi_{0}^{2}(u)\right)^{2}\left(2Q+2\rho-\Phi\right)}.\]
The remainder of this proof follows identically the remaining portion of the proof of Section5.
## References
* [1] Adiabatic Optimization and Dirichlet Graph Spectra. Quantum Optimization Workshop, Fields Institute, Toronto, ON, Canada, 8 2014.
* [2] Abbas Al-Shimary and Jiannis K Pachos. Energy gaps of Hamiltonians from graph Laplacians. _arXiv preprint arXiv:1010.4130_, 2010.
* [3] Tameem Albash and Daniel A. Lidar. Adiabatic quantum computation. _Reviews of Modern Physics_, 90(1), 11 2018.
* [4] Ben Andrews and Julie Clutterbuck. Proof of the fundamental gap conjecture. _Journal of the American Mathematical Society_, 24(3):899–916, 2011.
* [5] Fatihcan M. Atay and Shiping Liu. Cheeger constants, structural balance, and spectral clustering analysis for signed graphs. _Citeseer_, 2014.
* [6] Fatihcan M Atay and Hande Tuncel. On the spectrum of the normalized Laplacian for signed graphs: Interlacing, contraction, and replication. _Linear Algebra and its Applications_, 442:165–177, 2014.
* [7] F Barahona. On the computational complexity of Ising spin glass models. _Journal of Physics A: Mathematical and General_, 15(10):3241, 1982.
* [8] Frank Bauer. Normalized graph Laplacians for directed graphs. _Linear Algebra and Its Applications_, 436(11):4193–4222, 2012.
* [9] Sergey Bravyi, David P Divincenzo, Roberto Oliveira, and Barbara M Terhal. The complexity of stoquastic local Hamiltonian problems. _Quantum Information & Computation_, 8(5):361–385, 2008.
* [10] Jacob Bringewatt, William Dorland, Stephen P Jordan, and Alan Mink. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians. _Physical Review A_, 97(2):022323, 2018.
* [11] T. H Hubert Chan, Zhihao Gavin Tang, and Chenzi Zhang. Cheeger inequalities for general edge-weighted directed graphs. _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_, 9198:30–41, 2015.
* [12] Shiu-Yuen Cheng and Kevin Oden. Isoperimetric inequalities and the gap between the first and second eigenvalues of an euclidean domain. _The Journal of Geometric Analysis_, 7(2):217–239, 1997.
* [13] F R K Chung. _Spectral Graph Theory_, volume 92 of _CBMS Regional Conference Series in Mathematics_. American Mathematical Society, Providence, Rhode Island, 12 1997.
* [14]Fan Chung. Laplacians and the Cheeger inequality for directed graphs. _Annals of Combinatorics_, 9(1):1–19, 2005.
* [15] Fan R K Chung and Kevin Oden. Weighted graph Laplacians and isoperimetric inequalities. _Pacific Journal of Mathematics_, 192(2):257–273, 2000.
* [16] Bertrand Cloez and Marie-Noémie Thai. Fleming-viot processes: two explicit examples. 2016.
* [17] Bertrand Cloez and Marie-Noémie Thai. Quantitative results for the fleming–viot particle system and quasi-stationary distributions in discrete space. _Stochastic Processes and their Applications_, 126(3):680–702, 2016.
* [18] Pierre Collet, Servet Martínez, and Jaime San Martín. _Quasi-stationary distributions: Markov chains, diffusions and dynamical systems_. Springer Science & Business Media, 2012.
* [19] Pierre Collet, Servet Martínez, and Jaime San Martín. Markov chains on finite spaces. In _Quasi-Stationary Distributions_, pages 31–44. Springer, 2013.
* [20] Elizabeth Crosson and John Bowen. Quantum ground state isoperimetric inequalities for the energy spectrum of local hamiltonians. 2017.
* [21] Elizabeth Crosson, Edward Farhi, Cedric Yen-Yu Lin, Han-Hsuan Lin, and Peter Shor. Different strategies for optimization using the quantum adiabatic algorithm. _arXiv preprint arXiv:1401.7320_, 2014.
* [22] Elizabeth Crosson and Aram W. Harrow. Simulated Quantum Annealing Can Be Exponentially Faster Than Classical Simulated Annealing. In _2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)_, pages 714–723. IEEE, 8 2016.
* [23] Persi Diaconis and Daniel Stroock. Geometric bounds for eigenvalues of markov chains. _The Annals of Applied Probability_, pages 36–61, 1991.
* [24] M R Garey, D S Johnson, and L Stockmeyer. Some simplified NP-complete graph problems. _Theoretical Computer Science_, 1(3):237–267, 1976.
* [25] Frank Harary and Jerald A Kabell. A simple algorithm to detect balance in signed graphs. _Mathematical Social Sciences_, 1(1):131–136, 1980.
* [26] M. B. Hastings. Obstructions to classically simulating the quantum adiabatic algorithm. _Quantum Information and Computation_, 13(11/12):1038–1076, 2013. With appendix by M. H. Freedman.
* [27] Sabine Jansen, Mary-Beth Ruskai, and Ruedi Seiler. Bounds for the adiabatic approximation with applications to quantum computation. _Journal of Mathematical Physics_, 102111(2007):15, 2006.
* [28] M Jarret, S P Jordan, and B Lackey. Adiabatic optimization versus diffusion Monte Carlo methods. _Physical Review A - Atomic, Molecular, and Optical Physics_, 2016.
* [29] Michael Jarret and Stephen P Jordan. Adiabatic optimization without local minima. _Quantum Information & Computation_, 14(Quantum Information & Computation), 2015.
* [30] Michael Jarret, Stephen P Jordan, and Brad Lackey. Adiabatic optimization versus diffusion monte carlo methods. _Physical Review A_, 94(4):042318, 2016.
* [31]Michael Jarret and Brad Lackey. Substochastic monte carlo algorithms. 2017.
* [32] Volker Kaibel. On the expansion of graphs of 0/1-polytopes. In _The Sharpest Cut: The Impact of Manfred Padberg and His Work_, pages 199–216. SIAM, 2004.
* [33] Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: Good, bad and spectral. _Journal of the ACM (JACM)_, 51(3):497–515, 2004.
* [34] Carsten Lange, Shiping Liu, Norbert Peyerimhoff, and Olaf Post. Frustration index and Cheeger inequalities for discrete and continuous magnetic Laplacians. _Calculus of Variations and Partial Differential Equations_, 54(4):4165–4196, 12 2015.
* [35] Tom Leighton and Satish Rao. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms. In _Foundations of Computer Science, 1988., 29th Annual Symposium on_, pages 422–431. IEEE, 1988.
* [36] Florian Martin. Frustration and isoperimetric inequalities for signed graphs. _Discrete Applied Mathematics_, 217:276–285, 1 2017.
* [37] Milad Marvian, Daniel A. Lidar, and Itay Hen. On the Computational Complexity of Curing the Sign Problem. pages 1–12, 2018.
* [38] David W Matula and Farhad Shahrokhi. Sparsest cuts and bottlenecks in graphs. _Discrete Applied Mathematics_, 27(1-2):113–123, 1990.
* [39] Jérémie Roland and Nicolas J Cerf. Quantum search by local adiabatic evolution. _Physical Review A_, 65(4):042308, 2002.
* [40]Subir Sachdev. _Quantum phase transitions_. Wiley Online Library, 2007.
* [41] David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. _Phys. Rev. Lett._, 35:1792–1796, 12 1975.
* [42] Alistair Sinclair. _Algorithms for random generation and counting: a Markov chain approach_. Springer Science & Business Media, 2012.
* [43] Daniel A Spielman and Shang-Hua Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In _Proceedings of the thirty-sixth annual ACM symposium on Theory of computing_, pages 81–90. ACM, 2004.
* [44] Matthias Troyer and Uwe-Jens Wiese. Computational complexity and fundamental limitations to fermionic quantum monte carlo simulations. _Physical review letters_, 94(17):170201, 2005.
* [45] Shing-Tung Yau. An estimate of the gap of the first two eigenvalues in the Schr ̈odinger operator. 2009.
|
1408.6579 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 79956,
"num_imgs": 8,
"llama3_tokens_count": 21069
} | [
"content_image/1408.6579/x1.png",
"content_image/1408.6579/x2.png",
"content_image/1408.6579/x3.png",
"content_image/1408.6579/x4.png",
"content_image/1408.6579/x5.png",
"content_image/1408.6579/x6.png",
"content_image/1408.6579/x7.png",
"content_image/1408.6579/x8.png"
] | # The _Chandra_ Cygnus OB2 Legacy Survey: Design and X-ray Point Source Catalog
Nicholas J. Wright\({}^{1,2}\), Jeremy J. Drake\({}^{1}\), Mario G. Guarcello\({}^{1}\), Tom L. Aldcroft\({}^{1}\), Vinay L. Kashyap\({}^{1}\), Francesco Damiani\({}^{3}\), Joe DePasquale\({}^{1}\), and Antonella Fruscione\({}^{1}\)
\({}^{1}\)Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
nick.nwright@gmail.com
\({}^{2}\)Centre for Astrophysics Research, University of Hertfordshire, Hatfield AL10 9AB, UK
\({}^{3}\)INAF - Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, I-90134 Palermo, Italy
###### Abstract
The Cygnus OB2 association is the largest concentration of young and massive stars within 2 kpc of the Sun, including an estimated \(\sim\)65 O-type stars and hundreds of OB stars. The _Chandra_ Cygnus OB2 Legacy Survey is a large imaging program undertaken with the Advanced CCD Imaging Spectrometer onboard the _Chandra X-ray Observatory_. The survey has imaged the central 0.5 deg\({}^{2}\) of the Cyg OB2 association with an effective exposure of \(\sim\)120 ks and an outer 0.35 deg\({}^{2}\) area with an exposure of \(\sim\)60 ks. Here we describe the survey design and observations, the data reduction and source detection, and present a catalog of \(\sim\)8,000 X-ray point sources. The survey design employs a grid of 36 heavily (\(\sim\)50%) overlapping pointings, a method that overcomes _Chandra_’s low off-axis sensitivity and produces a highly uniform exposure over the inner 0.5 deg\({}^{2}\). The full X-ray catalog is described here and is made available online.
Subject headings: X-rays: stars - open clusters and associations: individual (Cygnus OB2) - stars: pre-main sequence - stars: massive
## 1. Introduction
The _Chandra X-ray Observatory_(Weisskopf et al., 2002) is the premier X-ray observatory for studying young star clusters and associations, thanks to its low background rate and high angular resolution, and the ideal choice for a wide survey of the Cygnus OB2 association (e.g., Massey & Thompson, 1991; Knödlseder, 2000; Wright et al., 36). The _Chandra_ Cygnus OB2 Legacy Survey (Drake et al., 2014) is a 1 deg\({}^{2}\) survey of the Cyg OB2 association with the _Chandra X-ray Observatory_ that goes deeper and wider than previous surveys (Albacete Colombo et al., 2007; Wright & Drake, 2009). Drake et al. (2014) discuss the survey motivation and design and present highlights of the results. This paper describes the survey observations, X-ray data reduction, and the compilation of an X-ray point source catalog. An analysis of the completeness limits of the survey as a function of various observational and stellar parameters is presented in Wright et al. (37). Other papers in this series present a catalog of optical and near-IR counterparts to the X-ray catalog (Guarcello et al., 2014) and an analysis of Cyg OB2 members and contaminants in the X-ray catalog (Kashyap et al., 2014).
In Section 2 we introduce the survey, describe its design and present the observations. In Section 3 we present the methods used to detect sources in the survey, including the use of a new enhanced wavdetect code. In Section 4 we outline the iterative process used to refine the list of reliable sources and the methods used to extract X-ray source properties from the data. Finally in Section 5 we present an analysis of the quality of the final X-ray source catalog. Future papers will include X-ray spectral fitting of the brightest sources detected (Flaccomio et al., 13) and will combine this X-ray catalog with the available optical and infrared data covering the region to produce a fully multi-wavelength catalog of Cyg OB2 members.
## 2. Observations
In this section we describe the survey strategy, observations, and initial data reduction techniques employed. The survey was selected in 2009 as a _Chandra X-ray Observatory_ Cycle 11 Very Large Project (VLP) and awarded 1.08 Ms. The observations presented here include the 36 pointings that make up the survey, as well as 4 previous observations of Cyg OB2. These observations have been published elsewhere, but we include them in our data reduction and analysis so as to provide a single cohesive dataset. We will assume that all X-ray sources in Cyg OB2 are at a distance of 1.4 kpc, as determined by recent parallax measurements of masers within the Cygnus X complex (Rygl et al., 2012).
### Survey Design
The _Chandra_ Cygnus OB2 Legacy Survey was designed to uniformly survey the entire OB association to a high depth that would be sufficient to identify a large population of young, solar mass stars that would facilitate unbiased surveys of the structure (e.g., Wright et al., 38) and stellar populations (Guarcello et al., 2013) of the association. Given the large size of the Cyg OB2 association (Knödlseder, 2000, estimate a half-light radius of 13\({}^{\prime}\) and a total diameter of \(\sim\)2\({}^{\circ}\)) this necessarily required multiple pointings. To minimize the effects of _Chandra_’s reduced point spread function (PSF) sensitivity at large off-axis angles, a simple tiling strategy was adopted (following that used successfully by Elvis et al., 2009), which has been shown to produce a well-defined lower flux limit with a sharp cutoff (Puccetti et al., 2009) and a high spatial resolution over the majority of the survey area. This approach also ensures that the area with _Chandra_’s good PSF, which can resolve sources 2\({}^{\prime\prime}\) apart (corresponding to \(\sim\)0.01 pc at 1.4 kpc), is maximized. This has allowed us to obtain a spatially unbiased and accurate census of the association, facilitating a number of scientific studies of interest. The sensitivity of the survey resulting from our tiling strategy is discussed and simulated in detail in Wright et al. (37).
<figure><img src="content_image/1408.6579/x1.png"><figcaption>Figure 1.— Left: the “as designed” Cygnus OB2 Chandra Legacy Survey tiling forthe 36 30 ks pointings. The thick black box (bottom left) represents oneACIS-I pointing, the thin boxes all the pointings. Different colors show areaswith different numbers of overlapping pointings: teal – 4 overlappingpointings; blue – 2 overlapping pointings; purple – 1 pointing. The black barsshow roughly the relative dimensions of one pointing (∼16′), of the deep innerarea (∼42′), and of the total field (∼56′). Pointing 1-1 lies at the bottomright (SW), 1-6 lies at the top right (NW), 6-1 lies at the bottom left (SE)and 6-6 lies at the top left (NE). Right: the “as executed” exposure map forthe Cygnus OB2 Chandra Legacy Survey and complementary observations. The colorbar gives the achieved effect exposure in units of ks. The deepest region ofthe survey with a non-negligible area has an exposure time of ∼220 ks.</figcaption></figure>
The tiling scheme (Figure 1) employs a \(6\times 6\) raster array of 36 pointings using _Chandra_’s Advanced CCD Imaging Spectrometer (ACIS-I; Garmire et al., 2003), each of 30 ks nominal exposure. The center of the array, 20\({}^{h}\) 33\({}^{m}\) 12\({}^{s}\) +41\({}^{\circ}\) 19\({}^{\prime}\) 00\({}^{\prime\prime}\), was chosen based on the main concentration of OB stars found by multiple authors. The 8\({}^{\prime}\).0 offset between pointing centers was chosen to be slightly less than the 8\({}^{\prime}\).3 size of an ACIS chip so that chip gaps are not co-added to create small scale dips in the effective exposure time. The inner part of the field is covered by four exposures, to give a total nominal exposure of \(\sim\)120 ks over a \(42\mbox{${}^{\prime}$}\times 42\mbox{${}^{\prime}$}\) area (0.5 deg\({}^{2}\), hereafter referred to as the _deep inner_ region). The outer regions are covered by two observations, and the four corners covered by one observation. The final position and overall position angle of the array was chosen to maximize the extent of the association covered (as traced by the positions of the known OB stars) as well as maximize the alignment of the different observations.
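The raster geometry is straightforward to reproduce. The Python sketch below generates nominal aim points for a 6×6 grid with 8′ spacing rotated about the array center quoted above; the helper function name and the tangent-plane (small-angle) approximation are assumptions of this illustration, so it reproduces the tiling geometry rather than the exact aim points listed in Table 1.

```python
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

def raster_aim_points(center, n=6, spacing_arcmin=8.0, pa_deg=27.2):
    """Nominal aim points of an n x n raster, rotated by pa_deg about `center`.

    Uses a tangent-plane approximation: offsets are applied as
    delta-RA*cos(Dec) and delta-Dec, which is adequate for a ~1 deg field.
    """
    idx = np.arange(n) - (n - 1) / 2.0              # -2.5 ... +2.5 for n = 6
    dx, dy = np.meshgrid(idx * spacing_arcmin, idx * spacing_arcmin)
    pa = np.deg2rad(pa_deg)
    # Rotate the grid by the survey position angle
    dxr = dx * np.cos(pa) - dy * np.sin(pa)
    dyr = dx * np.sin(pa) + dy * np.cos(pa)
    ra = center.ra + (dxr * u.arcmin) / np.cos(center.dec)
    dec = center.dec + dyr * u.arcmin
    return SkyCoord(ra=ra.ravel(), dec=dec.ravel())

center = SkyCoord("20h33m12s", "+41d19m00s")        # array center from the text
aims = raster_aim_points(center)
print(len(aims), "pointings; first:", aims[0].to_string("hmsdms"))
```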
### Survey and Supplemental Observations
The survey observations were performed over a 6-week period from January – March 2010. All observations utilize the ACIS in imaging mode, which comprises four CCDs (chips I0-I3), each with \(1024\times 1024\) pixels (at a scale of 0.492\({}^{\prime\prime}\) pix\({}^{-1}\)), giving a \(17\mbox{${}^{\prime}$}\times 17\mbox{${}^{\prime}$}\) field of view (FoV). Some of the ACIS-S chips were turned on during the observations, but due to the high off-axis angle to these chips and consequently large PSF we have not used this data in this work. All observations were performed in very faint mode. The indices \(X-Y\) (1-1 through 6-6) describe the field numbers, where \(X\) is an index in R.A. and \(Y\) an index in declination, and 1-1 being in the bottom right (SW) corner of the grid and 1-6 being in the top right (NW) corner. The pointings and overall survey grid were observed at a nominal roll angle of 27.2\({}^{\circ}\), with the majority of pointings within the central grid of \(4\times 4\) pointings stable to \(\pm 10\) deg (see Figure 1).
None of the 30 ks observations were scheduled to be split due to observational or thermal constraints on the spacecraft, but the first observation, ObsID 10939, was interrupted after 24 ks for a spacecraft software reboot and the remaining 6 ks were observed as ObsID 12099. The mean effective exposure time per pointing (not per ObsID) for the 36 survey pointings is 28.1 ks, when only the good time intervals (GTIs) are used. The minimum and maximum exposure times per pointing are 27.3 ks and 30.6 ks. Over the inner region of \(42\mbox{${}^{\prime}$}\times 42\mbox{${}^{\prime}$}\) the mean total exposure time per tile is 116.3 ks, with a variation of \(\pm 0.7\) ks (0.6%).
Obs. ID | Grid number | R.A. (J2000) | Dec. (J2000) | Roll (deg) | Start Time (UT) | Exp. Time (ks)
---|---|---|---|---|---|---
4358 | - | 20 32 05.54 | +41 30 31.98 | 195.4 | 2002 Aug 11, 19:50 | 5.0
4511 | - | 20 33 12.22 | +41 15 00.68 | 349.0 | 2004 Jan 16, 10:51 | 97.8
4501 | - | 20 32 05.75 | +41 30 38.98 | 170.4 | 2004 Jul 19, 02:03 | 49.4
7426 | - | 20 35 47.73 | +41 23 12.98 | 58.4 | 2007 Mar 17, 16:43 | 20.1
10939 | 1-2 | 20 32 06.48 | +40 59 12.30 | 359.8 | 2010 Jan 25, 13:14 | 24.4
10940 | 1-3 | 20 31 47.00 | +41 06 19.22 | 1.0 | 2010 Jan 26, 12:05 | 28.1
10941 | 1-4 | 20 31 27.53 | +41 13 26.14 | 2.6 | 2010 Jan 27, 19:00 | 30.1
10942 | 1-5 | 20 31 08.05 | +41 20 33.06 | 4.2 | 2010 Jan 29, 03:00 | 27.3
10943 | 1-6 | 20 31 48.58 | +41 27 39.99 | 6.3 | 2010 Jan 30, 21:19 | 28.8
10944 | 2-6 | 20 31 26.47 | +41 31 19.39 | 12.9 | 2010 Feb 01, 10:57 | 28.7
10945 | 3-6 | 20 32 04.37 | +41 34 58.80 | 12.9 | 2010 Feb 01, 19:13 | 28.2
10946 | 4-6 | 20 32 42.26 | +41 38 38.21 | 12.9 | 2010 Feb 02, 09:40 | 28.7
10947 | 5-6 | 20 33 20.15 | +41 42 17.61 | 12.9 | 2010 Feb 03, 22:14 | 27.6
10948 | 6-6 | 20 33 58.05 | +41 45 57.02 | 12.9 | 2010 Feb 04, 20:40 | 27.6
10949 | 5-5 | 20 33 39.63 | +41 35 10.69 | 12.9 | 2010 Feb 05, 04:41 | 27.8
10950 | 4-5 | 20 33 01.73 | +41 31 31.29 | 12.9 | 2010 Feb 07, 17:04 | 27.8
12099 | 1-2 | 20 32 06.48 | +40 59 12.30 | 6.7 | 2010 Feb 08, 17:13 | 6.0
10951 | 3-5 | 20 32 23.84 | +41 27 51.88 | 19.6 | 2010 Feb 11, 13:54 | 29.6
10952 | 2-4 | 20 32 05.42 | +41 17 05.55 | 20.2 | 2010 Feb 11, 22:27 | 29.6
10953 | 2-3 | 20 32 24.90 | +41 09 58.63 | 20.6 | 2010 Feb 12, 07:05 | 29.3
10954 | 5-3 | 20 34 18.58 | +41 20 56.85 | 20.4 | 2010 Feb 12, 15:26 | 29.5
10955 | 4-3 | 20 33 40.68 | +41 17 17.44 | 27.4 | 2010 Feb 14, 18:37 | 27.8
10956 | 3-3 | 20 33 02.79 | +41 13 38.04 | 27.4 | 2010 Feb 15, 02:41 | 28.1
10957 | 4-4 | 20 33 21.21 | +41 24 24.36 | 27.4 | 2010 Feb 16, 15:50 | 29.4
10958 | 3-4 | 20 32 43.32 | +41 20 44.96 | 27.4 | 2010 Feb 17, 00:40 | 29.4
10959 | 5-4 | 20 33 59.10 | +41 28 03.77 | 27.4 | 2010 Feb 17, 09:15 | 28.5
10960 | 4-2 | 20 34 00.16 | +41 10 10.52 | 27.4 | 2010 Feb 17, 21:55 | 28.6
10961 | 3-2 | 20 33 22.27 | +41 06 31.12 | 27.4 | 2010 Feb 22, 06:55 | 30.2
10962 | 2-5 | 20 31 45.95 | +41 24 12.47 | 27.4 | 2010 Feb 22, 15:45 | 29.8
10963 | 5-2 | 20 34 38.05 | +41 13 49.93 | 27.4 | 2010 Feb 23, 00:13 | 29.8
10964 | 2-2 | 20 32 44.37 | +41 02 51.71 | 27.4 | 2010 Feb 24, 14:36 | 30.1
10965 | 6-5 | 20 34 17.52 | +41 38 50.10 | 27.4 | 2010 Feb 24, 23:34 | 30.0
10966 | 6-4 | 20 34 37.00 | +41 31 43.18 | 36.7 | 2010 Mar 02, 04:41 | 29.6
10967 | 6-3 | 20 34 56.47 | +41 24 36.26 | 36.7 | 2010 Mar 02, 13:15 | 29.2
10968 | 6-2 | 20 35 15.95 | +41 17 29.34 | 36.7 | 2010 Mar 02, 21:35 | 29.2
10969 | 1-1 | 20 32 25.95 | +40 52 05.38 | 39.2 | 2010 Mar 03, 05:56 | 29.1
10970 | 2-1 | 20 33 03.85 | +40 55 44.79 | 41.7 | 2010 Mar 05, 16:04 | 29.9
10971 | 3-1 | 20 33 41.74 | +40 59 24.20 | 45.2 | 2010 Mar 08, 21:25 | 30.6
10972 | 4-1 | 20 34 19.63 | +41 03 03.60 | 45.7 | 2010 Mar 09, 20:12 | 28.0
10973 | 5-1 | 20 34 57.53 | +41 06 43.01 | 46.0 | 2010 Mar 10, 04:40 | 27.8
10974 | 6-1 | 20 35 35.42 | +41 10 22.42 | 46.2 | 2010 Mar 10, 12:35 | 27.8
Notes. Observations are listed in date order. The aim points and roll angles are obtained from the satellite aspect solution before astrometric corrections are applied. Units of right ascension are hours, minutes, and seconds; units of declination are degrees, arcminutes, and arcseconds. ObsID 12099 is the second part of the observation of field 1-2, the first part of which, ObsID 10939, was interrupted to allow a spacecraft software reboot.
Table 1. Log of Chandra observations
These observations were combined with four existing observations that fell within the survey area and which had previously been used to study the Cyg OB2 association and some of its members by other authors (Butt et al., 2006; Albacete Colombo et al., 2007; Wright & Drake, 2009). In the center of the association this results in a maximum exposure time of a non-negligible area of \(\sim\)214 ks. In total 41 ObsIDs were used for this work, and these are listed in Table 1.
### Data Processing
The data from all 41 ObsIDs were uniformly processed using the CIAO 4.5 software tools¹(Fruscione et al., 2006), the _yaxx²_ tool, the _pyyaks³_ tool, and the CALDB 4.5.8 calibration files. The standard Level-1 and Level-2 data products were downloaded from the _Chandra_ Data Archive for all ObsIDs. After the data were processed we determined astrometric corrections (Section 2.4) and then reprocessed the data as outlined here.
Data reduction began with the Level 1 event files using the CIAO acis_process_events tool to perform background cleansing and gain adjustments. A new Level 2 event file was produced by filtering out events with non-zero status and bad grades (events with grades 1, 5, or 7 were removed). While running the acis_process_events tool we enabled very-faint mode processing and turned off pixel randomisation, applying instead the sub-pixel EDSER (Energy-Dependent Subpixel Event Repositioning) algorithm, which should result in the optimal event positions.
Intervals of high background were determined by creating a background light curve for the ACIS-I events with point sources found by wavdetect removed. No observations showed intervals with a significant deviation from the quiescent background level (with the exception of those that included the bright X-ray source Cyg X-3, see below). The background is very stable for the observations that avoid Cyg X-3 with a typical rate of \(\sim 4.9\times 10^{-7}\) counts s\({}^{-1}\) pixel\({}^{-1}\). In the 5 pointings (6 ObsIDs: 10939, 10940, 10964, 10969, 10970, 12099) that include Cyg X-3 the background is higher and has significant spatial variability due to the wings of the PSF from Cyg X-3. Background rates in these ObsIDs are typically (1–\(4)\times 10^{-6}\) counts s\({}^{-1}\) pixel\({}^{-1}\), which significantly reduces the sensitivity in these areas.
### Astrometric Corrections
The absolute astrometry provided by the _Chandra_ spacecraft is accurate to one ACIS-I pixel (0.6\({}^{\prime\prime}\)at 90% confidence, POG⁴, Section 5). To avoid a loss of sensitivity when merging data from different observations, and to provide the most accurate positions for cross-matching to other wavelengths, the astrometry must be corrected to a common frame of reference. To do this a list of bright X-ray sources for each of the 41 ObsIDs was generated using the standard CIAO wavdetect tool, considering only bright sources (\(\geq 10\) photons) on-axis (within 4\({}^{\prime}\) of the aim point). This list was then cross-matched with the 2MASS (Two Micron All Sky Survey, Skrutskie et al., 2006) point source catalog using a matching radius of 1\({}^{\prime\prime}\) and using only sources with ‘AAA’ quality photometry and errors \(<0.1\) mag in all three bands. Only sources in the magnitude range \(K_{s}=8\)–14 mag were used, which minimizes systematic effects introduced by bright stars (saturation) and faint background objects (misidentification).
From the list of cross-matched sources, which typically contained between \(\sim\)20 and \(\sim\)100 sources per ObsID, astrometric offsets were calculated in RA and Dec. The spacecraft roll angle is known to a very high precision based on guide stars spaced a degree apart or more on the sky, and consequently astrometric uncertainties arising from roll angle uncertainties are negligible compared to those arising from knowledge of absolute pointing. Roll angle changes were therefore set to zero for all ObsIDs for the purposes of astrometric corrections. The offsets between the X-ray and near-IR positions have mean values of \(\Delta\)R.A. \(=0.04\)\({}^{\prime\prime}\) and \(\Delta\)Dec. \(=0.03\)\({}^{\prime\prime}\) with all values smaller than \(\sim\)0.1\({}^{\prime\prime}\). The astrometric offsets were applied to the L1 data products to create new aspect solutions for each ObsID and the data were then re-reduced following the procedure in Section 2.3.
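A minimal Python sketch of this offset calculation, using astropy's catalog matching, is given below. The array names (`xray_coords`, `tmass_coords`, `tmass_k`) are placeholders for the wavdetect and 2MASS source lists; the 1″ matching radius and the \(K_{s}=8\)–14 mag cut follow the text, but the routine is illustrative rather than the pipeline actually used.

```python
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

def astrometric_offsets(xray_coords, tmass_coords, tmass_k, match_radius=1.0 * u.arcsec):
    """Mean RA/Dec offsets between X-ray and 2MASS positions.

    xray_coords, tmass_coords : SkyCoord arrays; tmass_k : Ks magnitudes.
    Only 2MASS sources with Ks = 8-14 mag are used, as in the text.
    """
    good = (tmass_k > 8.0) & (tmass_k < 14.0)
    ref = tmass_coords[good]
    idx, sep, _ = xray_coords.match_to_catalog_sky(ref)
    matched = sep < match_radius
    # On-sky offsets (X-ray -> 2MASS) for the matched pairs
    dra, ddec = xray_coords[matched].spherical_offsets_to(ref[idx[matched]])
    return dra.to(u.arcsec).mean(), ddec.to(u.arcsec).mean()
```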
### Exposure maps and survey sensitivity
Exposure maps were constructed for each ObsID on a per-CCD basis and in multiple energy bands using the standard CIAO tool sequence of asphist, mkinstmap, and mkexpmap. The exposure maps were calculated using a thermal plasma model spectrum with \(kT=1.35\) keV and \(N_{H}=1.25\times 10^{22}\) cm\({}^{-2}\) (typical of a stellar coronal X-ray source at the extinction of Cyg OB2). Figure 1 shows the exposure map, which clearly shows the central region composed of four overlapping pointings and complemented by the two existing deep pointings.
<figure><img src="content_image/1408.6579/x2.png"><figcaption>Figure 2.— Top: Histogram of the exposure times in all observations used inthis work. The narrow peaks lie at the 1, 2, and 4 exposure values generatedfrom the overlapping gridded observations. Peaks are also visible at ∼80 ksand ∼140 ks where, respectively, 3 and 5 grid pointings overlap due tovariations in the roll angles of neighboring ObsIDs, as well as at ∼170 and∼220 ks where the existing 50 and 100 ks observations overlap with the centralarea of 4 grid pointings. The broader bases correspond to overlaps caused byslight variations in the roll angles of the ObsIDs.</figcaption></figure>
Figure 2 shows a histogram of the exposure times, the narrow peaks representing the 1, 2, and 4 ObsID exposure values. Also visible are a number of smaller peaks where the existing 50 and 100 ks observations overlap with the deep central region (peaks at 170 and 220 ks) and where variations in the roll angles of neighboring ObsIDs lead to areas covered by 3 and 5 grid pointings (peaks at 80 and 140 ks respectively). Based on the cumulative fraction of exposure times, approximately 95% of the 0.97 deg\({}^{2}\) survey area has an exposure of at least 30 ks, \(\sim\)70% has at least 60 ks, and \(\sim\)40% has \(\sim\)120 ks. 25% of the survey area exceeds 120 ks due to offset roll angles and the two deeper pointings.
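These cumulative coverage fractions can be recomputed directly from a merged exposure map, as in the following sketch; the file name is hypothetical and the pixel values are assumed to give effective exposure in seconds.

```python
import numpy as np
from astropy.io import fits

# Hypothetical merged exposure map; pixel values assumed to be effective exposure (s)
expmap = fits.getdata("merged_expmap.fits").astype(float)
valid = expmap > 0                          # pixels inside the survey footprint
exposure_ks = expmap[valid] / 1000.0

for limit in (30, 60, 120):
    frac = np.mean(exposure_ks >= limit)    # area fraction at or above this depth
    print(f"fraction of survey area with >= {limit} ks: {frac:.2f}")
```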
### Removal of Cyg X-3 Readout Streaks
<figure><img src="content_image/1408.6579/x3.png"><figcaption>Figure 3.— Example image of the Cyg X-3 readout streak in ObsID 10964 before(left) and after (right) application of a hard mask to remove events from theevent file. Each image is approximately 5′ wide and aligned with north up andeast to the left. The readout streak is orientated along a chip column, theposition angle of which is dictated by the observation roll angle.</figcaption></figure>
When ACIS reads out it is still taking data, and therefore bright sources continue to expose the entire column in which the source lies, producing _readout streaks_. Figure 3 shows the readout streaks caused by Cyg X-3 that are present in 6 of our ObsIDs (including ObsID 10939 that was split in two). In these images Cyg X-3 can be clearly seen as the bright source, with the core of Cyg X-3 faint due to pileup (when two or more photons are detected as a single event often with a bad grade and the photon counting rate is therefore underestimated) and bad event grades (from piled-up photons). When the data are processed, some of the readout streak events are rejected because of bad grades, but the majority remain.
There are a number of reasons why these readout streaks should be removed prior to data analysis. These events are often falsely identified as real sources by source detection codes, though the positions of these sources allow them to be easily removed. More importantly, these events can contribute to the background regions of nearby sources, leading to incorrectly high background estimates and low source significances.
A number of methods were considered for dealing with these readout streaks. Some studies have incorporated the readout streaks into the background model (e.g., Evans et al., 2010), though this can be complex when different pieces of software are used for data reduction and analysis. The CIAO tool acisreadcorr can be used to remove readout streaks whilst retaining true background photons by using a background spectrum to separate background and readout events. However, because Cyg X-3 is so bright the background surrounding the readout streaks is actually the wings of the PSF (see Figure 3) and therefore has the same spectral shape, so this tool cannot distinguish between the two. Another method is to use the CIAO tool dmfilth, which replaces pixel values in a source region with events interpolated from surrounding background regions. However, the tool does not provide full event information on all reproduced events and so many extraction or analysis tools cannot work on such data products.
The method that was used was to implement a ‘hard’ mask for the data, completely removing all events that fall within certain regions. The exposure map was also masked in this way so that the various data reduction tools used consider these regions to have zero exposure. This was implemented using two masks, one was circular, centered on Cyg X-3 with a radius of 40 pixels, removing the detailed substructure of _Chandra_’s PSF that is evident in the brightest of sources. The second mask is a long rectangle that covers the readout streak along the length of the CCD on which Cyg X-3 was imaged, with a width of 10–20 pixels (depending on the width of the streak, which is determined by the size of the Cyg X-3 PSF and therefore the off-axis angle of Cyg X-3). Both of these distances were estimated as the distances at which the count rates approached that of the background level. These masks were first applied to the exposure map using dmcopy and then dmimgpick was used to remove events from the level 2 event file that fell within zero-valued pixels of the exposure map. This was successful in removing the readout streaks in all 6 ObsIDs that included Cyg X-3. An example of this is shown in Figure 3.
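The sketch below illustrates the geometry of such a hard mask in sky-pixel coordinates with plain numpy; it is not the dmcopy/dmimgpick workflow itself, and the source position, streak orientation, and mask sizes are placeholders.

```python
import numpy as np

def streak_mask(x, y, src_xy, streak_angle_deg, r_src=40.0, half_width=10.0):
    """Boolean mask that is True for events to KEEP.

    x, y             : event coordinates (sky pixels)
    src_xy           : (x, y) of the bright source (e.g., Cyg X-3)
    streak_angle_deg : on-sky angle of the readout column (set by the roll angle)
    r_src            : radius of the circular mask around the source (pixels)
    half_width       : half-width of the rectangular streak mask (pixels)
    """
    dx, dy = x - src_xy[0], y - src_xy[1]
    in_circle = np.hypot(dx, dy) < r_src
    # Perpendicular distance of each event from the line through the source
    # along the streak direction
    ang = np.deg2rad(streak_angle_deg)
    perp = np.abs(-np.sin(ang) * dx + np.cos(ang) * dy)
    in_streak = perp < half_width
    return ~(in_circle | in_streak)
```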
<figure><img src="content_image/1408.6579/x4.png"><figcaption>Figure 4.— An inverted three-color (RGB = soft 0.5–1.2 keV / medium 1.2–2.0keV / hard 2–7 keV) X-ray image of the center of Cyg OB2 showing observationsfrom the Cygnus OB2 Chandra Legacy Survey. The X-ray data has been processed,cleansed and flux calibrated as described in the text. The image isapproximately 23′×17′ with North up and East to the left. The brightest X-raysources visible in this image are predominantly OB stars, with the six mostluminous objects in the centre of the image being the trapezium of Cyg OB2 #8(upper left), #9 and #22 (center and lower left), the blue hypergiant Cyg OB2#12 and MT267 (center right) and Cyg OB2 #5 (upper right). See Negueruela etal. (2008) for a near-IR image with a similar field of view. Many hundreds oflower-mass stellar X-ray sources are also visible.</figcaption></figure>
Figure 4 shows an inverted three-color X-ray image of the center of the association made by combining the completely processed and cleansed X-ray data in the _soft_, _medium_ and _hard_ bands. The image has been mosaiced using CIAO and flux calibrated by dividing by the exposure map. The figure reveals the large number of resolved X-ray point sources visible in the data.
## 3. Source Detection
In this section we outline the methods employed to detect possible X-ray sources using a variety of tools. The validity of these sources is determined using more complex tools that take into account the source and background photons in multiple ObsIDs, resulting in a statistical quantification of the source validity (see Section 4). Therefore the goal of source detection is to detect as many sources as is feasibly possible such that after applying an iterative and stringent source validation program the majority of acceptable sources are retained and the weak or false sources weeded out. This method is adopted over an initially conservative source detection method as, despite the extra work involved, we believe it is more likely to produce a larger list of valid sources.
In order to fully exploit the wide and deep observations, and the fact that the majority of sources would be detected in multiple ObsIDs at different off-axis angles, particular care had to be taken to maximize the number of sources that could be detected. We applied three different source detection algorithms at a range of spatial scales and in several different energy bands, specifically the CIAO wavdetect(Freeman et al., 2002) and Palermo Wavelet Detection (PWDetect, Damiani et al., 1997) codes, as well as a new and enhanced multi-ObsID version of CIAO wavdetect. We adopted three energy bands for the source detection: _soft_ (0.5–2.0 keV), _hard_ (2.0–7.0 keV), and _broad_ (0.5–7.0 keV). We used an upper limit of 7 keV instead of the more typical value of 8 keV because above this energy there is a rise in the instrumental background due to charged particle impact, combined with fluorescent instrumental lines of Ni K and Au L (see the POG), which reduces the detection significances of faint sources. To maximize the number of sources in our final catalog we complemented our candidate source list with the positions of known sources in Cyg OB2, visually inspecting all sources in the X-ray image before adding them.
### wavdetect
The CIAO tool wavdetect was run on each CCD of each ObsID using wavelet scales of 2, 4, 8, 16, and 32 pixels (to be sensitive to both point-like and moderately extended sources, or sources at large off-axis angles) and at multiple detection thresholds. Our first list of candidate sources was generated using a conservative threshold of \(S_{0}=10^{-6}\)(see Freeman et al., 2002), and then supplemented by additional lists generated using liberal thresholds of \(S_{0}=10^{-5}\) and \(10^{-4}\). The highest threshold at which each source was detected was stored for later reference when the source lists were merged. Whilst a significance threshold of \(10^{-4}\) is rarely used (due to the large fraction of spurious sources it generates) it was used in our work because of the need to detect faint sources observed in multiple ObsIDs and at different off-axis angles. These sources were only retained if they were detected in another ObsID or with another method. We found that across our observations the number of sources detected at thresholds of \(10^{-6}\):\(10^{-5}\):\(10^{-4}\) was approximately in the ratio 1:1.4:2.6.
### PWDetect
Source detection was also performed with the wavelet-based source detection algorithm _PWDetect_(Damiani et al., 1997) at a detection limit of \(\sigma=4.1\), which should produce \(\sim\)20 spurious detections within each CCD that the code is run on. PWDetect is optimized to take into account the spatial variation of the PSF across the _Chandra_ FoV, such that at a given position the wavelet transform is only made at scales no smaller than about 1/2 the PSF sigma (since there is no useful information smaller than this).
### _Enhanced_wavdetect
A limitation of the two source detection algorithms used above is that they cannot be run on multiple observations with different field centers. Non-aligned X-ray observations cannot be merged in the traditional sense because the PSF size and shape varies as a function of the off-axis angle, and therefore traditional wavelet-based source detection that takes into account the specific PSF shape at a given point on the CCD would be misguided. Our tiled observing strategy means that for a typical constant source a single ObsID only accounts for 25% of the observing time, thus limiting our potential to detect the faintest sources possible.
To overcome this we have developed an enhanced version of CIAO’s wavdetect that allows source detection on multiple overlapping observations. This improves the sensitivity of the survey and allows us to detect weak sources in the full dataset that are below the detection threshold of individual observations. The method uses the two components of wavdetect, wtransform and wrecon, to self-consistently take into account the size of the PSF in different observations. The wtransform routine, which carries out a wavelet transformation of the data, is applied to all observations at the appropriate PSF size. Since wavelet transformation is a linear process, the correlation value of a wavelet scale applied to a combination of datasets is simply \(C_{ij}^{(1+2+...)}=C_{ij}^{(1)}+C_{ij}^{(2)}+...\) (as long as the PSF size criteria is met for all datasets). Only observations where the wavelet scale is larger than the PSF size are combined, thus limiting the number of false detections that arise at scales smaller than the PSF size. We recompute the detection threshold at each pixel for each scale and detect a source if \(C_{ij}^{i+j+...}>\mathcal{T}_{ij}^{i+j+...}\). The threshold is computed based on the summed backgrounds from all the observations, \(b^{i+j+...}=b^{i}+b^{j}+...\), excluding those observations where the wavelet scale is smaller than the local PSF size, and is therefore equivalent to the threshold used with CIAO wavdetect. Furthermore, because wrecon is used exactly as it would be with CIAO’s wavdetect, both the thresholds and the number of false positives remain the same.
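The combination rule can be summarized in a few lines of Python. In the sketch below, the per-ObsID correlation maps, background maps, and PSF-size maps are assumed to have already been computed and reprojected onto a common sky grid (e.g., by wtransform and auxiliary tools), and `threshold_fn` stands in for the wavdetect threshold machinery; none of these names come from the actual code.

```python
import numpy as np

def combine_wavelet_maps(corr_maps, bkg_maps, psf_sizes, scale, threshold_fn):
    """Sum per-ObsID wavelet correlation maps at one scale and flag detections.

    corr_maps, bkg_maps : lists of 2D arrays on a common sky grid (one per ObsID)
    psf_sizes           : list of 2D arrays of the local PSF size per ObsID
    scale               : wavelet scale (pixels)
    threshold_fn        : maps a summed background image and a scale to a
                          detection-threshold image (placeholder)
    """
    corr_sum = np.zeros(corr_maps[0].shape, dtype=float)
    bkg_sum = np.zeros(bkg_maps[0].shape, dtype=float)
    for corr, bkg, psf in zip(corr_maps, bkg_maps, psf_sizes):
        use = psf <= scale       # only ObsIDs where this scale resolves the PSF
        corr_sum += np.where(use, corr, 0.0)
        bkg_sum += np.where(use, bkg, 0.0)
    # Detection wherever the summed correlation exceeds the summed-background threshold
    return corr_sum > threshold_fn(bkg_sum, scale)
```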
We ran the code on “tiles”, which consist of multiple overlapping CCDs built up from our tiled observing strategy. Our observations consisted of 49 tiles, of which 45 have two or more observations overlapping. The _Enhanced_wavdetect procedure was therefore run on each tile + band combination. We used a threshold of \(S_{0}=10^{-5}\) (more conservative than used with wavdetect because of the large number of sources that were detected at more liberal levels of detection) which resulted in a total of 21,121 detections. The vast majority of these were detections of the same source at different spatial scales and in different energy bands, or were duplications of sources already detected by wavdetect. These duplications were removed following the same technique as employed by wavdetect itself, leaving a total of 2,635 new candidate sources from running the _Enhanced_wavdetect.
### Additional lists of known sources
The source detection methods outlined above have hopefully detected the majority of significant X-ray sources in our observations, but given the complex nature of both X-ray observations and our tiled observing strategy it is possible some faint sources may have evaded detection. Lowering the source detection threshold would likely increase the number of valid sources detected (those that would survive our source verification process) but would also have dramatically increased the number of false sources detected (those that would be discarded at the source verification stage). Faced with a situation of diminishing returns it is natural to draw a line in the source detection process at a certain level. However, some valid source may have escaped detection and thus other methods that could identify these sources should be considered. Whilst the primary objective of these X-ray observations is to compile a catalog of previously unknown members of Cyg OB2 there is also significant scientific merit in studying the X-ray properties of previously known members of the association.
For this reason we supplemented the list of X-ray detected sources with 634 additional sources from lists of known Cyg OB2 members, including O and B-type stars (Wright, 2014), young A-type stars in the association (Drew et al., 2008), and lower mass stars at unique evolutionary phases (Wright et al., 2012). These sources had not been detected in the original source detection methods outlined above but were included in the list of candidate X-ray sources to be verified in our full extraction process. Following the full source extraction and verification process only 100 of these sources were retained in our catalog, and these are noted in the final catalog. These 100 sources comprise \(\sim\)1% of the final catalog.
### Combining the source lists
The different source detection methods outlined above produced a total of 54,311 sources (22,443 from wavdetect, 28,599 from _PWDetect_, 2,635 additional sources from the _Enhanced_wavdetect, and 634 additional sources), many of which were duplicates detected by different methods and therefore needed to be removed. To understand the extent of the source duplication we performed a simple comparison of the source detection results from the main two codes, wavdetect and _PWDetect_ (which were responsible for the vast majority of sources detected), on a single ‘tile’ of four overlapping CCDs in our observations (after merging the multiple source lists from the four overlapping CCDs and removing multiple detections).
For this experiment wavdetect was run at a threshold of \(S_{0}=10^{-6}\) and _PWDetect_ was run with a detection limit of \(\sigma=4.7\), both of which should statistically result in 1 false source per CCD (therefore 4 false sources per ‘tile’; note that these false source numbers are estimates and may not be correct). With these settings wavdetect detected 148 sources and _PWDetect_ 185 sources, with an overlap of 125 (overlap fractions of 84% and 68% respectively). Decreasing the source detection thresholds for both algorithms (and thereby increasing the number of expected false sources to \(\sim\)10 per CCD, though there may be more or less than this) resulted in overlap fractions of 71% and 57% for the two methods (detection increases of 39% and 38% respectively), suggesting there was an increase of un-matched ‘false positives’ and a tendency for the two algorithms to uncover different faint sources. _PWDetect_ generally detected more sources in our tests, particularly at large off-axis angles, but the positions it determined were less accurate (with a larger standard deviation between positions detected from different ObsIDs and positions of known sources). Sometimes a source would be identified as one source by one code, but as two sources by another code (it was more common for wavdetect to identify a source as two separate sources). In these situations it was our belief that both sources should be taken through to the source verification stage and assessed there.
Because of the larger number of detections and the high overlap between source detection methods an automated detection merging process was required. The method employed was based on that used by the _Chandra_ Source Catalog (Evans et al., 2010, see Appendix A) and begins by building ‘groups’ of detections with overlapping error ellipses. This method is based on the assumption that all our sources are point-like (a valid assumption given that the great majority of sources are expected to be stellar) and that merging detections at different energies is no different from merging detections from different source detection algorithms.
Three types of group are possible from this method: unambiguous single detections, unambiguous multiple detections, and ambiguous multiple detections. The first of these is a single detection that does not overlap with any others and so is preserved as a single ‘source’ (there were 8568 of these in our detection lists). The second of these is a group of multiple detections all of whose error ellipses overlap with all the other error ellipses of detections in the group. This group is therefore consistent with being a single source, of which we find 3124 in our detection lists. Finally an ambiguous group of multiple detections consists of detections whose error ellipses overlap, but not all error ellipses overlap with all other error ellipses and is therefore not consistent with being a single source. Visual inspection showed that the vast majority of these (of which there were 1349 in our detection list) were caused by multiple close sources whose error ellipses overlapped in one or more detections leading to a ‘chain’ of detections (with group sizes of between 3–32 detections). These groups were resolved by visual inspection, adopting detections and error ellipses made at the smallest off-axis angle (and therefore the best spatial resolution) in uncertain cases. Finally, as described in Section 4, all sources were visually inspected to remove obviously spurious sources due to detector artifacts such as readout streaks.
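A simplified version of this grouping, approximating the error ellipses by circles and using a union-find structure to build the groups, is sketched below; a group is flagged as unambiguous only if every pair of its members overlaps. The inputs and function name are illustrative, not taken from the Chandra Source Catalog code.

```python
import numpy as np
from itertools import combinations

def group_detections(x, y, err):
    """Group detections whose (circularised) error regions overlap.

    x, y, err : arrays of detection positions and 1-sigma positional errors.
    Returns a list of (members, unambiguous) pairs, where `members` is a list
    of detection indices and `unambiguous` is True if every pair overlaps.
    """
    n = len(x)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    overlaps = lambda i, j: np.hypot(x[i] - x[j], y[i] - y[j]) < err[i] + err[j]
    for i, j in combinations(range(n), 2):  # O(n^2), fine for a sketch
        if overlaps(i, j):
            parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)

    out = []
    for members in groups.values():
        unambiguous = all(overlaps(i, j) for i, j in combinations(members, 2))
        out.append((members, unambiguous))
    return out
```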
### Final candidate source list
<figure><img src="content_image/1408.6579/x5.png"><figcaption>Figure 5.— Final distribution of the 7924 X-ray sources in the Chandra CygnusOB2 Legacy Survey after removing all sources that failed the sourceverification procedure (Section 4). The sources are color-coded based on thedetection method used to add the sources to the initial source candidate list.The 7608 surviving sources detected using either wavdetect or PWDetect (mostlydetected using both codes) are colored grey. The 216 surviving sourcesdetected using the Enhanced wavdetect code (but not detected by other methods)are colored red and the 100 surviving sources added from known source lists(and not detected by other methods) are colored blue.</figcaption></figure>
This process led to a final total number of 13,041 groups which would hereafter be treated as, and referred to as, candidate sources. The positions of the final sources were calculated based on the most on-axis detections of a group (note that positions would be revised later during the full photon extraction). Figure 5 shows the spatial distribution of the final 7924 sources after the full source verification process, color-coded by their detection method.
## 4. Point Source Extraction
In this section we describe the process of extracting point sources from the data, including assessing the validity of the sources, refining our source list, and adjusting the positions of our sources. Point source extraction was performed using the _ACIS Extract⁵_(AE, Broos et al., 2002, 2010) software package, which defines source extraction apertures, extracts source and background spectra, and compiles source photometry, spectra, and lightcurves.
Our source extraction process is divided into two stages, an iterative source verification process (in which candidate sources are extracted and their validity assessed) followed by a full spectral extraction (in which the properties of the source are to be accurately determined). Because the objectives of these two stages are different and, because for the majority of our candidate sources we have multiple observations available to us, we use different combinations of the data to achieve these goals. In the first stage of the extraction, when we wish to assess the validity of our proposed sources, we use the subset of whole observations that optimizes (or minimizes) the quantity \(P_{B}\), the probability that the source is a background fluctuation. This is based on the methodology that a source is deemed to exist if it is significant in any observation, or in any combination of observations. In the second stage we wish to maximize the signal to noise ratio (S/N) of any measured quantities through our extraction. In this instance AE uses a greater subset of the observations, discarding data only when retaining them would significantly degrade the final S/N. These differences are most often manifested through discarding or retaining data at large off-axis angles when data at smaller off-axis angles also exists for a source. A full discussion of this approach can be found in Section 6.2 of Broos et al. (2010).
Because this is the fundamental difference between the two stages of our extraction process we first outline the general extraction process used by both stages, and then highlight the important parts of the extraction process relevant to the two individual stages. A full description of the AE package can be found in Broos et al. (2010), but we reiterate the main steps here.
### The extraction process
A basic extraction process is used whether the goal of the extraction is to assess source validity, determine accurate source positions, or extract source properties. This process begins with assigning an extraction aperture for the source (Section 4.1.1) based on the local PSF and level of crowding. Then a background region is defined (Section 4.1.2) so as to accurately characterize the background contamination in the source aperture. Events are then extracted from these two regions (Section 4.1.3) and the results compared with a model of the source and background that includes the properties of the _Chandra_ observatory so that the intrinsic source properties can be calculated. These steps are described here.
#### 4.1.1 Extraction apertures
Because the _Chandra_ point-spread function (PSF) is both non-circular and also varies significantly across the field of view, the extraction of point-like sources requires a model of the local PSF for each observation of each source (because a source may be observed on-axis in one ObsID and off-axis in another, causing the PSF to vary significantly). AE builds finite extraction apertures from contours of the local PSF at an energy of 1.5 keV enclosing, by default, 90% of the PSF power, decreasing this fraction (to a minimum of 40%) in crowded regions to prevent overlapping extraction apertures from different sources. If a close pair of sources has their extraction apertures decreased to less than 40% of the PSF power then either that observation is not considered by AE for these sources (when multiple observations are available), or else the weaker of the two sources is discarded (on the belief that no meaningful source properties can be determined for the source). In this situation the extraction aperture of the stronger source will increase to the default value of 90%, encompassing the weaker source. This situation therefore reverts to the default situation of a single source, which is appropriate if one cannot reliably prove there to be two sources present.
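The aperture-shrinking logic can be illustrated with a Gaussian-PSF approximation (AE itself uses the actual PSF contours); the sketch below reduces the enclosed-PSF fraction of a close pair in steps until the apertures separate or the 40% floor is reached.

```python
import numpy as np

def shrink_apertures(sep, sigma1, sigma2, f_default=0.90, f_min=0.40, step=0.05):
    """Reduce the enclosed-PSF fraction of two neighbouring sources until
    their apertures no longer overlap (Gaussian-PSF approximation).

    sep            : separation of the two sources (pixels)
    sigma1, sigma2 : Gaussian widths of the local PSFs (pixels)
    Returns the adopted fraction, or None if even f_min still overlaps
    (in which case the observation is dropped or the weaker source discarded).
    """
    # Encircled-energy radius of a 2D Gaussian at fraction f
    radius = lambda f, s: s * np.sqrt(-2.0 * np.log(1.0 - f))
    f = f_default
    while f >= f_min:
        if radius(f, sigma1) + radius(f, sigma2) < sep:
            return f
        f -= step
    return None
```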
A hook-shaped feature, \(0^{\prime\prime}.8\) in length, was discovered in the _Chandra_ PSF in 2011,⁶ extending from the peak of the source in a roll-angle dependent position. It is estimated to contain \(\sim\)5% of the source flux and is therefore only discernible for bright sources observed sufficiently on-axis that the PSF and the hook are compact enough to be individually identified. At the time of writing the exact flux fraction, shape and energy dependence of the feature have not been well characterized and no revised PSF models have been produced. It is therefore possible that our source detection and extraction process may have detected this ‘hook’ as a real source and verified its authenticity. AE includes a tool that identifies where bright PSF hooks would appear in the available data, facilitating an inspection of all sources bright enough (\(>\)100 net counts) and observed on-axis (\(\leq\)4\({}^{\prime}\)) such that the hook might be identified as a separate source. We inspected 688 sources that met these two criteria but could not identify any sources that appeared consistent with being due to the PSF hook feature. The visual inspection process was aided by the fact that each source was typically observed at multiple off-axis angles and roll angles due to our tiling strategy. Since the PSF hook feature would therefore have appeared in different positions in each observation it was straightforward to verify that none of our extracted sources were due to the PSF hook.
#### 4.1.2 Background regions
Background estimates must be obtained “locally” for each source to account for spatial variations in the X-ray background due to diffuse emission or the wings of nearby point sources. For an isolated source a simple background annulus is used, extending from a radius 1.1 times that which encloses 99% of the PSF to a radius such that a minimum of 100 background photons has been gathered, excluding regions around other point sources. AE also adjusts the sizes of the background regions used so that the Poisson uncertainty on the background level contributes no more than 3% of the total uncertainty on the source photometry.
In crowded regions it is often not possible to construct background regions that avoid other point sources, especially since the PSF wings of bright neighbors can be especially wide. To overcome this we employ AE’s alternative background algorithm that constructs background regions that sample nearby bright sources in proportion to their expected contamination of the extraction aperture of the source in question. This is estimated using a spatial background model for all the nearby sources determined using PSF models and estimates of their fluxes, and is then improved over multiple iterations and extractions.
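For the simple (isolated-source) case, the growth of the background region can be sketched as follows; the inner radius and the 100-count requirement follow the text, while the growth factor and the input arrays are illustrative.

```python
import numpy as np

def background_annulus(evt_x, evt_y, src_xy, r99, in_other_source, min_counts=100):
    """Grow a background annulus until it contains `min_counts` usable events.

    evt_x, evt_y    : event coordinates (sky pixels)
    src_xy          : (x, y) of the source being extracted
    r99             : radius enclosing 99% of the local PSF (pixels)
    in_other_source : boolean array, True for events inside any other source region
    """
    r = np.hypot(evt_x - src_xy[0], evt_y - src_xy[1])
    r_in = 1.1 * r99                       # inner radius, as in the text
    r_out = 1.5 * r_in
    use = (r > r_in) & (r < r_out) & ~in_other_source
    while use.sum() < min_counts and r_out < r.max():
        r_out *= 1.2                       # enlarge until enough photons are gathered
        use = (r > r_in) & (r < r_out) & ~in_other_source
    return r_in, r_out, use
```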
#### 4.1.3 Source extraction
Once the extraction apertures are defined for each source, events are extracted by AE using standard CIAO tools, including dmextract to construct source spectra, mkarf to build ancillary response files (ARFs), and mkacisrmf to construct response matrix files (RMFs). Corrections are then applied for the finite size of the PSF and the lost source events falling outside of the extraction aperture. Because _Chandra_’s PSF is energy dependent this correction is calculated at multiple energies and convolved with the ARF to represent the true observatory effective area responsible for the extracted source and background events.
#### 4.1.4 Source repositioning
During the extraction process the positions of source candidates, originally derived from wavelet-based source detection, are updated with AE estimates using a subset of each source’s extractions chosen to minimize the positional uncertainty. AE offers three positional estimates: the _data_ position, calculated from the centroid of all extracted events; the _correlation_ position, derived from the spatial correlation between the _Chandra_ PSF and the events; and the _maximum-likelihood_ position, based on the peak in the maximum-likelihood image reconstruction of the source neighborhood. For uncrowded on-axis sources these positions are very similar and therefore the _data_ position is the easiest to calculate. For off-axis sources the asymmetry in the _Chandra_ PSF can bias this position so the _correlation_ position is often more accurate, while for crowded sources with overlapping PSFs the _maximum-likelihood_ position provides better positional estimates. We follow AE’s guidelines by using this system to allocate positions for all sources while also visually inspecting all repositioned sources. Source repositioning was performed every 2–3 iterations (see Section 4.2) to provide accurate source positions for assessing the source validity during the extraction process and typically resulted in shifts of \(\leq\)0.1\({}^{\prime\prime}\).
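Two of these position estimates are easy to illustrate: the sketch below computes the data position as the event centroid and the correlation position as the peak of the cross-correlation between a binned image of the source neighborhood and a local PSF model (the maximum-likelihood reconstruction is omitted, and the inputs are placeholders).

```python
import numpy as np
from scipy.signal import fftconvolve

def data_position(x, y):
    """'Data' position: centroid of the extracted events."""
    return np.mean(x), np.mean(y)

def correlation_position(image, psf):
    """'Correlation' position: peak of the image cross-correlated with the PSF.

    image : 2D counts image of the source neighbourhood
    psf   : 2D model of the local PSF (same pixel scale)
    """
    corr = fftconvolve(image, psf[::-1, ::-1], mode="same")   # cross-correlation
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    return ix, iy       # pixel coordinates of the correlation peak
```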
### Iterative trimming of the source list
Our liberal source detection strategy relies upon a careful and conservative source verification process such that weak point sources likely to be background fluctuations are removed and only those sources that pass a critical threshold are retained. An accurate assessment of the significance of a candidate source requires a full extraction of the source and background regions that is not possible through the simple source detection process, especially for complex observational datasets such as ours, and must be assessed following a full point source extraction. This requires an iterative process of source extraction and verification.
Traditional source validity is assessed via the signal to noise ratio, defined by the source flux divided by its uncertainty, a quantity that is equivalent to a source significance when the source and background fluxes have Gaussian distributions. However, since most X-ray sources in our sample are quite weak and background count rates are low, the Gaussian approximation is not valid and photon distributions follow Poisson statistics. To overcome this AE assesses the significance of a source using the probability that a source can be explained as a background fluctuation according to Poisson statistics (Broos et al., 2010, Section 4.3), equivalent to testing the null hypothesis that a candidate source does not exist (Weisskopf et al., 2007, Appendix A2). When multiple observations of a source exists AE bases the calculation of \(P_{B}\) on the subset of each source’s extractions that maximize the significance of the source. This prevents extractions at large off-axis angles with a large PSF from biasing the measured significance of a source that is also observed on-axis with a small PSF.
The threshold _‘not-a-source probability’_, \(P_{B}\), that is chosen must balance completeness (the fraction of real sources detected) against reliability (the fraction of sources that are ‘false’). We follow the standard threshold adopted by previous users of AE and require our sources to have a probability of being a background fluctuation of \(P_{B}\leq 0.01\) and calculated from a minimum of 3 net counts. All sources that do not meet these requirements are reviewed visually and then discarded. The visual review is intended to prevent sources that have been wrongly demoted from being lost, such as sources where the extraction aperture was not correctly centered on the source. Such instances were very rare and the sources were retained for another round of extraction and if necessary their extraction apertures were corrected.
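A simplified version of this calculation is shown below: it treats the background expectation, scaled from the background region into the source aperture, as exactly known and evaluates the Poisson tail probability (AE's implementation additionally propagates the uncertainty of the measured background). The numbers in the usage example are invented for illustration.

```python
from scipy.stats import poisson

def prob_no_source(src_counts, bkg_counts, aperture_area, bkg_area):
    """Probability of observing >= src_counts in the source aperture if only
    background were present (the 'not-a-source' probability P_B).

    src_counts    : total counts in the source aperture
    bkg_counts    : counts in the background region
    aperture_area, bkg_area : effective areas (or exposures) of the two regions
    """
    mu = bkg_counts * aperture_area / bkg_area     # expected background in aperture
    return poisson.sf(src_counts - 1, mu)          # P(N >= src_counts | mu)

# A source with 7 counts on an expected background of 1.2 counts:
p_b = prob_no_source(7, 60, 1.0, 50.0)
print(f"P_B = {p_b:.2e}  ->  {'accept' if p_b <= 0.01 else 'prune'}")
```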
Once these faint sources have been discarded the entire catalog is extracted once more, including recalculating source and background regions since these may change if nearby sources have been removed. The net effect of removing sources is that more events will contribute to the background regions of nearby sources, thereby lowering their significance. Therefore sources that survive the source pruning may now fall below the source criteria, requiring another round of source pruning. An iterative sequence of alternating source extraction and source pruning is therefore required and continues until no sources are found to be insignificant. To prevent the accidental removal of a large number of potentially valid sources in an early iteration we started this process with a probability threshold higher (\(P_{B}=0.1\)) than our final threshold, adopting a threshold of \(P_{B}=0.01\) after three iterations. This process left a list of 7924 valid sources. At this stage the remaining sources are considered to be valid sources and a full spectral extraction can be performed.
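A schematic version of this extract-and-prune loop is sketched below; `extract` stands for a hypothetical callback that performs the full extraction of the current source list and returns each source's not-a-source probability, and the threshold schedule (0.1 for the first three iterations, then 0.01) mirrors the one described above.

```python
def prune_catalog(sources, extract):
    """Alternate full extraction with pruning until no source falls below
    the significance criterion at the final threshold."""
    iteration = 0
    while True:
        iteration += 1
        p_max = 0.1 if iteration <= 3 else 0.01    # relaxed threshold early on
        p_b = extract(sources)                     # dict: source -> P_B after re-extraction
        keep = [s for s in sources if p_b[s] <= p_max]
        if len(keep) == len(sources) and p_max == 0.01:
            return sources                         # converged at the final threshold
        sources = keep                             # removed sources change nearby backgrounds
```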
### Full Spectral Extraction
Once a high-confidence list of sources has been produced we perform a full spectral extraction on the sources, following the basic steps outlined above, but this time choosing a subset of the data that optimizes the photometric S/N of any measured quantity, such as source photometry or spectroscopy, without succumbing to photometric bias. AE achieves this by discarding observations only when retaining them would halve the maximum S/N that could be achieved with the observations. This strategy tolerates a slight deterioration of the S/N in order to avoid photometric biases arising from data selection (Broos et al., 2010).
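The data-selection rule can be illustrated with the following simplified sketch, which is not AE's algorithm: observations are ranked by a crude per-observation S/N, the cumulative S/N of each prefix is computed, and the largest subset whose S/N remains at least half of the best achievable value is merged (the function name, ranking criterion and example numbers are assumptions for illustration).

```python
import numpy as np

def select_observations(net_counts, bkg_counts, frac=0.5):
    """Return indices of the observations to merge for photometry."""
    net = np.asarray(net_counts, dtype=float)
    bkg = np.asarray(bkg_counts, dtype=float)
    order = np.argsort(-net / np.sqrt(net + bkg + 1e-9))  # best individual S/N first
    cum_net, cum_bkg = np.cumsum(net[order]), np.cumsum(bkg[order])
    snr = cum_net / np.sqrt(cum_net + cum_bkg)            # crude Gaussian S/N of each prefix
    n_keep = np.max(np.nonzero(snr >= frac * snr.max())[0]) + 1
    return order[:n_keep]

print(select_observations([50, 30, 2, 1, 1], [5, 5, 60, 200, 400]))
# -> [0 1 2 3]; the last, heavily contaminated observation is dropped
```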
Source spectra are extracted along with background spectra and facilitate the calculation of many observed quantities such as the number of source, background, and net photon events, the photon flux (in photons cm\({}^{-2}\) s\({}^{-1}\)) and the photon energy flux (in erg cm\({}^{-2}\) s\({}^{-1}\)). Additional quantities with a high diagnostic value include background-corrected quartiles (25%, 50%, and 75%) of the observed event energy distributions (which provide a valuable characteristic of the observed spectrum for low-count data) and the probability that the photon arrival times can be described by a source with a constant flux (a useful diagnostic of variable sources and an indicator of flare-like events).
### Source Variability
To quantify the level of X-ray variability of a source we follow Broos et al. (2010) and use a one-sample Kolmogorov-Smirnov (KS) test, as implemented by AE, comparing the arrival times of events (in calibrated units of photon ks\({}^{-1}\) cm\({}^{-2}\)) with the null hypothesis of a uniform source flux. This test then provides a probability (a \(p\)-value) that the source is not variable and is assessed both within individual observations and between all observations (accounting for variations in effective area among the observations). While this test does not reveal periodicity or variability near the beginning or end of the observation, and does not take into account variability in the background, it does provide a simple indication of potentially variable behavior that is useful to identify which sources are worthy of further investigation.
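A bare-bones version of such a test (ignoring the effective-area weighting that AE applies) might look like the sketch below; the flare-like example data are invented purely for illustration.

```python
import numpy as np
from scipy.stats import kstest

def variability_p_value(arrival_times, t_start, t_stop):
    """One-sample KS test of photon arrival times against a constant-flux
    (uniform-in-time) model; a small p-value suggests variability."""
    t = (np.asarray(arrival_times, dtype=float) - t_start) / (t_stop - t_start)
    return kstest(t, "uniform").pvalue

# A 'flaring' source: a steady trickle plus a burst of events late in the exposure
rng = np.random.default_rng(0)
times = np.concatenate([rng.uniform(0, 100, 20), rng.uniform(80, 100, 40)])
print(variability_p_value(times, 0.0, 100.0) <= 0.005)  # True: flagged as variable at the 0.005 threshold used below
```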
This variability will be discussed in more detail in Flaccomio et al. (13), but we briefly summarise the single-parameter results here. We consider a source as being variable if the \(p\)-value of its light curve being reproduced by a source of constant flux is \(\leq 0.005\) (\(\sim\)3\(\sigma\) confidence of variability). We identify 628 (8%) sources as being variable within a single observation, while 2193 (28%) exhibit evidence of inter-observation variability (with an overlap between these categories of 432 sources). Because these two variability tests are not independent of each other, the results from the tests should be compared with care. In total, 2389 (30%) sources exhibit some evidence of variability from this single parameter.
## 5. Source catalog properties and statistics
The final source list contains 7924 X-ray sources that pass our threshold significance criteria and are considered to be valid sources. The properties of these sources are presented in a table published electronically and available at the VizieR archive. The column names and descriptions are listed in Table 2.
Column Label | Units | Description
---|---|---
Number | … | Source number used within the Chandra Cygnus OB2 Legacy Survey
Name | … | IAU source name; prefix is CXOCOB2 J (Chandra X-ray Observatory Cygnus OB2)
RA\({}^{a}\) | deg | Right ascension (J2000)
Dec\({}^{a}\) | deg | Declination (J2000)
PosErr\({}^{a}\) | arcsec | 1σ error circle around the source position (Right ascension, Declination)
PosType\({}^{a}\) | … | Method used to estimate source position (see Section 4.1.4)
SuspectFlag | … | Flag indicating that the source is coincident with a known PSF feature (see Section 4.2)
Significance\({}^{b}\) | … | Source significance (calculated from the full band)
ProbNoSrc\({}^{b}\) | … | Base 10 logarithm of the p-value\({}^{c}\) for no-source null hypothesis (minimum value, see Section 4.2)
Prob_band\({}^{b}\) | … | Band used for ProbNoSrc (minimum value, see Section 4.2)
Detect_method\({}^{c}\) | … | Method used for detecting the candidate source
ExpTime_Tot | s | Total exposure time in all observations
ExpTime_Nom\({}^{d}\) | s | Total exposure time in all merged observations for photometry
ExpTime_Frac\({}^{d}\) | … | Fraction of total exposure time merged for photometry
NumObs_Tot | … | Total number of observations of the source
NumObs_Nom\({}^{d}\) | … | Total number of observations merged for photometry
Theta_Lo | arcmin | Smallest off-axis angle for merged observations (see also Fig. 7)
Theta | arcmin | Average off-axis angle for merged observations
Theta_Hi | arcmin | Largest off-axis angle for merged observations
PSF_Frac | … | Average PSF fraction (at 1.5 keV) for merged observations (see Section 4.1.1)
SrcArea | pixel\({}^{2}\) | Average aperture area for merged observations (in pixels)
ProbKS_single | … | Smallest p-value\({}^{e}\) for the non-variable null hypothesis from a single observation
ProbKS_merge | … | Smallest p-value\({}^{e}\) for the non-variable null hypothesis from all merged observations (see Section 4.4)
SrcCnts_full | counts | Observed counts in merged source apertures (full band)
SrcCnts_soft | counts | Observed counts in merged source apertures (soft band)
SrcCnts_hard | counts | Observed counts in merged source apertures (hard band)
BkgCnts_full | counts | Observed counts in merged background regions (full band)
BkgCnts_soft | counts | Observed counts in merged background regions (soft band)
BkgCnts_hard | counts | Observed counts in merged background regions (hard band)
BkgScaling | … | Scaling of the background extraction region
NetCnts_full | counts | Net counts in merged source apertures (full band)
NetCnts_soft | counts | Net counts in merged source apertures (soft band)
NetCnts_hard | counts | Net counts in merged source apertures (hard band)
MeanEffArea_full | cm\({}^{2}\) count photon\({}^{-1}\) | Mean ARF\({}^{f}\) value (full band)
MeanEffArea_soft | cm\({}^{2}\) count photon\({}^{-1}\) | Mean ARF\({}^{f}\) value (soft band)
MeanEffArea_hard | cm\({}^{2}\) count photon\({}^{-1}\) | Mean ARF\({}^{f}\) value (hard band)
MedianEnergy_full | keV | Median energy\({}^{g}\) of the observed spectrum (full band)
MedianEnergy_soft | keV | Median energy\({}^{g}\) of the observed spectrum (soft band)
MedianEnergy_hard | keV | Median energy\({}^{g}\) of the observed spectrum (hard band)
EnergyFlux_full | erg cm\({}^{-2}\) s\({}^{-1}\) | Energy flux of the observed spectrum (full band)
EnergyFlux_soft | erg cm\({}^{-2}\) s\({}^{-1}\) | Energy flux of the observed spectrum (soft band)
EnergyFlux_hard | erg cm\({}^{-2}\) s\({}^{-1}\) | Energy flux of the observed spectrum (hard band)
NetCnts_full_Lo | count | 1σ lower bound on NetCounts_full
NetCnts_full_Hi | count | 1σ upper bound on NetCounts_full
NetCnts_soft_Lo | count | 1σ lower bound on NetCounts_soft
NetCnts_soft_Hi | count | 1σ upper bound on NetCounts_soft
NetCnts_hard_Lo | count | 1σ lower bound on NetCounts_hard
NetCnts_hard_Hi | count | 1σ upper bound on NetCounts_hard
Notes. The catalog is sorted by right ascension. The suffixes “_full,”
“_soft,” and “_hard” on names of photometric quantities designate the full
(0.5–8 keV), soft (0.5–2 keV), and hard (2–8 keV) energy bands.
a Source position quantities (RA, Dec, PosErr, PosType) are computed using a
subset of each source’s extractions chosen to minimize the position
uncertainty (see Section 4.1.4).
b Source significance quantities (Significance, ProbNoSrc, Prob_band) are
computed using a subset of each source’s extractions chosen to maximize
significance (see Section 4.2).
c Detection methods: CW (CIAO wavdetect), PW (PWdetect), CPW (source detected by both CIAO wavdetect and PWdetect), EW (Enhanced wavdetect), and KS (previously known sources).
d Source photometric quantities are computed using a subset of each source’s
extractions (indicated by ExpTime_Nom, ExpTime_Frac, NumObs_Nom) that balance
the conflicting goals of minimizing photometric uncertainty and of avoiding
photometric bias (see Section 4.3).
e In statistical hypothesis testing, the p-value is the probability of
obtaining a test statistic at least as extreme as the one that was actually
observed when the null hypothesis is true.
f In Chandra ACIS data analysis the ARF incorporates both the effective area
of the observatory and the fraction of the observation for which data were
actually collected for the source.
g The median energy is the median energy of the extracted events corrected for the
background (see Broos et al., 2010, Section 7.3).
Table 2List of columns in the Chandra Cygnus OB2 Legacy Survey catalog.
### Technical properties
<figure><img src="content_image/1408.6579/x6.png"><figcaption>Figure 6.— Distribution of various observational and technical quantities forthe 7924 sources in the final X-ray catalog. Left: The number of times eachsource is observed. The majority of sources (>80%) are observed at least 4times due to the tiling strategy outlined in Section 2.1. Center: The smallestoff-axis angle at which any source is observed, with quartiles at 2.3, 3.2,and 4.0 arcmin. Right: The positional uncertainty of all sources, withquartiles at 0.2, 0.3, and 0.5 arcsec.</figcaption></figure>
<figure><img src="content_image/1408.6579/x7.png"><figcaption>Figure 7.— Distribution of X-ray sources in the Chandra Cygnus OB2 LegacySurvey colored as a function of the lowest off-axis angle that the source isdetected at. The observational tiling strategy is evident in the grid patternseen in this image. Due to the tiling strategy used the vast majority ofsources were observed at least once at an off-axis angle <4′ and thereforewith PSF sizes <2′′ in radius (at 1.49 keV).</figcaption></figure>
Figure 6 shows various observational properties of the final source catalog of 7924 sources. The vast majority of sources are observed more than once thanks to the observational tiling strategy adopted, with \(>\)80% of sources observed 4 or more times (see Section 2.1). The principal product of the survey design is that the overlapping grid of pointings means that the majority of sources will be observed on-axis at least once, and this is also evident in Figure 6, where it can be seen that half of all sources are observed at least once with an off-axis angle of at most 3.2\({}^{\prime}\), and 75% of sources are observed at least once with an off-axis angle of \(<\)4.0\({}^{\prime}\). This can be seen in relation to the survey area in Figure 7, which shows the distribution of all X-ray sources colored by the smallest off-axis angle at which they are observed. This has the effect of highlighting the tiling strategy used for the observations to an extent that is not as evident from the distribution of sources in Figure 5. One of the important ways that the off-axis angle translates into the final source properties is that the positional uncertainty is very low. Figure 6 shows that the vast majority of sources have a positional uncertainty \(<\)1\({}^{\prime\prime}\) and 50% of sources have an uncertainty less than 0.3\({}^{\prime\prime}\).
### Source properties
<figure><img src="content_image/1408.6579/x8.png"><figcaption>Figure 8.— Distribution of various source quantities for the 7924 sources inthe final X-ray catalog. Left: The distribution of net counts of all extractedsources, with quartiles at 7, 13, and 28 net counts. Center: The background-subtracted median photon energy of all extracted sources, with quartiles at1.7, 2.0, and 2.8 keV. Right: The distribution of model-independent sourceenergy fluxes calculated using the equation Fenergy=1.602×10−9~Ephoton×Fphoton(see Section 5.2) with a median of 2.6×10−15 erg cm−2 s−1.</figcaption></figure>
The distribution of source properties is shown in Figure 8 for all 7924 sources in the final X-ray catalog. Most sources are very weak: only 402 sources (5% of the catalog) have more than 100 net counts, and 39% have fewer than 10 net counts. Figure 8 shows the net count distribution for all the sources in our catalog, extending up to the most luminous X-ray sources with \(\sim\)60,000 net counts. There are 15 sources with more than 1000 net counts in our sample, the vast majority of which (13 of 15) are known O-type stars, with those having the most net counts being supergiants (for a discussion of the X-ray properties of the known O and B-type stars in Cyg OB2, see Rauw et al., 2014). The turnover at low net counts is due to the source verification criteria used, which, it should be noted, are not necessarily based upon the same extracted events as those used to derive source properties. This explains the presence of ‘verified’ sources with apparently \(\sim\)1–2 net counts in Figure 8 (see discussion in Section 4). Note that the intensities of many of the weak sources near the threshold of detectability will be affected by Eddington bias (Eddington, 1940) and we warn the user that luminosities estimated directly from the measured counts will tend to be overestimated. Accounting for this requires proper modelling of the Poisson distribution and the detection threshold across the detector.
Figure 8 also shows the distribution of background-subtracted median photon energies for all sources in the final catalog. The median photon energy is a commonly-used diagnostic of the X-ray source emission, and for young stellar sources can also be an indicator of the absorbing column density (Feigelson et al., 2005). The distribution of median photon energies is strongly peaked at around \(\sim\)2 keV, in agreement with previous studies of Cyg OB2 (e.g., Wright et al., 36), and slightly higher than the typical peak values for unobscured field stars (\(\sim\)1 keV, e.g., Wright et al., 34) and for stars in less-obscured star forming regions (\(\sim\)1.3 keV, e.g., Getman et al., 2005).
Finally Figure 8 shows the distribution of model-independent energy fluxes, which are defined by Broos et al. (2010) as
\[F_{energy}=1.602\times 10^{-9}\tilde{E}_{photon}\times F_{photon}\] (1)
where \(\tilde{E}_{photon}\) is the background-subtracted median photon energy, \(F_{photon}\) is the photon flux (Broos et al., 2010, Section 7.4), and the numerical constant arises from the conversion between keV and ergs to produce units of ergs cm\({}^{-2}\) s\({}^{-1}\). The distribution of energy fluxes rises towards fainter sources and peaks at \(\sim 3\times 10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\), which provides a first-order estimate of our completeness limit over the entire survey area (note that the survey completeness limit will be deeper in the central 0.5 deg\({}^{2}\) of the survey where the exposure time is higher). The most luminous sources are the OB supergiants Cyg OB2 #8A (O6I), Cyg OB2 #5 (O7I) and Cyg OB2 #12 (B3.5Ia+), which have energy fluxes of \(\sim\) 2–4 \(\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\), though many of these suffer from severe pile-up. No correction was made for pile-up for these sources in our catalog, but we refer the interested reader to Rauw et al. (2014) who provide a detailed discussion and analysis of pile-up for these sources as part of their analysis of the X-ray properties of the OB stars in Cyg OB2. Of the ten most luminous X-ray sources, 9 are known OB and Wolf-Rayet stars, though the 9th most luminous X-ray source is a previously unknown and optically faint source that is most likely a flaring pre-MS star.
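Equation (1) is straightforward to apply; a minimal helper is shown below, with example numbers chosen to sit near the survey medians quoted above.

```python
def energy_flux(median_energy_keV, photon_flux):
    """Model-independent energy flux (erg cm^-2 s^-1) from the background-corrected
    median photon energy (keV) and the photon flux (photon cm^-2 s^-1), Eq. (1);
    the constant 1.602e-9 converts keV to erg."""
    return 1.602e-9 * median_energy_keV * photon_flux

# A source near the survey median: ~2 keV photons at ~8e-7 photon cm^-2 s^-1
print(energy_flux(2.0, 8.0e-7))  # ~2.6e-15 erg cm^-2 s^-1
```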
### Spurious Sources
The detection process and source verification process employed by our survey are designed to maximize the detection of all sources, and therefore the detection of a small number of false sources should be expected. To quantify the number of spurious detections one would ideally perform source detection and verification on synthetic observations that reproduce the observations as closely as possible. Variations in the background level may reach such levels as to be detected and characterised as valid sources, and the frequency of these sources could then be quantified. For example, such simulations were performed by the XMM-Newton serendipitous survey (Watson et al., 2009) and the Swift Point Source Catalog (Evans et al., 2014), both of which found large differences between the false positive rate expected from formal statistics and that found from simulations. However, due to the complexity of our survey, the multiple, heavily overlapping observations, and the multi-stage detection, inspection and verification process used in producing our source catalog, it would be impossible to perform simulations of our survey in a reasonable amount of time. An alternative approximate estimate of the number of false sources in our catalog may be calculated from the sum of all _not-a-source probabilities_, \(\Sigma\,P_{B}\). From our final source catalog we calculate this value to be 5.6 false sources within our catalog, or 0.07% of the total catalog.
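This estimate is simply the sum of the per-source probabilities; a short sketch is given below (the example values are arbitrary, not taken from the catalog).

```python
import numpy as np

def expected_false_sources(p_b_values):
    """Approximate number of spurious entries as the sum of the per-source
    not-a-source probabilities, together with the corresponding catalog fraction."""
    p = np.asarray(p_b_values, dtype=float)
    return p.sum(), p.sum() / p.size

print(expected_false_sources([1e-4, 5e-3, 2e-3, 1e-6]))  # (~0.0071, ~0.0018)
```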
## 6. Summary
In this paper we have presented and discussed the X-ray observations from the _Chandra_ Cygnus OB2 Legacy Survey (Drake et al., 2014), as well as a small number of existing X-ray observations that cover the same region. We have described the standard data reduction, X-ray source detection, and source verification procedures that we have followed. The tiling pattern adopted by our survey leads to a large area at which the vast majority of sources are observed on-axis at least once, including a deep area of 0.5 deg\({}^{2}\) with a highly uniform exposure of \(116.3\pm 0.7\) ks, a standard deviation of only 0.6%. These are pivotal requirements for obtaining high and uniform sensitivity levels over the majority of the survey area and for limiting spatial biases for the analysis of this unique and interesting target. A number of novel data analysis techniques were introduced to maximise the scientific return of our unique set of observations, including an enhanced version of the CIAO tool wavdetect that performs source detection on multiple non-aligned X-ray observations, detecting sources that may not be detected in the individual observations.
We have presented a catalog of 7924 X-ray point sources detected and verified from these observations. The catalog, available online, contains positions and basic X-ray properties. The vast majority of sources are observed at least 4 times, and detected on axis (\(<4\)\({}^{\prime}\)) at least once. The positional uncertainty is therefore very low (typically \(<0.5\)\({}^{\prime\prime}\)). An analysis of the sensitivity of our survey to a number of observational and stellar parameters is presented in Wright et al. (37). Optical and infrared counterparts to these sources are presented in Guarcello et al. (2014), and an analysis of the likely Cyg OB2 members and contaminants is presented in Kashyap et al. (2014). X-ray spectroscopy of the low- (Flaccomio et al., 14) and high-mass (Rauw et al., 2014) members of the association is presented elsewhere, along with analysis and discussion.
We would like to extend special thanks to Ken Glotfelty, Frank Primini, and especially Pat Broos for their advice and assistance in this work. We would also like to thank the anonymous referee for a prompt and helpful report that has improved the content and clarity of this paper. NJW acknowledges a Royal Astronomical Society Research Fellowship. NJW and MG acknowledge support from _Chandra_ grant GO0-11040X. JJD was supported by NASA contract NAS8-03060 to the _Chandra X-ray Center_ and thanks the director, H. Tananbaum, and the science team for continuing support and advice. This research has made use of data from the _Chandra X-ray Observatory_ and software provided by the _Chandra X-ray Center_ in the application packages CIAO and Sherpa, and from Penn State for the ACIS Extract software package. This research has also made use of NASA’s Astrophysics Data System and the Simbad and VizieR databases, operated at CDS, Strasbourg, France.
## References
* Albacete Colombo et al. (2007) Albacete Colombo, J. F., Flaccomio, E., Micela, G., Sciortino, S., & Damiani, F. 2007, A&A, 464, 211
* Broos et al. (2002) Broos, P., Townsley, L., Getman, K. V., & Bauer, F. 2002, ACIS Extract, An ACIS Point Source Extraction Package, Pennsylvania State University
* Broos et al. (2010) Broos, P. S., Townsley, L. K., Feigelson, E. D., Getman, K. V., Bauer, F. E., & Garmire, G. P. 2010, ApJ, 714, 1582
* Butt et al. (2006) Butt, Y. M., Drake, J., Benaglia, P., Combi, J. A., Dame, T., Miniati, F., & Romero, G. E. 2006, ApJ, 643, 238
* Damiani et al. (1997) Damiani, F., Maggio, A., Micela, G., & Sciortino, S. 1997, ApJ, 483, 350
* Drake et al. (2014) Drake, J. J., Wright, N. J., Guarcello, M. G., Flaccomio, E., & Kashyap, V. 2014, ApJ, submitted
* Drew et al. (2008) Drew, J. E., Greimel, R., Irwin, M. J., & Sale, S. E. 2008, MNRAS, 386, 1761
* Eddington (1940) Eddington, Sir, A. S. 1940, MNRAS, 100, 354
* Elvis et al. (2009) Elvis, M., Civano, F., Vignali, C., Puccetti, S., Fiore, F., Cappelluti, N., Aldcroft, T. L., Fruscione, A., Zamorani, G., Comastri, A., Brusa, M., Gilli, R., Miyaji, T., Damiani, F., Koekemoer, A. M., Finoguenov, A., Brunner, H., Urry, C. M., Silverman, J., Mainieri, V., Hasinger, G., Griffiths, R., Carollo, M., Hao, H., Guzzo, L., Blain, A., Calzetti, D., Carilli, C., Capak, P., Ettori, S., Fabbiano, G., Impey, C., Lilly, S., Mobasher, B., Rich, M., Salvato, M., Sanders, D. B., Schinnerer, E., Scoville, N., Shopbell, P., Taylor, J. E., Taniguchi, Y., & Volonteri, M. 2009, ApJS, 184, 158
* Evans et al. (2010) Evans, I. N., Primini, F. A., Glotfelty, K. J., Anderson, C. S., Bonaventura, N. R., Chen, J. C., Davis, J. E., Doe, S. M., Evans, J. D., Fabbiano, G., Galle, E. C., Gibbs, II, D. G., Grier, J. D., Hain, R. M., Hall, D. M., Harbo, P. N., (Helen He, X., Houck, J. C., Karovska, M., Kashyap, V. L., Lauer, J., McCollough, M. L., McDowell, J. C., Miller, J. B., Mitschang, A. W., Morgan, D. L., Mossman, A. E., Nichols, J. S., Nowak, M. A., Plummer, D. A., Refsdal, B. L., Rots, A. H., Siemiginowska, A., Sundheim, B. A., Tibbetts, M. S., Van Stone, D. W., Winkelman, S. L., & Zografou, P. 2010, ApJS, 189, 37
* Evans et al. (2014) Evans, P. A., Osborne, J. P., Beardmore, A. P., Page, K. L., Willingale, R., Mountford, C. J., Pagani, C., Burrows, D. N., Kennea, J. A., Perri, M., Tagliaferri, G., & Gehrels, N. 2014, ApJS, 210, 8
* Feigelson et al. (2005) Feigelson, E. D., Getman, K., Townsley, L., Garmire, G., Preibisch, T., Grosso, N., Montmerle, T., Muench, A., & McCaughrean, M. 2005, ApJS, 160, 379
* Flaccomio et al. (2014a) Flaccomio, E., Kashyap, V. L., Guarcello, M. G., Drake, J. J., & Wright, N. J. 2014a, ApJ, submitted
* Flaccomio et al. (2014b) —. 2014b, ApJ, submitted
* Freeman et al. (2002) Freeman, P. E., Kashyap, V., Rosner, R., & Lamb, D. Q. 2002, ApJS, 138, 185
* Fruscione et al. (2006) Fruscione, A., McDowell, J. C., Allen, G. E., Brickhouse, N. S., Burke, D. J., Davis, J. E., Durham, N., Elvis, M., Galle, E. C., Harris, D. E., Huenemoerder, D. P., Houck, J. C., Ishibashi, B., Karovska, M., Nicastro, F., Noble, M. S., Nowak, M. A., Primini, F. A., Siemiginowska, A., Smith, R. K., & Wise, M. 2006, in Presented at the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference, Vol. 6270, Observatory Operations: Strategies, Processes, and Systems. Edited by Silva, David R.; Doxsey, Rodger E.. Proceedings of the SPIE, Volume 6270, pp. 62701V (2006).
* Garmire et al. (2003) Garmire, G. P., Bautz, M. W., Ford, P. G., Nousek, J. A., & Ricker, Jr., G. R. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4851, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. J. E. Truemper & H. D. Tananbaum, 28–44
* Getman et al. (2005) Getman, K. V., Flaccomio, E., Broos, P. S., Grosso, N., Tsujimoto, M., Townsley, L., Garmire, G. P., Kastner, J., Li, J., Harnden, Jr., F. R., Wolk, S., Murray, S. S., Lada, C. J., Muench, A. A., McCaughrean, M. J., Meeus, G., Damiani, F., Micela, G., Sciortino, S., Bally, J., Hillenbrand, L. A., Herbst, W., Preibisch, T., & Feigelson, E. D. 2005, ApJS, 160, 319
* Guarcello et al. (2013) Guarcello, M. G., Drake, J. J., Wright, N. J., Drew, J. E., Gutermuth, R. A., Hora, J. L., Naylor, T., Aldcroft, T., Fruscione, A., García-Alvarez, D., Kashyap, V. L., & King, R. 2013, ApJ, 773, 135
* Guarcello et al. (2014) Guarcello, M. G., Drake, J. J., Wright, N. J., Kashyap, V., & Flaccomio, E. 2014, ApJ, submitted
* Kashyap et al. (2014) Kashyap, V. L., Guarcello, M. G., Drake, J. J., Wright, N. J., & Flaccomio, E. 2014, ApJ, submitted
* Knödlseder (2000) Knödlseder, J. 2000, A&A, 360, 539
* Massey & Thompson (1991) Massey, P., & Thompson, A. B. 1991, AJ, 101, 1408
* Negueruela et al. (2008) Negueruela, I., Marco, A., Herrero, A., & Clark, J. S. 2008, A&A, 487, 575
* Puccetti et al. (2009) Puccetti, S., Vignali, C., Cappelluti, N., Fiore, F., Zamorani, G., Aldcroft, T. L., Elvis, M., Gilli, R., Miyaji, T., Brunner, H., Brusa, M., Civano, F., Comastri, A., Damiani, F., Fruscione, A., Finoguenov, A., Koekemoer, A. M., & Mainieri, V. 2009, ApJS, 185, 586
* Rauw et al. (2014) Rauw, G., Nazé, Y., Wright, N. J., Drake, J. J., & Guarcello, M. G. 2014, ApJ, submitted
* Rygl et al. (2012) Rygl, K. L. J., Brunthaler, A., Sanna, A., Menten, K. M., Reid, M. J., van Langevelde, H. J., Honma, M., Torstensson, K. J. E., & Fujisawa, K. 2012, A&A, 539, A79
* Skrutskie et al. (2006) Skrutskie, M. F., Cutri, R. M., Stiening, R., Weinberg, M. D., Schneider, S., Carpenter, J. M., & Beichman, C. 2006, AJ, 131, 1163
* Watson et al. (2009) Watson, M. G., Schröder, A. C., Fyfe, D., Page, C. G., Lamer, G., Mateos, S., Pye, J., Sakano, M., Rosen, S., Ballet, J., Barcons, X., Barret, D., Boller, T., Brunner, H., Brusa, M., Caccianiga, A., Carrera, F. J., Ceballos, M., Della Ceca, R., Denby, M., Denkinson, G., Dupuy, S., Farrell, S., Fraschetti, F., Freyberg, M. J., Guillout, P., Hambaryan, V., Maccacaro, T., Mathiesen, B., McMahon, R., Michel, L., Motch, C., Osborne, J. P., Page, M., Pakull, M. W., Pietsch, W., Saxton, R., Schwope, A., Severgnini, P., Simpson, M., Sironi, G., Stewart, G., Stewart, I. M., Stobbart, A.-M., Tedds, J., Warwick, R., Webb, N., West, R., Worrall, D., & Yuan, W. 2009, A&A, 493, 339
* Weisskopf et al. (2002) Weisskopf, M. C., Brinkman, B., Canizares, C., Garmire, G., Murray, S., & Van Speybroeck, L. P. 2002, PASP, 114, 1
* Weisskopf et al. (2007) Weisskopf, M. C., Wu, K., Trimble, V., O’Dell, S. L., Elsner, R. F., Zavlin, V. E., & Kouveliotou, C. 2007, ApJ, 657, 1026
* Wright (2014) Wright, N. J. 2014, MNRAS, in prep
* Wright & Drake (2009) Wright, N. J., & Drake, J. J. 2009, ApJS, 184, 84
* Wright et al. (2010a) Wright, N. J., Drake, J. J., & Civano, F. 2010a, ApJ, 725, 480
* Wright et al. (2012) Wright, N. J., Drake, J. J., Drew, J. E., Guarcello, M. G., Gutermuth, R. A., Hora, J. L., & Kraemer, K. E. 2012, ApJ, 746, L21
* Wright et al. (2010b) Wright, N. J., Drake, J. J., Drew, J. E., & Vink, J. S. 2010b, ApJ, 713, 871
* Wright et al. (2014a) Wright, N. J., Kashyap, V. L., & Drake, J. J. 2014a, ApJ, in preparation
* Wright et al. (2014b) Wright, N. J., Parker, R. J., Goodwin, S. P., & Drake, J. J. 2014b, MNRAS, 438, 639
|
1910.02643 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 142212,
"num_imgs": 0,
"llama3_tokens_count": 59028
} | [] | # Regularizing effects concerning elliptic equations
with a superlinear gradient term
Marta Latorre, Martina Magliocca and Sergio Segura de León
M. Latorre: Departamento de Matemática Aplicada, Ciencia y Tecnología de los Materiales y Tecnología Electrónica, Universidad Rey Juan Carlos, C/Tulipán s/n 28933, Móstoles, Spain. _E-mail address:_marta.latorre@urjc.es
M. Magliocca: Dipartimento di Matematica, Sapienza Università di Roma, Piazzale Aldo Moro 5, 00185 Roma, Italy. _E-mail address:_magliocca@mat.uniroma1.it
S. Segura de León: Departament d’Anàlisi Matemàtica, Universitat de València, Dr. Moliner 50, 46100 Burjassot, Spain. _E-mail address:_sergio.segura@uv.es
###### Abstract.
We consider the homogeneous Dirichlet problem for an elliptic equation driven by a linear operator with discontinuous coefficients and having a subquadratic gradient term. This gradient term behaves as \(g(u)|\nabla u|^{q}\), where \(1<q<2\) and \(g(s)\) is a continuous function. Data belong to \(L^{m}(\Omega)\) with \(1\leq m<\frac{N}{2}\), and measure data are also considered instead of \(L^{1}\)-data, so that unbounded solutions are expected. Our aim is, given \(1\leq m<\frac{N}{2}\) and \(1<q<2\), to find the suitable behaviour of \(g\) close to infinity which leads to existence for our problem. We show that the presence of \(g\) has a regularizing effect on the existence and summability of the solution. Moreover, our results connect continuously with known results when either \(g(s)\) is constant or \(q=2\).
Key words and phrases:Gradient term with quadratic growth, Renormalized solutions 2010 Mathematics Subject Classification: 35J60, 35J25, 35R05, 35B65
## 1. Introduction
This paper is concerned to an elliptic problem, in an open bounded set \(\Omega\subset\mathbb{R}^{N}\), whose model is:
\[\left\{\begin{array}[]{ll}-\Delta u=g(u)\,|\nabla u|^{q}+f(x)&\mbox{ in } \Omega\,,\\ u=0&\mbox{ on }\partial\Omega\,,\end{array}\right.\] (1.1)
where
1. \(g\,:\,\mathbb{R}\to(0,+\infty)\) is a continuous positive function;
2. \(1<q<2\);
3. \(f\in L^{m}(\Omega)\) such that \(m\geq 1\). Eventually, we will also consider measures instead of \(f\in L^{1}(\Omega)\).
Our aim is, given \(q\) and \(m\), to find the suitable behaviour of \(g\) close to infinity which leads to existence for problem (1.1). We may measure the behaviour of \(g\) through the exponent \(\alpha\) such that the limit \(\lim_{|s|\to\infty}|s|^{\alpha}g(s)\) is positive and finite. For the sake of simplicity, we will assume that \(g\) is continuous and satisfies \(g(s)\leq\frac{\gamma}{|s|^{\alpha}}\;\), with \(\;\alpha,\gamma>0\). So, we look for the possible exponents \(\alpha\) for which we can obtain existence of solution to this problem.
Solutions to (1.1) are considered in a weak sense (i.e., having finite energy) when \(m\geq\frac{2N}{N+2}\). Nevertheless, this notion has no meaning when \(m\) is closer to 1. In these cases the notion of weak solution must be replaced with the notion of entropy solution or that of renormalized one. Entropy solutions were introduced in [2] for \(L^{1}\)–data and in [5] for measure data which are absolutely continuous with respect to the capacity. On the other hand, renormalized solutions were handled in [22, 12]. Since both notions are equivalent, in the present paper renormalized solution is the chosen notion and only the renormalized formulation will be used in what follows.
### Background
Problems related to (1.1) have been widely studied in recent years. Recall that, when \(0\leq q<1\) and data belong to the dual space \(H^{-1}(\Omega)\), it can be solved applying the theory of pseudomonotone operators (see, for instance, [20]). Likewise, this theory also applies when \(q=1\) and the norm of the source \(f\) is small enough to get coerciveness. Without assuming any smallness condition, an existence result holds true as proved by Bottaro and Marina in [11] for linear equations, and by Del Vecchio and Porzio in [13] in a nonlinear framework.
The other growth limit, \(q=2\), deserves some remarks. With additional hypotheses, equations having gradient terms with quadratic growth have been studied in a series of papers in the 80’s, mainly by Boccardo, Murat and Puel. For gradient terms satisfying a sign condition, we refer to [7, 8, 3], while for existence of bounded solutions, to [9, 10]. The first attempt to study equations with a gradient term having natural growth (without the sign condition or an additional zero order term) was carried out by Ferone and Murat in [14] (see also [15, 16] for extensions). They consider the case \(g(s)\) constant and prove a sharp smallness condition on \(f\in L^{\frac{N}{2}}(\Omega)\) which leads to an existence result. More precisely, it was proved that if \(\|f\|_{L^{\frac{N}{2}}(\Omega)}\) is small, then there exists a solution \(u\) which also satisfies the further regularity \(\left(e^{\delta|u|}-1\right)\in H_{0}^{1}(\Omega)\), for \(\delta\) less than a constant which only depends on \(\|f\|_{L^{\frac{N}{2}}(\Omega)}\), the coerciveness of the principal term and the best constant in Sobolev’s inequality. More general data in this quadratic growth setting were considered in [26, 24], under the assumption \(g\in L^{1}(\mathbb{R})\): existence is studied for all \(L^{1}\)–data (in [26]) and for all Radon measures (in [24]). The assumption \(g\in L^{1}(\mathbb{R})\) turns out to be optimal (see [26, Proposition 5.1]). The exhaustive analysis of the necessary growth condition on \(g\) to obtain a solution for every datum \(f\in L^{m}(\Omega)\), where \(m>1\), was made by Porretta and Segura de León in [25]. The main result of [25], when it is applied to an equation governed by a linear operator, states:
1. Given any \(f\in L^{m}(\Omega)\) with \(m\geq\frac{N}{2}\): there exists a solution to problem (1.1), with \(q=2\), under the assumption \(\lim_{|s|\to\infty}g(s)=0\); this solution is bounded when \(m>\frac{N}{2}\).
2. Given any \(f\in L^{m}(\Omega)\) with \(\frac{2N}{N+2}\leq m<\frac{N}{2}\): if \(g(s)\leq\frac{\gamma}{|s|}\), with \(\gamma<\frac{N(m-1)}{N-2m}\), then there exists a solution to problem (1.1), with \(q=2\), which belongs to \(H_{0}^{1}(\Omega)\cap L^{\frac{Nm}{N-2m}}(\Omega)\).
3. Given any \(f\in L^{m}(\Omega)\) with \(1<m<\frac{2N}{N+2}\): if \(g(s)\leq\frac{\gamma}{|s|}\), with \(\gamma<\frac{N(m-1)}{N-2m}\), then there exists an entropy solution to problem (1.1), with \(q=2\), which belongs to \(W_{0}^{1,m^{*}}(\Omega)\).
One of the main objectives of the present paper is to extend these results to the case \(1<q<2\). Some consequences of the quadratic case for our problem are:
1. If \(\lim_{s\to\pm\infty}g(s)=0\), then there exists a solution for every \(f\in L^{m}(\Omega)\), with \(m\geq\frac{N}{2}\) (see Proposition 2.9 below).
2. If \(g\in L^{\frac{2}{q}}(\mathbb{R})\), then there exists a solution for every \(f\in L^{1}(\Omega)\) (see Remark 2.11 below).
In the subquadratic setting (i.e., \(1<q<2\)) and for \(g(s)\) constant, the general theory was developed by Grenon, Murat and Porretta in [17] (for the range \(1+\frac{2}{N}\leq q<2\)) and [18] (with full generality). Their aim is to find the “optimal” exponent \(m\), which depends on \(q\), such that there exists a solution for data in \(L^{m}(\Omega)\) satisfying a smallness condition. Moreover, Alvino, Ferone and Mercaldo showed in [1] the sharp condition on the datum \(f\) which guarantees the existence of a solution. It was proved in [17] that:
1. If \(1+\frac{2}{N}\leq q<2\), \(m\geq\frac{N(q-1)}{q}\) and \(\|f\|_{m}\) is small enough, then there exists a solution \(u\in H_{0}^{1}(\Omega)\) to problem (1.1), with \(g(s)\) constant, which satisfies the further regularity \(|u|^{\sigma}\in H_{0}^{1}(\Omega)\), with \(\sigma=\frac{m(N-2)}{2(N-2m)}\).
The extension studied in [18] leads to:
1. If \(\frac{N}{N-1}<q<1+\frac{2}{N}\), \(m\geq\frac{N(q-1)}{q}\) and \(\|f\|_{m}\) is small enough, then there exists a renormalized solution to problem (1.1), with \(g(s)\) constant, which satisfies the regularity \((1+|u|)^{\sigma-1}u\in H_{0}^{1}(\Omega)\), with \(\sigma=\frac{m(N-2)}{2(N-2m)}\), and \(|\nabla u|\in L^{N(q-1)}(\Omega)\).
2. If \(q=\frac{N}{N-1}\) and \(\|f\|_{L^{m}(\Omega)}\) is small enough for certain \(m>1\), then there exists a renormalized solution to problem (1.1), with \(g(s)\) constant, satisfying the regularity \((1+|u|)^{\sigma-1}u\in H_{0}^{1}(\Omega)\), with \(\sigma=\frac{m(N-2)}{2(N-2m)}\), and \(|\nabla u|\in L^{m^{*}}(\Omega)\).
3. If \(1<q<\frac{N}{N-1}\), \(m\geq 1\) and \(\|f\|_{1}\) is small enough, then there exists a renormalized solution to problem (1.1), with \(g(s)\) constant, which satisfies \(|u|\in M^{\frac{N}{N-2}}(\Omega)\) and \(|\nabla u|\in M^{\frac{N}{N-1}}(\Omega)\). Here, \(M^{q}(\Omega)\) stands for the Marcinkiewicz space (see Subsection 3.3 below). Actually, sources more general than \(L^{1}\)–functions are handled, namely, finite Radon measures.
The final picture looks as follows:
[FIGURE:S1.SS1.fig1][ENDFIGURE]
The right zone indicates solutions of finite energy, the left zone shows the points \(q\) where measure data can be considered, while the central zone is where renormalized solutions with \(L^{m}\)-data are obtained.
We point out that, in [18], the authors obtain existence for every datum with a zero order term, which has a regularizing effect. In some sense, the singular term \(\frac{|\nabla u|^{q}}{|u|^{\alpha}}\), where \(\alpha>0\), induces a similar effect, so that we expect better estimates than those in the case \(\alpha=0\) (see Remark 3.2). Note, nevertheless, that the term \(\frac{|\nabla u|^{q}}{|u|^{\alpha}}\) behaves in a superlinear way with respect to the gradient power when \(\alpha<q-1\) (see Subsection 2.5).
Finally, we remark that a problem similar to (1.1) has recently been studied in [21, Proposition \(3.4\)].
### Our results
As we have mentioned, given \(q\) and \(m\), our goal is to look for the best exponent \(\alpha\) to get existence for our problem. The identity we find is
\[\alpha=\frac{N(q-1)-mq}{N-2m}\quad\hbox{for }m>1,\,\,1<q<2\,.\] (1.2)
This value of \(\alpha\) is intuitively deduced in Subsection 2.5. We remark that, when \(\alpha=0\), it yields \(m=\frac{N(q-1)}{q}\) recovering the threshold occurring in S1. and S2. above. In general, \(m=m(q,\alpha)\) is given by
\[m=\frac{N(q-1-\alpha)}{q-2\alpha}\,.\] (1.3)
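As a quick sanity check, the relations (1.2) and (1.3) are indeed inverse to each other; the following sympy snippet (an illustration, not part of the original argument) verifies this symbolically.

```python
import sympy as sp

N, m, q, alpha = sp.symbols('N m q alpha', positive=True)

alpha_of_m = (N*(q - 1) - m*q) / (N - 2*m)        # relation (1.2)
m_of_alpha = N*(q - 1 - alpha) / (q - 2*alpha)    # relation (1.3)

# Each relation substituted into the other returns the original variable
print(sp.simplify(m_of_alpha.subs(alpha, alpha_of_m) - m))   # 0
print(sp.simplify(alpha_of_m.subs(m, m_of_alpha) - alpha))   # 0
```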
According to the value of \(m\) and the connection between \(\alpha\) and \(q\), there are two different types of results:
1. As in Q2. or S1., if \(m\geq\frac{2N}{N+2}\), then we get finite energy solutions. Otherwise, renormalized ones are obtained. We also note that there are points \((q,\alpha)\in[1,2]\times[0,1]\) satisfying \[\frac{N(q-1-\alpha)}{q-2\alpha}<1\,.\] In this area, talking about Lebesgue spaces no longer makes sense. This means that measure data are allowed.
2. In full agreement with the above picture, we have to deal with three zones. If \(q>1+\alpha\), we are within the superlinear framework. In this setting, we may only expect existence of solutions for sources satisfying a smallness condition. The limit case \(q=1+\alpha\) corresponds to a linear gradient term, in which we get existence when this term is small enough. Furthermore, the sublinear case \(q<1+\alpha\) guarantees existence of solutions for all data and all gradient terms. The informal deduction of this classification will be shown in Subsection 2.5.
We state our main results in Theorem 2.7 and Theorem 2.8 below, where all possible situations are considered.
Roughly speaking, we may illustrate the relation between \(\alpha,\,q,\,m\) in the following picture.
[Figure: schematic of the \((q,\alpha)\) plane, with \(q\) marked at \(1\), \(\frac{N}{N-1}\), \(1+\frac{2}{N}\) and \(2\) on the horizontal axis and \(\alpha\) ranging from \(0\) to \(1\) on the vertical axis, illustrating the relation between \(\alpha\), \(q\) and \(m\).]
We explicitly point out that, as \(\alpha\) increases, the different zones drift to the right, so that the function \(g\) induces a regularizing effect. Moreover, the sublinear zone \(0<q-1<\alpha\) appears, which entails existence for every datum. The bigger the value of \(\alpha\), the wider this new zone becomes.
This scheme adapts perfectly to what is expected since there is continuity with respect to known results. In fact, the \(q\)–axis coincides with the picture of results in [18], while the line \(q=2\) depicts the results in [25]. Furthermore, the bound \(\frac{N(m-1)}{N-2m}\), occurring in Q2. above, is the limit as \(q\to 2\) of the related bounds obtained for \(q<2\) (see Remark 3.6 below); a similar observation applies to Q3.
In order to achieve these bounds in the renormalized framework, we need to fine-tune our estimates as much as possible. So, we have to introduce a special way of applying known inequalities (see Lemma 3.7). In this way we managed not to lose information when making our estimates.
To prove our results we use approximation techniques based on
* estimates of a suitable power of the solutions;
* the strong convergence in \(L^{1}\) of the gradient term.
Our estimates are obtained with variants of the method introduced by Grenon, Murat and Porretta in which certain powers of \(G_{k}(u)=(|u|-k)^{+}{\rm\;sign}(u)\) are taken as test functions. The greatest difficulty arises when studying problem (1.1) with a measure datum \(\mu\). In this case one should take \(T_{j}(G_{k}(u))\) as a test function getting
\[\int_{\Omega}|\nabla T_{j}(G_{k}(u))|^{2}\,dx\leq j\left[\int_{\{|u|>k\}}g(u)| \nabla u|^{q}\,dx+\|\mu\|_{\mathcal{M}_{b}}\right]\,.\]
An appeal to a lemma on Marcinkiewicz spaces (see [3, Lemma 4.2]) leads to
\[\Big{[}|\nabla G_{k}(u)|\Big{]}_{\frac{N}{N-1}}\leq C\left[\int_{\{|u|>k\}}g(u )|\nabla u|^{q}\,dx+\|\mu\|_{\mathcal{M}_{b}}\right]\,.\]
Having in mind that values \(q>\frac{N}{N-1}\) are allowed, an estimate cannot be expected from this inequality. To overcome this trouble, we take \(T_{j}(|G_{k}(u)|^{\theta}){\rm\;sign}(u)\) as a test function, with the power \(\theta\) close to \(0\), and then, a suitable generalization of the above lemma (see Lemma 3.12 below) will be applied.
### Plan of the paper
The plan of the paper is the following. Section 2 is devoted to introducing the assumptions and stating the main results (see Theorem 2.7 and Theorem 2.8 below). We also include here our starting point (see Proposition 2.9), which is a simple consequence of the results of [25]. Section 3 deals with a priori estimates, while the convergence of approximate solutions is proved in Section 4. We point out that we do not only treat the superlinear case: we also deal with the linear case (in which existence for every datum is achieved under a smallness hypothesis on the gradient term) and even with some sublinear cases that, as far as we know, have not been handled before. In Section 4 the limit line \(q=\frac{N+\alpha(N-2)}{N-1}\), which does not fit into the general scheme, is also studied. In Section 5 we end up analyzing what happens when data enjoy more summability than that strictly necessary to obtain existence.
## 2. Assumptions and Statement of results
### Notation
Throughout this paper, \(\Omega\) stands for a bounded open subset of \(\mathbb{R}^{N}\), with \(N>2\). The Lebesgue measure of a set \(E\subset\Omega\) will be denoted by \(|E|\). The symbols \(L^{s}(\Omega)\) denote the usual Lebesgue spaces and \(W^{1,s}_{0}(\Omega)\) the usual Sobolev spaces of measurable functions having weak gradient in \(L^{s}(\Omega;\mathbb{R}^{N})\) and zero trace on \(\partial\Omega\). We will also use the notation \(H_{0}^{1}(\Omega)\) instead of \(W^{1,2}_{0}(\Omega)\). Let \(1\leq p<N\), in the sequel \(p^{*}=\frac{Np}{N-p}\), \(p_{*}=\frac{Np}{Np-N+p}\) and \(S_{p}\) stands for the constant in Sobolev inequality in \(W_{0}^{1,p}(\Omega)\), that is,
\[\left[\int_{\Omega}|u|^{p^{*}}dx\right]^{1/p^{*}}\leq S_{p}\left[\int_{\Omega} |\nabla u|^{p}dx\right]^{1/p},\quad\hbox{for all }u\in W_{0}^{1,p}(\Omega)\,.\]
We recall that this constant just depend on \(N\) and \(p\), and this dependence is continuous on \(p\). On the other hand, \(C^{PF}_{p}\) stands for the constant in the Poincaré–Friedrichs inequality in \(W_{0}^{1,p}(\Omega)\), so that
\[\left[\int_{\Omega}|u|^{p}dx\right]^{1/p}\leq C^{PF}_{p}\left[\int_{\Omega}| \nabla u|^{p}dx\right]^{1/p},\quad\hbox{for all }u\in W_{0}^{1,p}(\Omega)\,.\]
This constant depends on \(\Omega\) and \(p\).
Two auxiliary real functions will be used throughout this paper. For every \(k>0\), we define \(T_{k}\::\:\mathbb{R}\to\mathbb{R}\) and \(G_{k}\::\:\mathbb{R}\to\mathbb{R}\) as
\[T_{k}(s)=\left\{\begin{array}[]{ll}s&\hbox{ if }|s|\leq k\,,\\ k\frac{s}{|s|}&\hbox{ if }|s|>k\,;\end{array}\right.\]
\[G_{k}(s)=s-T_{k}(s)=(|s|-k)^{+}{\rm\;sign}(s)\,.\]
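For later reference, these two auxiliary functions act as a truncation and its remainder; a small numerical illustration (in Python, purely for intuition) follows.

```python
import numpy as np

def T(k, s):
    """Truncation at level k: T_k(s)."""
    return np.clip(s, -k, k)

def G(k, s):
    """Remainder beyond level k: G_k(s) = s - T_k(s) = (|s| - k)^+ sign(s)."""
    return s - T(k, s)

s = np.linspace(-3, 3, 7)          # [-3, -2, -1, 0, 1, 2, 3]
print(T(2, s))                     # [-2. -2. -1.  0.  1.  2.  2.]
print(G(2, s))                     # [-1.  0.  0.  0.  0.  0.  1.]
```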
### Assumptions
We will deal with the problem
\[\left\{\begin{array}[]{ll}-\hbox{\rm div\,}[A(x)\cdot\nabla u]=H(x,u,\nabla u) +f(x)&\mbox{ in }\Omega\,,\\ u=0&\mbox{ on }\partial\Omega\,,\end{array}\right.\] (2.1)
and we assume the following statements.
1. \(A(x)\) is an \(N\times N\) symmetric matrix which satisfies \[\lambda|\xi|^{2}\leq[A(x)\cdot\xi]\cdot\xi\leq\Lambda|\xi|^{2}\] (2.2) for almost all \(x\in\Omega\) and \(\xi\in\mathbb{R}^{N}\), and certain positive constants \(\Lambda\) and \(\lambda\).
2. There exist a positive continuous function \(g\::\:\mathbb{R}\to(0,+\infty)\) and \(1<q<2\) such that \[|H(x,t,\xi)|\leq g(t)|\xi|^{q}\] (2.3) for almost all \(x\in\Omega\), \(t\in\mathbb{R}\) and \(\xi\in\mathbb{R}^{N}\).
3. The datum \(f\) belongs to \(L^{m}(\Omega)\), with \(1\leq m<\frac{N}{2}\). When \(m=1\), instead of considering an \(L^{1}\)–function, we will choose a Radon measure (see problem (2.10)).
As far as the function \(g\) is concerned, we assume that there exist constants \(\gamma,\alpha>0\) satisfying
\[g(t)\leq\frac{\gamma}{|t|^{\alpha}}\] (2.4)
for all \(t\in\mathbb{R}\).
**Remark 2.1**.:
1. Throughout this paper, the linearity of the principal part plays no role. We point out that our results also hold for equations driven by more general operators such as those of the Leray–Lions type with linear growth.
2. We stress that condition (2.4) has to be seen as a condition at infinity, that is, what is essential is \[\lim_{|s|\to\infty}g(s)|s|^{\alpha}=\gamma\,.\] We are assuming (2.4) for the sake of simplicity.
In what follows, we also consider the parameter
\[\sigma=\frac{m(N-2)}{N-2m}\,,\quad\hbox{i.e.}\quad m=\frac{N\sigma}{N+2\sigma- 2}\,.\] (2.5)
Observe that it implies
\[(\sigma-1)m^{\prime}=\frac{\sigma}{2}2^{*}\,.\] (2.6)
It is straightforward that
1. \(1\leq\sigma<\infty\).
2. \(\sigma=1\) if and only if \(m=1\).
3. \(1<\sigma<2\) if and only if \(1<m<\frac{2N}{N+2}\).
4. \(\sigma\geq 2\) if and only if \(\frac{2N}{N+2}\leq m<\frac{N}{2}\).
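The relations (2.5)–(2.6) and the equivalences listed above are elementary algebraic identities; the short symbolic check below (an illustration only, not part of the proofs) confirms them.

```python
import sympy as sp

N, m = sp.symbols('N m', positive=True)

sigma = m*(N - 2)/(N - 2*m)              # sigma as in (2.5)
m_back = N*sigma/(N + 2*sigma - 2)       # inverse relation in (2.5)
m_conj = m/(m - 1)                       # Hoelder conjugate m'
two_star = 2*N/(N - 2)                   # Sobolev exponent 2*

print(sp.simplify(m_back - m))                                              # 0
print(sp.simplify((sigma - 1)*m_conj - sp.Rational(1, 2)*sigma*two_star))   # 0, i.e. (2.6)
```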
### Notions of solution
According to the summability of the datum, we will find solutions to problem (2.1) with finite energy or renormalized solutions. Definitions follow.
**Definition 2.2**.: _We will say that a function \(u\in H_{0}^{1}(\Omega)\) is a weak solution to problem (2.1) if \(H(x,u,\nabla u)\in L^{1}(\Omega)\) and_
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla\varphi\,dx=\int_{\Omega}H(x,u, \nabla u)\varphi\,dx+\int_{\Omega}f(x)\varphi\,dx\] (2.7)
_holds for all \(\varphi\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\)._
We remark that, as a consequence of Sobolev’s inequality, formulation (2.7) makes sense only when \(m\geq\frac{2N}{N+2}\), that is, \(\sigma\geq 2\). When \(m<\frac{2N}{N+2}\) (so that \(\sigma<2\)), a different formulation must be used. The functional setting for the renormalized formulation lies on the space \(\mathcal{T}_{0}^{1,2}(\Omega)\) of almost everywhere finite functions such that \(T_{k}(u)\in H_{0}^{1}(\Omega)\) for all \(k>0\). Functions in this space have a generalized gradient which (grosso modo) is defined by
\[\nabla T_{k}(u)=(\nabla u)\chi_{\{|u|<k\}}\qquad\hbox{for all }k>0\,,\]
(see [2] or [12]).
**Definition 2.3**.: _A function \(u\::\:\Omega\to\mathbb{R}\) is a renormalized solution to problem (2.1) having datum \(f\in L^{m}(\Omega)\), with \(1<m<\frac{2N}{N+2}\), if it satisfies_
1. \(u\in\mathcal{T}_{0}^{1,2}(\Omega)\)_;_
2. \(\nabla u\in L^{1}(\Omega;\mathbb{R}^{N})\)_;_
3. \(H(x,u,\nabla u)\in L^{1}(\Omega)\)_;_
_and_
\[\int_{\Omega}S^{\prime}(u)\varphi[A(x)\cdot\nabla u]\cdot\nabla u \,dx+\int_{\Omega}S(u)[A(x)\cdot\nabla u]\cdot\nabla\varphi\,dx\\ =\int_{\Omega}H(x,u,\nabla u)S(u)\varphi\,dx+\int_{\Omega}f(x)S(u )\varphi\,dx\] (2.8)
_holds for any Lipschitz function \(S\::\:\mathbb{R}\to\mathbb{R}\) with compact support and for any \(\varphi\in H^{1}(\Omega)\cap L^{\infty}(\Omega)\) such that \(S(u)\varphi\in H_{0}^{1}(\Omega)\)._
**Remark 2.4**.: Note that Definition 2.3 does not require any asymptotic condition on the energy term such as
\[\lim_{n\to\infty}\frac{1}{n}\int_{\{n\leq|u|\leq 2n\}}|\nabla u|^{2}\,dx=0.\] (2.9)
Indeed, we will prove (in different steps) that
1. If \(1<\sigma<2\) (i.e. \(1<m<\frac{2N}{N+2}\)), then solutions enjoy a certain Sobolev regularity which implies (2.9).
2. If \(\sigma=1\) (i.e. \(m=1\)), then condition (2.9) must be required to solutions.
It is not difficult to check R1 if we assume condition \(\left[(1+|u|)^{\frac{\sigma}{2}}-1\right]\in H_{0}^{1}(\Omega)\) with \(1<\sigma<2\). Then
\[\frac{|\nabla u|^{2}}{(1+|u|)^{2-\sigma}}\in L^{1}(\Omega)\,.\]
Thus,
\[\frac{1}{n}\int_{\{n\leq|u|\leq 2n\}}|\nabla u|^{2}dx\leq\frac{1}{n^{\sigma-1} }\left[2^{2-\sigma}\int_{\{n\leq|u|\leq 2n\}}\frac{|\nabla u|^{2}}{(1+|u|)^{2- \sigma}}dx\right]\leq\frac{C}{n^{\sigma-1}}{\buildrel{n\to\infty}\over{ \longrightarrow}}0\]
and condition (2.9) holds.
Up to now, we have taken \(m=\frac{N(q-1-\alpha)}{q-2\alpha}\) with \(m>1\). Nonetheless, this ratio can be strictly smaller than 1. Then we take measure data and so consider problem
\[\left\{\begin{array}[]{ll}-\hbox{\rm div\,}[A(x)\cdot\nabla u]=H(x,u,\nabla u) +\mu&\mbox{ in }\Omega\,,\\ u=0&\mbox{ on }\partial\Omega\,,\end{array}\right.\] (2.10)
being \(\mu\in\mathcal{M}_{b}(\Omega)\) a bounded Radon measure, instead of problem (2.1).
As far as bounded Radon measures are concerned, we recall that every \(\mu\in\mathcal{M}_{b}(\Omega)\) can be decomposed, in a unique way, as the sum \(\mu=\mu_{0}+\mu_{s}\), where \(\mu_{0}\in L^{1}(\Omega)+H^{-1}(\Omega)\) is the absolutely continuous (with respect to the capacity) part and \(\mu_{s}\) is the singular one and it is concentrated on a set of null capacity. Further comments on measures data and the notion of capacity can be found in [5, Section \(2\)], [12, Section \(2\)].
**Definition 2.5**.: _A function \(u\::\:\Omega\to\mathbb{R}\) is a renormalized solution to problem (2.10) if it satisfies_
1. \(u\in\mathcal{T}_{0}^{1,2}(\Omega)\)_;_
2. \(\nabla u\in L^{1}(\Omega;\mathbb{R}^{N})\)_;_
3. \(H(x,u,\nabla u)\in L^{1}(\Omega)\)_;_
_and_
\[\int_{\Omega}S^{\prime}(u)\varphi[A(x)\cdot\nabla u]\cdot\nabla u \,dx+\int_{\Omega}S(u)[A(x)\cdot\nabla u]\cdot\nabla\varphi\,dx\\ =\int_{\Omega}H(x,u,\nabla u)S(u)\varphi\,dx+\int_{\Omega}S(u) \varphi\,d\mu_{0}\] (2.11)
_holds for any Lipschitz function \(S\::\:\mathbb{R}\to\mathbb{R}\) with compact support and for any \(\varphi\in H^{1}(\Omega)\cap L^{\infty}(\Omega)\) such that \(S(u)\varphi\in H_{0}^{1}(\Omega)\), and_
\[\lim_{n\to\infty}\frac{1}{n}\int_{\{n\leq u\leq 2n\}}\varphi[A(x) \cdot\nabla u]\cdot\nabla u\,dx =\int_{\Omega}\varphi\,d\mu_{s}^{+},\] (2.12)
\[\lim_{n\to\infty}\frac{1}{n}\int_{\{-2n\leq u\leq-n\}}\varphi[A(x )\cdot\nabla u]\cdot\nabla u\,dx =\int_{\Omega}\varphi\,d\mu_{s}^{-},\] (2.13)
_for every \(\varphi\in C_{b}(\Omega)\), i.e. \(\varphi\) continuous and bounded in \(\Omega\), and being \(\mu_{s}^{+}\) and \(\mu_{s}^{-}\) the positive and negative parts of \(\mu_{s}\), respectively._
**Remark 2.6**.: Both in Definition 2.3 and in Definition 2.5, we will need to use test functions for which the function \(S\) does not have compact support although \(S^{\prime}\) does. Most of them can be handled by a standard argument in the renormalized setting (see [17] for more details). This procedure consists of two steps, which we next apply to the main example \(S(s)=T_{k}(s)\) in the case \(f\in L^{m}(\Omega)\) with \(m>1\).
1. Take \(S(s)=T_{k}(s)\vartheta_{h}(s)\), \(\varphi=1\) in (2.8) with \[\vartheta_{h}(s)=\begin{cases}\begin{array}[]{ll}1&\quad|s|\leq h\,,\\ \frac{2h-|s|}{h}&\quad h<|s|\leq 2h\,,\\ 0&\quad|s|>2h\,.\end{array}\end{cases}\] Observe that \(\vartheta_{h}(\cdot)\) is compactly supported and converges to \(1\) as \(h\to\infty\). In this way, (2.8) becomes \[\int_{\{|u|<k\}}\vartheta_{h}(u)[A(x)\cdot\nabla u]\cdot\nabla T_ {k}(u)\,dx-\frac{1}{h}\int_{\{h<|u|<2h\}}|T_{k}(u)|\,[A(x)\cdot\nabla u]\cdot \nabla u\,dx\\ =\int_{\Omega}H(x,u,\nabla u)T_{k}(u)\vartheta_{h}(u)\,dx+\int_{ \Omega}fT_{k}(u)\vartheta_{h}(u)\,dx\,.\]
2. Check that letting \(h\) go to \(\infty\) is allowed in each term (in the second term, where \(\vartheta_{h}^{\prime}(u)\) appears, just apply condition (2.9)). It turns out that \[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla T_{k}(u)\,dx=\int_{\Omega}H(x,u,\nabla u)T_{k}(u)\,dx+\int_{\Omega}fT_{k}(u)\,dx\,.\]
Analogous comments apply when measure data are considered: taking the same test functions in (2.11) and using (2.12)–(2.13) instead of (2.9), we get
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla T_{k}(u)\,dx=\int_{\Omega}H(x,u,\nabla u)T_{k}(u)\,dx+\int_{\Omega}T_{k}(u)\,d\mu_{0}+k\mu_{s}(\Omega)\,.\]
Throughout this paper, we will consider such general test functions without further comments.
### Main results
As we have seen in the Introduction, we get two different types of results: one in the superlinear setting and the other in the linear case. To justify this classification, we refer to the next Subsection. On the other hand, in both situations we should have in mind that, depending on the data, we will obtain finite energy solutions or renormalized ones.
The results of this paper can be summarized in the following statements.
**Theorem 2.7** (Existence results in the superlinear case).: _Using the above notation, assume that \(\|f\|_{L^{m}(\Omega)}\) is small enough._
1. _If_ \(\frac{2N}{N+2}\leq m<\frac{N}{2}\) _and_ \(\frac{N(q-1)-mq}{N-2m}\leq\alpha<q-1\)_, then there exists a weak solution to problem (_2.1_) satisfying_ \(H(x,u,\nabla u)u\in L^{1}(\Omega)\)_,_ \(H(x,u,\nabla u)\in L^{\frac{2}{q}}(\Omega)\) _and the further regularity_ \(|u|^{\frac{\sigma}{2}}\in H_{0}^{1}(\Omega)\)_._
2. _If_ \(1<m<\frac{2N}{N+2}\) _and_ \(\frac{N(q-1)-mq}{N-2m}\leq\alpha<q-1\)_, then there exists a renormalized solution to (_2.1_) satisfying_ \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\) _and_ \(H(x,u,\nabla u)\in L^{m}(\Omega)\)_._
3. _If_ \(m>1\) _and_ \(\alpha=\frac{N(q-1)-q}{N-2}\)_, then there exists a renormalized solution to (_2.1_) satisfying_ \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\)_._
_Furthermore, assuming a source \(\mu\in\mathcal{M}_{b}(\Omega)\) with \(\|\mu\|_{\mathcal{M}_{b}}\) small enough, if \(\frac{N(q-1)-q}{N-2}<\alpha<q-1\), then there exists a renormalized solution to (2.10)._
**Theorem 2.8** (Existence results in the linear case).: _With the same notation as above, assume that \(\alpha=q-1\) and \(\gamma\) is small enough._
1. _If_ \(\frac{2N}{N+2}\leq m<\frac{N}{2}\)_, then there exists a weak solution to problem (_2.1_) which also satisfies_ \(H(x,u,\nabla u)u\in L^{1}(\Omega)\)_,_ \(H(x,u,\nabla u)\in L^{\frac{2}{q}}(\Omega)\) _and the further regularity_ \(|u|^{\sigma/2}\in H_{0}^{1}(\Omega)\)_._
2. _If_ \(1<m<\frac{2N}{N+2}\)_, then there exists a renormalized solution to (_2.1_) satisfying the regularity_ \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\) _and_ \(H(x,u,\nabla u)\in L^{m}(\Omega)\)_._
_Furthermore, for every \(\mu\in\mathcal{M}_{b}(\Omega)\) there exists a renormalized solution to (2.10)._
### Connection among parameters
The aim of this Subsection is to show the connection among all parameters of our problem which lead to existence of solution. The key argument is to find the best power \(\sigma\) such that \(u^{\sigma}\) can be taken as a test function.
We begin by estimating the gradient term and showing the connection between \(q\) and \(\alpha\). In order to simplify the explanation that follows, we consider the problem
\[\left\{\begin{array}[]{ll}-\Delta u=\frac{F(x)}{u^{\alpha}}&\mbox{ in }\Omega\,,\\ u=0&\mbox{ on }\partial\Omega\,,\end{array}\right.\] (2.14)
where \(\alpha>0\) and \(0<F\in L^{\tau}(\Omega)\mbox{ with }\tau<\frac{N}{2}\). Note that \(u>0\) by the classical maximum principle.
Basically, our aim is to prove a gradient estimate of the type
\[\||\nabla u|^{b}\|_{L^{1}(\Omega)}\leq c\|F\|_{L^{\tau}(\Omega)}^{\zeta}\,,\]
for certain values \(b\leq 2\), \(\zeta>0\). Once this step is concluded, we set \(F=|\nabla u|^{q}\), i.e.
\[\||\nabla u|^{b}\|_{L^{1}(\Omega)}\leq c\||\nabla u|^{b}\|_{L^{\frac{\tau\,q}{ b}}(\Omega)}^{\frac{q\zeta}{b}}\,,\]
so we will deduce that
* we close the estimate choosing \(\frac{\tau\,q}{b}=1\);
* we are within the superlinear setting if and only if \(\frac{q\zeta}{b}>1\).
We take \((1+u)^{\sigma-1}-1\) as test function in (2.14) for some \(\sigma\). Then, defining \(v=(1+u)^{\frac{\sigma}{2}}\), we obtain
\[\int_{\Omega}|\nabla v|^{2}\,dx\leq c\int_{\Omega}F\,v^{\frac{2}{\sigma}( \sigma-1-\alpha)}\,dx\,.\] (2.15)
Since Hölder’s inequality with \((\tau,\tau^{\prime})\) implies
\[\int_{\Omega}F\,v^{\frac{2}{\sigma}(\sigma-1-\alpha)}\,dx\leq\|F\|_{L^{\tau}( \Omega)}\|v\|_{L^{\frac{2}{\sigma}\tau^{\prime}(\sigma-1-\alpha)}(\Omega)}^{ \frac{2}{\sigma}(\sigma-1-\alpha)}\,,\] (2.16)
we require
\[\tau=\tau(\sigma,\alpha)=\frac{N\sigma}{(N-2)(\alpha+1)+2\sigma},\quad\text{i. e.}\quad\frac{2}{\sigma}\tau^{\prime}(\sigma-1-\alpha)=2^{*}\,.\] (2.17)
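For the reader’s convenience, we spell out the second identity in (2.17): since
\[\tau^{\prime}=\frac{\tau}{\tau-1}=\frac{N\sigma}{N\sigma-(N-2)(\alpha+1)-2\sigma}=\frac{N\sigma}{(N-2)(\sigma-1-\alpha)}\,,\]
we indeed obtain \(\frac{2}{\sigma}\tau^{\prime}(\sigma-1-\alpha)=\frac{2N}{N-2}=2^{*}\).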
We estimate (2.16) by applying Young’s inequality with \(\left(\frac{2\tau^{\prime}}{2^{*}},\frac{2\tau^{\prime}}{2\tau^{\prime}-2^{*}} \right)=\left(\frac{2\tau^{\prime}}{2^{*}},\frac{\tau(N-2)}{N-2\tau}\right)\). Then, invoking Sobolev’s embedding too, we obtain
\[\begin{split}\int_{\Omega}F\,v^{\frac{2}{\sigma}(\sigma-1-\alpha) }\,dx\leq\frac{1}{2}\|v\|_{H_{0}^{1}(\Omega)}^{2}+c\|F\|_{L^{\tau}(\Omega)}^{ \frac{\tau(N-2)}{N-2\tau}}\,.\end{split}\]
Note that this step makes sense since \(\tau<\frac{N}{2}\). Gathering (2.15)–(2.16), we deduce
\[\|v\|_{H_{0}^{1}(\Omega)}^{2}\leq c\|F\|_{L^{\tau}(\Omega)}^{\frac{\tau(N-2)}{ N-2\tau}}\,.\] (2.18)
Now, let \(0<b<2\) and consider \(\||\nabla u|^{b}\|_{L^{1}(\Omega)}\). We omit the case \(b=2\) since it can be dealt with in the same way without passing to the change of variable \(v=(1+u)^{\frac{\sigma}{2}}\). Then, by Hölder’s inequality with \(\left(\frac{2}{b},\frac{2}{2-b}\right)\), we have that
\[\begin{split}\int_{\Omega}|\nabla u|^{b}\,dx=\int_{\Omega}|\nabla v ^{\frac{2}{\sigma}}|^{b}\,dx\leq C\left(\int_{\Omega}|\nabla v|^{2}\,dx\right) ^{\frac{b}{2}}\left(\int_{\Omega}v^{\frac{2}{\sigma}\frac{b}{2-b}(2-\sigma)}\, dx\right)^{\frac{2-b}{2}}\,.\end{split}\] (2.19)
We thus require
\[b=b(\sigma)=\frac{N\sigma}{N-2+\sigma},\quad\text{i.e.}\quad\frac{2}{\sigma} \frac{b}{2-b}(2-\sigma)=2^{*}\,.\] (2.20)
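Analogously, the second identity in (2.20) can be checked directly: with \(b=\frac{N\sigma}{N-2+\sigma}\) one has
\[2-b=\frac{(N-2)(2-\sigma)}{N-2+\sigma}\,,\qquad\text{so that}\qquad\frac{2}{\sigma}\,\frac{b}{2-b}\,(2-\sigma)=\frac{2}{\sigma}\,\frac{N\sigma}{(N-2)(2-\sigma)}\,(2-\sigma)=\frac{2N}{N-2}=2^{*}\,.\]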
Note that \(b<2\) if \(\sigma<2\). Thanks to Sobolev’s embedding, the inequality in (2.19) becomes
\[\||\nabla u|^{b}\|_{L^{1}(\Omega)}\leq c\|v\|_{H_{0}^{1}(\Omega)}^{b}\|v\|_{L^ {2^{*}}(\Omega)}^{b\frac{2-\sigma}{\sigma}}\leq cS_{2}\|v\|_{H_{0}^{1}(\Omega) }^{\frac{2b}{\sigma}}\,.\]
Recalling (2.18) too and taking \(F=|\nabla u|^{q}\), we finally get
\[\||\nabla u|^{b}\|_{L^{1}(\Omega)}\leq c\|F\|_{L^{\tau}(\Omega)}^{\frac{\tau(N -2)}{N-2\tau}\frac{b}{\sigma}}\leq c\|F\|_{L^{\tau q}(\Omega)}^{\frac{\tau(N-2 )}{N-2\tau}\frac{bq}{\sigma}}\leq c\||\nabla u|^{b}\|_{L^{\tau\frac{q}{b}}( \Omega)}^{\frac{\tau(N-2)}{N-2\tau}\frac{q}{\sigma}}\,.\]
Then
* we close the estimate by taking \( 1=\frac{\tau q}{b}\). Thus, \(\frac{b}{q}=\tau=\frac{N\sigma}{(N-2)(\alpha+1)+2\sigma}\) (by (2.17)), from which, taking (2.20) into account, we deduce \[\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\] (2.21) and so \[\tau=\frac{b}{q}=\frac{N(q-1-\alpha)}{q(1-\alpha)}\,.\]
* We are within the superlinear setting if and only if \[\frac{\tau(N-2)}{N-2\tau}\frac{q}{\sigma}=\frac{q}{1+\alpha}>1\quad\text{i.e.} \quad q>1+\alpha\,.\]
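For completeness, the identity used in the last item follows from (2.17): since \(N-2\tau=\frac{N(N-2)(\alpha+1)}{(N-2)(\alpha+1)+2\sigma}\), we get
\[\frac{\tau(N-2)}{N-2\tau}=\frac{\sigma}{\alpha+1}\,,\qquad\text{hence}\qquad\frac{\tau(N-2)}{N-2\tau}\,\frac{q}{\sigma}=\frac{q}{1+\alpha}\,.\]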
Since we want to remain in a superlinear but still subquadratic setting, we will consider
\[1+\alpha<q<2\,,\]
which implies that \(0<\alpha<1\). In other words, if \(\alpha\geq 1\), then we are no longer in a superlinear gradient setting. The linear one appears when \(q=1+\alpha\), while we are in the sublinear setting when \(q<1+\alpha\).
We now want to determine the relation between the gradient growth parameter \(q\), the power growth parameter \(\alpha\) and the data assumptions \(f\in L^{m}(\Omega)\). To this end, we focus on the source term \(f\) and consider the simple problem
\[-{\rm div}\big(A(x)\nabla u\big)=f(x)\ \mbox{ in }\Omega\,,\qquad u=0\ \mbox{ on }\partial\Omega\,,\] (2.22)
where \(0<f\in L^{m}(\Omega)\mbox{ with }m<\frac{N}{2}\). Now, if we take again \(\left((1+u)^{\sigma-1}-1\right)\), with \(\sigma<2\) defined in (2.21), as test function in (2.22) and reason as before, then we find that we need
\[m(\sigma)=\frac{N\sigma}{N+2\sigma-2}\quad\text{i.e.}\quad m^{\prime}(\sigma-1 )=2^{*}\frac{\sigma}{2}\]
Gathering this identity with (2.21), we have
\[m=\frac{N(q-1-\alpha)}{q-2\alpha},\quad\text{i.e.}\quad\alpha=\frac{N(q-1)-mq} {N-2m}\,.\]
Therefore, we have informally deduced the need for conditions (2.5), (1.3) and (1.2), respectively.
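As a simple illustration (the numerical values below are chosen purely for the sake of exposition), take \(N=5\), \(q=\frac{3}{2}\) and \(\alpha=\frac{1}{4}\). Then
\[\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}=\frac{3}{2}\,,\qquad m=\frac{N(q-1-\alpha)}{q-2\alpha}=\frac{5}{4}\,,\]
so that \(1+\alpha=\frac{5}{4}<q<2\) (superlinear setting) and \(1<m=\frac{5}{4}<\frac{2N}{N+2}=\frac{10}{7}\); for \(\|f\|_{L^{5/4}(\Omega)}\) small enough, this is exactly the framework of item (2) of Theorem 2.7.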
### Our starting point
We begin with the following result, which provides us with solutions to the approximating problems of Section 4. It follows from the results of [25].
**Proposition 2.9**.: _Consider two continuous functions \(g_{1},g_{2}:\mathbb{R}\to(0,+\infty)\) such that \(g_{1}\in L^{2/q}(\mathbb{R})\) and \(\lim_{s\to\pm\infty}g_{2}(s)=0\), and set \(g=g_{1}+g_{2}\)._
1. _If_ \(f\in L^{m}(\Omega)\)_, with_ \(m>N/2\)_, then there exists a solution to (_2.1_) belonging to_ \(H_{0}^{1}(\Omega)\cap L^{\infty}(\Omega)\)_._
2. _If_ \(f\in L^{N/2}(\Omega)\)_, then there exists a solution to (_2.1_) which belongs to_ \(H_{0}^{1}(\Omega)\cap L^{r}(\Omega)\) _for all_ \(1\leq r<\infty\)_._
Proof. Note that the expression \(G(u)=\int_{0}^{u}g_{1}(s)^{2/q}\,ds\) defines a bounded real function (since \(g_{1}\in L^{2/q}(\mathbb{R})\)). Now consider a Lipschitz–continuous and increasing real function \(\varphi\) such that \(\varphi(0)=0\). Taking \(e^{\frac{|G(u)|}{\lambda}}\varphi(u)\) as test function in (2.1), with \(\lambda\) as in (2.2), it follows from (2.2), (2.3) and Young’s inequality that
\[\lambda\int_{\Omega}e^{\frac{|G(u)|}{\lambda}} \varphi^{\prime}(u)|\nabla u|^{2}\,dx+\int_{\Omega}|\varphi(u)|g_ {1}(u)^{2/q}e^{\frac{|G(u)|}{\lambda}}|\nabla u|^{2}\,dx\]
\[\leq\int_{\Omega}H(x,u,\nabla u)e^{\frac{|G(u)|}{\lambda}}\varphi (u)\,dx+\int_{\Omega}|f(x)|e^{\frac{|G(u)|}{\lambda}}\varphi(u)\,dx\]
\[\leq\int_{\Omega}g_{1}(u)\varphi(u)e^{\frac{|G(u)|}{\lambda}}| \nabla u|^{q}\,dx+\int_{\Omega}g_{2}(u)\varphi(u)e^{\frac{|G(u)|}{\lambda}}| \nabla u|^{q}\,dx+\int_{\Omega}|f(x)|e^{\frac{|G(u)|}{\lambda}}\varphi(u)\,dx\]
\[\leq\frac{q}{2}\int_{\Omega}g_{1}(u)^{2/q}|\varphi(u)|e^{\frac{|G (u)|}{\lambda}}|\nabla u|^{2}\,dx+\frac{q}{2}\int_{\Omega}g_{2}(u)^{2/q}| \varphi(u)|e^{\frac{|G(u)|}{\lambda}}|\nabla u|^{2}\,dx\]
\[\qquad+(2-q)\int_{\Omega}|\varphi(u)|e^{\frac{|G(u)|}{\lambda}}\, dx+\int_{\Omega}|f(x)|e^{\frac{|G(u)|}{\lambda}}|\varphi(u)|\,dx\,.\]
Simplifying, we deduce
\[\lambda\int_{\Omega}e^{\frac{|G(u)|}{\lambda}}\varphi^{\prime}(u)|\nabla u|^{2 }\,dx\leq\frac{q}{2}\int_{\Omega}g_{2}(u)^{2/q}|\varphi(u)|e^{\frac{|G(u)|}{ \lambda}}|\nabla u|^{2}\,dx+\int_{\Omega}\left[|f(x)|+(2-q)\right]e^{\frac{|G( u)|}{\lambda}}|\varphi(u)|\,dx\,.\]
Denoting \(h(x)=C\left[|f(x)|+(2-q)\right]\), where \(C\) is an upper bound of \(e^{\frac{|G(u)|}{\lambda}}\) (recall that \(G\) is a bounded function), we have \(h\in L^{m}(\Omega)\) with \(m\geq N/2\). Then
\[\lambda\int_{\Omega}\varphi^{\prime}(u)|\nabla u|^{2}\,dx\leq\frac{q}{2}C\int_ {\Omega}g_{2}(u)^{2/q}|\varphi(u)||\nabla u|^{2}\,dx+\int_{\Omega}h(x)|\varphi (u)|\,dx\,,\]
for every Lipschitz–continuous and increasing real function \(\varphi\) such that \(\varphi(0)=0\). Having in mind that \(\lim_{s\to\pm\infty}g_{2}(s)^{2/q}=0\), an appeal to the proofs of [25, Theorem 2.1 and Theorem 2.2] shows that this estimate leads to existence for any \(h\in L^{m}(\Omega)\) and consequently for every \(f\in L^{m}(\Omega)\).
**Remark 2.10**.: A straightforward consequence of the previous result is the existence of solutions for every \(\alpha>0\) when \(m\geq N/2\). This is the reason for assuming \(m<\frac{N}{2}\) in what follows.
**Remark 2.11**.: The argument of the above result can also be applied to \(L^{1}\)–functions, deducing existence of a solution for any \(f\in L^{1}(\Omega)\) when \(g\in L^{\frac{2}{q}}(\mathbb{R})\) (see [26], and [24] for its extension to measure data); as a consequence, this holds whenever \(\;\alpha>q/2\;\). Nevertheless, this bound is not optimal since we will see that the same fact holds for every \(\;\alpha>q-1\;\) (note that \(q-1<q/2\) if \(1<q<2\)). This gap will be studied in Theorem 4.15 below.
## 3. A priori estimates
Following [17, 18], the basic idea to get a priori estimates is to choose \(|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)\) as test function in problem (2.1). Hence, we will consider three cases according to the value of the exponent \(\sigma-1\). Roughly speaking, the easiest case is when \(\frac{2N}{N+2}\leq m<\frac{N}{2}\) (that is, \(\sigma\geq 2\)), since then \(|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)\) can be taken directly as test function. In the case \(1<m<\frac{2N}{N+2}\) (that is, \(1<\sigma<2\)), we have to replace it with \(\left[(\varepsilon+|G_{k}(u)|)^{\sigma-1}-\varepsilon^{\sigma-1}\right]{\rm\; sign}(u)\), since now the exponent does not define a Lipschitz–continuous function of \(G_{k}(u)\). (Actually, we cannot take this function in the renormalized formulation; however, we may follow the steps of Remark 2.6 to approximate \(\left[(\varepsilon+|G_{k}(u)|)^{\sigma-1}-\varepsilon^{\sigma-1}\right]{\rm\; sign}(u)\) and arrive at a similar estimate.) The last case is \(m=1\), in which the exponent vanishes and the test function must be bounded.
### Finite energy solutions
**Proposition 3.1**.: _Let \(f\in L^{m}(\Omega)\) with \(\frac{2N}{N+2}\leq m<\frac{N}{2}\) and let \(\alpha=\frac{N(q-1)-mq}{N-2m}\). Assume (2.2), (2.3), (2.4) and that \(u\) is a solution to problem (2.1) in the sense of Definition 2.2 such that \(|u|^{\frac{\sigma}{2}}\in H_{0}^{1}(\Omega)\). (Observe that it yields \(\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\) and \(m=\frac{N(q-1-\alpha)}{q-2\alpha}\).) Then, if \(\|f\|_{L^{m}(\Omega)}\) is small enough, every such solution \(u\) satisfies the following estimate:_
\[\int_{\Omega}|u|^{\sigma-2}|\nabla u|^{2}\,dx\leq M\,,\]
_where \(M\) is a positive constant which only depends on \(N\), \(q\), \(m\), \(\lambda\), \(\gamma\) and \(\|f\|_{L^{m}(\Omega)}\)._
Proof. Let \(k>0\). We start taking the test function \(|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)\) in problem (2.1). Then, by (2.3), we obtain
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla(|G_{k}(u)|^{\sigma-1} {\rm\;sign}(u))\\ \leq\int_{\Omega}g(u)\,|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)|\nabla u |^{q}\,dx+\int_{\Omega}f\,|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)\,dx\,.\] (3.1)
On the left hand side, thanks to (2.2), we get
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla(|G_{k}(u)|^{\sigma-1} {\rm\;sign}(u)) \geq\lambda\int_{\Omega}\nabla u\cdot\nabla\big{[}|G_{k}(u)|^{ \sigma-1}{\rm\;sign}(u)\big{]}\,dx\]
\[=\lambda(\sigma-1)\int_{\Omega}|G_{k}(u)|^{\sigma-2}\,|\nabla G_{ k}(u)|^{2}\,dx=\lambda\frac{4(\sigma-1)}{\sigma^{2}}\int_{\Omega}|\nabla|G_{k} (u)|^{\frac{\sigma}{2}}|^{2}\,dx\,.\]
Recalling also (2.4), inequality (3.1) becomes
\[\lambda\frac{4(\sigma-1)}{\sigma^{2}}\int_{\Omega}|\nabla|G_{k}(u )|^{\frac{\sigma}{2}}|^{2} \leq\int_{\Omega}g(u)\,|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)|\nabla u |^{q}\,dx+\int_{\Omega}f\,|G_{k}(u)|^{\sigma-1}{\rm\;sign}(u)\,dx\]
\[\leq\gamma\int_{\Omega}\frac{|G_{k}(u)|^{\sigma-1}}{|u|^{\alpha}} |\nabla u|^{q}\,dx+\int_{\Omega}|f|\,|G_{k}(u)|^{\sigma-1}\,dx=I_{1}+I_{2}\,.\] (3.2)
We start by performing some simple computations on the gradient term \(I_{1}\).
\[I_{1}=\gamma\,\int_{\Omega}\frac{|G_{k}(u)|^{\sigma-1}}{|u|^{ \alpha}}|\nabla u|^{q}\,dx \leq\gamma\int_{\{|u|>k\}}\frac{|G_{k}(u)|^{\sigma-1}}{|G_{k}(u)| ^{\alpha}}|\nabla u|^{q}\,dx\]
\[=\gamma\int_{\{|u|>k\}}|G_{k}(u)|^{\sigma-1-\alpha}\frac{|G_{k}(u )|^{(\frac{\sigma}{2}-1)q}}{|G_{k}(u)|^{(\frac{\sigma}{2}-1)q}}|\nabla G_{k}(u )|^{q}\,dx\]
\[=\gamma\frac{2^{q}}{\sigma^{q}}\int_{\Omega}|G_{k}(u)|^{\sigma-1- \alpha-(\frac{\sigma}{2}-1)q}|\nabla|G_{k}(u)|^{\frac{\sigma}{2}}|^{q}\,dx\,,\]
where we have used that \(0<\alpha\) and that \(|G_{k}(u)|\leq|u|\) hold; we remark that no singularity appears since we are integrating on the set \(\{|u|>k\}\). Then, applying Hölder’s inequality, we deduce
\[I_{1}\leq\gamma\frac{2^{q}}{\sigma^{q}}\left(\int_{\Omega}|G_{k}(u)|^{[\sigma- 1-\alpha-(\frac{\sigma}{2}-1)q]\frac{2}{2-q}}\,dx\right)^{\frac{2-q}{2}}\left( \int_{\Omega}|\nabla|G_{k}(u)|^{\frac{\sigma}{2}}|^{2}\,dx\right)^{\frac{q}{2} }\,.\] (3.3)
Now, we will apply Sobolev’s inequality. Indeed, since \(\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\), the power of \(G_{k}(u)\) in the first factor in (3.3) changes to
\[\left[\sigma-1-\alpha-\left(\frac{\sigma}{2}-1\right)q\right]\frac{2}{2-q}= \frac{\sigma}{2}\,2^{*}\,.\] (3.4)
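For completeness, we verify (3.4): since \(\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\),
\[\sigma-1-\alpha-\Big(\frac{\sigma}{2}-1\Big)q=\sigma\,\frac{2-q}{2}+(q-1-\alpha)=\frac{(N-2)(q-1-\alpha)}{2}+(q-1-\alpha)=\frac{N}{2}\,(q-1-\alpha)\,,\]
and multiplying by \(\frac{2}{2-q}\) gives \(\frac{N(q-1-\alpha)}{2-q}=\frac{N}{N-2}\,\sigma=\frac{\sigma}{2}\,2^{*}\), which is exactly (3.4).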
Therefore, it follows that
\[I_{1} \leq\gamma\frac{2^{q}}{\sigma^{q}}\left(\int_{\Omega}|G_{k}(u)|^{ \frac{\sigma}{2}2^{*}}\,dx\right)^{\frac{2-q}{2}}\||G_{k}(u)|^{\frac{\sigma}{2 }}\|^{q}_{H_{0}^{1}(\Omega)}\]
\[\leq\gamma\frac{2^{q}}{\sigma^{q}}\left(S_{2}^{2}\int_{\Omega}| \nabla|G_{k}(u)|^{\frac{\sigma}{2}}|^{2}\,dx\right)^{\frac{2^{*}}{2}\frac{2-q} {2}}\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{q}_{H_{0}^{1}(\Omega)}\]
\[=\gamma\frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*}\frac{2-q}{2}}\||G_{k} (u)|^{\frac{\sigma}{2}}\|^{q+2^{*}\frac{2-q}{2}}_{H_{0}^{1}(\Omega)}=\gamma \frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*}\frac{2-q}{2}}\||G_{k}(u)|^{\frac{\sigma}{ 2}}\|^{2\frac{N-q}{N-2}}_{H_{0}^{1}(\Omega)}\,.\]
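The last equality simply uses the elementary identity
\[q+2^{*}\,\frac{2-q}{2}=q+\frac{N(2-q)}{N-2}=\frac{q(N-2)+N(2-q)}{N-2}=\frac{2(N-q)}{N-2}\,.\]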
On the other hand, we use Hölder’s inequality on \(I_{2}\) to get
\[I_{2}=\int_{\Omega}|f|\,|G_{k}(u)|^{\sigma-1}\,dx\leq\left(\int_{\Omega}|f|^{m }\,dx\right)^{\frac{1}{m}}\left(\int_{\Omega}|G_{k}(u)|^{(\sigma-1)\,m^{\prime }}\,dx\right)^{\frac{1}{m^{\prime}}}\,.\]
Therefore, having in mind (2.6),
\[I_{2}\leq\|f\|_{L^{m}(\Omega)}\left(\int_{\Omega}|G_{k}(u)|^{ \frac{\sigma}{2}2^{*}}\,dx\right)^{\frac{1}{m^{\prime}}} \leq\|f\|_{L^{m}(\Omega)}\left(S_{2}^{2}\int_{\Omega}|\nabla|G_{k }(u)|^{\frac{\sigma}{2}}|^{2}\,dx\right)^{\frac{2^{*}}{2}\frac{1}{m^{\prime}}}\]
\[=\|f\|_{L^{m}(\Omega)}S_{2}^{2\frac{\sigma-1}{\sigma}}\||G_{k}(u) |^{\frac{\sigma}{2}}\|^{2\frac{\sigma-1}{\sigma}}_{H_{0}^{1}(\Omega)}\,.\]
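For clarity, we record the computation behind the exponents in the last line; it relies on the relation \((\sigma-1)m^{\prime}=\frac{\sigma}{2}\,2^{*}\), which is the identity used above when passing from the exponent \((\sigma-1)m^{\prime}\) to \(\frac{\sigma}{2}2^{*}\):
\[\frac{2^{*}}{2}\,\frac{1}{m^{\prime}}=\frac{2^{*}}{2}\,\frac{\sigma-1}{\frac{\sigma}{2}\,2^{*}}=\frac{\sigma-1}{\sigma}\,,\qquad\text{so that}\qquad\Big(S_{2}^{2}\,\|\,|G_{k}(u)|^{\frac{\sigma}{2}}\|_{H_{0}^{1}(\Omega)}^{2}\Big)^{\frac{2^{*}}{2}\frac{1}{m^{\prime}}}=S_{2}^{2\frac{\sigma-1}{\sigma}}\,\|\,|G_{k}(u)|^{\frac{\sigma}{2}}\|_{H_{0}^{1}(\Omega)}^{2\frac{\sigma-1}{\sigma}}\,.\]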
Thus, inequality (3.2) becomes
\[C_{1}\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{2}_{H_{0}^{1}(\Omega)}\leq\gamma C_{2} \||G_{k}(u)|^{\frac{\sigma}{2}}\|^{2\frac{N-q}{N-2}}_{H_{0}^{1}(\Omega)}+C_{3} \|f\|_{L^{m}(\Omega)}\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{2\frac{\sigma-1}{ \sigma}}_{H_{0}^{1}(\Omega)}\,,\]
for some positive constants \(C_{i}\) only depending on \(N\), \(q\), \(\lambda\) and \(m\) (this one through \(\sigma\), by (2.5)). This is equivalent to
\[C_{1}\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{\frac{2}{\sigma}}_{H_{0}^{1}(\Omega)}- \gamma C_{2}\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{2\frac{N-q}{N-2}-2\frac{\sigma- 1}{\sigma}}_{H_{0}^{1}(\Omega)}\leq C_{3}\|f\|_{L^{m}(\Omega)}\,.\]
If we denote \(Y_{k}=\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{2}_{H_{0}^{1}(\Omega)}\) and define the function
\[F(y)=C_{1}y^{\frac{1}{\sigma}}-\gamma C_{2}y^{\frac{N-q}{N-2}-\frac{\sigma-1}{ \sigma}}\,,\quad y\geq 0\,,\]
we have obtained
\[F(Y_{k})\leq C_{3}\|f\|_{L^{m}(\Omega)}\qquad\forall k>0\,.\] (3.5)
Note that the continuous function \(F(y)\) satisfies \(F(0)=0\), \(\lim_{y\to+\infty}F(y)=-\infty\), it is increasing until reaching a certain \(y^{*}\) and then it is decreasing, so that it has a maximum \(M^{*}\) at \(y^{*}\), i.e., \(M^{*}=F(y^{*})=\max_{y}F(y)\). We explicitly remark that \(M^{*}\) depends on \(\gamma\) as well as on \(q\), \(N\), \(\lambda\) and \(m\). Choosing the constant
\[K=\frac{M^{*}}{C_{3}}\,,\]
if we require \(\|f\|_{L^{m}(\Omega)}<K\), then the equation \(F(y)=C_{3}\|f\|_{L^{m}(\Omega)}<M^{*}\) has two roots:
\[Y^{-}\;\mbox{ and }\;Y^{+}\,,\quad\mbox{ with }\quad Y^{-}<y^{*}<Y^{+}\,.\]
It follows from \(|u|^{\frac{\sigma}{2}}\in H_{0}^{1}(\Omega)\) that the function \(k\mapsto Y_{k}\) is continuous and tends to \(0\) as \(k\to\infty\). This fact implies \(Y_{k}\leq Y^{-}\) for all \(k>0\), and so
\[\frac{\sigma^{2}}{4}\int_{\Omega}|G_{k}(u)|^{\sigma-2}|\nabla G_{k}(u)|^{2}\, dx=\int_{\Omega}|\nabla|G_{k}(u)|^{\frac{\sigma}{2}}|^{2}\,dx\leq Y^{-}\,,\]
for all \(k>0\). Therefore, \(\int_{\Omega}|u|^{\sigma-2}|\nabla u|^{2}\,dx\leq\frac{4}{\sigma^ {2}}Y^{-}\).
**Remark 3.2**.: We explicitly point out that our choice \(m=\frac{N(q-1-\alpha)}{q-2\alpha}\) and our assumption \(\alpha>0\) implies \( m<\frac{N(q-1)}{q}\), so that the range for parameter \(m\) is actually \(\frac{2N}{N+2}\leq m<\frac{N(q-1)}{q}\). A simple consequence is that then \(q\geq\alpha\frac{N-2}{N}+1+\frac{2}{N}\), which, in particular, yields \( q>1+\frac{2}{N}\).
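Indeed, the lower bound on \(q\) comes from imposing \(m\geq\frac{2N}{N+2}\): recalling that \(q-2\alpha>0\),
\[\frac{N(q-1-\alpha)}{q-2\alpha}\geq\frac{2N}{N+2}\;\Longleftrightarrow\;(N+2)(q-1-\alpha)\geq 2(q-2\alpha)\;\Longleftrightarrow\;q\geq\alpha\,\frac{N-2}{N}+1+\frac{2}{N}\,.\]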
If \( q<\alpha\frac{N-2}{N}+1+\frac{2}{N}\), then we are allowed to consider data with lower summability than in the case \(\alpha=0\).
**Remark 3.3**.: The proof of Proposition 3.1 for the case \(\frac{N(q-1)-mq}{N-2m}<\alpha<q-1\) is similar to that of the limit case. The only differences appear from (3.4) on, since now
\[\beta:=\left[\sigma-1-\alpha-\left(\frac{\sigma}{2}-1\right)q\right]\frac{2}{2 -q}<\frac{\sigma}{2}\,2^{*}\,.\]
Therefore, Hölder’s inequality must be applied once again in (3.3):
\[I_{1} \leq\gamma\frac{2^{q}}{\sigma^{q}}\left(\int_{\Omega}|G_{k}(u)|^{ \beta}\,dx\right)^{\frac{2-q}{2}}\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{q}_{H_{0}^ {1}(\Omega)}\]
\[\leq\gamma\frac{2^{q}}{\sigma^{q}}|\Omega|^{(2-q)(\frac{1}{2}- \frac{\beta}{2^{*}\sigma})}\left(\int_{\Omega}|G_{k}(u)|^{\frac{\sigma}{2}2^{* }}\,dx\right)^{\frac{\beta(2-q)}{2^{*}\sigma}}\||G_{k}(u)|^{\frac{\sigma}{2}} \|^{q}_{H_{0}^{1}(\Omega)}\]
\[\leq\gamma\frac{2^{q}}{\sigma^{q}}|\Omega|^{(2-q)(\frac{1}{2}- \frac{\beta}{2^{*}\sigma})}\left(S_{2}^{2}\int_{\Omega}|\nabla|G_{k}(u)|^{ \frac{\sigma}{2}}|^{2}\,dx\right)^{\frac{\beta(2-q)}{2\sigma}}\||G_{k}(u)|^{ \frac{\sigma}{2}}\|^{q}_{H_{0}^{1}(\Omega)}\]
\[=\gamma\frac{2^{q}}{\sigma^{q}}S_{2}^{\frac{\beta(2-q)}{\sigma}}| \Omega|^{(2-q)(\frac{1}{2}-\frac{\beta}{2^{*}\sigma})}\||G_{k}(u)|^{\frac{ \sigma}{2}}\|^{q+\frac{\beta(2-q)}{\sigma}}_{H_{0}^{1}(\Omega)}\,.\]
From this point on, we can follow the same proof; we just note that now the constants also depend on \(\alpha\) and \(|\Omega|\).
**Remark 3.4**.: A relevant case occurs when \(\alpha\) attains its limit value \(\alpha=q-1\). Then \(\beta=\sigma\) and so we have
\[I_{1}\leq\gamma\frac{2^{q}}{\sigma^{q}}S_{2}^{2-q}|\Omega|^{\frac{2-q}{N}}\|\, |G_{k}(u)|^{\frac{\sigma}{2}}\|_{H_{0}^{1}(\Omega)}^{2}\,.\]
A more accurate estimate follows from the Poincaré–Friedrichs inequality. It yields
\[I_{1}\leq\gamma\frac{2^{q}}{\sigma^{q}}(C^{PF}_{2})^{2-q}\|\,|G_{k}(u)|^{\frac {\sigma}{2}}\|_{H_{0}^{1}(\Omega)}^{2}\,.\]
As a consequence, inequality (3.2) becomes
\[\lambda\frac{4(\sigma-1)}{\sigma^{2}}\|\,|G_{k}(u)|^{\frac{\sigma}{2}}\|_{H_{0 }^{1}(\Omega)}^{2}\leq\gamma\frac{2^{q}}{\sigma^{q}}(C^{PF}_{2})^{2-q}\|\,|G_{ k}(u)|^{\frac{\sigma}{2}}\|_{H_{0}^{1}(\Omega)}^{2}+\|f\|_{L^{m}(\Omega)}S_{2} ^{2\frac{\sigma-1}{\sigma}}\|\,|G_{k}(u)|^{\frac{\sigma}{2}}\|^{2\frac{\sigma- 1}{\sigma}}_{H_{0}^{1}(\Omega)}\]
and an estimate for every \(f\in L^{m}(\Omega)\) holds if
\[\gamma\frac{2^{q}}{\sigma^{q}}(C^{PF}_{2})^{2-q}<\lambda\frac{4(\sigma-1)}{ \sigma^{2}}\,.\]
Hence, we have arrived at the following result.
**Proposition 3.5**.: _Let \(f\in L^{m}(\Omega)\) with \(\frac{2N}{N+2}\leq m<\frac{N}{2}\) and let \(\alpha=q-1\). Assume (2.2), (2.3), (2.4) and that \(u\) is a solution to problem (2.1) in the sense of Definition 2.2 such that \(|u|^{\frac{\sigma}{2}}\in H_{0}^{1}(\Omega)\). If_
\[\gamma<\lambda\frac{2^{2-q}}{\sigma^{2-q}(C^{PF}_{2})^{2-q}}(\sigma-1)\,,\]
_then such a solution \(u\) satisfies the following estimate:_
\[\int_{\Omega}|u|^{\sigma-2}|\nabla u|^{2}\,dx\leq M\,,\]
_for every \(f\in L^{m}(\Omega)\), with no smallness assumption on \(\|f\|_{L^{m}(\Omega)}\), where \(M\) is a positive constant which only depends on \(N\), \(q\), \(m\), \(\Omega\), \(\lambda\), \(\gamma\) and \(\|f\|_{L^{m}(\Omega)}\)._
**Remark 3.6**.: Noting that \(\sigma-1=\frac{N(m-1)}{N-2m}\), it follows that the condition we have found in Proposition 3.5 can be written as
\[\gamma<\lambda C^{2-q}\frac{N(m-1)}{N-2m}\quad\text{for}\quad C=\frac{2}{ \sigma C_{2}^{PF}}\,.\]
We point out that letting \(q\) go to \(2\), we obtain the same critical value appearing in [25].
### Renormalized solutions with \(L^{m}(\Omega)\) data
In order to show that the parameters involved in all the cases fit together continuously, the following result is needed; it allows us to estimate sharply.
**Lemma 3.7**.: _Let \(v\) be a nonnegative function belonging to \(W_{0}^{1,p}(\Omega)\) and consider \(\varphi_{\varepsilon}(v)=(\varepsilon+v)^{\nu}\) for \(\varepsilon>0\) and \(0<\nu<1\). Then_
\[\Big{(}\int_{\Omega}\varphi_{\varepsilon}(v)^{p^{*}}\,dx\Big{)}^{1/p^{*}}\leq S_{p}\Big{(}\int_{\Omega}|\nabla\varphi_{\varepsilon}(v)|^{p}\,dx+A(\varepsilon)\Big{)}^{1/p}\,,\]
_where \(A\) is a positive real function such that \(\lim_{\varepsilon\to 0}A(\varepsilon)=0\)._
Proof. First note that \(\varphi_{\varepsilon}\in W^{1,p}(\Omega)\) since \(\varphi_{\varepsilon}\) is defined through a Lipschitz–continuous real function. Now, extend \(\varphi_{\varepsilon}\) to be \(\varepsilon^{\nu}\) in \(\mathbb{R}^{N}\backslash\Omega\). We denote by \(B_{r}\) the ball centered at the origin with radius \(r\). Fix \(0<r<R\) in such a way that \(\Omega\subset B_{r}\) and consider the cut–off function \(\eta\in W^{1,\infty}(\mathbb{R}^{N})\) with \(0\leq\eta\leq 1\) defined as
\[\begin{array}[]{ll}\eta(x)=1&x\in B_{r}\,;\\ \eta(x)=0&x\notin B_{R}\,;\\ |\nabla\eta(x)|=\frac{1}{R-r}&x\in B_{R}\backslash B_{r}\,.\end{array}\]
It follows from \(\varphi_{\varepsilon}\eta\in W^{1,p}(\mathbb{R}^{N})\), that
\[\Big{(}\int_{\mathbb{R}^{N}}(\varphi_{\varepsilon}\eta)^{p^{*}}\,dx\Big{)}^{1/ p^{*}}\leq S_{p}\Big{(}\int_{\mathbb{R}^{N}}|\nabla(\varphi_{\varepsilon}\eta) |^{p}\,dx\Big{)}^{1/p}.\]
As a consequence,
\[\Big{(}\int_{\Omega}\varphi_{\varepsilon}^{p^{*}}\,dx\Big{)}^{1/p ^{*}}\]
\[\leq S_{p}\Big{(}\int_{\mathbb{R}^{N}}|\nabla(\varphi_{ \varepsilon}\eta)|^{p}\,dx\Big{)}^{1/p}=S_{p}\Big{(}\int_{B_{R}\backslash B_{r }}\varphi_{\varepsilon}^{p}|\nabla\eta|^{p}\,dx+\int_{\Omega}\eta^{p}|\nabla \varphi_{\varepsilon}|^{p}\,dx\Big{)}^{1/p}\]
\[=S_{p}\Big{(}\frac{\varepsilon^{p\nu}}{(R-r)^{p}}|B_{R}\backslash B _{r}|+\int_{\Omega}|\nabla\varphi_{\varepsilon}|^{p}\,dx\Big{)}^{1/p},\]
as desired.
**Remark 3.8**.: We explicitly point out that a similar result holds for the Poincaré–Friedrichs inequality with \(C_{p}^{PF}\) instead of \(S_{p}\).
**Proposition 3.9**.: _Let \(f\in L^{m}(\Omega)\) with \(1<m<\frac{2N}{N+2}\) and let \(\alpha=\frac{N(q-1)-mq}{N-2m}\). Assume (2.2), (2.3), (2.4) and that \(u\) is a renormalized solution to problem (2.1) in the sense of Definition 2.3 such that \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\). (Observe that then \(\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\) and \(m=\frac{N(q-1-\alpha)}{q-2\alpha}\).) Then, if \(\|f\|_{L^{m}(\Omega)}\) is small enough, every such solution \(u\) satisfies the following estimate:_
\[\int_{\Omega}(1+|u|)^{\sigma-2}|\nabla u|^{2}\,dx\leq M\,,\]
_where \(M\) is a positive constant which only depends on \(N\), \(q\), \(m\), \(\gamma\), \(\lambda\) and \(\|f\|_{L^{m}(\Omega)}\)._
Proof. Let \(k>0\) and fix \(\varepsilon\) such that \(0<\varepsilon<\min\{1,k\}\). We recall Remark 2.6 and take the test function \(S_{n,k}(u)\varphi\), with
\[S_{n,k}(u)=\int_{0}^{T_{n}(G_{k}(u))}(\varepsilon+|t|)^{\sigma-2}\,dt=\frac{1} {\sigma-1}\Big{[}(\varepsilon+|T_{n}(G_{k}(u))|)^{\sigma-1}-\varepsilon^{ \sigma-1}\Big{]}{\rm\;sign}(u),\quad\hbox{ and }\quad\varphi=1\,,\]
and so, by the growth condition (2.3),
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla S_{n,k}(u)\,dx\leq \int_{\Omega}g(u)\,|S_{n,k}(u)|\,|\nabla u|^{q}\,dx+\int_{\Omega}|f|\,|S_{n,k} (u)|\,dx\\ =\int_{\{|u|>k\}}g(u)\,|S_{n,k}(u)|\,|\nabla u|^{q}\,dx+\int_{ \Omega}|f|\,|S_{n,k}(u)|\,dx\,,\]
since \(S_{n,k}(u)\) vanishes in the set \(\{|u|\leq k\}\). On the left hand side we get
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla S_{n,k}(u)\,dx =\int_{\Omega}(\varepsilon+|T_{n}(G_{k}(u))|)^{\sigma-2}[A(x) \cdot\nabla u]\cdot\nabla T_{n}(G_{k}(u))\,dx\]
\[\geq\lambda\frac{4}{\sigma^{2}}\int_{\Omega}\big{|}\nabla( \varepsilon+|T_{n}(G_{k}(u))|)^{\frac{\sigma}{2}}\big{|}^{2}\,dx\]
by (2.2).
Therefore, we obtain
\[\lambda\frac{4}{\sigma^{2}}\int_{\Omega}\big{|}\nabla(\varepsilon+|T_{n}(G_{k} (u))|)^{\frac{\sigma}{2}}\big{|}^{2}\,dx\leq\int_{\{|u|>k\}}g(u)|S_{n,k}(u)|| \nabla u|^{q}\,dx+\int_{\Omega}|f|\,|S_{n,k}(u)|\,dx\,,\]
and letting \(n\to\infty\) (which is licit thanks to the \(\frac{\sigma}{2}\)–power regularity), we have
\[\lambda\frac{4}{\sigma^{2}}\int_{\Omega}|\nabla\varphi_{\varepsilon}^{k}(u)|^{ 2}\,dx\leq\int_{\{|u|>k\}}g(u)|S_{k}(u)||\nabla u|^{q}\,dx+\int_{\Omega}|f|\,| S_{k}(u)|\,dx=I_{1}+I_{2}\,,\] (3.6)
where we have denoted
\[S_{k}(u)=\frac{1}{\sigma-1}(\varepsilon+|G_{k}(u)|)^{\sigma-1}\,,\]
and
\[\varphi^{k}_{\varepsilon}(u)=(\varepsilon+|G_{k}(u)|)^{\frac{\sigma}{2}}\,.\]
We start making some computations on \(I_{1}\).
\[I_{1} \leq\gamma\int_{\{|u|>k\}}\frac{|S_{k}(u)|}{|u|^{\alpha}}|\nabla u |^{q}\,dx\]
\[=\frac{\gamma}{\sigma-1}\int_{\{|u|>k\}}\frac{(\varepsilon+|G_{k} (u)|)^{\sigma-1}}{|u|^{\alpha}}|\nabla u|^{q}\,dx\]
\[\leq\frac{\gamma}{\sigma-1}\int_{\{|u|>k\}}(\varepsilon+|G_{k}(u) |)^{\sigma-1-\alpha}|\nabla G_{k}(u)|^{q}\,dx\]
owing to \(\alpha>0\) and the fact that the inequality \(\varepsilon+|G_{k}(u)|\leq|u|\) holds in \(\{|u|>k\}\) (recall that \(\varepsilon<k\)). Thus,
\[I_{1} \leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}\int_{\Omega}( \varepsilon+|G_{k}(u)|)^{\sigma-1-\alpha}\frac{1}{(\varepsilon+|G_{k}(u)|)^{( \frac{\sigma}{2}-1)q}}|\nabla(\varepsilon+|G_{k}(u)|)^{\frac{\sigma}{2}}|^{q} \,dx\]
\[=\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}\int_{\Omega}( \varepsilon+|G_{k}(u)|)^{\sigma-1-\alpha-(\frac{\sigma}{2}-1)q}|\nabla\varphi_ {\varepsilon}^{k}(u)|^{q}\,dx\,.\]
Moreover, applying Hölder’s inequality we arrive at
\[I_{1}\leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}\left(\int_{\Omega}( \varepsilon+|G_{k}(u)|)^{\left[\sigma-1-\alpha-(\frac{\sigma}{2}-1)q\right] \frac{2}{2-q}}\,dx\right)^{\frac{2-q}{2}}\left(\int_{\Omega}|\nabla\varphi_{ \varepsilon}^{k}(u)|^{2}\,dx\right)^{\frac{q}{2}}\,.\] (3.7)
Since we have chosen \(\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\), the power of \((\varepsilon+|G_{k}(u)|)\) in the first integrand is actually
\[\left[\sigma-1-\alpha-\left(\frac{\sigma}{2}-1\right)q\right]\frac{2}{2-q}= \frac{\sigma}{2}\,2^{*}\,.\]
Therefore, inequality (3.7) becomes
\[I_{1}\leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}\left(\int_{\Omega}| \varphi^{k}_{\varepsilon}(u)|^{2^{*}}\,dx\right)^{\frac{2-q}{2}}\|\nabla \varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{q}\,.\]
Thanks to Lemma 3.7, we may perform the following manipulations:
\[I_{1} \leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*} \frac{2-q}{2}}\left(\int_{\Omega}|\nabla\varphi_{\varepsilon}^{k}(u)|^{2}\,dx+ A(\varepsilon)\right)^{\frac{2^{*}}{2}\frac{2-q}{2}}\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{q}\]
\[=\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*}\frac {2-q}{2}}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2}+A( \varepsilon)\right)^{\frac{2^{*}}{2}\frac{2-q}{2}}\|\nabla\varphi_{\varepsilon }^{k}(u)\|_{L^{2}(\Omega)}^{q}\]
\[\leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*} \frac{2-q}{2}}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2} +A(\varepsilon)\right)^{\frac{2^{*}}{2}\frac{2-q}{2}+\frac{q}{2}}\,.\]
On the other hand, we use Hölder’s inequality in \(I_{2}\) to get
\[I_{2}=\int_{\Omega}|f|\,|S_{k}(u)|\,dx\leq\frac{1}{\sigma-1}\left(\int_{\Omega}|f|^{m}\,dx\right)^{\frac{1}{m}}\left(\int_{\Omega}(\varepsilon+|G_{k}(u)|)^{(\sigma-1)\,m^{\prime}}\,dx\right)^{\frac{1}{m^{\prime}}}\,.\]
Therefore, on account of (2.6) and applying Lemma 3.7 again,
\[I_{2} \leq\frac{1}{\sigma-1}\|f\|_{L^{m}(\Omega)}\left(\int_{\Omega}( \varepsilon+|G_{k}(u)|)^{\frac{\sigma}{2}2^{*}}\,dx\right)^{\frac{1}{m^{\prime }}}=\frac{1}{\sigma-1}\|f\|_{L^{m}(\Omega)}\left(\int_{\Omega}\varphi_{ \varepsilon}^{k}(u)^{2^{*}}\,dx\right)^{\frac{1}{m^{\prime}}}\]
\[\leq\frac{1}{\sigma-1}\|f\|_{L^{m}(\Omega)}S_{2}^{\frac{2^{*}}{m^ {\prime}}}\left(\int_{\Omega}|\nabla\varphi_{\varepsilon}^{k}(u)|^{2}\,dx+A( \varepsilon)\right)^{\frac{2^{*}}{2}\frac{1}{m^{\prime}}}\,.\]
Thus, inequality (3.6) becomes
\[\lambda\frac{4}{\sigma^{2}}\|\nabla\varphi_{\varepsilon}^{k}(u)\| _{L^{2}(\Omega)}^{2}\leq I_{1}+I_{2} \leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*} \frac{2-q}{2}}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2} +A(\varepsilon)\right)^{\frac{2^{*}}{2}\frac{2-q}{2}+\frac{q}{2}}\] (3.8)
\[+\frac{1}{\sigma-1}\|f\|_{L^{m}(\Omega)}S_{2}^{\frac{2^{*}}{m^{ \prime}}}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2}+A( \varepsilon)\right)^{\frac{2^{*}}{2m^{\prime}}}\,.\]
If \(k\) satisfies \(G_{k}(u)=0\), then \(\|\varphi_{\varepsilon}^{k}(u)\|_{H_{0}^{1}(\Omega)}=0\) and we are done. So, we will assume that \(G_{k}(u)\neq 0\) and consequently \(\lim_{\varepsilon\to 0}\|\varphi_{\varepsilon}^{k}(u)\|_{H_{0}^{1}(\Omega)}\neq 0\). Then, we rearrange the terms of (3.8), obtaining
\[\lambda\frac{4}{\sigma^{2}}\|\nabla\varphi_{\varepsilon}^{k}(u)\| _{L^{2}(\Omega)}^{2-\frac{2^{*}}{m^{\prime}}}\leq\frac{\gamma}{\sigma-1}\frac{ 2^{q}}{\sigma^{q}}S_{2}^{2^{*}\frac{2-q}{2}}\|\nabla\varphi_{\varepsilon}^{k}( u)\|_{L^{2}(\Omega)}^{2^{*}\frac{2-q}{2}+q-\frac{2^{*}}{m^{\prime}}}\\ +B(\varepsilon)+\frac{1}{\sigma-1}\|f\|_{L^{m}(\Omega)}S_{2}^{ \frac{2^{*}}{m^{\prime}}}\left(1+\frac{A(\varepsilon)}{\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2}}\right)^{\frac{2^{*}}{2m^{\prime}}}\,,\] (3.9)
where
\[B(\varepsilon)=\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}S_{2}^{2^{*} \frac{2-q}{2}}\frac{\Big{(}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}( \Omega)}^{2}+A(\varepsilon)\Big{)}^{\frac{2^{*}}{2}\frac{2-q}{2}+\frac{q}{2}}- \|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{{2^{*}}\frac{2-q}{2}+q }}{\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{\frac{2^{*}}{m^{ \prime}}}}\]
defines a positive function which satisfies \(\lim_{\varepsilon\to 0}B(\varepsilon)=0\). Denoting \(Y_{k}=\||G_{k}(u)|^{\frac{\sigma}{2}}\|^{2}_{H_{0}^{1}(\Omega)}\) and \(Y_{k,\varepsilon}=\|\nabla\varphi_{\varepsilon}^{k}(u)\|^{2}_{L^{2}(\Omega)}\), inequality (3.9) changes to
\[C_{1}Y_{k,\varepsilon}^{1-\frac{2^{*}}{2m^{\prime}}}-\gamma C_{2}Y_{k, \varepsilon}^{\frac{2^{*}}{2}\frac{2-q}{2}+\frac{q}{2}-\frac{2^{*}}{2m^{\prime }}}\leq B(\varepsilon)+C_{3}\|f\|_{L^{m}(\Omega)}\left(1+\frac{A(\varepsilon)} {\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2}}\right)^{\frac{2^{ *}}{2m^{\prime}}}\,,\]
where each \(C_{i}\) denotes a positive constant depending on \(q\), \(N\), \(\lambda\) and \(m\).
Now, we consider again the function
\[F(y)=C_{1}y^{1-\frac{2^{*}}{2m^{\prime}}}-\gamma C_{2}y^{\frac{2^{*}}{2}\frac{ 2-q}{2}+\frac{q}{2}-\frac{2^{*}}{2m^{\prime}}}\,,\]
(note that \(1-\frac{2^{*}}{2m^{\prime}}=\frac{1}{\sigma}\) and \(\frac{2^{*}}{2}\frac{2-q}{2}+\frac{q}{2}-\frac{2^{*}}{2m^{\prime}}=\frac{N-q}{N-2}-\frac{\sigma-1}{\sigma}\)), which has a maximum \(M^{*}\) achieved at a certain \(y^{*}\), i.e., \(M^{*}=F(y^{*})=\max_{y}F(y)\). Choosing the constant
\[K=\frac{M^{*}}{C_{3}}\,,\]
and requiring \(\|f\|_{L^{m}(\Omega)}<K\), there exists \(\varepsilon_{0}\in(0,1)\) such that
\[F(Y_{k,\varepsilon})\leq M_{\varepsilon}=B(\varepsilon)+C_{3}\|f\|_{L^{m}( \Omega)}\left(1+\frac{A(\varepsilon)}{\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{ L^{2}(\Omega)}^{2}}\right)^{\frac{2^{*}}{2m^{\prime}}}<M^{*}\]
for all \(0<\varepsilon<\varepsilon_{0}\), and so the equation \(F(y)=M_{\varepsilon}\) has two roots:
\[Y_{\varepsilon}^{-}\;\mbox{ and }\;Y_{\varepsilon}^{+}\,,\quad\mbox{ with } \quad Y_{\varepsilon}^{-}<y^{*}<Y_{\varepsilon}^{+}\,.\]
Observe that the continuity of \(F\) leads to the continuity of the function \(\varepsilon\mapsto Y_{\varepsilon}^{-}\).
From our hypothesis \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\), we have that function \(k\mapsto Y_{k,\varepsilon}\) is continuous and goes to \(0\) when \(k\to\infty\). Hence, \(F(Y_{k,\varepsilon})<M^{*}\) implies \(Y_{k,\varepsilon}\leq Y_{\varepsilon}^{-}\) for all \(k>\varepsilon\) and, as a consequence,
\[\frac{\sigma^{2}}{4}\int_{\Omega}(1+|G_{k}(u)|)^{\sigma-2}|\nabla G _{k}(u)|^{2}\,dx \leq\frac{\sigma^{2}}{4}\int_{\Omega}(\varepsilon+|G_{k}(u)|)^{ \sigma-2}|\nabla G_{k}(u)|^{2}\,dx\]
\[=\int_{\Omega}|\nabla(\varepsilon+|G_{k}(u)|)^{\frac{\sigma}{2}}|^{2}\,dx\leq Y_{\varepsilon}^{-}\,,\]
for all \(k>\varepsilon\). We point out that equation \(F(y)=C_{3}\|f\|_{L^{m}(\Omega)}\) has two roots which will be denoted by \(Y^{-}\) and \(Y^{+}\), with \(Y^{-}<Y_{\varepsilon}^{-}<y^{*}<Y_{\varepsilon}^{+}<Y^{+}\). Due to the continuity of function \(F\) and since \(\lim_{\varepsilon\to 0}M_{\varepsilon}=C_{3}\|f\|_{L^{m}(\Omega)}\), it follows that \(\lim_{\varepsilon\to 0}Y_{\varepsilon}^{-}=Y^{-}\). Hence,
\[\int_{\Omega}(1+|G_{k}(u)|)^{\sigma-2}|\nabla G_{k}(u)|^{2}\,dx\leq\frac{4}{ \sigma^{2}}Y^{-}\]
for all \(k>0\) from where the desired estimate follows.
**Remark 3.10**.: As in Remark 3.3, we may extend the above result to the range \(\frac{N(q-1)-mq}{N-2m}<\alpha<q-1\) with a constant depending also on \(\alpha\) and \(|\Omega|\).
In the same spirit as Proposition 3.5, a consequence of Proposition 3.9 in the limit case \(\alpha=q-1\) can be obtained. We also point out that when \(q\) tends to \(2\), it yields the same critical value found in [25].
**Proposition 3.11**.: _Let \(f\in L^{m}(\Omega)\) with \(1<m<\frac{2N}{N+2}\) and let \(\alpha=q-1\). Assume (2.2), (2.3), (2.4) and that \(u\) is a renormalized solution to problem (2.1) in the sense of Definition 2.3 such that \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\). If_
\[\gamma<\lambda\frac{2^{2-q}}{\sigma^{2-q}(C^{PF}_{2})^{2-q}}(\sigma-1)\,,\]
_then such solution \(u\) satisfies the following estimate:_
\[\int_{\Omega}(1+|u|)^{\sigma-2}|\nabla u|^{2}\,dx\leq M\,,\]
_where \(M\) is a positive constant which only depends on \(N\), \(q\), \(m\), \(\Omega\), \(\lambda\), \(\|f\|_{L^{m}(\Omega)}\) and \(\gamma\)._
Proof. We may follow the same argument as in the proof of Proposition 3.9 until we reach inequality (3.7), which now reads
\[I_{1}\leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}\left(\int_{\Omega}( \varepsilon+|G_{k}(u)|)^{\frac{\sigma}{2}2}\,dx\right)^{\frac{2-q}{2}}\left( \int_{\Omega}|\nabla\varphi_{\varepsilon}^{k}(u)|^{2}\,dx\right)^{\frac{q}{2}}\,.\]
Thus, the Poincaré–Friedrichs inequality yields
\[I_{1}\leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}(C_{2}^{PF})^{2-q} \left(\int_{\Omega}|\nabla\varphi_{\varepsilon}^{k}(u)|^{2}\,dx+A(\varepsilon) \right)^{\frac{2-q}{2}}\left(\int_{\Omega}|\nabla\varphi_{\varepsilon}^{k}(u)| ^{2}\,dx\right)^{\frac{q}{2}}\]
and so (3.8) becomes
\[\lambda\frac{4}{\sigma^{2}}\|\nabla\varphi_{\varepsilon}^{k}(u)\| _{L^{2}(\Omega)}^{2}\leq\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}(C_{2}^ {PF})^{2-q}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2}+A( \varepsilon)\right)\\ +\frac{1}{\sigma-1}\|f\|_{L^{m}(\Omega)}S_{2}^{\frac{2^{*}}{m^{ \prime}}}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{2}(\Omega)}^{2}+A( \varepsilon)\right)^{\frac{2^{*}}{2m^{\prime}}}\,.\]
Therefore, the condition \(\frac{\gamma}{\sigma-1}\frac{2^{q}}{\sigma^{q}}(C_{2}^{PF})^{2-q}<\lambda\frac {4}{\sigma^{2}}\) implies a uniform estimate of \(\nabla\varphi_{\varepsilon}^{k}(u)\) in \(L^{2}(\Omega;\mathbb{R}^{N})\). We then infer the estimate
\[\int_{\Omega}(1+|u|)^{\sigma-2}|\nabla u|^{2}\,dx\leq M\,.\]
### Renormalized solutions with measure data
We recall here the definition and a few properties of _Marcinkiewicz_ spaces we are going to employ when dealing with the measure setting.
Let \(0<\zeta<\infty\). Then, the Marcinkiewicz space \(M^{\zeta}(\Omega)\) is defined as the set of measurable functions \(u:\Omega\to\mathbb{R}\) such that
\[[\,u\,]_{\zeta}:=\sup_{k>0}\,k\,|\{x\in\Omega\,:\,|u(x)|>k\}|^{\frac{1}{\zeta}}<+\infty\,.\]
Furthermore, the following continuous embeddings hold
\[L^{\zeta}(\Omega)\hookrightarrow M^{\zeta}(\Omega)\hookrightarrow L^{\zeta- \omega}(\Omega)\]
for every \(\omega>0\) such that \(\zeta-\omega>1\). More precisely,
\[\|f\|_{L^{\zeta-\omega}(\Omega)}\leq\left(\frac{\zeta}{\omega}\right)^{\frac{1 }{\zeta-\omega}}|\Omega|^{\frac{\omega}{\zeta(\zeta-\omega)}}[\,f\,]_{\zeta}\] (3.10)
holds for all \(f\in M^{\zeta}(\Omega)\). We point out that the constant in the embedding depends on \(\zeta\), \(\omega\) and \(|\Omega|\), and it blows up just when \(\omega\) tends to \(0\).
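For the reader’s convenience, we recall the standard computation behind (3.10): for \(f\in M^{\zeta}(\Omega)\) and any \(T>0\),
\[\int_{\Omega}|f|^{\zeta-\omega}\,dx=(\zeta-\omega)\int_{0}^{\infty}t^{\zeta-\omega-1}|\{|f|>t\}|\,dt\leq|\Omega|\,T^{\zeta-\omega}+\frac{\zeta-\omega}{\omega}\,[\,f\,]_{\zeta}^{\zeta}\,T^{-\omega}\,,\]
and the choice \(T=[\,f\,]_{\zeta}|\Omega|^{-1/\zeta}\) yields \(\int_{\Omega}|f|^{\zeta-\omega}\,dx\leq\frac{\zeta}{\omega}\,[\,f\,]_{\zeta}^{\zeta-\omega}\,|\Omega|^{\frac{\omega}{\zeta}}\), which is (3.10).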
**Lemma 3.12**.: _Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded open set. Let \(1<p<N\) and \(0<r<p\). Consider \(u\::\:\Omega\to\mathbb{R}\) a measurable and a.e. finite function satisfying_
\[T_{\ell}(u)\in W_{0}^{1,p}(\Omega)\,,\qquad\hbox{for all }\ell>0\,.\]
_Assume that there exists \(M>0\) such that_
\[\int_{\Omega}|\nabla T_{\ell}(u)|^{p}\leq\ell^{r}M\,,\qquad\hbox{for all }\ell >0\,.\]
_Then_
\[[\,u\,]_{\frac{N}{N-p}(p-r)}\leq C_{1}(N,p,r)M^{1/(p-r)}\,;\] (3.11)
\[[\,|\nabla u|\,]_{\frac{N}{N-r}(p-r)}\leq C_{2}(N,p,r)M^{1/(p-r)}\,.\] (3.12)
Proof. Applying Sobolev’s inequality (and denoting by \(S_{p}\) the Sobolev constant), we obtain
\[|\{|u|>\ell\}| \leq|\{|T_{\ell}(u)|\geq\ell\}|\leq\int_{\Omega}\frac{|T_{\ell}(u )|^{p^{*}}}{\ell^{p^{*}}}\leq\frac{S_{p}^{p^{*}}\left(\int_{\Omega}|\nabla T_{ \ell}(u)|^{p}\right)^{p^{*}/p}}{\ell^{p^{*}}}\]
\[\leq S_{p}^{p^{*}}M^{\frac{N}{N-p}}\ell^{-\frac{N}{N-p}(p-r)}\,,\]
from where (3.11) follows.
To see (3.12), perform the following manipulations:
\[|\{|\nabla u|>j\}| \leq|\{|\nabla T_{\ell}(u)|>j\}|+|\{|u|>\ell\}|\]
\[\leq\int_{\Omega}\frac{|\nabla T_{\ell}(u)|^{p}}{j^{p}}+S_{p}^{p^ {*}}M^{\frac{N}{N-p}}\ell^{-\frac{N}{N-p}(p-r)}\]
\[\leq\frac{\ell^{r}M}{j^{p}}+S_{p}^{p^{*}}M^{\frac{N}{N-p}}\ell^{- \frac{N}{N-p}(p-r)}\,.\]
Since the minimum is obtained for
\[\ell^{*}=\Big{(}\frac{N}{N-p}\frac{p-r}{r}S_{p}^{p^{*}}\Big{)}^{\frac{N-p}{p(N -r)}}j^{\frac{N-p}{N-r}}M^{\frac{1}{N-r}}\,,\]
we deduce that
\[|\{|\nabla u|>j\}|\leq C(N,p,r)M^{\frac{N}{N-r}}j^{-\frac{N(p-r)}{N-r}},\]
wherewith (3.12) holds.
**Remark 3.13**.: _For further reference, it is convenient to make the above constant \(C_{2}(N,p,r)\) explicit. It is easy to check that_
\[C_{2}(N,p,r)=C(N,p,r)^{\frac{N-r}{N(p-r)}}\]
_and_
\[C(N,p,r)=\left[\left(\frac{N(p-r)}{r(N-p)}\right)^{\frac{r(N-p)}{p(N-r)}}+ \left(\frac{N(p-r)}{r(N-p)}\right)^{-\frac{N(p-r)}{p(N-r)}}\right]S_{p}^{\frac {Nr}{N-r}}\,.\]
_We point out that_
\[\lim_{r\to 0}C(N,p,r)=1\,.\]
**Theorem 3.14**.: _Assume (2.2), (2.3), (2.4). Let \(\mu\in\mathcal{M}_{b}(\Omega)\) and \(\frac{N(q-1)-q}{N-2}<\alpha<q-1\). If \(\|\mu\|_{\mathcal{M}_{b}(\Omega)}\) is small enough, then every renormalized solution to problem (2.10) in the sense of Definition 2.5 satisfies_
\[\int_{\Omega}|\nabla T_{j}(u)|^{2}\,dx\leq Mj\,,\quad\hbox{for all }j>0\,,\] (3.13)
_and_
\[\int_{\Omega}g(u)|\nabla u|^{q}\,dx\leq C_{0}\,,\]
_where \(M\) and \(C_{0}\) are positive constants which only depend on \(N\), \(q\), \(\gamma\), \(\alpha\), \(\lambda\) and \(\|\mu\|_{\mathcal{M}_{b}(\Omega)}\). Moreover, for each \(k>0\), the following estimate holds_
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx\leq C_{k}\,,\]
_where \(C_{k}\) is a constant which only depends on the above parameters of the problem and it satisfies_
\[\lim_{k\to\infty}C_{k}=0\,.\]
Proof. Most of the proof consists of estimating the gradient term in \(L^{1}(\Omega)\).
The case we are considering does not provide any \(\frac{\sigma}{2}\)–power class as in the previous results (see, however, Remark 3.15 below). We thus want to “recreate” an analogous tool.
We choose
\[\theta=\frac{2qN-sq-2\alpha N+2\alpha s-Ns}{s(N-q)}\,,\]
with \(s\) such that
\[q<s<\frac{2N(q-\alpha)}{N+q-2\alpha},\]
in order to have \(\theta>0\). Note that this condition is not restrictive since \(q>\alpha+1>2\alpha\).
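Indeed, the interval for \(s\) is nonempty precisely because \(q>2\alpha\):
\[q<\frac{2N(q-\alpha)}{N+q-2\alpha}\;\Longleftrightarrow\;q\,(N+q-2\alpha)<2N(q-\alpha)\;\Longleftrightarrow\;(q-2\alpha)(N-q)>0\,.\]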
We now analyze the connection among all these parameters. Observe that \(0<1-\frac{2\alpha}{q}\) holds because of the restriction \(\alpha<\frac{q}{2}\), and \(q<s\) implies
\[\theta<1-\frac{2\alpha}{q}\,.\]
On the other hand, it follows from \(\frac{N(q-1)-q}{N-2}<\alpha\) that
\[0<\theta<\frac{N(2-s)}{s(N-2)}\,.\] (3.14)
Let \(j,k>0\) and let \(\varepsilon>0\) satisfy \(\varepsilon<k\). We start by taking \(\psi(u)=T_{j}((\varepsilon+|G_{k}(u)|)^{\theta}-\varepsilon^{\theta})\) as test function in problem (2.10). Notice that \(\psi(u)\) vanishes on the set \(\{|u|\leq k\}\). Then, thanks to (2.3),
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla\psi(u)\,dx\leq j\int_{\{|u|>k\}}g( u)|\nabla u|^{q}\,dx+j\|\mu\|_{\mathcal{M}_{b}(\Omega)}\,.\] (3.15)
On the left hand side we get
\[\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla\psi(u)\,dx =\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla T_{j}((\varepsilon+| G_{k}(u)|)^{\theta}-\varepsilon^{\theta})\,dx\]
\[=\theta\int_{\{(\varepsilon+|G_{k}(u)|)^{\theta}-\varepsilon^{ \theta}<j\}}(\varepsilon+|G_{k}(u)|)^{\theta-1}[A(x)\cdot\nabla u]\cdot\nabla G _{k}(u)\,dx\]
\[\geq\theta\int_{\{(\varepsilon+|G_{k}(u)|)^{\theta}<j\}}( \varepsilon+|G_{k}(u)|)^{\theta-1}[A(x)\cdot\nabla u]\cdot\nabla G_{k}(u)\,dx\]
\[\geq\lambda\theta\int_{\{\varepsilon+|G_{k}(u)|<j^{\frac{1}{ \theta}}\}}(\varepsilon+|G_{k}(u)|)^{\theta-1}|\nabla G_{k}(u)|^{2}\,dx\]
\[=\lambda\theta\int_{\{(\varepsilon+|G_{k}(u)|)^{\frac{\theta+1}{2 }}<j^{\frac{\theta+1}{2\theta}}\}}\big{[}(\varepsilon+|G_{k}(u)|)^{\frac{ \theta-1}{2}}|\nabla G_{k}(u)|\big{]}^{2}\,dx\]
\[=\lambda\theta\frac{4}{(\theta+1)^{2}}\int_{\Omega}|\nabla T_{j^{ \frac{\theta+1}{2\theta}}}(\varepsilon+|G_{k}(u)|)^{\frac{\theta+1}{2}}|^{2}\, dx\,,\]
due to (2.2).
Thus, invoking (2.4) too, (3.15) becomes
\[\lambda\frac{4\theta}{(\theta+1)^{2}} \int_{\Omega}|\nabla T_{j^{\frac{\theta+1}{2\theta}}}(\varepsilon +|G_{k}(u)|)^{\frac{\theta+1}{2}}|^{2}\,dx\leq j\int_{\{|u|>k\}}g(u)|\nabla u| ^{q}\,dx+j\|\mu\|_{\mathcal{M}_{b}(\Omega)}\]
\[\leq j\left\{\gamma\int_{\{|u|>k\}}\frac{1}{|u|^{\alpha}}|\nabla u |^{q}\,dx+\|\mu\|_{\mathcal{M}_{b}(\Omega)}\right\}=j(I+\|\mu\|_{\mathcal{M}_{ b}(\Omega)})\,,\]
where \( I=\gamma\int_{\{|u|>k\}}\frac{1}{|u|^{\alpha}}|\nabla u|^{q}\,dx\). Moreover, writing \(\ell=j^{\frac{\theta+1}{2\theta}}\) and \(r=\frac{2\theta}{\theta+1}\) (i.e., \(\theta=\frac{r}{2-r}\)), we get
\[\lambda\frac{4\theta}{(\theta+1)^{2}}\int_{\Omega}|\nabla T_{\ell}(\varepsilon +|G_{k}(u)|)^{\frac{\theta+1}{2}}|^{2}\,dx\leq\ell^{r}(I+\|\mu\|_{\mathcal{M}_ {b}(\Omega)})\,.\] (3.16)
We note that it follows from \(\theta<\frac{N(2-s)}{s(N-2)}\) (see (3.14)) that \( s<\frac{N(2-r)}{N-r}\).
We go on by performing some simple computations on the gradient term \(I\).
\[I =\gamma\,\int_{\{|u|>k\}}\frac{1}{|u|^{\alpha}}|\nabla u|^{q}\,dx \leq\gamma\int_{\{|u|>k\}}(\varepsilon+|G_{k}(u)|)^{-\alpha}|\nabla G_{k}(u)|^ {q}\,dx\]
\[=\gamma\int_{\{|u|>k\}}(\varepsilon+|G_{k}(u)|)^{-\alpha}\frac{( \varepsilon+|G_{k}(u)|)^{\frac{\theta-1}{2}q}}{(\varepsilon+|G_{k}(u)|)^{\frac {\theta-1}{2}q}}|\nabla G_{k}(u)|^{q}\,dx\]
\[=\gamma\frac{2^{q}}{(\theta+1)^{q}}\int_{\Omega}(\varepsilon+|G_{ k}(u)|)^{-\alpha-\frac{\theta-1}{2}q}|\nabla(\varepsilon+|G_{k}(u)|)^{\frac{ \theta+1}{2}}|^{q}\,dx\,,\]
where we have used that \(0<\alpha\) and that \(\varepsilon+|G_{k}(u)|\leq|u|\) hold; we remark that no singularity appears since we are integrating on the set \(\{|u|>k\}\). Then, applying Hölder’s inequality with \(\left(\frac{s}{q},\frac{s}{s-q}\right)\), we deduce
\[I\leq\gamma\frac{2^{q}}{(\theta+1)^{q}}\left(\int_{\Omega}(\varepsilon+|G_{k}( u)|)^{[-\alpha-\frac{\theta-1}{2}q]\frac{s}{s-q}}\,dx\right)^{\frac{s-q}{s}} \left(\int_{\Omega}|\nabla(\varepsilon+|G_{k}(u)|)^{\frac{\theta+1}{2}}|^{s}\, dx\right)^{\frac{q}{s}}\,.\] (3.17)
The next step is to estimate \(I\) in terms of the function
\[\varphi_{\varepsilon}^{k}(u)=(\varepsilon+|G_{k}(u)|)^{\frac{\theta+1}{2}}\,.\]
To this end, we will apply Sobolev’s inequality taking into account Lemma 3.7. Indeed, the definition of \(\theta\) implies that the power of \((\epsilon+|G_{k}(u)|)\) in the first integrand in (3.17) changes to
\[\left[\frac{1-\theta}{2}q-\alpha\right]\frac{s}{s-q}=\frac{\theta+1}{2}\,s^{*}\,.\] (3.18)
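For completeness, we verify (3.18). From the definition of \(\theta\),
\[1-\theta=\frac{2\big[N(s-q)+\alpha(N-s)\big]}{s(N-q)}\,,\qquad\frac{1-\theta}{2}\,q-\alpha=\frac{N(s-q)(q-\alpha)}{s(N-q)}\,,\]
so that, using \(\theta+1=\frac{2(N-s)(q-\alpha)}{s(N-q)}\),
\[\Big[\frac{1-\theta}{2}\,q-\alpha\Big]\frac{s}{s-q}=\frac{N(q-\alpha)}{N-q}=\frac{\theta+1}{2}\,\frac{Ns}{N-s}=\frac{\theta+1}{2}\,s^{*}\,.\]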
Therefore, estimate (3.17) becomes
\[I \leq\gamma\frac{2^{q}}{(\theta+1)^{q}}\left(\int_{\Omega}( \varepsilon+|G_{k}(u)|)^{\frac{\theta+1}{2}s^{*}}\,dx\right)^{\frac{s-q}{s}}\| \nabla\varphi_{\varepsilon}^{k}(u)\|^{q}_{L^{s}(\Omega)}\] (3.19)
\[\leq\gamma\frac{2^{q}}{(\theta+1)^{q}}S_{s}^{s^{*}\frac{s-q}{s}} \bigg{(}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A( \varepsilon)\bigg{)}^{s^{*}\frac{s-q}{s^{2}}}\|\nabla\varphi_{\varepsilon}^{k} (u)\|^{q}_{L^{s}(\Omega)}\,.\]
Going back to inequality (3.16), we deduce
\[\int_{\Omega}|\nabla T_{\ell}(\varphi_{\varepsilon}^{k}(u))|^{2} \,dx\\ \leq\ell^{r}\left[\gamma\,C_{1}(r,s)\bigg{(}\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A(\varepsilon)\bigg{)}^{s^{*}\frac{s -q}{s^{2}}}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{q}+C_{2}(r, s)\|\mu\|_{\mathcal{M}_{b}(\Omega)}\right]\,,\]
where
\[C_{1}(r,s)=\frac{(\theta+1)^{2-q}}{2^{2-q}}\frac{S_{s}^{s^{*}\frac{s-q}{s}}}{ \lambda\theta}\quad\text{and}\quad C_{2}(r,s)=\frac{(\theta+1)^{2}}{4\theta \lambda}\,.\] (3.20)
Note that \(C_{i}(r,s)\), \(i=1,\,2\), depend continuously on \(r\) and \(s\) (besides depending on \(N\), \(\lambda\) and \(q\)). Using Lemma 3.12, we obtain
\[\Big{[}|\nabla\varphi_{\varepsilon}^{k}(u)|\Big{]}_{\frac{N(2-r)} {N-r}}\\ \leq c_{0}(r,s)\left[\gamma\,{C}_{1}(r,s)\bigg{(}\|\nabla\varphi_ {\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A(\varepsilon)\bigg{)}^{s^{*}\frac{ s-q}{s^{2}}}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{q}+{C}_{2} (r,s)\|\mu\|_{\mathcal{M}_{b}(\Omega)}\right]^{\frac{1}{2-r}}\,,\]
for some \(c_{0}(r,s)\) continuously depending on \(r\) and \(s\), besides \(N\).
Now recall we have taken \(s<\frac{N(2-r)}{N-r}\), so that for each \((r,s)\) there exists a positive constant \(C_{0}(r,s)\) continuously depending on \(r\) and \(s\), jointly with \(N\) and \(|\Omega|\), such that
\[\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}\leq C_{0}(r,s)\Big{[}| \nabla\varphi_{\varepsilon}^{k}(u)|\Big{]}_{\frac{N(2-r)}{N-r}}\,.\]
Indeed, by (3.10), we have
\[C_{0}(r,s)=\left(\frac{N(2-r)}{N(2-r)-s(N-r)}\right)^{\frac{1}{s}}\,|\Omega|^{ \frac{N(2-r)-s(N-r)}{sN(2-r)}}\,.\]
Note that \(C_{0}(r,s)\) only blows up when \(s\to\frac{N(2-r)}{N-r}\), which is impossible once \(\alpha\) is fixed. Hence,
\[\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)} \leq C_{0}(r,s)\left[|\nabla\varphi_{\varepsilon}^{k}(u)|\right]_ {\frac{N(2-r)}{N-r}}\]
\[\leq\gamma^{\frac{1}{2-r}}C_{3}(r,s)\bigg{(}\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A(\varepsilon)\bigg{)}^{s^{*}\frac{s -q}{s^{2}}\frac{1}{2-r}}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)} ^{\frac{q}{2-r}}+C_{4}(r,s)\|\mu\|_{\mathcal{M}_{b}(\Omega)}^{\frac{1}{2-r}}\]
\[\leq\gamma^{\frac{1}{2-r}}C_{3}(r,s)\|\nabla\varphi_{\varepsilon} ^{k}(u)\|_{L^{s}(\Omega)}^{(s^{*}\frac{s-q}{s}+q)\frac{1}{2-r}}+C_{4}(r,s)\| \mu\|_{\mathcal{M}_{b}(\Omega)}^{\frac{1}{2-r}}+B(\varepsilon)\,,\]
where
\[B(\varepsilon)=\gamma^{\frac{1}{2-r}}C_{3}(r,s)\left[\bigg{(}\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A(\varepsilon)\bigg{)}^{s^{*}\frac{s -q}{s^{2}}\frac{1}{2-r}}-\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega) }^{s^{*}\frac{s-q}{s}\frac{1}{2-r}}\right]\|\nabla\varphi_{\varepsilon}^{k}(u) \|_{L^{s}(\Omega)}^{\frac{q}{2-r}}\]
satisfies \(\lim_{\varepsilon\to 0}B(\varepsilon)=0\). Now, denoting \(Y_{k,\varepsilon}=\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}\), we have
\[Y_{k,\varepsilon}-\gamma^{\frac{1}{2-r}}C_{3}(r,s)Y_{k,\varepsilon}^{(s^{*} \frac{s-q}{s}+q)\frac{1}{2-r}}\leq C_{4}(r,s)\|\mu\|_{\mathcal{M}_{b}(\Omega)} ^{\frac{1}{2-r}}+B(\varepsilon)\,.\]
We explicitly note that the power of \(Y_{k,\varepsilon}\) does not depend on either \(r\) or \(s\). Indeed, it is straightforward to check that
\[\left(s^{*}\frac{s-q}{s}+q\right)\frac{1}{2-r}=\frac{s(N-q)}{N-s}\frac{1}{2-r}\]
and our definitions of \(\theta\) and \(r\) yield
\[\frac{r}{2-r}=\theta=\frac{2qN-sq-2\alpha N+2\alpha s-Ns}{s(N-q)}\quad\text{ and}\quad r=\frac{2qN-sq-2\alpha N+2\alpha s-Ns}{(N-s)(q-\alpha)}\,,\]
so that
\[\frac{s(N-q)}{N-s}\frac{1}{2-r}=\frac{2qN-sq-2\alpha N+2\alpha s-Ns}{r(N-s)}=q -\alpha\,.\]
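Let us briefly justify the expression for \(r\): since
\[\theta+1=\frac{2qN-sq-2\alpha N+2\alpha s-Ns+s(N-q)}{s(N-q)}=\frac{2(N-s)(q-\alpha)}{s(N-q)}\,,\]
we get
\[r=\frac{2\theta}{\theta+1}=\frac{2qN-sq-2\alpha N+2\alpha s-Ns}{(N-s)(q-\alpha)}\qquad\text{and}\qquad\frac{1}{2-r}=\frac{\theta+1}{2}\,,\]
which is precisely what was used above.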
We define the family of functions (\(r>0\) and \(q<s<\frac{2N(q-\alpha)}{N+q-2\alpha}\))
\[F_{r,s}(y)=y-\gamma^{\frac{1}{2-r}}C_{3}(r,s)y^{q-\alpha}\,,\quad y>0\,,\]
each one satisfying the same properties as the function considered in the previous results and having a maximum \(M_{r,s}^{*}\) at the point \(y_{r,s}^{*}\). We remark that we are not able to take limits as \(r\to 0\Leftrightarrow\theta\to 0\Leftrightarrow s\to\frac{2N(q-\alpha)}{N+q-2\alpha}\), since in this case the constants \(C_{3}\) and \(C_{4}\) blow up. Choose \(\mu\) such that
\[\|\mu\|_{\mathcal{M}_{b}(\Omega)}^{\frac{1}{2-r}}<\frac{M_{r,s}^{*}}{C_{4}(r,s )}\,,\]
for some \(r>0\) and \(q<s<\frac{2N(q-\alpha)}{N+q-2\alpha}\). From now on, we fix such parameters \(r\) and \(s\). Since \(\lim_{\varepsilon\to 0}B(\varepsilon)=0\) (note that \(B(\varepsilon)\) also depends on \(r\) and \(s\)), it follows that there exists \(\varepsilon_{0}>0\) such that
\[M_{\varepsilon}:=C_{4}(r,s)\|\mu\|_{\mathcal{M}_{b}(\Omega)}^{\frac{1}{2-r}}+B (\varepsilon)<M_{r,s}^{*}\,,\]
for all \(0<\varepsilon<\varepsilon_{0}\).
Observe that the equation \(F_{r,s}(y)=M_{\varepsilon}\) has two roots:
\[(Y_{\varepsilon}^{-})_{r,s}\;\mbox{ and }\;(Y_{\varepsilon}^{+})_{r,s}\,,\quad \mbox{ with }\quad(Y_{\varepsilon}^{-})_{r,s}<y_{r,s}^{*}<(Y_{\varepsilon}^{+} )_{r,s}\,,\]
and the continuity of \(F_{r,s}\) leads to the continuity of the function \(\varepsilon\mapsto(Y_{\varepsilon}^{-})_{r,s}\).
Since the function \(k\mapsto Y_{k,\varepsilon}\) is also continuous and tends to \(0\) as \(k\to\infty\), it follows from \(F_{r,s}(Y_{k,\varepsilon})\leq M_{\varepsilon}<M_{r,s}^{*}\) that \(Y_{k,\varepsilon}\leq(Y_{\varepsilon}^{-})_{r,s}\) for all \(k>\varepsilon\). As a consequence,
\[\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}\leq(Y_{\varepsilon}^{-} )_{r,s}\,,\]
and so
\[\left(\frac{\theta+1}{2}\right)^{s}\int_{\{|u|>k\}}\frac{|\nabla u|^{s}}{|u|^{ \frac{(1-\theta)s}{2}}}\,dx\leq\left(\frac{\theta+1}{2}\right)^{s}\int_{\Omega }\frac{|\nabla G_{k}(u)|^{s}}{(\varepsilon+|G_{k}(u)|)^{\frac{(1-\theta)s}{2}} }\,dx\leq(Y_{\varepsilon}^{-})_{r,s}\]
for all \(k>\varepsilon\) such that \(\varepsilon<1\). Letting \(\varepsilon\to 0\), we obtain
\[\int_{\{|u|>k\}}|\nabla|u|^{\frac{\theta+1}{2}}|^{s}\,dx\leq(Y^{-})_{r,s}\,,\] (3.21)
for all \(k>0\). Here \((Y^{-})_{r,s}\) stands for the smaller root of equation \(F_{r,s}(y)=C_{4}(r,s)\|\mu\|_{\mathcal{M}_{b}(\Omega)}^{\frac{1}{2-r}}\). It is then straightforward that
\[\lim_{k\to\infty}\int_{\{|u|>k\}}|\nabla|u|^{\frac{\theta+1}{2}}|^{s}\,dx=0\,.\] (3.22)
Taking into account (2.4) and (3.19), it yields
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx \leq\gamma\,\int_{\{|u|>k\}}\frac{1}{|u|^{\alpha}}|\nabla u|^{q} \,dx\]
\[\leq\gamma\frac{2^{q}}{(\theta+1)^{q}}S_{s}^{s^{*}\frac{s-q}{s}} \bigg{(}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A( \varepsilon)\bigg{)}^{s^{*}\frac{s-q}{s^{2}}}\|\nabla\varphi_{\varepsilon}^{k} (u)\|_{L^{s}(\Omega)}^{q}\,,\]
and so, letting \(\varepsilon\to 0\),
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx\leq\gamma\frac{2^{q}}{(\theta+1)^{q}}S_ {s}^{s^{*}\frac{s-q}{s}}\bigg{(}\int_{\{|u|>k\}}|\nabla|u|^{\frac{\theta+1}{2} }|^{s}\,dx\bigg{)}^{s^{*}\frac{s-q}{s^{2}}+\frac{q}{s}}=:C_{k}\,.\]
This is the key estimate we are looking for. Now it is enough to choose \(C_{0}=\lim_{k\to 0}C_{k}\) (on account of the estimate (3.21)) and to realise that \(\lim_{k\to\infty}C_{k}=0\) (see (3.22)).
It just remains to check that (3.13) holds. We take \(T_{j}(u)\) as test function in problem (2.10). It follows that
\[\lambda\int_{\Omega}|\nabla T_{j}(u)|^{2}\,dx\leq j\gamma\,\int_{\Omega}g(u)| \nabla u|^{q}\,dx+j\|\mu\|_{\mathcal{M}_{b}(\Omega)}\leq j\big{(}C_{0}+\|\mu\| _{\mathcal{M}_{b}(\Omega)}\big{)}\]
and we are done.
**Remark 3.15**.: _In contrast to what happens in Proposition 3.1 and Proposition 3.9, in Theorem 3.14 we do not provide any regularity condition on the solution. It is worth finding the regularity that results in our problem with measure datum. We point out that it is inadvisable to use (3.21) because the values of \(\theta\) and \(s\) do not necessarily supply optimal regularity, besides they are not fully determined._
_In problems with measure data, the regularity one obtains is_
\[\int_{\Omega}\frac{|\nabla u|^{2}}{(1+|u|)^{1+\rho}}\,dx\leq\frac{M}{\rho} \qquad\forall\rho>0\,,\]
_where \(M\) is the same constant as in (3.13). This inequality is easily deduced by taking_
\[S(u)=\left(1-\frac{1}{(1+|u|)^{\rho}}\right){\rm\;sign}(u)\]
_as test function (in the sense of Remark 2.6)._
We now turn to analyze the limit case \(\alpha=q-1\).
**Proposition 3.16**.: _Let \(\mu\in\mathcal{M}_{b}(\Omega)\) and let \(\alpha=q-1\). Assume (2.2), (2.3), (2.4) and that \(u\) is a renormalized solution to problem (2.10) in the sense of Definition 2.5._
_If there exists \(q<s<2\) satisfying_
\[\gamma<\lambda\frac{(2-s)^{2}}{Ns^{q-1}c_{0}(s)(C_{s}^{PF})^{s-q}|\Omega|^{ \frac{2-s}{N}}}\,,\]
_where_
\[c_{0}(s)=\left[\left(\frac{Ns}{(N-2)(2-s)}\right)^{\frac{(N-2)(2-s)}{2(N-2+s)} }+\left(\frac{Ns}{(N-2)(2-s)}\right)^{-\frac{Ns}{2(N-2+s)}}\right]S_{2}^{\frac {N(2-s)}{N-2+s}}\,,\] (3.23)
_then every such solution \(u\) satisfies the following estimate:_
\[\int_{\Omega}|\nabla T_{k}(u)|^{2}\,dx\leq Mk\,,\]
_and_
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx\leq C_{k}\,,\]
_for every \(k>0\), where \(M\) and \(C_{k}\) are positive constants which only depend on \(N\), \(q\), \(s\), \(\lambda\), \(\Omega\), \(\|\mu\|_{\mathcal{M}_{b}(\Omega)}\) and \(\gamma\), and \(\lim_{k\to\infty}C_{k}=0\)._
Proof. Since we follow an argument similar to that of the previous proof, we only sketch it. Take \(s\) such that \(q<s<2\); if we define \(r=2-s\) and \(\theta=(2-s)/s\) (i.e. \(r=2\theta/(\theta+1)\)), then \(0<\theta<(2-q)/q\). Fix \(j>0\) and \(0<\varepsilon<k\), and take again the test function \(\psi(u)=T_{j}((\varepsilon+|G_{k}(u)|)^{\theta}-\varepsilon^{\theta})\) in problem (2.10). Arguing as in the previous proof we also obtain (3.16) and (3.17). Nevertheless, we now have
\[\left[\frac{1-\theta}{2}q-q+1\right]\frac{s}{s-q}=\frac{\theta+1}{2}\,s\]
instead of (3.18), and so (3.17) becomes
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}dx\leq\gamma\frac{2^{q}}{(\theta+1)^{q}} \left(\int_{\Omega}(\varepsilon+|G_{k}(u)|)^{\frac{\theta+1}{2}s}\,dx\right)^{ \frac{s-q}{s}}\|\nabla\varphi_{\varepsilon}^{k}(u)\|^{q}_{L^{s}(\Omega)}\,.\]
Applying the Poincaré–Friedrichs inequality (recall Lemma 3.7 and Remark 3.8) we deduce that
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}dx\leq\gamma\frac{2^{q}}{(\theta+1)^{q}}(C_{ s}^{PF})^{s-q}\left(\|\nabla\varphi_{\varepsilon}^{k}(u)\|^{s}_{L^{s}(\Omega)} +A(\varepsilon)\right)^{\frac{s-q}{s}}\|\nabla\varphi_{\varepsilon}^{k}(u)\|^{ q}_{L^{s}(\Omega)}\] (3.24)
and then (3.16) leads to
\[\int_{\Omega}|\nabla T_{\ell}(\varphi_{\varepsilon}^{k}(u))|^{2}\,dx\\ \leq\ell^{r}\left[\gamma\frac{(\theta+1)^{2-q}}{\lambda\theta 2^{2-q}}(C_{s}^{PF})^{s-q}\bigg{(}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}+A(\varepsilon)\bigg{)}^{\frac{s-q}{s}}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{q}+\frac{(\theta+1)^{2}}{4\lambda\theta}\|\mu\|_{\mathcal{M}_{b}(\Omega)}\right]\,.\]
Therefore, recalling that \(2-r=s\), Lemma 3.12 gives
\[\Big{[}|\nabla\varphi_{\varepsilon}^{k}(u)|\Big{]}_{\frac{Ns}{N-2 +s}}^{s}\\ \leq c_{0}(s)\left(\gamma\frac{(\theta+1)^{2-q}}{\lambda\theta 2^ {2-q}}(C_{s}^{PF})^{s-q}\bigg{(}\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}( \Omega)}^{s}+A(\varepsilon)\bigg{)}^{\frac{s-q}{s}}\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{q}+\frac{(\theta+1)^{2}}{4\lambda\theta }\|\mu\|_{\mathcal{M}_{b}(\Omega)}\right)\,.\]
Taking Remark 3.13 into account, we deduce that \(c_{0}(s)\) is given by (3.23). Now observe that \(s<\frac{Ns}{N-2+s}\) and so, having in mind (3.10), there exists a constant \(C_{0}(s)>0\) such that
\[\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}\leq C_{0}(s)\Big{[}| \nabla\varphi_{\varepsilon}^{k}(u)|\Big{]}_{\frac{Ns}{N-2+s}}\]
and \(C_{0}(s)\) tends to \(+\infty\) as \(s\to 2\); indeed,
\[C_{0}(s)=\left(\frac{N}{2-s}\right)^{\frac{1}{s}}|\Omega|^{\frac{2-s}{Ns}}\,.\]
Hence,
\[\|\nabla\varphi_{\varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{s}\leq c_ {0}(s)C_{0}(s)^{s}\left[|\nabla\varphi_{\varepsilon}^{k}(u)|\right]_{\frac{Ns} {N-2+s}}^{s}\\ \leq c_{0}(s)C_{0}(s)^{s}\gamma\frac{(\theta+1)^{2-q}}{\lambda \theta 2^{2-q}}(C_{s}^{PF})^{s-q}\bigg{(}\|\nabla\varphi_{\varepsilon}^{k}(u) \|_{L^{s}(\Omega)}^{s}+A(\varepsilon)\bigg{)}^{\frac{s-q}{s}}\|\nabla\varphi_{ \varepsilon}^{k}(u)\|_{L^{s}(\Omega)}^{q}\\ +c_{0}(s)C_{0}(s)^{s}\frac{(\theta+1)^{2}}{4\lambda\theta}\|\mu\| _{\mathcal{M}_{b}(\Omega)}\,.\]
Thus, recalling that \(\theta=(2-s)/s\), we have obtained an estimate for
\[\varphi(u)=(1+|u|)^{\frac{\theta+1}{2}}-1=(1+|u|)^{\frac{1}{s}}-1\]
in \(W_{0}^{1,s}(\Omega)\) if
\[\gamma<\lambda\frac{(2-s)}{s^{q-1}(C_{s}^{PF})^{s-q}c_{0}(s)C_{0}(s)^{s}}= \lambda\frac{(2-s)^{2}}{Ns^{q-1}c_{0}(s)(C_{s}^{PF})^{s-q}|\Omega|^{\frac{2-s} {N}}}\,.\]
Going back to (3.24), letting \(\varepsilon\) go to \(0\) and denoting \(\varphi^{k}(u)=(1+|G_{k}(u)|)^{\frac{\theta+1}{2}}\), we obtain
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}dx\leq\gamma\frac{2^{q}}{(\theta+1)^{q}}(C_{ s}^{PF})^{s-q}\|\nabla\varphi^{k}(u)\|^{s}_{L^{s}(\Omega)}\leq\gamma\frac{2^{q -s}}{(\theta+1)^{q-s}}(C_{s}^{PF})^{s-q}\int_{\{|u|>k\}}|\nabla|u|^{\frac{ \theta+1}{2}}|^{s}dx\,,\]
wherewith \(\int_{\{|u|>k\}}g(u)|\nabla u|^{q}dx\leq C_{k}\) for certain \(C_{k}\) such that \(\lim_{k\to\infty}C_{k}=0\).
Finally, since the gradient term is bounded in \(L^{1}(\Omega)\), it follows that the remaining estimate holds.
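As an illustrative aside (not part of the original argument), the equality between the two expressions for the smallness threshold on \(\gamma\) obtained above can be checked symbolically: it only uses the explicit formula for \(C_{0}(s)\), which gives \(C_{0}(s)^{s}=\frac{N}{2-s}|\Omega|^{\frac{2-s}{N}}\). The following Python/SymPy snippet is a minimal sketch, treating all quantities as free positive parameters.

```python
# Illustrative symbolic check, not part of the original proof: substituting
# C0(s)**s = (N/(2-s)) * |Omega|**((2-s)/N) shows that the two expressions for
# the smallness threshold on gamma coincide.
import sympy as sp

N, s, q, lam, c0, CPF, Omega = sp.symbols('N s q lambda c_0 C_PF Omega', positive=True)

C0_pow_s = (N / (2 - s)) * Omega**((2 - s) / N)      # C0(s)**s from the formula above
expr1 = lam * (2 - s) / (s**(q - 1) * CPF**(s - q) * c0 * C0_pow_s)
expr2 = lam * (2 - s)**2 / (N * s**(q - 1) * c0 * CPF**(s - q) * Omega**((2 - s) / N))

print(sp.simplify(expr1 - expr2))    # prints 0
```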
**Remark 3.17**.: It is worth remarking what happens when \(s\to 2\) (i.e. \(\theta\to 0\)). Observe that it is not possible to choose any \(\theta\in\left(0,\frac{2-q}{q}\right)\), so that the above proof does not apply. Furthermore, since \(\lim_{s\to 2}c_{0}(s)=1\), it follows that
\[\lim_{s\to 2}\frac{(2-s)^{2}}{Ns^{q-1}c_{0}(s)(C_{s}^{PF})^{s-q}|\Omega|^{ \frac{2-s}{N}}}=0\,.\]
Thus, no estimate is obtained for the equation
\[-\Delta u=\gamma\frac{|\nabla u|^{2}}{|u|}+f(x)\]
when \(f\in L^{1}(\Omega)\) and \(\gamma>0\). This is in total agreement with [25, Proposition 5.1].
### The sublinear case with measure data.
When \(\alpha>q-1\), our problem lies in the sublinear setting. We then expect existence of a solution for every datum that is a finite Radon measure. To our knowledge, the range \(q-1<\alpha\leq\frac{q}{2}\) is not covered in previous papers, so it will be studied next. We remark that the above proof can be extended to \(\alpha\) satisfying \(q-1<\alpha<\frac{q}{2}\) by choosing \(\frac{q}{q-\alpha}<s<2\). Nevertheless, it does not work for \(\alpha=\frac{q}{2}\). Hence, we will use very different test functions in the proof of the following result, which does not rely on Lemma 3.12.
**Proposition 3.18**.: _Let \(\mu\in\mathcal{M}_{b}(\Omega)\). Assume (2.2), (2.3), (2.4) and that \(u\) is a renormalized solution to problem (2.10) in the sense of Definition 2.5._
_If \(q-1<\alpha\leq\frac{q}{2}\), then every such solution \(u\) satisfies the following estimates:_
\[\int_{\Omega}|\nabla T_{k}(u)|^{2}\,dx\leq Mk\,,\]
_and_
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx\leq C_{k}\,,\]
_for every \(k>0\), where \(M\) and \(C_{k}\) are positive constants only depending on the parameters of our problem, and \(\lim_{k\to\infty}C_{k}=0\)._
Proof. We take
\[\varphi^{k}(u)=1-\frac{1}{(1+|G_{k}(u)|)^{\theta}}\]
as test function in (2.10); here \(k>1\) and \(\theta\) is a positive parameter to be chosen. Then
\[\lambda\theta\int_{\{|u|>k\}}\frac{|\nabla u|^{2}}{(1+|G_{k}(u)|) ^{1+\theta}}\,dx \leq\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx+\|\mu\|_{\mathcal{M}_{ b}(\Omega)}\] (3.25)
\[\leq\gamma\int_{\{|u|>k\}}\frac{|\nabla u|^{q}}{(1+|G_{k}(u)|)^{ \alpha}}\,dx+\|\mu\|_{\mathcal{M}_{b}(\Omega)}\,.\]
In order to estimate the right hand side, we apply Hölder’s inequality with exponents \(\left(\frac{2}{q},\frac{2}{2-q}\right)\), getting
\[I =\int_{\{|u|>k\}}\frac{|\nabla u|^{q}}{(1+|G_{k}(u)|)^{\alpha}}\, dx=\int_{\{|u|>k\}}\frac{|\nabla u|^{q}}{[(1+|G_{k}(u)|)^{1+\theta}]^{\frac{q} {2}}}(1+|G_{k}(u)|)^{(1+\theta)\frac{q}{2}-\alpha}\,dx\]
\[\leq\left(\int_{\{|u|>k\}}\frac{|\nabla u|^{2}}{(1+|G_{k}(u)|)^{1 +\theta}}\,dx\right)^{\frac{q}{2}}\left(\int_{\{|u|>k\}}(1+|G_{k}(u)|)^{(1+ \theta)\frac{q}{2-q}-\alpha\frac{2}{2-q}}\,dx\right)^{\frac{2-q}{2}}\,.\]
Now \(\theta\) is chosen to satisfy \(0<\theta<1+\alpha-q\), so that \((1+\theta)\frac{q}{2-q}-\alpha\frac{2}{2-q}<1-\theta\) wherewith \(\beta:=\frac{(1-\theta)(2-q)}{(1+\theta)q-2\alpha}>1\). Hölder’s inequality, now with exponents \((\beta,\beta^{\prime})\), and the Poincaré–Friedrichs inequality lead to
\[I\leq\left(\int_{\Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u )|)^{1+\theta}}\,dx\right)^{\frac{q}{2}}\left(\int_{\Omega}\left[(1+|G_{k}(u)| )^{\frac{1-\theta}{2}}\right]^{2}\,dx\right)^{\frac{2-q}{2\beta}}|\Omega|^{ \frac{2-q}{2\beta^{\prime}}}\\ \leq|\Omega|^{\frac{2-q}{2\beta^{\prime}}}(C_{2}^{PF})^{\frac{2-q }{\beta}}\left(\int_{\Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u)|)^{1+ \theta}}\,dx\right)^{\frac{q}{2}}\left(\int_{\Omega}\left|\nabla(1+|G_{k}(u)|) ^{\frac{1-\theta}{2}}\right|^{2}\,dx+A(1)\right)^{\frac{2-q}{2\beta}}\\ \leq|\Omega|^{\frac{2-q}{2\beta^{\prime}}}\left(\frac{1-\theta}{2 }\right)^{\frac{2-q}{\beta}}(C_{2}^{PF})^{\frac{2-q}{\beta}}\left(\int_{\Omega }\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u)|)^{1+\theta}}\,dx\right)^{\frac{q}{ 2}}\left(\int_{\Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u)|)^{1+\theta}} \,dx+A(1)\right)^{\frac{2-q}{2\beta}}\,.\]
Going back to (3.25) we obtain
\[\lambda\theta\int_{\{|u|>k\}}\frac{|\nabla u|^{2}}{(1+|G_{k}(u)|) ^{1+\theta}}\,dx\leq\|\mu\|_{\mathcal{M}_{b}(\Omega)}\\ +\gamma|\Omega|^{\frac{2-q}{2\beta^{\prime}}}\left(\frac{1-\theta }{2}\right)^{\frac{2-q}{\beta}}(C_{2}^{PF})^{\frac{2-q}{\beta}}\left(\int_{ \Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u)|)^{1+\theta}}\,dx\right)^{ \frac{q}{2}}\left(\int_{\Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u)|)^{1+ \theta}}\,dx+A(1)\right)^{\frac{2-q}{2\beta}}\]
and it follows from \(\frac{q}{2}+\frac{2-q}{2\beta}<1\) that there exists \(M_{1}>0\) satisfying
\[\int_{\Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{k}(u)|)^{1+\theta}}\,dx\leq M _{1}\quad\hbox{ for all }k>1\,,\]
where \(M_{1}\) only depends on \(\lambda\), \(q\), \(\gamma\), \(\alpha\), \(\Omega\) and \(\|\mu\|_{\mathcal{M}_{b}(\Omega)}\). As a consequence of the above procedure, we also find \(M_{2}>0\), depending on the same parameters, such that
\[\int_{\Omega}\frac{|\nabla G_{k}(u)|^{q}}{(1+|G_{k}(u)|)^{\alpha}}\,dx\leq M_{ 2}\quad\hbox{ for all }k>1\,.\]
A further estimate can be obtained observing that
\[\int_{\{|u|>k\}}g(u)|\nabla u|^{q}\,dx \leq\gamma\left(\int_{\{|u|>k\}}\frac{|\nabla u|^{2}}{(1+|G_{k}(u )|)^{1+\theta}}\,dx\right)^{\frac{q}{2}}\left(\int_{\{|u|>k\}}(1+|G_{k}(u)|)^{ (1+\theta)\frac{q}{2-q}-\alpha\frac{2}{2-q}}\,dx\right)^{\frac{2-q}{2}}\]
\[\leq\gamma\left(\int_{\Omega}\frac{|\nabla G_{k}(u)|^{2}}{(1+|G_{ k}(u)|)^{1+\theta}}\,dx\right)^{\frac{q}{2}}\left(\int_{\Omega}\left[(1+|G_{k} (u)|)^{\frac{1-\theta}{2}}\right]^{2}\,dx\right)^{\frac{2-q}{2\beta}}|\{|u|>k \}|^{\frac{2-q}{2\beta^{\prime}}}\]
\[\leq M_{3}|\{|u|>k\}|^{\frac{2-q}{2\beta^{\prime}}}=C_{k}\,,\]
that holds, at least, for every \(k>1\).
Taking \(T_{k}(u)\), for some \(k>1\) fixed, as test function in (2.10), we derive
\[\lambda\int_{\Omega}|\nabla T_{k}(u)|^{2}\,dx \leq\gamma\int_{\{|u|<k\}}|\nabla u|^{q}|u|^{1-\alpha}\,dx+k\int_ {\{|u|\geq k\}}g(u)|\nabla u|^{q}\,dx+k\|\mu\|_{\mathcal{M}_{b}(\Omega)}\]
\[\leq\gamma k^{1-\alpha}\int_{\Omega}|\nabla T_{k}(u)|^{q}\,dx+kC_ {k}+k\|\mu\|_{\mathcal{M}_{b}(\Omega)}\]
\[\leq\gamma k^{1-\frac{q}{2}}\int_{\Omega}|\nabla T_{k}(u)|^{q}\, dx+kC_{k}+k\|\mu\|_{\mathcal{M}_{b}(\Omega)}\,.\]
Then Young’s inequality implies an estimate of \(T_{k}(u)\) in \(H_{0}^{1}(\Omega)\) for every \(k>1\) (and so for every \(k>0\)). We finally deduce an estimate of the gradient term in \(L^{1}(\Omega)\). Indeed, fix \(k>1\), denote \(\bar{g}_{k}=\sup_{|s|\leq k}|g(s)|\) and split the gradient term as follows
\[\int_{\Omega}g(u)|\nabla u|^{q}\,dx =\int_{\{|u|\leq k\}}g(u)|\nabla u|^{q}\,dx+\int_{\{|u|>k\}}g(u)| \nabla u|^{q}\,dx\]
\[\leq\bar{g}_{k}\int_{\Omega}|\nabla T_{k}(u)|^{q}\ dx+C_{k}\,.\]
Once the gradient term is estimated in \(L^{1}(\Omega)\), the remaining estimate is easy.
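As a small illustrative check (not contained in the paper), the role of the restriction \(0<\theta<1+\alpha-q\) in the proof above can be verified symbolically: the gap \((1-\theta)(2-q)-\big((1+\theta)q-2\alpha\big)\) equals \(2(1+\alpha-q-\theta)\), so it is positive exactly under that restriction, which is what makes \(\beta>1\). A minimal SymPy sketch:

```python
# Illustrative check (not from the paper): the restriction theta < 1 + alpha - q
# is exactly what makes (1+theta)q/(2-q) - 2alpha/(2-q) < 1 - theta, i.e. beta > 1.
import sympy as sp

q, alpha, theta = sp.symbols('q alpha theta', positive=True)

gap = (1 - theta) * (2 - q) - ((1 + theta) * q - 2 * alpha)
print(sp.expand(gap))                                  # equals 2*alpha - 2*q - 2*theta + 2
print(sp.simplify(gap - 2 * (1 + alpha - q - theta)))  # prints 0
```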
## 4. Compactness and convergence results
Let us consider the approximating problems
\[\begin{cases}\begin{array}[]{ll}-\text{div }[A(x)\cdot\nabla u_{n}]=H(x,u_{n}, \nabla u_{n})+f_{n}(x)&\mbox{ in }\Omega\,,\\ u_{n}=0&\mbox{ on }\partial\Omega\,,\end{array}\end{cases}\] (4.1)
with \(f_{n}=T_{n}(f)\). Proposition 2.9 implies that there exists at least a solution \(u_{n}\in L^{\infty}(\Omega)\cap H_{0}^{1}(\Omega)\) such that
\[\int_{\Omega}[A(x)\cdot\nabla u_{n}]\cdot\nabla\varphi\,dx=\int_{\Omega}H(x,u_ {n},\nabla u_{n})\varphi\,dx+\int_{\Omega}T_{n}(f(x))\varphi\,dx\qquad\forall \varphi\in L^{\infty}(\Omega)\cap H_{0}^{1}(\Omega).\] (4.2)
We also handle measure data in Subsection 4.4 but considering different approximating problems for (2.10).
This Section is devoted to checking that, up to subsequences, \(\{u_{n}\}_{n}\) converges to a solution to problem (2.1).
### The case of solutions with finite energy
**Proposition 4.1**.: _Let \(f\in L^{m}(\Omega)\) with \(\frac{2N}{N+2}\leq m<\frac{N}{2}\), \(\alpha=\frac{N(q-1)-mq}{N-2m}\), \(\sigma=\frac{(N-2)m}{N-2m}\) and \(\{u_{n}\}_{n}\) be a sequence of solutions of (4.1). Assume also (2.2), (2.3) and (2.4). Then_
\[\{|u_{n}|^{\frac{\sigma}{2}}\}_{n}\quad\text{is uniformly bounded in}\quad H_{ 0}^{1}(\Omega)\,,\] (4.3)
\[\{u_{n}\}_{n}\quad\text{is uniformly bounded in}\quad H_{0}^{1}(\Omega)\,,\] (4.4)
_and_
\[\{H(x,u_{n},\nabla u_{n})u_{n}\}_{n}\quad\text{is uniformly bounded in}\quad L ^{1}(\Omega)\,.\] (4.5)
_Furthermore, up to subsequences, there exists a function \(u\) such that_
\[u_{n}\rightharpoonup u\quad\text{in }H_{0}^{1}(\Omega)\,,\] (4.6)
_and_
\[u_{n}\to u\qquad\text{a.e. in }\Omega\,.\] (4.7)
Proof. We apply Proposition 3.1 to (4.1) and deduce (4.3).
Taking \(\varphi=u_{n}\) in (4.2) and recalling (2.2), (2.3) and (2.4), we get
\[\lambda\int_{\Omega}|\nabla u_{n}|^{2}\,dx\leq\gamma\int_{\Omega}|\nabla u_{n} |^{q}|u_{n}|^{1-\alpha}\,dx+\int_{\Omega}|f||u_{n}|\,dx.\]
We apply Hölder’s inequality with indices \(\left(\frac{2}{q},\frac{2}{2-q}\right)\) and \((m,m^{\prime})\), respectively, on the integrals on the right hand side obtaining
\[\lambda\|u_{n}\|_{H_{0}^{1}(\Omega)}^{2} \leq\gamma\|u_{n}\|_{H_{0}^{1}(\Omega)}^{q}\left(\int_{\Omega}|u_ {n}|^{\frac{2(1-\alpha)}{2-q}}\,dx\right)^{\frac{2-q}{2}}+\|f\|_{L^{m}(\Omega) }\|u_{n}\|_{L^{m^{\prime}}(\Omega)}\]
\[\leq\gamma\|u_{n}\|_{H_{0}^{1}(\Omega)}^{q}\left(\int_{\Omega}|u_ {n}|^{\frac{2(1-\alpha)}{2-q}}\,dx\right)^{\frac{2-q}{2}}+|\Omega|^{\frac{1}{m ^{\prime}}-\frac{1}{2^{*}}}\|f\|_{L^{m}(\Omega)}\|u_{n}\|_{L^{2^{*}}(\Omega)}\]
\[\leq\gamma\|u_{n}\|_{H_{0}^{1}(\Omega)}^{q}\left(\int_{\Omega}|u_ {n}|^{\frac{2(1-\alpha)}{2-q}}\,dx\right)^{\frac{2-q}{2}}+S_{2}|\Omega|^{\frac {1}{m^{\prime}}-\frac{1}{2^{*}}}\|f\|_{L^{m}(\Omega)}\|u_{n}\|_{H_{0}^{1}( \Omega)}\]
thanks also to the inclusion between Lebesgue spaces (indeed \(m^{\prime}\leq 2^{*}\) by assumption) and to Sobolev’s embedding. Then, applying Young’s inequality twice, with exponents \(\left(\frac{2}{q},\frac{2}{2-q}\right)\) and \((2,2)\), yields
\[\lambda\left(\frac{2-q-\varepsilon}{2}\right)\|u_{n}\|_{H_{0}^{1}(\Omega)}^{2} \leq\frac{2-q}{2}\,\frac{\gamma^{\frac{2}{2-q}}}{\lambda^{\frac{q}{2-q}}}\int_ {\Omega}|u_{n}|^{\frac{2(1-\alpha)}{2-q}}\,dx+\frac{S_{2}^{2}}{2\varepsilon \lambda}|\Omega|^{\frac{2}{m^{\prime}}-\frac{2}{2^{*}}}\|f\|^{2}_{L^{m}(\Omega )}\,.\] (4.8)
We now take advantage of the power regularity in (4.3), namely: \(\{u_{n}\}_{n}\) is bounded in \(L^{\frac{2^{*}\sigma}{2}}(\Omega)\). Observe that
\[1-\alpha=\frac{(N-m)(2-q)}{N-2m}\leq\frac{Nm(2-q)}{2(N-2m)}\,,\]
owing to \(m\geq\frac{2N}{N+2}\). Hence,
\[\frac{2(1-\alpha)}{2-q}\leq\frac{Nm}{N-2m}=\frac{2^{*}\sigma}{2}\,,\]
so that the right hand side of (4.8) is uniformly bounded in \(n\) and this means that (4.4) holds. In particular we deduce (4.6) and (4.7) too.
As far as the \(L^{1}\)–bound (4.5) is concerned, it is also a consequence of the inequality
\[\gamma\int_{\Omega}|\nabla u_{n}|^{q}|u_{n}|^{1-\alpha}\,dx\leq\gamma\|u_{n}\| _{H_{0}^{1}(\Omega)}^{q}\left(\int_{\Omega}|u_{n}|^{\frac{2(1-\alpha)}{2-q}}\, dx\right)^{\frac{2-q}{2}}\]
which we already know to be bounded.
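As an illustrative aside (not part of the original proof), the exponent comparison used above can be verified symbolically: with \(\alpha=\frac{N(q-1)-mq}{N-2m}\) one has \(1-\alpha=\frac{(N-m)(2-q)}{N-2m}\), and \(\frac{2(1-\alpha)}{2-q}\leq\frac{Nm}{N-2m}\) reduces precisely to \(m\geq\frac{2N}{N+2}\). A minimal Python/SymPy sketch:

```python
# Illustrative symbolic check (not from the paper) of the exponent bookkeeping
# around (4.8): lhs - rhs equals (2N - m(N+2))/(N - 2m), which is nonpositive
# exactly when m >= 2N/(N+2).
import sympy as sp

N, m, q = sp.symbols('N m q', positive=True)

alpha = (N * (q - 1) - m * q) / (N - 2 * m)
lhs = 2 * (1 - alpha) / (2 - q)          # exponent appearing in (4.8)
rhs = N * m / (N - 2 * m)                # = 2^* sigma / 2

print(sp.simplify(1 - alpha - (N - m) * (2 - q) / (N - 2 * m)))        # prints 0
print(sp.simplify(lhs - rhs - (2 * N - m * (N + 2)) / (N - 2 * m)))    # prints 0
```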
**Proposition 4.2**.: _Let \(f\in L^{m}(\Omega)\) with \(\frac{2N}{N+2}\leq m<\frac{N}{2}\), \(\alpha=\frac{N(q-1)-mq}{N-2m}\) and \(\{u_{n}\}_{n}\) be a sequence of solutions of (4.1). Assume also (2.2), (2.3) and (2.4). Then,_
\[H(x,u_{n},\nabla u_{n})\quad\text{ is equi--integrable in}\quad L^{1}(\Omega)\,.\] (4.9)
_Moreover, up to subsequences, we have_
\[\nabla u_{n}\to\nabla u\qquad\text{a.e. in }\Omega.\] (4.10)
_In particular_
\[H(x,u_{n},\nabla u_{n})\to H(x,u,\nabla u)\qquad\text{in }L^{1}(\Omega).\] (4.11)
_Furthermore,_
\[H(x,u,\nabla u)u\in L^{1}(\Omega)\] (4.12)
_and_
\[H(x,u,\nabla u)\in L^{2/q}(\Omega).\]
Proof. We begin by showing that the sequence \(\{H(x,u_{n},\nabla u_{n})\}_{n}\) is uniformly bounded in \(L^{\frac{2}{q}}(\Omega)\). In fact,
\[\int_{\Omega}|H(x,u_{n},\nabla u_{n})|^{\frac{2}{q}}\,dx\leq\bar{g_{1}}^{\frac {2}{q}}\int_{\{|u_{n}|<1\}}|\nabla u_{n}|^{2}\,dx+\gamma^{\frac{2}{q}}\int_{\{ |u_{n}|\geq 1\}}|\nabla u_{n}|^{2}\,dx\leq(\bar{g_{1}}^{\frac{2}{q}}+\gamma^{ \frac{2}{q}})\;C\,,\]
where \(\bar{g_{1}}=\max_{\{|s|\leq 1\}}|g(s)|<\infty\), since \(g(\cdot)\) is a continuous function, and \(C\) is a positive constant independent of \(n\) (thanks to (4.4)). Therefore, (4.9) follows.
As far as the proof of (4.10) is concerned, we want to apply [6, Theorem \(2.1\) and Remark \(2.2\)]. To this aim, we need (4.6), (4.7) as well as the \(L^{1}\)–estimate of \(\{H(x,u_{n},\nabla u_{n})\}_{n}\).
Having (4.9), (4.7) and (4.10), we are allowed to apply Vitali’s Theorem and conclude with (4.11). Finally (4.12) follows from Fatou’s Lemma and the a.e. convergences (4.7) and (4.10).
**Theorem 4.3**.: _Let \(f\in L^{m}(\Omega)\) with \(\frac{2N}{N+2}\leq m<\frac{N}{2}\) and \(\alpha=\frac{N(q-1)-mq}{N-2m}\). Assume (2.2), (2.3) and (2.4). Then, there exists at least a solution \(u\in H_{0}^{1}(\Omega)\) of (2.1) in the sense of Definition 2.2 such that \(H(x,u,\nabla u)\in L^{2/q}(\Omega)\), \(H(x,u,\nabla u)u\in L^{1}(\Omega)\) and_
\[\int_{\Omega}|u|^{\sigma-2}|\nabla u|^{2}\,dx<M,\] (4.13)
_that is, \(|u|^{\frac{\sigma}{2}}\in H_{0}^{1}(\Omega)\)._
Proof. We can take the limit in \(n\to\infty\) in the approximating formulation (4.2) thanks to (4.6)–(4.11), recovering (2.7). The regularity (4.13) follows from (4.3).
**Remark 4.4**.: Having in mind Remark 3.3, we have a similar a priori estimate when
\[\frac{N(q-1)-mq}{N-2m}<\alpha<q-1\,.\]
Thus, we may follow the proofs of Propositions 4.1 and 4.2 with this new exponent \(\alpha\). We point out that we only need to check that
\[\frac{2(1-\alpha)}{2-q}\leq\frac{2^{*}\sigma}{2}\]
which obviously holds with a bigger \(\alpha\). Therefore, the above existence result applies as well.
The limit case \(\alpha=q-1\) also holds taking into account the a priori estimate stated in Proposition 3.5.
### The case of renormalized solutions
**Proposition 4.5**.: _Let \(f\in L^{m}(\Omega)\) with \(1<m<\frac{2N}{N+2}\), \(\alpha=\frac{N(q-1)-mq}{N-2m}\) and \(\{u_{n}\}_{n}\) be a sequence of solutions of (4.1). Assume also (2.2), (2.3) and (2.4). Then, up to subsequences, there exists a function \(u\) such that_
\[u_{n}\to u\qquad\text{a.e. in }\Omega.\] (4.14)
Proof. We claim that the uniform bound
\[\int_{\Omega}(1+|u_{n}|)^{\sigma-2}|\nabla u_{n}|^{2}\,dx\leq M\] (4.15)
holds. Indeed, Proposition 3.9 applies with the same test function evaluated in \(u_{n}\).
Now, set \(1<r<2\) to be determined. Then, the above inequality allows us to estimate
\[\int_{\Omega}|\nabla u_{n}|^{r}\,dx\leq\left(\int_{\Omega}(1+|u_{n}|)^{\sigma- 2}|\nabla u_{n}|^{2}\,dx\right)^{\frac{r}{2}}\left(\int_{\Omega}(1+|u_{n}|)^{ \frac{r(2-\sigma)}{2-r}}\,dx\right)^{\frac{2-r}{2}}\,.\]
Requiring \(\frac{r(2-\sigma)}{2-r}=r^{*}\) (that is \(\frac{r(2-\sigma)}{2-r}=\frac{\sigma}{2}2^{*}\)), we obtain \(r=\frac{N\sigma}{N+\sigma-2}>1\) since \(1<\sigma<2\). Note that \(r=m^{*}=\frac{N(q-1-\alpha)}{1-\alpha}\) which, for \(\alpha=0\), becomes the exponent of the gradient regularity in [18]. Since \(\{u_{n}\}\) is bounded in \(W_{0}^{1,r}(\Omega)\), an appeal to the compact embedding allows us to conclude (4.14).
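As an illustrative aside (not part of the paper), the identities behind the choice of \(r\) above can be checked symbolically: with \(\sigma=\frac{m(N-2)}{N-2m}\) one has \(\frac{N\sigma}{N+\sigma-2}=\frac{Nm}{N-m}=m^{*}\), and substituting \(m=\frac{N(q-1-\alpha)}{q-2\alpha}\) gives \(m^{*}=\frac{N(q-1-\alpha)}{1-\alpha}\). A minimal SymPy sketch:

```python
# Illustrative check (not from the paper) of the identities for the exponent r
# used in the proof of Proposition 4.5.
import sympy as sp

N, m, q, alpha = sp.symbols('N m q alpha', positive=True)

sigma = m * (N - 2) / (N - 2 * m)
r = N * sigma / (N + sigma - 2)

print(sp.simplify(r - N * m / (N - m)))                                   # prints 0
m_val = N * (q - 1 - alpha) / (q - 2 * alpha)
print(sp.simplify(r.subs(m, m_val) - N * (q - 1 - alpha) / (1 - alpha)))  # prints 0
```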
**Proposition 4.6**.: _Let \(f\in L^{m}(\Omega)\) with \(1<m<\frac{2N}{N+2}\), \(\alpha=\frac{N(q-1)-mq}{N-2m}\) and \(\{u_{n}\}_{n}\) be a sequence of solutions of (4.1). Assume also (2.2), (2.3) and (2.4). Then,_
\[H(x,u_{n},\nabla u_{n})\quad\text{ is bounded in}\quad L^{m}(\Omega)\,,\] (4.16)
_and_
\[H(x,u_{n},\nabla u_{n})\quad\text{ is equi--integrable in}\quad L^{1}(\Omega)\,.\] (4.17)
_Furthermore, up to subsequences, we have_
\[\nabla u_{n}\to\nabla u\qquad\text{a.e. in }\Omega\,,\] (4.18)
\[H(x,u_{n},\nabla u_{n})\to H(x,u,\nabla u)\qquad\text{in }L^{1}(\Omega)\,,\] (4.19)
_and, for all \(j>0\)._
\[T_{j}(u_{n})\to T_{j}(u)\quad\text{strongly in }H_{0}^{1}(\Omega)\,.\] (4.20)
Proof. Let us begin with the proof of (4.16). Again, due to the assumption (2.3) on \(H(x,t,\xi)\) and to the regularity of \(f\), we focus only on the gradient term. Observe that, for some \(\gamma_{0}>\gamma\), it holds that
\[\int_{\Omega}\left|g(u_{n})|\nabla u_{n}|^{q}\right|^{b}\,dx \leq\gamma_{0}\int_{\Omega}\frac{|\nabla u_{n}|^{qb}(1+|u_{n}|)^{ \frac{bq(\sigma-2)}{2}}}{(1+|u_{n}|)^{\alpha b}(1+|u_{n}|)^{\frac{bq(\sigma-2) }{2}}}\,dx\]
\[\leq\gamma_{0}\left(\int_{\Omega}|\nabla u_{n}|^{2}(1+|u_{n}|)^{\sigma-2}\,dx\right)^{\frac{qb}{2}}\left(\int_{\Omega}(1+|u_{n}|)^{\frac{bq(2-\sigma)}{2-bq}-\frac{2\alpha b}{2-bq}}\,dx\right)^{\frac{2-bq}{2}}\] (4.21)
thanks to Hölder’s inequality with \(\left(\frac{2}{qb},\frac{2}{2-bq}\right)\). We impose
\[\frac{bq(2-\sigma)}{2-bq}-\frac{2\alpha b}{2-bq}=2^{*}\frac{\sigma}{2}\]
and by (4.15), the integral (4.21) is bounded. Now, thanks also to the definitions of \(\sigma=\sigma(q,\alpha)\) and \(m=m(q,\alpha)\), we deduce
\[b=\frac{N\sigma}{q(N-2+\sigma)-\alpha(N-2)}=\frac{N(q-1-\alpha)}{q-2\alpha}=m>1.\]
Once we have obtained (4.21), then (4.17) follows by observing that
\[\int_{E}|g(u_{n})||\nabla u_{n}|^{q}\,dx\leq|E|^{\frac{1}{m^{\prime}}}\left( \int_{\Omega}\left|g(u_{n})|\nabla u_{n}|^{q}\right|^{m}\,dx\right)^{\frac{1}{ m}}\]
for every \(E\subset\Omega\).
If, in particular, we take \(E=\Omega\), then we have proved that the right hand side of (4.1) is uniformly bounded in \(L^{1}(\Omega)\), and this fact yields (4.18) thanks to [4] (see also [23, Theorem \(2.1\)]). Note that the limit function \(u\) satisfies \(|\nabla u|\in L^{r}(\Omega)\) with the same \(r\) as in Proposition 4.5.
Having (4.17), (4.14) and (4.18), we are allowed to apply Vitali’s Theorem and conclude with (4.19).
The uniform boundedness in (4.15) implies that \(T_{j}(u_{n})\) is uniformly bounded in \(H_{0}^{1}(\Omega)\). We deduce the compactness of \(T_{j}(u_{n})\) in \(H_{0}^{1}(\Omega)\) from the compactness of the right hand side in \(L^{1}(\Omega)\) (see [22] or [19]).
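As an illustrative aside (not contained in the paper), the computation of \(b\) in the proof above can be reproduced symbolically: solving the imposed exponent relation for \(b\) yields \(b=\frac{N\sigma}{q(N-2+\sigma)-\alpha(N-2)}\), and substituting \(\sigma=\frac{(N-2)(q-1-\alpha)}{2-q}\) gives back \(b=\frac{N(q-1-\alpha)}{q-2\alpha}=m\). A minimal SymPy sketch:

```python
# Illustrative check (not from the paper): solve the imposed exponent relation
# for b and verify that b coincides with m.
import sympy as sp

N, q, alpha, sigma, b = sp.symbols('N q alpha sigma b', positive=True)

relation = sp.Eq(b * q * (2 - sigma) / (2 - b * q) - 2 * alpha * b / (2 - b * q),
                 (2 * N / (N - 2)) * sigma / 2)
b_sol = sp.solve(relation, b)[0]

print(sp.simplify(b_sol - N * sigma / (q * (N - 2 + sigma) - alpha * (N - 2))))   # prints 0

sigma_val = (N - 2) * (q - 1 - alpha) / (2 - q)
print(sp.simplify(b_sol.subs(sigma, sigma_val)
                  - N * (q - 1 - alpha) / (q - 2 * alpha)))                       # prints 0
```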
**Theorem 4.7**.: _Let \(f\in L^{m}(\Omega)\) with \(1<m<\frac{2N}{N+2}\) and let \(\alpha=\frac{N(q-1)-mq}{N-2m}\). Assume (2.2), (2.3) and (2.4) as well. Then, there exists at least a solution \(u\) of (2.1) in the sense of Definition 2.3 such that \(H(x,u,\nabla u)\in L^{m}(\Omega)\) and_
\[\int_{\Omega}(1+|u|)^{\sigma-2}|\nabla u|^{2}\,dx<M,\] (4.22)
_that is, \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\)._
Proof. Consider in (4.2) a test function of the kind \(S(u_{n})\varphi\), where \(\varphi\in H^{1}(\Omega)\cap L^{\infty}(\Omega)\) and \(S:\mathbb{R}\to\mathbb{R}\) is a Lipschitz function having compact support, say \(\text{supp}(S)\subseteq[-j,j]\), and such that \(S(u)\varphi\in H_{0}^{1}(\Omega)\). Then
\[\int_{\Omega}[A(x)\cdot\nabla u_{n}]\cdot\nabla(S(u_{n})\varphi)\,dx=\int_{ \Omega}H(x,u_{n},\nabla u_{n})S(u_{n})\varphi\,dx+\int_{\Omega}T_{n}(f)S(u_{n} )\varphi\,dx.\]
Due to the support assumption on \(S\), the above equation only takes into account \(T_{j}(u_{n})\), and so we rewrite the approximating formulation as
\[\begin{array}[]{c}\int_{\Omega}[A(x)\cdot\nabla T_{j}(u_{n})] \cdot\nabla\varphi\,S(u_{n})\,dx+\int_{\Omega}[A(x)\cdot\nabla T_{j}(u_{n})] \cdot\nabla T_{j}(u_{n})S^{\prime}(u_{n})\varphi\,dx\\ =\int_{\Omega}H(x,u_{n},\nabla u_{n})S(u_{n})\varphi\,dx+\int_{ \Omega}T_{n}(f)S(u_{n})\varphi\,dx.\end{array}\]
The convergence of the right hand side follows from (4.19) and (4.14). Furthermore
\[\int_{\Omega}[A(x)\cdot\nabla T_{j}(u_{n})]\cdot\nabla\varphi\,S(u_{n})\,dx\to \int_{\Omega}[A(x)\cdot\nabla T_{j}(u)]\cdot\nabla\varphi\,S(u)\,dx=\int_{ \Omega}[A(x)\cdot\nabla u]\cdot\nabla\varphi\,S(u)\,dx\]
and
\[\int_{\Omega}[A(x)\cdot\nabla T_{j}(u_{n})]\cdot\nabla T_{j}(u_{n })S^{\prime}(u_{n})\varphi\,dx\to \int_{\Omega}[A(x)\cdot\nabla T_{j}(u)]\cdot\nabla T_{j}(u)S^{ \prime}(u)\varphi\,dx\]
\[=\int_{\Omega}[A(x)\cdot\nabla u]\cdot\nabla uS^{\prime}(u) \varphi\,dx\]
thanks to (4.14) and (4.20).
We point out that (4.16) and Fatou’s lemma imply that \(H(x,u,\nabla u)\in L^{m}(\Omega)\) holds.
The regularity (4.22) directly follows from Proposition 3.9 applied on (4.1).
**Remark 4.8**.: As in Remark 4.4, we may consider exponents satisfying
\[\frac{N(q-1)-mq}{N-2m}<\alpha<q-1\,.\]
Indeed, it is enough to have in mind Remark 3.10 and follow the proofs of Propositions 4.5 and 4.6 as well as Theorem 4.7 with this new exponent \(\alpha\). We point out that now we have to check that
\[\frac{bq(2-\sigma)}{2-bq}-\frac{2\alpha b}{2-bq}\leq 2^{*}\frac{\sigma}{2}\]
which obviously holds with a bigger exponent \(b\). Therefore, the above existence result applies as well.
The limit case \(\alpha=q-1\) also holds taking into account the a priori estimate stated in Proposition 3.11.
### The limit case
We have already analyzed the situation when \(q>\frac{N+\alpha(N-2)}{N-1}\) with data \(f\in L^{m}(\Omega)\) (\(m>1\)). It remains to study the limit case \(q=\frac{N+\alpha(N-2)}{N-1}\), where existence of a renormalized solution with \(L^{1}\)–data should be expected. Nevertheless, this is not so as a variant of [18, Example 4.1] shows.
**Example 4.9**.: _Let \(q=\frac{N+\alpha(N-2)}{N-1}\), and consider a nonnegative \(f\in L^{1}(\Omega)\) and a continuous function \(g\::\:\mathbb{R}\to(0,+\infty)\) satisfying \(g(s)=s^{-\alpha}\) for \(s>k_{0}>0\)._
_Assume that there exists a renormalized solution \(u\) to problem_
\[\left\{\begin{array}[]{ll}-\Delta u=g(u)\,|\nabla u|^{q}+f(x)&\mbox{ in } \Omega\,,\\ u=0&\mbox{ on }\partial\Omega\,,\end{array}\right.\]
_which is obviously nonnegative. Then \(g(u)\,|\nabla u|^{q}\in L^{1}(\Omega)\), so that \(g(k+G_{k}(u))\,|\nabla G_{k}(u)|^{q}\in L^{1}(\Omega)\) for all \(k>0\). Fixing \(k>k_{0}\), we deduce that_
\[\Big{|}\nabla\Big{(}(k+G_{k}(u))^{1-\frac{\alpha}{q}}-k^{1-\frac{\alpha}{q}} \Big{)}\Big{|}^{q}\in L^{1}(\Omega)\,,\]
_that is,_
\[(k+G_{k}(u))^{1-\frac{\alpha}{q}}-k^{1-\frac{\alpha}{q}}\in W_{0}^{1,q}(\Omega )\,.\]
_Hence, the Sobolev embedding implies \((k+G_{k}(u))^{1-\frac{\alpha}{q}}\in L^{\frac{Nq}{N-q}}(\Omega)\) and consequently it follows from \(0\leq u\leq k+G_{k}(u)\) that \(u\in L^{\frac{N(q-\alpha)}{N-q}}(\Omega)\), where \(q=\frac{N+\alpha(N-2)}{N-1}\). Observing that_
\[\frac{N(q-\alpha)}{N-q}=\frac{N}{N-2}\,,\]
_it yields \(u\in L^{\frac{N}{N-2}}(\Omega)\). To get a contradiction, we just need to compare with the unique renormalized solution of_
\[\left\{\begin{array}[]{ll}-\Delta v=f(x)&\mbox{ in }\Omega\,,\\ v=0&\mbox{ on }\partial\Omega\,,\end{array}\right.\]
_which satisfies \(0\leq v\leq u\) and so \(v\in L^{\frac{N}{N-2}}(\Omega)\), but this summability does not hold for general \(L^{1}\)-data._
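As an illustrative aside (not part of the example), the summability exponent computed above can be verified symbolically: for \(q=\frac{N+\alpha(N-2)}{N-1}\) one indeed has \(\frac{N(q-\alpha)}{N-q}=\frac{N}{N-2}\). A minimal SymPy sketch:

```python
# Illustrative check (not from the paper) of the exponent identity in Example 4.9.
import sympy as sp

N, alpha = sp.symbols('N alpha', positive=True)

q = (N + alpha * (N - 2)) / (N - 1)
print(sp.simplify(N * (q - alpha) / (N - q) - N / (N - 2)))   # prints 0
```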
We may expect existence of a solution to problem (2.1) when we take \(q=\frac{N+\alpha(N-2)}{N-1}\) and the datum belongs to the Orlicz space \(L^{1}((\log L)^{N})\). However, since we focus on the setting of Lebesgue spaces, we must assume data \(f\in L^{m}(\Omega)\) (with \(m>1\)) to deal with this limit case. Observe that it is enough to consider \(1<m<\frac{N(q-1)}{q}\) due to embeddings between Lebesgue spaces. In this situation we have existence for a problem with exponent \(\alpha_{0}=\frac{N(q-1)-mq}{N-2m}\). Owing to Remark 4.8, we then obtain an existence result for
\[\alpha=\frac{N(q-1)-q}{N-2}>\frac{N(q-1)-mq}{N-2m}=\alpha_{0}\,.\]
Therefore, we have proved the following result.
**Theorem 4.10**.: _Let \(f\in L^{m}(\Omega)\) with \(m>1\) and let \(\sigma=\frac{m(N-2)}{N-2m}\). Take \(\alpha=\frac{N(q-1)-q}{N-2}\) and assume (2.2), (2.3) and (2.4). Then, there exists at least a solution \(u\) of (2.1) in the sense of Definition 2.3 such that \((1+|u|)^{\frac{\sigma}{2}-1}u\in H_{0}^{1}(\Omega)\)._
### The case of measure data
We now discuss the case with measure data. Since we reason, as we have done before, through approximation techniques, we make some comments on the approximating problem we are going to consider.
Given \(\mu\in\mathcal{M}_{b}(\Omega)\), we choose a sequence \(\{\mu_{n}\}_{n}\) in \(L^{\infty}(\Omega)\) which approximates \(\mu\) as in [12, Section 3] and satisfies
\[\|\mu_{n}\|_{L^{1}(\Omega)}\leq\|\mu\|_{\mathcal{M}_{b}(\Omega)}.\]
Now consider the following approximating problems of (2.10):
\[\left\{\begin{array}[]{ll}-\hbox{\rm div\,}[A(x)\cdot\nabla u_{n}]=H(x,u_{n}, \nabla u_{n})+\mu_{n}&\mbox{ in }\Omega\,,\\ u_{n}=0&\mbox{ on }\partial\Omega\,.\end{array}\right.\] (4.23)
We already know that there exist solutions \(u_{n}\in H_{0}^{1}(\Omega)\cap L^{\infty}(\Omega)\) to problem (4.23). We recall that the definition of the sequence \(\{\mu_{n}\}_{n}\) in [12, Section 3] is made in such a way that the following result holds.
**Proposition 4.11**.: _Using the same notation as above, consider a Lipschitz–continuous function \(S\::\:\mathbb{R}\to\mathbb{R}\) such that \(S^{\prime}\) has compact support and denote by \(S(+\infty)\) and \(S(-\infty)\) the limits of \(S(t)\) at \(+\infty\) and \(-\infty\), respectively. Take \(\varphi\in W^{1,r}(\Omega)\cap L^{\infty}(\Omega)\), with \(r>N\), such that \(S(u)\varphi\in H_{0}^{1}(\Omega)\)._
_If, for some function \(u\),_
\[u_{n}(x)\to u(x)\qquad\hbox{a.e. in }\Omega\]
\[\nabla u_{n}(x)\to\nabla u(x)\qquad\hbox{a.e. in }\Omega\]
\[T_{k}(u_{n})\rightharpoonup T_{k}(u)\qquad\hbox{weakly in }H_{0} ^{1}(\Omega)\quad\hbox{for all }k>0\]
\[u_{n}\to u\qquad\hbox{strongly in }W_{0}^{1,s}(\Omega)\quad\hbox {for all }1\leq s<\frac{N}{N-1},\]
_then_
\[\lim_{n\to\infty}\int_{\Omega}S(u_{n})\varphi\mu_{n}\,dx=\int_{\Omega}S(u) \varphi\,d\mu_{0}+S(+\infty)\int_{\Omega}\varphi\,d\mu_{s}^{+}-S(-\infty)\int_ {\Omega}\varphi\,d\mu_{s}^{-}\,.\]
**Proposition 4.12**.: _Let \(\mu\in\mathcal{M}_{b}(\Omega)\) have a norm small enough. Let \(\frac{N(q-1)-q}{N-2}<\alpha<q-1\) and \(\{u_{n}\}_{n}\) be a sequence of solutions of (4.23). Assume also (2.2), (2.3) and (2.4). Then, the a.e. convergences (4.14) and (4.18), the equi–integrability (4.17) and the strong convergences (4.19), (4.20) and_
\[u_{n}\to u\quad\text{ in }W^{1,s}_{0}(\Omega)\] (4.24)
_for all \(1\leq s<\frac{N}{N-1}\), hold._
Proof. Theorem 3.14 implies that
\[\int_{\Omega}|\nabla T_{k}(u_{n})|^{2}\,dx\leq Mk\,,\text{ for all }k>0\,,\] (4.25)
and then, using Lemma 3.12 we get
\[|\{|\nabla u_{n}|>k\}|\leq C\frac{M^{\frac{N}{N-1}}}{k^{\frac{N}{N-1}}}\,, \text{ for all }k>0\,.\]
Hence, the sequence \(\{u_{n}\}\) is bounded in \(W^{1,s}_{0}(\Omega)\) for all \(1\leq s<\frac{N}{N-1}\) and there exist \(u\in W^{1,s}_{0}(\Omega)\) and a subsequence (not relabelled) such that
\[u_{n}\rightharpoonup u\quad\text{ weakly in }W^{1,s}_{0}(\Omega)\,,\] (4.26)
\[u_{n}\to u\quad\text{ in }L^{s}(\Omega)\,,\]
\[u_{n}\to u\quad\text{ a.e. in }\Omega\,.\] (4.27)
Moreover, condition (4.25) also implies that
\[\nabla T_{k}(u_{n})\rightharpoonup\nabla T_{k}(u)\quad\text{weakly in }L^{2}( \Omega;\mathbb{R}^{N})\,.\] (4.28)
To prove the equi-integrability of the right hand side we use that
\[\int_{\{|u_{n}|>k\}}g(u_{n})|\nabla u_{n}|^{q}\,dx\leq C_{k}\quad\hbox{for all }k>0\,,\] (4.29)
with \(\lim_{k\to\infty}C_{k}=0\) (see Theorem 3.14). Thus, given \(\varepsilon>0\) we may find \(k_{0}>0\) such that
\[\int_{\{|u_{n}|>k\}}g(u_{n})|\nabla u_{n}|^{q}\,dx\leq\frac{\varepsilon}{2}\quad\text{ for all }\;k\geq k_{0}\;\text{ and for all }\;n\in\mathbb{N}\,.\]
Let \(E\subset\Omega\) and let \(k\geq k_{0}\) be fixed. Denoting \(\bar{g}_{k}=\sup_{|s|\leq k}|g(s)|\), the following inequalities hold:
\[\int_{E}g(u_{n})|\nabla u_{n}|^{q}\,dx =\int_{E\cap\{|u_{n}|\leq k\}}g(u_{n})|\nabla u_{n}|^{q}\,dx+\int _{E\cap\{|u_{n}|>k\}}g(u_{n})|\nabla u_{n}|^{q}\,dx\]
\[\leq\bar{g}_{k}\int_{E}|\nabla T_{k}(u_{n})|^{q}\,dx+\int_{\{|u_{n}|>k\}}g(u_{n})|\nabla G_{k}(u_{n})|^{q}\,dx\]
\[\leq\bar{g}_{k}\left(\int_{\Omega}|\nabla T_{k}(u_{n})|^{2}\,dx \right)^{\frac{q}{2}}|E|^{1-\frac{q}{2}}+\frac{\varepsilon}{2}\]
\[\leq\bar{g}_{k}\left(Mk\right)^{\frac{q}{2}}|E|^{1-\frac{q}{2}}+ \frac{\varepsilon}{2}\,,\]
whose first term is smaller than \(\frac{\varepsilon}{2}\) when \(|E|\) is small, and so (4.17) is proved.
On the other hand, taking \(E=\Omega\) and applying [6, Lemma 1] we deduce (4.24). As a consequence we get
\[\nabla u_{n}\to\nabla u\quad\text{ a.e. in }\Omega\,,\]
\[g(u_{n})|\nabla u_{n}|^{q}\to g(u)|\nabla u|^{q}\quad\text{ in } L^{1}(\Omega)\,.\]
This last convergence implies (4.19). Finally, appealing to the proof of [12, Theorem \(3.4\), Step 5], we deduce (4.20).
**Theorem 4.13**.: _Assume \(\frac{N(q-1)-q}{N-2}<\alpha<q-1\), (2.2), (2.3) and (2.4). If \(\mu\in\mathcal{M}_{b}(\Omega)\) has a norm small enough, then there exists at least a renormalized solution to problem (2.10) in the sense of Definition 2.5._
Proof. We take advantage of the results contained in [12, Theorem \(2.33\)].
Proposition 4.12 provides us with \(u\in\mathcal{T}^{1,2}_{0}(\Omega)\) (by (4.28)), \(u\in W_{0}^{1,s}(\Omega)\) for all \(1\leq s<\frac{N}{N-1}\) (by (4.26)) and \(H(x,u,\nabla u)\in L^{1}(\Omega)\) (by (4.19)).
Now, consider a Lipschitz–continuous function \(S\::\:\mathbb{R}\to\mathbb{R}\) such that \(S^{\prime}\) has compact support and a \(\varphi\in W^{1,r}(\Omega)\cap L^{\infty}(\Omega)\), with \(r>N\), such that \(S(u)\varphi\in H_{0}^{1}(\Omega)\). Since \(S^{\prime}\) has compact support, it follows that there exists \(j>0\) such that \(S^{\prime}(u_{n})\nabla u_{n}=S^{\prime}(u_{n})\nabla T_{j}(u_{n})\). As a consequence, we get
\[\int_{\Omega}|\nabla S(u_{n})|^{2}dx=\int_{\Omega}S^{\prime}(u_{n})^{2}|\nabla T _{j}(u_{n})|^{2}dx\leq\|S^{\prime}\|_{\infty}\int_{\Omega}|\nabla T_{j}(u_{n}) |^{2}dx<\infty\]
and so
\[S^{\prime}(u_{n})\nabla T_{j}(u_{n})\rightharpoonup S^{\prime}(u)\nabla T_{j}( u)\qquad\hbox{weakly in }L^{2}(\Omega;\mathbb{R}^{N})\,.\] (4.30)
Taking \(S(u_{n})\varphi\) as test function in problem (4.23), we have
\[\int_{\Omega}\varphi S^{\prime}(u_{n})[A(x)\cdot\nabla u_{n}] \cdot\nabla u_{n}\,dx+\int_{\Omega}S(u_{n})[A(x)\cdot\nabla u_{n}]\cdot\nabla \varphi\,dx\\ =\int_{\Omega}S(u_{n})\varphi H(x,u_{n},\nabla u_{n})\,dx+\int_{ \Omega}S(u_{n})\varphi\mu_{n}\,dx\,.\] (4.31)
The first term on the left hand side can be written as
\[\int_{\Omega}\varphi S^{\prime}(u_{n})[A(x)\cdot\nabla u_{n}]\cdot\nabla u_{n} \,dx=\int_{\Omega}\varphi S^{\prime}(u_{n})[A(x)\cdot\nabla T_{j}(u_{n})]\cdot \nabla T_{j}(u_{n})\,dx\]
Hence, the strong convergence of \(A(x)\cdot\nabla T_{j}(u_{n})\) to \(A(x)\cdot\nabla T_{j}(u)\) in \(L^{2}(\Omega;\mathbb{R}^{N})\) (due to (4.20)) and the weak convergence of \(S^{\prime}(u_{n})\nabla T_{j}(u_{n})\) to \(S^{\prime}(u)\nabla T_{j}(u)\) (by (4.30)) imply the convergence of this first term to
\[\int_{\Omega}\varphi S^{\prime}(u)[A(x)\cdot\nabla T_{j}(u)]\cdot\nabla T_{j}( u)\,dx=\int_{\Omega}\varphi S^{\prime}(u)[A(x)\cdot\nabla u]\cdot\nabla u\,dx\,.\]
The convergence of the second term follows from (4.27), the boundedness of function \(S\) and (4.24).
As far as the right hand side of (4.31) is concerned, the convergence of the first term follows from \(S(u_{n}){\buildrel*\over{\rightharpoonup}}S(u)\) in \(L^{\infty}(\Omega)\) and (4.19). We deal with the second term by applying Proposition 4.11. Therefore, we can pass to the limit in (4.31), obtaining
\[\int_{\Omega}\varphi S^{\prime}(u)[A(x)\cdot\nabla u]\cdot\nabla u \,dx+\int_{\Omega}S(u)[A(x)\cdot\nabla u]\cdot\nabla\varphi\,dx\\ =\int_{\Omega}S(u)\varphi H(x,u,\nabla u)\,dx+\int_{\Omega}S(u) \varphi\,d\mu_{0}+S(+\infty)\int_{\Omega}\varphi\,d\mu_{s}^{+}-S(-\infty)\int_ {\Omega}\varphi\,d\mu_{s}^{-}\,.\]
Since we have proved one of the equivalent definitions of renormalized solution stated in [12], we are done.
Applying Proposition 3.16 and Proposition 3.18, instead of Theorem 3.14, in the proof of Proposition 4.12 leads to Theorem 4.14 and Theorem 4.15, respectively.
**Theorem 4.14**.: _Let \(\mu\in\mathcal{M}_{b}(\Omega)\) and \(\alpha=q-1\). Assume (2.2), (2.3) and (2.4), and that \(\gamma\) is small enough. Then, there exists at least a renormalized solution to problem (2.10) in the sense of Definition 2.5._
**Theorem 4.15**.: _Let \(\mu\in\mathcal{M}_{b}(\Omega)\) and \(q-1<\alpha\leq\frac{q}{2}\). Assume also (2.2), (2.3) and (2.4). Then, there exists at least a renormalized solution to problem (2.10) in the sense of Definition 2.5._
## 5. Results on further regularity
Throughout this paper, we have shown that the features of our problem can be summarized by the parameters \(q\) and \(\alpha\) and illustrated in a \((q,\alpha)\)–plane (recall the pictures in the introduction). In this way, the linear setting corresponds to the line \(\alpha=q-1\), whereas the superlinear one coincides with a triangle whose sides are that line, \(\alpha=0\) and \(q=2\). In this triangle, points may be grouped according to the suitable summability of the sources. Thus, between the lines \(\alpha=q-1\) and \(\alpha=\frac{N-1}{N-2}q-\frac{N}{N-2}\), we may take measure data. In the remaining triangle the best line that enables data \(f\in L^{m}(\Omega)\) is given by
\[\alpha=\frac{N-m}{N-2m}q-\frac{N}{N-2m}\,,\qquad 1<m<\frac{N}{2}\,.\]
The existence result also works if we have a bigger \(\alpha\) (as was emphasized several times) or if a smaller \(q\) is considered. On the other hand, for fixed \(q\) and \(\alpha\), if we have more regular data, then existence is guaranteed by the embeddings between Lebesgue spaces. In this case, however, the solution should be more regular as well.
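To make the above description of the \((q,\alpha)\)–plane concrete, the following small Python helper is an illustrative sketch (not part of the paper; the function name and the printed messages are ours): it locates a given point \((q,\alpha)\) with respect to the lines \(\alpha=q-1\) and \(\alpha=\frac{N-1}{N-2}q-\frac{N}{N-2}\), and, below the latter line, it returns the critical Lebesgue exponent \(m=\frac{N(q-1-\alpha)}{q-2\alpha}\) obtained by solving \(\alpha=\frac{N(q-1)-mq}{N-2m}\) for \(m\).

```python
# Illustrative helper (not from the paper): classify a point of the (q, alpha)-plane
# according to the regimes discussed in this paper and, when Lebesgue data are
# needed, report the critical exponent m.
def classify(N, q, alpha, tol=1e-12):
    assert N >= 3 and 1 < q < 2 and alpha >= 0
    if abs(alpha - (q - 1)) < tol:
        return "linear case alpha = q - 1: measure data, gamma small enough"
    if alpha > q - 1:
        return "sublinear case alpha > q - 1: measure data"
    # superlinear case alpha < q - 1
    measure_line = (N * (q - 1) - q) / (N - 2)   # alpha = (N-1)/(N-2)*q - N/(N-2)
    if alpha > measure_line:
        return "superlinear case: measure data admissible"
    m = N * (q - 1 - alpha) / (q - 2 * alpha)    # critical summability of f
    return f"superlinear case: data f in L^m needed, m = {m:.4f}"

# Example (hypothetical values): N = 5, q = 1.5, alpha = 0.2
print(classify(5, 1.5, 0.2))   # superlinear case: data f in L^m needed, m = 1.3636
```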
In this Section, we propose an analogue of the bootstrapping results contained in [18], which correspond to the case where \(g\) is constant in (2.3), that is, when \(g(s)=\gamma\) for all \(s\in\mathbb{R}\). It is worth comparing the results between the cases [18, \(g(s)\equiv\gamma\)] and \(g(s)\leq\frac{\gamma}{|s|^{\alpha}}\) below. In the constant case it is proved that
* (R1) \(u\in L^{\infty}(\Omega)\) if \(f\in L^{s}(\Omega)\), \(s>\frac{N}{2}\) (see [18, Theorems \(3.8\) & \(4.9\) points \((i)\) & Theorem \(5.8\)]), while solutions to (2.1) are bounded even for \(f\in L^{r}(\Omega)\) with \(\frac{N}{2}<s<r\);
* (R2) \(u\in L^{d}(\Omega)\), \(d<\infty\), if \(f\in L^{\frac{N}{2}}(\Omega)\) (see [18, Theorems \(3.8\) & \(4.9\) points \((ii)\) & Theorem \(5.8\)]) while, if (2.4) is in force, then the same holds for greater values of \(q\);
* (R3) \(|u|^{\frac{t}{2}}\in H_{0}^{1}(\Omega)\) if \(f\in L^{r}(\Omega)\), \(2_{*}<r<\frac{N}{2}\) (see [18, Theorems \(3.8\) & \(4.9\) points \((iii)\) & Theorem \(5.8\)]), while solutions to (2.1) with \(f\in L^{s}(\Omega)\) and (eventually) \(2_{*}<s<r<\frac{N}{2}\) verify \(|u|^{\frac{\tau}{2}}\in H_{0}^{1}(\Omega)\) for \(\tau>t\);
* (R4) \((1+|u|)^{\frac{t}{2}-1}u\in H_{0}^{1}(\Omega)\) if \(f\in L^{r}(\Omega)\), \(\frac{N(q-1)}{q}<r\leq 2_{*}\) (see [18, Theorems \(4.9\) point \((iv)\) & Theorem \(5.8\)]), while solutions to (2.1) with \(f\in L^{s}(\Omega)\) (eventually) for \(\frac{N(q-(1+\alpha))}{q-2\alpha}<s<\frac{N(q-1)}{q}<r\leq 2_{*}\) verify \((1+|u|)^{\frac{\tau}{2}-1}u\in H_{0}^{1}(\Omega)\) for \(\tau>t\).
In our case, both R1 and R2 directly follow from Proposition 2.9. So we will focus on R3 and R4. Notice that the assumption (2.4) (which generalises the constant case) in (2.3) allows us to get the same results as in the Theorems quoted above for lower data regularity/greater values of \(q\) with respect to [18]. Indeed, the values of \(s\) in Theorems 5.1 and 5.3 can be taken
\[\frac{N(q-1-\alpha)}{q-2\alpha}<s<\frac{N(q-1)}{q}.\]
### The case \(\frac{(N-2)(q-1-\alpha)}{2-q}\geq 2\)
**Theorem 5.1**.: _Let \(f\in L^{s}(\Omega)\) with \(s>m=\frac{N(q-1-\alpha)}{q-2\alpha}\) and \(\alpha\frac{N-2}{N}+\frac{2}{N}+1\leq q<2\) (i.e. \(m\geq 2_{*}\) and \(\sigma=\frac{m(N-2)}{N-2m}=\frac{(N-2)(q-1-\alpha)}{2-q}\geq 2\)). Assume also (2.2), (2.3) and (2.4). Then, solutions \(u\) to (2.1) in the sense of Definition 2.2, verifying (4.13) and \(2_{*}<s<\frac{N}{2}\), satisfy \(|u|^{\frac{\tau}{2}}\in H_{0}^{1}(\Omega)\), \(u\in L^{2^{*}\frac{\tau}{2}}(\Omega)\) and_
\[\||u|^{\frac{\tau}{2}}\|_{H_{0}^{1}(\Omega)}+\|u\|_{L^{s^{**}}(\Omega)}\leq M \qquad\text{for }\tau=\frac{s(N-2)}{N-2s}.\] (5.1)
_The constant \(M\) depends on \(q,\,m,\,N,\,\lambda,\,\gamma,\,\alpha,\,|\Omega|\) and on \(\|f\|_{L^{s}(\Omega)},\,\|u\|_{L^{1}(\Omega)}\)._
Note that \(\tau>\sigma=\frac{m(N-2)}{N-2m}\) since \(s>m\).
Proof. We take \(\varphi_{n}(u)=\Phi_{n}(G_{k}(u))\) in (2.7) with
\[\Phi_{n}(w)=\int_{0}^{w}|T_{n}(z)|^{\tau-2}\,dz,\qquad\tau>\sigma\,.\]
Then, the assumptions (2.2), (2.3), (2.4) lead us to
\[\lambda\int_{\Omega}|\nabla G_{k}(u)|^{2}|T_{n}(G_{k}(u))|^{\tau- 2}\,dx\\ \leq\gamma\int_{\Omega}\frac{|\nabla G_{k}(u)|^{q}|\Phi_{n}(G_{k} (u))|}{|u|^{\alpha}}\,dx+\int_{\Omega}|f||\Phi_{n}(G_{k}(u))|\,dx\,.\] (5.2)
Observe that
\[|\Phi_{n}(w)|\leq|T_{n}(w)|^{q\left(\frac{\tau}{2}-1\right)}\int_{0}^{w}|T_{n} (\xi)|^{(2-q)\left(\frac{\tau}{2}-1\right)}\,d\xi\leq|T_{n}(w)|^{q\left(\frac{ \tau}{2}-1\right)}\left(\int_{0}^{w}|T_{n}(\xi)|^{\frac{\tau}{2}-1}\,d\xi \right)^{2-q}|w|^{q-1}\]
by Hölder’s inequality with indices \(\left(\frac{1}{2-q},\frac{1}{q-1}\right)\). This fact allows us to estimate the first integral on the right hand side of (5.2) as
\[I=\int_{\Omega}\frac{|\nabla G_{k}(u)|^{q}|\Phi_{n}(G_{k}(u))|}{ |u|^{\alpha}}\,dx\\ \leq\int_{\Omega}\Biggl{[}\left(|\nabla G_{k}(u)|^{q}|T_{n}(G_{k} (u))|^{q\left(\frac{\tau}{2}-1\right)}\right)\left(\int_{0}^{|G_{k}(u)|}|T_{n} (\xi)|^{\frac{\tau}{2}-1}\,d\xi\right)^{2-q}|G_{k}(u)|^{q-1-\alpha}\Biggr{]}\, dx\,.\]
We now apply Hölder’s inequality with three indices \(\left(\frac{2}{q},\frac{2^{*}}{2-q},\frac{N}{2-q}\right)\) in the right hand side above, so we get
\[I \leq\left(\int_{\Omega}|\nabla G_{k}(u)|^{2}|T_{n}(G_{k}(u))|^{ \tau-2}\,dx\right)^{\frac{q}{2}}\left(\int_{\Omega}\left(\int_{0}^{|G_{k}(u)|} |T_{n}(\xi)|^{\frac{\tau}{2}-1}\,d\xi\right)^{2^{*}}\,dx\right)^{\frac{2-q}{2^ {*}}}\]
\[\qquad\times\left(\int_{\Omega}|G_{k}(u)|^{\frac{N(q-1-\alpha)}{2 -q}}\,dx\right)^{\frac{2-q}{N}}\,.\]
Then, since
\[\left(\int_{\Omega}\left(\int_{0}^{G_{k}(u)}|T_{n}(\xi)|^{\frac{\tau}{2}-1}\,d \xi\right)^{2^{*}}\,dx\right)^{\frac{2-q}{2^{*}}}\leq S_{2}^{\frac{2-q}{2}} \left(\int_{\Omega}|\nabla G_{k}(u)|^{2}|T_{n}(G_{k}(u))|^{\tau-2}\,dx\right)^ {\frac{2-q}{2}}\]
by Sobolev’s embedding and \(\frac{N(q-1-\alpha)}{2-q}=2^{*}\frac{\sigma}{2}\) (which is due to the definition of \(\sigma\)), we rewrite
\[I\leq S_{2}^{\frac{2-q}{2}}\left(\int_{\Omega}|\nabla G_{k}(u)|^{2}|T_{n}(G_{k }(u))|^{\tau-2}\,dx\right)\left(\int_{\Omega}|G_{k}(u)|^{2^{*}\frac{\sigma}{2} }\,dx\right)^{\frac{2-q}{N}}\,.\] (5.3)
Take \(k_{0}\) such that
\[S_{2}^{\frac{2-q}{2}}\gamma\||G_{k}(u)|^{\frac{\sigma}{2}}\|_{L^{2^{*}}(\Omega )}^{\frac{2^{*}}{N}(2-q)}\leq\frac{\lambda}{2}\qquad\forall k\geq k_{0},\]
so that combining (5.2)–(5.3) we obtain
\[\frac{\lambda}{2}\int_{\Omega}|\nabla G_{k}(u)|^{2}|T_{n}(G_{k}(u))|^{\tau-2} \,dx\leq\int_{\Omega}|f||\Phi_{n}(G_{k}(u))|\,dx\]
for \(k\geq k_{0}\).
Defining \(\psi_{n}(w)=\int_{0}^{w}|T_{n}(z)|^{\frac{\tau}{2}-1}\,dz\), we are left with the study of
\[\frac{\lambda}{2}\int_{\Omega}|\nabla\psi_{n}(G_{k}(u))|^{2}\,dx\leq\int_{ \Omega}|f||\Phi_{n}(G_{k}(u))|\,dx\,,\]
and our current aim becomes finding the relation between \(\Phi_{n}(\cdot)\) and \(\psi_{n}(\cdot)\), in order to obtain an inequality only involving \(\psi_{n}(\cdot)\). Using the definition of \(\psi_{n}\) and considering the sets \(\{|w|<n\}\) and \(\{|w|\geq n\}\), we get the inequality \(|\psi_{n}(w)|\geq\frac{2}{\tau}|T_{n}(w)|^{\frac{\tau}{2}-1}|w|\). And then, since the function \(|T_{n}(z)|^{\tau-2}\) is non-decreasing, we deduce
\[|\Phi_{n}(w)|\leq|T_{n}(w)|^{\tau-2}|w|\leq|T_{n}(w)|^{\tau-2}|w|\left(\frac{| w|}{|T_{n}(w)|}\right)^{\frac{\tau-2}{\tau}}=\Big{[}|T_{n}(w)|^{\frac{\tau}{2} -1}|w|\Big{]}^{2\frac{\tau-1}{\tau}}.\]
Hence,
\[|\Phi_{n}(w)|\leq c|\psi_{n}(w)|^{2\frac{\tau-1}{\tau}}\]
for some constant \(c\), which does not depend on \(n\). This estimate and Hölder’s inequality with \((s,s^{\prime})\) provide us with
\[\frac{\lambda}{2}\int_{\Omega}|\nabla\psi_{n}(G_{k}(u))|^{2}\,dx \leq c\int_{\Omega}|f||\psi_{n}(G_{k}(u))|^{2\frac{\tau-1}{\tau}}\,dx\\ \leq c\|f\|_{L^{s}(\Omega)}\left(\int_{\Omega}|\psi_{n}(G_{k}(u)) |^{2s^{\prime}\frac{\tau-1}{\tau}}\,dx\right)^{\frac{1}{s^{\prime}}}\,.\] (5.4)
Now, by Sobolev’s embedding and since the definition of \(\tau\) implies \(2s^{\prime}\frac{\tau-1}{\tau}=2^{*}\), we obtain
\[\frac{\lambda}{2S_{2}}\left(\int_{\Omega}|\psi_{n}(G_{k}(u))|^{2^{*}}\,dx \right)^{\frac{2}{2^{*}}-2\frac{\tau-1}{2^{*}\tau}}\leq c\|f\|_{L^{s}(\Omega)}\,.\] (5.5)
We point out that \(\frac{2}{2^{*}}-2\frac{\tau-1}{2^{*}\tau}=\frac{2}{2^{*}\tau}>0\). Then we let \(n\to\infty\) in (5.5) getting \(u\in L^{2^{*}\frac{\tau}{2}}(\Omega)\).
We are now allowed to consider the limit \(n\to\infty\) in (5.4), deducing \(|u|^{\frac{\tau}{2}}\in H_{0}^{1}(\Omega)\).
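As an illustrative aside (not part of the proof), the exponent identities used in Theorem 5.1 can be checked symbolically: with \(\tau=\frac{s(N-2)}{N-2s}\) one has \(2^{*}\frac{\tau}{2}=\frac{Ns}{N-2s}=s^{**}\) and \(2s^{\prime}\frac{\tau-1}{\tau}=2^{*}\). A minimal SymPy sketch:

```python
# Illustrative check (not from the paper) of the exponents in Theorem 5.1.
import sympy as sp

N, s = sp.symbols('N s', positive=True)

tau = s * (N - 2) / (N - 2 * s)
two_star = 2 * N / (N - 2)
s_prime = s / (s - 1)

print(sp.simplify(two_star * tau / 2 - N * s / (N - 2 * s)))   # prints 0
print(sp.simplify(2 * s_prime * (tau - 1) / tau - two_star))   # prints 0
```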
**Remark 5.2**.: The dependence of \(M\) in (5.1) on \(\|u\|_{L^{1}(\Omega)}\) follows from the following fact.
Let us come back to (3.5). Recalling that the integral in the right hand side of (3.1) is evaluated over \(\{|u|>k\}\) and proceeding as in the proof of Proposition 3.1, we get the same results with \(C_{3}\||f|\chi_{\{|u|>k\}}\|_{L^{m}(\Omega)}\) instead of \(C_{3}\|f\|_{L^{m}(\Omega)}\). Thus, we rewrite (3.5) as
\[F(Y_{k})\leq C_{3}\||f|\chi_{\{|u|>k\}}\|_{L^{m}(\Omega)}\qquad\forall k>0\,.\]
Observe that Hölder’s inequality gives
\[C_{3}\||f|\chi_{\{|u|>k\}}\|_{L^{m}(\Omega)}\leq C_{3}\|f\|_{L^{s}(\Omega)} \left(\frac{\|u\|_{L^{1}(\Omega)}}{k}\right)^{\frac{s-m}{sm}}\]
and this value will be less than \(M^{*}\) for \(k\) large enough. This fact implies that the constants \(M\) in Proposition 3.1 depend (in this case) on \(\|u\|_{L^{1}(\Omega)}\) too.
### The case \(1<\frac{(N-2)(q-1-\alpha)}{2-q}<2\)
**Theorem 5.3**.: _Let \(f\in L^{s}(\Omega)\) with \(s>m=\frac{N(q-1-\alpha)}{q-2\alpha}\) and \(1+\frac{1}{N-1}+\alpha\frac{N-2}{N-1}<q<\alpha\frac{N-2}{N}+\frac{2}{N}+1\) (i.e. \(1<m<2_{*}\) and \(1<\sigma=\frac{m(N-2)}{N-2m}=\frac{(N-2)(q-1-\alpha)}{2-q}<2\)). Then, solutions \(u\) to (2.1) in the sense of Definition 2.3, verifying (4.22) and_
1. \(2_{*}\leq s<\frac{N}{2}\) _satisfy_ \(|u|^{\frac{\tau}{2}}\in H_{0}^{1}(\Omega)\)_,_ \(u\in L^{2^{*}\frac{\tau}{2}}(\Omega)\) _and_ \[\||u|^{\frac{\tau}{2}}\|_{H_{0}^{1}(\Omega)}+\|u\|_{L^{2^{*}\frac{\tau}{2}}( \Omega)}\leq M\qquad\text{for }\tau=\frac{s(N-2)}{N-2s}\geq 2>\sigma\,;\]
2. \(m<s<2_{*}\) _satisfy_ \((1+|u|)^{\frac{\tau}{2}-1}u\in H_{0}^{1}(\Omega)\)_,_ \(|\nabla u|\in L^{s^{*}}(\Omega)\) _and_ \[\|(1+|u|)^{\frac{\tau}{2}-1}u\|_{H_{0}^{1}(\Omega)}+\||\nabla u|\|_{L^{s^{*}}( \Omega)}\leq M\qquad\text{for }\tau=\frac{s(N-2)}{N-2s}\in(\sigma,2)\,.\] (5.6)
_The constants \(M\) above depend on \(q,\,s,\,N,\,\lambda,\,\gamma,\,\alpha,\,|\Omega|\) and on \(\|f\|_{L^{s}(\Omega)},\,\|u\|_{L^{1}(\Omega)}\)._
Here, cases \((i)\) and \((ii)\) differ in how the interval \((\sigma,\infty)\) is split by the parameter \(\tau\) as \(\tau\geq 2(>\sigma)\) and \(\sigma<\tau<2\), respectively.
Proof. First consider a function \(\Phi\in W^{1,\infty}(\mathbb{R})\) satisfying
\[0\leq\Phi^{\prime}(w)\leq L(1+|w|)^{\sigma-2},\qquad L>0\,,\] (5.7)
and
\[|\Phi(w)|\leq C(\Phi^{\prime}(w))^{\frac{q}{2}}\int_{0}^{|w|}(\Phi^{\prime}(z) )^{\frac{2-q}{2}}\,dz,\qquad C>0\,.\] (5.8)
We remark that Hölder’s inequality yields
\[|\Phi(w)|\leq C(\Phi^{\prime}(w))^{\frac{q}{2}}\int_{0}^{|w|}(\Phi^{\prime}(z) )^{\frac{2-q}{2}}\,dz\leq C(\Phi^{\prime}(w))^{\frac{q}{2}}\left[\int_{0}^{|w| }(\Phi^{\prime}(z))^{\frac{1}{2}}\,dz\right]^{2-q}|w|^{q-1}\,.\] (5.9)
We take \(\Phi(G_{k}(u))\) as test function. Note that the condition (5.7) is needed in order to make the test function admissible. Recalling (2.2), (2.3) and (2.4), we have
\[\lambda\int_{\Omega}\Phi^{\prime}(G_{k}(u))|\nabla G_{k}(u)|^{2}\,dx\leq\int_{ \Omega}g(G_{k}(u))|\nabla G_{k}(u)|^{q}\Phi(G_{k}(u))\,dx+\int_{\Omega}|f|| \Phi(G_{k}(u))|\,dx\,.\] (5.10)
We use (5.9) and Hölder’s inequality with indices \(\left(\frac{2}{q},\frac{2^{*}}{2-q},\frac{N}{2-q}\right)\) to estimate the integral involving the gradient term in the right hand side as
\[\int_{\Omega}g(u)|\nabla G_{k}(u)|^{q}\Phi(G_{k}(u))\,dx\leq \gamma\int_{\Omega}\frac{|\nabla G_{k}(u)|^{q}}{|u|^{\alpha}}\Phi(G_{k}(u))\, dx\\ \leq C\gamma\int_{\Omega}|\nabla G_{k}(u)|^{q}(\Phi^{\prime}(G_{k }(u)))^{\frac{q}{2}}\left(\int_{0}^{|G_{k}(u)|}(\Phi^{\prime}(z))^{\frac{1}{2} }\,dz\right)^{2-q}|G_{k}(u)|^{q-1-\alpha}\,dx\\ \leq C\gamma\left(\int_{\Omega}|\nabla G_{k}(u)|^{2}\Phi^{\prime} (G_{k}(u))\,dx\right)^{\frac{q}{2}}\left[\int_{\Omega}\left(\int_{0}^{|G_{k}(u )|}(\Phi^{\prime}(z))^{\frac{1}{2}}\,dz\right)^{2^{*}}\,dx\right]^{\frac{2-q}{ 2^{*}}}\\ \hskip 85.358268pt\times\left(\int_{\Omega}|G_{k}(u)|^{\frac{N(q- 1-\alpha)}{2-q}}\,dx\right)^{\frac{2-q}{N}}\\ \leq C\gamma S_{2}^{2-q}\left(\int_{\Omega}|\nabla G_{k}(u)|^{2} \Phi^{\prime}(G_{k}(u))\,dx\right)\left(\int_{\Omega}|G_{k}(u)|^{\frac{\sigma} {2}2^{*}}\,dx\right)^{\frac{2-q}{N}}\,,\]
thanks to Sobolev’s embedding too. Now we choose \(k_{0}\) such that
\[\gamma CS_{2}^{2-q}\||G_{k}(u)|^{\frac{\sigma}{2}}\|_{L^{2^{*}}(\Omega)}^{ \frac{2^{*}(2-q)}{N}}\leq\frac{\lambda}{2}\qquad\forall k\geq k_{0}\,.\]
Going back to (5.10), we have found that
\[\frac{\lambda}{2}\int_{\Omega}|\nabla G_{k}(u)|^{2}\Phi^{\prime}(G_{k}(u))\,dx \leq\int_{\Omega}|f||\Phi(G_{k}(u))|\,dx\,.\] (5.11)
If \(\tau\geq 2\), we argue as in Theorem 5.1. Instead, if \(\tau<2\), we set
\[\Phi(w)=\int_{0}^{w}(1+|z|)^{\sigma-2}(1+|T_{n}(z)|)^{\tau-\sigma}\,dz\,.\]
It is straightforward that \(\Phi\) satisfies (5.7). We now show that (5.8) holds as well. To this end, we study the limits at \(0\) and at \(+\infty\):
\[\lim_{w\to 0}\frac{|\Phi(w)|}{(\Phi^{\prime}(w))^{\frac{q}{2}}\int_{0}^{|w|}( \Phi^{\prime}(z))^{\frac{2-q}{2}}\,dz}=\frac{\tau(2-q)+2(q-1)}{2(\tau-1)}\,.\]
\[\lim_{w\to+\infty}\frac{|\Phi(w)|}{(\Phi^{\prime}(w))^{\frac{q}{2}}\int_{0}^{|w|}(\Phi^{\prime}(z))^{\frac{2-q}{2}}\,dz}=\frac{\sigma(2-q)+2(q-1)}{2(\sigma-1)}\,.\]
Hence, (5.8) follows. We also consider
\[\Psi(w)=\int_{0}^{w}(1+|z|)^{\frac{\sigma-2}{2}}(1+|T_{n}(z)|)^{\frac{\tau-\sigma}{2}}\,dz\]
with \(\tau=\frac{s(N-2)}{N-2s}>\sigma=\frac{m(N-2)}{N-2m}\) (i.e. \(\tau 2^{*}=2(\tau-1)s^{\prime}\) as in Theorem 5.1).
We now consider the function
\[w\mapsto\frac{|\Phi(w)|}{|\Psi(w)|^{2\frac{\tau-1}{\tau}}}\]
to check that it is bounded. It is easy to see that this quotient defines an even function which is increasing in \(\{0\leq w\leq n\}\) and decreasing in \(\{w>n\}\). Since
\[\lim_{w\to\infty}\frac{\int_{0}^{w}(1+z)^{\tau-2}dz}{\left[\int_{0}^{w}(1+z)^{ \frac{\tau-2}{2}}dz\right]^{2\frac{\tau-1}{\tau}}}=\left(\frac{\tau}{2}\right) ^{2\frac{\tau-1}{\tau}}\frac{1}{\tau-1}\,,\] (5.12)
it follows that
\[|\Phi(w)|\leq c|\Psi(w)|^{2\frac{\tau-1}{\tau}}\]
for a certain constant \(c\) not depending on \(n\). Observe that the choice of the exponent involving \(\tau\) in (5.12) is made so that we can argue as in Theorem 5.1. Indeed, an inequality analogous to (5.4) can be recovered by reasoning in a similar way.
We just remark that the regularity \((1+|u|)^{\frac{\tau}{2}-1}u\in H_{0}^{1}(\Omega)\) in point \((ii)\) implies that
\[\int_{\Omega}|\nabla u|^{s^{*}}\,dx\leq\left(\int_{\Omega}|\nabla u|^{2}(1+|u| )^{\tau-2}\,dx\right)^{\frac{s^{*}}{2}}\left(\int_{\Omega}(1+|u|)^{2^{*}\frac{ \tau}{2}}\,dx\right)^{\frac{2-s^{*}}{2}}\,,\]
so (5.6) follows.
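As an illustrative aside (not part of the proof), the exponent bookkeeping behind the last Hölder inequality can be checked symbolically: with \(\tau=\frac{s(N-2)}{N-2s}\) and \(s^{*}=\frac{Ns}{N-s}\), the power of \((1+|u|)\) absorbed in the second factor, namely \(\frac{(2-\tau)s^{*}}{2-s^{*}}\), equals \(2^{*}\frac{\tau}{2}=\frac{Ns}{N-2s}\). A minimal SymPy sketch:

```python
# Illustrative check (not from the paper) of the Hoelder exponents behind (5.6).
import sympy as sp

N, s = sp.symbols('N s', positive=True)

tau = s * (N - 2) / (N - 2 * s)
s_star = N * s / (N - s)
two_star = 2 * N / (N - 2)

print(sp.simplify((2 - tau) * s_star / (2 - s_star) - two_star * tau / 2))   # prints 0
```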
**Remark 5.4**.: An analogous of Remark 5.2 holds in this case.
### The case \(\frac{(N-2)(q-1-\alpha)}{2-q}<1\)
In this Subsection, we consider the case \(m=1=\sigma\).
**Theorem 5.5**.: _Let \(f\in L^{s}(\Omega)\) with \(s>1\) and \(\frac{N(q-1)-q}{N-2}<\alpha<q-1\) (i.e. \(\alpha+1<q<1+\frac{1}{N-1}+\alpha\frac{N-2}{N-1}\)). Assume also (2.2), (2.3), (2.4) and that \(u\) is a solution to (2.10) in the sense of Definition 2.3. Then_
1. _if \(2_{*}\leq s<\frac{N}{2}\), then \(|u|^{\frac{\tau}{2}}\in H_{0}^{1}(\Omega)\), \(u\in L^{2^{*}\frac{\tau}{2}}(\Omega)\) and_ \[\||u|^{\frac{\tau}{2}}\|_{H_{0}^{1}(\Omega)}+\|u\|_{L^{2^{*}\frac{\tau}{2}}(\Omega)}\leq M\qquad\text{for }\tau=\frac{s(N-2)}{N-2s}\geq 2\,;\]
2. _if \(1<s<2_{*}\), then \((1+|u|)^{\frac{\tau}{2}-1}u\in H_{0}^{1}(\Omega)\), \(|\nabla u|\in L^{s^{*}}(\Omega)\) and_ \[\|(1+|u|)^{\frac{\tau}{2}-1}u\|_{H_{0}^{1}(\Omega)}+\||\nabla u|\|_{L^{s^{*}}(\Omega)}\leq M\qquad\text{for }\tau=\frac{s(N-2)}{N-2s}\in(1,2)\,.\]
_The constants \(M\) above depend on \(q,\,s,\,N,\,\lambda,\,\gamma,\,\alpha,\,|\Omega|\) and on \(\|f\|_{L^{s}(\Omega)},\,\|u\|_{L^{1}(\Omega)}\)._
Proof. The proof follows the same argument as that of Theorem 5.3, although some changes must be made in the case \(1<\tau<2\). Indeed, we first choose \(0<\rho<1\) and consider the function given by
\[\Phi(w)=\int_{0}^{w}\frac{(1+|T_{n}(z)|)^{\tau-1+\rho}}{(1+|z|)^{1+\rho}}dz\,,\]
which is a bounded function and so can be taken as test function (in the sense of Remark 2.6). It can be checked as in Theorem 5.3 that it also satisfies condition (5.8). Arguing as above, we arrive at the inequality
\[\int_{\Omega}g(u)|\nabla G_{k}(u)|^{q}\Phi(G_{k}(u))\,dx\leq\\ C\gamma\left(\int_{\Omega}|\nabla G_{k}(u)|^{2}\Phi^{\prime}(G_{ k}(u))\,dx\right)\left(\int_{\Omega}|G_{k}(u)|^{\frac{N(q-1-\alpha)}{2-q}}\,dx \right)^{\frac{2-q}{N}}.\]
Next, we use the fact that
\[\frac{N(q-1-\alpha)}{2-q}<\frac{2^{*}}{2}\]
and take into account Remark 3.15. So, an inequality similar to (5.11) may be deduced.
On the other hand, we set
\[\Psi(w)=\int_{0}^{w}\frac{(1+|T_{n}(z)|)^{\frac{\tau-1+\rho}{2}}}{(1+|z|)^{ \frac{1+\rho}{2}}}dz\,.\]
Contrary to what happens in the above theorem, now the function
\[w\mapsto\frac{|\Phi(w)|}{|\Psi(w)|^{2\frac{\tau-1}{\tau}}}\]
is increasing in the whole interval \([0,+\infty)\). Nevertheless, it is not difficult to find a bound:
\[\frac{\frac{\tau-1+\rho}{\rho(\tau-1)}(1+n)^{\tau-1}}{\left(\frac{2}{\tau} \right)^{2\frac{\tau-1}{\tau}}\left[(1+n)^{\frac{\tau}{2}}-1\right]^{2\frac{ \tau-1}{\tau}}}\leq\left(\frac{\tau}{2}\right)^{2\frac{\tau-1}{\tau}}\frac{ \tau-1+\rho}{\rho(\tau-1)}\left[\frac{2^{\frac{\tau}{2}}}{2^{\frac{\tau}{2}}-1 }\right]^{2\frac{\tau-1}{\tau}}\]
and it does not depend on \(n\). Hence, the analogous of (5.5) follows because \(2s^{\prime}\frac{\tau-1}{\tau}=2^{*}\).
## Acknowledgements
The third author has been partially supported by the Spanish Ministerio de Ciencia, Innovación y Universidades and FEDER, under project PGC2018–094775–B–I00.
## References
* [1] A. Alvino, V. Ferone & A. Mercaldo, Sharp a priori estimates for a class of nonlinear elliptic equations with lower order terms, Ann. Mat. Pura Appl. (4) 194 (2015), no. 4, 1169–1201.
* [2] Ph. Bénilan, L. Boccardo, Th. Gallouët, R. Gariepy, M. Pierre & J.L. Vázquez, An \(L^{1}\)–theory of existence and uniqueness of solutions of nonlinear elliptic equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) **22** (1995), no. 2, 241–273.
* [3] A. Bensoussan, L. Boccardo & F. Murat, On a nonlinear partial differential equation having natural growth terms and unbounded solution, Ann. Inst. Henri Poincaré, Anal. Non Linéaire 5, No. 4, (1988) 347–364.
* [4] L. Boccardo & T. Gallouët, Nonlinear Elliptic Equations with Right Hand Side Measures, Comm. in Par. Diff. Eq., Vol. **17** (1992), pp. 189–258.
* [5] L. Boccardo, T. Gallouët & L. Orsina, Existence and uniqueness of entropy solutions for nonlinear elliptic equations with measure data, Ann. Inst. Henri Poincaré, Anal. Non Linéaire 13, No. 5, (1996), 539–551.
* [6] L. Boccardo & F. Murat, Almost everywhere convergence of the gradients of solutions to elliptic and parabolic equations , Nonlin. Anal. TMA, Vol. **19** (1992), pp. 581–597.
* [7] L. Boccardo, F. Murat & J.–P. Puel, Existence de solutions non bornées pour certaines équations quasi–linéaires Port. Math. 41, (1982) 507–534.
* [8] L. Boccardo, F. Murat & J.–P. Puel, Résultats d’existence pour certains problèmes elliptiques quasilinéaires Ann. Sc. Norm. Super. Pisa, Cl. Sci., IV. Ser. 11, (1984) 213–235.
* [9] L. Boccardo, F. Murat & J.–P. Puel, Existence of bounded solutions for non linear elliptic unilateral problems Ann. Mat. Pura Appl., IV. Ser. 152, (1988) 183–196.
* [10] L. Boccardo, F. Murat & J.–P. Puel, \(L^{\infty}\) estimate for some nonlinear elliptic partial differential equations and application to an existence result SIAM J. Math. Anal. 23, No. 2, (1992) 326–333.
* [11] G. Bottaro & M.E. Marina, Problema di Dirichlet per equazioni ellittiche di tipo variazionale su insiemi non limitati Boll. Unione Mat. Ital., IV. Ser. 8, (1973) 46–56.
* [12] G. Dal Maso, F. Murat, L. Orsina & A. Prignet, Renormalized solutions of elliptic equations with general measure data, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) **28** (1999), no. 4, 741–808.
* [13] T. Del Vecchio & M.M. Porzio, Existence results for a class of non coercive Dirichlet problems Ric. Mat. 44, No. 2, (1995) 421–438.
* [14] V. Ferone & F. Murat, Quasilinear problems having quadratic growth in the gradient: An existence result when the source term is small Équations aux dérivées partielles et applications. Articles dédiés à Jacques–Louis Lions. Gauthier–Villars: Paris. (1998) 497–515.
* [15] V. Ferone & F. Murat, Nonlinear problems having natural growth in the gradient: an existence result when the source terms are small Nonlinear Anal., Theory Methods Appl., Ser. A, Theory Methods 42, No. 7, (2000) 1309–1326.
* [16] V. Ferone & F. Murat, Nonlinear elliptic equations with natural growth in the gradient and source terms in Lorentz spaces J. Differ. Equations 256, No. 2, (2014) 577–608.
* [17] N. Grenon, F. Murat & A. Porretta, Existence and a priori estimate for elliptic problems with subquadratic gradient dependent terms C. R., Math., Acad. Sci. Paris 342, No. 1, (2005) 23–28.
* [18] N. Grenon, F. Murat & A. Porretta, A priori estimates and existence for elliptic equations with gradient dependent terms, Ann. Sc. Norm. Super. Pisa Cl. Sci., **13**, Issue 1, (2014), pp. 137–205.
* [19] Ch. Leone & A. Porretta, Entropy solutions for nonlinear elliptic equations in \(L^{1}\) Nonlinear Anal., Theory Methods Appl. **32**, Issue 3, (1998), 325–334.
* [20] J. Leray & J.–L. Lions, Quelques résultats de Visik sur les problèmes elliptiques non linéaires par les méthodes de Minty–Browder, Bull. Soc. Math. Fr. 93, (1965) 97–107.
* [21] S. López-Martínez, A singularity as a break point for the multiplicity of solutions to quasilinear elliptic problems, to appear in Adv. Nonlin. Anal..
* [22] F. Murat, Soluciones renormalizadas de EDP elipticas no lineales, Laboratoire d’Analyse Numérique de l’Université Paris VI, Technical report R93023, (1993).
* [23] A. Porretta, Some remarks on the regularity of solutions for a class of elliptic equations with measure data, Houston J. Math., Vol. **26** (2000), pp. 183–213.
* [24] A. Porretta, Nonlinear equations with natural growth terms and measure data, Electron. J. Differ. Equ. 2002, Conf. 09, (2002) 183–202.
* [25] A. Porretta & S. Segura de León, Nonlinear elliptic equations having a gradient term with natural growth, J. Math. Pures Appl. (9) **85**, No 3, (2006), pp. 465–492.
* [26] S. Segura de León, Existence and uniqueness for \(L^{1}\) data of some elliptic equations with natural growth, Adv. Differ. Equ. 8, No. 11, (2003) 1377–1408.
# Initial Conditions for Numerical Relativity
\(\sim\) Introduction to numerical methods for solving elliptic PDEs \(\sim\)
Hirotada Okawa ¹
CENTRA, Departamento de Física, Instituto Superior Técnico,
Universidade Técnica de Lisboa - UTL, Av. Rovisco Pais 1, 1049 Lisboa, Portugal.
hirotada.okawa@ist.utl.pt
###### Abstract
Numerical relativity has become a powerful tool to investigate the dynamics of binary problems with black holes or neutron stars, as well as the very structure of General Relativity. Although public numerical relativity codes are available to evolve such systems, a proper understanding of the methods involved is quite important. Here we focus on the numerical solution of elliptic partial differential equations. Such equations arise when preparing initial data for numerical relativity, but also for monitoring the evolution of black holes. Because such elliptic equations play an important role in many branches of physics, we give an overview of the topic, and show how to numerically solve them with simple examples and sample codes written in C++ and Fortran90 for beginners in numerical relativity or other fields requiring numerical expertise.
Keywords: numerical relativity; numerical method; black hole.
## 1 Introduction
Numerical relativity is now a mature science, the purpose of which is to investigate non-linear dynamical spacetimes. A traditional example of the application and importance of numerical relativity concerns the modelling of gravitational-wave emission and consequent detection. In order to detect gravitational waves(GWs) from black hole-neutron star (BH-NS), BH-BH or NS-NS binaries, one needs to accurately understand the waveforms from these sources in advance, because their signals are quite faint for our detectors [1]. To understand why the problem is so difficult, consider Newtonian gravity, as applied to systems like our very own Earth-Moon. In Newton’s theory, binary systems can move on stable, circular or quasi-circular orbits. However, in binary systems heavy enough or moving sufficiently fast, the effects of General Relativity become important, and the notion of stable orbits is no longer valid: GWs take energy and angular momentum away from the system and energy conservation implies that the binary orbit shrinks until finally the objects merge and presumably form a final single object. Accordingly, the evolution of binary stars can be divided into an inspiral, merger and a ring-down phase[2].
In the inspiral phase, GW emission is sufficiently under control by using slow-motion, Post-Newtonian expansions because the stars are distant from each other and their gravitational forces can be described in a perturbation scheme [3, 4]. The ringdown phase describes the vibrations of the final object. Because of the uniqueness theorems [5], GWs can be computed by BH perturbation methods [6, 7]. Advanced BH perturbation methods are reviewed in Ref. [8]. Numerical Relativity enables us to obtain the GW form in all phases [9, 10, 11].
Furthermore, we note that techniques of numerical relativity are also available in a variety of contexts. For example, but by no means the only one, it has become popular to investigate the nature of higher-dimensional spacetimes [12], most especially in the framework of large extra dimensions [13, 14, 15, 16]. It was pointed out that a micro BH could be produced in high-energy particle collisions at the Large Hadron Collider (LHC) and beyond [17, 18], and while some works use shock-wave collisions [19, 20, 21], the full-blown numerical solution is clearly desirable to investigate the nature of gravity in high-energy collisions in four-dimensional spacetime [22, 23, 24, 25]. If we consider spacetime dimensions higher than four in our simulations, larger computational resources are required. However, the problem can be reduced to a four-dimensional one with small changes by assuming symmetries of the spacetime [26, 27, 28, 29, 30, 31]. As a result, investigating the nature of higher-dimensional spacetimes is within the scope of numerical relativity [32, 33, 34].
Fortunately, open source codes to evolve dynamical systems with numerical relativity are available [35, 36, 37, 38]. All that one needs to do is to prepare the initial data describing the physics of the problem one is interested in. In this paper, we explain precisely how this is achieved. Briefly, it amounts to solving an elliptic partial differential equation (PDE), and we explain how to solve such elliptic PDEs numerically from the point of view of a beginner in numerical studies.
## 2 ADM formalism
In numerical relativity, we regard our spacetimes as the evolution of spaces. We begin by showing how to decompose the spacetime into timelike and spacelike components in the ADM formalism[39, 40, 41]. Then, we derive evolution equations from Einstein’s equations along the lines of York’s review [42].
### Decomposition
First, we introduce a family of three-dimensional spacelike hypersurfaces \(\Sigma\) in four-dimensional manifold \(V\). Hypersurfaces \(\Sigma\) are expressed as the level surfaces of a scalar function \(f\) and are not supposed to intersect one another. We can define a one-form \(\Omega_{\mu}=\nabla_{\mu}f\) which is normal to the hypersurface.
Let \(g_{\mu\nu}\) be a metric tensor in four-dimensional manifold \(V\). The norm of one-form \(\Omega_{\mu}\) can be written by a positive function \(\alpha\) as
\[g^{\mu\nu}\Omega_{\mu}\Omega_{\nu}=-\frac{1}{\alpha^{2}},\] (1)
where \(\alpha\) is called lapse function. We define a normalized one-form by
\[\omega_{\mu}=\alpha\Omega_{\mu},\ \ \ g^{\mu\nu}\omega_{\mu} \omega_{\nu}=-1.\] (2)
The orthogonal vector to a hypersurface \(\Sigma\) is written by
\[n^{\mu}=-g^{\mu\nu}\omega_{\nu},\] (3)
where the minus sign is chosen so that \(n^{\mu}\) points towards the future; we note that this timelike vector satisfies \(n^{\mu}n_{\mu}=-1\) by definition.
#### 2.1.1 Induced metric
The induced metric \(\gamma_{\mu\nu}\) on \(\Sigma\) and the projection tensor \(\perp_{\nu}^{\mu}\) from \(V\) to \(\Sigma\) are given by the four-dimensional metric \(g_{\mu\nu}\),
\[\gamma_{\mu\nu} = g_{\mu\nu}+n_{\mu}n_{\nu},\] (4)
\[\perp^{\mu}_{\nu} = \delta^{\mu}_{\nu}+n^{\mu}n_{\nu},\] (5)
where one can show \(n^{\mu}\gamma_{\mu\nu}=0\), which yields that timelike components of \(\gamma_{\mu\nu}\) vanish and only spacelike components \(\gamma_{ij}\) exist. The induced covariant derivative \(D_{i}\) on \(\Sigma\) is also defined in terms of the four-dimensional covariant derivative \(\nabla_{\mu}\).
\[D_{i}\psi = \perp^{\rho}_{i}\nabla_{\rho}\psi,\] (6)
\[D_{j}W^{i} = \perp^{\rho}_{j}\perp^{i}_{\lambda}\nabla_{\rho}W^{\lambda},\] (7)
where \(\psi\) and \(W^{\lambda}\) denote arbitrary scalar and vector on \(\Sigma\). By a straightforward calculation, one can show that the induced covariant derivative satisfies \(D_{i}\gamma_{jk}=0\).
#### 2.1.2 Curvature
Riemann tensor on \(\Sigma\) is defined using an arbitrary vector \(W^{i}\) by
\[D_{[i}D_{j]}W_{k} = \frac{1}{2}\mathcal{R}_{ijk}^{\quad\,\ell}W_{\ell},\] (8)
\[\mathcal{R}_{ijk\ell}n^{\ell} = 0,\] (9)
where \([\ ]\) denotes the antisymmetric operator for indices and \(\mathcal{R}_{ijk\ell}\) denotes Riemann tensor on \(\Sigma\). Ricci tensor is determined by the contraction of the induced metric and Riemann tensor on \(\Sigma\). Ricci scalar is also determined by the contraction of the induced metric and Ricci tensor.
We define the extrinsic curvature on \(\Sigma\), which describes how the hypersurface is embedded in the manifold \(V\). The extrinsic curvature is defined by
\[K_{\mu\nu}=-\perp^{\rho}_{\mu}\perp^{\lambda}_{\nu}\nabla_{(\rho }n_{\lambda)},\] (10)
where \((\,)\) denotes the symmetric operator for indices. One can also show that the extrinsic curvature is spacelike by contracting with the normal vector \(n^{\mu}\), in the same manner as for \(\gamma_{\mu\nu}\). In addition, by the definition of the projection tensor, we obtain the following relation between the covariant derivative of the normal vector and its projection,
\[\nabla_{\mu}n_{\nu} = \left(\perp_{\mu}^{\rho}-n_{\mu}n^{\rho}\right)\left(\perp_{\nu}^ {\lambda}-n^{\lambda}n_{\nu}\right)\nabla_{\rho}n_{\lambda}\] (11)
\[= \perp_{\mu}^{\rho}\perp_{\nu}^{\lambda}\nabla_{\rho}n_{\lambda}- \perp_{\nu}^{\lambda}n_{\mu}n^{\rho}\nabla_{\rho}n_{\lambda}-\perp_{\mu}^{\rho }n_{\nu}n^{\lambda}\nabla_{\rho}n_{\lambda}+n_{\mu}n_{\nu}n^{\rho}n^{\lambda} \nabla_{\rho}n_{\lambda}\]
\[= \perp_{\mu}^{\rho}\perp_{\nu}^{\lambda}\nabla_{\rho}n_{\lambda}-n _{\mu}n^{\rho}\nabla_{\rho}n_{\nu},\]
where the relation \(n^{\lambda}n_{\lambda}=-1\) is used in the last equation. Then, the extrinsic curvature can be rewritten by
\[K_{\mu\nu} = -\frac{1}{2}\left\{\nabla_{\mu}n_{\nu}+\nabla_{\nu}n_{\mu}+n_{\mu }n^{\rho}\nabla_{\rho}n_{\nu}+n_{\nu}n^{\rho}\nabla_{\rho}n_{\mu}\right\}\] (12)
\[= -\frac{1}{2}\left\{\gamma_{\nu\rho}\nabla_{\mu}n^{\rho}+\gamma_{ \mu\rho}\nabla_{\nu}n^{\rho}+n^{\rho}\nabla_{\rho}\gamma_{\mu\nu}\right\}\]
\[= -\frac{1}{2}\pounds\!_{n}\gamma_{\mu\nu},\]
where \(\pounds\!_{n}\gamma_{\mu\nu}\) denotes the Lie derivative of the tensor \(\gamma_{\mu\nu}\) along the vector \(n^{\mu}\). The geometrical nature of the three-dimensional hypersurfaces can be determined by the induced metric and extrinsic curvature on \(\Sigma\). \(K_{ij}\) and \(\gamma_{ij}\) must satisfy the following geometrical relations to embed \(\Sigma\) in \(V\).
#### 2.1.3 Geometrical relations
We derive geometrical relations by the projection of the four-dimensional Riemann tensor to the hypersurface \(\Sigma\). First, in order to obtain the relation between the four-dimensional Riemann tensor \(R_{\mu\nu\rho\lambda}\) and the three-dimensional Riemann tensor \(\mathcal{R}_{ijk\ell}\), we rewrite the definition of \(\mathcal{R}_{ijk\ell}\) with \(R_{\mu\nu\rho\lambda}\) and the extrinsic curvature,
\[\perp^{\mu}_{i}\perp^{\nu}_{j}\perp^{\rho}_{k}\perp^{\lambda}_{ \ell}R_{\mu\nu\rho\lambda}=\mathcal{R}_{ijk\ell}+K_{ik}K_{j\ell}-K_{jk}K_{i \ell}.\] (13)
Eq. (13) is called Gauss’ equation. Secondly, we project the four-dimensional Riemann tensor contracted by an orthogonal normal vector \(n^{\lambda}\).
\[\perp^{\mu}_{i}\perp^{\nu}_{j}\perp^{\rho}_{k}R_{\mu\nu\rho \lambda}n^{\lambda}=D_{j}K_{ik}-D_{i}K_{jk}.\] (14)
Eq. (14) is called Codazzi’s equation. Finally, we consider the Lie derivative of the extrinsic curvature along the time direction. We define a timelike vector \(t^{\mu}\) with the lapse function and a shift vector \(\beta^{\mu}\), which satisfies \(\Omega_{\mu}\beta^{\mu}=0\), as
\[t^{\mu}=\alpha n^{\mu}+\beta^{\mu}.\] (15)
Then, with the Lie derivative along \(t^{\mu}\) and \(\beta^{\mu}\), we rewrite the four dimensional Riemann tensor contracted by two orthogonal normal vectors as
\[\perp^{\mu}_{i}\perp^{\nu}_{j}R_{\mu\rho\nu\lambda}n^{\rho}n^{ \lambda}=\frac{1}{\alpha}\left[\pounds\!_{t}-\pounds\!_{\beta}\right]K_{ij}-K_ {i\ell}K^{\ell}_{j}-\frac{1}{\alpha}D_{i}D_{j}\alpha,\] (16)
Eq. (16) is called Ricci’s equation.
### Decomposition of Einstein’s equations
Let us now use the geometric relations to decompose Einstein’s equations. Let us for convenience define the Einstein tensor \(G_{\mu\nu}\),
\[G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{8\pi G} {c^{4}}T_{\mu\nu},\] (17)
where \(G\) denotes the gravitational constant and \(c\) denotes the speed of light and hereafter we set \(G=c=1\) for simplicity. We start by decomposing the energy momentum tensor as
\[T_{\mu\nu}=S_{\mu\nu}+2j_{(\mu}n_{\nu)}+\rho n_{\mu}n_{\nu},\] (18)
where \(\rho\equiv T_{\mu\nu}n^{\mu}n^{\nu}\), \(j_{\mu}\equiv-\perp^{\rho}_{\mu}T_{\rho\lambda}n^{\lambda}\) and \(S_{\mu\nu}\equiv\perp^{\rho}_{\mu}\perp^{\lambda}_{\nu}T_{\rho\lambda}\).
We multiply Gauss’ equation (13) by an induced metric \(\gamma^{ik}\) and obtain
\[\perp^{\nu}_{j}\perp^{\lambda}_{\ell}\left[R_{\nu\lambda}-R_{\mu \nu\rho\lambda}n^{\mu}n^{\rho}\right]=\mathcal{R}_{j\ell}+KK_{j\ell}-K^{\,i}_{ j}K_{i\ell}.\] (19)
In addition, Eq. (19) contracted with \(\gamma^{j\ell}\) gives twice the Einstein tensor contracted with two orthogonal normal vectors \(n^{\mu}\) and \(n^{\nu}\). Then, we obtain
\[\mathcal{R}+K^{2}-K_{ij}K^{ij}=16\pi\rho.\] (20)
Similarly, Codazzi’s equation (14) contracted by \(\gamma^{jk}\) results in
\[D_{j}K_{i}^{\,j}-D_{i}K=8\pi j_{i}.\] (21)
Note that Eq. (20) and Eq. (21) are composed of only spacelike variables and should be satisfied on each hypersurface \(\Sigma\) because they do not depend on time. Therefore, Eq. (20) and Eq. (21) are called the Hamiltonian and momentum constraints, respectively.
Finally, let us rewrite Ricci’s equation (16). Einstein’s equations (17) can also be expressed with the trace of the energy momentum tensor \(T\equiv g^{\mu\nu}T_{\mu\nu}\) as
\[R_{\mu\nu}=8\pi\left[T_{\mu\nu}-\frac{1}{2}g_{\mu\nu}T\right].\] (22)
The projection of Einstein’s equations (22) yields
\[\perp^{\mu}_{i}\perp^{\nu}_{j}R_{\mu\nu}=8\pi\left[S_{ij}-\frac{1 }{2}\gamma_{ij}\left(S-\rho\right)\right],\] (23)
where \(S=\gamma^{ij}S_{ij}\). Ricci’s equation (16) is rewritten with Eq. (19) and Eq. (23) as
\[\pounds\!_{t}K_{ij} = \pounds\!_{\beta}K_{ij}+\alpha\left(\mathcal{R}_{ij}-2K_{i\ell}K^ {\,\ell}_{j}+K_{ij}K\right)-D_{j}D_{i}\alpha\] (24)
\[-8\pi\alpha\left[S_{ij}+\frac{1}{2}\left(\rho-S\right)\gamma_{ij} \right].\]
### Propagation of Constraints
In ADM formalism, Einstein’s equations are regarded as evolution equations in time with the geometrical constraints on each hypersurface. In general, it is numerically expensive to guarantee the constraints on each step because we must solve elliptic PDEs as described in Sec. 3.1. However, in principle, one does not have to solve the constraint equations if the initial data satisfy the constraints [43, 44, 45]. This works as follows. We first define the following quantities,
\[C \equiv \left(G_{\mu\nu}-8\pi T_{\mu\nu}\right)n^{\mu}n^{\nu},\] (25)
\[C_{\mu} \equiv -\perp_{\mu}^{\rho}\left(G_{\rho\nu}-8\pi T_{\rho\nu}\right)n^{ \nu},\] (26)
\[C_{\mu\nu} \equiv \perp_{\mu}^{\rho}\perp_{\nu}^{\lambda}\left(G_{\rho\lambda}-8\pi T _{\rho\lambda}\right),\] (27)
where \(C=0\) corresponds to the Hamiltonian constraint, \(C_{\mu}=0\) corresponds to the momentum constraints and \(C_{\mu\nu}=0\) denotes the evolution equations in the ADM formalism. Einstein’s equations can be decomposed in terms of \(C,C_{\mu}\) and \(C_{\mu\nu}\) as
\[G_{\mu\nu}-8\pi T_{\mu\nu} = \left(\perp_{\mu}^{\rho}-n_{\mu}n^{\rho}\right)\left(\perp_{\nu}^ {\lambda}-n_{\nu}n^{\lambda}\right)\left(G_{\rho\lambda}-8\pi T_{\rho\lambda}\right)\] (28)
\[= C_{\mu\nu}+n_{\nu}C_{\mu}+n_{\mu}C_{\nu}+n_{\mu}n_{\nu}C.\]
Thanks to the Bianchi identity, which is a mathematical relation for the Riemann tensor, the covariant divergence of the Einstein tensor vanishes. Moreover, the covariant divergence of the energy-momentum tensor, which appears on the right-hand side of Einstein’s equations, vanishes by the energy conservation law,
\[\nabla^{\nu}\left(G_{\mu\nu}-8\pi T_{\mu\nu}\right) = 0.\] (29)
Let us project the covariant derivative of Einstein’s equations to \(n^{\mu}\) direction and to the spatial direction with \(\perp^{\rho}_{\mu}\).
\[n^{\mu}\nabla^{\nu}\left(G_{\mu\nu}-8\pi T_{\mu\nu}\right) = -C_{\mu\nu}D^{\nu}n^{\mu}-2C_{\mu}n^{\nu}\nabla_{\nu}n^{\mu}-D^{ \mu}C_{\mu}-C\nabla_{\mu}n^{\mu}-n^{\mu}\nabla_{\mu}C,\] (30)
\[\perp^{\rho}_{\mu}\nabla^{\nu}\left(G_{\rho\nu}-8\pi T_{\rho\nu}\right) = D^{\nu}C_{\nu\mu}+C_{\mu\rho}n^{\nu}\nabla_{\nu}n^{\rho}+2n^{\nu }\nabla_{\nu}C_{\mu}-C_{\rho}n_{\mu}n^{\nu}\nabla_{\nu}n^{\rho},\] (31)
where \(D_{\mu}\) denotes the covariant derivative with respect to the induced metric, noting that \(C_{\mu}\) and \(C_{\mu\nu}\) are spatial. Thus, we show the propagation of constraints along the timelike vector as
\[n^{\mu}\nabla_{\mu}C = -C_{\mu\nu}D^{\nu}n^{\mu}-2C_{\mu}n^{\nu}\nabla_{\nu}n^{\mu}-D^{ \mu}C_{\mu}-C\nabla^{\mu}n_{\mu},\] (32)
\[n^{\nu}\nabla_{\nu}C_{\mu} = -\frac{1}{2}D^{\nu}C_{\mu\nu}-\frac{1}{2}C_{\mu\rho}n^{\nu}\nabla _{\nu}n^{\rho}+\frac{1}{2}C_{\rho}n_{\mu}n^{\nu}\nabla_{\nu}n^{\rho}.\] (33)
Therefore, we can keep the Hamiltonian and momentum constraints satisfied during evolution, as long as we evolve the initial data satisfying the constraints \(C=0\) and \(C_{\mu}=0\) on \(\Sigma\) by the evolution equation \(C_{\mu\nu}=0\). It should be noted that the ADM evolution equations are numerically unstable and not suitable for numerical evolutions; instead one uses hyperbolic evolution equations for time integration, for example, BSSN and Z4 evolution equations [46, 26, 47, 48, 49].
## 3 Initial Condition for Numerical Relativity
As emphasized in Sec. 2.2, initial data cannot be freely specified, as it needs to satisfy the Hamiltonian and momentum constraints on a hypersurface \(\Sigma\). In general, the problem of constructing initial data is called “Initial Value Problem”. The standard method for solving an initial value problem is reviewed in Ref.[50, 51]. In this section, we derive the equations for the initial value problem and then introduce some examples as initial data for numerical relativity.
### Initial Value Problem
In the ADM formalism there are twelve metric variables (\(\gamma_{ij},K_{ij}\)) to be determined and four constraint equations. One therefore specifies eight of the variables on physical and numerical grounds and obtains the remaining four by solving the constraints.
#### 3.1.1 York-Lichnerowicz conformal decomposition
To begin with, we introduce the conformal factor \(\psi\) as
\[\gamma_{ij} = \psi^{4}\tilde{\gamma}_{ij},\] (34)
where we define \(\det\tilde{\gamma}_{ij}\equiv 1\) and we have one degree of freedom in \(\psi\) and five degrees of freedom in \(\tilde{\gamma}_{ij}\). By the conformal transformation, the following relations between variables with respect to \(\gamma_{ij}\) and \(\tilde{\gamma}_{ij}\) are immediately given by
\[\Gamma^{i}_{jk} = \tilde{\Gamma}^{i}_{jk}+\frac{2}{\psi}\left[\delta^{i}_{j}\tilde{ D}_{k}\psi+\delta^{i}_{k}\tilde{D}_{j}\psi-\tilde{\gamma}^{il}\tilde{\gamma}_{ jk}\tilde{D}_{l}\psi\right],\] (35)
\[\mathcal{R} = \tilde{\mathcal{R}}\psi^{-4}-8\psi^{-5}\tilde{\bigtriangleup}\psi,\] (36)
where \(\tilde{D}_{i},\tilde{\Gamma}^{i}_{jk},\tilde{\mathcal{R}}\) and \(\tilde{\bigtriangleup}\) are respectively the covariant derivative, Christoffel symbol, Ricci scalar and Laplacian operator, defined by \(\tilde{\bigtriangleup}\psi=\tilde{\gamma}^{ij}\tilde{D}_{i}\tilde{D}_{j}\psi\), with respect to \(\tilde{\gamma}_{ij}\).
#### 3.1.2 Transverse-Traceless decomposition
As for the extrinsic curvature, we start by decomposing it into a trace and a trace-free part,
\[K_{ij} = A_{ij}+\frac{1}{3}\gamma_{ij}K,\] (37)
where \(\gamma^{ij}A_{ij}=0\) and \(K=\gamma^{ij}K_{ij}\) and we have one degree of freedom in \(K\) and five degrees of freedom in \(A_{ij}\). Then, we also define the conformal transformation for \(A_{ij}\) as
\[A_{ij} = \psi^{-2}\tilde{A}_{ij}.\] (38)
According to the definition of derivatives with respect to \(\gamma_{ij}\) and \(\tilde{\gamma}_{ij}\), we obtain the following relation,
\[D_{j}A^{ij} = \psi^{-10}\tilde{D}_{i}\tilde{A}^{ij}.\] (39)
In addition, we decompose the conformal traceless tensor \(\tilde{A}_{ij}\) into a divergenceless part and a “derivative of a vector” \(W_{j}\) part. Hereafter we assume the divergenceless part vanishes for simplicity. The conformal traceless extrinsic curvature is described by
\[\tilde{A}_{ij} = \tilde{D}_{i}W_{j}+\tilde{D}_{j}W_{i}-\frac{2}{3}\tilde{\gamma}_{ ij}\tilde{D}_{k}W^{k}.\] (40)
The covariant derivative of \(\tilde{A}_{ij}\) is written by
\[\tilde{D}_{i}\tilde{A}^{i}_{j} = \tilde{\bigtriangleup}W_{j}+\tilde{D}_{i}\tilde{D}_{j}W^{i}-\frac {2}{3}\tilde{D}_{j}\tilde{D}_{k}W^{k}\] (41)
\[= \tilde{\bigtriangleup}W_{j}+\frac{1}{3}\tilde{D}_{j}\tilde{D}_{k} W^{k}+\tilde{\mathcal{R}}_{ij}W^{i},\]
where we used the definition of Riemann tensor.
#### 3.1.3 Constraints as initial value problem
With the above conformal transformation, the Hamiltonian and momentum constraints are rewritten as
\[\tilde{\bigtriangleup}\psi-\frac{1}{8}\tilde{\mathcal{R}}\psi+\frac{1}{8}\tilde{A}_{ij}\tilde{A}^{ij}\psi^{-7}-\frac{1}{12}K^{2}\psi^{5} = -2\pi\psi^{5}\rho,\] (42)
\[\tilde{\bigtriangleup}W_{i}+\frac{1}{3}\tilde{D}_{i}\tilde{D}_{k} W^{k}+\tilde{\mathcal{R}}_{ij}W^{j}-\frac{2}{3}\psi^{6}\tilde{D}_{i}K = 8\pi\psi^{6}j_{i}.\] (43)
### Schwarzschild Black Hole
Let us consider an exact solution of Einstein’s equations as initial data for numerical relativity. The Schwarzschild BH is the simplest BH solution in static and spherically symmetric spacetimes [52]. The line element of the Schwarzschild BH in spherical coordinates \((\bar{r},\theta,\phi)\) is given by
\[{\rm d}s^{2}=-f_{0}{\rm d}t^{2}+f^{-1}_{0}{\rm d}\bar{r}^{2}+\bar {r}^{2}\left({\rm d}\theta^{2}+\sin^{2}\theta{\rm d}\phi^{2}\right),\] (44)
where we define \(f_{0}\) with BH mass \(M\),
\[f_{0}(\bar{r})=1-\frac{2M}{\bar{r}}.\] (45)
Let us define the coordinate transformation by
\[\bar{r}^{2} \equiv \psi_{0}^{4}r^{2},\] (46)
\[\frac{{\rm d}\bar{r}^{2}}{1-\frac{2M}{\bar{r}}} \equiv \psi^{4}_{0}{\rm d}r^{2},\] (47)
where \(r\) denotes the isotropic radial coordinate and we introduce a scalar function \(\psi_{0}\). Then, we solve \(\bar{r}\) under the coordinate transformation and obtain \(\psi_{0}\) and the relation between \(\bar{r}\) and \(r\) as
\[\psi_{0} = 1+\frac{M}{2r},\] (48)
\[\frac{{\rm d}\bar{r}}{{\rm d}r} = \left(1+\frac{M}{2r}\right)\left(1-\frac{M}{2r}\right).\] (49)
After straightforward calculations, the line element is rewritten by
\[{\rm d}s^{2} = -\left(\frac{1-\frac{M}{2r}}{1+\frac{M}{2r}}\right)^{2}{\rm d}t^{ 2}+\left(1+\frac{M}{2r}\right)^{4}\left[{\rm d}r^{2}+r^{2}{\rm d}\theta^{2}+r^ {2}\sin^{2}\theta{\rm d}\phi^{2}\right]\] (50)
\[= -\alpha_{0}^{2}{\rm d}t^{2}+\psi_{0}^{4}\eta_{ij}{\rm d}x^{i}{\rm d }x^{j},\]
where \(\eta_{ij}\) denotes the flat metric and we define \(\alpha_{0}\). In the isotropic coordinates, all spatial metric components remain regular, in contrast to the ones in the standard Schwarzschild coordinates. The range \([2M<\bar{r}<\infty]\) in the spherical coordinates corresponds to \([\frac{M}{2}<r<\infty]\) in the isotropic coordinates. In addition, when we change to a new coordinate \(\tilde{r}\equiv\left(M/2\right)^{2}/r\), we obtain the same expression as Eq. (50) with \(\tilde{r}\) instead of \(r\). It yields that the range \([\frac{M}{2}<\tilde{r}<\infty]\) corresponds to \([0<r<\frac{M}{2}]\). The solution is inversion symmetric at \(r=M/2\) and corresponds to the Einstein-Rosen bridge[53].
Obviously, the extrinsic curvature \(K_{ij}\) of the Schwarzschild BH vanishes because the spacetime is static, and therefore the momentum constraints are trivially satisfied. Besides, the Hamiltonian constraint is also satisfied as
\[\bigtriangleup\psi_{0} = 0,\] (51)
because the Schwarzschild BH is an exact solution of Einstein’s equations.
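As a simple numerical illustration of Eq. (51), the following minimal C++ sketch (my own, not one of the paper's sample codes) evaluates the flat Laplacian of \(\psi_{0}=1+M/(2r)\) away from the puncture, using spherical symmetry, \(\bigtriangleup\psi_{0}=\psi_{0}''+(2/r)\psi_{0}'\), and the second-order stencils introduced later in Sec. 5.1; the result should converge to zero as the grid spacing is reduced.
```cpp
// A minimal numerical check of Eq. (51): the flat Laplacian of the Schwarzschild
// conformal factor psi_0 = 1 + M/(2r) vanishes away from the puncture.  With
// spherical symmetry, Delta psi = psi'' + (2/r) psi', evaluated here with
// second-order central differences (see Sec. 5.1).
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
  const double M = 1.0;
  auto psi0 = [M](double r) { return 1.0 + 0.5 * M / r; };
  const double steps[2] = {1.0e-2, 5.0e-3};       // halving dr should reduce the error ~4x
  for (double dr : steps) {
    double max_lap = 0.0;
    for (double r = 1.0; r <= 10.0; r += dr) {    // stay away from the puncture at r = M/2
      double d2 = (psi0(r + dr) - 2.0 * psi0(r) + psi0(r - dr)) / (dr * dr);
      double d1 = (psi0(r + dr) - psi0(r - dr)) / (2.0 * dr);
      max_lap = std::max(max_lap, std::fabs(d2 + 2.0 / r * d1));
    }
    std::printf("dr=%.0e  max |Delta psi_0| = %.3e\n", dr, max_lap);
  }
  return 0;
}
```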
### Puncture Initial Data
One can analytically solve the momentum constraints with the following conditions,
\[\begin{array}[]{cccll}K&=&0&,&{\rm maximal\,condition},\\ \tilde{\gamma}_{ij}&=&\eta_{ij}&,&\small{\rm conformal\,flatness},\\ \psi\mid_{\infty}&=&1&,&{\rm asymptotic\,flatness}\,.\end{array}\] (52)
The derivative operator becomes quite simple assuming conformal flatness. We also note that Eq. (42) and Eq. (43) decouple under the \(K={\rm const.}\) condition.
#### 3.3.1 Single Black Hole
Next, let us consider a BH with non-zero momentum(\(P^{i}\neq 0\)), for which the momentum constraints become non-trivial. However, a solution to conditions (52) can still be found. In this case, the momentum constraints are given by
\[\bigtriangleup W_{i}+\frac{1}{3}\partial_{i}\partial_{k}W^{k} = 0.\] (53)
We have a simple solution to satisfy Eq. (53) as
\[W_{i} = -\frac{1}{4r}\left[7P_{i}+n_{i}n_{j}P^{j}\right]+\frac{1}{r^{2}} \epsilon_{ijk}n^{j}S^{k},\] (54)
where \(P^{i}\) and \(S^{i}\) are constant vectors corresponding to the momentum and spin of BH and \(n^{i}\equiv x^{i}/r\) denotes the normal vector. Then, we obtain the Bowen-York extrinsic curvature[54] by substituting Eq. (54) into Eq. (40),
\[\tilde{A}^{(BY)}_{ij} = \frac{3}{2r^{2}}\left[P_{i}n_{j}+P_{j}n_{i}-\left(\eta_{ij}-n_{i} n_{j}\right)P^{k}n_{k}\right]+\frac{3}{r^{3}}\left[\epsilon_{kil}S^{l}n^{k}n_{ j}+\epsilon_{kjl}S^{l}n^{k}n_{i}\right].\]
On the other hand, to satisfy the Hamiltonian constraint (42) we must in general solve an elliptic PDE, even if simple-looking,
\[\bigtriangleup\psi = -\frac{1}{8}\tilde{A}^{(BY)}_{ij}\tilde{A}_{(BY)}^{ij}\psi^{-7}.\] (56)
Let us define the function \(u\) as a correction term relative to the Schwarzschild BH,
\[\psi = 1+\frac{M}{2r}+u.\] (57)
We can regularize the Hamiltonian constraint (56) when \(ru\) is regular at the origin. Near the origin, \(\tilde{A}_{ij}\) is at most proportional to \(r^{-3}\) and \(\psi\) is proportional to \(r^{-1}\), so that the divergent behavior of \(\tilde{A}_{ij}\tilde{A}^{ij}\) is compensated by the \(\psi^{-7}\) term at the origin.
Therefore, the Hamiltonian constraint yields
\[\bigtriangleup u = -\frac{1}{8}\tilde{A}^{(BY)}_{ij}\tilde{A}_{(BY)}^{ij}\psi^{-7}.\] (58)
#### 3.3.2 Multi Black Holes
We can easily prepare the initial data which contains many BHs without any momenta under the condition (52) because the Hamiltonian constraint is the same as Eq. (51) and we know that the following conformal factor satisfies the Laplace equation.
\[\psi_{M} = 1+\sum_{n=1}^{N}\frac{M_{n}}{2\mid\bm{x}-\bm{x}_{n}\mid},\] (59)
where \(M_{n}\) and \(\bm{x}_{n}\) denote the mass and position of n-th BH, respectively. The initial data defined with \(\psi=\psi_{M}\) and \(\tilde{A}_{ij}=0\) is called Brill-Lindquist initial data[55].
As for BHs with non-zero momenta, we also use the Bowen-York extrinsic curvature and the same method for the Hamiltonian constraint as \(\psi\equiv\psi_{M}+u\),
\[\bigtriangleup u = -\frac{1}{8}\psi^{-7}\sum_{n=1}^{N}\tilde{A}^{(BY,n)}_{ij}\tilde{ A}_{(BY,n)}^{ij}.\] (60)
In principle, it is possible to construct initial data for multi BHs with any momenta and spins by solving an elliptic PDE[56].
### Kerr Black Hole
It should be noted that we have the exact BH solution of a rotating BH for Einstein’s equations and we can also use it as initial data. The Kerr BH is an exact solution of Einstein’s equations in stationary and axisymmetric spacetime[57]. The line element of the Kerr BH in Boyer-Lindquist coordinates[58] is defined by
\[{\rm d}s^{2}=-\left(1-\frac{2Mr_{BL}}{\Sigma}\right){\rm d}t^{2}-\frac{4aMr_{ BL}\sin^{2}\theta}{\Sigma}{\rm d}t{\rm d}\phi+\frac{\Sigma}{\Delta}{\rm d}r^{2 }_{BL}+\Sigma{\rm d}\theta^{2}+\frac{A}{\Sigma}\sin^{2}\theta{\rm d}\phi^{2},\] (61)
where
\[A = \left(r_{BL}^{2}+a^{2}\right)^{2}-\Delta a^{2}\sin^{2}\theta,\] (62)
\[\Sigma = r_{BL}^{2}+a^{2}\cos^{2}\theta,\] (63)
\[\Delta = r_{BL}^{2}-2Mr_{BL}+a^{2},\] (64)
where \(M\) and \(a\) denote the mass and spin of BH respectively. \(\Delta\) vanishes when the radial coordinate \(r_{BL}\) is at the radius of the inner or outer horizon \(r_{\pm}\), which is a coordinate singularity.
Let us introduce a quasi-isotropic radial coordinate in the same manner as for the Schwarzschild BH by
\[r_{BL} = r\left(1+\frac{M+a}{2r}\right)\left(1+\frac{M-a}{2r}\right),\] (65)
\[\frac{{\rm d}r_{BL}}{{\rm d}r} = 1-\frac{M^{2}-a^{2}}{4r^{2}}.\] (66)
Thus, the line element of the Kerr BH yields
\[{\rm d}s^{2} = -\frac{a^{2}\sin^{2}\theta-\Delta}{\Sigma}{\rm d}t^{2}-\frac{4aMr _{BL}\sin^{2}\theta}{\Sigma}{\rm d}t{\rm d}\phi+\frac{\Sigma}{r^{2}}{\rm d}r^{ 2}+\Sigma{\rm d}\theta^{2}+\frac{A}{\Sigma}\sin^{2}\theta{\rm d}\phi^{2}.\]
The spatial metric components in the quasi-isotropic coordinates also remain regular[59, 60]. One can show that the extrinsic curvature of the Kerr BH in the quasi-isotropic coordinates is given by
\[K_{r\phi} = \frac{aM\left[2r_{BL}^{2}\left(r_{BL}^{2}+a^{2}\right)+\Sigma \left(r_{BL}^{2}-a^{2}\right)\right]\sin^{2}\theta}{r\Sigma\sqrt{A\Sigma}},\] (68)
\[K_{\theta\phi} = \frac{-2a^{3}Mr_{BL}\sqrt{\Delta}\cos\theta\sin^{3}\theta}{\Sigma \sqrt{A\Sigma}},\] (69)
which comes from the shift vector \(\beta^{\phi}\).
## 4 Apparent Horizon Finder
Now we can perform long-term dynamical simulations containing BHs with numerical relativity. For the sake of convenience, we usually use the apparent horizon (AH) to define the BH and investigate the nature of BH during the evolution. In this section, we introduce the concept of AH and derive the elliptic PDE to determine the AH.
### Apparent Horizon
The region of a BH in an asymptotically flat spacetime is defined as the set of spacetime points from which future-pointing null geodesics cannot reach future null infinity [61]. To find the BH, one can use the event horizon (EH), which is defined as the boundary of such a region. It is possible to determine the EH from the data of the numerical simulation because, in principle, one can integrate the null geodesic equation for any spacetime point forward in time during the evolution,
\[\frac{{\rm d}^{2}x^{\mu}}{{\rm d}\lambda^{2}}+\Gamma^{\mu}_{\ \nu \rho}\frac{{\rm d}x^{\nu}}{{\rm d}\lambda}\frac{{\rm d}x^{\rho}}{{\rm d} \lambda}=0,\] (70)
where \(x^{\mu}\) and \(\lambda\) denote the coordinates and the affine parameter. The numerical cost to find the EH is normally high, because we need global metric data[62].
We define a trapped surface on the hypersurface \(\Sigma\) as a smooth closed two-dimensional surface on which the expansion of future-pointing null geodesics is negative. The AH is defined as the boundary of region containing trapped surfaces in the hypersurface and is equivalent to the marginally outer trapped surface on which the expansion of future-pointing null geodesics vanishes[63]. The EH is outside the AH if the AH exists[61]. We often use the AH to find the BH in the numerical simulation instead of the EH because the AH can be locally determined and then the numerical cost is lower compared with finding the EH.
Let us introduce the normal vector \(s^{i}\) to the surface and define the induced two-dimensional metric as
\[m_{\mu\nu} = \gamma_{\mu\nu}-s_{\mu}s_{\nu}=g_{\mu\nu}+n_{\mu}n_{\nu}-s_{\mu}s _{\nu},\] (71)
where \(\gamma_{\mu\nu}\) denotes the induced metric on the three-dimensional hypersurface \(\Sigma\). The null vector is described with \(s^{i}\) and the normal vector to \(\Sigma\) by
\[\ell^{\mu} = \frac{1}{\sqrt{2}}\left[s^{\mu}+n^{\mu}\right].\] (72)
Then, the following equation should be satisfied on the AH by definition.
\[\Theta = \nabla_{\mu}\ell^{\mu}=D_{i}s^{i}-K+K_{ij}s^{i}s^{j}=0,\] (73)
where \(\Theta\) denotes the expansion of null vector and \(D_{i}\) denotes the covariant derivative with respect to \(\gamma_{ij}\).
### Apparent Horizon Finder
We can find the AH during the dynamical simulation by solving Eq. (73)[64, 65, 66]. Let us define the radius of the AH by
\[r=h(\theta,\phi).\] (74)
Thus, the normal vector \(s^{i}\) can be described with \(h(\theta,\phi)\) by
\[\tilde{s}_{i} = \left(1,-h_{,\theta},-h_{,\phi}\right),\] (75)
\[s_{i} = C\psi^{2}\tilde{s}_{i},\] (76)
\[C^{-2} = \tilde{\gamma}^{ij}\tilde{s}_{i}\tilde{s}_{j},\] (77)
where \(\tilde{s}^{i}\) is introduced for convenience, and the indices of \(s_{i}\) and \(\tilde{s}_{i}\) are raised with \(\gamma^{ij}\) and \(\tilde{\gamma}^{ij}\) respectively. Incidentally, the divergence of the normal vector is given by
\[D_{i}s^{i}=\frac{1}{\sqrt{\gamma}}\partial_{i}\sqrt{\gamma} \gamma^{ij}s_{j},\] (78)
where \(\gamma\) denotes the determinant of \(\gamma_{ij}\). Therefore, we obtain the equation that determines the AH as an elliptic PDE consisting of first and second derivatives of \(h(\theta,\phi)\). Note that because the AH equation is originally a non-linear elliptic PDE, we recast it as a flat Laplacian equation with a non-linear source term [64], which has the advantage of keeping the matrix diagonally dominant, as mentioned in Sec. 5.2.2. Specifically, we solve the following equation.
\[\bigtriangleup_{\theta\phi}h-\left(2-\zeta\right)h=h_{,\theta \theta}+\frac{\cos\theta}{\sin\theta}h_{,\theta}+\frac{1}{\sin^{2}\theta}h_{, \phi\phi}-\left(2-\zeta\right)h = S\left(\theta,\phi\right),\] (79)
where \(\zeta\) denotes a constant to be chosen for the problem at hand, and the source term is given by the flat Laplacian term and the AH equation as
\[S\left(\theta,\phi\right) = h_{,\theta\theta}+\frac{\cos\theta}{\sin\theta}h_{,\theta}+\frac {1}{\sin^{2}\theta}h_{,\phi\phi}-\left(2-\zeta\right)h+\frac{h^{2}\psi^{2}}{C^ {3}}\left[D_{i}s^{i}+K_{ij}s^{i}s^{j}-K\right]\] (80)
\[= 2h\xi^{rr}-2h\tilde{\gamma}^{r\theta}h_{,\theta}-2h\tilde{\gamma }^{r\phi}h_{,\phi}+h^{2}\cot\theta\tilde{\gamma}^{\theta r}-h^{2}\cot\theta\ \tilde{\gamma}^{\theta\phi}h_{,\phi}\]
\[-h^{2}\xi^{\theta\theta}h_{,\theta\theta}-h^{2}\xi^{\phi\phi}h_{, \phi\phi}+\zeta h\]
\[+\frac{1-C^{2}}{C^{2}}\left[2h\tilde{s}^{r}+h^{2}\cot\theta\tilde {s}^{\theta}-h^{2}\tilde{\gamma}^{\theta\theta}h_{,\theta\theta}-h^{2}\tilde{ \gamma}^{\phi\phi}h_{,\phi\phi}\right]\]
\[+\frac{h^{2}C_{,i}\tilde{s}^{i}}{C^{3}}+\frac{4h^{2}\psi_{,i} \tilde{s}^{i}}{C^{2}\psi}-\frac{h^{2}\tilde{\Gamma}^{j}\tilde{s}_{j}}{C^{2}}- \frac{2h^{2}\tilde{\gamma}^{\theta\phi}h_{,\theta\phi}}{C^{2}}\]
\[+\frac{h^{2}\psi^{2}}{C}\tilde{A}_{ij}\tilde{s}^{i}\tilde{s}^{j}- \frac{2h^{2}\psi^{2}}{3C^{3}}K,\]
where \(\xi^{ij}\equiv\tilde{\gamma}^{ij}-\eta^{ij}\).
### Mass and Spin of Black Hole
The area of the AH is defined by
\[\mathcal{A}_{AH} = \int_{S}\sqrt{\det(m_{AB})}\,{\rm d}^{2}x,\] (81)
where \(S\) denotes the surface of the AH and \(m_{AB}\) is the two-dimensional metric induced on \(S\) (cf. Eq. (71)). We also compute quantities related to the AH, the polar and equatorial circumferential lengths (\(\mathcal{C}_{p},\mathcal{C}_{e}\)).
\[\mathcal{C}_{p} = \int_{0}^{\pi}{\rm d}\theta\sqrt{g_{rr}h_{,\theta}^{2}+2g_{r\theta}h_{,\theta}+g_{\theta\theta}},\] (82)
\[\mathcal{C}_{e} = \int_{0}^{2\pi}{\rm d}\phi\sqrt{g_{rr}h_{,\phi}^{2}+2g_{r\phi}h_{,\phi}+g_{\phi\phi}}.\] (83)
If the BH relaxes to a stationary state during the evolution, it will be a Kerr BH because of the no-hair theorem. The quantities related to the AH of the Kerr BH are given by
\[\mathcal{A}_{AH} = 8\pi M_{BH}^{2}\left(1+\sqrt{1-a^{2}}\right),\] (84)
\[\mathcal{C}_{e} = 4\pi M_{BH},\] (85)
\[\frac{\mathcal{C}_{p}}{\mathcal{C}_{e}} = \frac{\sqrt{2r_{+}}}{\pi}E(\frac{a^{2}}{2r_{+}}),\] (86)
where \(M_{BH}\), \(a\) and \(r_{+}\) denote the mass, spin and outer horizon radius defined by \(r_{+}=1+\sqrt{1-a^{2}}\) and \(E(z)\) denotes an elliptic integral defined by
\[E(z) = \int_{0}^{\pi/2}\sqrt{1-z\sin^{2}\theta}{\rm d}\theta.\] (87)
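For concreteness, the following minimal C++ sketch evaluates the diagnostics of Eqs. (84)-(87) for a few spin values; the Simpson-rule quadrature for \(E(z)\) and the sample spins are my own choices and are not taken from the paper. For \(a=0\) the ratio \(\mathcal{C}_{p}/\mathcal{C}_{e}\) must reduce to 1, the Schwarzschild value, which provides a simple check.
```cpp
// Evaluate the Kerr AH diagnostics of Eqs. (84)-(87): area, equatorial
// circumference and the ratio C_p/C_e, with E(z) computed by Simpson's rule.
#include <cmath>
#include <cstdio>

// E(z) = int_0^{pi/2} sqrt(1 - z sin^2 theta) dtheta, Eq. (87)
double elliptic_E(double z) {
  const int n = 1000;                                   // even number of Simpson intervals
  const double pi = std::acos(-1.0), h = 0.5 * pi / n;
  auto f = [z](double th) { double s = std::sin(th); return std::sqrt(1.0 - z * s * s); };
  double sum = f(0.0) + f(0.5 * pi);
  for (int i = 1; i < n; ++i) sum += (i % 2 ? 4.0 : 2.0) * f(i * h);
  return sum * h / 3.0;
}

int main() {
  const double pi = std::acos(-1.0), M = 1.0;
  const double spins[3] = {0.0, 0.5, 0.9};              // dimensionless spins
  for (double a : spins) {
    double rp = 1.0 + std::sqrt(1.0 - a * a);           // outer horizon radius (units of M)
    double area  = 8.0 * pi * M * M * rp;               // Eq. (84)
    double Ce    = 4.0 * pi * M;                        // Eq. (85)
    double ratio = std::sqrt(2.0 * rp) / pi * elliptic_E(a * a / (2.0 * rp));  // Eq. (86)
    std::printf("a=%.1f  A_AH=%.4f  C_e=%.4f  C_p/C_e=%.4f\n", a, area, Ce, ratio);
  }
  return 0;
}
```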
## 5 Numerical Methods for solving elliptic PDEs
Constructing the initial data for numerical relativity is, in general, equivalent to solving the elliptic PDEs (42) and (43) with appropriate conditions. In order to solve a binary problem with high accuracy, the spectral method is the standard choice for solving such elliptic PDEs. In fact, there are useful open source codes, for example, TwoPuncture [67, 68] and LORENE [69].
Furthermore, we have to solve another elliptic PDE to find the BH in simulations within numerical relativity, as described in Sec. 4. In this case, fast methods to solve the elliptic PDE are preferred. Because there are many elliptic PDE solvers, the method has to be chosen according to the specific purpose. In this section, we introduce some classical numerical methods for beginners. They also form the basis for the Multi-Grid method mentioned in Appendix B.
### Discretization
We must discretize physical space onto a computational grid by the finite difference method, because continuum fields cannot be represented exactly on a computer. Consider first one-dimensional problems for simplicity, and introduce the grid interval \(\Delta x\). The Taylor expansion of a field \(Q(x)\) is given by
\[Q(x+\Delta x) = Q(x)+\Delta x\frac{\partial Q}{\partial x}+\frac{\Delta x^{2}}{2 }\frac{\partial^{2}Q}{\partial x^{2}}+\frac{\Delta x^{3}}{6}\frac{\partial^{3} Q}{\partial x^{3}}+\mathcal{O}(\Delta x^{4}),\] (88)
\[Q(x-\Delta x) = Q(x)-\Delta x\frac{\partial Q}{\partial x}+\frac{\Delta x^{2}}{2 }\frac{\partial^{2}Q}{\partial x^{2}}-\frac{\Delta x^{3}}{6}\frac{\partial^{3} Q}{\partial x^{3}}+\mathcal{O}(\Delta x^{4}).\] (89)
Thus, the derivative of the field \(Q(x)\) can be expressed as
\[\frac{\partial Q}{\partial x}(x) = \left\{\begin{array}[]{cll}\frac{Q_{j+1}-Q_{j}}{ \Delta x}+\mathcal{O}(\Delta x)&,&{\rm forward\,difference},\\ \frac{Q_{j}-Q_{j-1}}{\Delta x}+\mathcal{O}(\Delta x)&,&{\rm backward \,difference},\end{array}\right.\] (90)
where \(Q_{j+1},Q_{j}\) and \(Q_{j-1}\) denote \(Q(x+\Delta x),Q(x)\) and \(Q(x-\Delta x)\) respectively and both accuracies of the forward and backward difference method for derivatives are \(\mathcal{O}(\Delta x)\). In addition, the central difference method whose accuracy is \(\mathcal{O}(\Delta x^{2})\) can be defined by both Taylor expansions as
\[\frac{\partial Q}{\partial x}(x) = \frac{Q_{j+1}-Q_{j-1}}{2\Delta x}+\mathcal{O}(\Delta x^{2}).\] (91)
Similarly, the second-order derivative of \(Q(x)\) is written by
\[\frac{\partial^{2}Q}{\partial x^{2}}(x) = \frac{Q_{j+1}-2Q_{j}+Q_{j-1}}{\Delta x^{2}}+\mathcal{O}(\Delta x^ {2}).\] (92)
One can increase the accuracy of the calculation by using more points. For example, using the five values \(Q(x+2\Delta x),Q(x+\Delta x),Q(x),Q(x-\Delta x)\) and \(Q(x-2\Delta x)\) around \(x\), fourth-order accurate schemes are defined by
\[\frac{\partial Q}{\partial x}(x) = \frac{-Q_{j+2}+8Q_{j+1}-8Q_{j-1}+Q_{j-2}}{12\Delta x}+\mathcal{O} (\Delta x^{4}),\] (93)
\[\frac{\partial^{2}Q}{\partial x^{2}}(x) = \frac{-Q_{j+2}+16Q_{j+1}-30Q_{j}+16Q_{j-1}-Q_{j-2}}{12\Delta x^{2 }}+\mathcal{O}(\Delta x^{4}),\] (94)
noting that higher-accuracy schemes can also be defined. We also note that space can be discretized in two or more dimensions in the same way.
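As a concrete illustration of these stencils, the following minimal C++ sketch (not one of the paper's sample codes; the test function \(Q(x)=\sin x\) and the step sizes are chosen here purely for illustration) evaluates the second- and fourth-order formulas and prints the errors against the known derivatives; halving \(\Delta x\) should reduce the errors by factors of about 4 and 16, respectively.
```cpp
// Check the accuracy of the second- and fourth-order stencils, Eqs. (91)-(94),
// on Q(x) = sin(x), whose derivatives are known analytically.
#include <cmath>
#include <cstdio>

int main() {
  const double x = 1.0;                       // point at which we differentiate
  const double steps[2] = {1.0e-2, 5.0e-3};   // halving dx should shrink the errors
  for (double dx : steps) {
    auto Q = [](double y) { return std::sin(y); };
    // second-order central differences, Eqs. (91) and (92)
    double d1_2nd = (Q(x + dx) - Q(x - dx)) / (2.0 * dx);
    double d2_2nd = (Q(x + dx) - 2.0 * Q(x) + Q(x - dx)) / (dx * dx);
    // fourth-order stencils, Eqs. (93) and (94)
    double d1_4th = (-Q(x + 2.0 * dx) + 8.0 * Q(x + dx) - 8.0 * Q(x - dx) + Q(x - 2.0 * dx)) / (12.0 * dx);
    double d2_4th = (-Q(x + 2.0 * dx) + 16.0 * Q(x + dx) - 30.0 * Q(x) + 16.0 * Q(x - dx) - Q(x - 2.0 * dx)) / (12.0 * dx * dx);
    std::printf("dx=%.0e  |err Q' 2nd|=%.2e  |err Q'' 2nd|=%.2e  |err Q' 4th|=%.2e  |err Q'' 4th|=%.2e\n",
                dx, std::fabs(d1_2nd - std::cos(x)), std::fabs(d2_2nd + std::sin(x)),
                std::fabs(d1_4th - std::cos(x)), std::fabs(d2_4th + std::sin(x)));
  }
  return 0;
}
```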
### Relaxation Method
Hereafter, let us focus on Poisson equations \((\bigtriangleup\psi=S)\), with a field \(\psi\) and a source \(S\), as our elliptic PDEs. These form a sufficiently general and complex class of problems that they embody all the elements needed to solve the Poisson equations arising when constructing initial data for numerical relativity or when finding the apparent horizon of a BH. We explain how to solve general elliptic PDEs in Appendix A. One of the simplest methods to solve elliptic PDEs, the so-called relaxation method [70, 71, 72], is described in this section. Let us introduce a virtual time \(\tau\) to solve an elliptic PDE; our equation of elliptic type can then be transformed into an equation of parabolic type as
\[\frac{\partial\psi}{\partial\tau}=\bigtriangleup\psi-S,\] (95)
which reduces to the original Poisson equation once \(\psi\) has relaxed under the iteration. We adopt Cartesian coordinates in three-dimensional space and discretize the Poisson equation with second-order accuracy as
\[\frac{\psi^{n+1}_{j,k,l}-\psi^{n}_{j,k,l}}{\Delta\tau} = \frac{\psi^{n}_{j+1,k,l}-2\psi^{n}_{j,k,l}+\psi^{n}_{j-1,k,l}}{ \Delta x^{2}}+\frac{\psi^{n}_{j,k+1,l}-2\psi^{n}_{j,k,l}+\psi^{n}_{j,k-1,l}}{ \Delta y^{2}}\] (96)
\[+ \frac{\psi^{n}_{j,k,l+1}-2\psi^{n}_{j,k,l}+\psi^{n}_{j,k,l-1}}{ \Delta z^{2}}-S_{j,k,l},\]
where the superscript \(n\) denotes the label of virtual time and the subscript \(j,k\) and \(l\) denote labels of \(x-,y-\) and \(z-\)direction, respectively. Therefore, the field in the next step of the iteration is determined by
\[\psi^{n+1}_{j,k,l} = \left[1-2\Delta\tau\left(\frac{1}{\Delta x^{2}}+\frac{1}{\Delta y ^{2}}+\frac{1}{\Delta z^{2}}\right)\right]\psi^{n}_{j,k,l}-\Delta\tau\,S_{j,k,l}\] (97)
\[+ \Delta\tau\left[\frac{\psi^{n}_{j+1,k,l}+\psi^{n}_{j-1,k,l}}{ \Delta x^{2}}+\frac{\psi^{n}_{j,k+1,l}+\psi^{n}_{j,k-1,l}}{\Delta y^{2}}+\frac {\psi^{n}_{j,k,l+1}+\psi^{n}_{j,k,l-1}}{\Delta z^{2}}\right].\]
We continue to update the field \(\psi\) by the above expression until \(\psi\) relaxes and obtain the solution of the Poisson equation (\(\bigtriangleup\psi=S\)).
#### 5.2.1 Jacobi Method
In order to discuss the method in practice, we consider the one-dimensional Poisson equation and discretize it as
\[\bigtriangleup\psi=\frac{{\rm d}^{2}\psi}{{\rm d}x^{2}} = S,\]
\[\frac{\psi_{j+1}-2\psi_{j}+\psi_{j-1}}{\Delta x^{2}} = S_{j}.\] (98)
We consider Eq. (98) as the equation to determine \(\psi_{j}\),
\[\psi_{j}^{n+1,\mathcal{J}} = \frac{1}{2}\left[\psi_{j+1}^{n}+\psi_{j-1}^{n}-\Delta x^{2}S_{j}^ {n}\right],\] (99)
where the superscript \(n\) denotes the label of the time step and we attach the label \(\mathcal{J}\) to the field at the next time step in the Jacobi method. Thus, we repeatedly update the field until \(\psi\) converges. In other words, the flowchart of the Jacobi method is as follows.
1. Give a trial field \(\psi^{n}\).
2. Obtain a new field \(\psi^{n+1}\) from Eq. (99).
3. Set the obtained field as the new trial field.
4. Repeat steps 1.-3. until the change of \(\psi\) is within the numerical tolerance.
In addition, it is easy to extend to the three-dimensional Poisson equation as
\[\psi_{j,k,l}^{n+1,\mathcal{J}} = \frac{1}{6}\left[\psi_{j+1,k,l}^{n}+\psi_{j-1,k,l}^{n}+\psi_{j,k+1,l}^{n}+\psi_{j,k-1,l}^{n}+\psi_{j,k,l+1}^{n}+\psi_{j,k,l-1}^{n}-\Delta h^{2}S_{j,k,l}^{n}\right],\] (100)
where we define \(\Delta h\equiv\Delta x=\Delta y=\Delta z\) for simplicity.
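The following C++ sketch illustrates the Jacobi update (99) for the one-dimensional Poisson equation \({\rm d}^{2}\psi/{\rm d}x^{2}=12x^{2}\), whose analytic solution \(\psi=x^{4}\) is used as a test in Sec. 6.1.1. It is not the paper's sample code: for simplicity we impose Dirichlet values \(\psi(0)=0\) and \(\psi(1)=1\) at both ends, which are consistent with \(\psi=x^{4}\), instead of the mixed boundary conditions used later.
```cpp
// Jacobi iteration, Eq. (99), for d^2 psi/dx^2 = 12 x^2 on [0,1] with
// Dirichlet values psi(0)=0 and psi(1)=1 (analytic solution psi = x^4).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const int N = 101;                        // grid points j = 0 .. N-1
  const double dx = 1.0 / (N - 1);
  std::vector<double> psi(N, 0.0), psi_new(N, 0.0), S(N);
  for (int j = 0; j < N; ++j) S[j] = 12.0 * (j * dx) * (j * dx);
  psi[N - 1] = psi_new[N - 1] = 1.0;        // Dirichlet boundary values
  for (int n = 1; n <= 200000; ++n) {
    double diff = 0.0;
    for (int j = 1; j < N - 1; ++j) {       // Jacobi update, Eq. (99)
      psi_new[j] = 0.5 * (psi[j + 1] + psi[j - 1] - dx * dx * S[j]);
      diff = std::max(diff, std::fabs(psi_new[j] - psi[j]));
    }
    psi.swap(psi_new);
    if (diff < 1e-10) { std::printf("converged after %d sweeps\n", n); break; }
  }
  std::printf("psi(0.5) = %.6f  (analytic value 0.0625)\n", psi[(N - 1) / 2]);
  return 0;
}
```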
#### 5.2.2 Matrix expression
Discretized elliptic PDEs can be expressed by matrices and vectors. Once we describe the elliptic PDE via a matrix expression, the problem involves solving the inverse of the matrix. For example, a matrix expression for Jacobi method is given as follows. We introduce a solution vector \(\psi_{I}\) and source vector \(b_{I}\). Then, Eq. (98) can be expressed as
\[\left[\begin{array}[]{ccccccc}A_{00}&A_{01}&A_{02}&A_{03}&\cdots& A_{0N-1}&A_{0N}\\ 1&-2&1&0&\cdots&0&0\\ 0&1&-2&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&0&\cdots&-2&1\\ A_{N0}&A_{N1}&A_{N2}&A_{N3}&\cdots&A_{NN-1}&A_{NN}\end{array}\right]\left[ \begin{array}[]{c}\psi_{0}\\ \psi_{1}\\ \psi_{2}\\ \vdots\\ \psi_{N-1}\\ \psi_{N}\end{array}\right]=\left[\begin{array}[]{c}b_{0}\\ \Delta x^{2}S_{1}\\ \Delta x^{2}S_{2}\\ \vdots\\ \Delta x^{2}S_{N-1}\\ b_{N}\end{array}\right],\] (101)
where \(A_{IJ}\) is the coefficient matrix corresponding to the Laplacian operator, and the first and last rows of \(A_{IJ}\) encode the boundary conditions to be determined by the physics. The system is formally expressed as \(A_{IJ}\psi_{J}=b_{I}\), and the problem reduces to applying the inverse of the coefficient matrix, \(\psi_{I}=(A^{-1})_{IJ}b_{J}\). There are many methods to numerically invert matrices. In general, the Jacobi method for an arbitrary coefficient matrix is expressed as
\[A_{II}\psi_{I}+\sum_{I\neq J}A_{IJ}\psi_{J} = b_{I},\]
\[\psi_{I}^{n+1,\mathcal{J}} = \frac{1}{A_{II}}\left(b_{I}-\sum_{I\neq J}A_{IJ}\psi_{J}^{n} \right).\] (102)
We note that other elliptic PDEs may not be solvable by the Jacobi method because the iteration is not always stable; this can be shown by a von Neumann numerical stability analysis. The Poisson equation, fortunately, is stable, which is equivalent to the coefficient matrix being diagonally dominant.
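As a small illustration of Eq. (102) and of the role of diagonal dominance, the following C++ sketch applies the matrix form of the Jacobi iteration to a diagonally dominant tridiagonal system; the particular entries of \(A_{IJ}\) and \(b_{I}\) are my own test values and are not taken from the paper.
```cpp
// Matrix form of the Jacobi iteration, Eq. (102), on a small diagonally
// dominant test system A psi = b.
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
  const int N = 4;
  const double A[N][N] = {{ 4, -1,  0,  0},
                          {-1,  4, -1,  0},
                          { 0, -1,  4, -1},
                          { 0,  0, -1,  4}};   // diagonally dominant
  const double b[N] = {1, 2, 3, 4};
  double psi[N] = {0, 0, 0, 0}, psi_new[N];
  for (int n = 0; n < 1000; ++n) {
    double diff = 0.0;
    for (int I = 0; I < N; ++I) {              // Eq. (102)
      double sum = 0.0;
      for (int J = 0; J < N; ++J)
        if (J != I) sum += A[I][J] * psi[J];
      psi_new[I] = (b[I] - sum) / A[I][I];
      diff = std::max(diff, std::fabs(psi_new[I] - psi[I]));
    }
    for (int I = 0; I < N; ++I) psi[I] = psi_new[I];
    if (diff < 1e-12) break;
  }
  for (int I = 0; I < N; ++I) {                // the residual b - A psi should be ~0
    double r = b[I];
    for (int J = 0; J < N; ++J) r -= A[I][J] * psi[J];
    std::printf("psi[%d] = %.8f   residual = %.1e\n", I, psi[I], r);
  }
  return 0;
}
```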
#### 5.2.3 Gauss-Seidel Method
In iterative methods for Poisson equations such as the Jacobi method, how fast we obtain the solution depends on the trial field \(\psi^{n}\), and we expect the field to become a better approximation as the iteration proceeds. In order to obtain a trial field closer to the solution, we should actively use the already updated values. Thus, in the Gauss-Seidel method, we determine the next trial field as
\[\psi_{j}^{n+1,\mathcal{GS}} = \frac{1}{2}\left[\psi_{j+1}^{n}+\psi_{j-1}^{n+1,\mathcal{GS}}- \Delta x^{2}S_{j}^{n}\right],\] (103)
where \(\mathcal{GS}\) denotes that the field is determined by Gauss-Seidel method. In addition, the matrix expression for Gauss-Seidel method is given by
\[A_{II}\psi_{I}+\sum_{I<J}A_{IJ}\psi_{J}+\sum_{I>J}A_{IJ}\psi_{J} =b_{I},\]
\[\psi_{I}^{n+1,\mathcal{GS}}=\frac{1}{A_{II}}\left(b_{I}-\sum_{I>J}A_{IJ}\psi_{J}^{n+1}-\sum_{I<J}A_{IJ}\psi_{J}^{n}\right).\] (104)
#### 5.2.4 SOR Method
It turns out that the Poisson equation is solved faster with the Gauss-Seidel method than with the Jacobi method, as shown later. Although the speed at which the numerical solution converges depends on the trial field in iterative methods, the Gauss-Seidel method provides a “better” trial field than Jacobi’s in that respect. Thus, it is possible to accelerate convergence further by updating the trial field more aggressively. This method is called the Successive Over-Relaxation (SOR) method, defined by
\[\psi^{n+1,\mathcal{S}}_{j}=\psi^{n}_{j}+\omega\left(\psi_{j}^{n+1 ,\mathcal{GS}}-\psi_{j}^{n}\right),\] (105)
where the superscript \(\mathcal{S}\) denotes the label of SOR method and \(\omega\) denotes an acceleration parameter whose range is defined in \(1\leq\omega<2\) by the stability analysis. When we set the acceleration parameter as unity, SOR method is identical to the Gauss-Seidel method by definition.
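A minimal C++ sketch of a Gauss-Seidel/SOR sweep, Eqs. (103)-(105), for the same one-dimensional test problem as in the Jacobi sketch above (again with simple Dirichlet values at both ends, which is my own simplification) is given below; setting \(\omega=1\) recovers the Gauss-Seidel method, and the printed sweep counts illustrate the speed-up discussed in Sec. 6.1.1.
```cpp
// Gauss-Seidel / SOR sweep, Eqs. (103)-(105), for d^2 psi/dx^2 = 12 x^2 on [0,1]
// with psi(0)=0, psi(1)=1.  omega = 1 corresponds to the Gauss-Seidel method.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int solve(double omega) {
  const int N = 101;
  const double dx = 1.0 / (N - 1);
  std::vector<double> psi(N, 0.0), S(N);
  for (int j = 0; j < N; ++j) S[j] = 12.0 * (j * dx) * (j * dx);
  psi[N - 1] = 1.0;                               // Dirichlet boundary values
  for (int n = 1; n <= 200000; ++n) {
    double diff = 0.0;
    for (int j = 1; j < N - 1; ++j) {
      // Gauss-Seidel value, Eq. (103): psi[j-1] is already updated in this sweep
      double gs = 0.5 * (psi[j + 1] + psi[j - 1] - dx * dx * S[j]);
      double upd = psi[j] + omega * (gs - psi[j]);   // SOR, Eq. (105)
      diff = std::max(diff, std::fabs(upd - psi[j]));
      psi[j] = upd;
    }
    if (diff < 1e-10) return n;                   // number of sweeps used
  }
  return -1;                                      // did not converge within the cap
}

int main() {
  std::printf("Gauss-Seidel (omega=1.0): %d sweeps\n", solve(1.0));
  std::printf("SOR (omega=1.5)         : %d sweeps\n", solve(1.5));
  std::printf("SOR (omega=1.9)         : %d sweeps\n", solve(1.9));
  return 0;
}
```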
## 6 Results
In this section, we introduce sample codes to solve Poisson equations using different methods. These codes are sufficiently general that they can be applied to other problems in physics, provided one slightly changes the source term and boundary conditions.
### Code Tests
As tests for our codes, we use the following analytical solutions. We show numerical results for Poisson equations with a simple linear source and with a sufficiently non-linear source. In addition, the code to find the AH of the Kerr BH is also shown as an example. Some sample codes parallelized with OpenMP are also available in Appendix C.
#### 6.1.1 Linear source
Let us consider simple source term for the Poisson equation as
\[\bigtriangleup\psi=\frac{{\rm d}^{2}\psi}{{\rm d}x^{2}}=12x^{2}.\] (106)
In numerical computation, we set the range as \(0\leq x\leq 1\) and boundary conditions by
\[\frac{{\rm d}\psi}{{\rm d}x}\biggr{|}_{x=0}=0 , {\rm Neumann\,B.C.},\]
\[\psi\bigr{|}_{x=1}=1 , {\rm Dirichlet\,B.C.}.\] (107)
Then, we obtain the analytical solution by integrating Eq. (106) twice with boundary conditions (107),
\[\psi(x) = x^{4}.\] (108)
An arbitrary initial guess for the solution can be given; we set \(\psi(x)=1\) initially for these boundary conditions. We set the resolution of the computational grid to \(\Delta x=1/100\). Fig. 1 (a) shows the numerical solution obtained by the Jacobi method compared to the analytical solution. We note that the accuracy of the numerical result depends on the computational resolution, and how the accuracy increases with resolution depends on the discretization scheme; Fig. 1 (b) is compatible with second-order accuracy. We compare the Poisson solvers in Fig. 1 (c) by the number of iteration steps needed to obtain the solution. The curves correspond to the different methods: the Jacobi method, the Gauss-Seidel method, and the SOR method with \(\omega=1.5\) and \(\omega=1.9\). The SOR method gives the solution about 10-100 times faster than the Jacobi method, depending on the acceleration parameter \(\omega\).
[Figure 1: (a) numerical vs. analytical solution, (b) convergence with resolution, (c) comparison of the Poisson solvers.]
#### 6.1.2 Non-linear source
Next, let us consider a weak gravitational field, namely Newtonian gravitational source. A gravitational potential \(\Phi\) can be determined by the Poisson equation,
\[\bigtriangleup\Phi = -4\pi\rho,\] (109)
where we omit the Newton constant by using \(G=1\) units. Suppose gravitational sources are distributed with spherical symmetry as
\[\rho(r) = \left\{\begin{array}[]{cr}\rho_{0}\left(1-r^{2}\right),&r<1,\\ 0,&r\geq 1,\end{array}\right.\] (110)
where \(\rho_{0}\) is a constant. The corresponding Poisson equation is rewritten as
\[\bigtriangleup\Phi=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left[r^{2}\frac{\partial}{\partial r}\right]\Phi+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\left[\sin\theta\frac{\partial}{\partial\theta}\right]\Phi+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}}\Phi = -4\pi\rho,\]
\[\frac{1}{r^{2}}\frac{\partial}{\partial r}\left[r^{2}\frac{\partial}{\partial r}\right]\Phi = -4\pi\rho.\]
Thus, we obtain the analytical solution for the source (110) by solving the equation separately in the region \(r>1\), with the boundary condition \(\Phi\to 0\) at infinity, and in the region \(r\leq 1\), with the regularity condition at the origin.
\[\Phi(r)=\left\{\begin{array}[]{cr}\pi\rho_{0}\left[ \frac{r^{4}}{5}-\frac{2r^{2}}{3}+1\right],&r\leq 1,\\ \frac{8\pi\rho_{0}}{15r},&r>1.\end{array}\right.\] (112)
We consider this analytical solution to test our code. With spherical symmetry, the Poisson equation can be regarded as a one-dimensional Poisson equation with a non-linear source in our method,
\[\frac{\partial^{2}\Phi}{\partial r^{2}}=-4\pi\rho-\frac{2}{r}\frac{\partial\Phi}{\partial r},\] (113)
where the range considered is \(0\leq r\leq 10\) and the boundary conditions are set as
\[\frac{{\rm d}\Phi}{{\rm d}r}\biggr{|}_{r=0}=0 , {\rm Neumann\,B.C.},\]
\[\frac{{\rm d}\left(r\Phi\right)}{{\rm d}r}\biggr{|}_{r=10}=0 , {\rm Robin\,B.C.},\] (114)
where the Robin boundary condition is chosen because we expect \(\Phi\propto r^{-1}\) at large distance.
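For concreteness, the following sketch solves the discretized Eq. (113) with the boundary conditions (114) by an SOR sweep and compares the result with the analytic solution (112). It is not the paper's sample code: the grid size, the value \(\omega=1.5\), and the first-order one-sided treatment of the Neumann and Robin conditions are my own choices for illustration.
```cpp
// Solve Phi'' = -4 pi rho - (2/r) Phi' on 0 <= r <= 10 with a Neumann condition
// at r=0 and a Robin condition d(r Phi)/dr = 0 at r=10, using an SOR sweep.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const int N = 201;                       // grid points on 0 <= r <= 10
  const double rmax = 10.0, dr = rmax / (N - 1), pi = std::acos(-1.0), rho0 = 1.0;
  const double omega = 1.5;                // SOR acceleration parameter
  std::vector<double> Phi(N, 0.0), r(N), rho(N);
  for (int j = 0; j < N; ++j) {
    r[j] = j * dr;
    rho[j] = (r[j] < 1.0) ? rho0 * (1.0 - r[j] * r[j]) : 0.0;   // source (110)
  }
  for (int n = 0; n < 200000; ++n) {
    double diff = 0.0;
    for (int j = 1; j < N - 1; ++j) {
      // second-order discretization of Phi'' + (2/r) Phi' = -4 pi rho, solved for Phi_j
      double gs = 0.5 * (Phi[j + 1] + Phi[j - 1] + dr * dr * 4.0 * pi * rho[j]
                         + (dr / r[j]) * (Phi[j + 1] - Phi[j - 1]));
      double upd = Phi[j] + omega * (gs - Phi[j]);
      diff = std::max(diff, std::fabs(upd - Phi[j]));
      Phi[j] = upd;
    }
    Phi[0] = Phi[1];                                   // Neumann B.C. at r = 0 (first order)
    Phi[N - 1] = r[N - 2] * Phi[N - 2] / r[N - 1];     // Robin B.C. at r = rmax (first order)
    if (diff < 1e-10) break;
  }
  // compare with the analytic solution (112) inside and outside the source
  std::printf("Phi(0.5): numeric %.5f  analytic %.5f\n",
              Phi[10], pi * rho0 * (std::pow(0.5, 4) / 5.0 - 2.0 * 0.25 / 3.0 + 1.0));
  std::printf("Phi(5.0): numeric %.5f  analytic %.5f\n",
              Phi[100], 8.0 * pi * rho0 / (15.0 * 5.0));
  return 0;
}
```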
[Figure 2: (a) source distribution, (b) numerical solution of Eq. (113).]
Fig. 2 (a) shows the source distribution and (b) shows the numerical result by solving the Eq. (113). The result is obtained with 400 grid points but shown with only 40 points.
#### 6.1.3 Apparent Horizon of Kerr Black Hole
Let us apply our Poisson-solver code to finding the AH of the Kerr BH. The AH equation (79) reduces to a simpler equation under axisymmetry, \(\partial_{\phi}=0\).
The normal vector \(s_{i}\) is defined with axisymmetry and the normalization \(C\) is determined by the Kerr metric as
\[\bar{s}_{i} = \left[1,-h_{,\theta},0\right],\ \ s_{i}=C\bar{s}_{i},\] (115)
\[C^{-2} = \gamma^{ij}\bar{s}_{i}\bar{s}_{j}=\gamma^{rr}+\gamma^{\theta \theta}h_{,\theta}^{2}.\] (116)
To be concrete, we note that the non-trivial part of the AH equation with axisymmetry in isotropic coordinates can be written by
\[D_{i}s^{i} = \frac{1}{\sqrt{\gamma}}\partial_{i}\sqrt{\gamma}\gamma^{ij}s_{j}\] (117)
\[=\]
\[-C\gamma^{\theta\theta}\cot\theta h_{,\theta}+C\gamma^{rr}_{,r}- Ch_{,\theta}\gamma^{\theta\theta}_{,\theta}-C\gamma^{\theta\theta}h_{,\theta\theta}\]
\[-\frac{C^{3}}{2}\gamma^{rr}\left[\gamma^{rr}_{,r}+\gamma^{\theta \theta}_{,r}h_{,\theta}^{2}\right]+\frac{C^{3}}{2}\gamma^{\theta\theta}h_{, \theta}\left[\gamma^{rr}_{,\theta}+\gamma^{\theta\theta}_{,\theta}h_{,\theta}^ {2}+2\gamma^{\theta\theta}h_{,\theta}h_{,\theta\theta}\right],\]
where
\[\frac{{\rm d}r_{BL}}{{\rm d}r} = 1-\frac{M^{2}-a^{2}}{4r^{2}},\ \ C_{,i}=-\frac{C^{3}}{2}\left[ \gamma^{rr}_{,i}+\gamma^{\theta\theta}_{,i}h_{,\theta}^{2}+2\gamma^{\theta \theta}h_{,\theta}h_{,\theta i}\right],\]
\[\Sigma_{,r} = 2r_{BL}\frac{{\rm d}r_{BL}}{{\rm d}r},\ \ \Sigma_{,\theta}=-2a^{ 2}\cos\theta\sin\theta,\ \ \Delta_{,r}=2\left(r_{BL}-M\right)\frac{{\rm d}r_{ BL}}{{\rm d}r},\]
\[A_{,r} = 4\left(r_{BL}^{2}+a^{2}\right)r_{BL}\frac{{\rm d}r_{BL}}{{\rm d} r}-\Delta_{,r}a^{2}\sin^{2}\theta,\ \ A_{,\theta}=-2\Delta a^{2}\sin\theta\cos\theta,\]
\[\gamma^{rr}_{,r} = \frac{2r\Sigma-r^{2}\Sigma_{,r}}{\Sigma^{2}},\ \ \gamma^{rr}_{, \theta}=-\frac{r^{2}\Sigma_{,\theta}}{\Sigma^{2}},\ \ \gamma^{\theta\theta}_{, r}=-\frac{\Sigma_{,r}}{\Sigma^{2}},\ \ \gamma^{\theta\theta}_{,\theta}=-\frac{ \Sigma_{,\theta}}{\Sigma^{2}}.\] (118)
On the other hand, we can also solve for the AH in Boyer-Lindquist coordinates by changing only the following part.
\[D_{i}s^{i} = \frac{C}{2\Sigma}\left(\Sigma_{,r}\gamma^{rr}-\Sigma_{,\theta} \gamma^{\theta\theta}h_{,\theta}\right)+\frac{C}{2A}\left(A_{,r}\gamma^{rr}-A_ {,\theta}\gamma^{\theta\theta}h_{,\theta}\right)-\frac{C}{2\Delta}\Delta_{,r} \gamma^{rr}\] (119)
\[-C\gamma^{\theta\theta}\cot\theta h_{,\theta}+C\gamma^{rr}_{,r}- Ch_{,\theta}\gamma^{\theta\theta}_{,\theta}-C\gamma^{\theta\theta}h_{,\theta\theta}\]
\[-\frac{C^{3}}{2}\gamma^{rr}\left[\gamma^{rr}_{,r}+\gamma^{\theta \theta}_{,r}h_{,\theta}^{2}\right]+\frac{C^{3}}{2}\gamma^{\theta\theta}h_{, \theta}\left[\gamma^{rr}_{,\theta}+\gamma^{\theta\theta}_{,\theta}h_{,\theta}^ {2}+2\gamma^{\theta\theta}h_{,\theta}h_{,\theta\theta}\right].\]
In Fig. 3 (a), we show the surface of the AH of the Schwarzschild BH in isotropic coordinates, obtained with the code “sor_AHF_SBH_ISO.f90”, displaying the three-dimensional shape of the AH in one octant (1/8) of the computational grid. Fig. 3 (b) shows how the shape on the two-dimensional x-z plane differs among different spin parameters, obtained with the code “sor_AHF_KBH_ISO.f90”. The AH radius shrinks as the spin of the BH increases.
[Figure 3: (a) AH surface of the Schwarzschild BH in isotropic coordinates, (b) AH shapes in the x-z plane for different spins.]
### Kerr Black Hole and Single Puncture Black Hole
As the last example, let us compare the Kerr BH to a single puncture BH with spin as initial data for numerical relativity. A Kerr BH in quasi-isotropic coordinates can be used as the initial data discussed in Sec. 3.4. A single puncture BH is obtained by solving the Hamiltonian constraint (58) without any momentum, \(P^{i}=0\), and with a spin \(S^{z}\) in the Bowen-York extrinsic curvature of Sec. 3.3.1.
In order to check whether our AHF works well for this comparison, in Fig. 4 (a) we show the relation between the AH area of the Kerr BH and the AH radius in isotropic coordinates as a function of the spin parameter. The blue line denotes the analytical AH area and the red crosses denote numerical results obtained by solving the AH equation for the Kerr BH. The green circles show the coordinate radii at which the AHs with different spins are located. Much larger computational resources are required to obtain the solution for a high BH spin, because high resolution in the coordinate radius is needed in this regime.
We perform numerical relativity simulations with the initial data of the single puncture BH and the Kerr BH in Fig. 4 (b). The BSSN evolution equations, which give stable dynamical evolution [46, 47, 48], are adopted in these simulations. The colors distinguish the different spins and the line types distinguish the Kerr BH from the single puncture BH. The spins of the single puncture BHs settle down at late times, which shows that the BHs relax to an almost stationary state, so one can compare with the Kerr BH results at late times. The single puncture BH with the higher spin does not reach the spin we expect. This is because we assume conformal flatness when constructing the puncture BH, whereas the Kerr BH cannot be expressed by a conformally flat metric. However, it should be noted that the puncture BH represents small-spin BHs well, and it is a genuinely powerful way to construct initial data for multi-BH systems.
[Figure 4: (a) AH area of the Kerr BH vs. AH radius as a function of spin, (b) evolution of the BH spin for Kerr and single puncture initial data.]
## 7 Conclusions
In these notes, we showed how to prepare initial data for numerical relativity and how to obtain the apparent horizon of BHs, both of which reduce to solving elliptic PDEs in general. We presented several BH solutions as initial data for numerical relativity and described several numerical methods to solve elliptic PDEs. In particular, sample codes to solve Poisson equations with linear and non-linear sources are available online to public users. It is worth noting that these simple, “classical” methods are still powerful enough to be of use for current problems. In addition, we note that modern methods (e.g. the Multi-Grid method) can eventually upgrade these classical methods in terms of numerical cost and run time. Of course, one should carefully choose which method to use to solve elliptic PDEs, according to the problem at hand.
## Acknowledgments
The author would like to thank Vitor Cardoso for giving the author the opportunity to lecture at this school, and the Organizers and Editors of the NR/HEP2: Spring School at Instituto Superior Técnico in Lisbon. The author would also like to thank an anonymous referee for a careful reading of the manuscript and many useful suggestions. The author is thankful to Ana Sousa, who helped to improve the English of these notes, Sérgio Almeida, who maintains the cluster “Baltasar-Sete-Sóis”, and Takashi Hiramatsu, who maintains the “venus” cluster. Numerical computations in this work were carried out on the “Baltasar-Sete-Sóis” cluster at Instituto Superior Técnico in Lisbon, which is supported by the DyBHo-256667 ERC Starting Grant, and on the “venus” cluster at the Yukawa Institute Computer Facility in Kyoto University. This work was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development & Innovation.
## Appendix A LDU decomposition
In the main text we described how to numerically solve only the Poisson equation. In principle, however, general elliptic PDEs can also be solved, including problems whose matrix is not diagonally dominant. A system of linear equations can be expressed by a matrix, as described in Sec. 5.2.2, as
\[A_{IJ}\psi_{J} = b_{I}.\] (120)
Let us decompose the matrix \(A_{IJ}\) into lower and upper triangular matrices, defined as \(L_{IJ}\) and \(U_{IJ}\) respectively, and a diagonal matrix \(D_{KK}\),
\[A_{IJ} \equiv L_{IK}D_{KK}U_{KJ}\]
\[= \left[\begin{array}[]{ccccc}1&0&0&\cdots&0\\ L_{21}&1&0&\cdots&0\\ L_{31}&L_{32}&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ L_{N1}&L_{N2}&L_{N3}&\cdots&1\end{array}\right]\left[\begin{array}[]{ccccc}D_{ 11}&0&0&\cdots&0\\ 0&D_{22}&0&\cdots&0\\ 0&0&D_{33}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&D_{NN}\end{array}\right]\left[\begin{array}[]{ccccc}1&U_{12}&U_{1 3}&\cdots&U_{1N}\\ 0&1&U_{23}&\cdots&U_{2N}\\ 0&0&1&\cdots&U_{3N}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\end{array}\right],\]
where \(D_{KK}\) denotes the diagonal matrix. Then, the solution vector can be solved \(\psi_{J}\) step by step as
\[b_{I} = A_{IJ}\psi_{J}=L_{IK}D_{KK}U_{KJ}\psi_{J},\] (122)
\[= L_{IK}D_{KK}\xi_{K},\]
where
\[\xi_{K} \equiv U_{KJ}\psi_{J}.\] (123)
Thus, it is easy to obtain the solution by the following procedure. First, we obtain an auxiliary vector \(\xi_{K}\) as
\[\xi_{1} = b_{1}/D_{11},\]
\[\xi_{2} = \left(b_{2}-L_{21}D_{11}\xi_{1}\right)/D_{22},\]
\[\xi_{3} = \left(b_{3}-L_{31}D_{11}\xi_{1}-L_{32}D_{22}\xi_{2}\right)/D_{33},\]
\[\vdots \vdots\]
\[\xi_{N} = \left(b_{N}-\sum_{I=1}^{N-1}L_{NI}D_{II}\xi_{I}\right)/D_{NN},\] (124)
which corresponds to solving Eq. (122) forward from \(\xi_{1}\) to \(\xi_{N}\),
\[D_{11}\xi_{1} = b_{1},\]
\[L_{21}D_{11}\xi_{1}+D_{22}\xi_{2} = b_{2},\]
\[L_{31}D_{11}\xi_{1}+L_{32}D_{22}\xi_{2}+D_{33}\xi_{3} = b_{3},\]
\[\vdots \vdots\]
\[\sum_{I=1}^{N-1}L_{NI}D_{II}\xi_{I}+D_{NN}\xi_{N} = b_{N}.\] (125)
Therefore, the solution vector \(\psi_{J}\) is given by
\[\psi_{N} = \xi_{N},\]
\[\psi_{N-1} = \xi_{N-1}-U_{N-1N}\psi_{N},\]
\[\vdots \vdots\]
\[\psi_{1} = \xi_{1}-\sum_{I=N}^{2}U_{1I}\psi_{I},\] (126)
which similarly corresponds to solving Eq. (123) backward from \(\psi_{N}\) to \(\psi_{1}\),
\[\psi_{N} = \xi_{N},\]
\[U_{N-1N}\psi_{N}+\psi_{N-1} = \xi_{N-1},\]
\[\vdots \vdots\]
\[\sum_{I=N}^{2}U_{1I}\psi_{I}+\psi_{1} = \xi_{1}.\] (127)
Finally, we note how to compute the lower and upper triangular matrices from the matrix \(A_{IJ}\), which is the time-consuming part of the method. The matrix \(A_{IJ}\) is expressed in terms of the diagonal, lower and upper triangular matrices as
\[\begin{array}{lcll}A_{IJ}&=&D_{II}+\sum_{K<I}L_{IK}D_{KK}U_{KJ},&\text{diagonal}\ (I=J),\\ A_{IJ}&=&D_{II}U_{IJ}+\sum_{K<I}L_{IK}D_{KK}U_{KJ},&\text{upper}\ (I<J),\\ A_{IJ}&=&L_{IJ}D_{JJ}+\sum_{K<J}L_{IK}D_{KK}U_{KJ},&\text{lower}\ (I>J).\end{array}\] (128)
Thus, the components of the matrices are obtained in turn by
\[D_{II} = A_{II}-\sum_{J<I}L_{IJ}D_{JJ}U_{JI},\] (129)
\[U_{IJ} = \frac{1}{D_{II}}\left(A_{IJ}-\sum_{K<I}L_{IK}D_{KK}U_{KJ}\right),\] (130)
\[L_{IJ} = \frac{1}{D_{JJ}}\left(A_{IJ}-\sum_{K<J}L_{IK}D_{KK}U_{KJ}\right).\] (131)
Although the LDU decomposition allows us to solve general elliptic PDEs numerically, it requires a large computational cost in many cases.
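To make the procedure concrete, the following is a minimal sketch in C++ (the sample codes of Appendix C are also provided in C++). The \(4\times 4\) matrix and right-hand side are arbitrary test data chosen only for illustration, not taken from any problem in these notes; the code performs the factorization of Eqs. (129)-(131) followed by the forward and backward substitutions of Eqs. (124) and (126).

```cpp
// ldu_demo.cpp -- minimal illustration of the LDU solver described in this
// Appendix. The 4x4 matrix and right-hand side are arbitrary test data.
// Compile with e.g.:  g++ -O2 -o ldu_demo ldu_demo.cpp
#include <cstdio>
#include <vector>

int main() {
  const int N = 4;
  std::vector<std::vector<double>> A = {
      {4, 1, 0, 0},
      {1, 4, 1, 0},
      {0, 1, 4, 1},
      {0, 0, 1, 4}};                       // A_IJ (a simple tridiagonal example)
  std::vector<double> b = {1, 2, 3, 4};    // b_I

  std::vector<std::vector<double>> L(N, std::vector<double>(N, 0.0));
  std::vector<std::vector<double>> U(N, std::vector<double>(N, 0.0));
  std::vector<double> D(N, 0.0);

  // LDU factorization, Eqs. (129)-(131), computed row by row.
  for (int i = 0; i < N; ++i) {
    L[i][i] = U[i][i] = 1.0;
    for (int j = 0; j < i; ++j) {          // lower part, Eq. (131)
      double s = A[i][j];
      for (int k = 0; k < j; ++k) s -= L[i][k] * D[k] * U[k][j];
      L[i][j] = s / D[j];
    }
    double d = A[i][i];                    // diagonal, Eq. (129)
    for (int j = 0; j < i; ++j) d -= L[i][j] * D[j] * U[j][i];
    D[i] = d;
    for (int j = i + 1; j < N; ++j) {      // upper part, Eq. (130)
      double s = A[i][j];
      for (int k = 0; k < i; ++k) s -= L[i][k] * D[k] * U[k][j];
      U[i][j] = s / D[i];
    }
  }

  // Forward substitution for the auxiliary vector xi, Eq. (124).
  std::vector<double> xi(N), psi(N);
  for (int i = 0; i < N; ++i) {
    double s = b[i];
    for (int j = 0; j < i; ++j) s -= L[i][j] * D[j] * xi[j];
    xi[i] = s / D[i];
  }

  // Backward substitution for the solution psi, Eq. (126).
  for (int i = N - 1; i >= 0; --i) {
    double s = xi[i];
    for (int j = i + 1; j < N; ++j) s -= U[i][j] * psi[j];
    psi[i] = s;
  }

  for (int i = 0; i < N; ++i) std::printf("psi[%d] = %.10f\n", i, psi[i]);
  return 0;
}
```

For a dense \(N\times N\) matrix the factorization costs \(\mathcal{O}(N^{3})\) operations and \(\mathcal{O}(N^{2})\) memory, which is the origin of the large numerical cost mentioned above; for the sparse matrices arising from finite-difference discretizations, banded or iterative solvers are usually preferable.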
## Appendix B Multi-Grid method
The Multi-Grid method was proposed by R. Fedorenko and N. Bakhvalov and further developed by A. Brandt[73, 74, 75, 76]. The SOR method mentioned in Sec. 5.2.4 has the advantage of reducing the high frequency components of the residual between the exact solution and the numerical solution, because the values near the grid point being updated are used for the next trial guess during the iteration. On the other hand, it takes many iterations to reduce the low frequency modes of the residual with such an iterative method. When we consider grids of different resolution, however, the low frequency modes on the finer grid can become high frequency modes on the coarser grid. The low frequency modes of the residual on the finer grid can therefore be reduced efficiently on the coarser grid. The Multi-Grid method is based on the concept of reducing different frequency modes of the residual on grids of different resolution. It has been implemented by several groups[77, 78, 79].
### Multi-Grid structure
Suppose we have grids of different resolution, whose levels are labeled by \(k\), where a larger \(k\) denotes a finer grid. One can solve the Poisson equation on level \(k\) by any of the iterative methods described in Sec. 5 and obtain the numerical solution,
\[\bigtriangleup^{(k)}\phi^{(k)} = S^{(k)},\] (132)
where \(\phi^{(k)}\) is the numerical solution on the level \(k\). We define the residual on the level \(k\) between \(\phi^{(k)}\) and the exact solution by
\[r^{(k)} = S^{(k)}-\bigtriangleup^{(k)}\phi^{(k)}.\] (133)
#### B.1.1 Lagrange interpolation
In general, quantities such as the residual need to be communicated between the different grid levels. Here we simply use Lagrange interpolation to transfer data from one level to another, defined by
\[F(x) = \sum_{j=0}^{N}F(x_{j})L_{j}(x),\] (134)
\[L_{j}(x) = \prod_{i\neq j}^{N}\frac{x-x_{i}}{x_{j}-x_{i}},\] (135)
where \(F,x_{j},x\) and \(N\) denote the quantity to be interpolated, the coordinate on the level, the location to be interpolated, and the number of grid points to be used by the interpolation, respectively.
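For illustration, Eqs. (134)-(135) can be coded directly as below (in C++; the node locations and data are placeholders chosen only to check the routine).

```cpp
// lagrange_demo.cpp -- direct implementation of Eqs. (134)-(135).
// Compile with e.g.:  g++ -O2 -o lagrange_demo lagrange_demo.cpp
#include <cstdio>
#include <vector>

// Evaluate the Lagrange interpolant through the points (xj[j], Fj[j]) at x.
double lagrange(const std::vector<double>& xj,
                const std::vector<double>& Fj, double x) {
  const int N = static_cast<int>(xj.size());
  double F = 0.0;
  for (int j = 0; j < N; ++j) {
    double Lj = 1.0;                  // L_j(x) = prod_{i != j} (x - x_i)/(x_j - x_i)
    for (int i = 0; i < N; ++i)
      if (i != j) Lj *= (x - xj[i]) / (xj[j] - xj[i]);
    F += Fj[j] * Lj;                  // F(x) = sum_j F(x_j) L_j(x)
  }
  return F;
}

int main() {
  // Third-order (four-point) interpolation of F(x) = x^3, which it reproduces exactly.
  std::vector<double> xj = {0.0, 1.0, 2.0, 3.0};
  std::vector<double> Fj = {0.0, 1.0, 8.0, 27.0};
  std::printf("F(1.5) = %.10f (exact: 3.375)\n", lagrange(xj, Fj, 1.5));
  return 0;
}
```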
#### B.1.2 Restriction operator
After we obtain the solution on the finer grid \(k\), we transfer the information of the solution from the finer grid \(k\) to the coarser grid \(k-1\). Here we use a second-order discretization scheme and choose third-order Lagrange interpolation. We define the modified source term on the coarser level \(k-1\) from the information of the solution on the finer grid \(k\) by
\[\phi^{(k-1)}_{c} = \mathcal{R}^{k-1}_{k}\phi^{(k)},\] (136)
\[r^{(k-1)} = \mathcal{R}^{k-1}_{k}r^{(k)},\] (137)
\[S^{(k-1)} \equiv \bigtriangleup^{(k-1)}\phi^{(k-1)}_{c}+r^{(k-1)}=\bigtriangleup^{(k-1)}\mathcal{R}^{k-1}_{k}\phi^{(k)}+\mathcal{R}^{k-1}_{k}\left[S^{(k)}-\bigtriangleup^{(k)}\phi^{(k)}\right],\] (138)
\[{\rm d}\phi^{(k-1)} = \phi^{(k-1)}-\phi^{(k-1)}_{c},\] (139)
where \(\mathcal{R}^{k-1}_{k}\) denotes the restriction operator to the coarser grid \(k-1\) and \(\phi^{(k-1)}_{c}\) denotes the smoothed solution obtained by the restriction operator. Roughly speaking, the modified source term \(S^{(k-1)}\) consists of the source on the level \(k\), smoothed by the restriction, plus a correction due to the difference of the Laplacian operator between the two levels. Then, we obtain the numerical solution \(\phi^{(k-1)}\) on the level \(k-1\) by solving the Poisson equation with the modified source term.
#### B.1.3 Prolongation operator
The solution obtained with the modified source term on the coarser level \(k-1\) is then brought back to the finer level \(k\). This communication is also done by third-order Lagrange interpolation.
\[\phi^{(k)}_{c} = \mathcal{P}^{k}_{k-1}\phi^{(k-1)},\] (140)
\[{\rm d}\phi^{(k)}_{c} = \mathcal{P}^{k}_{k-1}{\rm d}\phi^{(k-1)}=\mathcal{P}^{k}_{k-1} \left[\phi^{(k-1)}-\mathcal{R}^{k-1}_{k}\phi^{(k)}\right],\] (141)
\[\phi^{(k)}_{m} \equiv \phi^{(k)}+{\rm d}\phi^{(k)}_{c}=\phi^{(k)}+\mathcal{P}^{k}_{k-1} \left[\phi^{(k-1)}-\mathcal{R}^{k-1}_{k}\phi^{(k)}\right],\] (142)
\[{\rm d}\phi^{(k)} \equiv \phi^{(k)}_{m}-\phi^{(k)}_{c},\quad\phi^{(k)}=\phi^{(k)}_{m},\] (143)
where \(\mathcal{P}^{k}_{k-1}\) denotes the prolongation operator and \(\phi^{(k)}_{m}\) denotes the solution on the level \(k\) modified by the coarser grid \(k-1\). The modification is done by Eq. (142).
#### B.1.4 Cycle of the Multi-Grid method
There are several ways of deciding the order in which the levels are visited. Fig. 5 shows, as examples, the difference between the V-cycle and the W-cycle. Here we choose the V-cycle because it is easier to implement in the code. We apply the restriction operator before computing on the coarser level and the prolongation operator before computing on the finer level. This cycle is repeated until the error of the Poisson equation reaches the required tolerance.
[FIGURE:A2.F5][ENDFIGURE]
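To illustrate the structure of the cycle, the following is a minimal, self-contained V-cycle for the one-dimensional model problem \(\mathrm{d}^{2}u/\mathrm{d}x^{2}=S\) with homogeneous Dirichlet boundaries, written in C++. It uses the standard linear correction scheme (Gauss-Seidel smoothing, full-weighting restriction, linear prolongation), which is simpler than, but in the same spirit as, the modified-source transfer of Eqs. (136)-(143); all function and variable names below are ours and do not appear in the sample codes.

```cpp
// vcycle_demo.cpp -- schematic Multi-Grid V-cycle for the 1D model problem
//   d^2 u / dx^2 = S(x)  on [0,1],  u(0) = u(1) = 0,
// discretized with second-order finite differences on 2^L+1 points.
// Compile with e.g.:  g++ -O2 -o vcycle_demo vcycle_demo.cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

// A few Gauss-Seidel sweeps: efficiently damps the high-frequency error.
void smooth(Vec& u, const Vec& S, double h, int sweeps) {
  for (int s = 0; s < sweeps; ++s)
    for (std::size_t j = 1; j + 1 < u.size(); ++j)
      u[j] = 0.5 * (u[j - 1] + u[j + 1] - h * h * S[j]);
}

// Residual r = S - d^2u/dx^2 on the current grid (zero on the boundary).
Vec residual(const Vec& u, const Vec& S, double h) {
  Vec r(u.size(), 0.0);
  for (std::size_t j = 1; j + 1 < u.size(); ++j)
    r[j] = S[j] - (u[j - 1] - 2.0 * u[j] + u[j + 1]) / (h * h);
  return r;
}

// Full-weighting restriction to the next coarser grid.
Vec restrict_fw(const Vec& fine) {
  Vec coarse((fine.size() - 1) / 2 + 1, 0.0);
  for (std::size_t J = 1; J + 1 < coarse.size(); ++J)
    coarse[J] = 0.25 * fine[2 * J - 1] + 0.5 * fine[2 * J] + 0.25 * fine[2 * J + 1];
  return coarse;
}

// Linear prolongation back to the finer grid.
Vec prolong(const Vec& coarse, std::size_t nfine) {
  Vec fine(nfine, 0.0);
  for (std::size_t J = 0; J + 1 < coarse.size(); ++J) {
    fine[2 * J] = coarse[J];
    fine[2 * J + 1] = 0.5 * (coarse[J] + coarse[J + 1]);
  }
  fine[nfine - 1] = coarse.back();
  return fine;
}

// One V-cycle: pre-smooth, restrict the residual, solve the coarse-grid
// error equation recursively, prolong the correction back, post-smooth.
void vcycle(Vec& u, const Vec& S, double h) {
  if (u.size() <= 3) { smooth(u, S, h, 50); return; }  // coarsest level
  smooth(u, S, h, 3);                                  // pre-smoothing
  Vec rc = restrict_fw(residual(u, S, h));             // restricted residual
  Vec ec(rc.size(), 0.0);                              // coarse error: d^2 e/dx^2 = r
  vcycle(ec, rc, 2.0 * h);
  Vec e = prolong(ec, u.size());
  for (std::size_t j = 0; j < u.size(); ++j) u[j] += e[j];
  smooth(u, S, h, 3);                                  // post-smoothing
}

int main() {
  const int n = 129;                                   // 2^7 intervals on [0,1]
  const double h = 1.0 / (n - 1);
  Vec u(n, 0.0), S(n, 0.0);
  for (int j = 0; j < n; ++j) S[j] = 12.0 * (j * h) * (j * h);  // source 12 x^2
  for (int cycle = 1; cycle <= 10; ++cycle) {
    vcycle(u, S, h);
    double rmax = 0.0;
    for (double rj : residual(u, S, h)) rmax = std::max(rmax, std::fabs(rj));
    std::printf("cycle %2d   max|residual| = %.3e\n", cycle, rmax);
  }
  // The converged solution agrees with the exact solution x^4 - x of this
  // Dirichlet problem up to the O(h^2) discretization error.
  return 0;
}
```

Each V-cycle typically reduces the residual by a roughly grid-independent factor, which is the practical advantage over using a single-grid iteration alone.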
### Code test
Let us consider the same test problem as in Sec. 6.1.2. In the 3D problem, we impose the boundary condition at large distance by
\[0 = \frac{{\rm d}}{{\rm d}r}(r\Phi)=\Phi+r\frac{{\rm d}\Phi}{{\rm d}r }=\Phi+x\frac{\partial\Phi}{\partial x}+y\frac{\partial\Phi}{\partial y}+z \frac{\partial\Phi}{\partial z}.\] (144)
We note that the boundary values of the finer grid are given by the interpolation. Fig. 6 shows the results on the x-axis obtained by solving the Poisson equation with the source (110) by the Multi-Grid method. The solution, including the boundary, converges to the analytical solution as the iteration proceeds.
[FIGURE:A2.F6][ENDFIGURE]
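The condition (144) simply states that \(r\Phi\) is constant in the radial direction near the outer boundary. The snippet below illustrates, in a 1D radial toy setting with made-up array names and values, the simplest way to impose it on the outermost grid point; one possible 3D implementation instead combines one-sided differences in \(x\), \(y\) and \(z\) as written in Eq. (144).

```cpp
// falloff_bc_demo.cpp -- toy 1D radial illustration of the outer boundary
// condition (144), d(r*Phi)/dr = 0. All names and values are illustrative.
#include <cstdio>
#include <vector>

int main() {
  const int N = 100;
  const double dr = 0.1;
  std::vector<double> r(N + 1), Phi(N + 1, 0.0);
  for (int i = 0; i <= N; ++i) r[i] = (i + 0.5) * dr;

  // ... interior update of Phi would go here (e.g. one Jacobi or SOR sweep) ...
  Phi[N - 1] = 1.0 / r[N - 1];     // pretend the interior has relaxed to ~ 1/r

  // Impose d(r*Phi)/dr = 0 at the outer boundary: keep r*Phi constant in the
  // radial direction, so the boundary value is the last interior value rescaled.
  Phi[N] = Phi[N - 1] * r[N - 1] / r[N];

  std::printf("r*Phi at N-1: %.6f   at N: %.6f\n",
              r[N - 1] * Phi[N - 1], r[N] * Phi[N]);
  return 0;
}
```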
## Appendix C List of Sample codes
We prepared several sample codes for the lecture at the NR/HEP2: Spring School at Instituto Superior Técnico in Lisbon, and they are available online. In this section, we show the simplest code to solve an elliptic PDE and the sample code parallelized with OpenMP. An introduction to parallel computing can be found in Ref. [80]. Here is the list of sample codes, available at http://blackholes.ist.utl.pt/nrhep2/?page=material:
1. jacobi_test1.f90 This is the code for solving the problem described in Sec. 6.1.1 with the Jacobi method (see Sec. 5.2.1).
2. gs_test1.f90 This is the code for solving the problem described in Sec. 6.1.1 with the Gauss-Seidel method (see Sec. 5.2.3).
3. sor_test1.f90 This is the code for solving the problem described in Sec. 6.1.1 with the SOR method (see Sec. 5.2.4).
4. jacobi_test2.f90 This is the code for solving the problem described in Sec. 6.1.2 with the Jacobi method (see Sec. 5.2.1).
5. sor_AHF_SBH_ISO.f90 This is the code for solving the AH of a Schwarzschild BH with the SOR method (see Sec. 5.2.4).
6. sor_AHF_KBH_ISO.f90 This is the code for solving the AH of a Kerr BH in isotropic coordinates, described in Sec. 6.1.3, with the SOR method (see Sec. 5.2.4).
7. sor_AHF_KBH_BL.f90 This is the code for solving the AH of a Kerr BH in Boyer-Lindquist coordinates, described in Sec. 6.1.3, with the SOR method (see Sec. 5.2.4).
8. jacobi_openMP.f90 This is the code for solving the problem described in Sec. 6.1.1 with the Jacobi method (see Sec. 5.2.1) using multiple processors with OpenMP.
9. jacobi_test1.C This is the code written in C++ for solving the problem described in Sec. 6.1.1 with the Jacobi method (see Sec. 5.2.1).
10. sor_test1.C This is the code written in C++ for solving the problem described in Sec. 6.1.1 with the SOR method (see Sec. 5.2.4).
11. jacobi_openMP.C This is the code written in C++ for solving the problem described in Sec. 6.1.1 with the Jacobi method (see Sec. 5.2.1) using multiple processors with OpenMP.
### jacobi_test1.f90
!@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
!
! Jacobi method for TEST PROBLEM 1
!
!--------------------------------------------------------------
!
! Sample Code for Lecture in NR/HEP2: Spring School
!
! Coded by Hirotada Okawa
!
!==============================================================
! How to compile and use this program in terminal(bash)
!==============================================================
! $ gfortran -O2 -ffast-math -o j_test1 jacobi_test1.f90
! $ ./j_test1
!--------------------------------------------------------------
!@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

module inc_coord
  implicit none

!--------------------------------------------------------------
! Grid points
!--------------------------------------------------------------
  integer,parameter :: jli=1
  integer,parameter :: jui=100
  integer,parameter :: jlb=jli-1
  integer,parameter :: jub=jui+1

!--------------------------------------------------------------
! Maximum/Minimum Coordinates (physical)
!--------------------------------------------------------------
  real(8),parameter :: xlower=0.
  real(8),parameter :: xupper=1.
  real(8),parameter :: dx=(xupper-xlower)/dble(jub-jli)
  real(8),parameter :: dxi=1.d0/dx

!--------------------------------------------------------------
! Array for Coordinates (physical)
!--------------------------------------------------------------
  real(8),dimension(jlb:jub) :: x

!--------------------------------------------------------------
! Variables to solve
!--------------------------------------------------------------
  real(8),dimension(jlb:jub) :: h, hprev

!--------------------------------------------------------------
! Source term for Poisson equation
!--------------------------------------------------------------
  real(8),dimension(jlb:jub) :: src

end module inc_coord

program main
  use inc_coord
  implicit none

!--------------------------------------------------------------
! Definition of parameters
!--------------------------------------------------------------
  integer,parameter :: stepmax=1d8      ! Loop step maximum
  real(8),parameter :: errormax=1.d-10  ! Error to exit loop
  real(8),parameter :: fpar=1.d0        ! for next guess

!--------------------------------------------------------------
! Definition of temporary variables to use
!--------------------------------------------------------------
  integer :: j, step
  real(8) :: xx
  real(8) :: errortmp,vtmp

!--------------------------------------------------------------
! Output File
!--------------------------------------------------------------
  open(200,file='h_j.dat')

!--------------------------------------------------------------
! Initialization
!--------------------------------------------------------------
  do j=jlb,jub
     x(j)     = xlower +(dble(j)-0.5d0)*dx  ! Coordinates
     h(j)     = 1.d0                        ! variable to solve
     hprev(j) = h(j)                        ! previous variable
  end do

!**************************************************************
! Main Loop
!**************************************************************
  do step=0,stepmax

!--------------------------------------------------------------
! Preserve data of previous step
!--------------------------------------------------------------
     do j=jlb,jub
        hprev(j) = h(j)
     end do

!--------------------------------------------------------------
! Jacobi Method
!--------------------------------------------------------------
     do j=jli,jui

!==============================================================
! Definition of Source term
!==============================================================
        xx     = x(j)
        src(j) = xx**2*12.

        h(j) = 0.5d0*( hprev(j+1) +hprev(j-1) -dx**2*src(j) )
     end do

!==============================================================
! Impose Boundary Condition
!==============================================================
     h(jub)=x(jub)**4  ! Dirichlet Boundary Condition
     h(jlb)=h(jli)     ! Neumann Boundary Condition

!--------------------------------------------------------------
! Check if values converge
!--------------------------------------------------------------
     errortmp=0.d0
     vtmp=0.d0
     do j=jli,jui
        errortmp = errortmp +(h(j)-hprev(j))**2*dx**2
        vtmp     = vtmp + dx**2
     end do
     errortmp = dsqrt(errortmp/vtmp)
     if( (errortmp.le.errormax) .and. (step.gt.1) ) exit

!--------------------------------------------------------------
! Next Guess
!--------------------------------------------------------------
     do j=jlb,jub
        h(j) = fpar*h(j) +(1.d0-fpar)*hprev(j)
     end do

     write(*,*) "Step=",step,"Error=",errortmp
  end do

!--------------------------------------------------------------
! Print Data
!--------------------------------------------------------------
  do j=jlb,jub
     write(200,'(4e16.8e2)') x(j),h(j),hprev(j),src(j)
  end do

!--------------------------------------------------------------
! End of Program
!--------------------------------------------------------------
  write(*,*) "End of Run",errortmp
  close(200)

end program main
### jacobi_openMP.f90
!@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
!
! Jacobi method for TEST PROBLEM 1
! parallelized with OpenMP
!
!--------------------------------------------------------------------
!
! Sample Code for Lecture in NR/HEP2: Spring School
!
! Coded by Hirotada Okawa
!
!====================================================================
! How to compile and use this program in terminal(bash)
!====================================================================
! $ gfortran -O2 -ffast-math -fopenmp -o j_omp jacobi_openMP.f90
! $ export OMP_NUM_THREADS=2
! $ ./j_omp
!--------------------------------------------------------------------
! OMP_NUM_THREADS : Change the number of cores you want to use.
!--------------------------------------------------------------------
!@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

module inc_coord
  implicit none

!--------------------------------------------------------------
! Grid points
!--------------------------------------------------------------
  integer,parameter :: jli=1
  integer,parameter :: jui=100
  integer,parameter :: jlb=jli-1
  integer,parameter :: jub=jui+1

!--------------------------------------------------------------
! Maximum/Minimum Coordinates (physical)
!--------------------------------------------------------------
  real(8),parameter :: xlower=0.
  real(8),parameter :: xupper=1.
  real(8),parameter :: dx=(xupper-xlower)/dble(jub-jli)
  real(8),parameter :: dxi=1.d0/dx

!--------------------------------------------------------------
! Array for Coordinates (physical)
!--------------------------------------------------------------
  real(8),dimension(jlb:jub) :: x

!--------------------------------------------------------------
! Variables to solve
!--------------------------------------------------------------
  real(8),dimension(jlb:jub) :: h, hprev

!--------------------------------------------------------------
! Source term for Poisson equation
!--------------------------------------------------------------
  real(8),dimension(jlb:jub) :: src

end module inc_coord

program main
  use inc_coord
  implicit none

!--------------------------------------------------------------
! Definition of parameters
!--------------------------------------------------------------
  integer,parameter :: stepmax=1d8      ! Loop step maximum
  real(8),parameter :: errormax=1.d-10  ! Error to exit loop
  real(8),parameter :: fpar=1.d0        ! for next guess

!--------------------------------------------------------------
! Definition of temporary variables to use
!--------------------------------------------------------------
  integer :: j, step
  real(8) :: xx
  real(8) :: errortmp,vtmp

!--------------------------------------------------------------
! Output File
!--------------------------------------------------------------
  open(200,file='h_o.dat')

!--------------------------------------------------------------
! OpenMP threads fork
!--------------------------------------------------------------
!$OMP PARALLEL DEFAULT(SHARED) PRIVATE(j,xx,src)

!--------------------------------------------------------------
! Initialization
!--------------------------------------------------------------
!$OMP DO
  do j=jlb,jub
     x(j)     = xlower +(dble(j)-0.5d0)*dx  ! Coordinates
     h(j)     = 1.d0                        ! variable to solve
     hprev(j) = h(j)                        ! previous variable
  end do
!$OMP END DO

!**************************************************************
! Main Loop
!**************************************************************
  do step=0,stepmax

!--------------------------------------------------------------
! Preserve data of previous step
!--------------------------------------------------------------
!$OMP DO
     do j=jlb,jub
        hprev(j) = h(j)
     end do
!$OMP END DO

!--------------------------------------------------------------
! Jacobi Method
!--------------------------------------------------------------
!$OMP DO
     do j=jli,jui

!==============================================================
! Definition of Source term
!==============================================================
        xx     = x(j)
        src(j) = xx**2*12.

        h(j) = 0.5d0*( hprev(j+1) +hprev(j-1) -dx**2*src(j) )
     end do
!$OMP END DO

!==============================================================
! Impose Boundary Condition
!==============================================================
!$OMP SINGLE
     h(jub)=x(jub)**4  ! Dirichlet Boundary Condition
     h(jlb)=h(jli)     ! Neumann Boundary Condition
!$OMP END SINGLE

!--------------------------------------------------------------
! Check if values converge
!--------------------------------------------------------------
     errortmp=0.d0
     vtmp=0.d0
!$OMP BARRIER
!$OMP DO REDUCTION(+:vtmp,errortmp)
     do j=jli,jui
        errortmp = errortmp +(h(j)-hprev(j))*(h(j)-hprev(j))*dx**2
        vtmp     = vtmp + dx**2
     end do
!$OMP END DO

!$OMP SINGLE
     errortmp = dsqrt(errortmp/vtmp)
!$OMP END SINGLE
     if( (errortmp.le.errormax) .and. (step.gt.1) ) exit

!--------------------------------------------------------------
! Next Guess
!--------------------------------------------------------------
!$OMP DO
     do j=jlb,jub
        h(j) = fpar*h(j) +(1.d0-fpar)*hprev(j)
     end do
!$OMP END DO

!$OMP SINGLE
     write(*,*) "Step=",step,"Error=",errortmp
!$OMP END SINGLE
  end do

!--------------------------------------------------------------
! OpenMP threads join
!--------------------------------------------------------------
!$OMP END PARALLEL

!--------------------------------------------------------------
! Print Data
!--------------------------------------------------------------
  do j=jlb,jub
     write(200,'(4e16.8e2)') x(j),h(j),hprev(j),src(j)
  end do

!--------------------------------------------------------------
! End of Program
!--------------------------------------------------------------
  write(*,*) "End of Run",errortmp
  close(200)

end program main
## References
* [1] B. Sathyaprakash and B. Schutz, _Living Rev.Rel._**12**, 2 (2009), arXiv:0903.0338 [gr-qc].
* [2] F. Pretorius (2007), arXiv:0710.1338 [gr-qc].
* [3] L. Blanchet, _Living Rev.Rel._**5**, 3 (2002), arXiv:gr-qc/0202016 [gr-qc].
* [4] A. Buonanno and T. Damour, _Phys.Rev._**D59**, 084006 (1999), arXiv:gr-qc/9811091 [gr-qc].
* [5] B. Carter, 136 (1997), arXiv:gr-qc/9712038 [gr-qc].
* [6] K. D. Kokkotas and B. G. Schmidt, _Living Rev.Rel._**2**, 2 (1999), arXiv:gr-qc/9909058 [gr-qc].
* [7] E. Berti, V. Cardoso and A. O. Starinets, _Class.Quant.Grav._**26**, 163001 (2009), arXiv:0905.2975 [gr-qc].
* [8] P. Pani (2013), arXiv:1305.6759 [gr-qc].
* [9] M. Boyle, D. A. Brown, L. E. Kidder, A. H. Mroue, H. P. Pfeiffer _et al._, _Phys.Rev._**D76**, 124038 (2007), arXiv:0710.0158 [gr-qc].
* [10] M. Shibata and K. Taniguchi, _Living Rev.Rel._**14**, 6 (2011).
* [11] J. A. Faber and F. A. Rasio, _Living Rev.Rel._**15**, 8 (2012), arXiv:1204.3858 [gr-qc].
* [12] V. Cardoso, L. Gualtieri, C. Herdeiro, U. Sperhake, P. M. Chesler _et al._, _Class.Quant.Grav._**29**, 244001 (2012), arXiv:1201.5118 [hep-th].
* [13] N. Arkani-Hamed, S. Dimopoulos and G. Dvali, _Phys.Lett._**B429**, 263 (1998), arXiv:hep-ph/9803315 [hep-ph].
* [14] I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, _Phys.Lett._**B436**, 257 (1998), arXiv:hep-ph/9804398 [hep-ph].
* [15] L. Randall and R. Sundrum, _Phys.Rev.Lett._**83**, 4690 (1999), arXiv:hep-th/9906064 [hep-th].
* [16] L. Randall and R. Sundrum, _Phys.Rev.Lett._**83**, 3370 (1999), arXiv:hep-ph/9905221 [hep-ph].
* [17] S. Dimopoulos and G. L. Landsberg, _Phys.Rev.Lett._**87**, 161602 (2001), arXiv:hep-ph/0106295 [hep-ph].
* [18] S. B. Giddings and S. D. Thomas, _Phys.Rev._**D65**, 056010 (2002), arXiv:hep-ph/0106219 [hep-ph].
* [19] M. O. P. Sampaio (2013), arXiv:1306.0903 [gr-qc].
* [20] R. Penrose (1974).
* [21] D. M. Eardley and S. B. Giddings, _Phys.Rev._**D66**, 044011 (2002), arXiv:gr-qc/0201034 [gr-qc].
* [22] M. Shibata, H. Okawa and T. Yamamoto, _Phys.Rev._**D78**, 101501 (2008), arXiv:0810.4735 [gr-qc].
* [23] U. Sperhake, V. Cardoso, F. Pretorius, E. Berti and J. A. Gonzalez, _Phys.Rev.Lett._**101**, 161101 (2008), arXiv:0806.1738 [gr-qc].
* [24] U. Sperhake, V. Cardoso, F. Pretorius, E. Berti, T. Hinderer _et al._, _Phys.Rev.Lett._**103**, 131102 (2009), arXiv:0907.1252 [gr-qc].
* [25] W. E. East and F. Pretorius, _Phys.Rev.Lett._**110**, 101101 (2013), arXiv:1210.0443 [gr-qc].
* [26] H. Witek (2013), (in preparation) Introduction to NR methods in higher dimensions. Based on a series of lectures given at the NR/HEP2 Spring School.
* [27] H. Yoshino and M. Shibata, _Phys.Rev._**D80**, 084025 (2009), arXiv:0907.2760 [gr-qc].
* [28] H. Witek, M. Zilhao, L. Gualtieri, V. Cardoso, C. Herdeiro _et al._, _Phys.Rev._**D82**, 104014 (2010), arXiv:1006.3081 [gr-qc].
* [29] H. Witek, V. Cardoso, L. Gualtieri, C. Herdeiro, U. Sperhake _et al._, _Phys.Rev._**D83**, 044017 (2011), arXiv:1011.0742 [gr-qc].
* [30] M. Zilhao, M. Ansorg, V. Cardoso, L. Gualtieri, C. Herdeiro _et al._, _Phys.Rev._**D84**, 084039 (2011), arXiv:1109.2149 [gr-qc].
* [31] H. Okawa, K.-i. Nakao and M. Shibata, _Phys.Rev._**D83**, 121501 (2011), arXiv:1105.3331 [gr-qc].
* [32] M. Maliborski and A. Rostworowski (2013), arXiv:1308.1235 [gr-qc].
* [33] M. Shibata and H. Yoshino, _Phys.Rev._**D81**, 104035 (2010), arXiv:1004.4970 [gr-qc].
* [34] L. Lehner and F. Pretorius (2011), arXiv:1106.5184 [gr-qc].
* [35] M. Zilhão and F. Löffler (2013), arXiv:1305.5299 [gr-qc].
* [36] The cactus code.
* [37] E. Gourgoulhon, P. Grandclément, J.-A. Marck and J. Novak, Lorene: Langage objet pour la relativité numérique.
* [38] Einstein Toolkit: Open software for relativistic astrophysics.
* [39] R. L. Arnowitt, S. Deser and C. W. Misner, _Gen.Rel.Grav._**40**, 1997 (2008), arXiv:gr-qc/0405109 [gr-qc].
* [40] C. W. Misner, K. Thorne and J. Wheeler (1974).
* [41] R. M. Wald (1984).
* [42] J. W. York, Jr., Kinematics and dynamics of general relativity, in _Sources of Gravitational Radiation_, ed. L. L. Smarr (1979), pp. 83–126.
* [43] S. Frittelli, _Phys. Rev._**D55**, 5992 (1997).
* [44] G. Yoneda and H.-a. Shinkai, _Phys.Rev._**D63**, 124019 (2001), arXiv:gr-qc/0103032 [gr-qc].
* [45] M. Alcubierre (2008).
* [46] D. Hilditch (2013), (in preparation) Well posedness of evolution PDEs. Based on a series of lectures given at the NR/HEP2 Spring School.
* [47] M. Shibata and T. Nakamura, _Phys.Rev._**D52**, 5428 (1995).
* [48] T. W. Baumgarte and S. L. Shapiro, _Phys.Rev._**D59**, 024007 (1999), arXiv:gr-qc/9810065 [gr-qc].
* [49] C. Bona, T. Ledvinka, C. Palenzuela and M. Zacek, _Phys.Rev._**D67**, 104005 (2003), arXiv:gr-qc/0302083 [gr-qc].
* [50] G. B. Cook, _Living Rev.Rel._**3**, 5 (2000), arXiv:gr-qc/0007085 [gr-qc].
* [51] E. Gourgoulhon (2007), arXiv:gr-qc/0703035 [GR-QC].
* [52] K. Schwarzschild, _Sitzungsber.Preuss.Akad.Wiss.Berlin (Math.Phys.)_**1916**, 189 (1916), arXiv:physics/9905030 [physics].
* [53] A. Einstein and N. Rosen, _Phys.Rev._**48**, 73 (1935).
* [54] J. M. Bowen and J. York, James W., _Phys.Rev._**D21**, 2047 (1980).
* [55] D. R. Brill and R. W. Lindquist, _Phys.Rev._**131**, 471 (1963).
* [56] S. Brandt and B. Bruegmann, 738 (1997), arXiv:gr-qc/9711015 [gr-qc].
* [57] R. P. Kerr, _Phys.Rev.Lett._**11**, 237 (1963).
* [58] R. H. Boyer and R. W. Lindquist, _J.Math.Phys._**8**, 265 (1967).
* [59] S. R. Brandt and E. Seidel, _Phys.Rev._**D54**, 1403 (1996), arXiv:gr-qc/9601010 [gr-qc].
* [60] S. Brandt, K. Camarda and E. Seidel, 741 (1997), arXiv:gr-qc/9711016 [gr-qc].
* [61] S. Hawking and G. Ellis (1973).
* [62] J. Thornburg, _Living Rev.Rel._**10**, 3 (2007), arXiv:gr-qc/0512169 [gr-qc].
* [63] M. Kriele and S. A. Hayward, _Journal of Mathematical Physics_**38**, 1593 (1997).
* [64] M. Shibata, _Phys.Rev._**D55**, 2002 (1997).
* [65] M. Shibata and K. Uryu, _Phys.Rev._**D62**, 087501 (2000).
* [66] J. Thornburg, _Class.Quant.Grav._**21**, 743 (2004), arXiv:gr-qc/0306056 [gr-qc].
* [67] M. Ansorg, B. Bruegmann and W. Tichy, _Phys.Rev._**D70**, 064011 (2004), arXiv:gr-qc/0404056 [gr-qc].
* [68] M. Ansorg, _Class.Quant.Grav._**24**, S1 (2007), arXiv:gr-qc/0612081 [gr-qc].
* [69] P. Grandclement and J. Novak (2007), arXiv:0706.2286 [gr-qc].
* [70] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery (2007).
* [71] R. S. Varga, **27** (2009).
* [72] L. A. Hageman and D. M. Young (2012).
* [73] R. P. Fedorenko, _USSR Computational Mathematics and Mathematical Physics_**1**, 1092 (1962).
* [74] R. P. Fedorenko, _USSR Computational Mathematics and Mathematical Physics_**4**, 227 (1964).
* [75] N. S. Bakhvalov, _USSR Computational Mathematics and Mathematical Physics_**6**, 101 (1966).
* [76] A. Brandt, _Mathematics of computation_**31**, 333 (1977).
* [77] M. Choptuik and W. G. Unruh, _General Relativity and Gravitation_**18**, 813 (August 1986).
* [78] S. H. Hawley and R. A. Matzner, _Class.Quant.Grav._**21**, 805 (2004), arXiv:gr-qc/0306122 [gr-qc].
* [79] J. D. Brown and L. L. Lowe, _Journal of Computational Physics_**209**, 582 (2005).
* [80] S. Almeida (2013), (in preparation) Introduction to high performance computing. Based on a series of lectures given at the NR/HEP2 Spring School.
|
1907.11604 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 138944,
"num_imgs": 0,
"llama3_tokens_count": 50297
} | [] | # Minimizers for the thin one-phase free boundary problem
Max Engelstein
ME (University of Minnesota Twin Cities, Minnesota): maxe@mit.edu
Aapo Kauranen
AK (Universitat Autònoma de Barcelona-BGSMath, Catalonia): aapo.p.kauranen@jyu.fi
Martí Prats
MP (Universitat Autònoma de Barcelona, Catalonia): mprats@mat.uab.cat
Georgios Sakellaris
GS (Universitat Autònoma de Barcelona, Catalonia): gsakellaris@mat.uab.cat
Yannick Sire
YS (Johns Hopkins University, Maryland): sire@math.jhu.edu
###### Abstract
We consider the “thin one-phase” free boundary problem, associated to minimizing a weighted Dirichlet energy of the function in \(\mathbb{R}^{n+1}_{+}\) plus the area of the positivity set of that function in \(\mathbb{R}^{n}\). We establish full regularity of the free boundary for dimensions \(n\leq 2\), prove almost everywhere regularity of the free boundary in arbitrary dimension and provide content and structure estimates on the singular set of the free boundary when it exists. All of these results hold for the full range of the relevant weight.
While our results are typical for the calculus of variations, our approach does not follow the standard one first introduced in [1]. Instead, the nonlocal nature of the distributional measure associated to a minimizer necessitates arguments which are less reliant on the underlying PDE.
###### Acknowledgements
M.E. was partially supported by an NSF postdoctoral fellowship, NSF DMS 1703306 and by David Jerison’s grant DMS 1500771. A.K. acknowledges Financial support from the Spanish Ministry of Economy and Competitiveness, through the María de Maeztu Programme for Units of Excellence in R&D (MDM-2014-0445). M.P. was funded by the European Research Council under the grant agreement 307179-GFTIPFD. G.S. has received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No 665919. A.K., M.P. and G.S. were also partially funded by 2017-SGR-0395 (Catalonia) and MTM-2016-77635-P (MINECO, Spain). Y.S. is partially supported by the Simons foundation.
M.E. would also like to thank Nick Edelen for many fruitful conversations regarding the quantitative stratification and rectifiable Reifenberg framework.
A.K., M.P., G.S. would like to thank Xavier Cabré, Tomás Sanz, Matteo Cozzi, Albert Mas, Maria del Mar González, Luis Silvestre and Stefano Vita for some conversations around [36]. They would also like to thank Mihalis Mourgoglou for some conversations regarding the degenerate elliptic measure.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Compactness of minimizers
  * 3.1 Caccioppoli Inequality
  * 3.2 Compactness
* 4 Monotonicity formula and some immediate consequences
  * 4.1 Dimension reduction
  * 4.2 Upper semicontinuity
* 5 Measure-theoretic properties
  * 5.1 Finite perimeter
  * 5.2 Energy gap
* 6 Full regularity in \({\mathbb{R}}^{2+1}\)
* 7 Uniform bounds around the free boundary
  * 7.1 Uniform non-degeneracy
  * 7.2 Behavior of the distributional fractional Laplacian
  * 7.3 Uniform Hölder character
  * 7.4 Lower estimates for the distributional fractional Laplacian
* 8 Rectifiability of the singular set
  * 8.1 Quantitative stratification for minimizers to \(\mathcal{J}\)
  * 8.2 The Refined Covering Theorem
  * 8.3 Tools for bad balls: key dichotomy
  * 8.4 Tools for good balls: packing estimates and GMT
* A Relation with the nonlocal Bernoulli problem
* B Uniqueness of extensions
## 1 Introduction
This article is devoted to the study of the regularity properties of a weighted version of the thin one-phase problem. More precisely, we investigate even, nonnegative minimizers of the following functional: denote \(x\in{\mathbb{R}}^{n+1}\) by \(x=(x^{\prime},y)\in{\mathbb{R}}^{n}\times{\mathbb{R}}\), and for \(\beta\in(-1,1)\) we define
\[\mathcal{J}(v,\Omega):=\int_{\Omega}|y|^{\beta}|\nabla v|^{2}\ dx+m(\{v>0\} \cap{\mathbb{R}}^{n}\cap\Omega),\] (1.1)
where \(m\) stands for the \(n\)-dimensional Lebesgue measure. Here, and throughout the paper, the integration is done with respect to the \((n+1)\)-dimensional Lebesgue measure unless stated otherwise. This functional is finite for open sets, \(\Omega\), and functions in the weighted Hilbert space,
\[H^{1}(\beta,\Omega):=\{v\in L^{2}(\Omega;|y|^{\beta}):\nabla v\in L^{2}(\Omega ;|y|^{\beta})\},\]
equipped with the usual weighted norm.
Our main concern is to investigate fine regularity properties of the free boundary of minimizers \(v\) of (1.1), that is the set,
\[F(v):=\partial_{\mathbb{R}^{n}}\left\{v(x,0)>0\right\}\cap\Omega.\]
Since the free boundary lies on a codimension 1 subspace of the ambient space \({\mathbb{R}}^{n+1}\), such a problem is called a thin one-phase free boundary problem. This type of free boundary problem was first investigated by Caffarelli, Roquejoffre and the last author in [7], in relation with the theory of semi-permeable membranes (see, e.g., [13]). As we will describe later, this is an analogue of the classical one-phase problem (also called the Bernoulli problem), but for the fractional Laplacian.
The Bernoulli problem was first treated in a rigorous mathematical way by Alt and Caffarelli in the seminal paper [1]: in the Bernoulli problem we consider minimizers of (1.1) where \(\beta=0\) and the second term is replaced by \(\mathcal{L}^{n+1}(\{v>0\}\cap\Omega)\) (where \(\mathcal{L}^{n+1}\) stands for the Lebesgue measure in \({\mathbb{R}}^{n+1}\)). In particular, for the Bernoulli problem, the free boundary fully sits in the ambient space, \({\mathbb{R}}^{n+1}\). In [1], the authors provided a general strategy to attack this type of problem. Out of necessity we needed to modify this blueprint in several substantial ways (see below for a more detailed comparison). For more information on the one-phase problem (and some of its variants) we refer to the book of Caffarelli and Salsa (and references therein) [8], and to the more recent survey of De Silva, Ferrari and Salsa [18].
As noticed in [7], problem (1.1) is closely related to the standard one-phase free boundary problem, but with the Dirichlet energy replaced by the Gagliardo semi-norm \([u]_{\dot{H}^{\alpha}}\), for \(\alpha=\frac{1-\beta}{2}\in(0,1)\). This connection suggests that the thin one-phase problem is intrinsically a nonlocal problem, even though the energy in (1.1) is clearly local.
### Connection with the fractional one-phase problem
As previously mentioned, the functional \(\mathcal{J}\) introduced by Caffarelli, Roquejoffre and the last author in [7] is a local version of the following nonlocal free boundary problem: given a function \(f\in L^{1}_{\operatorname{loc}}({\mathbb{R}}^{n})\) with suitable decay at infinity, we can define its fractional Laplacian at \(x\in{\mathbb{R}}^{n}\) by
\[(-\Delta)^{\alpha}f(x)=c_{n,\alpha}\,p.v.\int_{{\mathbb{R}}^{n}}\frac{f(x)-f( \xi)}{|x-\xi|^{n+2\alpha}}\,d\xi.\]
At the formal level, we are interested in solutions of the free boundary problem
\[\begin{cases}(-\Delta)^{\alpha}f=0&\mbox{in }\Omega\cap\{f>0\},\\ \partial_{\nu}^{\alpha}f=A&\mbox{on }\Omega\cap F(f),\end{cases}\] (1.2)
where \(\partial_{\nu}^{\alpha}f(x):=\lim_{\Omega\cap\{f>0\}\ni\xi\to x}\frac{f(\xi)-f (x)}{((\xi-x)\cdot\nu(x))^{\alpha}}\) and where \(f\) satisfies a given “Dirichlet boundary condition” on the complement of \(\Omega\).
As in the case of the classical Laplacian (see [1]), we are interested in obtaining equation (1.2) as the Euler-Lagrange equation of a certain functional. Given a locally integrable function \(f\), consider its fractional Sobolev energy
\[\left[f\right]_{\dot{H}^{\alpha}({\mathbb{R}}^{n})}:=\iint_{{\mathbb{R}}^{2n}} \frac{|f(x)-f(\xi)|^{2}}{|x-\xi|^{n+2\alpha}}\,d\xi\,dx.\]
Since we want to study competitors which vary only in a certain domain \(\Omega\), it is natural to consider only the integration region which may suffer variations when changing candidates. Thus, we define the energy
\[J(f,\Omega):=c_{n,\alpha}\iint_{{\mathbb{R}}^{2n}\setminus(\Omega^{c})^{2}} \frac{|f(x)-f(\xi)|^{2}}{|x-\xi|^{n+2\alpha}}\,d\xi\,dx+m(\{f>0\}\cap\Omega).\] (1.3)
We say that \(f\in L^{1}_{\operatorname{loc}}\) is a minimizer of \(J\) in \(\Omega\) if \(J(f,\Omega)\) is finite and \(J(f,\Omega)\leq J(g,\Omega)\) for every \(g\) satisfying that \(f-g\in\dot{H}^{\alpha}({\mathbb{R}}^{n})\) and such that \(f(x)=g(x)\) for almost every \(x\in\Omega^{c}\). We say that \(f\) is a global minimizer if it is a minimizer for every open set \(\Omega\subset{\mathbb{R}}^{n}\). Note that both terms in (1.3) are in competition, since a minimizer of the fractional Sobolev energy in \(\Omega\) is \(\alpha\)-harmonic and, thus, if it is non-negative outside of \(\Omega\) it is strictly positive inside of \(\Omega\), maximizing the second term.
Consider now the Poisson kernel for fixed \(n\in{\mathbb{N}}\) and \(0<\alpha<1\)
\[P_{y}(\xi):=P_{n,\alpha}(\xi,y)=c_{n,\alpha}\frac{|y|^{2\alpha}}{|(\xi,y)|^{n+ 2\alpha}}\quad\quad\mbox{ for every }(\xi,y)\in{\mathbb{R}}^{n}\times{\mathbb{ R}}.\] (1.4)
The Poisson extension of \(f\in L^{1}_{\operatorname{loc}}({\mathbb{R}}^{n})\) is given by
\[u(x^{\prime},y):=f*P_{y}(x^{\prime})=\int_{{\mathbb{R}}^{n}}P_{n,\alpha}(\xi,y )f(x^{\prime}-\xi)\,d\xi\quad\quad\mbox{ for every }(x^{\prime},y)\in{\mathbb{ R}}^{n}\times{\mathbb{R}}.\] (1.5)
By [9], with a convenient choice of the constant one gets
\[\lim_{y\searrow 0}y^{1-2\alpha}u_{y}(x^{\prime},y)=-(-\Delta)^{\alpha}f(x^{ \prime})\]
in every point where \(f\) is regular enough. Moreover, the extension satisfies the localized equation \(\nabla\cdot(|y|^{\beta}\nabla u)=0\) weakly, away from \({\mathbb{R}}^{n}\times\{0\}\). The whole point is that local minimizers of (1.3) can be extended via the previous Poisson kernel \(P_{y}\) to (even) minimizers of (1.1) (see the Appendix for a precise statement). Therefore, the thin one-phase problem appears as a “localization” of the one-phase problem for the fractional Laplacian. Notice that, and this is of major importance for us, this localization technique does not carry over to other types of nonlocal operators besides pure powers of second-order elliptic operators. This is a major drawback of the theory, in the sense that, at the moment, it seems to be impossible to tackle one-phase problems involving more general operators than the fractional Laplacian. The main point is we do not know how to prove any kind of monotonicity for general integral operators.
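Let us record here, for the reader's orientation, the well-known identity behind this extension procedure (see [9]); the statement below is only a guide, with all positive constants depending on \(n\) and \(\alpha\) left untracked. For sufficiently regular \(f\) and \(\beta=1-2\alpha\),
\[\inf\left\{\int_{{\mathbb{R}}^{n+1}}|y|^{\beta}|\nabla v|^{2}\,dx\ :\ v(\cdot,0)=f\right\}\ \approx\ [f]_{\dot{H}^{\alpha}({\mathbb{R}}^{n})},\]
and the infimum is attained by the Poisson extension (1.5). This is the reason why the first term of (1.1) plays the role of the Gagliardo energy appearing in (1.3), and why the argument is restricted to pure powers of the Laplacian.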
This connection between the nonlocal analogue of the Bernoulli problem and our thin one-phase problem allows us to simplify several arguments by working in the purely nonlocal setting. However, this underlying nonlocality is also the reason why several results, which came more easily in the setting of [1], are non-trivial or substantially harder for us. For example, perturbations of solutions need to take into account long range effects which makes classical, local, perturbation arguments much more difficult.
In the paper [7], the authors proved basic properties of the minimizers for the functional \(\mathcal{J}\) such as optimal regularity, non-degeneracy near the free boundary, and positive densities of phases. Also they provided an argument for \(n=2\) showing that Lipschitz free boundaries are \(C^{1}\). A feature of the functional \(\mathcal{J}\) is that the weight \(|y|^{\beta}\) is either degenerate or singular at \(\left\{y=0\right\}\) (except in the case \(\beta=0\)). Such weights belong to the Muckenhoupt class \(A_{2}\) and the seminal paper of Fabes, Kenig and Serapioni [27] investigated regularity issues for elliptic PDEs involving such weights (among other things). After that, [19] proved an \(\varepsilon\)-regularity result and [3] showed the existence of a monotonicity formula for this setting.
In the case \(\beta=0\), the problem is still degenerate in the sense that derivatives near the free boundary blow up. The case \(\beta=0\) has been thoroughly investigated in the series of papers by De Silva, Savin and Roquejoffre [14, 16, 17].
The main goal of our paper is to provide a full picture of the regularity of the free boundary for any power \(\beta\in(-1,1)\), both in terms of measure-theoretic statements and partial (or full) regularity results. From this point of view our contribution is a complement of the paper by De Silva and Savin [17] for \(\beta=0\). It has to be noticed that the standard approach to regularity of Lipschitz free boundaries as developed by Caffarelli (see the monograph [8]) does not seem to work in our setting.
#### Our approach to regularity
In [1] (and many subsequent works), the minimizing property of the solution is used to prove that the distributional Laplacian of that solution is an Ahlfors-regular measure supported on the free boundary. This implies (amongst other things) that the free boundary is a set of (locally) finite perimeter, and thus almost every point on the free boundary has a measure theoretic tangent. One can then work purely with the weak formula (i.e. the analogue of (1.2)) to prove a “flat implies smooth” result which, together with the existence almost everywhere of a measure theoretic tangent, has as a consequence that the free boundary is almost everywhere a smooth graph and the free boundary condition in (1.2) holds in a classical sense at the smooth points.
A similar “flat implies smooth” result exists in our context (this is essentially due to De Silva, Savin and the last author, [19], see Theorem 2.4 below). However, showing that the free boundary is the boundary of a set of finite perimeter proves to be much more difficult. Due to the nonlocal nature of the problem, \(-\mathrm{div}(|y|^{\beta}\nabla u)\) (considered as a distribution) is not supported on the free boundary. Furthermore, the scaling of this measure does not allow us to conclude that the free boundary has the correct dimension (much less that it is Ahlfors regular).
To prove finite perimeter, we take the following approach inspired by the work of de Silva and Savin: after establishing some preliminaries we prove crucial compactness results. This, along with a monotonicity formula originally due to Allen [3] allows us to run a dimension reduction argument in the vein of Federer or (in the context of free boundary problems) Weiss [38]. With this tool in hand, we show that the set of points at which no blow-up is flat is a set of lower dimension. Locally finite perimeter and regularity for the reduced boundary then follow from a covering argument and some standard techniques.
Here and throughout the paper, we will denote the ball of radius \(r\) in \({\mathbb{R}}^{n+1}\) centered at the origin by \(B_{r}\), and \(B_{r}^{\prime}:=B_{r}\cap{\mathbb{R}}^{n}\times\{0\}\). Moreover, for the definition of \({\mathbf{H}}^{\beta}\), see Section 2. We may then summarize our regularity results in the following theorem.
**Theorem 1.1**.: _[Main Regularity Theorem] Let \(u\in{\mathbf{H}}^{\beta}(B_{1})\) be a (non-negative, even) local minimizer of \(\mathcal{J}\) in \(B_{1}\subset\mathbb{R}^{n+1}\). Let \(B_{1,+}^{\prime}(u):=\{x=(x^{\prime},0)\in B_{1}:u(x)>0\}\), let \(F(u)\) be the boundary of \(B_{1,+}^{\prime}(u)\) inside of \({\mathbb{R}}^{n}\times\{0\}\) and assume that \(0\in F(u)\). Then,_
1. \(B_{1,+}^{\prime}(u)\) _(as a subset of_ \({\mathbb{R}}^{n}\times\{0\}\)_) is a set of locally finite perimeter in_ \(B_{1}^{\prime}\)_._
2. _We can write the free boundary as a disjoint union_ \(F(u)=\mathcal{R}(u)\cup\Sigma(u)\)_, where_ \(\mathcal{R}(u)\) _is open inside_ \(F(u)\)_, and for_ \(x\in\mathcal{R}(u)\) _there exists an_ \(r_{x}>0\) _such that_ \(B(x,r_{x})\cap F(u)\) _can be written as the graph of a_ \(C^{1,s}\)_-continuous function._
3. _Furthermore, the set_ \(\Sigma(u)\) _is of Hausdorff dimension_ \(\leq n-3\) _(and, therefore, of_ \(\mathcal{H}^{n-1}\)_-measure zero). In particular, for_ \(n\leq 2\)_,_ \(\Sigma(u)\) _is empty, and moreover, if_ \(n=3\) _then_ \(\Sigma(u)\) _is discrete._
_The constants (implicit in the set of finite perimeter, and the Hölder continuity of the functions whose graph gives the free boundary) depend on \(n\) and \(\beta\) but not on \({\left\|{u}\right\|}_{{\mathbf{H}}^{\beta}(B_{1})}\)._
As usual \(\Sigma(u)\subset F(u)\) is called the _singular set_ of the free boundary: the set of points around which \(F(u)\) cannot be parameterized as a smooth graph and all the blow-ups will be non-trivial minimal cones, see Theorem 2.4.
Our second contribution concerns the structure and size of the singular set. It builds on recent major works on quantitative stratification [32], extended to free boundary problems (in particular the one-phase problem) by Edelen and the first author [22].
**Theorem 1.2**.: _Let \(u\in{\mathbf{H}}^{\beta}(B_{1})\) be a (non-negative, even) local minimizer of \(\mathcal{J}\) in \(B_{1}\) and \(0\in F(u)\). Let \(B_{1,+}^{\prime}(u):=\{x=(x^{\prime},0)\in B_{1}:u(x)>0\}\) and \(F(u)\) be the boundary of \(B_{1,+}^{\prime}(u)\) inside \(B_{1}^{\prime}\). Then, there exists a \(k_{\alpha}^{*}\geq 3\) such that \(\Sigma(u)\) is \((n-k_{\alpha}^{*})\)-rectifiable and_
\[\mathcal{H}^{n-k^{*}_{\alpha}}(\Sigma(u)\cap D)\leq C_{n,\alpha,{\rm dist}(D, \partial B_{1})}\quad\quad\mbox{for every }D\subset\subset B_{1}.\]
In [12], De Silva and Jerison constructed a singular minimizer for the Alt-Caffarelli one-phase problem in dimension \(7\), giving the dimension bound \(k^{*}\leq 8\) in the previous theorem in this case (see [22]). This result is not known for the thin one-phase problem. The reason is that the one-phase problem, seen from the nonlocal point of view involving the fractional Laplacian, is related to the so-called nonlocal minimal surfaces introduced by Caffarelli, Roquejoffre and Savin [6]. Indeed, in [34], the authors proved that a fractional version of Allen-Cahn equation converges variationally to the standard perimeter functional for \(\alpha\geq 1/2\) and to the so-called nonlocal minimal surfaces for \(\alpha<1/2\). We can then conjecture the bound \(k^{*}_{\alpha}\leq 8\) for \(\alpha\geq 1/2\) by analogy with the result for the standard one-phase problem but the bound for \(\alpha<1/2\) is not clear at all. However, one knows that there is no singular cone in dimension \(2\) for nonlocal minimal surfaces [35] and that the Bernstein problem is known for those in dimensions 2 and 3 [28].
We would also like to make a last remark about a result which is of purely nonlocal nature. In the case of the one-phase problem, one can show that the distributional Laplacian is a Radon measure supported on the free boundary. In the case of the thin one-phase free boundary problem, due to the nonlocality of the problem, such a behavior does not occur: we will show that the fractional Laplacian is an absolutely continuous measure with respect to \(n\)-dimensional Lebesgue measure, with a precise behavior. This phenomenon is of purely nonlocal nature, similar to the fact that the fractional harmonic measure is of trivial nature. More precisely, every minimizer \(u\) satisfies \(\nabla\cdot(|y|^{\beta}\nabla u)=0\) weakly, away from \({\mathbb{R}}^{n}\cap\{u\leq 0\}\). Thus, equation (1.2) above can be understood as an Euler-Lagrange equation for the functional \(\mathcal{J}\) in the sense that the restriction to \({\mathbb{R}}^{n}\) of a given minimizer \(u\) in \(\Omega\subset{\mathbb{R}}^{n+1}\) and with asymptotic behavior \(u(x,y)=\mathcal{O}(|(x,y)|^{\alpha})\) is always a solution to (1.2) for \(A=A(\alpha)\) at “nice” points of the free boundary.
A brief summary of this paper follows. In Sections 3 and 4 we discuss compactness of minimizers and we recall Allen’s monotonicity formula to derive some immediate consequences. In Section 5 we show that the positive phase is a set of locally finite perimeter, establishing the first part of Theorem 1.1 (modulo energy bounds), and we show that the singular set can be identified using the Allen-Weiss density. Section 6 is devoted to deducing full regularity of minimizers in \({\mathbb{R}}^{2+1}\) concluding the proof of Theorem 1.1.
Once we have established the finite perimeter, in Section 7 we remove the dependence of the estimates on the energy of the minimizer in the previous theorems, using a rather subtle argument which combines results from all the previous sections. A crucial step is to analyze some basic properties of the distributional fractional Laplacian of our minimizer. As stated above this analysis will not be enough to establish that the positivity set of the minimizer is a set of locally finite perimeter. We believe that many of these results may be of independent interest. For example, corresponding results for the classical Bernoulli problem have been used to understand free boundary problems for harmonic measure (see [31]).
Finally, Section 8 is devoted to the proof of Theorem 1.2.
#### Notation
We denote the constants that depend on the dimension \(n\), \(\alpha\) and perhaps some other fixed parameters which are clear from the context by \(C\). Their value may change from an occurrence to another. On the other hand, constants with subscripts as \(C_{0}\) retain their values along the text. For \(a,b\geq 0\), we write \(a\lesssim b\) if there is \(C>0\) such that \(a\leq Cb\). We write \(a\approx b\) to mean \(a\lesssim b\lesssim a\).
Let \(u\) be a continuous function in \({\mathbb{R}}^{n+1}\). Then we write \(\Omega_{+}(u):=\Omega\cap\{u>0\}\), and we denote the zero phase, the positive phase and the free boundary by
\[\Omega_{0}(u):=\{x\in{\mathbb{R}}^{n}\times\{0\}:u(x)=0\}^{\circ},\]
\[\Omega_{+}^{\prime}(u):=\Omega_{+}\cap({\mathbb{R}}^{n}\times\{0 \})=\{x\in{\mathbb{R}}^{n}\times\{0\}:u(x)>0\},\mbox{ and}\]
\[F(u):=F_{\Omega}(u)=\partial(\Omega_{+}(u)\cap{\mathbb{R}}^{n} \times\{0\})\cap\Omega,\]
respectively. Here both the boundary and the interior are taken with respect to the standard topology in \({\mathbb{R}}^{n}\). Note that \({\mathbb{R}}^{n}\times\{0\}\) is the disjoint union of \(\Omega_{0}(u)\), \(\Omega_{+}^{\prime}(u)\) and \(F(u)\) whenever \(u\) is non-negative. We also call \(F_{\rm red}(u)=F_{{\rm red},\Omega}(u)\) the points of \(F_{\Omega}(u)\) where the free boundary is expressed locally as a \(C^{1}\) surface. Finally, let \(\Sigma(u)=\Sigma_{\Omega}(u)=F_{\Omega}(u)\setminus F_{{\rm red},\Omega}(u)\). In general we will write \(\Omega^{\prime}:=\Omega\cap({\mathbb{R}}^{n}\times\{0\})\).
Throughout the paper we will often fix \(\beta\in(-1,1)\) but then refer to \(\alpha\in(0,1)\) or vice versa. These two numbers are always connected by the relationship \(\alpha=\frac{1-\beta}{2}\).
## 2 Preliminaries
In this section, we provide the known results concerning the problem under consideration. We say that a function \(u\) is _even_ if it is symmetric with respect to the hyperplane \({\mathbb{R}}^{n}\times\{0\}\), that is, \(u(x^{\prime},y)=u(x^{\prime},-y)\). The function spaces that we will consider are the following
\[{\mathbf{H}}^{\beta}(\Omega):=\{u\in H^{1}(\beta,\Omega):u\mbox{ is even and non-negative}\}\]
and
\[{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega):=\{u\in L^{2}_{\operatorname {loc}}(\Omega):u\in{\mathbf{H}}^{\beta}(B)\mbox{ for every ball }B\subset \subset\Omega\}.\]
We will omit \(\Omega\) in the notation when it is clear from the context.
**Definition 2.1**.: _We say that a function \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) is a (local) minimizer of \(\mathcal{J}\) in a domain \(\Omega\) if for every ball \(B\subset\subset\Omega\) and for every function \(v\in{\mathbf{H}}^{\beta}(B)\) such that the traces \(v|_{\partial B}\equiv u|_{\partial B}\), the inequality_
\[\mathcal{J}(u,B)\leq\mathcal{J}(v,B)\]
_holds._
As usual for several free boundary problems, it is a natural question to exhibit a particular (global) solution so that one gets an idea of the qualitative properties of general solutions. Let us consider the following function: for every \(x\in{\mathbb{R}}^{n}\) let
\[f_{n,\alpha}(x):=c_{n,\alpha}(x_{n})_{+}^{\alpha},\]
where \(a_{+}=\max\{0,a\}\). If \(n=1\), \(f_{1,\alpha}\) is a solution to (1.2) for a convenient choice of \(c_{1,\alpha}\) (see [4, Theorem 3.1.4]). In fact, one can see that the same is true for \(n\geq 1\) by a suitable application of Fubini's Theorem, with
\[-(-\Delta)^{\alpha}f_{n,\alpha}(x)=c_{n,\alpha}(x_{n})_{-}^{-\alpha},\] (2.1)
where \(a_{-}=\max\{0,-a\}\).
As a toy question we wonder whether the trivial solutions are minimizers. Indeed, this is the case, as we will see later in Section 4.1.
**Proposition 2.2**.: _Let \(n\in{\mathbb{N}}\) and \(0<\alpha<1\). Then the trivial solution \(u_{n,\alpha}:=f_{n,\alpha}*P_{y}\) is a minimizer of \(\mathcal{J}\) in every ball \(B\subset{\mathbb{R}}^{n+1}\)._
Next we collect the main properties of minimizers in the unit ball proven in [7, Theorems 1.1-1.4, Proposition 3.3 and Corollary 3.4].
**Theorem 2.3**.: _If \(u\in{\mathbf{H}}^{\beta}(B_{1})\) is a minimizer of \(\mathcal{J}\) in \(B_{1}\) with \({\left\|{u}\right\|}_{\dot{\mathbf{H}}^{\beta}(B_{1})}:={\left\|{\nabla u} \right\|}_{L^{2}(B_{1},|y|^{\beta})}\leq E_{0}\) and \(x_{0}\in F(u)\cap B_{\frac{1}{2}}\), then it satisfies_
1. _Optimal regularity (see_ _[_7_, Theorem 1.1]__):_ \({\left\|{u}\right\|}_{\dot{C}^{\alpha}(B_{1/2})}\leq C{(1+E_{0})}\)_._
2. _Nondegeneracy (see_ _[_7_, Theorem 1.2]__):_ \(u(x)\geq C{\rm dist}(x,F(u))^{\alpha}\) _for_ \(x\in B_{\frac{1}{2}}^{\prime}\)_._
3. _Interior corkscrew condition (see_ _[_7_, Proposition 3.3]__): there exists_ \(x_{+}\in B_{r}^{\prime}(x_{0})\) _so that_ \(B^{\prime}(x_{+},C_{0}r)\subset\Omega_{+}^{\prime}(u)\)_._
4. _Positive density (see_ _[_7_, Theorem 1.3]__):_ \(|\Omega_{0}\cap B_{r}^{\prime}(x_{0})|\gtrsim r^{n}\)_._
5. _Blow-ups are minimizers (see_ _[_7_, Corollary 3.4]__): The limit of a blow-up sequence_ \(u_{k}(x):=\frac{u(x_{0}+\rho_{k}x)}{\rho_{k}^{\alpha}}\) _converging weakly in_ \(H^{1}(\beta,B_{1})\) _and uniformly is a global minimizer._
6. _Normal behavior at the free boundary (see_ _[_7_, Theorem 1.4]__): the boundary condition in (_1.2_) is satisfied at every point on the free boundary with a measure theoretic normal (see_ _[_23_]__) for a prescribed value of_ \(A\)_._
_All the constants depend on \(n\) and \(\alpha\); and also on \(E_{0}\) except for the one in P1._
A major tool in the present paper is an \(\epsilon\)-regularity result, i.e., in the language of free boundaries, a statement of the type “flatness implies smoothness”. In [19], the authors proved such an \(\epsilon\)-regularity result for viscosity solutions to the overdetermined system associated to minimizers of \(\mathcal{J}\). Here we establish that all local minimizers are in fact viscosity solutions. While this verification may be standard for experts in the field, we include it here for the sake of completeness.
**Theorem 2.4** (\(\epsilon\)-regularity).: _There exists \(\epsilon>0\) depending only on \(n\), \(\alpha\) and \(E_{0}\) such that for every non-negative, even minimizer \(u\) of the energy (1.1) on a ball \(B\subset{\mathbb{R}}^{n+1}\) with \({\left\|{u}\right\|}_{{\mathbf{H}}^{\beta}(B)}\leq E_{0}r(B)^{\frac{n}{2}}\) and_
\[\{(x,0)\in B:x_{n}\leq-\epsilon\}\subset B_{0}(u)\subset\{(x,0)\in B:x_{n}\leq \epsilon\},\] (2.2)
_we have that \(F(u)\in C^{1,\gamma}_{\rm loc}(\frac{1}{2}B)\), with \(0<\gamma<1\)._
Note that the dependence on \(E_{0}\) will be removed in Section 7.
Proof.: We say that \(u\) is a _viscosity solution_ to
\[\begin{cases}\nabla\cdot(|y|^{\beta}\nabla u)=0&\mbox{in }B_{1}^{+}(u),\\ \lim_{t\to 0+}\frac{u(x_{0}+t\nu(x_{0}),0)}{t^{\alpha}}=1,&\mbox{for }(x_{0},0 )\in F(u),\end{cases}\] (2.3)
if
1. \(u\in C(B_{1})\), \(u\geq 0\),
2. \(u\in C^{1,1}_{\rm loc}(B_{1,+}(u))\), \(u\) is even and it solves \(\nabla\cdot(|y|^{\beta}\nabla u)=0\) in the viscosity sense, and
3. any strict _comparison subsolution_ (resp. supersolution) cannot touch from below (resp. from above) at a point \((x_{0},0)\in F(u)\).
We claim that
every non-negative even minimizer is a viscosity solution. (2.4)
Conditions (i) and (ii) have been verified in [19, 36]. To verify our claim it suffices to prove condition (iii) above: that any strict comparison subsolution cannot touch \(u\) from below at a point \((x_{0},0)\in F(u)\). The analogous claim for strict comparison supersolutions will follow in the same way.
Let us recall (see, e.g. Definition 2.2 in [19]), that \(w\in C(B_{1})\) is a strict comparison subsolution (resp. supersolution) to (2.3) if
a) \(w\geq 0\),
b) \(w\) is even with respect to \(\{y=0\}\),
c) \(w\in C^{2}(\{w>0\})\),
d) \(\mathrm{div}\left(|y|^{\beta}\nabla w\right)\geq 0\) in \(B_{1}\backslash\{y=0\}\),
e) \(F(w)\) is locally given by the graph of a \(C^{2}\) function and for any \(x_{0}\in F(w)\) we may write \[w(x,y)=aU((x-x_{0})\cdot\nu(x_{0}),y)+o(\|(x-x_{0},y)\|^{\alpha}),\qquad(x,y) \rightarrow(x_{0},0).\] (2.5) Here \(U\) is the extension of the trivial solution (see [19]), \(\nu(x_{0})\) is the unit normal to \(F(w)\) considered as a subset of \({\mathbb{R}}^{n}\) pointing into \(\{w>0\}\) and \(a\geq 1\).
f) Furthermore, either the inequality is strict in d), or \(a>1\) in e).
So assume, by contradiction, that \(w\leq u\) where \(w\) is a strict comparison subsolution and \(u\) is some minimizer, and that \(w=u\) at a touching point \((x_{0},0)\in F(u)\cap F(w)\). After a harmless rotation we can guarantee that \(\nu((x_{0},0))=e_{n}\). We want to show that \(e_{n}\) is also the measure theoretic unit normal to \(F(u)\). Indeed, since \(F(w)\) is \(C^{2}\) there must exist a ball \(B\subset\{w>0\}\) which is tangent to \(F(w)\) at \((x_{0},0)\). Since \(w\leq u\), it must then be the case that \(B\subset\{u>0\}\) as well. Thus \((x_{0},0)\in F(u)\) has a tangent ball from the inside which, by [7, Proposition 4.5], implies that \(u\) has the asymptotic expansion
\[u(x,y)=U((x-x_{0})\cdot\nu(x_{0}),y)+o(\|(x-x_{0},y)\|^{\alpha}),\qquad(x,y) \rightarrow(x_{0},0).\]
Since \(u\geq w\), this implies that \(w\) must satisfy the expansion in (2.5) with \(a=1\) at the point \(x_{0}\). This, in turn, implies that \(\mathrm{div}\left(|y|^{\beta}\nabla w\right)>0\) in \(B_{1}\backslash\{y=0\}\) (by the definition of a strict subsolution, since \(a>1\) in e) is ruled out). Furthermore, since \(w\in C^{2}(\{w>0\})\) we can guarantee that \(\mathrm{div}\left(|y|^{\beta}\nabla w\right)\geq 0\) in all of \(B_{1}\cap\{w>0\}\).
Let us return to the ball \(B\) which is a subset of \(\{u>0\}\) and \(\{w>0\}\) and for which \((x_{0},0)\in\overline{B}\). We know that \(w-u\neq 0\) in \(B\setminus\{y=0\}\) (this is because \(w\) strictly satisfies the differential inequality in \(B\) away from \(\{y=0\}\)) and we know that \(w-u\) is a subsolution in \(B\). Furthermore, \(w-u\leq 0\) attains a strict maximum at \((x_{0},0)\in\partial B\), so by the Hopf lemma in [10, Proposition 4.11] it must be that
\[\lim_{t\downarrow 0^{+}}\frac{({w-u})(x_{0}+t\nu(x_{0}),0)}{t^{\alpha}}<0.\]
This contradicts the fact that \(u\) and \(w\) both satisfy (2.5) at \((x_{0},0)\) with \(a=1\), which forces the limit above to be \(0\). Therefore, \((x_{0},0)\) could not have been a touching point and \(u\) is indeed a viscosity solution.
Since \(u\) is a viscosity solution, [19, Theorem 1.1] applies and we get the desired \(\epsilon\)-regularity. ∎
## 3 Compactness of minimizers
In this section we prove important results on the compactness of minimizers. As we mentioned above, our contribution is that convergent sequences of minimizers also converge in the relevant weighted Sobolev spaces strongly rather than just weakly. This will prove essential to the compactness arguments used in the later sections of this paper.
### Caccioppoli Inequality
First we want to show that the distribution \(\lambda:=\nabla\cdot(|y|^{\beta}\nabla u)\) is in fact a Radon measure with support in the complement of the positive phase as long as \(u\) is a minimizer. In Section 7 we will come back to this measure to understand its behavior around the free boundary.
**Lemma 3.1**.: _Let \(\Omega\subset{\mathbb{R}}^{n+1}\) be an open set, and let \(u\in W^{1,2}_{\operatorname{loc}}(\Omega,|y|^{\beta})\) be such that \(\nabla\cdot(|y|^{\beta}\nabla u)=0\) weakly in \(\Omega_{+}(u)\), i.e., for every \(\eta\in C^{\infty}_{c}(\Omega_{+}(u))\),_
\[\langle\nabla\cdot(|y|^{\beta}\nabla u),\eta\rangle:=-\int(|y|^{\beta}\nabla u )\nabla\eta=0.\] (3.1)
_Then \(\lambda:=\nabla\cdot(|y|^{\beta}\nabla u)\) is a positive Radon measure supported on \(\{u=0\}\) and for every \(v\in W^{1,2}(\Omega,|y|^{\beta})\cap C_{c}(\Omega)\)_
\[\int v\,d\lambda=-\int|y|^{\beta}\nabla u\cdot\nabla v.\] (3.2)
Proof.: Indeed, by (3.1) the quantity
\[-\int|y|^{\beta}\nabla u\cdot\nabla\zeta=-\int|y|^{\beta}\nabla u\cdot\nabla \left(\zeta\max\left\{\min\left\{2-\frac{u}{\varepsilon},1\right\},0\right\} \right)\geq-\int_{\Omega\cap\{0<u<2\varepsilon\}}|y|^{\beta}|\nabla u||\nabla \zeta|\xrightarrow{\varepsilon\to 0}0\]
defines a positive functional on positive \(\zeta\in C^{0,1}_{c}(\Omega)\). Moreover, for compact \(K\subset\Omega\), consider a Lipschitz function \(f_{K}\) such that \(\chi_{K}\leq f_{K}\leq\chi_{\Omega}\). If \(\zeta\in C^{0,1}_{c}(K)\), by the positivity shown above we obtain
\[-\int|y|^{\beta}\nabla u\cdot\nabla\zeta\leq-{\left\|{\zeta}\right\|}_{L^{ \infty}}\int|y|^{\beta}\nabla u\cdot\nabla f_{K}\leq C_{K,u}{\left\|{\zeta} \right\|}_{L^{\infty}}\]
and, by Hahn-Banach’s theorem, we can extend the functional to a positive functional in \(C_{c}(\Omega)\), that is given by integration against a positive Radon measure by the Riesz representation theorem.
The fact that (3.2) holds for all functions in \(W^{1,2}(\Omega,|y|^{\beta})\cap C_{c}(\Omega)\) follows by a standard density argument. ∎
The Caccioppoli inequality is the first step to proving convergence in a Sobolev sense. It will also be useful when we remove the _a priori_ dependence of our results on the Sobolev norm of the minimizer.
**Lemma 3.2** (Caccioppoli Inequality).: _Let \(B\subset{\mathbb{R}}^{n+1}\) be a ball of radius \(r\) centered on \({\mathbb{R}}^{n}\times\{0\}\), and let \(u\in W^{1,2}(B,|y|^{\beta})\) be such that \(\nabla\cdot(|y|^{\beta}\nabla u)=0\) weakly in \(B\cap\{u>0\}\). Then_
\[\int_{\frac{1}{2}B}|y|^{\beta}|\nabla u|^{2}\leq\frac{4}{r^{2}}\int_{B \setminus\frac{1}{2}B}|y|^{\beta}u^{2}.\]
Proof.: Let \(\eta\) be a Lipschitz function such that \(\chi_{\frac{1}{2}B}\leq\eta\leq\chi_{B}\) and with \(|\nabla\eta|\leq\frac{1}{r}\). By Lemma 3.1
\[0=\int_{B}u\eta^{2}\,d\lambda=-\int_{B}|y|^{\beta}\nabla u\cdot\nabla(u\eta^{2}).\]
By the Leibniz rule
\[\int_{B}|y|^{\beta}\eta^{2}|\nabla u|^{2}=-\int_{B}|y|^{\beta}2u\eta\nabla u \cdot\nabla\eta,\]
and using Hölder’s inequality we get
\[\int_{\frac{1}{2}B}|y|^{\beta}|\nabla u|^{2}\leq\int_{B}|y|^{\beta}\eta^{2}| \nabla u|^{2}\leq\int_{B}|y|^{\beta}4u^{2}|\nabla\eta|^{2}\leq\frac{4}{r^{2}} \int_{B\setminus\frac{1}{2}B}|y|^{\beta}u^{2}.\]
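For the reader's convenience, we spell out the absorption step behind the middle inequality above. By the Cauchy–Schwarz inequality,
\[\int_{B}|y|^{\beta}\eta^{2}|\nabla u|^{2}=-\int_{B}|y|^{\beta}2u\eta\nabla u\cdot\nabla\eta\leq 2\left(\int_{B}|y|^{\beta}\eta^{2}|\nabla u|^{2}\right)^{\frac{1}{2}}\left(\int_{B}|y|^{\beta}u^{2}|\nabla\eta|^{2}\right)^{\frac{1}{2}},\]
and absorbing the first factor on the left (the inequality being trivial if it vanishes) yields \(\int_{B}|y|^{\beta}\eta^{2}|\nabla u|^{2}\leq 4\int_{B}|y|^{\beta}u^{2}|\nabla\eta|^{2}\).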
∎
**Lemma 3.3**.: _Let \(u\in{\mathbf{H}}^{\beta}(B_{2r})\) be a minimizer of \(\mathcal{J}\) in \(B_{2r}\) and \(0\in F(u)\). Then_
\[r^{-n/2}{\left\|{\nabla u}\right\|}_{L^{2}(\frac{1}{2}B_{r};|y|^{\beta})}\leq r ^{-\alpha}{\left\|{u}\right\|}_{L^{\infty}(B_{r})}\leq{\left\|{u}\right\|}_{ \dot{C}^{\alpha}(B_{r})}{\leq C\left(1+r^{-n/2}{\left\|{\nabla u}\right\|}_{L^ {2}(B_{2r};|y|^{\beta})}\right)}.\]
Proof.: The first inequality is Caccioppoli, the middle estimate is trivial and the last follows from _P1_ in Theorem 2.3. ∎
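For the reader's convenience, here is a sketch of how the first inequality follows from Lemma 3.2; we use that \(\int_{B_{r}}|y|^{\beta}\approx r^{n+1+\beta}\) and the relation \(\beta=1-2\alpha\) implicit in the scaling of the problem (this is the degree count \(\beta+2\alpha-2=-1\) used in the proof of Theorem 6.1):
\[\int_{\frac{1}{2}B_{r}}|y|^{\beta}|\nabla u|^{2}\leq\frac{4}{r^{2}}\int_{B_{r}\setminus\frac{1}{2}B_{r}}|y|^{\beta}u^{2}\leq\frac{4}{r^{2}}{\left\|{u}\right\|}_{L^{\infty}(B_{r})}^{2}\int_{B_{r}}|y|^{\beta}\lesssim r^{n-2\alpha}{\left\|{u}\right\|}_{L^{\infty}(B_{r})}^{2},\]
so that \(r^{-n/2}{\left\|{\nabla u}\right\|}_{L^{2}(\frac{1}{2}B_{r};|y|^{\beta})}\lesssim r^{-\alpha}{\left\|{u}\right\|}_{L^{\infty}(B_{r})}\), which is the first inequality up to a constant depending only on \(n\) and \(\alpha\).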
### Compactness
In the following lemma we prove the compactness of minimizers in the relevant Sobolev spaces. For convenience, we also detail several compactness results which were either already proven in [7] or are standard consequences of the non-degeneracy estimates in Theorem 2.3. Nevertheless, we include full proofs here for the sake of completeness. We note here (as we did above and will do again below) that while we currently need to assume the uniform bound on the Hölder norm of the functions \(u_{k}\) we can get rid of this assumption in the light of the results of Section 7.
**Lemma 3.4** (Compactness results).: _Let \(\{u_{k}\}_{k=1}^{\infty}\subset{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) be a sequence of minimizers in a domain \(\Omega\subset{\mathbb{R}}^{n+1}\) with \({\left\|{u_{k}}\right\|}_{\dot{C}^{\alpha}(\Omega)}\leq E_{0}\) with non-empty free boundary. Then there exists a subsequence converging to some \(u_{0}\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) such that for every bounded open set \(G\subset\subset\Omega\) we have_
1. \(u_{k}\to u_{0}\) _in_ \(C^{\gamma}(G)\) _for every_ \(\gamma<\alpha\)_,_
2. \(u_{k}\to u_{0}\) _in_ \(L^{p}(G)\) _for every_ \(p\leq\infty\)_,_
3. \(\partial\{u_{k}>0\}\cap\bar{G}\rightarrow\partial\{u_{0}>0\}\cap\bar{G}\) _in the Hausdorff distance,_
4. \(\chi_{\{u_{k}>0\}}\rightarrow\chi_{\{u_{0}>0\}}\) _in_ \(L^{1}(G)\)_, and_
5. \(\nabla u_{k}\to\nabla u_{0}\) _in_ \(L^{p}(G;|y|^{\beta})\) _for every_ \(p\leq 2\)_._
Proof.: The first claim follows from uniform Hölder continuity and compact embeddings of Hölder spaces. The claim (2) follows from (1) easily.
We now prove the third claim. Let \(\epsilon>0.\) We will first show that for \(x\in{\mathbb{R}}^{n}\) we have
\[d(x,F(u_{0}))>\epsilon\Rightarrow d(x,F(u_{k}))>\frac{\epsilon}{2}\] (3.3)
for large \(k.\) This implies that \(F(u_{k})\subset\{x\colon\,d(F(u_{0}),x)<2\epsilon\}\) for \(k\) large enough.
Let \(B(x,\epsilon)\subset F(u_{0})^{c}.\) If \(u_{0}\) is positive in \(B(x,\epsilon)\) then it is bounded from below by a positive number in \(B(x,\epsilon/2).\) In this case \(u_{k}\) are also positive in \(B(x,\epsilon/2)\) for large \(k\) due to uniform convergence in \(G\). Thus \(B(x,\epsilon/2)\subset F(u_{k})^{c}\) for large \(k.\) If \(u_{0}\equiv 0\) in \(B^{\prime}(x,\epsilon)\) then due to the uniform convergence we know that for \(k\) large enough \(u_{k}<C\epsilon^{\alpha}\) in \(B^{\prime}(x,\epsilon)\), where \(C\) is the constant given by _P2_ in Theorem 2.3, so that \(u_{k}\) has no free boundary points in \(B(x,\epsilon/2)\) for all large \(k.\) This proves (3.3).
Next we will show that for all large \(k\)
\[F(u_{0})\subset\{x\colon\,d(F(u_{k}),x)<\epsilon\}.\] (3.4)
If this was not true we could find a point \(x\in F(u_{0})\) and a subsequence of \(u_{k}\) such that \(B^{\prime}(x,\epsilon)\subset F(u_{k})^{c}\) for every \(k\) in the subsequence. If the subsequence contains infinitely many \(u_{k}\) such that \(u_{k}\equiv 0\) in \(B(x,\epsilon)\) then also \(u_{0}\equiv 0\) due to uniform convergence. Otherwise, the sequence contains infinitely many \(u_{k}\) for which \(B(x,\epsilon)\) is contained in the positive phase. In this case the non-degeneracy implies that in \(B(x,\epsilon/2)\) we have \(u_{k}>C\epsilon^{\alpha},\) with \(C\) independent of \(k.\) Again uniform convergence implies the same lower bound for \(u_{0},\) which contradicts our choice \(x\in F(u_{0}).\)
To show the fourth claim we notice that \(F(u_{0})\) has zero Lebesgue measure by the Lebesgue differentiation Theorem and the positive density of the zero phase. Take an open set \(V\supset F(u_{0})\) with \(m(V\cap G)<\epsilon.\) For large \(k\) we have \(F(u_{k})\cup F(u_{0})\subset V\cap G\), so \({\left\|{\chi_{\{u_{k}>0\}}-\chi_{\{u_{0}>0\}}}\right\|}_{L^{1}(G)}<\epsilon.\)
Also the sequence is uniformly bounded in \(H^{1,2}(G;|y|^{\beta})\) by the Caccioppoli inequality. This implies, by compactness [29, 1.31 Theorem], the weak convergence of \(\nabla u_{k}\) in \(L^{2}(G;|y|^{\beta}).\) To obtain the strong convergence (and hence convergence in \(L^{p}(G;|y|^{\beta})\) for every \(p\leq 2\)), use Lemma 3.5 below.
∎
It remains to show that weak convergence implies strong convergence.
**Lemma 3.5**.: _Any sequence of minimizers \(\{u_{k}\}_{k=1}^{\infty}\) in \(\Omega\subset{\mathbb{R}}^{n+1}\) with \(u_{k}\to u_{0}\) uniformly and \(\nabla u_{k}\rightharpoonup\nabla u_{0}\) weakly in \(L^{2}_{\operatorname{loc}}(\Omega,|y|^{\beta})\) satisfies that \(\nabla u_{k}\to\nabla u_{0}\) in \(L^{2}_{\operatorname{loc}}(\Omega,|y|^{\beta})\)._
Proof.: Let \(\eta\in C^{0,1}_{c}(\Omega)\) be a non-negative function. We claim that for every \(\varepsilon>0\) there exists \(j_{0}\) so that
\[\int|y|^{\beta}\eta|\nabla u_{0}-\nabla u_{j}|^{2}\leq\varepsilon\]
for \(j\geq j_{0}\).
First we isolate the main difficulty
\[\int|y|^{\beta}\eta|\nabla u_{0}-\nabla u_{j}|^{2}=\int|y|^{\beta}\eta(\nabla u _{0}-\nabla u_{j})\cdot\nabla u_{0}-\int|y|^{\beta}\eta(\nabla u_{0}-\nabla u_ {j})\cdot\nabla u_{j}.\]
By weak convergence,
\[\left|\int|y|^{\beta}\eta(\nabla u_{0}-\nabla u_{j})\cdot\nabla u_{0}\right| \leq\varepsilon/4\]
for \(j\) big enough. Note that this is true even if the \(u_{j}\) are not minimizers. The bound on the second term, however, needs the minimization property.
We observe that
\[\int|y|^{\beta}\eta(\nabla u_{0}-\nabla u_{j})\cdot\nabla u_{j}=\underbrace{ \int|y|^{\beta}(\nabla u_{0}-\nabla u_{j})\cdot\nabla(\eta u_{j})}_{=:I}- \underbrace{\int|y|^{\beta}u_{j}(\nabla u_{0}-\nabla u_{j})\cdot\nabla\eta}_{= :II}.\] (3.5)
To estimate \(I\) in (3.5), let \(\lambda_{j}\) be the measures corresponding to \(u_{j}\) from Lemma 3.1. By (3.2) we get that
\[\int|y|^{\beta}(\nabla u_{0}-\nabla u_{j})\cdot\nabla(\eta u_{j})=\int\eta u_{ j}\,d\lambda_{0}-\int\eta u_{j}\,d\lambda_{j}.\]
Since \(\lambda_{j}\) is supported on \(\{u_{j}=0\}\) we have that
\[\int\eta u_{j}\,d\lambda_{j}=0\]
for every \(j\) (including \(j=0\) as \(u_{0}\) is also a minimizer to \(\mathcal{J}\), see Corollary 3.4 in [7]).
To finish the estimate on \(I\) in (3.5) we observe that
\[\int\eta u_{j}\,d\lambda_{0}=\int\eta(u_{j}-u_{0})\,d\lambda_{0}\leq\sup_{{\rm supp }\;\eta}|u_{j}-u_{0}|\int\eta\,d\lambda_{0}.\]
By uniform convergence on compact subsets, this last term is bounded by \(\varepsilon/4\) for \(j\) big enough.
We turn towards estimating \(II\) in (3.5):
\[|II|=\left|\int|y|^{\beta}u_{j}(\nabla u_{0}-\nabla u_{j})\cdot \nabla\eta\right| \leq\left|\int|y|^{\beta}(\nabla u_{0}-\nabla u_{j})\cdot(u_{0} \nabla\eta)\right|\]
\[\quad+\sup_{{\rm supp}\;\eta}|u_{j}-u_{0}|{\left\|{\nabla u_{0}- \nabla u_{j}}\right\|}_{L^{2}(\Omega,|y|^{\beta})}{\left\|{\nabla\eta}\right\| }_{L^{2}(\Omega,|y|^{\beta})}.\] (3.6)
The first term goes to zero by weak convergence of \(\nabla u_{j}\) to \(\nabla u_{0}\). The second term satisfies
\[\sup_{{\rm supp}\eta}|u_{j}-u_{0}|{\left\|{\nabla u_{0}-\nabla u_{j}}\right\|} _{L^{2}({\rm supp}\eta,|y|^{\beta})}{\left\|{\nabla\eta}\right\|}_{L^{2}( \Omega,|y|^{\beta})}\leq\varepsilon/4\]
for \(j\) big enough, by uniform convergence and the uniform bound on \({\left\|{\nabla u_{j}}\right\|}_{L^{2}({\rm supp}\;\eta,|y|^{\beta})}\) derived from the Caccioppoli inequality in Lemma 3.2. ∎
Lemma 3.4 implies that minimizers converge to minimizers (which was observed in Corollary 3.4 in [7]), but also implies the stronger fact that the energy is continuous under this convergence:
**Corollary 3.6**.: _Let \(u_{k}\) be a sequence of minimizers in \(\Omega\subset\mathbb{R}^{n+1}\) with \(u_{k}\to u_{0}\) locally uniformly and \(\sup_{k}\|u_{k}\|_{{\mathbf{H}}^{\beta}}<\infty\). Then \(u_{0}\) is also a minimizer to \(\mathcal{J}\) in \(\Omega\) and for any \(B\subset\subset\Omega\) we have \(\mathcal{J}(u_{k},B)\rightarrow\mathcal{J}(u_{0},B)\)._
## 4 Monotonicity formula and some immediate consequences
From [3] we have the following monotonicity formula:
**Theorem 4.1** (Monotonicity formula, see [3, Theorem 4.3] ).: _Let \(u\in{\mathbf{H}}^{\beta}(B_{\delta}(x_{0}))\) be a minimizer in \(B_{\delta}(x_{0})\) for the functional \(\mathcal{J}\) with \(x_{0}\in F(u)\). Then the function_
\[r\mapsto\Psi^{u}_{r}(x_{0}):=\Psi(r)=\frac{\mathcal{J}(u,B_{r}(x_{0}))}{r^{n}}- \frac{\alpha}{r^{n+1}}\int_{\partial B_{r}(x_{0})}|y|^{\beta}u^{2}\,d\mathcal{ H}^{n}\]
_is defined and nondecreasing in \((0,\delta)\), and for \(0<\rho<\sigma<\delta\), it satisfies_
\[\Psi(\sigma)-\Psi(\rho)=\int_{B_{\sigma}(x_{0})\setminus B_{\rho}(x_{0})}|y|^{ \beta}\frac{2\left|\alpha u(x)-(x-x_{0})\cdot\nabla u(x)\right|^{2}}{|x_{0}-x| ^{n+2}}dx\geq 0.\]
As a consequence, the blow-up limits are cones, in the sense of the following corollary.
**Corollary 4.2**.: _Let \(u\in{\mathbf{H}}^{\beta}(B_{\delta}(x_{0}))\) be a minimizer in \(B_{\delta}(x_{0})\) with \(x_{0}=(x_{0}^{\prime},0)\). Consider a decreasing sequence \(0<\rho_{k}\xrightarrow{k\to\infty}0\) and the associated rescalings \(u_{k}(x):=\frac{u(x_{0}+\rho_{k}x)}{\rho_{k}^{\alpha}}\). Then the Allen-Weiss density_
\[\Psi^{u}_{0}(x_{0}):=\lim_{r\searrow 0}\Psi^{u}_{r}(x_{0})\]
_is well defined. Furthermore, for every bounded open set \(D\subset{\mathbb{R}}^{n+1}\) the sequence \(u_{k}\) is bounded in \(H^{1,2}(D;|y|^{\beta})\) for \(k\geq k(D)\) and, passing to a subsequence \(u_{k_{j}}\), it converges (in the sense of Lemma 3.4) to \(u_{0}\), which is a globally defined minimizer of \(\mathcal{J}\) that is homogeneous of degree \(\alpha\)._
The proof is the same as in [38, Theorem 2.8].
**Remark 4.3** (Non-uniqueness of blow-ups).: _We call the function \(u_{0}\) appearing in Corollary 4.2 a blow-up of \(u\) at \(x_{0}\). A priori, the function \(u_{0}\) may depend on the subsequence \(u_{k_{j}}\). However, a simple scaling argument shows that for all radii \(r\geq 0\) and all blow-ups \(u_{0}\) to \(u\) at \(x_{0}\) we have_
\[\Psi^{u_{0}}_{r}(0)\equiv\Psi^{u}_{0}(x_{0}).\]
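For the reader's convenience, here is a sketch of that scaling argument; we use the relation \(\beta+2\alpha=1\) implicit in the scaling of the problem (cf. the degree count in the proof of Theorem 6.1), under which a change of variables gives, for every \(k\) and every \(r>0\),
\[\Psi^{u_{k}}_{r}(0)=\Psi^{u}_{\rho_{k}r}(x_{0}).\]
Letting \(k\to\infty\) along the blow-up subsequence, Corollary 3.6 and the locally uniform convergence give \(\Psi^{u_{0}}_{r}(0)=\lim_{k}\Psi^{u}_{\rho_{k}r}(x_{0})=\Psi^{u}_{0}(x_{0})\), the last equality by the monotonicity formula.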
### Dimension reduction
We use the homogeneity of the blow-ups to obtain dimension estimates on the points in the free boundary for which there exists a non-flat blow-up. This process is known as “dimension reduction” and has been applied to a variety of situations (see [38] for its application to the Bernoulli problem).
The first lemma shows that blow-up limits of blow-up limits have additional symmetry:
**Lemma 4.4**.: _Let \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}({\mathbb{R}}^{n+1})\) be an \(\alpha\)-homogeneous minimizer of \(\mathcal{J}\) and let \(x_{0}\in F(u)\setminus\{0\}\). Then any blow-up limit \(u_{0}\) at \(x_{0}\) is invariant in the direction of \(x_{0}\), i.e., for every \(x\in{\mathbb{R}}^{n+1}\) and every \(\lambda\in{\mathbb{R}}\),_
\[u_{0}(x+\lambda x_{0})=u_{0}(x).\]
Proof.: Let \(x\in{\mathbb{R}}^{n+1}\), and consider its decomposition \(x=\widetilde{x}+\lambda x_{0}\) with \(\widetilde{x}\in\langle x_{0}\rangle^{\bot}\). We only need to check that
\[u_{0}(x)=u_{0}(\widetilde{x}).\] (4.1)
[Figure 4.1]
Consider a ball \(B=B(0,r)\subset{\mathbb{R}}^{n+1}\) so that \(\widetilde{x},x\in B\). Let \(\{\rho_{k}\}\) be a sequence of radii converging to zero and such that \(u_{k}(x):=\frac{u(x_{0}+\rho_{k}x)}{\rho_{k}^{\alpha}}\) converges to \(u_{0}\) uniformly on \(B_{r}\). For \(k\) big enough, \({\left\|{u_{k}-u_{0}}\right\|}_{L^{\infty}(B_{r})}<\varepsilon\). Then,
\[|u_{0}(x)-u_{0}(\widetilde{x})|\leq 2\varepsilon+|u_{k}(x)-u_{k}(\widetilde{x} )|.\] (4.2)
To control the last term above, we use the homogeneity of \(u\). Writing \(P_{1}:=x_{0}+\rho_{k}\widetilde{x}\) and \(P_{2}:=x_{0}+\rho_{k}x\) we have \(\rho_{k}^{\alpha}u_{k}(\widetilde{x})=u(P_{1})\) and \(\rho_{k}^{\alpha}u_{k}(x)=u(P_{2})\). Let \(P_{3}\) be the intersection between the line through \(P_{1}\) and \(x_{0}\) and the line through the origin and \(P_{2}\) (see Figure 4.1). By homogeneity of \(u\)
\[u(P_{2})=u(P_{3})\left(\frac{|P_{2}|}{|P_{3}|}\right)^{\alpha}=u(P_{3})\left(1 \pm\frac{|P_{2}-P_{3}|}{|P_{3}|}\right)^{\alpha}.\]
Thus,
\[\rho_{k}^{\alpha}|u_{k}(x)-u_{k}(\widetilde{x})|=|u(P_{2})-u(P_{1})|\leq|u(P_{1})-u(P_{3})|+|u(P_{3})|\,\mathcal{O}(\rho_{k}).\]
By Thales’ Theorem, \(|P_{1}-P_{3}|=\frac{|P_{1}-P_{2}||P_{3}-x_{0}|}{|x_{0}|}=\mathcal{O}(\rho_{k}^ {2})\) and using the \(\dot{C}^{\alpha}\) character of \(u\) and the fact that \(u(x_{0})=0\), we get
\[\rho_{k}^{\alpha}|u_{k}(x)-u_{k}(\widetilde{x})| \leq{\left\|{u}\right\|}_{\dot{C}^{\alpha}}\left(\left|P_{1}-P_{3 }\right|^{\alpha}+\left|P_{3}\right|^{\alpha}\mathcal{O}(\rho_{k})\right)= \mathcal{O}(\rho_{k}^{2\alpha})+\mathcal{O}(\rho_{k}),\]
and (4.1) follows by (4.2) since \(\rho_{k}\to 0\). ∎
We then recall that a minimizer with a translational symmetry is actually a minimizer without that symmetry in one dimension less. This is known as “cone splitting”:
**Lemma 4.5**.: _Let \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}({\mathbb{R}}^{n+1})\) be an \(\alpha\)-homogeneous minimizer of \(\mathcal{J}\) in \({\mathbb{R}}^{n+1}\) which is invariant in the direction \(e_{n}\). Then \(\widetilde{u}(x^{\prime},y):=u(x^{\prime},0,y)\) is a minimizer of \(\mathcal{J}\) in one dimension less._
Proof.: The proof is a slight variation of [38, Proof of Lemma 3.2]. ∎
Next we provide a non-standard proof of Proposition 2.2, that is, of the fact that the trivial solution is a minimizer. We use _P5_ in a sequence of conveniently chosen blow-ups and a dimension reduction argument, based on the preceding lemmas. Note that the proposition could also be proven via a classical dimension reduction argument.
Proof of Proposition 2.2.: Consider a non-zero minimizer \(u\) with non-empty free boundary (see [7, Proposition 3.2] for its existence), choose a free boundary point \(x_{0}\in F(u)\) and consider \(u_{0}\) to be a blow-up limit at this point, which exists and is \(\alpha\)-homogeneous by Corollary 4.2. Then \(u_{0}\) is also a global minimizer by _P5_ and not null by the nondegeneracy condition.
Next we argue by induction: given \(0\leq j\leq n-2\) let \(u_{j}\) be an \(\alpha\)-homogeneous global minimizer different from \(0\) such that it is invariant in a \(j\)-dimensional linear subspace \(H_{j}\subset{\mathbb{R}}^{n}\), i.e., for every \(v\in H_{j}\) and every \(x^{\prime}\in{\mathbb{R}}^{n}\),
\[u_{j}(x^{\prime},y)=u_{j}(x^{\prime}+v,y).\]
Consider a point \(x_{j}\in F(u_{j})\setminus(H_{j}\times\{0\})\) which exists as long as \(j<n-1\) by the interior corkscrew condition and positive density, and let \(u_{j+1}\) be a blow-up limit at this point, which is again an \(\alpha\)-homogeneous global minimizer. We claim that \(u_{j+1}\) is invariant in fact in the \((j+1)\)-dimensional subspace \(H_{j}+\langle x_{j}^{\prime}\rangle\).
Indeed \(u_{j+1}\) is invariant in \(\langle x_{j}^{\prime}\rangle\) by Lemma 4.4. On the other hand, since \(u_{j}\) is invariant in \(H_{j}\), so are the functions in the blow-up sequence and, thus, \(u_{j+1}\) is invariant in \(H_{j}\). Thus, for \(v\in H_{j}\), \(v_{0}\in\langle x_{j}^{\prime}\rangle\) and \(x\in{\mathbb{R}}^{n+1}\) we get
\[u_{j+1}(x+v+v_{0})=u_{j+1}(x+v)=u_{j+1}(x),\]
and the claim follows.
Thus, after \(n-1\) steps, we obtain \(u_{n-1}\) which is an \(\alpha\)-homogeneous global minimizer invariant in an \((n-1)\)-dimensional space \(H_{n-1}\), with non-empty free boundary. Thus,
\[u_{n-1}(x^{\prime},0)=C_{n,\alpha}(x^{\prime}_{n})_{+}^{\alpha},\]
where the constant is given by _P6_. The proposition follows by Proposition A.1. ∎
### Upper semicontinuity
Next we show that Allen-Weiss’ energy at a fixed radius is continuous both with respect to the minimizer and with respect to the point:
**Lemma 4.6**.: _Let \(u_{j}\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) be minimizers of \(\mathcal{J}\) in \(\Omega\) and \(u_{j}\to u_{0}\) in the sense of Lemma 3.4. Then, for \(x_{j}\to x_{0}\) and \(r<{\rm dist}(x_{0},\partial\Omega)\),_
\[\Psi_{r}^{u_{j}}(x_{j})\xrightarrow{j\to\infty}\Psi_{r}^{u_{0}}(x_{0}).\]
Proof.: Let \(\varepsilon>0\). We want to check that for \(j\) big enough,
\[\left|\Psi_{r}^{u_{j}}(x_{j})-\Psi_{r}^{u_{0}}(x_{0})\right|\leq\varepsilon.\]
We will consider the three terms of the energy separately. For the first term,
\[\int_{B_{r}(x_{j})}|y|^{\beta}|\nabla u_{j}|^{2}-\int_{B_{r}(x_{0})}|y|^{\beta }|\nabla u_{0}|^{2}\leq r^{n}\varepsilon/3\]
follows from the \(L^{2}\) convergence of the gradients. Indeed, if \(\delta_{j}:=|x_{j}-x_{0}|\leq\delta\) for \(j\) big enough and \(B_{r+\delta}\subset\Omega\), then
\[\int_{B_{r}(x_{j})}|y|^{\beta}|\nabla u_{j}|^{2}-\int_{B_{r}(x_{0 })}|y|^{\beta}|\nabla u_{0}|^{2} \leq\int_{B_{r}(x_{j})}|y|^{\beta}\left(|\nabla u_{j}|^{2}-| \nabla u_{0}|^{2}\right)+\int_{B_{r}(x_{j})\Delta B_{r}(x_{0})}|y|^{\beta}| \nabla u_{0}|^{2}\]
\[\leq\int_{B_{r+\delta}(x_{j})}|y|^{\beta}\left(|\nabla u_{j}|^{2} -|\nabla u_{0}|^{2}\right)+\int_{(B_{r+\delta_{j}}\setminus B_{r-\delta_{j}})( x_{0})}|y|^{\beta}|\nabla u_{0}|^{2}\]
\[\leq r^{n}\varepsilon/3.\]
For the measure, we estimate
\[\left|\int_{B_{r}(x_{j})^{\prime}}\chi_{\Omega_{+}(u_{j})}dm-\int_{B_{r}(x_{0} )^{\prime}}\chi_{\Omega_{+}(u_{0})}dm\right|\leq r^{n}\varepsilon/3\]
for \(j\) big enough as a consequence of \(\chi_{\Omega_{+}(u_{j})}\to\chi_{\Omega_{+}(u_{0})}\) in \(L^{1}_{\operatorname{loc}}\) as before. The fact that
\[\alpha\left|\int_{\partial B_{r}(x_{j})}u_{j}^{2}-\int_{\partial B_{r}(x_{0})} u_{0}^{2}\right|\leq r^{n+1}\varepsilon/3\]
for \(j\) big enough is a direct consequence of the uniform convergence and the continuity of \(u_{0}\). ∎
It is well known that the limit of a decreasing sequence of continuous functions is upper semicontinuous (see [11, Theorem 1.8]). The monotonicity formula also implies the following result.
**Lemma 4.7**.: _Let \(u_{j}\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) be minimizers of \(\mathcal{J}\) in \(\Omega\) and \(u_{j}\xrightarrow{j\to\infty}u_{0}\) in the sense of Lemma 3.4, with \(x_{j}\in F(u_{j})\) for \(j\in{\mathbb{N}}\). Then, if \(x_{j}\to x_{0}\) and \(r_{j}\to 0\),_
\[\limsup_{j}\Psi_{0}^{u_{j}}(x_{j})\leq\limsup_{j}\Psi_{r_{j}}^{u_{j}}(x_{j}) \leq\Psi_{0}^{u_{0}}(x_{0}).\]
Proof.: The first inequality comes from monotonicity.
To see that
\[\limsup_{j}\Psi_{r_{j}}^{u_{j}}(x_{j})\leq\Psi_{0}^{u_{0}}(x_{0}),\]
it is enough to check that for every \(r>0\)
\[\limsup_{j}\Psi_{r_{j}}^{u_{j}}(x_{j})\leq\Psi_{r}^{u_{0}}(x_{0}),\]
or using monotonicity it suffices to show that for every \(\varepsilon>0\) and \(j\) big enough,
\[\Psi_{r}^{u_{j}}(x_{j})-\Psi_{r}^{u_{0}}(x_{0})\leq\varepsilon.\]
But this is true for \(j\) big enough because the left-hand side converges to \(0\) by the continuity of the energy from Lemma 4.6. ∎
## 5 Measure-theoretic properties
### Finite perimeter
We will show that \(\Omega^{\prime}_{+}(u)\) is a set of locally finite perimeter. Then \(F_{\rm red}(u)\) will coincide with the measure-theoretic reduced boundary by the \(\epsilon\)-regularity theorem, see [1, Sections 4.6 and 4.7].
**Definition 5.1**.: _For every \(0<\alpha<1\) we can define_
\[k^{*}_{\alpha}:=\inf\left\{k\in{\mathbb{N}}:\exists\mbox{ an $\alpha$- homogeneous minimizer $u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}({\mathbb{ R}}^{k+1})$ such that $\Sigma(u)=\{0\}$}\right\}.\]
Note that, to the best of our knowledge, there is no result showing that \(k^{*}_{\alpha}\) needs to be finite.
**Lemma 5.2**.: _Let \(u\) be an \(\alpha\)-homogeneous minimizer of \(\mathcal{J}\) in \({\mathbb{R}}^{n+1}\) with \(n<k^{*}_{\alpha}\). Then \(u\) is a rotation of the trivial solution._
See [38, Section 3] for the proof.
From the positive density properties, we know that \(k^{*}_{\alpha}\geq 2\). From the homogeneity of the blow-ups we find out that the free boundary in \({\mathbb{R}}^{1+1}\) is in fact a collection of isolated points. Later in Theorem 6.1 we will show that in fact \(k^{*}_{\alpha}\geq 3.\)
**Lemma 5.3** (Isolated singularities).: _Let \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) for \(\Omega\subset{\mathbb{R}}^{1+1}\) be a minimizer of \(\mathcal{J}\) in \(\Omega\). Then \(F(u)\) has no accumulation points in \(\Omega\)._
Proof.: Arguing by contradiction, we assume that \(F(u)\) has an interior accumulation point which, without loss of generality, we assume to be the origin.
Let \((x_{k},0)\) be a sequence of free boundary points converging to \(0\) with \(x_{k}>0\). Consider the blow-up rescaling \(u_{k}(x):=\frac{u(x_{k}x)}{x_{k}^{\alpha}}\). Note that \(u_{k}(0,0)=u_{k}(1,0)=0\). Moreover, by the interior corkscrew condition, there exists \(z_{k}\in(1/2,3/2)\) such that \(u_{k}|_{B^{\prime}_{c}(z_{k},0)}>0\), so \(u_{k}(z_{k},0)\gtrsim C\) by the non-degeneracy condition.
Choosing a subsequence, we may assume that \(z_{k}\to z_{0}\geq 1/2\), and \(u_{k}\to u_{0}\) in the sense of Lemma 3.4. In particular \(u_{0}\) is homogeneous by Corollary 4.2, which gives a contradiction: homogeneity forces \(u_{0}(z_{0},0)=z_{0}^{\alpha}u_{0}(1,0)=0\), while \(u_{0}(z_{0},0)\gtrsim C>0\) by uniform convergence. ∎
We will prove the local finiteness of the perimeter of the free boundary adapting a proof of De Silva and Savin in [17]. Our proof is essentially the same, but we repeat it for the sake of completeness.
As in [17] we say that a set \(A\subset{\mathbb{R}}^{n}\) satisfies the property (P) with exponent \(t>0\) if the following holds: for every \(x\in A\) there exists an \(r_{x}>0\) such that for every \(0<r<r_{x}\), every subset \(S\) of \(B(x,r)\cap A\) can be covered with a finite number of balls \(B(x_{i},r_{i})\) with \(x_{i}\in S\) such that
\[\sum_{i}r_{i}^{t}\leq r^{t}/2.\] (5.1)
**Lemma 5.4**.: _If \(\mathcal{H}^{t}(\Sigma(U))=0\) for some \(t>0\) and for every minimal cone \(U\) in \({\mathbb{R}}^{n+1}\), then \(\mathcal{H}^{t}(\Sigma(u))=0\) for every minimizer \(u\) of \(\mathcal{J}\) defined on \(\Omega\subset{\mathbb{R}}^{n+1}\)._
Proof.: We first show that \(\Sigma(u)\) satisfies the property (P) with exponent \(t\). If (P) does not hold we find a point \(y\in\Sigma(u)\) for which (P) is violated for a sequence \(r_{k}\to 0.\) We consider the blow-up sequence
\[u_{r_{k}}(x)=r_{k}^{-\alpha}u(y+r_{k}x).\] (5.2)
By Corollaries 3.6 and 4.2 we may assume, by taking a subsequence, that \(u_{r_{k}}\) converges to a minimal cone \(U\). By our assumptions we may cover \(\Sigma(U)\cap B(0,1)\) with a finite collection of balls \(\{B(x_{i},\frac{\rho_{i}}{10})\}_{i=1}^{k}\) with
\[\sum_{i}\rho_{i}^{t}\leq\frac{1}{2}.\]
By Lemma 3.4 we know that free boundaries converge in Hausdorff sense and thus the set \(F(u_{r_{k}})\cap B(0,1)\setminus\bigcup_{i}B(x_{i},\rho_{i}/5)\) is flat for all large \(k.\) From Theorem 2.4 we infer that all singularities must be covered by the same balls, that is, for all \(k\geq k_{0}\)
\[\Sigma(u_{r_{k}})\cap B(0,1)\subset\bigcup_{i}B(x_{i},\rho_{i}/5).\] (5.3)
After rescaling we see that \(u\) satisfies the condition for property (P) in the ball \(B(y,r_{k}),\) which is a contradiction. Therefore the property (P) holds as claimed.
Consider the set \(D_{k}:=\{y\in\Sigma(u):r_{y}\geq 1/k\}.\) Fix a point \(y_{0}\in D_{k}.\) By property (P) applied to \(r_{0}=1/k\) we find a finite cover of \(D_{k}\cap B(y_{0},r_{0})\) with balls \(B(y_{i},r_{i}),\)\(y_{i}\in D_{k},\) satisfying
\[\sum_{i}r_{i}^{t}\leq r_{0}^{t}/2.\]
Similarly, for each ball \(B(y_{i},r_{i})\) in the cover we use the property (P) to find a finite number of balls \(B(y_{ij},r_{ij}),\)\(y_{ij}\in D_{k},\) which cover \(D_{k}\cap B(y_{i},r_{i})\) and satisfy
\[\sum_{j}r_{ij}^{t}\leq r_{i}^{t}/2,\]
and thus \(\sum_{i,j}r_{ij}^{t}\leq r_{0}^{t}/4\). By repeating the argument \(N\) times we obtain a cover of \(D_{k}\cap B(y_{0},r_{0})\) by balls \(B(z_{l},r_{l})\) which satisfies
\[\sum_{l}r_{l}^{t}\leq 2^{-N}r_{0}^{t}.\]
This implies that \(\mathcal{H}^{t}(B(y_{0},r_{0})\cap D_{k})=0\) and thus \(\mathcal{H}^{t}(D_{k})=0.\) By countable additivity we obtain the claim.
∎
**Lemma 5.5**.: _If \(\mathcal{H}^{t}(\Sigma(U))=0\) for some \(t>0\) and for every minimal cone in \({\mathbb{R}}^{n+1}\), we then have that \(\mathcal{H}^{t+1}(\Sigma(V))=0\) for every minimal cone \(V\) in \({\mathbb{R}}^{(n+1)+1}.\)_
Proof.: Without loss of generality we may assume \(\Sigma(V)\neq\{0\}.\) Let \(x\in\Sigma(V)\setminus\{0\}.\) By Corollaries 3.6 and 4.2 the blow-ups at any point of \(\Sigma(V)\setminus\{0\}\) converge, up to a subsequence, to a minimal cone in dimension \((n+1)+1\). Let \(V_{x}\) be a blow-up at \(x.\) By Lemma 4.4, \(V_{x}\) is a minimal cone which is invariant in at least one direction. By Lemma 4.5 and our assumption, this implies that \(\mathcal{H}^{t+1}(\Sigma(V_{x}))=0\), and thus the singular set of every possible blow-up cone of \(V\) has zero \(\mathcal{H}^{t+1}\)-measure.
Arguing as in Lemma 5.4 we obtain \(\mathcal{H}^{t+1}(\Sigma(V))=0.\)
∎
Combining Lemmas 5.3, 5.4 and 5.5 we obtain the following corollary. Notice that we will be able to replace \(n-1\) by \(n-2\) by Theorem 6.1.
**Corollary 5.6**.: _Every minimizer satisfies_
\[\mathcal{H}^{n-1}(\Sigma(u))=0.\]
**Lemma 5.7**.: _Let \(u\in{\mathbf{H}}^{\beta}(2B)\) be a minimizer of \(\mathcal{J}\) in \(2B\) with \({\left\|{u}\right\|}_{\dot{C}^{\alpha}(2B)}<E_{0}.\) Then there exists a constant \(C\) depending on \(n\), \(\alpha\) and \(E_{0}\) and a finite collection of balls \(\{B(X_{i},r_{i})\}\) s.t._
\[\mathcal{H}^{n-1}\left((F(u)\cap B)\setminus\bigcup_{i=1}^{m}B(X_{i},r_{i}) \right)\leq C\] (5.4)
_and_
\[\sum_{i=1}^{m}r_{i}^{n-1}\leq\frac{1}{2}.\] (5.5)
Proof.: The proof is by contradiction. For each \(k\in{\mathbb{N}}\) assume we have a minimizer \(u_{k}\) with \({\left\|{u_{k}}\right\|}_{\dot{C}^{\alpha}(2B)}<E_{0}\) such that the left-hand side of (5.4) is bounded below by \(k>0\) for every collection of balls satisfying (5.5). By Lemma 3.2 we know the sequence \(u_{k}\) is bounded in \({\mathbf{H}}^{\beta}(B).\) Taking a subsequence we may assume that \(u_{k}\) converges locally uniformly to a minimizer \(u\) (see Corollary 3.6).
By Corollary 5.6 the set of singularities \(\Sigma(u)\) has \(\mathcal{H}^{n-1}\)-measure zero and thus it can be covered with finitely many balls \(B_{i}\) satisfying (5.5).
Since \(F(u)\setminus\Sigma(u)\) is a \(C^{1,\gamma}\)-surface by Theorem 2.4, using the Hausdorff convergence of the free boundaries we apply again Theorem 2.4 to see that \(F(u_{k})\cap B\setminus\bigcup_{i=1}^{M}B_{i}\) are also \(C^{1,\gamma}\)-surfaces converging to \(F(u)\cap B\setminus\bigcup_{i=1}^{M}B_{i}\) uniformly in the \(C^{1}\)-norm. In particular their \(\mathcal{H}^{n-1}\)-measures stay bounded, which contradicts the assumption that the left-hand side of (5.4) blows up as \(k\to\infty.\) ∎
The fact that the free boundary has finite perimeter follows now from the same iteration argument as [17, Lemma 5.10].
**Lemma 5.8**.: _Let \(u\) be as in Lemma 5.7. Then for some constant \(C\) depending only on \(E_{0}\),_
\[\mathcal{H}^{n-1}\left(F(u)\cap B\right)\leq C.\] (5.6)
Proof.: By Lemma 5.7 we find a finite collection of balls \(B_{r_{i}}\) such that
\[F(u)\cap B\subset\Gamma\cup\bigcup B_{r_{i}},\] (5.7)
with \(\mathcal{H}^{n-1}(\Gamma)\leq C\) and \(\sum r_{i}^{n-1}\leq\frac{1}{2}.\)
Applying Lemma 5.7 again for each ball \(B_{r_{i}}\) we have
\[F(u)\cap B_{r_{i}}\subset\Gamma_{i}\cup\bigcup B_{r_{ij}},\] (5.8)
with \(\mathcal{H}^{n-1}(\Gamma_{i})\leq Cr_{i}^{n-1}\) and \(\sum r_{ij}^{n-1}\leq\frac{1}{2}r_{i}^{n-1}.\) Moreover, we have
\[\mathcal{H}^{n-1}\left((F(u)\cap B_{1})\setminus\bigcup_{i,j}B_{r_{ij}}\right)\leq \mathcal{H}^{n-1}(\Gamma)+\sum_{i}\mathcal{H}^{n-1}(\Gamma_{i})\leq C\left(1+\sum_ {i}r_{i}^{n-1}\right)\leq C\left(1+\frac{1}{2}\right).\]
Continuing inductively, after \(k\) steps we have that
\[F(u)\cap B_{1}\subset\Gamma^{\prime}\cup\bigcup_{q=1}^{N}B_{r_{q}},\] (5.9)
with
\[\mathcal{H}^{n-1}(\Gamma^{\prime})\leq C\left(\sum_{i=0}^{k}2^{-i}\right)\leq 2C,\]
and \(\sum r_{q}^{n-1}\leq 2^{-k}.\) This gives the claim. ∎
Finally, the fact that \(\Omega_{+}^{\prime}(u)\) is a set of locally finite perimeter follows from the previous lemma and well-known results of Federer, see for example [2, Prop. 3.62] or [25, 4.5.11].
### Energy gap
Next we will check that the Allen-Weiss density can also be used to identify singular points. First let us state a useful identity for minimizers (which is also valid in the context of variational solutions in the sense of [37]).
**Lemma 5.9** (See [3, Proposition 3.4]).: _Let \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) be a minimizer to (1.1) in \(\Omega\). For every \(B\subset\subset\Omega\) we have_
\[\int_{B}|y|^{\beta}|\nabla u|^{2}=\int_{\partial B}|y|^{\beta}u\nabla u\cdot \nu\,d\mathcal{H}^{n}.\] (5.10)
Let \(u\) be a minimizer and \(x_{0}\in F(u)\). If we consider a blow-up \(u_{0}\) at \(x_{0}\), then
\[\Psi^{u}_{0}(x_{0})=\Psi^{u_{0}}_{1}(0)=\int_{B_{1}}|y|^{\beta}|\nabla u_{0}|^ {2}+m(\{u_{0}>0\}\cap{\mathbb{R}}^{n}\cap B_{1})-\alpha\int_{\partial B_{1}}|y |^{\beta}u_{0}^{2}\,d\mathcal{H}^{n}.\]
By Lemma 5.9 we get
\[\Psi^{u_{0}}_{1}(0)=\int_{\partial B_{1}}|y|^{\beta}u_{0}\nabla u_{0} \cdot\nu\,d\mathcal{H}^{n}+m(\{u_{0}>0\}\cap{\mathbb{R}}^{n}\cap B_{1})-\alpha \int_{\partial B_{1}}|y|^{\beta}u_{0}^{2}\,d\mathcal{H}^{n}.\]
Since \(\nabla u_{0}(x)\cdot\nu(x)=\frac{\alpha}{|x|}u_{0}(x)\) almost everywhere on the sphere, the first and the third terms cancel out and we obtain
\[\Psi^{u_{0}}_{1}(0)=m(\{u_{0}>0\}\cap B_{1}^{\prime}).\]
Thus, the density \(\Psi^{u}_{0}\) at a free boundary point is given by the area of the positive phase of any blow-up at the same point.
We write \(\omega_{n}:=m(B_{1}^{\prime})\) for the volume of the \(n\)-dimensional ball.
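For instance, at a point \(x_{1}\in F_{\rm red}(u)\) the blow-up is flat, i.e. a rotation of the trivial solution, whose thin positivity set is a half-ball; hence
\[\Psi^{u}_{0}(x_{1})=m\left(\{x_{n}>0\}\cap B_{1}^{\prime}\right)=\frac{\omega_{n}}{2}.\]
This is the value that appears in Proposition 5.10 below and in the proof of Corollary 5.11.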
**Proposition 5.10**.: _Every homogeneous minimizer \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}({\mathbb{R}}^{n+1})\) has density_
\[\Psi^{u}_{1}(0)=m(\{u>0\}\cap B_{1}^{\prime})\geq\frac{\omega_{n}}{2},\]
_and equality is only attained when \(u\) is the trivial minimizer._
Proof.: Let \(u\) be a homogeneous minimizer such that \(\Psi^{u}_{1}(0)\leq\frac{\omega_{n}}{2}\).
Let \(x_{1}\in F_{red}(u)\). Being a regular point, \(\Psi^{u}_{0}(x_{1})=\frac{\omega_{n}}{2}\). On the other hand, by the homogeneity and the continuity in Lemma 4.6,
\[\lim_{r\to\infty}\Psi^{u}_{r}(x_{1})=\lim_{r\to\infty}\Psi^{u}_{1}(x_{1}/r)= \Psi^{u}_{1}(0)\leq\frac{\omega_{n}}{2}.\]
Combining both assertions with the monotonicity of \(\Psi\) we get that \(\Psi^{u}_{r}(x_{1})\equiv\frac{\omega_{n}}{2}\). But using the second formula in Theorem 4.1, one can see that this holds only if \(u\) is \(\alpha\)-homogeneous with respect to \(x_{1}\). Thus, \(u\) is \(\alpha\)-homogeneous with respect to both \(0\) and \(x_{1}\), and hence invariant in the direction of \(\langle x_{1}\rangle\).
Since \(\Omega_{0}\) is a finite perimeter set (see Section 5), \(F_{red}(u)\) has full \(\mathcal{H}^{n-1}\) measure on \(F(u)\). Thus, we can find \(x_{1},\dots,x_{n-1}\in F_{red}(u)\) linearly independent. By the previous discussion \(u\) is invariant in the directions of an \((n-1)\)-dimensional subspace and, thus, it is the trivial solution. ∎
**Corollary 5.11** (Energy Gap).: _There exists \(\overline{\epsilon}>0\) depending only on \(n\) and \(\alpha\) such that every minimizer \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) and every singular point \(x_{0}\in\Sigma(u)\) satisfy_
\[\Psi^{u}_{1}(x_{0})-\frac{\omega_{n}}{2}\geq\overline{\epsilon}.\]
Proof.: Assume the conclusion to be false. Then there exist minimizers \(u_{j}\) in \(B_{1}\) with \(0\in\Sigma(u_{j})\) and
\[\Psi^{u_{j}}_{1}(0)\leq\frac{\omega_{n}}{2}+1/j.\]
Passing to a subsequence, \(u_{j}\to u_{0}\) as in Lemma 3.4. Using Lemma 4.6 we get that
\[\Psi^{u_{0}}_{1}(0)=\lim_{j}\Psi^{u_{j}}_{1}(0)\leq\frac{\omega_{n}}{2}.\]
By the monotonicity formula \(\Psi^{u_{0}}_{0}(0)\leq\Psi^{u_{0}}_{1}(0)\leq\frac{\omega_{n}}{2}\), so every blow-up of \(u_{0}\) at \(0\) is the trivial cone by Proposition 5.10, and hence \(0\notin\Sigma(u_{0})\). Since \(F(u_{j})\to F(u_{0})\) in the Hausdorff distance, using \(\epsilon\)-regularity (see Theorem 2.4) we get that \(0\notin\Sigma(u_{j})\) for \(j\) big enough, contradicting \(0\in\Sigma(u_{j})\). ∎
A priori, the value \(\overline{\epsilon}\) above depends not only on \(n\) and \(\alpha\) but also on \({\left\|{u}\right\|}_{\dot{C}^{\alpha}}\) in a neighborhood of \(x_{0}\), through the compactness argument. In the next section we will show that \(\overline{\epsilon}\) does not depend on \(u\) at all.
## 6 Full regularity in \({\mathbb{R}}^{2+1}\)
In the case of \(n=2\), we prove full regularity of the free boundary for minimizers of our functional. Note that this result does not depend on the previous sections except that we use dimension reduction and blow-ups to deduce regularity of the free boundary.
**Theorem 6.1**.: _Let \(n=2\). Then there is no singular minimal cone. In particular, the free boundary \(F(u)\) of every minimizer \(u\) is \(C^{1,\alpha}\) everywhere._
Proof.: We follow closely the arguments in [17, Theorem 5.5], building on [35]. The case \(\beta=0\) has been considered in [17]. The idea is to construct a competitor by a perturbation argument. We note at this point that the argument is two dimensional in nature and does not generalize to higher dimensions. Recall the functional under consideration:
\[\mathcal{J}(u,\Omega)=\int_{\Omega}|y|^{\beta}\,|\nabla u|^{2}+m(\left\{u>0 \right\}\cap\mathbb{R}^{n}\cap\Omega).\]
Let \(V\) be a non trivial minimal cone. Define, as in [17], the Lipschitz continuous function
\[\psi_{R}(t)=\begin{cases}1,&0\leq t\leq R,\\ 2-\frac{\ln(t)}{\ln(R)},&R\leq t\leq R^{2},\\ 0,&t\geq R^{2}.\end{cases}\] (6.1)
Define now the bi-Lipschitz change of coordinates
\[Z(x^{\prime},y)=(x^{\prime},y)+\psi_{R}(|(x^{\prime},y)|)e_{1}\]
and set \(V^{+}_{R}(Z)=V(x^{\prime},y)\). Clearly, one has
\[D_{(x^{\prime},y)}Z=\text{Id}+A\]
where \(\|A\|\leq|\psi^{\prime}_{R}(|(x^{\prime},y)|)|\ll 1\). Defining now \(V^{-}_{R}\) exactly as \(V^{+}_{R}\) but with \(\psi_{R}\) replaced by \(-\psi_{R}\), the very same computation as in [17] gives
\[\mathcal{J}(V^{+}_{R},B_{R^{2}})+\mathcal{J}(V^{-}_{R},B_{R^{2}})\leq 2 \mathcal{J}(V,B_{R^{2}})+\int_{B_{R^{2}}}|y|^{\beta}\,|\nabla V|^{2}\|A\|^{2}.\]
Now, we have
\[\int_{B_{R^{2}}}|y|^{\beta}\,|\nabla V|^{2}\|A\|^{2}=\int_{R}^{R^{2}}\int_{ \partial B_{r}}|y|^{\beta}\,|\nabla V|^{2}\|A\|^{2}\,d\mathcal{H}^{n}\,\,dr.\]
Now since \(V\) is homogeneous of degree \(\alpha\) by assumption, the function \(g(x,y)=|y|^{\beta}\,|\nabla V|^{2}\) is homogeneous of degree \(\beta+2\alpha-2=-1\). Therefore by a trivial change of variables on the sphere of radius \(r\) and using the fact that \(n=2\), we get the very same estimate
\[\int_{B_{R^{2}}}|y|^{\beta}\,|\nabla V|^{2}\|A\|^{2}\leq\frac{C}{\ln(R)} \xrightarrow{R\to\infty}0.\]
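To spell out the last bound: since \(g\) is homogeneous of degree \(-1\) and \(n=2\), one has \(\int_{\partial B_{r}}g\,d\mathcal{H}^{n}=r\int_{\partial B_{1}}g\,d\mathcal{H}^{n}\), while \(\|A\|\leq|\psi^{\prime}_{R}(r)|=\frac{1}{r\ln(R)}\) for \(R\leq r\leq R^{2}\), so that
\[\int_{R}^{R^{2}}\int_{\partial B_{r}}|y|^{\beta}\,|\nabla V|^{2}\|A\|^{2}\,d\mathcal{H}^{n}\,dr\leq\frac{1}{(\ln R)^{2}}\left(\int_{\partial B_{1}}g\,d\mathcal{H}^{n}\right)\int_{R}^{R^{2}}\frac{dr}{r}=\frac{C}{\ln(R)},\]
with \(C=\int_{\partial B_{1}}g\,d\mathcal{H}^{n}\), which is finite since \(V\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}\).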
The rest of the proof follows [17, page 1318] verbatim, since it is based only on energy considerations, and we refer the reader to it. ∎
## 7 Uniform bounds around the free boundary
The optimal regularity bound and the non-degeneracy described in Theorem 2.3 were obtained in [7] with bounds that depend on the seminorm \({\left\|{u}\right\|}_{\dot{\mathbf{H}}^{\beta}(B_{1})}\). As a consequence, this dependence propagates to many of our estimates above. In this section we use the semi-norm dependent estimates (e.g. Lemma 5.8) to prove semi-norm _independent_ non-degeneracy estimates. Re-running the arguments above yields the semi-norm independent results presented in our main Theorem 1.1.
The question of semi-norm independence may seem purely technical; however, independence allows the compactness arguments of the next section to work without additional assumptions on the minimizers involved.
### Uniform non-degeneracy
We will begin by showing uniform non-degeneracy from scratch, and then deduce the uniform Hölder character from it, reversing the usual order of the arguments in the literature.
The following lemma was shown in [3, Corollary 4.2] in a more general setting. Here we give a more basic approach based on [1, Lemma 3.4]. The main difference is that where Alt and Caffarelli could use the energy to directly control the \(H^{1}\) norm of the minimizer, in our case we need to find an alternative because the measure term of the functional is computed on the thin phase (as opposed to the \(H^{1}\) norm which is computed on the whole space). To bypass this difficulty we will use Allen’s monotonicity formula.
The drawback of our approach is that we need the ball to be centered on the free boundary, while in the original lemma, Alt and Caffarelli could center the ball in the zero phase, allowing for a slightly better result.
**Lemma 7.1**.: _Let \(u\) be a minimizer in \(B_{r}\) with \(0\in F(u)\). Then \(\sup_{\partial B_{r}}u\geq Cr^{\alpha}\) with \(C\) depending only on \(n\) and \(\alpha\)._
Proof.: By rescaling we can assume that \(r=1\).
Let \(\mathcal{L}u:=-\nabla\cdot(|y|^{\beta}\nabla u)\), consider \(\Gamma(x)=\frac{1}{|x|^{n-2\alpha}}\) which is a solution of \(\mathcal{L}\Gamma=0\) away from the origin (or \(\Gamma(x)=\log|x|\) if \(n=1\) and \(\alpha=1/2\)), and let
\[\widetilde{v}(x):=\ell\frac{\max\{1-\Gamma(2x),0\}}{1-\Gamma(2)},\]
where
\[\ell:=\sup_{\partial B_{1}}u.\]
It follows that \(u\leq\widetilde{v}\) on \(\partial B_{1}\) and thus
\[\mathcal{J}(u,B_{1})\leq\mathcal{J}(\min\{u,\widetilde{v}\},B_{1}),\]
and observing that \(\tilde{v}=0\) on \(B_{1/2}\) and \(\tilde{v}>0\) on the annulus \(A:=B_{1}\setminus B_{1/2}\), we get
\[\int_{B_{\frac{1}{2}}}|y|^{\beta}|\nabla u|^{2}+m\left(B_{\frac{1 }{2},+}(u)\right) \leq\int_{A}|y|^{\beta}(|\nabla(\min\{u,\widetilde{v}\})|^{2}-| \nabla u|^{2})+m(A_{+}^{\prime}(\min\{u,\widetilde{v}\}))-m(A_{+}^{\prime}(u))\]
\[\leq-2\int_{A}|y|^{\beta}\nabla\max\{u-\widetilde{v},0\}\cdot \nabla\widetilde{v}.\]
By Green’s theorem, writing \(d\sigma=|y|^{\beta}d\mathcal{H}^{n}\) we get
\[\int_{B_{\frac{1}{2}}}|y|^{\beta}|\nabla u|^{2}+m\left(B_{\frac{1 }{2},+}(u)\right)\leq-2\int_{\partial B_{\frac{1}{2}}}u\partial_{\nu} \widetilde{v}\,d\sigma=C_{n,\alpha}\ell\int_{\partial{B_{\frac{1}{2}}}}u\,d\sigma,\] (7.1)
with \(C_{n,\alpha}>0\).
Using the monotonicity formula and Proposition 5.10, we get that \(\Psi^{u}_{r}(0)\geq\Psi^{u}_{0}(0)\geq\frac{\omega_{n}}{2}\) and, therefore
\[\frac{\alpha}{r}\int_{\partial{B_{r}}}u^{2}\,d\sigma+\frac{\omega_{n}r^{n}}{2}\leq\mathcal{J}_{r}(u),\] (7.2)
so using Hölder’s inequality and the AM-GM inequality we obtain
\[\int_{\partial{B_{\frac{1}{2}}}}u\,d\sigma\leq\left(\int_{\partial{B_{\frac{1} {2}}}}u^{2}\,d\sigma\right)^{\frac{1}{2}}C_{n,\alpha}^{\frac{1}{2}}\leq\frac{1 }{2}\int_{\partial{B_{\frac{1}{2}}}}u^{2}\,d\sigma+\frac{1}{2}C_{n,\alpha}\leq C _{n,\alpha}\mathcal{J}_{\frac{1}{2}}(u),\] (7.3)
where in the last inequality we used (7.2) with \(r=\frac{1}{2}\), which gives both \(\int_{\partial B_{\frac{1}{2}}}u^{2}\,d\sigma\leq\frac{1}{2\alpha}\mathcal{J}_{\frac{1}{2}}(u)\) and \(\mathcal{J}_{\frac{1}{2}}(u)\geq\frac{\omega_{n}}{2^{n+1}}>0\).
Combining (7.1), (7.2) and (7.3) we obtain
\[0<\mathcal{J}_{\frac{1}{2}}(u)\leq C_{n,\alpha}\ell\mathcal{J}_{\frac{1}{2}}(u),\]
and therefore \(\ell\geq C_{n,\alpha}^{-1}\).
∎
To show averaged non-degeneracy we need a mean value principle which is well-known, but we include its proof for the sake of completeness.
**Lemma 7.2** (Mean value principle).: _Let \(u\in H^{1}(\beta,\Omega)\) be a weak solution to \(\mathcal{L}u:=\nabla\cdot(|y|^{\beta}\nabla u)=0\) in \(\Omega\), and let \(x_{0}\in{\mathbb{R}}^{n}\times\{0\}\) with \(B_{r}(x_{0})\subset\Omega\). Then_
\[u(x_{0})=\fint_{B_{r}}u\,d\omega\]
_where the mean is taken with respect to the measure \(d\omega:=|y|^{\beta}\,dx\)._
Proof.: Changing variables, we have that
\[A(\rho):=\frac{1}{\rho^{\beta+n+1}}\int_{B_{\rho}(x_{0})}|y|^{\beta}u(x)dx= \int_{B_{1}}|y|^{\beta}u(\rho x+x_{0})dx.\]
On the other hand, set
\[\widetilde{A}(\rho) :=\int_{B_{1}}|y|^{\beta}\nabla u(\rho x+x_{0})\cdot x\,dx\]
\[=\int_{B_{\rho}(x_{0})}\left(\frac{|y|}{\rho}\right)^{\beta}\frac {\nabla u(x)\cdot(x-x_{0})}{\rho}\frac{dx}{\rho^{n+1}}=\frac{1}{2\rho^{\beta+n +2}}\int_{B_{\rho}(x_{0})}|y|^{\beta}\nabla u(x)\cdot\nabla|x-x_{0}|^{2}\,dx.\]
Since \(u\) is a weak solution to \(\nabla\cdot(|y|^{\beta}\nabla u)=0\) in \(\Omega\), we can apply Green’s formula twice to obtain
\[\widetilde{A}(\rho) =\frac{1}{2\rho^{\beta+n+2}}\int_{\partial B_{\rho}(x_{0})}|x-x_{ 0}|^{2}|y|^{\beta}\nabla u(x)\cdot\nu\,dx=\frac{1}{2\rho^{\beta+n}}\int_{ \partial B_{\rho}(x_{0})}|y|^{\beta}\nabla u(x)\cdot\nu\,dx=0.\]
Since \(u\) is absolutely continuous on lines (see [23, Theorem 4.21]), for almost every \(x\) we have \(\int_{\rho}^{r}\nabla u(tx+x_{0})\cdot x\,dt=u(rx+x_{0})-u(\rho x+x_{0})\). Applying Fubini’s Theorem we get
\[\int_{\rho}^{r}\widetilde{A}(t)\,dt=\int_{B_{1}}|y|^{\beta}\int_{\rho}^{r} \nabla u(tx+x_{0})\cdot x\,dt\,dx=\int_{B_{1}}|y|^{\beta}(u(rx+x_{0})-u(\rho x +x_{0}))\,dx=A(r)-A(\rho).\]
So \(A(r)-A(\rho)=0\) for all \(\rho<r\).
On the other hand, taking the mean with respect to the measure \(d\omega:=|y|^{\beta}\,dx\) and using the continuity of \(u\) (see [27, Theorem 2.3.12]) we obtain
\[\left|u(x_{0})-\frac{1}{\omega(B_{1})}\lim_{\rho\to 0}A(\rho)\right|=\lim_{ \rho\to 0}\frac{1}{\omega(B_{\rho}(x_{0}))}\left|\int_{B_{\rho}(x_{0})}(u(x_{0 })-u(x))\,d\omega(x)\right|\leq\lim_{\rho\to 0}o_{\rho\to 0}(1)=0.\]
Since \(A\) is constant on \((0,r]\), it follows that \(u(x_{0})=\frac{1}{\omega(B_{1})}A(r)=\fint_{B_{r}}u\,d\omega\), as claimed.
∎
**Corollary 7.3**.: _Let \(u\) be a minimizer in \(B_{r}\) with \(0\in F(u)\) and let \(d\sigma=|y|^{\beta}d\mathcal{H}^{n}\). Then \(\fint_{\partial B_{r}}u\,d\sigma\geq Cr^{\alpha}\) with \(C\) depending only on \(n\) and \(\alpha\)._
Proof.: Let \(v\) be the \(\mathcal{L}-\)harmonic replacement of \(u\) in \(B_{r}\), that is, the solution to
\[\begin{cases}\mathcal{L}v=0&\mbox{ in }B_{r},\\ v\equiv u&\mbox{ on }\partial B_{r},\end{cases}\] (7.4)
see [29, Theorem 3.17]. By the mean value principle (Lemma 7.2), \(v(0)=\fint_{B_{\rho}}v\,d\omega\) for every \(\rho<r\); differentiating in \(\rho\), we get \(v(0)=\fint_{\partial B_{\rho}}v\,d\sigma\) and, letting \(\rho\to r\), \(v(0)=\fint_{\partial B_{r}}u\,d\sigma\). By the comparison principle and the Harnack inequality we get that
\[Cr^{\alpha}\leq\sup_{B_{r/2}}u\leq\sup_{B_{r/2}}v\leq C\fint_{\partial B_{r}}u \,d\sigma.\] (7.5)
∎
### Behavior of the distributional fractional Laplacian
Next we use an idea of [1] and investigate the behavior of the distributional \(\alpha\)-Laplacian of the minimizer introduced in Section 3. As mentioned in the introduction, in [1] this investigation immediately yields that the positivity set is a set of locally finite perimeter, and more precisely, that it is Ahlfors regular of the correct dimension. However, the nonlocal nature of this problem indicates that the distributional fractional Laplacian may not be supported on the free boundary and thus we cannot expect to immediately gain such strong geometric information.
First we can bound the growth of the fractional Laplacian measure around a free boundary point. Note that this growth is the natural counterpart to the upper Ahlfors regularity in the case of Alt-Caffarelli minimizers.
**Theorem 7.4**.: _Let \(u\in{\mathbf{H}}^{\beta}(B_{2r}(x_{0}))\) be a minimizer of \(\mathcal{J}\) in \(B_{2r}(x_{0})\), and let \(x_{0}\in F(u)\). Then, we have_
\[\lambda(B_{r}(x_{0}))\leq Cr^{n-\alpha}.\]
_In particular \(\lambda(F(u))=0\)._
A glance at (2.1) will convince the reader that these estimates are sharp, for they cannot be improved even in the case of the trivial solution.
Proof.: Without loss of generality we may assume that \(x_{0}=0\). Let \(\mathcal{L}u:=-\nabla\cdot(|y|^{\beta}\nabla u)\) and let \(v\) be the \(\mathcal{L}\)-harmonic replacement of \(u\) in \(B_{2r}\), see (7.4). Write \(d\sigma=|y|^{\beta}d\mathcal{H}^{n}\) and \(M:=\fint_{\partial B_{2r}}u\,d\sigma\). By Harnack’s inequality (see [7], for instance) and the mean value principle in Lemma 7.2,
\[\inf_{B_{r}}v\geq Cv(0)=CM.\]
We have that
\[\lambda(B_{r})=\int_{B_{r}}d\lambda\leq\frac{1}{CM}\int_{B_{r}}vd\lambda.\]
Since \(u\equiv 0\) in the support of \(\lambda\) and \(u\) is \(\mathcal{L}\)-subharmonic (see [1, Lemma 2.2]) we get
\[\int_{B_{r}}vd\lambda=\int_{B_{r}}(v-u)d\lambda\leq\int_{B_{2r}}(v-u)d\lambda.\]
By the properties of the measure \(\lambda\), we obtain
\[\int_{B_{2r}}(v-u)d\lambda=-\int_{B_{2r}}|y|^{\beta}\nabla(v-u)\cdot\nabla u= \int_{B_{2r}}|y|^{\beta}\left(|\nabla u|^{2}-|\nabla v|^{2}\right),\]
and using the definition of the functional and the fact that \(u\) is a minimizer, we get
\[\int_{B_{2r}}|y|^{\beta}\left(|\nabla u|^{2}-|\nabla v|^{2}\right)=\mathcal{J} (u,B_{2r})-m(B_{2r}^{+}(u))-\mathcal{J}(v,B_{2r})+m(B_{2r}^{\prime})\leq Cr^{n}.\]
All together, we have that
\[\lambda(B_{r})\leq\frac{1}{CM}Cr^{n},\]
and, since the averaged non-degeneracy (see Corollary 7.3) implies that \(M\geq Cr^{\alpha}\), we can conclude the proof of the first statement.
To show the second one, note that since the free boundary has locally finite \((n-1)\)-dimensional Hausdorff measure, given a set \(E\subset F(u)\) and \(k\in{\mathbb{N}}\) we can find a collection of balls \(I_{k}=\{B^{k}_{i}\}_{i}\) such that
\[E\subset\bigcup_{B\in I_{k}}B,\quad\quad\sup_{B\in I_{k}}r(B)\leq 1/k\quad \quad\mbox{and}\quad\quad\sum_{B\in I_{k}}r(B)^{n-1}\leq 2\mathcal{H}^{n-1}(E).\]
Thus,
\[\lambda(E)\leq\sum_{B\in I_{k}}\lambda(B)\lesssim\sum_{B\in I_{k}}r(B)^{n- \alpha}\leq\sup_{B\in I_{k}}r(B)^{1-\alpha}\sum_{B\in I_{k}}r(B)^{n-1} \xrightarrow{k\to\infty}0.\]
∎
Next we study the measure away from the free boundary. We should emphasize here that even though the estimates in Lemma 7.5 and Theorem 7.6 depend on \(E_{0}\), they will be used to remove the dependence of our other estimates on \(E_{0}\). More precisely, Theorem 7.6 will play a role in establishing the continuity of the Green function in Lemma 7.9. This qualitative fact is used to prove the quantitative uniform Hölder character in Theorem 7.8.
After proving Theorem 7.8, we may drop the hypothesis \({\left\|{u}\right\|}_{{\mathbf{H}}^{\beta}(B_{2})}\leq E_{0}\) from both Lemma 7.5 and Theorem 7.6.
**Lemma 7.5**.: _If \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(B_{2})\) is a minimizer of \(\mathcal{J}\) in the ball \(B_{2}\) with \({\left\|{u}\right\|}_{{\mathbf{H}}^{\beta}(B_{2})}\leq E_{0}\) and \(0\in F(u)\), then for every \(x_{0}=(x^{\prime},0)\in B_{1,0}(u)\) we get_
\[\lim_{y\to 0}|y|^{\beta}|u_{y}(x^{\prime},y)|\approx C{\rm dist}(x_{0},F(u))^{ -\alpha}.\]
_Moreover, for every ball \(B\) centered at \({\mathbb{R}}^{n}\times\{0\}\) with \(B^{\prime}\subset\subset B_{1,0}(u)\), we have that_
\[|y|^{\beta}|u_{y}(x^{\prime},y)|\leq C{\rm dist}(x,F(u))^{-\alpha},\]
_for \(|y|<C_{B}{\rm dist}(x,F(u))\), where the constant \(C_{B}\) may depend on \(B\)._
Proof.: Let \(u\) be a minimizer, and let \(B:=B_{r}(x_{0})\) with \(B^{\prime}\subset B_{1,0}(u)\). By [33, Lemma 2.2], we can write \(u(x^{\prime},y)=|y|^{1-\beta}g(x^{\prime})+\mathcal{O}(y^{2})\), where \(g\) is a \(C^{1+\beta}(\frac{1}{2}B^{\prime})\) function, with a uniform control on the error term in terms of \({\left\|{u}\right\|}_{L^{2}(B,|y|^{\beta})}\). In particular, \(\lim_{y\to 0}|y|^{\beta-1}u(x^{\prime},y)=g(x^{\prime})\).
Let us define
\[\widetilde{u}(x^{\prime},y):=\begin{cases}u(x^{\prime},y)&\mbox{if }y\geq 0\\ -u(x^{\prime},-y)&\mbox{if }y<0.\end{cases}\] (7.6)
It is clear that \(\mathcal{L}\widetilde{u}\equiv 0\) in \(B\). According to [36, Lemma 3.26, Corollary 3.29], \(v(x^{\prime},y)=|y|^{\beta}y^{-1}\widetilde{u}(x^{\prime},y)\) is an even \(C^{\infty}(\frac{1}{2}B)\) function in \({\mathbf{H}}^{2-\beta}(B)\) (note that \(1<2-\beta<3\) is out of the usual range of \(\beta\)) and satisfying \(\nabla\cdot(|y|^{2-\beta}\nabla v)=0\). The mean value principle (see Lemma 7.2) applies also to this case, so
\[g(x_{0}^{\prime})=v(x_{0})=\frac{1}{\int_{\frac{1}{2}B}|y|^{2-\beta}}\int_{ \frac{1}{2}B}|y|^{2-\beta}v(x)=C\frac{1}{r^{2-\beta+n+1}}\int_{\frac{1}{2}B}|y |u(x),\]
and using _P1_-_P3_ , if \(r={\rm dist}(x_{0},F(u))\) we get
\[g(x_{0}^{\prime})=v(x_{0})\approx Cr^{\beta-2+1+\alpha}=Cr^{-\alpha}.\]
On the other hand, on the upper half plane we have \(u_{y}=(y^{1-\beta}v)_{y}=(1-\beta)y^{-\beta}v+y^{1-\beta}v_{y}\), so
\[y^{\beta}u_{y}(x^{\prime},y)=(1-\beta)v(x^{\prime},y)+yv_{y}(x^{\prime},y),\]
and
\[\lim_{y\to 0^{+}}y^{\beta}u_{y}(x^{\prime},y)=(1-\beta)g(x^{\prime})\approx r^ {-\alpha},\]
the limit being uniform on compact subsets of \(B\). ∎
**Theorem 7.6**.: _If \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(B_{2})\) is a minimizer of \(\mathcal{J}\) in the ball \(B_{2}\) with \({\left\|{u}\right\|}_{{\mathbf{H}}^{\beta}(B_{2})}\leq E_{0}\), then the measure \(\lambda\) is absolutely continuous with respect to the Lebesgue measure, and for \(m\)-almost every \(x\in B_{1}^{\prime}(u)\) we have that_
\[\frac{d\lambda}{dm}(x)=2\lim_{y\to 0}|y|^{\beta}u_{y}(x^{\prime},y)\approx\chi _{B_{1,0}(u)}(x){\rm dist}(x,F(u))^{-\alpha},\]
_with constants depending on \(n\), \(\alpha\) and \(E_{0}\)._
Proof.: By Theorem 7.4 we only need to show absolute continuity in \(B_{1,0}(u)\cup B_{1,+}^{\prime}(u)\). For \(x=(x^{\prime},0)\in B_{1,+}^{\prime}(u)\) by [9, Lemma 4.2] we have that
\[\lim_{y\to 0}|y|^{\beta}u_{y}(x^{\prime},y)=0,\]
and, for \(x\in B_{1,0}(u)\) we have seen in Lemma 7.5 that
\[\lim_{y\to 0}|y|^{\beta}u_{y}(x^{\prime},y)\approx{\rm dist}(x,F(u))^{-\alpha},\]
showing the second part of the statement.
Consider a ball \(B_{r}(x_{0})\) with \(x_{0}\in{\mathbb{R}}^{n}\times\{0\}\) and a collection of even smooth functions \(\chi_{B_{r}}\leq\psi_{k}\leq\chi_{B_{r+\frac{1}{k}}}\). Then
\[\lambda(B_{r})\leq-\int|y|^{\beta}\nabla u\cdot\nabla\psi_{k}\leq\lambda(B_{r+ \frac{1}{k}}),\] (7.7)
and for every \(\varepsilon>0\) we use the Green’s theorem to get
\[-\int|y|^{\beta}\nabla u\cdot\nabla\psi_{k}=-\int_{|y|\leq\varepsilon}|y|^{ \beta}\nabla u\cdot\nabla\psi_{k}-\int_{|y|=\varepsilon}|y|^{\beta}\psi_{k} \nabla u\cdot\nu\,dm.\]
Using the symmetry properties and taking limits,
\[-\int|y|^{\beta}\nabla u\cdot\nabla\psi_{k}=2\lim_{\varepsilon\to 0}\int \varepsilon^{\beta}\psi_{k}(x^{\prime},\varepsilon)u_{y}(x^{\prime}, \varepsilon)\,dm(x^{\prime}).\] (7.8)
Next we want to apply the dominated convergence theorem. Let us begin by considering a ball \(B_{r}(x_{0})\subset B_{1}\) centered in the zero phase, with \({\rm dist}(B^{\prime}_{r}(x_{0}),F(u))\geq 2r\). In this case, by Lemma 7.5 we have
\[\varepsilon^{\beta}u_{y}(x^{\prime},\varepsilon)\lesssim r^{-\alpha},\] (7.9)
with constants depending perhaps on \(u\) and \(B_{r}\) as well.
If instead \(B^{\prime}_{r}(x_{0})\subset\subset B_{1,+}^{\prime}(u)\), by [36, Theorem 3.28], \(u\) is an even \(C^{\infty}\) function on \(B^{\prime}_{r}(x_{0})\), so \(|y|^{\beta}u_{y}=\mathcal{O}(|y|^{1+\beta}).\) Thus
\[\varepsilon^{\beta}u_{y}(x^{\prime},\varepsilon)\lesssim r^{2-2\alpha}.\] (7.10)
In both cases, the dominated convergence theorem applies and
\[\lim_{\varepsilon\to 0}\int_{B_{r+\frac{1}{k}}\cap\{y=\varepsilon\}} \varepsilon^{\beta}\psi_{k}u_{y}\,dm=\int_{B_{r+\frac{1}{k}}^{\prime}}\psi_{k} \lim_{\varepsilon\to 0}(\varepsilon^{\beta}u_{y})\,dm,\]
and by (7.7) and (7.8) we obtain
\[\lambda(B_{r})\leq 2\int_{B_{r+\frac{1}{k}}^{\prime}}\psi_{k}\lim_{\varepsilon \to 0}(\varepsilon^{\beta}u_{y})\,dm\leq\lambda(B_{r+\frac{1}{k}}).\]
In particular \(\lim_{\varepsilon\to 0}(\varepsilon^{\beta}u_{y})\in L^{1}_{\operatorname{loc} }(B_{1,0}(u)\cup B_{1,+}^{\prime}(u))\) and taking limits in \(k\) we get
\[\lambda(B_{r})=2\int_{B_{r}^{\prime}}\lim_{\varepsilon\to 0}(\varepsilon^{ \beta}u_{y})\,dm.\]
∎
A consequence of our control of the behavior of \(\lambda\) is that we can establish the existence of exterior corkscrews. We should note that exterior corkscrews can also be obtained by a purely geometric argument, given the non-degeneracy and positive density of Theorem 2.3 (see, e.g., the proof of Proposition 10.3 in [20]).
**Corollary 7.7**.: _If \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(B_{2})\) is a minimizer of \(\mathcal{J}\) in \(B_{2}\) with \({\left\|{u}\right\|}_{{\mathbf{H}}^{\beta}(B_{2})}\leq E_{0}\), then \(B_{1,+}^{\prime}(u)\) satisfies the exterior corkscrew condition, i.e. there exists a constant \(C_{1}\) such that for every \(x\in F(u)\) and every \(0<r<{\rm dist}(x,\partial B_{1})\) one can find \(x_{0}\in B_{r}(x)\) so that_
\[B(x_{0},C_{1}r)\cap B_{1,+}^{\prime}(u)=\emptyset.\]
Proof.: This is a consequence of Theorems 7.4 and 7.6, and the positive density condition for the zero phase. Indeed, given a ball \(B_{r}\subset{\mathbb{R}}^{n+1}\), combining both theorems we get
\[r^{n-\alpha}\gtrsim\lambda(B_{1,0}(u)\cap B_{r})\geq C_{E_{0}}\int_{B_{1,0}(u)\cap B_{r}}{\rm dist}(x,F(u))^{-\alpha}\geq C\left(\sup_{B_{1,0}(u)\cap B_{r}}{\rm dist}(x,F(u))\right)^{-\alpha}|B_{1,0}(u)\cap B_{r}|,\]
and the positive density condition implies that
\[|B_{1,0}(u)\cap B_{r}|\geq C_{E_{0}}r^{n}.\]
Thus,
\[\sup_{B_{1,0}(u)\cap B_{r}}{\rm dist}(x,F(u))\geq C_{E_{0}}r,\]
which is equivalent to the exterior corkscrew condition. ∎
### Uniform Hölder character
The uniform non-degeneracy of Section 7.1 lets us conclude uniform control on the Hölder norm of \(u\).
**Theorem 7.8**.: _Let \(u\) be a minimizer of \(\mathcal{J}\) in \(B_{r}\) with \(0\in F(u)\). Then \(|u(x)|\leq C|x|^{\alpha}\) for every \(x\in\partial B_{r/2}\) with \(C\) depending only on \(n\) and \(\alpha\)._
Proof.: Again we set \(v\) to be the \(\mathcal{L}\)-harmonic replacement of \(u\) inside of \(B_{r}\) as in (7.4). Let \(\widetilde{u}:=v-u\), so that
\[\mathcal{L}\widetilde{u}=\mathcal{L}v-\mathcal{L}u=-\mathcal{L}u=\nabla\cdot(|y|^{\beta}\nabla u)=\lambda,\]
and \(\widetilde{u}\in H^{1,2}_{0}(B_{r};|y|^{\beta})\).
Consider the Green function \(G:B_{r}\times B_{r}\to{\mathbb{R}}\) such that \(\mathcal{L}G(\cdot,z)=\delta_{z}\), and \(G(\cdot,z)\in H^{1,2}_{\operatorname{loc}}(\overline{B_{r}}\setminus\{z\})\) with null trace on \(\partial B_{r}\) (see [26, Proposition 2.4]). By [26, Proposition 2.1, Lemma 2.7] there exists \(p_{0}>1\) so that \(\widetilde{u}\) is the unique function in \(H^{1,p_{0}}_{0}(B_{r};|y|^{\beta})\) such that \(\mathcal{L}\widetilde{u}=\lambda\), and moreover
\[\widetilde{u}(z)=\int_{B_{r}}G(z,x)\,d\lambda(x),\] (7.11)
for almost every \(z\in B_{r}\).
Below, in Lemma 7.9, we will see that the equality (7.11) is in fact valid for every \(z\in B_{r/4}\), that is, \(\widetilde{u}=\int_{B_{r}}G(\cdot,x)\,d\lambda(x)\). In particular
\[v(0)=\widetilde{u}(0)=\int_{B_{r}}G(0,x)\,d\lambda(x).\]
Next we use the following estimate (see [26, Theorem 3.3]): let \(z,x\in B_{r/4}\). Then
\[G(z,x)\approx\int_{|x-z|}^{r}\frac{s\,ds}{w(B(x,s))},\]
where \(w\) is the \(A_{2}\) weight \(w(x)=|y|^{\beta}\). Computing, for \(x=(x^{\prime},y)\) we obtain
\[w(B(x,s))\approx s^{n}\int_{y-s}^{y+s}|t|^{\beta}\,dt\approx s^{n+1}\max\{|y|,s\}^{\beta}.\]
First we assume that \(n-2\alpha>0\). Thus, if \(x\in B_{r/4}^{\prime}\) then
\[G(z,x)\approx\int_{|x-z|}^{r}s^{-n-\beta}\,ds\approx|x-z|^{-n-\beta+1}=|x-z|^{ 2\alpha-n}.\] (7.12)
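Concretely, since \(1-n-\beta=2\alpha-n<0\) in the case under consideration and \(|x-z|\leq r/2\), the lower limit dominates (a one-line check, recorded for the reader's convenience):
\[\int_{|x-z|}^{r}s^{-n-\beta}\,ds=\frac{|x-z|^{1-n-\beta}-r^{1-n-\beta}}{n+\beta-1}\geq\frac{1-2^{2\alpha-n}}{n-2\alpha}\,|x-z|^{2\alpha-n}.\]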
Note that \(\lambda(B_{r})\leq Cr^{n-\alpha}\) by Theorem 7.4. Thus, writing \(A_{t,s}:=B_{s}\setminus B_{t}\), we have that
\[v(0)=\int_{B_{r}}G(0,x)\,d\lambda(x)\leq\int_{cr^{2\alpha-n}}^{\infty}\lambda \left(\{x\in B_{r/4}:G(0,x)>t\}\right)dt+\int_{A_{r/4,r}}G(0,x)\,d\lambda(x).\]
By the strong maximum principle, the Green function in the annulus is bounded by \(Cr^{2\alpha-n}\). This fact, together with Theorem 7.4, implies that
\[v(0)\leq\int_{cr^{2\alpha-n}}^{\infty}\lambda\left(B_{Ct^{\frac{-1}{(n-2\alpha )}}}\right)dt+Cr^{\alpha}\leq C\int_{cr^{2\alpha-n}}^{\infty}t^{-\frac{n- \alpha}{n-2\alpha}}dt+Cr^{\alpha}=Cr^{\alpha}.\]
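The last integral is elementary: writing \(p:=\frac{n-\alpha}{n-2\alpha}>1\), we have \(1-p=-\frac{\alpha}{n-2\alpha}\), so
\[\int_{cr^{2\alpha-n}}^{\infty}t^{-p}\,dt=\frac{\left(cr^{2\alpha-n}\right)^{1-p}}{p-1}=\frac{n-2\alpha}{\alpha}\,c^{-\frac{\alpha}{n-2\alpha}}\,r^{\alpha}.\]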
By the mean value principle we conclude that
\[\fint_{\partial B_{r}}v\,d\sigma\leq Cr^{\alpha},\]
where \(d\sigma=|y|^{\beta}d\mathcal{H}^{n}\). The theorem follows by observing that, as in (7.5), the mean of \(v\) dominates \(u\): \(\sup_{\partial B_{r/2}}u\leq\sup_{\partial B_{r/2}}v\leq C\fint_{\partial B_{r}}v\,d\sigma\).
In case \(n-2\alpha=0\), which could only happen for \(n=1\) and \(\alpha=1/2\), estimate (7.12) reads as
\[G(z,x)\approx\log\left(\frac{r}{|x-z|}\right),\]
and the proof follows the same steps.
In case \(n-2\alpha<0\), then estimate (7.12) reads as
\[G(z,x)\approx r^{n-2\alpha},\]
and the estimate is even better compared to the above.
∎
**Lemma 7.9**.: \(\int_{B_{r}}G(z,x)\,d\lambda(x)\) _is continuous in \(z\in B_{r/4}\)._
Proof.: Let \(\varepsilon<r/2\) and let \(z_{1},z_{2}\in B_{r/4}\), with \(|z_{1}-z_{2}|\leq\varepsilon/2\). Then
\[\int_{B_{r}}|G(z_{1},x)-G(z_{2},x)|\,d\lambda(x) \leq\int_{B_{r}\setminus B_{\varepsilon}(z_{1})}|G(z_{1},x)-G(z_{ 2},x)|\,d\lambda(x)\]
\[\quad+\int_{B_{\varepsilon}(z_{1})}G(z_{1},x)d\lambda(x)+\int_{B_ {\varepsilon}(z_{1})}G(z_{2},x)d\lambda(x).\] (7.13)
Next we use (7.12) and Theorems 7.4 and 7.6. By decomposing the domain on dyadic annuli, in case \(n-2\alpha>0\) we get
\[\int_{B_{\varepsilon}(z_{1})}G(z_{1},x)d\lambda(x) \leq\sum_{j\leq 0}\int_{A_{2^{j-1}\varepsilon,2^{j}\varepsilon}(z _{1})}G(z_{1},x)d\lambda(x)\lesssim\sum_{j\leq 0}\lambda(B_{2^{j}\varepsilon}( z_{1}))(2^{j-1}\varepsilon)^{2\alpha-n}\lesssim\varepsilon^{\alpha}\sum_{j\leq 0 }2^{j\alpha}.\]
In case \(n-2\alpha=0\) we obtain \(\varepsilon^{\alpha}\sum_{j\leq 0}2^{j\alpha}\log\left(\frac{r}{2^{j}\varepsilon}\right)\) on the right-hand side instead, and in case \(n-2\alpha<0\) we obtain \(\varepsilon^{n-\alpha}r^{2\alpha-n}\sum_{j\leq 0}2^{j(n-\alpha)}\). In every case, fixing \(\varepsilon\) small enough, this term can be made as small as desired. The same happens with the last term on the right-hand side of (7.13).
On the other hand, by [27, Theorem 2.3.12] the Green function is uniformly continuous on the set \(\{(z,x)\in B_{r}\times B_{r}:|z-x|>\varepsilon\}\), so \(|G(z_{1},x_{1})-G(z_{2},x_{2})|\leq\delta_{\varepsilon}(|z_{1}-z_{2}|+|x_{1}-x_{2}|)\) with \(\delta_{\varepsilon}(t)\xrightarrow{t\to 0}0\). Thus,
\[\int_{B_{r}\setminus B_{\varepsilon}(z_{1})}|G(z_{1},x)-G(z_{2},x)|\,d\lambda( x)\leq\delta_{\varepsilon}(|z_{1}-z_{2}|)\lambda(B_{r})\to 0.\]
Assuming that \(|z_{1}-z_{2}|\) is small enough, we obtain that \(\int_{B_{r}}|G(z_{1},x)-G(z_{2},x)|\,d\lambda(x)\) is as small as wanted and the claim follows. ∎
**Remark 7.10**.: _In light of Theorem 7.8, arguing as in [7, Theorem 1.1] we obtain that every minimizer \(u\) in a ball \(B_{r}\) with \(0\in F(u)\) has uniform \(C^{\alpha}\) character in \(B_{r/2}\). By the Caccioppoli inequality (see Section 3.1) we also obtain the same for the \({\mathbf{H}}^{\beta}\) norm. Moreover, using [7, Theorem 1.2] we can find interior corkscrew points with constants not depending on these norms. This allows us to remove the a priori dependence on \(\|u\|_{{\mathbf{H}}^{\beta}}\) from all of our results above._
### Lower estimates for the distributional fractional Laplacian
Next we bound the growth of the measure around a free boundary point from below. None of these results will be used in the present paper, but we include them to give a complete picture of the tools under consideration.
**Theorem 7.11**.: _Let \(u\in{\mathbf{H}}^{\beta}(B_{2r})\) be a minimizer of \(\mathcal{J}\) in \(B_{2r}\) such that \(0\in F(u)\). Then we have_
\[\lambda(B_{r})\geq Cr^{n-\alpha}.\]
Proof.: Let \(\mathcal{L}u:=-\nabla\cdot(|y|^{\beta}\nabla u)\) and let \(v\) be the \(\mathcal{L}\)-harmonic replacement of \(u\) in \(B_{r}\) (see (7.4)). Let \(\widetilde{u}:=v-u\) and consider the Green function \(G:B_{r}\times B_{r}\to{\mathbb{R}}\) as in the proof of Theorem 7.8.
Let \(0<\kappa<1\) be a constant to be fixed later. By _P1_-_P3_ in Theorem 2.3 there exists a point \(z_{0}\in B_{\kappa r}\) with
\[u(z_{0})\approx(\kappa r)^{\alpha},\] (7.14)
with constants depending only on \(n\) and \(\alpha\) by Remark 7.10. By _P1_ there is a constant \(c\) such that for every \(z\in B(z_{0},c\kappa r)\) we have that \(u(z)\approx(\kappa r)^{\alpha}\). Since \(\lambda\) is supported on the zero phase of \(u\), the ball \(B(z_{0},c\kappa r)\) is away from its support, and
\[\widetilde{u}(z)=\int_{B_{r}\setminus B(z_{0},c\kappa r)}G(z,x)\,d\lambda(x).\]
Using the strong maximum principle (see [29, Theorem 6.5]) and (7.12), for almost every \(z\in B(z_{0},c\kappa r/2)\) we get
\[\widetilde{u}(z) \leq\lambda(B_{r})\sup_{x\in B_{r}\setminus B(z_{0},c\kappa r)}G( z,x)=\lambda(B_{r})\sup_{x\in B_{r/4}\setminus B(z_{0},c\kappa r)}G(z,x)\]
\[\approx\lambda(B_{r})\sup_{x\notin B(z_{0},c\kappa r)}|x-z|^{2 \alpha-n}=\lambda(B_{r})(c\kappa r)^{2\alpha-n}.\]
That is,
\[\widetilde{u}(z)\lesssim\lambda(B_{r})(c\kappa r)^{2\alpha-n}.\] (7.15)
On the other hand, note that \(u\) is continuous. By the Riesz representation theorem, there exists a probability measure \(\omega^{z}_{\mathcal{L}}\) such that
\[v(z)=\int_{\partial B_{r}}u(x)d\omega^{z}_{\mathcal{L}}(x).\]
We can choose \(r\) so that \(\partial B_{r}\) intersects a big part of a corkscrew ball, i.e., assume that there exists a point \(\xi_{0}\in\partial B_{r}^{\prime}\) which is the center of a ball \(B^{\prime}(\xi_{0},cr)\) where \(u\) has positive values. This can be done by the interior corkscrew condition, with all the constants involved depending only on \(n\) and \(\alpha\). Then, changing the constant if necessary, all points \(\xi\in B(\xi_{0},cr)\) satisfy that \(u(\xi)\geq Cr^{\alpha}\) by the non-degeneracy condition and the optimal regularity. Call \(U:=\partial B_{r}\cap B(\xi_{0},cr)\). Then
\[v(z)\gtrsim r^{\alpha}\omega^{z}_{\mathcal{L}}(U).\]
But \(\omega^{z}_{\mathcal{L}}(U)\) is bounded below by a constant by [29, Lemma 11.21] and the Harnack inequality (use a convenient Harnack chain). All in all, we have that
\[v(z)\gtrsim r^{\alpha}.\] (7.16)
Combining (7.15), (7.14) and (7.16), and choosing \(\kappa\) small enough depending on \(n\) and \(\alpha\), we get
\[\lambda(B_{r})\gtrsim\frac{\widetilde{u}(z_{0})}{(c\kappa r)^{2\alpha-n}}\geq \frac{Cr^{\alpha}-C^{\prime}(\kappa r)^{\alpha}}{(c\kappa r)^{2\alpha-n}}\geq C _{n,\alpha}r^{n-\alpha},\]
for \(\kappa\) small enough.
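Explicitly, in the main case \(n-2\alpha>0\) (a sketch of the constant-chasing, with \(C\) and \(C^{\prime}\) the constants implicit in (7.16) and (7.14)): once \(\kappa^{\alpha}\leq\frac{C}{2C^{\prime}}\) we have
\[Cr^{\alpha}-C^{\prime}(\kappa r)^{\alpha}\geq\tfrac{C}{2}\,r^{\alpha},\qquad\text{hence}\qquad\lambda(B_{r})\gtrsim\tfrac{C}{2}\,r^{\alpha}\,(c\kappa r)^{n-2\alpha}=C_{n,\alpha}\,r^{n-\alpha}.\]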
In case \(n-2\alpha=0\), that is for \(n=1\) and \(\alpha=1/2\), using similar changes as in the proof of Theorem 7.8 we get \(\widetilde{u}(z)\lesssim\lambda(B_{r})\sup_{x\notin B(z_{0},c\kappa r)}\log \left(\frac{r}{|x-z|}\right)\approx\lambda(B_{r})|\log\kappa|\) instead of (7.15). In case \(n-2\alpha<0\), the proof is even easier than before. ∎
**Remark 7.12**.: _Theorem 7.11 implies that the \((n-\alpha)\)-Hausdorff measure of the free boundary is locally finite. This does not suffice to show finite perimeter of the positive phase and, therefore, we had to use the approach in Section 5._
The following theorem summarizes the information that we have gathered so far about the measure \(\lambda\).
**Theorem 7.13**.: _If \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) is a minimizer of \(\mathcal{J}\) in \(\Omega\), then the measure \(\lambda\) is absolutely continuous with respect to the Lebesgue measure in \(\Omega^{\prime}(u)\). Moreover, given \(x_{0}\in F(u)\) and \(r>0\) such that \(B_{2r}(x_{0})\subset\Omega\), then_
\[\lambda(B_{r}(x_{0}))\approx r^{n-\alpha},\] (7.17)
_and for almost every \(x\in B_{r}^{\prime}(x_{0})\) we have that_
\[\frac{d\lambda}{dm}(x)=2\lim_{y\to 0}|y|^{\beta}u_{y}(x^{\prime},y)\approx\chi _{\Omega_{0}(u)}(x){\rm dist}(x,F(u))^{-\alpha},\]
_with constants depending only on \(n\) and \(\alpha\)._
## 8 Rectifiability of the singular set
In this section we use the Rectifiable-Reifenberg and quantitative stratification framework of Naber-Valtorta [32] to prove Hausdorff measure and structure results for the singular set. Recall from Section 5 that \(k^{*}_{\alpha}\) is the first dimension in which there exist non-trivial \(\alpha\)-homogeneous global minimizers of (1.1).
**Theorem 8.1**.: _Let \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) be a minimizer of (1.1) in a domain \(\Omega\). Then \(\Sigma(u)\) is \((n-k^{*}_{\alpha})\)-rectifiable and for every \(D\subset\subset\Omega\), we have_
\[\mathcal{H}^{n-k^{*}_{\alpha}}(\Sigma(u)\cap D)\leq C_{n,\alpha,{\rm dist}(D, \partial\Omega)}.\]
Part of the power of this framework is that it is very general. One needs certain compactness properties on the minimizers and a connection between the drop in the monotonicity formula and the local flatness of the singular set (see Theorem 8.14 below). To avoid redundancy and highlight the original contributions of this article, we omit many details here and try to focus on the estimates needed to apply this framework to minimizers of (1.1). Whenever we omit details we will refer the interested reader to the relevant parts of [22].
The key first step is to introduce the appropriate formulation of quantitative stratification. First introduced by Cheeger and Naber [5] in the context of manifolds with Ricci curvature bounded from below, this is a way to quantify the intuitive fact that \(F(u)\) should “look” \((n-k^{*}_{\alpha})\)-dimensional near a point \(x_{0}\in F(u)\) at which the blow-ups have \(n-k^{*}_{\alpha}\) linearly independent translational symmetries.
### Quantitative stratification for minimizers to \(\mathcal{J}\)
We have seen in Section 4.1 that homogeneous functions have linear spaces of translational symmetry. Here we want to quantify (both in terms of size and stability) how far a function is from having no more than \(k\) directions of translational symmetry.
**Definition 8.2**.: _We write \(V^{k}\) for the collection of linear \(k\)-dimensional subspaces of \({\mathbb{R}}^{n}\). A function \(u\) is said to be \(k\)-symmetric if it is \(\alpha\)-homogeneous with respect to some point, and there exists a \(L\in V^{k}\) so that_
\[u(x+v)=u(x),\quad\quad\mbox{for every }v\in L.\]
_A function \(u\) is said to be \((k,\epsilon)\)-symmetric in a ball \(B\) if for some \(k\)-symmetric \(\widetilde{u}\) we have_
\[r(B)^{-2-n}\int_{B}|y|^{\beta}|u-\widetilde{u}|^{2}dy<\epsilon.\]
Next we define the \(k\)-stratum \(S^{k}(u)\), the \((k,\epsilon)\)-stratum \(S^{k}_{\epsilon}(u)\) and the \((k,\epsilon,r)\)-stratum \(S^{k}_{\epsilon,r}(u)\). A key insight here is to define these strata by the blow-ups having \(k\) or fewer symmetries as opposed to exactly \(k\) symmetries.
**Definition 8.3**.: _Let \(0\leq k\leq n\), \(0<\varepsilon<\infty\) and \(0<r<{\rm dist}(x,\partial\Omega)\), let \(u\) be a continuous function in \(\Omega\) and let \(x\in F(u)\). We say that:_
* \(x\in S^{k}(u)\) _if_ \(u\) _has no_ \((k+1)\)_-symmetric blow-ups at_ \(x\)_._
* \(x\in S^{k}_{\epsilon}(u)\) _if_ \(u\) _is not_ \((k+1,\epsilon)\)_-symmetric in_ \(B_{s}(x)\) _for every_ \(0<s\leq\min\{1,{\rm dist}(x,\partial\Omega)\}\)_._
* \(x\in S^{k}_{\epsilon,r}(u)\) _if_ \(u\) _is not_ \((k+1,\epsilon)\)_-symmetric in_ \(B_{s}(x)\) _for every_ \(r\leq s\leq\min\{1,{\rm dist}(x,\partial\Omega)\}\)_._
_If it is clear from the context we will omit \(u\) from the notation._
We now detail some standard properties of the strata defined above and how they interact with the free boundary \(F(u)\). While the proofs are mostly standard, we give the details as the scaling associated to the problem (1.1) adds some technical difficulties. This proof also provides a blueprint for fleshing out the details in Sections 8.3 and 8.4.
**Lemma 8.4**.: _Let \(0\leq j\leq k\leq n\), \(0<\varepsilon\leq\tau<\infty\), \(0<r\leq s<{\rm dist}(x,\partial\Omega)\), and let \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) be a minimizer in \(\Omega\). Then:_
1. \(S^{0}\subset S^{1}\subset\cdots\subset S^{n-1}=S^{n}=F(u)\)_. Moreover, for the reduced boundary, we have that_ \(F_{red}(u)\subset S^{n-1}\setminus S^{n-2}\) _and_ \(\Sigma(u)\subset S^{n-k^{*}_{\alpha}}\)_._
2. _We have_ \(S^{j}_{\tau}\subset S^{k}_{\epsilon}\subset S^{k}\) _and, moreover,_ \( S^{k}=\bigcup_{\epsilon>0}S^{k}_{\epsilon}\)_._
3. _Also_ \(S^{j}_{\tau}\subset S^{j}_{\tau,r}\subset S^{k}_{\epsilon,s}\) _and, moreover,_ \( S^{k}_{\epsilon}=\bigcap_{r>0}S^{k}_{\epsilon,r}\)_._
4. _The sets_ \(S^{k}_{\epsilon}\) _are closed, in both_ \(x\) _and_ \(u\)_: if_ \(u_{i}\xrightarrow{L^{2}_{\rm loc}(\Omega;|y|^{\beta})}u\) _and_ \(x_{i}\to x\) _with_ \(x_{i}\in S^{k}_{\epsilon}(u_{i})\)_, then_ \(x\in S^{k}_{\epsilon}(u)\)_._
5. _If_ \(u_{i}\xrightarrow{L^{2}_{\rm loc}(\Omega;|y|^{\beta})}u\)_,_ \(\epsilon_{i}\to 0\)_, and_ \(u_{i}\) _are_ \((k,\epsilon_{i})\)_-symmetric in_ \(B_{1}\)_, then_ \(u\) _is_ \(k\)_-symmetric in_ \(B_{1}\)_._
Proof.: _1_. The inclusions \(S^{k}\subset S^{k+1}\) of the first property are trivial. The last equalities are consequences of the non-degeneracy. The fact that \(F_{red}(u)\cap S^{n-2}=\emptyset\) can be deduced from the Hausdorff convergence of the free boundaries described in Lemma 3.4 and Theorem 2.4. Finally, \(\Sigma(u)\subset S^{n-k^{*}_{\alpha}}\) is a consequence of Lemmas 4.5 and 5.2.
_2_. The inclusions \(S^{j}_{\tau}\subset S^{k}_{\epsilon}\) of the second property come from the definitions: if \(x\notin S^{k}_{\epsilon}\) then there exist a ball \(B\subset\Omega\) centered at \(x\) and a \((k+1)\)-symmetric \(\widetilde{u}\) so that \(r(B)^{-2-n}\int_{B}|y|^{\beta}|u-\widetilde{u}|^{2}dy<\epsilon\leq\tau.\) But \(\widetilde{u}\) is also \((j+1)\)-symmetric. Thus, \(x\notin S^{j}_{\tau}\).
The fact that \(S^{k}_{\epsilon}\subset S^{k}\) is a consequence of the uniform convergence on Lemma 3.4: if \(x\notin S^{k}\), then \(u\) has a \((k+1)\)-symmetric blow-up sequence \(u_{i}\to u_{0}\) at \(x\) converging uniformly. Thus,
\[\int_{B_{\rho_{i}}(x)}|y|^{\beta}\left|u(z)-\rho_{i}^{\alpha}u_{0}\left(\frac{z-x}{\rho_{i}}\right)\right|^{2}dz=\rho_{i}^{\beta+2\alpha+n+1}\int_{B_{1}}|y|^{\beta}\left|\frac{u\left(x+\rho_{i}z\right)}{\rho_{i}^{\alpha}}-u_{0}(z)\right|^{2}dz\leq\rho_{i}^{n+2}\,\omega(B_{1})\,{\left\|{u_{i}-u_{0}}\right\|}_{L^{\infty}}^{2},\]
where \(z=(z^{\prime},y)\) denotes the integration variable.
That is,
\[\rho_{i}^{-n-2}\int_{B_{\rho_{i}}(x)}|y|^{\beta}\left|u(z)-\rho_{i}^{\alpha}u_{0}\left(\frac{z-x}{\rho_{i}}\right)\right|^{2}dz\xrightarrow{i\to\infty}0,\]
and therefore, for every \(\varepsilon\) there exists a ball small enough so that \(u\) is \((k+1,\varepsilon)\)-symmetric in it. In particular \(S^{k}\supset\bigcup_{\epsilon>0}S^{k}_{\epsilon}\).
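For the record, the exponent arithmetic in the change of variables above uses \(\beta=1-2\alpha\):
\[\beta+2\alpha+n+1=(1-2\alpha)+2\alpha+n+1=n+2,\]
which matches the normalization \(r(B)^{-2-n}\) appearing in Definition 8.2.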
To see the converse, assume that \(x\notin\bigcup_{\epsilon}S^{k}_{\epsilon}\). Then for every \(i\in{\mathbb{N}}\) there exist a \((k+1)\)-symmetric function \(\widetilde{u}_{i}\), invariant with respect to \(L_{i}\in V^{k+1}\) and \(r_{i}<\min\{1,{\rm dist}(x,\partial\Omega)\}\) such that
\[\frac{1}{r_{i}^{n+2}}\int_{B_{r_{i}}}|y|^{\beta}\left|u(x)-\widetilde{u}_{i}(x )\right|^{2}\,dx<\frac{1}{i}.\]
In the case when \(r_{i}\) stays away from zero, since \(r_{i}<1\), we can take a subsequence converging to \(r_{0}\in(0,1)\), and one can see that \(u\) is \((k+1)\)-symmetric in the ball \(B_{r_{0}}(x_{0})\). Otherwise, consider \(u_{i}:=\frac{u\left(x_{0}+r_{i}x\right)}{r_{i}^{\alpha}}\) and \(\widetilde{u}_{i,i}=\frac{\widetilde{u}_{i}\left(x_{0}+r_{i}x\right)}{r_{i}^{ \alpha}}\). Taking subsequences, we can assume that \(L_{i}\to L_{0}\) locally in the Hausdorff distance, and that \(u_{i}\to u_{0}\) locally uniformly. One can check also using the Hölder character of \(u\) that \(\{\widetilde{u}_{i,i}\}\) is uniformly bounded in \(L^{2}(B;|y|^{\beta})\), so taking subsequences again, we can assume the existence of \(\widetilde{u}_{0}\) so that \(\widetilde{u}_{i,i}\to\widetilde{u}_{0}\) in \(L^{2}(B;|y|^{\beta})\). This function will be \((k+1)\)-symmetric, being invariant in the directions of \(L_{0}\). By the triangle inequality we get
\[\int_{B_{1}}|y|^{\beta}|u_{0}-\widetilde{u}_{0}|^{2}\,dx\lesssim\int_{B_{1}}|y |^{\beta}|u_{0}-u_{i}|^{2}\,dx+\int_{B_{1}}|y|^{\beta}|u_{i}-\widetilde{u}_{i, i}|^{2}\,dx+\int_{B_{1}}|y|^{\beta}|\widetilde{u}_{i,i}-\widetilde{u}_{0}|^{2} \,dx.\]
The first and the last integrals converge to zero by our choice of the subsequence. For the middle term just change variables as before:
\[\int_{B_{1}}|y|^{\beta}|u_{i}-\widetilde{u}_{i,i}|^{2}\,dx=\frac{1}{r_{i}^{n+2 }}\int_{B_{r_{i}}}|y|^{\beta}\left|u(x)-\widetilde{u}_{i}(x)\right|^{2}\,dx\to 0.\]
Thus we have that \(u_{0}=\widetilde{u}_{0}\) and, therefore, \(x\notin S^{k}\).
_3_. The inclusions \(S^{j}_{\tau}\subset S^{j}_{\tau,r}\subset S^{k}_{\epsilon,s}\) of the third property come from the definitions and thus, \(S^{k}_{\epsilon}\subset\bigcap_{r>0}S^{k}_{\epsilon,r}\). The converse implication is also trivial.
_4_. The closedness is obtained by a contradiction argument again. It is straightforward but we write it here for the sake of completeness.
Assume by contradiction that \(x\notin S^{k}_{\epsilon}(u)\). Then there exist a \((k+1)\)-symmetric function \(\widetilde{u}\) and a radius \(r\) such that
\[\epsilon_{0}:=\frac{1}{r^{n+2}}\int_{B_{r}(x)}|y|^{\beta}\left|u(x)-\widetilde {u}(x)\right|^{2}\,dx<\epsilon.\]
Let \(\tau<1\) to be fixed and consider \(i_{0}\in{\mathbb{N}}\) so that \(B_{\tau r}(x_{i})\subset B_{r}(x)\) for every \(i\geq i_{0}\). By the triangle inequality
\[\frac{1}{(\tau r)^{n+2}}\int_{B_{\tau r}(x_{i})}|y|^{\beta}\left|u_{i}(x)- \widetilde{u}(x)\right|^{2}\,dx\leq\frac{1}{(\tau r)^{n+2}}{\left\|{u_{i}-u} \right\|}_{L^{2}(B_{\tau r}(x_{i});|y|^{\beta})}^{2}+\frac{\epsilon_{0}}{\tau^ {n+2}}.\]
We define \(\tau\) so that \(\frac{\epsilon_{0}}{\tau^{n+2}}=\frac{\epsilon+\epsilon_{0}}{2}\). Choose \(i_{0}\) big enough so that every \(i\geq i_{0}\) satisfies that \({\left\|{u_{i}-u}\right\|}_{L^{2}(B_{\tau r}(x_{i});|y|^{\beta})}^{2}<(\tau r) ^{n+2}\frac{\epsilon-\epsilon_{0}}{2}\). Then \(x_{i}\notin S^{k}_{\epsilon}(u_{i})\), contradicting the hypothesis.
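For completeness, the arithmetic behind this choice of \(\tau\): since \(\tau^{n+2}=\frac{2\epsilon_{0}}{\epsilon+\epsilon_{0}}<1\), the right-hand side of the previous display is, for \(i\geq i_{0}\), strictly smaller than
\[\frac{\epsilon-\epsilon_{0}}{2}+\frac{\epsilon_{0}}{\tau^{n+2}}=\frac{\epsilon-\epsilon_{0}}{2}+\frac{\epsilon+\epsilon_{0}}{2}=\epsilon,\]
so \(u_{i}\) is \((k+1,\epsilon)\)-symmetric in \(B_{\tau r}(x_{i})\).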
_5_. Assume that \(\widetilde{u}_{i}\) is invariant with respect to \(L_{i}\in V^{k+1}\) and
\[\int|y|^{\beta}|u_{i}-\widetilde{u}_{i}|^{2}\leq\epsilon_{i}.\]
Consider a subsequence \(\{u_{i}\}\) so that the subspaces \(L_{i}\to L\) locally in the Hausdorff distance. Using the triangle inequality as in _4_, it follows that \(u\) is \((k,\delta_{i})\)-symmetric with \(\delta_{i}\to 0\), and hence \(u\) is \(k\)-symmetric in \(B_{1}\).
∎
**Proposition 8.5**.: _There exists \(\epsilon(n,\alpha)>0\) such that if \(u\in{\mathbf{H}}^{\beta}_{\operatorname{loc}}(\Omega)\) is a minimizer of \(\mathcal{J}\) in a domain \(\Omega\subset{\mathbb{R}}^{n+1}\), then \(\Sigma(u)\subset S^{n-k^{*}_{\alpha}}_{\epsilon}(u)\)._
Proof.: It is enough to show that if \(u\) is a minimizer of \(\mathcal{J}\) in \(B_{2}(0)\), then \(\Sigma(u)\cap B_{1}(0)\subset S^{n-k^{*}_{\alpha}}_{\epsilon}(u)\).
By contradiction, let us assume that there is a sequence of positive numbers \(\epsilon_{i}\xrightarrow{i\to\infty}0\), functions \(u_{i}\) minimizing \(\mathcal{J}\) in \(B_{2}(0)\) and \(x_{i}\in\Sigma(u_{i})\cap B_{1}(0)\), \(r_{i}\in(0,1]\), with \(u_{i}\) being \((n-k^{*}_{\alpha}+1,\epsilon_{i})\)-symmetric in \(B_{r_{i}}(x_{i})\), and let \(L_{i}\) be an \((n-k^{*}_{\alpha}+1)\)-dimensional subspace that leaves invariant one of the admissible \((n-k^{*}_{\alpha}+1)\)-symmetric approximants. By rescaling we can assume that \(r_{i}=1\).
Passing to a subsequence we can assume that \(L_{i}\to L_{0}\in V^{n-k^{*}_{\alpha}+1}\) locally in the Hausdorff distance and \(x_{i}\to x_{0}\). By the compactness results in Lemma 3.4 we have a uniform limit \(u_{0}\) which is a minimizer as well, and it is \((n-k^{*}_{\alpha}+1)\)-symmetric with invariant manifold \(L_{0}\). By Lemma 4.4 any blow-up \(u_{0,0}\) at \(x_{0}\) will be \((n-k^{*}_{\alpha}+1)\)-symmetric as well. Applying Lemma 4.5 \((n-k^{*}_{\alpha}+1)\) times we find that the restriction of \(u_{0,0}\) to the orthogonal manifold \(L_{0}^{\perp}\) is a \((k^{*}_{\alpha}-1)\)-dimensional minimal cone which, by Lemma 5.2, is the trivial solution, and so is \(u_{0,0}\). Thus, \(x_{0}\) is a regular point for \(u_{0}\).
On the other hand, the Hausdorff convergence of Lemma 3.4 together with the improvement of flatness of Theorem 2.4 imply that for \(i\) big enough \(x_{i}\in F_{\rm red}(u_{i})\), reaching a contradiction. ∎
### The Refined Covering Theorem
Our estimates on the size and structure of the singular set \(\Sigma(u)\) come from similar results concerning the \(S^{k}_{\epsilon}(u)\). In particular, we prove the following covering result:
**Theorem 8.6**.: _Let \(u\in{\mathbf{H}}^{\beta}(B_{5})\) be a minimizer to (1.1) in \(B_{5}\) with \(0\in F(u)\). For given real numbers \(\epsilon>0\), \(0<r\leq 1\) and every natural number \(1\leq k\leq n-1\), we can find a collection of balls \(\{B_{r}(x_{i})\}_{i=1}^{N}\) with \(N\leq C_{n,\alpha,\epsilon}r^{-k}\) such that_
\[S^{k}_{\epsilon,r}(u)\cap B_{1}\subset\bigcup_{i}B_{r}(x_{i}).\]
_In particular, \(|B^{\prime}_{r}(S^{k}_{\epsilon,r}\cap B_{1})|\leq C_{n,\alpha,\epsilon}r^{n-k}\) for every \(0<r\leq 1\) and_
\[\mathcal{H}^{k}(S^{k}_{\epsilon}(u)\cap B_{1})\leq C_{n,\alpha,\epsilon}.\]
From Proposition 8.5 and Theorem 8.6 we can conclude the following corollary which comprises the second part of Theorem 8.1 above.
**Corollary 8.7**.: _If \(u\in{\mathbf{H}}^{\beta}(B_{5})\) is a minimizer to (1.1) in \(B_{5}\) with \(0\in F(u)\), then \(\Sigma(u)\) is \((n-k^{*}_{\alpha})\)-rectifiable and for every \(0<r\leq 1\) we have_
\[|B_{r}(\Sigma(u)\cap B_{1})|\leq C_{n,\alpha}r^{k^{*}_{\alpha}}.\]
_In particular,_
\[\mathcal{H}^{n-k^{*}_{\alpha}}(\Sigma(u)\cap B_{1})\leq C_{n,\alpha}.\]
Rectifiability is encoded in the following result. We omit the details of proof here but it is a consequence of the packing result above, the Rectifiable-Reifenberg theorem of [32] and Theorem 8.14 below. For more details see Sections 2 and 8 of [22] (particularly Theorem 2.2 in the former and the proof of Theorem 1.12 in the latter).
**Theorem 8.8**.: _Let \(u\) be a non-negative, even minimizer to (1.1) in a domain \(\Omega\). Then \(S^{k}_{\epsilon}(u)\) is \(k\)-rectifiable for every \(\epsilon\) and, hence, each stratum \(S^{k}(u)\) is \(k\)-rectifiable as well._
The proof of Theorem 8.6 follows from inductively applying the following, slightly more technical, packing result (for details see Section 4 of [22]).
**Theorem 8.9**.: _Let \(\epsilon>0\). There exists \(\eta(n,\alpha,\epsilon)\) such that, for every minimizer \(u\in{\mathbf{H}}^{\beta}(B_{5})\) of \(\mathcal{J}\) in \(B_{5}\) with \(0\in F(u)\) and \(0<R<1/10\), there is a finite collection \(\mathcal{U}\) of balls \(B\) with center \(x_{B}\in S^{k}_{\epsilon,\eta R}\) and radius \(R\leq r_{B}\leq 1/10\) which satisfy the following properties:_
1. _Covering control:_ \[S^{k}_{\epsilon,\eta R}\cap B_{1}\subset\bigcup_{B\in\mathcal{U}}B.\]
2. _Energy drop: For every_ \(B\in\mathcal{U}\)_,_ \[\mbox{either}\quad r_{B}=R,\quad\quad\mbox{or}\quad\sup_{2B}\Psi^{u}_{2r_{B}} \leq\sup_{B_{2}}\Psi^{u}_{2}-\eta.\]
3. _Packing:_ \[\sum_{B\in\mathcal{U}}r_{B}^{k}\leq c(n,\alpha,\epsilon).\]
We construct the balls of Theorem 8.9 using a “stopping time” or “good ball/bad ball” argument. Much of this argument uses harmonic analysis and geometric measure theory and is completely independent of the original problem (1.1). However, there are a few places in which we need to connect the behavior of minimizers to the geometric structure of the singular set. Here we will sketch the “good ball/bad ball” argument, taking for granted the estimates needed to apply this argument to our functional. In the next few subsections we will provide these estimates. For more details on the construction itself we refer the reader to Section 7 in [22].
**Outline of the Construction in Theorem 8.9** To find this covering we define good and bad balls as follows: imagine our ball \(B\) has radius 1. We say that \(B\) is a _good ball_ if at every point \(x\in S^{k}_{\varepsilon}(u)\cap B\) the monotone quantity centered at that point at some small scale \(\rho\) is not much smaller than the monotone quantity on the ball \(B\) (we say these points have “small density drop”). A ball \(B\) is a _bad ball_ if all the points in \(S^{k}_{\varepsilon}(u)\cap B\) with small density drop are contained in a small neighborhood of a \((k-1)\)-plane. That this good/bad alternative is a genuine dichotomy follows from Theorem 8.10 in Section 8.3.
In a good ball of radius \(r\) we cover \(S^{k}_{\varepsilon}(u)\) with balls of radius \(\rho r\), iterating the construction until we find a bad ball or until the radius of the ball becomes very small. In a bad ball, we cover \(S^{k}_{\varepsilon}(u)\) away from the \((k-1)\)-plane without much care. Close to the \((k-1)\)-plane we cover \(S^{k}_{\varepsilon}(u)\) with balls of radius \(\rho r\), iterating the construction until we reach a good ball or until the radius of the ball becomes very small.
Inside long strings of good balls, the packing estimates follow from powerful tools in geometric measure theory (see Theorem 8.13 below) and the connection between the drop in monotonicity and the local flatness of the singular strata (see Theorem 8.14 below). We give more details in Section 8.4.
Inside long strings of bad balls, each of which is near the \((k-1)\)-plane of the previous bad ball, we have even better packing estimates than expected (as we are effectively well approximated by lower-dimensional planes). This leaves only points which lie in many bad balls and which, in most of those balls, are far away from the \((k-1)\)-plane. However, at these points the monotone quantity drops a definite amount many times, which contradicts either finiteness or monotonicity. This implies that the points and scales inside the bad balls which are not close to the \((k-1)\)-plane form a negligible set (the technical term is a Carleson set). We give more information about the bad balls in Section 8.3.
### Tools for bad balls: key dichotomy
**Theorem 8.10** (Key dichotomy).: _Let \(\epsilon,\rho,\gamma,\eta^{\prime}>0\) be fixed numbers with \(\rho\gamma<2\). There exists an \(\eta_{0}(n,\alpha,\epsilon,\rho,\gamma,\eta^{\prime})<\rho/100\) such that, for every \(\eta\leq\eta_{0}\), every \(r>0\), every \(E>0\) and every minimizer \(u\in{\mathbf{H}}^{\beta}(B_{4r})\) of \(\mathcal{J}\) in \(B_{4r}\) with \(0\in F(u)\) and \(\sup_{B_{r}}\Psi^{u}_{2r}\leq E\), either_
* \(\Psi_{\gamma\rho r}^{u}\geq E-\eta^{\prime}\) _on_ \(S^{k}_{\epsilon,\eta r}\cap B_{r}\)_, or_
* _there exists_ \(\ell\in L^{k-1}\) _so that_ \(\{x\in B_{r}:\Psi^{u}_{2\eta r}(x)\geq E-\eta\}\subset B_{\rho r}(\ell)\)_._
The key dichotomy is a direct consequence of Lemma 8.11 below. The core idea is to make effective the following assertion: if \(u\) is \(k\)-symmetric, then along the invariant manifold the Allen-Weiss density is constant, and every point away from the manifold will have \((k+1)\)-symmetric blow-ups by Lemma 4.4.
**Lemma 8.11**.: _Let \(\epsilon,\rho,\gamma,\eta^{\prime}>0\) be fixed numbers with \(\gamma\rho<2\). There exist \(\eta_{0},\theta>0\) such that for every \(\eta<\eta_{0}\), every \(E>0\) and every minimizer \(u\) of \(\mathcal{J}\) in \(B_{4}\) with \(0\in F(u)\) and \(\sup_{B_{1}}\Psi^{u}_{2}\leq E\), if there exist \(w_{0},\dots,w_{k}\in B_{1}\) and affine manifolds \(L^{i}:=\langle w_{0},\dots,w_{i}\rangle\in V^{i}\) with_
\[w_{i}\notin B_{\rho}(L^{i-1}),\quad\quad\mbox{ and }\quad\quad\Psi^{u}_{2\eta} (w_{i})\geq E-\eta\quad\quad\mbox{for every }i\in\{0,\cdots,k\},\]
_then,_
\[\Psi^{u}_{\gamma\rho}(x)\geq E-\eta^{\prime}\quad\quad\mbox{on }B_{\theta}(L^{ k})\cap B_{1}\] (8.1)
_and_
\[S^{k}_{\epsilon,\eta}\cap B_{1}\subset B_{\theta}(L^{k})\] (8.2)
The proof follows (with only minor modifications) the proof in [22, Lemma 3.3]. We end this subsection by formally defining the good/bad balls alluded to above:
**Definition 8.12**.: _Let \(x\in B_{2}\), \(0<R<r<2\) and \(u\) be a minimizer to \(\mathcal{J}\) in \(B_{5}\). We say that the ball \(B_{r}(x)\) is good if_
\[\Psi^{u}_{\gamma\rho r}\geq E-\eta^{\prime}\quad\quad\mbox{on }S^{k}_{\epsilon ,\eta R}\cap B_{r}(x),\]
_and otherwise we say that \(B_{r}(x)\) is bad._
By Theorem 8.10 in any bad ball \(B\) there exists an affine \((k-1)\)-manifold \(\ell_{B}\) with
\[\{w\in B:\Psi^{u}_{2\eta r}(w)\geq E-\eta\}\subset B_{\rho r}(\ell_{B}^{k-1}).\] (8.3)
### Tools for good balls: packing estimates and GMT
In this section we control the local flatness of the singular strata by the drop in monotonicity. To do this we introduce a key tool from geometric measure theory which estimates the flatness of a set. Given a Borel measure \(\mu\), a point \(x\) and a radius \(r\), the beta coefficient is defined as follows:
\[\beta^{k}_{\mu,2}(B_{r}(x))^{2}:=\beta^{k}_{\mu,2}(x,r)^{2}=\inf_{L\in V^{k}_{ a}}\frac{1}{r^{k}}\int_{B_{r}(x)}\frac{{\rm dist}(z,L)^{2}}{r^{2}}\,d\mu(z)\] (8.4)
where \(V^{k}_{a}\) stands for the collection of \(k\)-dimensional affine sets of \({\mathbb{R}}^{n}\). The beta coefficients are meant to measure, in a scale-invariant way, how far a measure is from being flat, in this case in the \(L^{2}\) distance, although other \(L^{p}\) versions with \(1\leq p\leq\infty\) have often been used in the literature, dating back to Jones [30] (for the \(L^{\infty}\) version) and David-Semmes [15] (for the \(L^{p}\) versions).
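As a simple illustration (not needed in what follows): if \(\mu\) restricted to \(B_{r}(x)\) is supported on a single affine \(k\)-plane \(L\in V^{k}_{a}\), then \({\rm dist}(z,L)=0\) for \(\mu\)-a.e. \(z\in B_{r}(x)\), so
\[\beta^{k}_{\mu,2}(x,r)^{2}\leq\frac{1}{r^{k}}\int_{B_{r}(x)}\frac{{\rm dist}(z,L)^{2}}{r^{2}}\,d\mu(z)=0.\]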
If we control the size of the \(\beta^{k}\)’s we can conclude size and structure estimates on the measure \(\mu\). The following theorem says exactly this and represents a major technical achievement. It differs (importantly) from prior work in this area by the lack of _a priori_ assumptions on the upper or lower densities of the measure involved.
**Theorem 8.13** (Discrete-Reifenberg Theorem, see [32, Theorem 3.4]).: _Let \(\{B_{r_{q}}(q)\}_{q}\) be a collection of disjoint balls, with \(q\in B_{1}(0)\) and \(0<r_{q}\leq 1\), and let \(\mu\) be the packing measure \(\mu:=\sum_{q}r_{q}^{k}\delta_{q}\), where \(\delta_{q}\) stands for the Dirac delta at \(q\). There exist constants \(\tau_{DR},C_{DR}>0\) depending only on the dimension such that if_
\[\int_{0}^{2r}\int_{B_{r}(x)}\beta^{k}_{\mu,2}(z,s)^{2}\,d\mu(z)\frac{ds}{s} \leq\tau_{DR}r^{k}\quad\quad\mbox{for every }x\in B_{1}(0),\,0<r\leq 1,\]
_then_
\[\mu(B_{1}(0))=\sum_{q}r_{q}^{k}\leq C_{DR}.\]
To obtain the packing estimates required for the Discrete-Reifenberg Theorem, we need to control the beta coefficients. The key estimate of this entire framework lies in the following theorem, which shows that the drop in monotonicity at a given point and a given scale controls the beta coefficient at a comparable scale.
**Theorem 8.14**.: _Let \(\epsilon>0\) be given. There exist \(\delta(n,\alpha,\epsilon)\) and \(c(n,\alpha,\epsilon)\) such that for every \(u\in{\mathbf{H}}^{\beta}(B_{5r})\) minimizing \(\mathcal{J}\) in \(B_{5r}(x)\) with \(x\in F(u)\) and_
\[\begin{cases}u\mbox{ is }(0,\delta)\mbox{-symmetric in }B_{4r}(x)\\ u\mbox{ is not }(k+1,\epsilon)\mbox{-symmetric in }B_{4r}(x),\end{cases}\] (8.5)
_and every Borel measure \(\mu\), we have that_
\[\beta^{k}_{\mu,2}(B_{r}(x))^{2}\leq\frac{c(n,\alpha,\epsilon)}{r^{k}}\int_{B_{ r}(x)}\left(\Psi_{4r}^{u}(w)-\Psi_{r}^{u}(w)\right)\,d\mu(w).\] (8.6)
We follow the proof of [22, Theorem 5.1] closely. First, the authors of [22] give an explicit formula for the beta coefficients.
**Lemma 8.15**.: _Let \(X\) be the center of mass of a Borel measure \(\mu\) on \(B=B_{r}(x)\). Let \(\{\lambda_{i}\}_{i=1}^{n}\) be the decreasing sequence of eigenvalues of the non-negative bilinear form_
\[Q(v,w):=\fint_{B}(v\cdot(z-X))(w\cdot(z-X))\,d\mu(z),\]
_and let \(\{v_{i}\}_{i=1}^{n}\) be a corresponding orthonormal sequence of eigenvectors, that is \(v_{i}\cdot v_{j}=\delta^{ij}\) and \(Q(v_{i},v)=\lambda_{i}v_{i}\cdot v\). Then_
\[\beta_{\mu,2}^{k}(B)^{2}=\frac{1}{r^{k}}\int_{B}\frac{{\rm dist}(z,L^{k})^{2}} {r^{2}}\,d\mu(z)=\frac{\mu(B)}{r^{k}}\frac{(\lambda_{k+1}+\dots+\lambda_{n})}{ r^{2}},\]
_where \(L^{k}:=X+\mathrm{span}\langle v_{1},\dots,v_{k}\rangle\)._
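A sketch of why the stated plane gives this value (that \(L^{k}\) also realizes the infimum in (8.4) is part of the lemma): since \(\{v_{i}\}_{i=1}^{n}\) is an orthonormal basis,
\[\int_{B}{\rm dist}(z,L^{k})^{2}\,d\mu(z)=\int_{B}\sum_{i=k+1}^{n}\big(v_{i}\cdot(z-X)\big)^{2}\,d\mu(z)=\mu(B)\sum_{i=k+1}^{n}Q(v_{i},v_{i})=\mu(B)\,(\lambda_{k+1}+\dots+\lambda_{n}).\]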
Next we find a relation between the eigenvalues of \(Q\) and Allen-Weiss’ energy.
**Lemma 8.16**.: _Under the hypothesis of Lemma 8.15, for every \(u\in{\mathbf{H}}^{\beta}(B_{5r})\) minimizing \(\mathcal{J}\) in \(B_{5r}(x)\) and every \(i\leq n\), we have that_
\[\lambda_{i}\frac{2}{r^{n+2}}\int_{A_{2r,3r}(x)}|y|^{\beta}(v_{i}\cdot Du(z))^{ 2}\,dz\leq C\fint_{B_{r}(x)}\left(\Psi_{4r}^{u}(w)-\Psi_{r}^{u}(w)\right)\,d \mu(w).\] (8.7)
Proof.: The argument follows as in [22, (18) and below]. In formula _(18)_ there one needs to replace \(u(z)\) by \(\alpha u(z)\), which can be done with exactly the same argument. ∎
Finally, using compactness, we bound the integral appearing on the left-hand side of (8.7) from below.
**Lemma 8.17**.: _Let \(\epsilon>0\) be given. There exists a \(\delta(n,\alpha,\epsilon)\) and \(c(n,\alpha,\epsilon)\) such that, for every orthonormal basis \(\{v_{i}\}_{i=1}^{n}\) and every \(u\in{\mathbf{H}}^{\beta}(B_{5r})\) minimizing \(\mathcal{J}\) in \(B_{5r}(x)\) with \(x\in F(u)\) and satisfying (8.5), we have that_
\[\frac{1}{c(n,\alpha,\epsilon)}\leq r^{-n}\int_{A_{2r,3r}(x)}|y|^{\beta}\sum_{i =1}^{k+1}(v_{i}\cdot Du(z))^{2}\,dz.\] (8.8)
Proof.: The proof follows that of [22, (19)] and we omit it.
∎
Proof of Theorem 8.14.: By Lemmas 8.15, 8.17 and 8.16 we get that
\[\beta^{k}_{\mu,2}(B)^{2} \leq\frac{\mu(B)}{r^{k+2}}(n-k)\lambda_{k+1}\]
\[\leq\frac{\mu(B)}{r^{k}}(n-k)c(n,\alpha,\epsilon)\sum_{i=1}^{k+1} \frac{\lambda_{i}}{r^{n+2}}\int_{A_{2r,3r}(x)}|y|^{\beta}(v_{i}\cdot Du(z))^{2 }\,dz\]
\[\leq\frac{c(n,\alpha,\epsilon)}{r^{k}}\int_{B_{r}(x)}\left(\Psi_{ 4r}^{u}(w)-\Psi_{r}^{u}(w)\right)\,d\mu(w).\]
∎
## Appendix A Relation with the nonlocal Bernoulli problem
As in [21, Lemma 2.1], we see that the study of minimizers of \(\mathcal{J}\) includes the study of minimizers of \(J\).
**Proposition A.1**.: _If \(f\) is a minimizer of \(J\) in the unit ball of \({\mathbb{R}}^{n}\) then \(f*P_{y}\) is a minimizer of \(\mathcal{J}\) in every ball \(B\) such that \(B^{\prime}\subset\subset B_{1}^{\prime}\)._
_If \(u=f*P_{y}\) is a minimizer of \(\mathcal{J}\), then \(f\) is a minimizer for \(J\). In particular, if \(u\) is a minimizer of \(\mathcal{J}\) in every ball, positive outside the hyperplane \(\{y=0\}\), and \(u(x,y)=\mathcal{O}(|(x,y)|^{\alpha})\), then \(u|_{{\mathbb{R}}^{n}\times\{0\}}\) is a minimizer for \(J\) in every ball._
We follow [21, Lemma 2.1], that is, we use the following result from [6, Section 7].
**Lemma A.2** (see [6, Section 7]).: _Let \(f,g\) satisfy that \(J_{0}(f,B_{1}),J_{0}(g,B_{1})<\infty\), and suppose that \(f-g\) is compactly supported in \(B_{1}\subset{\mathbb{R}}^{n}\). Then we have that_
\[J_{0}(g,B_{1})-J_{0}(f,B_{1})=c_{n,\alpha}\inf\int_{\Omega}|y|^{\beta}\left(|\nabla v|^{2}-|\nabla(f*P_{y})|^{2}\right),\]
_where the infimum is taken among all the symmetric bounded Lipschitz domains \(\Omega\) with the property that \(\Omega\cap({\mathbb{R}}^{n}\times\{0\})\subset B_{1}\) and among all symmetric functions \(v\) with trace \(g\) satisfying that \(v-f*P_{y}\) is compactly supported on \(\Omega\)._
Proof of Proposition a.1.: Let \(f\) be a minimizer of \(J\) in the unit ball of \({\mathbb{R}}^{n}\) and let \(B_{r}\) be a ball such that \(B_{r}^{\prime}\subset\subset B_{1}^{\prime}\). We want to show that \(u:=f*P_{y}\) is a minimizer of \(\mathcal{J}\) in \(B_{r}\).
Let \(v:{\mathbb{R}}^{n+1}\to{\mathbb{R}}\) be such that \(v\equiv u\) in \({\mathbb{R}}^{n+1}\setminus B_{r}\) and \(v\in H^{1}(\beta,B_{r})\). Let \(g\) be the trace of \(v\) on \({\mathbb{R}}^{n}\times\{0\}\). By Lemma A.2 we have that
\[J_{0}(g,B_{1})-J_{0}(f,B_{1})\leq c_{n,\alpha}\int_{B_{r+\varepsilon}}|y|^{ \beta}(|\nabla v|^{2}-|\nabla u|^{2})\] (A.1)
for every \(\varepsilon>0\).
In particular, since \(g|_{(B^{\prime})^{c}}\equiv 0\), \(g\) is an admissible competitor for \(f\) and \(J(f,B_{1})\leq J(g,B_{1})\), i.e.,
\[J_{0}(g,B_{1})-J_{0}(f,B_{1}) \geq-m(\{g>0\}\cap B_{1})+m(\{f>0\}\cap B_{1})\] (A.2)
\[=m(\{u>0\}\cap B_{r}^{\prime})-m(\{v>0\}\cap B_{r}^{\prime}).\]
The proposition follows by combining (A.1) and (A.2) and letting \(\varepsilon\to 0\).
The converse follows along the same lines: by Proposition B.1, every global minimizer can be expressed as the Poisson extension of its restriction to the hyperplane; the details are left to the reader. ∎
As a consequence of the previous proposition, all the results that we have proven for minimizers of \(\mathcal{J}\) also apply to minimizers of \(J\):
**Corollary A.3**.: _If \(u:{\mathbb{R}}^{n}\to{\mathbb{R}}\) is a minimizer to \(J\) in \(B_{2}\subset{\mathbb{R}}^{n}\) and \(0\in F(u)\), then \({\left\|{u}\right\|}_{C^{\alpha}(B_{1})}\leq C\), it satisfies the nondegeneracy condition \(u(x)\geq C{\rm dist}(x,F(u))^{\alpha}\) for \(x\in B_{1}\), the positive phase satisfies the corkscrew condition, every blow-up limit is \(\alpha\)-homogeneous, and the boundary condition in (1.2) is satisfied at \(F_{{\rm red}}(u)\)._
_Moreover, the positive phase \(\{u>0\}\cap B_{1}\) is a set of finite perimeter, the singular set is an \((n-3)\)-rectifiable set, it is discrete whenever \(n=3\) and it is empty if \(n\leq 2\)._
_All the constants depend only on \(n\) and \(\alpha\)._
## Appendix B Uniqueness of extensions
In Proposition A.1 we have used the following result, included in [7, Proposition 3.1]. Here we provide a proof which is different from the one appearing in [7].
**Proposition B.1**.: _Let \(\alpha\in(0,1)\), \(\beta=1-2\alpha\), and set \(\mathcal{L}u=-\operatorname{div}(|y|^{\beta}\nabla u)\) in \(\mathbb{R}^{n+1}\). Suppose that \(v:\overline{\mathbb{R}^{n+1}_{+}}\to\mathbb{R}\) is nonnegative outside \(\mathbb{R}^{n}\), is a solution to \(\mathcal{L}v=0\) in \(\mathbb{R}^{n+1}_{+}\) with \(v(x^{\prime},0)=0\) for all \(x^{\prime}\in\mathbb{R}^{n}\), and satisfies \(|v(x)|\leq C|x|^{\alpha}\). Then \(v\equiv 0\)._
Proof.: First, since \(|y|^{\beta}\) is \(C^{\infty}\) away from the hyperplane \(\mathbb{R}^{n}\), we have \(v\in C^{\infty}_{\operatorname{loc}}(\mathbb{R}^{n+1}_{+})\). Let now \(i\in\{1,\dots,n\}\), and set
\[f_{m}(x)=\frac{v\left(x+\frac{1}{m}e_{i}\right)-v(x)}{1/m}.\]
Let \(B_{r}=B_{r}(x^{\prime},0)\) be a ball centered at \((x^{\prime},0)\in\mathbb{R}^{n}\times\{0\}\) with radius \(r\), and let \(B_{2r}\) be its double ball. Set also \(w(x)=w(x^{\prime},y)=y^{\beta}\) for \(y>0\). Since \(f_{m}\) is a solution of \(\mathcal{L}f_{m}=0\) in \(B_{r}^{+}=B_{r}\cap\mathbb{R}^{n+1}_{+}\), [27, Theorem 2.4.3] shows that
\[\max_{B_{r}^{+}}|f_{m}(x)|\leq C\left(\frac{1}{w(B_{2r}^{+})}\int_{B_{2r}^{+}} |f_{m}|^{2}w\right)^{1/2}.\]
From convergence of difference quotients (similarly to [24, Theorem 3, page 277]), if \(v\in H^{1}(\beta,B_{2r}^{+})\), the last estimate will imply that \(f_{m}\) is uniformly bounded in \(B_{r}^{+}\) by a constant \(C_{r}\). Therefore, from the boundary Caccioppoli estimate ([27, (2.4.2)]) we have that
\[\int_{B_{r/2}^{+}}|\nabla f_{m}|^{2}w\leq\frac{C}{r^{2}}\int_{B_{r}^{+}}|f_{m} |^{2}w\leq\frac{C}{r^{2}}\int_{B_{r}^{+}}C_{r}^{2}w\leq C_{n,r,w}<\infty,\]
hence \(\{f_{m}\}\) is bounded in \(H^{1}(\beta,B_{r/2}^{+})\). From weak compactness, a subsequence of \(\{f_{m}\}\) converges to a solution of \(\mathcal{L}u=0\) in \(B_{r/2}^{+}\), and since \(f_{m}\to\partial_{i}v\) pointwise, we obtain that \(\partial_{i}v\) is an \(H^{1}(\beta,B_{r/2}^{+})\) solution in \(B_{r/2}^{+}\). Hence \(\partial_{i}v\) is a solution to \(\mathcal{L}u=0\) in \(\mathbb{R}^{n+1}_{+}\).
Now, for \(x=(x^{\prime},y)\in\mathbb{R}^{n+1}_{+}\), let \(R=|x|\). We distinguish between two cases: \(y>R/16\), and \(y<R/16\).
In the first case, set \(B_{R}\) to be the ball of radius \(R\), centered at \(x\). Note then that \(B_{R/16}\subseteq\mathbb{R}^{n+1}_{+}\). Then, from [27, Theorem 2.3.1], Caccioppoli’s estimate and the assumption \(|v(x)|\leq C|x|^{\alpha}\),
\[|\partial_{i}v(x)|^{2}\leq\frac{C}{w(B_{R/32})}\int_{B_{R/32}}|\partial_{i}v|^{2}w\leq\frac{C}{w(B_{R/32})}\frac{C}{R^{2}}\int_{B_{R/16}}|v|^{2}w\leq\frac{C}{R^{2}}\frac{w(B_{R/16})}{w(B_{R/32})}\sup_{B_{R/16}}|v|^{2}\leq CR^{2\alpha-2}.\]
In the second case, let \(B_{R}\) be the ball centered at \((x^{\prime},0)\) with radius \(R\), and denote \(B_{R}^{+}=B_{R}\cap\mathbb{R}^{n+1}_{+}\). Then \(x\in B_{R/8}^{+}\), therefore from [27, Theorem 2.4.3] and the boundary Caccioppoli estimate,
\[|\partial_{i}v(x)|^{2}\leq\frac{C}{w(B_{R/8}^{+})}\int_{B_{R/8}^{+}}|\partial_{i}v|^{2}w\leq\frac{C}{w(B_{R/8}^{+})}\frac{C}{R^{2}}\int_{B_{R/4}^{+}}|v|^{2}w\leq\frac{C}{R^{2}}\frac{w(B_{R/4}^{+})}{w(B_{R/8}^{+})}\sup_{B_{R/4}^{+}}|v|^{2}\leq CR^{2\alpha-2}.\]
So, in all cases, \(|\partial_{i}v(x)|\leq C|x|^{\alpha-1}\). Letting \(|x|\to\infty\) and using the maximum principle, we find that \(\partial_{i}v=0\) for every \(i=1,\dots,n\). Therefore \(v\) does not depend on the first \(n\) variables, so \(v(x^{\prime},y)=v(y)\). Hence, in \(\mathbb{R}^{n+1}_{+}\),
\[0=-\operatorname{div}(y^{\beta}{\nabla}v(y))=-{\partial_{y}}(y^{\beta}v^{ \prime}(y))\,\,\Rightarrow\,\,y^{\beta}v^{\prime}(y)=\tilde{c},\]
for some constant \(\tilde{c}\). From [27, Theorem 2.4.6], \(v\) is Hölder continuous up to the boundary, therefore for any \(y>0\),
\[v(y)=v(y)-v(0)=\int_{0}^{y}v^{\prime}=\int_{0}^{y}\tilde{c}s^{-\beta}\,ds= \frac{\tilde{c}}{1-\beta}y^{1-\beta},\]
which implies that
\[|\tilde{c}|=(1-\beta)y^{\beta-1}|v(y)|=(1-\beta)y^{\beta-1}|v(0,y)|\leq C(1-\beta)y^{\beta-1}y^{\alpha}=C(1-\beta)y^{-\alpha},\]
for any \(y>0\). Letting \(y\to\infty\) we obtain that \(\tilde{c}=0\), hence \(v^{\prime}(y)=0\) as well, which implies that \(v\) is a constant. Since \(v\) vanishes on \(\mathbb{R}^{n}\), this implies that \(v\equiv 0\). ∎
## References
* [AC81] Hans Wilhelm Alt and Luis A. Caffarelli. Existence and regularity for a minimum problem with free boundary. _Journal für die reine und angewandte Mathematik_, 325:105–144, 1981.
* [AFP00] Luigi Ambrosio, Nicola Fusco, and Diego Pallara. _Functions of bounded variation and free discontinuity problems_. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000.
* [All12] Mark Allen. Separation of a lower dimensional free boundary in a two-phase problem. _Mathematical research letters_, 19(5):1055–1074, 2012.
* [BV16] Claudia Bucur and Enrico Valdinoci. _Nonlocal diffusion and applications_, volume 20. Springer, 2016.
* [CN13] Jeff Cheeger and Aaron Naber. Lower bounds on Ricci curvature and quantitative behavior of singular sets. _Invent. Math._, 191(2):321–339, 2013.
* [CRS10a] Luis A. Caffarelli, Jean-Michel Roquejoffre, and Ovidiu Savin. Nonlocal minimal surfaces. _Communications on Pure and Applied Mathematics_, 63(9):1111–1144, 2010.
* [CRS10b] Luis A. Caffarelli, Jean-Michel Roquejoffre, and Yannick Sire. Variational problems with free boundaries for the fractional Laplacian. _Journal of the European Mathematical Society_, 12(5):1151–1179, 2010.
* [CS05] Luis A. Caffarelli and Sandro Salsa. _A geometric approach to free boundary problems_, volume 68. American Mathematical Society Providence, RI, 2005.
* [CS07] Luis A. Caffarelli and Luis Silvestre. An extension problem related to the fractional Laplacian. _Communications in partial differential equations_, 32(8):1245–1260, 2007.
* [CS14] Xavier Cabré and Yannick Sire. Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates. _Ann. Inst. H. Poincaré Anal. Non Linéaire_, 31(1):23–53, 2014.
* [Dal12] Gianni Dal Maso. _An introduction to \(\Gamma\)-convergence_, volume 8. Springer Science & Business Media, 2012.
* [DJ09] Daniela De Silva and David Jerison. A singular energy minimizing free boundary. _J. Reine Angew. Math._, 635:1–21, 2009.
* [DL76] Georges Duvaut and Jacques-Louis Lions. _Inequalities in mechanics and physics_. Springer-Verlag, Berlin-New York, 1976. Translated from the French by C. W. John, Grundlehren der Mathematischen Wissenschaften, 219.
* [DR12] Daniela De Silva and Jean-Michel Roquejoffre. Regularity in a one-phase free boundary problem for the fractional Laplacian. _Ann. Inst. H. Poincaré Anal. Non Linéaire_, 29(3):335–367, 2012.
* [DS93] Guy David and Stephen Semmes. _Analysis of and on uniformly rectifiable sets_, volume 38. American Mathematical Soc., 1993.
* [DS12] Daniela De Silva and Ovidiu Savin. \(C^{2,\alpha}\) regularity of flat free boundaries for the thin one-phase problem. _J. Differential Equations_, 253(8):2420–2459, 2012.
* [DS15] Daniela De Silva and Ovidiu Savin. Regularity of Lipschitz free boundaries for the thin one-phase problem. _Journal of the European Mathematical Society_, 17(6):1293–1326, 2015.
* [DSFS19] Daniela De Silva, Fausto Ferrari, and Sandro Salsa. Recent progresses on elliptic two-phase free boundary problems. to appear in Disc. Cont. Dyn. Systems, 2019.
* [DSS14] Daniela De Silva, Ovidiu Savin, and Yannick Sire. A one-phase problem for the fractional Laplacian: regularity of flat free boundaries. _arXiv preprint arXiv:1401.6443_, 2014.
* [DT15] Guy David and Tatiana Toro. Regularity of almost minimizers with free boundary. _Calc. Var. Partial Differential Equations_, 54(1):455–524, 2015.
* [DV17] Serena Dipierro and Enrico Valdinoci. Continuity and density results for a one-phase nonlocal free boundary problem. In _Annales de l’Institut Henri Poincare (C) Non Linear Analysis_, volume 34, pages 1387–1428. Elsevier, 2017.
* [EE19] Nick Edelen and Max Engelstein. Quantitative stratification for some free-boundary problems. _Trans. Amer. Math. Soc._, 371(3):2043–2072, 2019.
* [EG15] Lawrence Craig Evans and Ronald F. Gariepy. _Measure theory and fine properties of functions_. CRC press, 2015.
* [Eva98] Lawrence Craig Evans. _Partial differential equations_, volume 19 of _Graduate Studies in Mathematics_. American Mathematical Society, Providence, RI, 1998.
* [Fed69] Herbert Federer. _Geometric measure theory_. Springer, 1969.
* [FJK82] Eugene Barry Fabes, David Jerison, and Carlos E. Kenig. The Wiener test for degenerate elliptic equations. _Ann. Inst. Fourier (Grenoble)_, 32(3):151–182, 1982.
* [FKS82] Eugene Barry Fabes, Carlos E. Kenig, and Raul Paolo Serapioni. The local regularity of solutions of degenerate elliptic equations. _Comm. Partial Differential Equations_, 7(1):77–116, 1982.
* [FV17] Alessio Figalli and Enrico Valdinoci. Regularity and Bernstein-type results for nonlocal minimal surfaces. _J. Reine Angew. Math._, 729:263–273, 2017.
* [HKM06] Juha Heinonen, Tero Kilpeläinen, and Olli Martio. _Nonlinear potential theory of degenerate elliptic equations_. Courier Corporation, 2006.
* [Jon90] Peter W. Jones. Rectifiable sets and the traveling salesman problem. _Invent. Math._, 102(1):1–15, 1990.
* [KT03] Carlos E. Kenig and Tatiana Toro. On the free boundary regularity theorem of Alt and Caffarelli. _Discrete and Continuous Dynamical Systems_, 10:397–422, 2003.
* [NV17] Aaron Naber and Daniele Valtorta. Rectifiable-Reifenberg and the regularity of stationary and minimizing harmonic maps. _Annals of Mathematics_, pages 131–227, 2017.
* [Sil12] Luis Silvestre. On the differentiability of the solution to an equation with drift and fractional diffusion. _Indiana University Mathematics Journal_, pages 557–584, 2012.
* [SV12] Ovidiu Savin and Enrico Valdinoci. \(\Gamma\)-convergence for nonlocal phase transitions. _Ann. Inst. H. Poincaré Anal. Non Linéaire_, 29(4):479–500, 2012.
* [SV13] Ovidiu Savin and Enrico Valdinoci. Regularity of nonlocal minimal cones in dimension 2. _Calc. Var. Partial Differential Equations_, 48(1-2):33–39, 2013.
* [Vit18] Stefano Vita. _Strong competition systems ruled by anomalous diffusions_. PhD thesis, Universitá degli Studi di Torino, 2018.
* [Wei98] Georg Sebastian Weiss. Partial regularity for weak solutions of an elliptic free boundary problem. _Communications in partial differential equations_, 23(3-4):439–455, 1998.
* [Wei99] Georg Sebastian Weiss. Partial regularity for a minimum problem with free boundary. _Journal of Geometric Analysis_, 9(2):317–326, 1999.
# Renormalization group analysis of resonantly interacting anyons
Yusuke Nishida
Institute for Nuclear Theory, University of Washington, Seattle, Washington 98195-1550, USA
August 2007
###### Abstract
We formulate a field theory for resonantly interacting anyons, that enables us to perform a perturbative calculation near the fermionic limit. We derive renormalization group equations for three-body and four-body couplings at one-loop order. In addition to two fixed points, we find a limit cycle behavior in the four-body coupling, which implies an infinite set of bound states in the four-anyon system.
pacs: 05.30.Pr, 11.10.Hi, 21.45.+v
## I Introduction
Physics in two-dimensional space has intriguing features that cannot be found in three dimensions. One such example is anyons, quantum particles having fractional statistics [1; 2]. The statistics of anyons interpolates continuously between the usual fermion and boson cases. Anyons are not only theoretically interesting but can also be realized as quasiparticles in the fractional quantum Hall effect [3]. Recently it has been argued that anyonic quasiparticles may be created using rotating Bose-Einstein condensates [4] or cold atoms confined in an optical lattice [5]. Furthermore, anyons are attracting renewed interest in the context of quantum computation [6].
In this article, we investigate nonrelativistic anyons interacting via a contact interaction. Motivated by the recent experimental realization of fermions and bosons at infinite scattering length, we focus on the case where the two-body interaction is tuned to the resonance at threshold, i.e., two anyons form a bound state with zero binding energy. Because of its scale invariance, such a system should be described by a nonrelativistic conformal field theory (CFT). The field-theoretical description of resonantly interacting anyons known so far allows for perturbative calculations only near the bosonic limit [7; 8].
Here we propose an alternative description that becomes perturbative near the fermionic limit and study a few-body problem of resonantly interacting anyons. For more than two anyons, additional three-body and four-body contact interactions turn out to be necessary to ensure the renormalizability of the theory. In the renormalization group (RG) equations of three-body and four-body couplings, we will find two nontrivial fixed points and a limit cycle behavior of the four-body coupling. Our finding implies the existence of an infinite set of discrete bound states in the four-anyon system, where the ratio of two successive binding energies is given by
\[\frac{E_{n+1}}{E_{n}}\to\exp\!\left(-\frac{\pi}{3|\alpha|}\right)\] (1)
near the fermionic limit \(|\alpha|\ll 1\).
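For a sense of scale (a simple numerical illustration of Eq. (1)): at \(|\alpha|=0.1\) the ratio is
\[\frac{E_{n+1}}{E_{n}}=e^{-\pi/0.3}\approx 2.8\times 10^{-5},\]
so successive four-anyon bound states are separated by more than four orders of magnitude in binding energy.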
## II Anyons in quantum mechanics
Before going on to the field-theoretical formulation of anyons with the contact interaction, it is instructive to review the two-anyon problem in quantum mechanics. Anyons are characterized by a phase \(e^{i\pi(\alpha-1)}\) acquired by the wave function under the exchange of two anyons. Without loss of generality, the statistics parameter \(\alpha\) is assumed to be in the interval \([-1,1]\). For later convenience, we choose \(\alpha\) so that \(\alpha=0\) corresponds to fermions and \(|\alpha|=1\) corresponds to bosons.
In the center of mass frame, the two-body wave function can be expressed as \(\Psi_{\alpha}({\bm{r}})=e^{i\alpha\varphi}\Phi(r,\varphi)\), where \(r\) and \(\varphi\) are the relative distance and angle between the two anyons. \(\Phi(r,\varphi)\) is a fermionic wave function which is antisymmetric under \(\varphi\to\varphi+\pi\). Since \(\Psi_{\alpha}({\bm{r}})\) obeys a free Schrödinger equation, the radial functions in the partial-wave expansion \(\Phi(r,\varphi)=\sum_{l}e^{il\varphi}R_{l}(r)\) satisfy (below \(\hbar=1\) and anyon mass \(m=1\))
\[\left[\frac{d^{2}}{dr^{2}}+\frac{1}{r}\frac{d}{dr}-\frac{\left(l+\alpha\right) ^{2}}{r^{2}}+E\right]R_{l}(r)=0\] (2)
for \(r\neq 0\), where the odd integer \(l\) is an orbital angular momentum.
The contact interaction between anyons can be introduced by allowing a singular solution that behaves as \(R_{l}(r)\sim r^{-|l+\alpha|}\) at origin \(r\sim 0\), i. e., when the positions of two anyons coincide [9; 10; 11]. Because of the square integrability of the wave function, the contact interaction is available only for \(l=-1\) when \(\alpha>0\) or for \(l=1\) when \(\alpha<0\). In this channel, the contact interaction is equivalent to imposing the following boundary condition on the wave function at origin:
\[R_{\mp 1}(r)\to\frac{1}{r^{1-|\alpha|}}-\mathrm{sgn}(a)\left(\frac{r}{a^{2}} \right)^{1-|\alpha|}+O(r^{1+|\alpha|}),\] (3)
where \(a\) is a real parameter analogous to the scattering length in three spatial dimensions. There are two scale-independent boundary conditions: \(a\to 0\) and \(a\to\infty\). The former corresponds to the usual non-colliding limit. Since the binding energy is given by [10]
\[E_{\mathrm{b}}=-\frac{4}{a^{2}}\left[\frac{\Gamma(2-|\alpha|)}{\Gamma(|\alpha| )}\right]^{\frac{1}{1-|\alpha|}}\] (4)
when \(a\) is positive, the resonant limit corresponds to \(a\to\infty\).
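As a quick numerical check of the binding energy formula (4) (a minimal Python sketch; the values of \(\alpha\) and \(a\) below are arbitrary illustrative choices), one can verify that \(E_{\mathrm{b}}\) scales as \(1/a^{2}\) and thus vanishes in the resonant limit \(a\to\infty\):

```python
import math

def binding_energy(a, alpha):
    """Two-anyon binding energy of Eq. (4) for a > 0 (units with hbar = m = 1)."""
    ratio = math.gamma(2.0 - abs(alpha)) / math.gamma(abs(alpha))
    return -4.0 / a**2 * ratio ** (1.0 / (1.0 - abs(alpha)))

alpha = 0.1                       # statistics parameter near the fermionic limit
for a in (1.0, 10.0, 100.0):
    print(f"a = {a:6.1f}   E_b = {binding_energy(a, alpha):+.3e}")
# The prefactor is a-independent, so E_b -> 0 as a -> infinity (the resonant limit).
```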
An interesting observation is that the normalization integral of the wave function (3) is logarithmically divergent as \(r\to 0\) in the fermionic limit \(|\alpha|\to 0\). Therefore, if the statistics is close to fermionic, the two-body wave function is concentrated near the origin, and the anyon pair forms a pointlike dimer. The same situation has been observed in two-component fermions at infinite scattering length near four spatial dimensions [12]. This observation has led to a field theory composed of weakly coupled fermions and dimers [13; 14]. Similarly, as we shall see below, it is possible to formulate a field theory describing resonantly interacting anyons, which allows for a perturbative analysis near the fermionic limit.
## III Field-theoretical formulation
The field-theoretical representation of anyons is provided by a nonrelativistic field \(\psi\) minimally coupled to a Chern-Simons (CS) gauge field \((a_{0},{\bm{a}})\) [15; 16]:
\[\begin{split}\mathcal{L}_{\psi}&=\frac{1}{4\pi\alpha }\partial_{t}{\bm{a}}\times{\bm{a}}-\frac{1}{2\pi\alpha}a_{0}\bm{\nabla}\times {\bm{a}}-\frac{1}{2\zeta}\left(\bm{\nabla}\cdot{\bm{a}}\right)^{2}\\ &\quad+\psi^{*}\left(i\partial_{t}-a_{0}+\mu\right)\psi-\frac{1}{ 2}\left|\left(\bm{\nabla}-i{\bm{a}}\right)\psi\right|^{2}.\end{split}\] (5)
In our definition of \(\alpha\), \(\psi\) is a fermionic field. \(\mu\) is the anyon chemical potential and here we consider the system at zero density \(\mu=0\). We shall work in the Coulomb gauge \(\zeta=0\).
Motivated by the above argument, it is natural to add the following terms involving the dimer field \(\phi\) to (5):
\[\mathcal{L}_{\phi} =\phi^{*}\left(i\partial_{t}-2a_{0}-\epsilon_{0}\right)\phi-\frac {1}{4}\left|\left(\bm{\nabla}-2i{\bm{a}}\right)\phi\right|^{2}\] (6)
\[\quad+g\,\phi^{*}\psi\left(-i\partial_{x}\pm\partial_{y}\right) \psi+g\,\psi^{*}\left(-i\partial_{x}\mp\partial_{y}\right)\psi^{*}\phi.\]
\(\phi\) is a bosonic field with mass \(2m\) and has the orbital angular momentum \(l=\mp 1\). The upper sign corresponds to \(\alpha>0\) and the lower sign corresponds to \(\alpha<0\) throughout this article. \(\phi\) also couples to the CS gauge field in a gauge invariant way. \(g\) is a dimensionless coupling between the dimer and a pair of anyons in \(p\)-wave. \(\epsilon_{0}\) is a cutoff-dependent bare “mass gap”, which is fine-tuned so that the \(\phi\) propagator has a pole at \(p_{0}={\bm{p}}=0\), i. e.,
\[\epsilon_{0}=2g^{2}\int^{\Lambda}\!\frac{d{\bm{k}}}{(2\pi)^{2}}+O(\alpha g^{2}).\] (7)
This condition is equivalent to the fine-tuning to the resonant limit (zero binding energy).
The other marginal operators one can add to the Lagrangian density are three-body and four-body contact interactions:
\[\mathcal{L}_{\delta}=v_{3}\phi^{*}\psi^{*}\psi\phi+v_{4}\phi^{*}\phi^{*}\phi\phi.\] (8)
Note that terms such as \(\bm{\nabla}{\,\times\,}{\bm{a}}\,\psi^{*}\psi\) and \(\bm{\nabla}{\,\times\,}{\bm{a}}\,\phi^{*}\phi\) are also allowed, but they can be absorbed by the redefinition of \(v_{3}\) and \(v_{4}\) if we use the CS Gauss law
\[\bm{\nabla}\times{\bm{a}}=-2\pi\alpha\left(\psi^{*}\psi+2\phi^{*}\phi\right).\] (9)
Thus, the most general renormalizable Lagrangian density composed of \(\psi\) and \(\phi\) becomes \(\mathcal{L}=\mathcal{L}_{\psi}+\mathcal{L}_{\phi}+\mathcal{L}_{\delta}\). First, we concentrate on the two-body sector, i. e., a renormalization of the two-body coupling \(g\).
## IV Two-anyon physics
The renormalization of the theory can be performed in the standard way. There is a one-loop self-energy diagram for \(\phi\) that is logarithmically divergent [Fig. 1(a)]. Integrating out modes in the momentum shell \(e^{-s}\Lambda<k<\Lambda\), we obtain
\[\Sigma(p)=-\frac{g^{2}}{\pi}\left(p_{0}-\frac{{\bm{p}}^{2}}{4}\right)\ln\frac{ \Lambda}{e^{-s}\Lambda},\] (10)
which corresponds to the wave-function renormalization of \(\phi\); \(Z_{\phi}=1-(g^{2}/\pi)s\). The vertices of \(\phi\) with \(a_{0}\) or \({\bm{a}}\) are also renormalized in the same way. Thus, the anomalous dimension of \(\phi\) is found to be
\[\gamma_{\phi}=-\frac{1}{2}\frac{\partial\ln Z_{\phi}}{\partial s}=\frac{g^{2}} {2\pi}.\] (11)
<figure><img src="content_image/0708.4056/x1.png"><figcaption>Figure 1: (a) One-loop self-energy diagram to renormalize the wave function of ϕ. (b) One-loop vertex diagram to renormalize the two-body coupling g. The solid, dashed, and wavy lines represent propagators of ψ, ϕ, and CS gauge fields, respectively.</figcaption></figure>
The vertex \(g\,\phi^{*}\psi\left(-i\partial_{x}\pm\partial_{y}\right)\psi\) is renormalized by the logarithmically divergent one-loop diagram in Fig. 1(b), which is evaluated as
\[i|\alpha|g\left[\left({\bm{p}}-{\bm{q}}\right)_{x}\pm i\left({\bm{p}}-{\bm{q}} \right)_{y}\right]\ln\frac{\Lambda}{e^{-s}\Lambda}.\] (12)
As a result, a RG equation that governs the running of \(g\) is given by
\[\frac{dg}{ds}=-g\gamma_{\phi}+|\alpha|g=-\frac{g^{3}}{2\pi}+|\alpha|g.\] (13)
There is a nontrivial fixed point located at
\[{g^{*}}^{2}=2\pi|\alpha|.\] (14)
At this fixed point, the theory is a nonrelativistic CFT describing anyons at the two-body resonance. Since \({g^{*}}^{2}\sim\alpha\ll 1\) in the fermionic limit, one can perform a perturbative analysis in terms of \(\alpha\).
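The approach to this fixed point can be illustrated by integrating the RG equation (13) numerically; the following Python sketch (the initial coupling, step size, and value of \(\alpha\) are arbitrary choices) shows \(g\) flowing to \(g^{*}=\sqrt{2\pi|\alpha|}\) as \(s\) increases:

```python
import math

alpha = 0.1                                   # statistics parameter, |alpha| << 1
g_star = math.sqrt(2 * math.pi * abs(alpha))  # fixed point of Eq. (14)

def beta_g(g):
    """Right-hand side of Eq. (13): dg/ds = -g^3/(2 pi) + |alpha| g."""
    return -g**3 / (2 * math.pi) + abs(alpha) * g

g, ds = 0.05, 0.01                            # arbitrary initial coupling and RG step
for step in range(20001):
    if step % 5000 == 0:
        print(f"s = {step * ds:6.1f}   g = {g:.6f}   (g* = {g_star:.6f})")
    g += ds * beta_g(g)                       # simple Euler step toward the infrared
```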
In order to confirm the connection of physics described by this fixed point with resonantly interacting anyons, we compute two physical quantities at \(g=g^{*}\) using perturbative expansions in terms of \(\alpha\) and compare them with exact results in quantum mechanics. First, the amplitude of two-anyon scattering at the tree level (Fig. 2) is given by
\[\begin{split} A(p,\varphi)&=-\frac{4i\pi\alpha}{\sin \varphi}-4{g^{*}}^{2}e^{\mp i\varphi}\\ &=-4i\pi\alpha\frac{e^{\mp 2i\varphi}}{\sin\varphi}+O(\alpha^{2}) ,\end{split}\] (15)
where \(p\) is the magnitude of the relative momentum and \(\varphi\) is the scattering angle. The exact scattering amplitude obtained by solving the Schrödinger equation (2) with the boundary condition (3) at \(a\to\infty\) is
\[f_{\mathrm{exact}}(p,\varphi)=\frac{\sin\pi\alpha}{\sqrt{{i\pi p}}}\frac{e^{ \mp 2i\varphi}}{\sin\varphi}\qquad\text{for}\quad\varphi\neq 0,\,\pi.\] (16)
Thus, up to a kinematical factor, the perturbative result (15) reproduces the correct expanded form of the exact result to the leading order in \(\alpha\).
<figure><img src="content_image/0708.4056/x2.png"><figcaption>Figure 2: Tree diagrams for two-anyon scattering.</figcaption></figure>
The ground state energy of two anyons in a harmonic potential can be directly extracted from the scaling dimension of the operator \(\phi\) [17]. At the fixed point, the scaling dimension of \(\phi\) becomes
\[\Delta_{\phi}=1+\gamma_{\phi}=1+|\alpha|+O(\alpha^{2}),\] (17)
from which the ground state energy is given by \(E=\Delta_{\phi}\,\omega\) with \(\omega\) being the oscillator frequency. This result coincides with the exact result obtained in quantum mechanics,
\[E_{\mathrm{exact}}=\left(1+|\alpha|\right)\omega,\] (18)
including the energy of center of mass motion. The corresponding relative wave function is
\[\Phi(r,\varphi)=\frac{e^{\mp i\varphi}}{r^{1-|\alpha|}}e^{-\omega r^{2}/4},\] (19)
which satisfies the boundary condition (3) with \(a\to\infty\).
These results establish that the fixed point (14) describes the physics of anyons with the two-body contact interaction tuned to the resonance.
## V RG equations for \(v_{3}\) and \(v_{4}\)
Now we study the renormalization of the three-body and four-body couplings \(v_{3}\) and \(v_{4}\). First, we shall look at the one-loop diagram in Fig. 3. Integrating out modes in the momentum shell \(e^{-s}\Lambda<k<\Lambda\), we obtain
\[i\frac{g^{2}}{2\pi}\left[({\bm{p}}+{\bm{q}})_{i}\mp i\epsilon^{ij}({\bm{p}}-{ \bm{q}})_{j}\right]\ln\frac{\Lambda}{e^{-s}\Lambda}.\] (20)
The first term renormalizes the vertex \(-i{\bm{a}}\cdot\phi^{*}\tensor{\bm{\nabla}}\phi/2\) and is consistent with the wave-function renormalization of \(\phi\). The second term generates the following vertex in the Lagrangian density:
\[\delta\mathcal{L}=\mp\frac{g^{2}}{2\pi}s\,\bm{\nabla}\times{\bm{a}}\,\phi^{*}\phi.\] (21)
Using the CS Gauss law (9), the generated vertex \(\delta\mathcal{L}\) is converted into the renormalization of the couplings \(v_{3}\) and \(v_{4}\).
<figure><img src="content_image/0708.4056/x3.png"><figcaption>Figure 3: One-loop diagram to renormalize the vertex −ia⋅ϕ∗\tensor∇ϕ/2. This diagram also contributes to the renormalization of v3 and v4.</figcaption></figure>
The other one-loop diagrams that contribute to the renormalization of \(v_{3}\) and \(v_{4}\) are depicted in Figs. 4 and 5, respectively. Taking into account the contribution of the wave-function renormalization of \(\phi\), RG equations that govern the running of \(v_{3}\) and \(v_{4}\) are given by
\[\frac{dv_{3}}{ds} =\frac{20\pi}{3}\alpha^{2}-\frac{22}{3}|\alpha|v_{3}+\frac{2}{3 \pi}v_{3}^{\,2},\] (22)
\[\frac{dv_{4}}{ds} =-20\pi\alpha^{2}+4|\alpha|v_{3}-4|\alpha|v_{4}+\frac{2}{\pi}v_{4 }^{\,2}.\] (23)
Here \(g=g^{*}\) was substituted. We note that the \(\beta\) function of \(v_{3}\) (\(v_{4}\)) is a quadratic function of \(v_{3}\) (\(v_{4}\)) to all orders. Higher-order corrections appear only in their coefficients and do not alter the following discussion as long as the perturbative expansion is valid, \(|\alpha|\ll 1\).
<figure><img src="content_image/0708.4056/x4.png"><figcaption>Figure 4: One-loop diagrams to renormalize the three-body coupling v3.</figcaption></figure>
<figure><img src="content_image/0708.4056/x5.png"><figcaption>Figure 5: One-loop diagrams to renormalize the four-body coupling v4.</figcaption></figure>
Equation (22) has two fixed points at
\[v_{3}^{*}=\pi|\alpha|\qquad\text{and}\qquad v_{3}^{*}=10\pi|\alpha|,\] (24)
each of which corresponds to a scale-independent short range boundary condition on the three-body wave function. The stable fixed point (\(v_{3}^{*}=\pi|\alpha|\)) corresponds to the regular wave function without the three-body resonance, while the unstable one (\(v_{3}^{*}=10\pi|\alpha|\)) corresponds to the three-body resonance at threshold. When \(v_{3}\) is at the former fixed point \(v_{3}^{*}=\pi|\alpha|\), Eq. (23) also has stable and unstable fixed points at
\[v_{4}^{*}=-2\pi|\alpha|\qquad\text{and}\qquad v_{4}^{*}=4\pi|\alpha|.\] (25)
The unstable fixed point \(v_{4}^{*}=4\pi|\alpha|\) corresponds to the four-body resonance at threshold.
However, when \(v_{3}\) is at the other fixed point corresponding to the three-body resonance, Eq. (23) does not have a fixed point. In this case, the solution of the coupled equations (22) and (23) is
\[\begin{split} v_{3}(s)&=10\pi|\alpha|,\\ v_{4}(s)&=\pi|\alpha|+3\pi|\alpha|\tan\left(6| \alpha|s+C\right),\end{split}\] (26)
where \(C\) is a constant of integration. We thus find an RG limit cycle where \(v_{3}(s)\) is a constant and \(v_{4}(s)\) is a periodic function of \(s\) with a primitive period \(\pi/(6|\alpha|)\). The system tuned to this limit cycle has a discrete scaling symmetry with a scaling factor \(\exp[\pi/(6|\alpha|)]\) in momentum. In particular, it implies a geometric spectrum of energy eigenvalues [18] in the four-anyon system when the two-body and three-body interactions are simultaneously tuned to the resonance. When the statistics is close to fermionic, \(|\alpha|\ll 1\), the ratio of two successive energy eigenvalues is given by
\[\frac{E_{n+1}}{E_{n}}\to\exp\!\left[-\frac{\pi}{3|\alpha|+O(\alpha^{2})}\right].\] (27)
There are an infinite number of bound states with an accumulation point at zero energy. This resembles Efimov states in three-body systems at infinite scattering length in three spatial dimensions [19; 20; 21], for which experimental evidence has recently been reported using cold atoms in Ref. [22].
Near the bosonic limit, on the other hand, the Efimov states do not exist in a few-anyon system. This is because the two-body resonant interaction becomes just zero interaction in the bosonic limit. Therefore, there should be at least one critical value of \(\alpha\) where the four-anyon bound states disappear. Determination of such a critical \(\alpha\) is, however, beyond the scope of our perturbative approach.
The RG flow diagram obtained from Eqs. (22) and (23) is shown in Fig. 6. We see two fixed points located at
\[(v_{3}^{*},v_{4}^{*})=(\pi|\alpha|,-2\pi|\alpha|)\quad\text{and}\quad(\pi| \alpha|,4\pi|\alpha|),\] (28)
and the limit cycle given by the solution (26).
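The same structure can be seen by integrating Eqs. (22) and (23) directly. The sketch below (all numerical values are arbitrary illustrative choices) lets a generic initial condition below the three-body resonance flow to the stable fixed point \((\pi|\alpha|,-2\pi|\alpha|)\), and also evaluates the limit-cycle period \(\pi/(6|\alpha|)\) and the binding-energy ratio (27) for a small \(|\alpha|\):

```python
import math

alpha = 0.1
a = abs(alpha)

def beta_v3(v3):
    """Right-hand side of Eq. (22) at g = g*."""
    return 20 * math.pi / 3 * alpha**2 - 22 / 3 * a * v3 + 2 / (3 * math.pi) * v3**2

def beta_v4(v3, v4):
    """Right-hand side of Eq. (23) at g = g*."""
    return -20 * math.pi * alpha**2 + 4 * a * v3 - 4 * a * v4 + 2 / math.pi * v4**2

# Generic initial condition with v3 below the three-body resonance:
v3, v4, ds = 2 * math.pi * a, 0.0, 1e-2
for _ in range(40000):
    v3, v4 = v3 + ds * beta_v3(v3), v4 + ds * beta_v4(v3, v4)
print(v3 / (math.pi * a), v4 / (math.pi * a))   # ~ 1.0 and ~ -2.0: the stable fixed point

# On the limit cycle (v3 = 10 pi |alpha|), v4 is periodic with period pi/(6|alpha|);
# the corresponding ratio of successive four-anyon binding energies is Eq. (27):
print(math.pi / (6 * a))                        # RG period in s
print(math.exp(-math.pi / (3 * a)))             # E_{n+1}/E_n ~ 2.8e-5 for alpha = 0.1
```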
<figure><img src="content_image/0708.4056/x6.png"><figcaption>Figure 6: RG flow diagram in the plane of v3 and v4 at g=g∗. There are two fixed points (dots) and one limit cycle (thick line with two arrows). The arrows indicate flows toward the infrared limit.</figcaption></figure>
## VI Scaling dimensions of operators
It is worthwhile to determine scaling dimensions of \(N\)-body operators at the fixed points because they are observable as energy eigenvalues in a harmonic potential [17]. At the one-loop level, anomalous dimensions of three-body and four-body composite operators \(\phi\psi\) and \(\phi\phi\) are, respectively, computed as
\[\begin{split}\gamma_{\phi\psi}&=\frac{8|\alpha|}{3}- \frac{2v_{3}^{*}}{3\pi}+O(\alpha^{2}),\\ \gamma_{\phi\phi}&=-\frac{2v_{4}^{*}}{\pi}+O(\alpha^ {2}).\end{split}\] (29)
From those, we obtain the scaling dimension of the lowest \((2n+1)\)-body operator \(\phi^{n}\psi\) as
\[\Delta_{\phi^{n}\psi}=n+1+n\gamma_{\phi}+n\gamma_{\phi\psi}\] (30)
and that of the lowest \((2n)\)-body operator \(\phi^{n}\) as
\[\Delta_{\phi^{n}}=n+n\gamma_{\phi}+\frac{n(n-1)}{2}\gamma_{\phi\phi}\] (31)
up to the order \(\alpha\). Then the ground state energy of \(N\) anyons in a harmonic potential is given by the scaling dimension of the \(N\)-body operator times \(\omega\). The leading-order results can be easily understood by recalling that, in the fermionic limit \(|\alpha|\to 0\), fermion pairs at the \(p\)-wave resonance form point-like bosons and they do not interact with each other or with extra fermions. Since these particles can occupy the same state, the ground state energy is simply given by the number of particles multiplied by the lowest single particle energy \(\omega\) in a harmonic potential.
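For concreteness, at the stable fixed point \((v_{3}^{*},v_{4}^{*})=(\pi|\alpha|,-2\pi|\alpha|)\) the anomalous dimensions (29) reduce to \(\gamma_{\phi\psi}=2|\alpha|\) and \(\gamma_{\phi\phi}=4|\alpha|\), and the scaling dimensions (30) and (31) can be tabulated directly (a short sketch; the value of \(\alpha\) is an arbitrary illustration):

```python
import math

alpha = 0.05                 # arbitrary small statistics parameter
a = abs(alpha)

gamma_phi    = a             # Eq. (11) with g*^2 = 2 pi |alpha|
gamma_phipsi = 8 * a / 3 - 2 * (math.pi * a) / (3 * math.pi)   # Eq. (29), v3* = pi |alpha|
gamma_phiphi = -2 * (-2 * math.pi * a) / math.pi               # Eq. (29), v4* = -2 pi |alpha|

for n in range(1, 5):
    d_odd  = n + 1 + n * gamma_phi + n * gamma_phipsi            # Delta(phi^n psi), Eq. (30)
    d_even = n + n * gamma_phi + n * (n - 1) / 2 * gamma_phiphi  # Delta(phi^n), Eq. (31)
    # The ground state energy of the corresponding N-anyon system in a harmonic
    # trap is Delta * omega to this order.
    print(f"n = {n}:  Delta(phi^n psi) = {d_odd:.3f}   Delta(phi^n) = {d_even:.3f}")
```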
## VII Many-body physics
An interesting extension of our work is to the many-body system of anyons. Here we discuss some qualitative features of such a system. When the two-body resonant interaction is present, we may further introduce the three-body and four-body resonant interactions. If the three-body resonant interaction is introduced, the Efimov effect occurs in the four-anyon system (the limit cycle in Fig. 6) and accordingly the corresponding many-body system cannot be stable. If instead the four-body resonant interaction is introduced (the upper fixed point in Fig. 6), the corresponding many-body system will be unstable because of the attractive interaction between bosonic dimers. Therefore the only stable many-body system seems to be the case where the two-body resonant interaction is present without the three-body and four-body resonant interactions (the lower fixed point in Fig. 6).
In this case, we define \(\xi\) as the ratio of the ground state energy \(E\) to that of “noninteracting” anyons \(E_{0}\) at the same density and statistics; \(\xi=E/E_{0}|_{n,\alpha}\). Since the system has no intrinsic dimensionful quantity, \(\xi\) can depend only on the statistics parameter \(\alpha\). Although determining \(\xi\) for general \(\alpha\) is a difficult many-body problem, it becomes tractable near the fermionic limit. In the fermionic limit, the ground state energy \(E\) vanishes because resonantly interacting anyons form noninteracting point-like bosons, while \(E_{0}\) remains nonzero because of the Fermi energy. Therefore we have \(\xi\to 0\) in the fermionic limit \(\alpha\to 0\). Corrections to \(\xi\) near the fermionic limit are computable using the perturbative framework developed in this article. It will be important in future work to further study many-body properties of resonantly interacting anyons, for example, whether they exhibit superfluidity, and their implications for real planar systems.
## VIII Conclusion
We have investigated anyons with a contact interaction by formulating a field theory that becomes perturbative near the fermionic limit. We identified a fixed point that describes anyons at the two-body resonance. In RG equations for three-body and four-body couplings, we found two nontrivial fixed points and computed scaling dimensions of \(N\)-body operators which are observable as a ground state energy of \(N\) anyons in a harmonic potential.
Furthermore, we found a limit cycle behavior in the four-body coupling, which implies an infinite set of discrete bound states (Efimov states) in the four-anyon system when both the two-body and three-body interactions are tuned to the resonance. Such a system provides a unique opportunity to study Efimov-type physics within a perturbative framework. It should be possible to verify these results by solving a few-body Schrödinger equation for anyons with suitable short range boundary conditions and hopefully by experiments in the future. We believe this work opens up new universal physics in the system of anyons in two spatial dimensions.
###### Acknowledgements.
The author thanks D. T. Son for useful discussions. This work was supported by JSPS Postdoctoral Program for Research Abroad.
## References
* (1) J. M. Leinaas and J. Myrheim, Nuovo Cimento B **37**, 1 (1977).
* (2) F. Wilczek, Phys. Rev. Lett. **48**, 1144 (1982); Phys. Rev. Lett. **49**, 957 (1982).
* (3) See, e. g., F. Wilczek, _Fractional statistics and anyon superconductivity_ (World Scientific, Singapore, 1990).
* (4) B. Paredes, P. Fedichev, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. **87**, 010402 (2001). B. Paredes, P. Zoller, and J. I. Cirac, Phys. Rev. A **66**, 033609 (2002).
* (5) A. S. Sørensen, E. Demler, and M. D. Lukin, Phys. Rev. Lett. **94**, 086803 (2005). M. Hafezi, A. S. Sørensen, E. Demler, and M. D. Lukin, Phys. Rev. A **76**, 023613 (2007).
* (6) A. Y. Kitaev, Annals Phys. **303**, 2 (2003).
* (7) O. Bergman and G. Lozano, Annals Phys. **229**, 416 (1994).
* (8) G. Amelino-Camelia and D. Bak, Phys. Lett. B **343**, 231 (1995).
* (9) J. Grundberg, T. H. Hansson, A. Karlhede, and J. M. Leinaas, Mod. Phys. Lett. B **5**, 539 (1991).
* (10) C. Manuel and R. Tarrach, Phys. Lett. B **268**, 222 (1991).
* (11) M. Bourdeau and R. D. Sorkin, Phys. Rev. D **45**, 687 (1992).
* (12) Z. Nussinov and S. Nussinov, Phys. Rev. A **74**, 053622 (2006).
* (13) Y. Nishida and D. T. Son, Phys. Rev. Lett. **97**, 050403 (2006); Phys. Rev. A **75**, 063617 (2007). Y. Nishida, Phys. Rev. A **75**, 063618 (2007).
* (14) P. Nikolić and S. Sachdev, Phys. Rev. A **75**, 033608 (2007).
* (15) C. R. Hagen, Phys. Rev. D **31**, 848 (1985).
* (16) R. Jackiw and S.-Y. Pi, Phys. Rev. D **42**, 3500 (1990) [Erratum-ibid. D **48**, 3929 (1993)].
* (17) Y. Nishida and D. T. Son, Phys. Rev. D **76**, 086004 (2007).
* (18) E. J. Mueller and T.-L. Ho, arXiv:cond-mat/0403283.
* (19) V. Efimov, Phys. Lett. B **33**, 563 (1970); Sov. J. Nucl. Phys. **12** 589 (1971); Sov. J. Nucl. Phys. **29** 546 (1979).
* (20) S. Albeverio, R. Høegh-Krohn, and T. T. Wu, Phys. Lett. A **83**, 105 (1981).
* (21) P. F. Bedaque, H.-W. Hammer, and U. van Kolck, Phys. Rev. Lett. **82**, 463 (1999); Nucl. Phys. A **646**, 444 (1999).
* (22) T. Kraemer _et al._, Nature **440**, 315 (2006).
|
1810.12908 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 5417,
"num_imgs": 0,
"llama3_tokens_count": 1510
} | [] | ## Appendix: Choosing optimal measurements for wind estimation
Due to the sensitivity of the optimization problem, a natural question is how to choose measurements to maximize observability of the wind velocity, i.e. reduce sensitivity to noise. This problem is closely related to generating exciting trajectories for optimal parameter estimation in robotics, see Armstrong1989, Swevers1997, Park2006. The results of the following analysis may then be applied to path planning, trajectory generation, or as a control signal to achieve active sensing.
According to matrix function literature Trefethen1997, Higham2008, Golub2012, the sensitivity to noise of an optimization problem such as eq:minim_goal_function depends on the condition number of its Jacobian. By minimizing the condition number of the Jacobian \({\mat{J}=\partial\vc{f}(\vc{x})/\partial\vc{x}}\), we maximize the observability of the problem. The absolute condition number \(\hat{\kappa}(\cdot)\) is defined as
\[\hat{\kappa}=\|\mat{J}(\vc{x})\|,\] ( )
whereas the _relative condition number_\(\kappa(\cdot)\) is defined as
\[\kappa=\frac{\norm{\mat{J}(\vc{x})}}{\norm{\vc{f}(\vc{x})}\norm{\vc{x}}},\] ( )
where the Frobenius norm of a matrix
\[\norm{\mat{A}}_{F}=\sqrt{\mathrm{Tr}\{\mat{A}^{T}\mat{A}\}}=\sqrt{\sum_{i=1}^{ m}\sum_{j=1}^{n}|a_{ij}|^{2}}\] ( )
is commonly used. We are interested in the Jacobian of the previously defined least squares problem. Recall the Levenberg-Marquardt update step
\[\delta\vc{x}=\left(\mat{J}^{T}\mat{J}+\mu\mat{I}\right)^{-1}\mat{J}^{T}\,\vc{f }(\vc{x}),\] ( )
where \(\mu\) is the regularization parameter, \(\delta\vc{x}\) is the update step of the optimized variable, and \(\vc{f}(\vc{x})\) is the cost function value evaluated at \(\vc{x}\). Hence, the condition number of the inverted matrix \(\mat{J}^{T}\mat{J}\) is a measure of the problem’s sensitivity. Note also that \({\kappa(\mat{J}^{T}\mat{J})=\kappa^{2}(\mat{J})}\).
Next, we define the optimization problem for choosing propeller orientations that result in the best conditioning of the least squares problem. First, we define a _sequential optimization problem_ where a set of \({K-1}\) measurements is available, and we want to choose the next measurement \(K\) that will maximize the observability of the least squares problem. This approach is useful for online motion planning or control, where only local information is available. Let \({\vc{\varphi}}\) be a suitable parameterization of the measurement orientation. The orientation \(\vc{\varphi}_{K}^{*}\) associated with the _next best measurement_ is then obtained by solving the optimization problem
\[\vc{\varphi}_{K}^{*}=\underset{\vc{\varphi}}{\mathrm{arg\,min}}\ \kappa(\mat{J }).\] ( )
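As a concrete numerical illustration of these quantities (the Jacobian, residual, and parameter values below are made-up stand-ins, not the wind-estimation Jacobian derived earlier), the condition numbers can be evaluated directly with NumPy:

```python
import numpy as np

# Hypothetical stand-ins: in the wind-estimation problem J would be the Jacobian
# of the residual f(x) evaluated at the current estimate x.
J = np.array([[2.0, 0.1],
              [0.1, 0.5],
              [1.0, 0.0]])
f = np.array([0.3, -0.2, 0.1])
x = np.array([1.5, -0.7])

abs_cond = np.linalg.norm(J, 'fro')                            # absolute condition number (Frobenius norm)
rel_cond = abs_cond / (np.linalg.norm(f) * np.linalg.norm(x))  # relative condition number as defined above

s = np.linalg.svd(J, compute_uv=False)                         # singular values of J
print(abs_cond, rel_cond)
print(np.linalg.cond(J.T @ J), (s[0] / s[-1])**2)              # kappa(J^T J) = kappa(J)^2 (2-norm)
```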
The problem eq:minim_goal_function is normalized for easily interpretable physical values. We may therefore simplify the optimization to minimizing the absolute condition number \(\hat{\kappa}\), obtaining
\[\vc{\varphi}_{K}^{*}=\underset{\vc{\varphi}}{\mathrm{arg\,min}}\ \norm{\mat{J} }_{F}^{2},\] ( )
where we squared the norm to simplify computation of the derivatives. Note that this approach is similar to observability analysis, with the Jacobian \(\mat{J}\) corresponding to the observation matrix. This local problem may also be solved online by means of a gradient descent algorithm. With \({\vc{\varphi}}\) the parameterization of the measurement orientation introduced above, in continuous time, solving
\[\dot{\vc{\varphi}}=-\gamma\,\frac{\partial\norm{\mat{J}}_{F}^{2}}{\partial\vc{ \varphi}},\] ( )
where \(\gamma\) is the descent factor, will locally converge to the optimal measurement angles. Note that barrier functions should also be included in order to ensure that only physically feasible solutions are obtained. Using the squared Frobenius norm leads to
\[\frac{\partial\norm{\mat{J}}_{F}^{2}}{\partial\varphi_{k}}=2\sum_{i=1}^{m}\sum_{j=1}^{n}J_{ij}\frac{\partial J_{ij}}{\partial\varphi_{k}}.\] ( )
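A minimal sketch of this gradient descent is given below. The single-angle Jacobian model is made up purely for illustration (it is not the Jacobian of this paper), and the gradient is approximated by finite differences rather than the analytic expression above:

```python
import numpy as np

def J_of(phi):
    """Hypothetical Jacobian depending on one measurement angle phi."""
    return np.array([[np.cos(phi), 0.2],
                     [2.0 * np.sin(phi), 1.0]])

def frob_sq(phi):
    """Squared Frobenius norm ||J(phi)||_F^2."""
    return float(np.sum(J_of(phi)**2))

def grad(phi, eps=1e-6):
    """Finite-difference stand-in for 2 * sum_ij J_ij dJ_ij/dphi."""
    return (frob_sq(phi + eps) - frob_sq(phi - eps)) / (2.0 * eps)

phi, gamma = 0.3, 0.1            # arbitrary initial angle and descent factor
for _ in range(200):
    phi -= gamma * grad(phi)     # discretized version of the continuous-time descent above
print(phi, frob_sq(phi))         # converges to the angle minimizing ||J||_F^2
```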
In other cases we want to plan a path or trajectory that will contain optimal measurement poses. Such a _simultaneous optimization_ problem can be defined by having a set of roll and pitch angles \({(\vc{\phi},\vc{\theta})}\). The optimal angles are then a solution to the optimization problem over all \(\vc{\varphi}\) such that
\[\vc{\varphi}_{1\ldots K}^{*}=\underset{\vc{\varphi}_{1\ldots K}}{\mathrm{arg\, min}}\ \kappa(\mat{J}).\] ( )
Though we have shown how trajectories for optimal wind estimation can be obtained in principle, this active sensing approach to wind estimation is clearly beyond the scope of this paper and is left for future work.
## Jacobian of the transformed formulation
Take \({j=(1,2,3)}\) and the substitutions \({\vc{z}=\vc{v}+\mat{R}_{k}^{T}\vc{v}_{0,k}}\), and \({\vc{y}=\vc{v}-\mat{R}_{k}^{T}\left(\vc{v}_{0,k}-v_{i,k}\vc{e}_{3}\right)}\). Elements of the Jacobian eq:f_jacobian can now be written as
( )
## Derivative of Jacobian
In order to obtain the gradient eq:sequential_optimal_potential, the partial derivatives of the Jacobian eq:f_jacobian are required. Define the partial derivative of \(A\) w.r.t \(\varphi\) as \({A^{\varphi}:=\frac{\partial A}{\partial\varphi}}\). For \({j=(1,2,3)}\), the partial derivatives of the Jacobian w.r.t a rotation parameter \(\varphi\) are
( )
Note that we dropped the term associated with \({\norm{\vc{v}_{k}}^{\varphi}=0}\), because the rotation does not affect the wind velocity norm.
|
1805.03852 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 59284,
"num_imgs": 0,
"llama3_tokens_count": 19459
} | [] | # When Names Are Not Commonly Known: Epistemic Logic with Assignments
Yanjing Wang
Department of Philosophy, Peking University
Jeremy Seligman
Department of Philosophy, University of Auckland
###### Abstract
In standard epistemic logic, agent names are usually assumed to be common knowledge implicitly. This is unreasonable for various applications. Inspired by term modal logic and assignment operators in dynamic logic, we introduce a lightweight modal predicate logic where names can be non-rigid. The language can handle various _de dicto_ /_de re_ distinctions in a natural way. The main technical result is a complete axiomatisation of this logic over S5 models.
term modal logic, axiomatisation, non-rigid constants, dynamic logic
## 1 Introduction
One dark and stormy night, Adam was attacked and killed. His assailant, Bob, ran away, but was seen by a passer-by, Charles, who witnessed the crime from start to finish. This led quickly to Bob’s arrest. Local news picked up the story, and that is how Dave heard it the next day, over breakfast. Now, in one sense we can say that both Charles and Dave know that Bob killed Adam. But there is a difference in what they know about just this fact. Although Charles witnessed the crime, and was able to identify the murderer and victim to the police, he might have no idea about their names. If asked “Did Bob kill Adam?” he may not know. Yet this is a question that Dave could easily answer, despite not knowing who Adam and Bob are—he is very unlikely to be able to identify them in a line-up.
The distinction between these _de re_ and _de dicto_ readings of “knowing Bob killed Adam” is hard to make in standard epistemic logic, where it is implicitly assumed that the names of agents are rigid designators and thus that it’s common knowledge to whom they refer. But in many cases, the distinction is central to our understanding. On the internet, for example, users of websites and other online applications typically have multiple identities, and may even be anonymous. Names are rarely a matter of common knowledge and distinctions as to who knows who is whom are of great interest.
Further complexities arise with higher-order knowledge and belief. In [7], Grove gives an interesting example of a robot with a mechanical problem calling out for help (perhaps in a Matrix-like future with robots ruling the world unaided by humans). To plan further actions, the broken robot, called \(a\), needs to know if its request has been heard by the maintenance robot, called \(b\). But how to state exactly what \(a\)_needs to know_? In English we would probably write it as:
1. \(a\) knows that \(b\) knows that \(a\) needs help.
A naive formulation in standard (predicate) epistemic logic is \(\mathsf{K}_{a}\mathsf{K}_{b}H(a)\). But without the assumption that the robots’ names are both commonly known, there are various ambiguities. For example, if \(b\) does not know which robot is named ‘\(a\)’ then neither does \(b\) know whom to help nor has \(a\) any confidence of being helped. On the other hand, \(a\) may not know that ‘\(b\)’ is the name of the maintenance robot, thus merely knowing \(b\) knows \(a\) needs help is not enough for \(a\) to be sure it will be helped. The authors of [4] list several possible readings of (\(\star\)), which we will elaborate as follows: \(a\), the broken robot, knows that
1. the robot named ‘\(b\)’ knows that the robot named ‘\(a\)’ needs help, or
2. the robot named ‘\(b\)’ knows that it, i.e. the broken robot, needs help, or
3. the maintenance robot knows that the robot named ‘\(a\)’ needs help, or
4. the maintenance robot knows that it, i.e. the broken robot, needs help.
It is impossible to distinguish the above readings in standard epistemic logic. In the literature [8, 7, 4, 5, 11], various approaches are proposed. In [7], Grove correctly pinpoints the problems of _scope_ and _manner of reference_ in giving various _de re_ /_de dicto_ readings for higher-order knowledge, and proposes a new semantics for 2-sorted first-order modal logic that is based on world-agent pairs, so as to cope with indexicals like “me”. A special predicate symbol ‘In’ is introduced to capture scope explicitly: \(\texttt{In}(a,b,n)\) holds at a world \(w\) iff \(b\) is someone named \(n\) by \(a\) in \(w\). In [5, 20], an intensional first-order modal logic uses predicate abstraction to capture different readings. \((\lambda x.\mathsf{K}_{b}Hx)(a)\) says that agent \(b\) knows _de re_ that \(a\) is in need of help, whether or not \(b\) knows that agent is named ‘\(a\)’, whereas \(\mathsf{K}_{b}H(a)\) says that \(b\) knows _de dicto_ that someone called ‘\(a\)’ needs help, whether or not \(b\) knows who \(a\) is. The authors of [4] propose a very general framework with complex operators based on counterpart semantics.¹ Without going into details, the formula \(|t:\genfrac{}{}{0.0pt}{}{t_{1}\dots t_{n}}{x_{1}\dots x_{n}}|\varphi(x_{1} \dots x_{n})\) means, roughly, that the agent named by term \(t\) knows _de re_ that \(\varphi\) of the things denoted by terms \(t_{1}\dots t_{n}\). Holliday and Perry also bring the alethic modality into the picture together with the doxastic modality, and highlights the use of _roles_ to capture subtle readings in [11], where the multi-agent cases are handled by perspective switching based on a single-agent framework.
In this paper, we follow the _dynamic term modal logic_ approach proposed by Kooi [13], based on _term modal logic_ proposed in [6]. Term modal logic uses terms to index modalities which can also be quantified, so that \(\mathsf{K}_{f(a)}\neg\forall x\mathsf{K}_{x}\varphi\) says that \(a\)’s father knows that not everyone knows \(\varphi\). The accessibility relation used in the semantics of \(\mathsf{K}_{t}\), where \(t\) is a term, is then relative to the world \(w\) at which this formula is evaluated: it is the one labeled by the agent denoted by term \(t\) in \(w\). Based on this, Kooi [13] borrows dynamic assignment modalities from (first-order) dynamic logic so as to adjust the denotation of names, now assumed to be non-rigid in general, in contrast to the usual _constants_ of first-order modal logic which are assumed rigid.
The full first-order term modal language is clearly undecidable. In [16], it is shown that even its propositional fragment is undecidable,² and the addition of the program modalities in dynamic logic makes things worse. As Kooi remarks in [13], the combination of term modal logic and dynamic assignment logic is not even recursively enumerable. A closely related study is the doctoral thesis of Thalmann [20], which provides many results including sequent calculi and tableaux systems for both term modal logic (with quantifiers) and quantifier-free dynamic assignment logic (with regular program constructions). But the two logics are studied _separately_, leaving their combination as future work.³ It is shown that the quantifier-free part of dynamic assignment logic is undecidable with both the (Kleene) star operator and (rigid) function symbols, but it is decidable if there is no star operator.⁴ A rich treatment of various issues of ‘semantic competence’ with names that uses term modal logic is given by Rendsvig in [18].
In this paper, we take a minimalist approach, introducing only the basic assignment modalities from dynamic logic combined with a quantifier-free term modal logic, without function symbols, to obtain a small fragment of the logic in [13], which we conjecture to be decidable over S5 models (see discussions at the end of the paper). However, as we will soon see, it is already a very powerful tool for expressing various _de re/de dicto_ distinctions, as well as a kind of _knowing who_, which was discussed by Hintikka [10] at the very inception of epistemic logic.⁵ The language is very simple and intuitive to use as a genuine multi-agent epistemic logic that does not presuppose common knowledge of names.
Before the formal details, let us first illustrate the ideas. As in predicate epistemic logic more generally, the formula \(\mathsf{K}_{a}Pb\) says that \(a\) knows _de dicto_ that \(b\) is \(P\), whereas \(\mathsf{K}_{a}Px\) says that \(a\) knows _de re_ of \(x\) that it is \(P\). The formula \([x:=b]Px\) says of \(b\) that it is \(P\), which is equivalent to \(Pb\), but combining operators we get \([x:=b]\mathsf{K}_{a}Px\), which says that \(a\) knows _de re_ of \(b\) that it is \(P\). More precisely, our semantics is based on first-order Kripke models with a constant domain of agents (not names) with formulas evaluated with respect to both a world \(w\) and a variable assignment function \(\sigma\). Formula \([x:=t]\varphi\) is then true iff \(\varphi\) is true at \(w\) when we change \(\sigma\) so that it assigns to \(x\) the agent named by \(t\) in \(w\), and \(\mathsf{K}_{t}\varphi\) is true iff \(\varphi\) is true at all worlds indistinguishable from \(w\) by the agent named \(t\) in \(w\). (This is in line with the _innermost-scope_ semantics of [7].)
Returning to Grove’s poor broken robot \(a\), the various readings of ‘\(a\) knows that \(b\) knows that \(a\) needs help’ can be expressed as follows:
1. \(\mathsf{K}_{a}\mathsf{K}_{b}H(a)\), \(a\) knows that the robot named ‘\(b\)’ knows that the robot named ‘\(a\)’ needs help,
2. \([x:=a]\mathsf{K}_{a}\mathsf{K}_{b}H(x)\), \(a\) knows that the robot named ‘\(b\)’ knows it (the broken robot \(a\)) needs help,
3. \([y:=b]\mathsf{K}_{a}\mathsf{K}_{y}H(a)\), \(a\) knows that it (the maintenance robot \(b\)) knows the robot named ‘\(a\)’ needs help,
4. \([x:=a][y:=b]\mathsf{K}_{a}\mathsf{K}_{y}H(x)\), \(a\) knows that it (the maintenance robot \(b\)) knows that it (the broken robot \(a\)) needs help.
Moreover, since names are non-rigid, we can express _\(a\) knowing who \(b\) is_ by \([x:=b]\mathsf{K}_{a}(x\approx b)\) which says that \(a\) identifies the right person with name \(b\) on all relevant possible worlds. This we abbreviate as \(\mathsf{K}_{a}b\).⁶ Thus we are able to express the following:
1. \(\neg\mathsf{K}_{a}a\): \(a\) does not know he is called \(a\) (c.f., “the most foolish person may not know that he is the most foolish person” in [13]).
2. \(b\approx c\land\mathsf{K}_{a}b\land\neg\mathsf{K}_{a}c\): \(a\) knows who \(b\) is but does not know who \(c\) is, although they are just two names of the same person.
3. \([x:=b][y:=a](\mathsf{K}_{c}M(x,y)\land\neg\mathsf{K}_{c}(a\approx x\wedge y \approx b))\): Charles knows who killed whom that night but does not know the names of the murderer and the victim.
4. \(\mathsf{K}_{d}M(b,a)\land\neg\mathsf{K}_{d}a\land\neg\mathsf{K}_{d}b\): Dave knows that a person named Bob murdered a person named Adam without knowing who they are.
The innocent look of our logical language belies some technical complexity. The main technical result is a complete axiomatisation of the logic over epistemic (S5) models (Sections 3 and 4), which requires substantial work to handle the constant domain without Barcan-like formulas. We conclude with discussions on the issues of decidability of our logic (Section 5).
## 2 Preliminaries
In this section we introduce formally the language and semantics of our logic. [Epistemic language with assignments] Given a denumerable set of names **N**, a denumerable set of variables **X**, and a denumerable set **P** of predicate symbols, the language \(\mathbf{ELAS}\) is defined as:
\[t::=x\mid a\]
\[\varphi::=(t\approx t)\mid P\overline{t}\mid\neg\varphi\mid(\varphi\wedge \varphi)\mid\mathsf{K}_{t}\varphi\mid[x:=t]\varphi\]
where \(a\in\textbf{N}\), \(P\in\textbf{P}\), and \(\overline{t}\) is a vector of terms of length equal to the arity of predicate \(P\). We write \(\widehat{\mathsf{K}}_{t}\varphi\) as the abbreviation of \(\neg\mathsf{K}_{t}\neg\varphi\) and write \(\langle x:=t\rangle\varphi\) as the abbreviation of \(\neg[x:=t]\neg\varphi\).⁷ We call the \([x:=t]\)-free fragment \(\mathbf{EL}\). We define the semantics of \(\mathbf{ELAS}\) over first-order Kripke models. A constant domain Kripke model \(\mathcal{M}\) for \(\mathbf{ELAS}\) is a tuple \(\langle W,I,R,\rho,\eta\rangle\) where:
* \(W\) is a non-empty set of possible worlds.
* \(I\) is a non-empty set of agents.
* \(R:I\to 2^{W\times W}\) assign a binary relation \(R(i)\) (also written \(R_{i}\)) between worlds, to each agent \(i\).
* \(\rho:\textbf{P}\times W\to\bigcup_{n\in\omega}2^{I^{n}}\) assigns an \(n\)-ary relation \(\rho(P,w)\) between agents to each \(n\)-ary predicate \(P\) at each world \(w\).
* \(\eta:\textbf{N}\times W\to I\) assigns an agent \(\eta(n,w)\) to each name \(n\) at each world \(w\).
We call \(\mathcal{M}\) an _epistemic model_ if \(R_{i}\) is an equivalence relation for each \(i\in I\).
Note that the interpretations of predicates and names are not required to be rigid, and there may be worlds in which an agent has no name or multiple names. To interpret free variables, we need a variable assignment \(\sigma:\textbf{X}\to I\). Formulas are interpreted on pointed models \(\mathcal{M},w\) with variable assignments \(\sigma\). Given an assignment \(\sigma\) and a world \(w\in W\), let \(\sigma_{w}(a)=\eta(a,w)\) and \(\sigma_{w}(x)=\sigma(x)\). So although names may not be rigid, variables are.
The truth conditions are given w.r.t. pointed Kripke models with assignments \(\mathcal{M},w,\sigma\).
\[\begin{array}[]{|rcl|}\hline\mathcal{M},w,\sigma\vDash t\approx t^{\prime}& \Leftrightarrow&\sigma_{w}(t)=\sigma_{w}(t^{\prime})\\ \mathcal{M},w,\sigma\vDash P(t_{1}\cdots t_{n})&\Leftrightarrow&(\sigma_{w}(t_ {1}),\cdots,\sigma_{w}(t_{n}))\in\rho(P,w)\\ \mathcal{M},w,\sigma\vDash\neg\varphi&\Leftrightarrow&\mathcal{M},w,\sigma \nvDash\varphi\\ \mathcal{M},w,\sigma\vDash(\varphi\land\psi)&\Leftrightarrow&\mathcal{M},w, \sigma\vDash\varphi\text{ and }\mathcal{M},w,\sigma\vDash\psi\\ \mathcal{M},w,\sigma\vDash\mathsf{K}_{t}\varphi&\Leftrightarrow&\mathcal{M},v, \sigma\vDash\varphi\text{ for all $v$ s.t.\ $wR_{\sigma_{w}(t)}v$}\\ \mathcal{M},w,\sigma\vDash[x:=t]\varphi&\Leftrightarrow&\mathcal{M},w,\sigma[x \mapsto\sigma_{w}(t)]\vDash\varphi\\ \hline\end{array}\]
An \(\mathbf{ELAS}\) formula is _valid_ (over epistemic models) if it holds on all the (epistemic) models with assignments \(\mathcal{M},s,\sigma\).
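The truth conditions above can be turned directly into a model checker for finite models. The following Python sketch is only an illustration: formulas are encoded as nested tuples, and the two-world model at the end is a made-up example (not one of the models discussed in this paper) chosen to exhibit a _de dicto_/_de re_ contrast with a non-rigid name \(a\):

```python
# A minimal model checker for ELAS over finite Kripke models with assignments,
# following the truth conditions above.

def val(term, w, sigma, eta):
    """sigma_w(t): names are looked up in eta at world w, variables in sigma."""
    return eta[(term, w)] if (term, w) in eta else sigma[term]

def holds(M, w, sigma, phi):
    W, I, R, rho, eta = M
    op = phi[0]
    if op == 'eq':                       # t ~ t'
        return val(phi[1], w, sigma, eta) == val(phi[2], w, sigma, eta)
    if op == 'P':                        # P(t1, ..., tn)
        args = tuple(val(t, w, sigma, eta) for t in phi[2])
        return args in rho[(phi[1], w)]
    if op == 'not':
        return not holds(M, w, sigma, phi[1])
    if op == 'and':
        return holds(M, w, sigma, phi[1]) and holds(M, w, sigma, phi[2])
    if op == 'K':                        # K_t phi
        agent = val(phi[1], w, sigma, eta)
        return all(holds(M, v, sigma, phi[2]) for v in W if (w, v) in R[agent])
    if op == 'assign':                   # [x := t] phi
        new_sigma = dict(sigma)
        new_sigma[phi[1]] = val(phi[2], w, sigma, eta)
        return holds(M, w, new_sigma, phi[3])
    raise ValueError(f"unknown connective {op}")

# Hypothetical epistemic model: two worlds, two agents i and j, one unary
# predicate P, and a non-rigid name 'a'.
W = {'s1', 's2'}
I = {'i', 'j'}
R = {'i': {('s1', 's1'), ('s2', 's2'), ('s1', 's2'), ('s2', 's1')},  # i cannot tell s1 and s2 apart
     'j': {('s1', 's1'), ('s2', 's2')}}                               # j can
rho = {('P', 's1'): {('i',)}, ('P', 's2'): {('i',)}}
eta = {('a', 's1'): 'j', ('a', 's2'): 'i'}                            # name 'a' is non-rigid
M = (W, I, R, rho, eta)
sigma = {'x': 'i'}

# De dicto K_a P(a) vs. de re [x := a] K_a P(x), evaluated at world s2:
print(holds(M, 's2', sigma, ('K', 'a', ('P', 'P', ('a',)))))                        # False
print(holds(M, 's2', sigma, ('assign', 'x', 'a', ('K', 'a', ('P', 'P', ('x',))))))  # True
```

At \(s_{2}\) the name \(a\) denotes agent \(i\), but at the epistemically indistinguishable world \(s_{1}\) it denotes \(j\), who is not \(P\) there; hence the _de dicto_ reading \(\mathsf{K}_{a}Pa\) fails while the _de re_ reading \([x:=a]\mathsf{K}_{a}Px\) holds.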
We can translate \(\mathbf{ELAS}\) into the corresponding (2-sorted) first-order language with not only the equality symbol but also a ternary relation symbol \(R\) for the accessibility relation, a function symbol \(f^{a}\) for each name \(a\), and an \(n+1\)-ary relation symbol \(Q^{P}\) for each predicate symbol \(P\). The non-trivial clauses are for \(\mathsf{K}_{t}\) and \([x:=t]\varphi\) based on translation for terms:
\[\begin{array}[]{l}\textsf{Tr}_{w}(x)=x\qquad\textsf{Tr}_{w}(a)=f^{a}(w)\\ \textsf{Tr}_{w}(t\approx t^{\prime})=\textsf{Tr}_{w}(t)\approx\textsf{Tr}_{w}( t^{\prime})\qquad\textsf{Tr}_{w}(P\overline{t})=Q^{P}(w,\textsf{Tr}_{w}( \overline{t}))\\ \textsf{Tr}_{w}(\neg\psi)=\neg\textsf{Tr}_{w}(\psi)\qquad\textsf{Tr}_{w}( \varphi\land\psi)=\textsf{Tr}_{w}(\varphi)\land\textsf{Tr}_{w}(\psi).\\ \textsf{Tr}_{w}(\mathsf{K}_{t}\psi)=\forall v(R(w,v,\textsf{Tr}_{w}(t))\to \textsf{Tr}_{v}(\psi))\\ \textsf{Tr}_{w}([x:=t]\psi)=\left\{\begin{array}[]{l@{ \text{ if } }l}\exists x (x\approx\textsf{Tr}_{w}(t)\land\textsf{Tr}_{w}(\psi))\hfil\text{ if }&t\not=x \\ \textsf{Tr}_{w}(\psi)\hfil\text{ if }&t=x\end{array}\right.\end{array}\]
Note that when \(t\not=x\) we can also (equivalently) define \(\textsf{Tr}_{w}([x:=t]\psi)=\forall x(x\approx\textsf{Tr}_{w}(t)\to\textsf{Tr} _{w}(\psi))\), since there is one and only one value of \(\textsf{Tr}_{w}(t)\). When \(x=t\), then \([x:=x]\psi\) is equivalent to \(\psi\) according to the semantics.
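For example, a routine application of these clauses to the _de re_ formula \([x:=b]\mathsf{K}_{a}Px\) from the introduction gives
\[\textsf{Tr}_{w}([x:=b]\mathsf{K}_{a}Px)=\exists x\big(x\approx f^{b}(w)\land\forall v(R(w,v,f^{a}(w))\to Q^{P}(v,x))\big),\]
which makes explicit that the name \(b\) is evaluated at the actual world \(w\), whereas \(P\) is evaluated at the worlds \(v\) accessible to the agent actually named \(a\).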
In the light of this translation, we can define the free and bound occurrences of a variable in an \(\mathbf{ELAS}\)-formula by viewing \([x:=t]\) in \([x:=t]\varphi\) as a quantifier for \(x\) binding \(\varphi\). Note that the \(t\) in \([x:=t]\) is not bound in \([x:=t]\varphi\), even when \(t=x\). The set of free variables \(\textsf{Fv}(\varphi)\) in \(\varphi\) is defined as follows (where \(\textsf{Var}(\overline{t})\) is the set of variables in the terms \(\overline{t}\)):
\begin{tabular}{l l}
\(\textsf{Fv}(P\overline{t})=\textsf{Var}(\overline{t})\) & \\
\(\textsf{Fv}(\neg\varphi)=\textsf{Fv}(\varphi)\) & \(\textsf{Fv}(\varphi\land\psi)=\textsf{Fv}(\varphi)\cup\textsf{Fv}(\psi)\) \\
\(\textsf{Fv}(\mathsf{K}_{t}\varphi)=\textsf{Var}(t)\cup\textsf{Fv}(\varphi)\) & \(\textsf{Fv}([x:=t]\varphi)=(\textsf{Fv}(\varphi)\setminus\{x\})\cup\textsf{Var }(t)\) \\
\end{tabular}
We use \(\varphi[y/x]\) to denote the result of substituting \(y\) for all the free occurrences of \(x\) in \(\varphi\). We say \(\varphi[y/x]\) is _admissible_ if all the occurrences of \(y\) that replace free occurrences of \(x\) in \(\varphi\) are themselves free.
We first show that \(\mathbf{ELAS}\) is indeed more expressive than \(\mathbf{EL}\). The assignment operator \([x:=t]\) cannot be eliminated over (epistemic) models with variable assignments. Consider the following two (epistemic) models (reflexive arrows omitted) with a fixed domain \(I=\{i,j\}\), worlds \(W=\{s_{1},s_{2}\}\) and a fixed assignment \(\sigma(x)=i\) for all \(x\in\textbf{X}\):
\([x:=a]\widehat{\mathsf{K}}_{a}Px\) can distinguish \(\mathcal{M}_{1},s_{1}\) and \(\mathcal{M}_{2},s_{1}\) given \(\sigma\). But the only atomic formulas other than identities are \(Px\), \(Pa\) and \(a\approx x\), which are all false at \(s_{1}\) and all true at \(s_{2}\) in both models. Also note that \(\mathsf{K}_{a}\) and \(\mathsf{K}_{x}\) have exactly the same interpretation on the corresponding worlds in the two models. Based on these observations, a simple inductive proof on the structure of formulas would show that \(\mathbf{EL}\) cannot distinguish the two models given \(\sigma\).
Interested readers may also wonder whether we can eliminate \([x:=t]\) in each \(\mathbf{ELAS}\) formula to obtain an \(\mathbf{EL}\) formula that is _equally satisfiable_. However, the naive idea of translating \([x:=t]\varphi\) into \(z\approx t\land\varphi[z/x]\) with fresh \(z\) will not work in formulas like \(\mathsf{K}_{a}[x:=c]\widehat{\mathsf{K}}_{b}x\not\approx c\) since the name \(c\) is not rigid.
To better understand the semantics, the reader is invited to examine the following valid and invalid formulas over epistemic models:
\begin{tabular}{|l r l|}
\hline
1 & valid & \(x\approx y\to\mathsf{K}_{t}x\approx y\), \(x\not\approx y\to\mathsf{K}_{t}x\not\approx y\) \\
& invalid & \(x\approx a\to\mathsf{K}_{t}x\approx a\), \(x\not\approx a\to\mathsf{K}_{t}x\not\approx a\), \(a\approx b\to\mathsf{K}_{t}a\approx b\) \\
2 & valid & \(\mathsf{K}_{x}\varphi\to\mathsf{K}_{x}\mathsf{K}_{x}\varphi\), \(\neg\mathsf{K}_{x}\varphi\to\mathsf{K}_{x}\neg\mathsf{K}_{x}\varphi\), \(\mathsf{K}_{t}\varphi\to\varphi\). \\
& invalid & \(\mathsf{K}_{t}\varphi\to\mathsf{K}_{t}\mathsf{K}_{t}\varphi\), \(\neg\mathsf{K}_{t}\varphi\to\mathsf{K}_{t}\neg\mathsf{K}_{t}\varphi\) \\
3 & valid & \([x:=y]\varphi\to\varphi[y/x]\) (\(\varphi[y/x]\) is admissible) \\
& invalid & \([x:=a]\mathsf{K}_{t}Px\to\mathsf{K}_{t}Pa\) \\
4 & valid & \(x\approx a\to(\mathsf{K}_{x}\varphi\to\mathsf{K}_{a}\varphi)\), \(a\approx b\to(Pa\to Pb)\) \\
& invalid & \(x\approx a\to(\mathsf{K}_{b}Px\to\mathsf{K}_{a}Pa)\), \(a\approx b\to(\mathsf{K}_{c}Pa\to\mathsf{K}_{c}Pb)\) \\
5 & valid & \([x:=y]\mathsf{K}_{a}\varphi\to\mathsf{K}_{a}[x:=y]\varphi\) \\
& invalid & \([x:=b]\mathsf{K}_{a}Px\to\mathsf{K}_{a}[x:=b]Px\) \\ \hline
\end{tabular}
Here are some brief explanations:
* It shows the distinction between (rigid) variables and (non-rigid) names. The invalid formula shows that although two names co-refer, you may not know it (recall Frege’s puzzle).
* Axioms 4 and 5 do not work for names in general, since \(a\) may not know that he is named ‘\(a\)’. On the other hand, positive and negative introspection hold when the index is a variable. The \(\mathsf{T}\) axiom works in general.
* It also demonstrates the non-rigidity of names. \([x:=a]\mathsf{K}_{b}Px\) does not imply \(\mathsf{K}_{b}Pa\) since \(b\) may consider a world possible where \(Pa\) does not hold since \(a\) on that world does not refer to the actual person named by \(a\) in the real world.
* This shows that it is fine to do the first-level substitutions for the equal names but not in the scope of other modalities.
* The last pair also demonstrates the distinction between rigid variables and non-rigid names. In particular, the analog of _Barcan formula_\([x:=t]\mathsf{K}_{s}\varphi\to\mathsf{K}_{s}[x:=t]\varphi\) is not in general valid, if \(t\) is a name.
## 3 Axiomatisation
In this section we give a complete axiomatisation of valid \(\mathbf{ELAS}\)-formulas over epistemic models. The axioms and rules can be categorised into several classes:
* For normal propositional modal logic: \(\mathtt{TAUT},\mathtt{DISTK},\mathtt{MP},\mathtt{NECK}\);
* Axiom for epistemic conditions: \(\mathtt{Tx},\mathtt{4x},\mathtt{5x}\);
* Axioms for equality and first-level substitutability: \(\mathtt{ID},\)\(\texttt{SUBP},\)\(\texttt{SUBK},\)SUBAS;
* Axioms capturing rigidity of variables: RIGIDP and RIGIDN;
* Properties of assignment operator: KAS (normality), DETAS (determinacy), DAS (executability), and EFAS (the effect of the assignment).
* Quantifications: SUB2AS and \(\mathtt{NECAS}\), as in the usual first-order setting (viewing assignments as quantifiers).
\begin{tabular}{l l}
\multicolumn{2}{c}{System \(\mathsf{SELAS}\)} \\
**Axioms** & \\
\begin{tabular}{l l} \(\mathtt{TAUT}\) & Propositional tautologies \\ \(\mathtt{DISTK}\) & \(\mathsf{K}_{t}(\varphi\to\psi)\to(\mathsf{K}_{t}\varphi\to\mathsf{K}_{t}\psi)\) \\ \(\mathtt{Tx}\) & \(\mathsf{K}_{x}\varphi\to\varphi\) \\ \(\mathtt{4x}\) & \(\mathsf{K}_{x}\varphi\to\mathsf{K}_{x}\mathsf{K}_{x}\varphi\) \\ \(\mathtt{5x}\) & \(\neg\mathsf{K}_{x}\varphi\to\mathsf{K}_{x}\neg\mathsf{K}_{x}\varphi\) \\ \(\mathtt{ID}\) & \(t\approx t\) \\ SUBP & \(\overline{t}\approx\overline{t^{\prime}}\to(P\overline{t}\leftrightarrow P \overline{t^{\prime}})\) \\ & (\(P\) can be \(\approx\)) \\ SUBK & \(t\approx t^{\prime}\to(\mathsf{K}_{t}\varphi\leftrightarrow\mathsf{K}_{t^{ \prime}}\varphi)\) \\ \end{tabular} & \begin{tabular}{l l} SUBAS & \(t\approx t^{\prime}\to\) \\ & \(([x:=t]\varphi\leftrightarrow[x:=t^{\prime}]\varphi)\) \\ RIGIDP & \(x\approx y\to\mathsf{K}_{t}x\approx y\) \\ RIGIDN & \(x\not\approx y\to\mathsf{K}_{t}x\not\approx y\) \\ KAS & \([x:=t](\varphi\to\psi)\to\) \\ & \(([x:=t]\varphi\to[x:=t]\psi)\) \\ DETAS & \(\langle x:=t\rangle\varphi\to[x:=t]\varphi\) \\ DAS & \(\langle x:=t\rangle\top\) \\ EFAS & \([x:=t]x\approx t\) \\ SUB2AS & \(\varphi[y/x]\to[x:=y]\varphi\) \\ & (\(\varphi[y/x]\) is admissible) \\ \end{tabular} \\
\end{tabular}
\begin{tabular}{l c l c l c}
**Rules:** & & & & & \\
\(\mathtt{MP}\) & \(\dfrac{\varphi,\varphi\to\psi}{\psi}\) & \(\mathtt{NECK}\) & \(\dfrac{\vdash\varphi}{\vdash\mathsf{K}_{t}\varphi}\) & \(\mathtt{NECAS}\) & \(\dfrac{\vdash\varphi\to\psi}{\vdash\varphi\to[x:=t]\psi}\quad(x\not\in\textsf{ Fv}(\varphi))\) \\
& & & & & \\
\end{tabular}
where \(\overline{t}\approx\overline{t^{\prime}}\) means point-wise equivalence for sequences of terms \(\overline{t}\) and \(\overline{t^{\prime}}\) such that \(|\overline{t}|=|\overline{t^{\prime}}|\). It is straightforward to verify the soundness of the system. [Soundness] \(\mathsf{SELAS}\) is sound over epistemic models with assignments. The following are derivable in the above proof system (where \(\varphi[y/x]\) is admissible below in \(\mathtt{SUBASEQ}\)) :
\begin{tabular}{l l l l}
\(\mathtt{SYM}\) & \(t\approx t^{\prime}\to t^{\prime}\approx t\) & \(\mathtt{TRANS}\) & \(t\approx t^{\prime}\wedge t^{\prime}\approx t^{\prime\prime}\to t\approx t^{ \prime\prime}\) \\
\(\mathtt{DBASEQ}\) & \(\langle x:=t\rangle\varphi\leftrightarrow[x:=t]\varphi\) & \(\mathtt{SUBASEQ}\) & \(\varphi[y/x]\leftrightarrow[x:=y]\varphi\) \\
\(\mathtt{EAS}\) & \([x:=t]\varphi\leftrightarrow\varphi\) (\(x\not\in\textsf{Fv}(\varphi)\)) & \(\mathtt{T}\) & \(\mathsf{K}_{t}\varphi\to\varphi\) \\
\(\mathtt{CNECAS}\) & \(\dfrac{\vdash\varphi\to\psi}{\vdash[x:=t]\varphi\to\psi}\quad(x\not\in\textsf{ Fv}(\psi))\) & \(\mathtt{NECAS}\)’ & \(\dfrac{\vdash\varphi}{\vdash[x:=t]\varphi}\) \\
\(\mathsf{EX}\) & \([x:=x]\varphi\leftrightarrow\varphi\) & & \\
\end{tabular}
(Sketch) \(\mathtt{SYM}\) and \(\mathtt{TRANS}\) are trivial based on \(\mathtt{ID}\) and SUBP. \(\mathtt{DBASEQ}\) is based on DETAS and DAS. \(\mathtt{SUBASEQ}\) is due to the contrapositive of SUB2AS and \(\mathtt{DBASEQ}\). \(\mathtt{CNECAS}\) is due to \(\mathtt{NECAS}\) and \(\mathtt{DBASEQ}\), taking the contrapositive. \(\mathtt{EAS}\) is based on \(\mathtt{NECAS}\) and \(\mathtt{CNECAS}\) (taking \(\psi=\varphi\)). \(\mathsf{EX}\) is a special case of SUB2AS and \(\mathtt{NECAS}^{\prime}\) is a special case of \(\mathtt{NECAS}\). As a more detailed example, let us look at the proof (sketch) of \(\mathtt{T}\) (we omit the routine steps using the normality of \([x:=t]\)):
\[\begin{array}[]{lll}(1)&\vdash\mathsf{K}_{t}\varphi\to(z\approx t\to\mathsf{K} _{z}\varphi)&\text{($\texttt{SUBK}$, $z$ is fresh)}\\ (2)&\vdash\mathsf{K}_{t}\varphi\to[z:=t](z\approx t\to\mathsf{K}_{z}\varphi)& \text{($\mathtt{NECAS}(1)$)}\\ (3)&\vdash\mathsf{K}_{t}\varphi\to[z:=t](\mathsf{K}_{z}\varphi)&\text{($ \texttt{EFAS}(2)$)}\\ (4)&\vdash[z:=t](\mathsf{K}_{z}\varphi\to\varphi)&\text{($\mathtt{NECAS}^{ \prime},\mathtt{Tx}$)}\\ (5)&\vdash\mathsf{K}_{t}\varphi\to[z:=t]\varphi&\text{(normality of [z:=t] and $\mathtt{MP}$)}\\ (6)&\vdash\mathsf{K}_{t}\varphi\to\varphi&\text{($\mathtt{EAS},\mathtt{MP}$)} \end{array}\]
Based on the above result, we can reletter the bound variables in any \(\mathbf{ELAS}\) formula, as in first-order logic. [Relettering] Let \(z\) be a fresh variable not occurring in \(\varphi\) or \(t\); then
\[[x:=t]\varphi\leftrightarrow[z:=t]\varphi[z/x]\]
Since \(z\) is fresh, \(\varphi[z/x]\) is admissible. We have the following proof (sketch):
\[\begin{array}[]{lll}(1)&\vdash\varphi[z/x]\leftrightarrow[x:=z]\varphi&\text{( $\mathtt{SUBASEQ}$)}\\ (2)&\vdash[z:=t]\varphi[z/x]\leftrightarrow[z:=t][x:=z]\varphi&\text{( normality of $[z:=t]$)}\\ (3)&\vdash[z:=t]\varphi[z/x]\leftrightarrow[z:=t](z\approx t\land[x:=z]\varphi )&\text{($\texttt{EFAS}$)}\\ (4)&\vdash[z:=t]\varphi[z/x]\leftrightarrow[z:=t](z\approx t\land[x:=t]\varphi )&\text{($\texttt{SUBAS}$)}\\ (5)&\vdash[z:=t]\varphi[z/x]\leftrightarrow[z:=t][x:=t]\varphi&\text{($\texttt {EFAS}$)}\\ (6)&\vdash[z:=t]\varphi[z/x]\leftrightarrow[x:=t]\varphi&\text{($\mathtt{EAS}$ )}\\ \end{array}\]
## 4 Completeness
To prove the completeness, besides the treatments of \([x:=t]\) and the termed modality \(\mathsf{K}_{t}\), the major difficulty is the lack of Barcan-like formulas in \(\mathbf{ELAS}\), which are often used to capture the condition of the constant domain. As in standard first-order logic, we need to provide witnesses for each name, and the Barcan formula can make sure we can always find one when building a successor of some maximal consistent set with enough witnesses. On the other hand, we can build an increasing domain pseudo model without such a formula using the techniques in [12]. Inspired by the techniques in [3], to obtain a constant domain model, when building the successors in the increasing domain pseudo model, we only create a new witness if none of the old ones is available, and we make sure by formulas in the maximal consistent sets that the new one is not equal to any old ones (throughout the whole model). In this way, there will not be any conflicts between the witnesses when we collect all of them together. We may then create a constant domain by considering the equivalence classes of all the witnesses occurring in the pseudo model with an increasing domain.
Here is the general proof strategy:
* Extend the language with countably many new variables.
* Build a pseudo canonical frame using maximal consistent sets for various sublanguages of the extended language, with witnesses for the names.
* Given a maximal consistent set, cut out its generated subframe from the pseudo frame, and build a constant-domain canonical model, by taking certain equivalence classes of variables as the domain.
* Show that the truth lemma holds for the canonical model.
* Take the reflexive, symmetric, and transitive closure of the relations in the pseudo model and show that the truth of formulas in the original language is preserved.
* Extend each consistent set of the original language to a maximal consistent set with witnesses.
We first extend the language \(\mathbf{ELAS}\) with countably infinitely many new variables, and call the new language \(\mathbf{ELAS}^{+}\) with the variable set \(\textbf{X}^{+}\). We say a language \(L\) is an _infinitely proper sublanguage_ of another language \(L^{\prime}\) if:
* \(L\) and \(L^{\prime}\) only differ in their sets of variables,
* \(L\subseteq L^{\prime}\),
* there are infinitely many new variables in \(L^{\prime}\) that are not in \(L\).
We use maximal consistent sets w.r.t. different infinitely proper sublanguages of \(\mathbf{ELAS}^{+}\) that are extensions of \(\mathbf{ELAS}\) to build a pseudo canonical frame.
[Pseudo canonical frame] The pseudo canonical frame \(\mathcal{F}^{c}=\langle W,R\rangle\) is defined as follows:
* \(W\) is the set of MCS \(\Delta\) w.r.t. some infinitely proper sublanguages \(L_{\Delta}\) of \(\mathbf{ELAS}^{+}\) such that for each \(\Delta\in W\):
  * \(\mathbf{ELAS}\subseteq L_{\Delta}\),
  * for each \(a\in\textbf{N}\) there is a variable \(x\) in \(L_{\Delta}\) (notation: \(x\in\textsf{Var}(\Delta)\)) such that \(x\approx a\in\Delta\) (call it the \(\exists\)-property).
* For each \(x\in\textbf{X}^{+}\), \(\Delta R_{x}\Theta\) iff the following three conditions hold:
  1. \(x\) in \(\textsf{Var}(\Delta)\), the set of variables in \(L_{\Delta}\);
  2. \(\{\varphi\mid\mathsf{K}_{x}\varphi\in\Delta\}\subseteq\Theta\);
  3. if \(y\in\textsf{Var}(\Theta)\setminus\textsf{Var}(\Delta)\) then \(y\not\approx z\in\Theta\) for all \(z\in\textsf{Var}(\Theta)\) such that \(z\not=y\).
Observation: The last condition for \(R_{x}\) makes sure that every new variable in the successor is distinguished from all other variables by inequalities. It is also easy to see that if \(t\in L_{\Delta}\) then there is \(x\in\textsf{Var}(\Delta)\) such that \(x\approx t\in\Delta\), by the \(\exists\)-property and \(\mathtt{ID}\).
If \(\Delta R_{x}\Theta\) in \(\mathcal{F}^{c}\), then:
* \(L_{\Delta}\) is a sublanguage of \(L_{\Theta}\)
* for any \(y\not=z\in\textsf{Var}(\mathbf{ELAS}^{+})\): \(y\approx z\in\Delta\) iff \(y\approx z\in\Theta\).
For the first: for all \(y\in\textsf{Var}(\Delta)\), \(y\approx y\in\Delta\), therefore \(\mathsf{K}_{x}(y\approx y)\in\Delta\) by RIGIDP, thus \(y\approx y\in\Theta\) and hence \(y\in\textsf{Var}(\Theta)\).
For the second:
* Suppose \(y,z\in\textsf{Var}(\Delta)\) * If \(y\approx z\in\Delta\), then \(\mathsf{K}_{x}y\approx z\in\Delta\) by RIGIDP, thus \(y\approx z\in\Theta.\) * If \(y\approx z\not\in\Delta\) then \(y\not\approx z\in\Delta\) since \(\Delta\) is an \(L_{\Delta}\)-MCS and \(y,z\in\textsf{Var}(\Delta)\). Then \(\mathsf{K}_{x}y\not\approx z\in\Delta\) by RIGIDN, thus \(y\approx z\not\in\Theta\).
* Suppose w.l.o.g. \(y\not\in\textsf{Var}(\Delta)\), thus \(y\approx z\not\in\Delta\). * If \(y\not\in\textsf{Var}(\Theta)\) or \(z\not\in\textsf{Var}(\Theta)\) then \(y\approx z\in\Delta\) iff \(y\approx z\in\Theta\) trivially holds. * If \(y\in\textsf{Var}(\Theta)\) and \(z\in\textsf{Var}(\Theta)\) then \(y\not\approx z\in\Theta\) due to the third condition of \(R_{x}\). Therefore \(y\approx z\not\in\Theta\) since \(\Theta\) is consistent. Thus \(y\approx z\in\Delta\) iff \(y\approx z\in\Theta.\)
The second part of the above proposition makes sure that we do not have conflicting equalities in different states which are accessible from one to another.
[Existence lemma] If \(\Delta\in W\) and \(\widehat{\mathsf{K}}_{t}\varphi\in\Delta\) then there is a \(\Theta\in W\) and an \(x\in\textsf{Var}(L_{\Delta})\) such that \(\varphi\in\Theta\), \(x\approx t\in\Delta\), and \(\Delta R_{x}\Theta\). If \(\widehat{\mathsf{K}}_{t}\varphi\in\Delta\) then there is \(x\approx t\in\Delta\) for some \(x\), due to the fact that \(\Delta\) has the \(\exists\)-property. Let \(\Theta^{--}=\{\psi\mid\mathsf{K}_{x}\psi\in\Delta\}\cup\{\varphi\}\). We first show that \(\Theta^{--}\) is consistent by \(\mathtt{DISTK}\) and \(\mathtt{NECK}\) (routine). Next we show that it can be extended to a state in \(W\). We can select an infinitely proper sublanguage \(L\) of \(\mathbf{ELAS}^{+}\) such that \(L_{\Delta}\) is an infinitely proper sublanguage of \(L\). We list the new variables that are in \(L\) but not in \(L_{\Delta}\) as \(y_{0},y_{1},y_{2},\dots\). We also list the names in **N** as \(a_{0},a_{1},\dots\). In the following, we add witnesses for the names by building the sets \(\Theta_{k}\) as follows:
* \(\Theta_{0}=\Theta^{--}\)
* \(\Theta_{k+1}\) is defined by cases: (1) \(\Theta_{k+1}=\Theta_{k}\) if \(x\approx a_{k}\) is in \(\Theta_{k}\) for some \(x\in\textsf{Var}(\Delta)\); (2) \(\Theta_{k+1}=\Theta_{k}\cup\{x_{i}\approx a_{k}\}\) if (1) does not hold but \(\{x\approx a_{k}\}\cup\Theta_{k}\) is consistent for some \(x\in\textsf{Var}(\Delta)\), where \(x_{i}\) is the first such \(x\) according to a fixed enumeration of \(\textsf{Var}(\Delta)\); (3) \(\Theta_{k+1}=\Theta_{k}\cup\{y_{j}\approx a_{k}\}\cup\{y_{j}\not\approx z\mid z\in\textsf{Var}(\Theta_{k})\}\) if neither (1) nor (2) holds, where \(y_{j}\) is the first variable in the enumeration of the new variables that does not occur in \(\Theta_{k}\).
We can show that \(\Theta_{k}\) is always consistent. Since \(\Theta_{0}\) is consistent, we just need to show that if \(\Theta_{k}\) is consistent and \((1),(2)\) do not hold, then \(\Theta_{k+1}\) is consistent too. Suppose for contradiction that \(\Theta_{k}\cup\{y_{j}\approx a_{k}\}\cup\{y_{j}\not\approx z\mid z\in\textsf{Var}(\Theta_{k})\}\) is not consistent. Then there are formulas \(\psi_{1}\dots\psi_{n}\in\Theta_{k}\) and \(z_{i_{1}}\dots z_{i_{m}}\in\textsf{Var}(\Theta_{k})\) such that:
\[\vdash\psi_{1}\land\dots\land\psi_{n}\wedge y_{j}\approx a_{k}\to\bigvee_{i\in \{i_{1},\dots,i_{m}\}}y_{j}\approx z_{i}\quad(\star)\]
First note that since \(y_{j}\) is not in \(\Theta_{k}\), \(\Theta_{k}\cup\{y_{j}\approx a_{k}\}\) is consistent, for otherwise there are \(\psi_{1}\dots\psi_{n}\in\Theta_{k}\) such that \(\vdash\bigwedge_{i\leq n}\psi_{i}\to y_{j}\not\approx a_{k}\), then by \(\mathtt{NECAS}\), we have \(\vdash\bigwedge_{i\leq n}\psi_{i}\to[y_{j}:=a_{k}]y_{j}\not\approx a_{k}\), thus by EFAS we have \(\vdash\bigwedge_{i\leq n}\psi_{i}\to[y_{j}:=a_{k}](a_{k}\approx y_{j}\wedge a_{k}\not\approx y_{j})\), contradicting the consistency of \(\Theta_{k}\) (by DETAS). By \((\star)\), \(\Theta_{k}\cup\{y_{j}\approx a_{k}\}\) is consistent with one of \(y_{j}\approx z_{i}\) for some \(z_{i}\) in \(\textsf{Var}(\Theta_{k})\). Thus \(\Theta_{k}\cup\{z_{i}\approx a_{k}\}\) is also consistent, which contradicts the assumption that conditions (1) and (2) do not hold.
Then we define \(\Theta^{-}\) to be the union of all \(\Theta_{k}\). Clearly, \(\Theta^{-}\) has the \(\exists\)-property. We build the language \(L^{\prime}\) based on \(\textsf{Var}(\Theta^{-})\). Note that \(L^{\prime}\) is still an infinitely proper sublanguage of \(\mathbf{ELAS}^{+}\).
Finally, we extend \(\Theta^{-}\) into an MCS \(\Theta\) w.r.t. \(L^{\prime}\), and it is not hard to show \(\Delta R_{x}\Theta\) by verifying the third condition: whenever we introduce a new variable in the construction of \(\Theta\), we make sure it is distinguished from all the previous ones.
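Read procedurally, the construction above is a loop over the names \(a_{0},a_{1},\dots\) that reuses an old witness whenever consistency allows and otherwise introduces a fresh variable together with the inequalities that keep it distinct. The following Python sketch is only an illustration of this loop for a finite fragment; the formula encoding, the helper names, and the consistency oracle `is_consistent` are hypothetical and not part of the paper.

```python
def add_witnesses(theta0, names, old_vars, fresh_vars, is_consistent):
    """Procedural sketch of the Theta_k construction (finite fragment only).

    theta0        -- starting set of formulas (Theta^{--})
    names         -- a finite list standing in for a_0, a_1, ...
    old_vars      -- a fixed enumeration (list) of Var(Delta)
    fresh_vars    -- an iterator of genuinely new variables y_0, y_1, ...
    is_consistent -- assumed oracle deciding consistency of a set of formulas
    Formulas are encoded as tuples, e.g. ('eq', x, a) stands for x ~ a.
    """
    theta = set(theta0)
    used = set()                          # fresh witnesses introduced so far
    for a in names:
        # case (1): some old variable is already a witness for a
        if any(('eq', x, a) in theta for x in old_vars):
            continue
        # case (2): reuse the first old variable that keeps theta consistent
        for x in old_vars:
            if is_consistent(theta | {('eq', x, a)}):
                theta.add(('eq', x, a))
                break
        else:
            # case (3): take a fresh witness and record the inequalities
            # keeping it distinct from every variable seen so far
            y = next(fresh_vars)
            theta.add(('eq', y, a))
            theta |= {('neq', y, z) for z in set(old_vars) | used}
            used.add(y)
    return theta
```

The consistency argument around \((\star)\) is exactly what guarantees that case (3) never makes the set inconsistent.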
Given a state \(\Gamma\) in \(\mathcal{F}^{c}\), we can define an equivalence relation \(\sim_{\Gamma}\): \(x\sim_{\Gamma}y\) iff \(x\approx y\in\Gamma\) or \(x=y\) (note that \(x\approx x\) is _not_ in \(\Gamma\) if \(x\not\in L_{\Gamma}\)). Due to \(\mathtt{ID},\mathtt{SYM},\mathtt{TRANS}\), \(\sim_{\Gamma}\) is indeed an equivalence relation. When \(\Gamma\) is fixed, we write \(|x|\) for the equivalence class of \(x\) w.r.t. \(\sim_{\Gamma}\). By definition, for all \(x\not\in\textsf{Var}(\Gamma)\), \(|x|\) is a singleton.
Now we are ready to build the canonical model.
[Canonical model] Given a \(\Gamma\) in \(\mathcal{F}^{c}\), we define the canonical model \(\mathcal{M}_{\Gamma}=\langle W_{\Gamma},I^{c},R^{c},\rho^{c},\eta^{c}\rangle\) based on the pseudo canonical frame \(\langle W,R\rangle\):
* \(W_{\Gamma}\) is the subset of \(W\) generated from \(\Gamma\) w.r.t. the relations \(R_{x}\).
* \(I^{c}=\{|x|\mid x\in\textsf{Var}(W_{\Gamma})\}\) where \(\textsf{Var}(W_{\Gamma})\) is the set of all the variables appearing in \(W_{\Gamma}\).
* \(\Delta R^{c}_{|x|}\Theta\) iff \(\Delta R_{x}\Theta\), for any \(\Delta,\Theta\in W_{\Gamma}\).
* \(\eta^{c}(a,\Delta)=|x|\) iff \(a\approx x\in\Delta\).
* \(\rho^{c}(P,\Delta)=\{\overline{|x|}\mid P\overline{x}\in\Delta\}.\)
Here is a handy observation. If \(y\in\textsf{Var}(\Delta)\setminus\textsf{Var}(\Gamma)\) then \(y\not\approx z\in\Delta\) for all \(z\in\textsf{Var}(\Delta)\) such that \(z\not=y\). This follows from condition 3 of the relation \(R_{x}\) in \(\mathcal{F}^{c}\), Proposition 4, and the fact that \(W_{\Gamma}\) is generated from \(\Gamma\).
The canonical model is well-defined.
* For \(R_{|x|}\): We show that the choice of the representative in \(|x|\) does not change the definition. Suppose \(x\sim_{\Gamma}y\) then either \(x=y\) or \(x\approx y\in\Gamma\). In the first case, \(\Delta R_{x}\Theta\) iff \(\Delta R_{y}\Theta\). In the second case, suppose \(\Delta R_{x}\Theta\). We show that the three conditions for \(\Delta R_{y}\Theta\) hold. For condition 1, \(y\in\textsf{Var}(\Delta)\) since \(y\in\textsf{Var}(\Gamma)\) and \(\Delta\) is generated from \(\Gamma\) by \(R\). For condition 2, we just need to note that \(\vdash y\approx x\to(\mathsf{K}_{x}\varphi\leftrightarrow\mathsf{K}_{y}\varphi)\) by SUBK. And condition 3 is given directly by condition 3 for \(\Delta R_{x}\Theta\).
* For \(\eta(a,\Delta)\): We first show that the choice of the representative in \(|x|\) does not change the definition by \(\vdash x\approx y\to(a\approx x\leftrightarrow a\approx y)\) (\(\mathtt{TRANS}\)). Then we need to show that \(\eta^{c}(a,\Delta)\) is unique. Note that due to the \(\exists\)-property, there is always some \(x\) such that \(x\approx a\) in \(\Delta\). Suppose towards contradiction that \(a\approx x\in\Delta\), \(a\approx y\in\Delta\) and \(x\not\sim_{\Gamma}y\) then clearly \(x,y\) cannot be both in \(\textsf{Var}(\Gamma)\) for otherwise \(x\not\approx y\in\Delta\). Suppose w.l.o.g. \(x\) is not in \(\textsf{Var}(\Gamma)\) then we should have \(x\not\approx y\in\Delta\) due to Proposition 4, contradicting the assumption that \(\Delta\) is consistent.
\(R_{|x|}\) is transitive. Suppose \(\Delta R_{|x|}\Theta\) and \(\Theta R_{|x|}\Lambda\); then in \(\mathcal{F}^{c}\) we have \(\Delta R_{x}\Theta\) and \(\Theta R_{x}\Lambda\) (note that the representative of \(|x|\) does not really matter since \(R_{|x|}\) is well-defined). We have to show the three conditions for \(\Delta R_{x}\Lambda\). For condition 1, \(x\in\textsf{Var}(\Delta)\) since \(\Delta R_{x}\Theta\). For condition 2, by Axiom \(\mathtt{4x}\), for any \(\varphi\) such that \(\mathsf{K}_{x}\varphi\in\Delta\) we have \(\mathsf{K}_{x}\mathsf{K}_{x}\varphi\in\Delta\), thus \(\mathsf{K}_{x}\varphi\in\Theta\) and \(\varphi\in\Lambda\), by the definition of \(R_{x}\). For condition 3, suppose \(y\in\textsf{Var}(\Lambda)\setminus\textsf{Var}(\Delta)\). Then, since \(\Delta\in W_{\Gamma}\), \(y\not\in\textsf{Var}(\Gamma)\), so by Proposition 4 we are done.
Before proving the truth lemma, we make two simple observations:
* If \(x\approx y\) is in some \(\Delta\in W_{\Gamma}\) then \(x\sim_{\Gamma}y\).
* If \(x\sim_{\Gamma}y\) then \(x=y\) or \(x\approx y\) in all the \(\Delta\in W_{\Gamma}\).
For the first, suppose \(x\approx y\) is in some \(\Delta\in W_{\Gamma}\). We just need to consider the case when \(x\not=y\) for if \(x=y\) then \(x\sim_{\Gamma}y\) by definition. By Proposition 4, \(x\) and \(y\) must be both in \(\textsf{Var}(\Gamma)\), thus by RIGIDN and the fact that \(\Delta\) is connected to \(\Gamma\), \(x\approx y\in\Gamma\).
The second is immediate by the definition of \(\sim_{\Gamma}\): if \(x\approx y\in\Gamma\) then \(x\approx y\in\Delta\) due to RIGIDP and the fact that all the \(\Delta\in W_{\Gamma}\) are connected to \(\Gamma\).
Although \(R_{|x|}\) is transitive, the model \(\mathcal{M}_{\Gamma}\) is in general neither reflexive nor symmetric. For the failure of reflexivity, note that some \(x\) may not be in the language of some state. For the failure of symmetry: we may have \(\Delta R_{|x|}\Theta\) and \(L_{\Delta}\subset L_{\Theta}\), and thus, by Proposition 4, it is not the case that \(\Theta R_{|x|}\Delta\). We will turn this model into an S5 model later on. Before that, we first prove a (conditional) truth lemma w.r.t. \(\mathcal{M}_{\Gamma}\) and the canonical assignment \(\sigma^{*}\) such that \(\sigma^{*}(x)=|x|\) for all \(x\in\textsf{Var}(W_{\Gamma})\). [Truth lemma] For any \(\varphi\in\mathbf{ELAS}^{+}\) and any \(\Delta\in W_{\Gamma}\), if \(\varphi\in L_{\Delta}\) then:
\[\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash\varphi\Leftrightarrow\varphi\in\Delta\]
We do induction on the structure of the formulas.
For the case of \(t\approx t^{\prime}\in L_{\Delta},\) by \(\exists\)-property we have some \(x,y\in\textsf{Var}(L_{\Delta})\) such that \(t\approx x\in\Delta,t^{\prime}\approx y\in\Delta\).
* Suppose \(t\approx t^{\prime}\in\Delta\). Since \(t\approx x\in\Delta,t^{\prime}\approx y\in\Delta\), by \(\mathtt{TRANS}\), \(x\approx y\in\Delta\). Now by Proposition 4, \(x\sim_{\Gamma}y\). Thus \(\sigma^{*}(t,\Delta)=|x|=|y|=\sigma^{*}(t^{\prime},\Delta)\), then \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash t\approx t^{\prime}.\)
* If \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash t\approx t^{\prime}\) then \(\sigma^{*}(t,\Delta)=\sigma^{*}(t^{\prime},\Delta)\). If \(t\) and \(t^{\prime}\) are variables \(x,y\), then \(|x|=|y|\), i.e., \(x\sim_{\Gamma}y\). By Proposition 4, either \(x=y\) or \(x\approx y\in\Delta\). Actually, even if \(x=y\), since \(x\) is in \(\textsf{Var}(\Delta)\), \(x\approx x\in\Delta\) by \(\mathtt{ID}\). If \(t\) and \(t^{\prime}\) are both in **N**, then by the definition of \(\eta^{c}\), there are \(t\approx x\) and \(t^{\prime}\approx y\) in \(\Delta\) and \(y\in|x|\), which means \(x\sim_{\Gamma}y\). By Proposition 4 and \(\mathtt{ID}\) again, \(x\approx y\in\Delta\), therefore \(t\approx t^{\prime}\in\Delta\). Finally, w.l.o.g. if \(t\in\textsf{Var}(\Delta)\) and \(t^{\prime}\in\textbf{N}\), then by the definition of \(\eta^{c}\), \(t^{\prime}\approx x\in\Delta\) for some \(x\) and \(t\sim_{\Gamma}x\). Again, since \(x,t\in\textsf{Var}(\Delta)\), \(x\approx t\in\Delta\), therefore \(t\approx t^{\prime}\in\Delta\).
For the case of \(P\overline{t}\in L_{\Delta}\).
* If \(P\overline{t}\in\Delta\), then by \(\exists\)-property, there are \(\overline{x}\) in \(\textsf{Var}(\Delta)\) such that \(\overline{x}\approx\overline{t}\in\Delta\). Then by SUBP we have \(P\overline{x}\in\Delta\). Thus by the definition of \(\rho^{c}\), \(\overline{|x|}\in\rho^{c}(P,\Delta)\). By the definitions of \(\sigma^{*}\) and \(\eta^{c}\), \(\overline{\sigma^{*}(t,\Delta)}=\overline{|x|}\). Therefore \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash P\overline{t}\).
* If \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash P\overline{t}\), then the vector \(\overline{\sigma^{*}(t,\Delta)}\in\rho^{c}(P,\Delta)\). It means that \(\overline{\sigma^{*}(t,\Delta)}\)=\(\overline{|x|}\) (coordinate-wise) for some \(P\overline{x}\in\Delta\) such that \(\overline{t}\approx\overline{y}\in\Delta\) for some \(\overline{y}\) such that \(\overline{x}\sim_{\Gamma}\overline{y}\). Note that since \(P\overline{x}\in\Delta\), \(\overline{x}\in\textsf{Var}(\Delta)\). It is not hard to show that \(\overline{x}\approx\overline{y}\in\Delta\) by Proposition 4. Now based on SUBP, \(P\overline{t}\in\Delta\).
The boolean cases are routine.
For the case of \(\mathsf{K}_{t}\psi\in L_{\Delta}\):
* Suppose \(\mathsf{K}_{t}\psi\not\in\Delta\), then \(\widehat{\mathsf{K}}_{t}\neg\psi\in\Delta\). By Lemma 4 there is some variable \(x\) and \(\Theta\in W_{\Gamma}\) such that \(\Delta R_{|x|}\Theta\), \(x\approx t\in\Delta\) and \(\neg\psi\in\Theta\). Therefore, by the induction hypothesis, \(\mathcal{M}_{\Gamma},\Theta,\sigma^{*}\nvDash\psi\) and so \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\nvDash\mathsf{K}_{x}\psi\). If \(t\) is a variable then \(x\sim_{\Gamma}t\) by Proposition 4, thus \(\sigma^{*}(\Delta,t)=|x|\). If \(t\) is a name then by definition \(\eta^{c}(\Delta,t)=|x|\). Therefore in either case we have \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\nvDash\mathsf{K}_{t}\psi\).
* Suppose \(\mathsf{K}_{t}\psi\in\Delta\), then by \(\exists\)-property, there is an \(x\in\textsf{Var}(\Delta)\) such that \(x\approx t\in\Delta\), thus \(\mathsf{K}_{x}\psi\in\Delta\) by SUBK and \(\sigma^{*}(t,\Delta)=|x|\). By induction hypothesis, \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash x\approx t\). Now consider any \(R_{|x|}\)-successor \(\Theta\) of \(\Delta\), it is clear that \(\psi\in\Theta\) by definition of \(R_{|x|}\). Now by induction hypothesis again, \(\mathcal{M}_{\Gamma},\Theta,\sigma^{*}\vDash\psi\). Therefore, \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash\mathsf{K}_{t}\psi\).
For the case of \([x:=t]\psi\in L_{\Delta}\):
* Suppose \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash[x:=t]\psi\). * If \(t\in\textbf{N}\), by the \(\exists\)-property, we have \(y\approx t\in\Delta\) for some \(y\in\textsf{Var}(\Delta).\) By the induction hypothesis, \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash y\approx t\). Therefore \(\sigma^{*}(\Delta,t)=|y|\), thus \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}[x\mapsto|y|]\vDash\psi\). Now if \(\psi[y/x]\) is admissible then we have \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash\psi[y/x]\). By IH, \(\psi[y/x]\in\Delta\). Thus \([x:=y]\psi\in\Delta\) by SUB2AS. Since \(t\approx y\in\Delta\), \([x:=t]\psi\in\Delta\) by SUBAS. Note that if \(\psi[y/x]\) is not admissible, then we can reletter \(\psi\) to obtain an equivalent formula \(\psi^{\prime}\in L_{\Delta}\) such that \(\psi^{\prime}[y/x]\) is admissible. Then the above proof still works to show that \([x:=t]\psi^{\prime}\in\Delta\). Since relettering can be done in the proof system by Proposition 3, we have \([x:=t]\psi\in\Delta\). * If \(t\) is a variable \(y\), then \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}[x\mapsto|y|]\vDash\psi\). From here, a similar (but easier) proof suffices.
* Supposing \([x:=t]\psi\in\Delta\), by the \(\exists\)-property of \(\Delta\), we have some \(y\in\textsf{Var}(\Delta)\) such that \(t\approx y\in\Delta\). Like the proof above we can assume w.l.o.g. that \(\psi[y/x]\) is admissible, for otherwise we can reletter \(\psi\) first. Thus \([x:=y]\psi\in\Delta\) by SUBAS. Then by \(\mathtt{SUBASEQ}\), \(\psi[y/x]\in\Delta\). By IH, \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash\psi[y/x]\wedge t\approx y\). By the semantics and the assumption that \(\psi[y/x]\) admissible, \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash[x:=t]\psi\).
Now we will transform the canonical model into a proper S5 model by taking the reflexive, symmetric and transitive closure of each \(R_{|x|}\) in \(\mathcal{M}_{\Gamma}\). Note that although \(\mathcal{M}_{\Gamma}\) is a transitive model, the symmetric closure will break the transitivity. Actually, it can be done in one go by taking the reflexive transitive closure via undirected paths. More precisely, let \(\mathcal{N}_{\Gamma}\) be the model like \(\mathcal{M}_{\Gamma}\) but with the revised relation \(R^{*}_{|x|}\) for each \(x\in\textsf{Var}(W_{\Gamma})\), defined as:
\(\Delta R^{*}_{|x|}\Theta\) iff either \(\Delta=\Theta\), or there are some \(\Delta_{1}\dots\Delta_{n}\) for some \(n\geq 0\) such that \(\Delta_{k}R_{|x|}\Delta_{k+1}\) or \(\Delta_{k+1}R_{|x|}\Delta_{k}\) for each \(0\leq k\leq n\), where \(\Delta_{0}=\Delta\) and \(\Delta_{n+1}=\Theta\).
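Operationally, \(R^{*}_{|x|}\) is the equivalence relation generated by \(R_{|x|}\): two states are related exactly when they lie in the same connected component of the undirected graph whose edges are the \(R_{|x|}\) pairs. The union-find sketch below, in Python, is only an illustration; the state names and the helper are hypothetical and not part of the paper.

```python
def equivalence_closure(states, edges):
    """Reflexive-symmetric-transitive closure of a relation given as a set of
    directed pairs; returns a predicate telling whether two states end up in
    the same class, i.e. whether Delta R* Theta holds."""
    parent = {s: s for s in states}

    def find(s):                        # find with path halving
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    for a, b in edges:                  # union along each R_{|x|} edge,
        parent[find(a)] = find(b)       # ignoring its direction

    return lambda a, b: find(a) == find(b)

# e.g. related = equivalence_closure({'D0', 'D1', 'D2'}, {('D0', 'D1'), ('D2', 'D1')})
# related('D0', 'D2') -> True, via the undirected path D0 - D1 - D2
```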
We will show that it preserves the truth value of \(\mathbf{ELAS}\) formulas.
[Preservation lemma] For all \(\varphi\in\mathbf{ELAS}:\)
\[\mathcal{N}_{\Gamma},\Delta,\sigma^{*}\vDash\varphi\Leftrightarrow\varphi\in\Delta\]
Since we only altered the relations, we just need to check the case of \(\mathsf{K}_{t}\psi\in\mathbf{ELAS}\). Note that such a \(\mathsf{K}_{t}\psi\) is in every local language \(L_{\Delta}\).
* If \(\mathcal{N}_{\Gamma},\Delta,\sigma^{*}\vDash\mathsf{K}_{t}\psi\) then, since the closure only adds relations, we know \(\mathcal{M}_{\Gamma},\Delta,\sigma^{*}\vDash\mathsf{K}_{t}\psi\) by the induction hypothesis and Lemma 4. Now by Lemma 4 again, \(\mathsf{K}_{t}\psi\in\Delta.\)
* Suppose \(\mathsf{K}_{t}\psi\in\Delta\). Since \(\Delta\) has the \(\exists\)-property, there is some \(x\in\textsf{Var}(\Delta)\) such that \(x\approx t\in\Delta\), thus \(\mathsf{K}_{x}\psi\in\Delta\). Now consider an arbitrary \(R^{*}_{|x|}\)-successor \(\Theta\) in \(\mathcal{N}_{\Gamma}\). If \(\Delta=\Theta\) then by \(\mathtt{KT}\) it is trivial to show that \(\psi\in\Delta\). Otherwise, by the definition of \(R^{*}_{|x|}\), suppose there are some \(\Delta_{1}\dots\Delta_{n}\) such that \(\Delta_{k}R_{|x|}\Delta_{k+1}\) or \(\Delta_{k+1}R_{|x|}\Delta_{k}\) for each \(0\leq k\leq n\), where \(\Delta=\Delta_{0}\) and \(\Theta=\Delta_{n+1}\). We do induction on \(n\) to show that \(\mathsf{K}_{x}\psi\in\Delta_{k}\) for all \(k\leq n+1\). Note that if the claim is correct then by \(\mathtt{KT}\) we have \(\psi\in\Delta_{n+1}=\Theta\), and thus by IH \(\mathcal{N}_{\Gamma},\Theta,\sigma^{*}\vDash\psi\). * \(n=0:\) Then there are two cases: * \(\Delta R_{|x|}\Theta\) in \(\mathcal{M}_{\Gamma}\): by \(\mathtt{4x}\), \(\mathsf{K}_{x}\mathsf{K}_{x}\psi\in\Delta\) and then \(\mathsf{K}_{x}\psi\in\Theta\) by the definition of \(R_{|x|}\). * \(\Theta R_{|x|}\Delta\) in \(\mathcal{M}_{\Gamma}\): First note that there is some \(y\in|x|\) such that \(y\in\textsf{Var}(\Theta)\) by the definition of \(R_{|x|}\). If \(y\not=x\) then by Proposition 4, we have \(y\approx x\in\Theta\), therefore \(x\in\textsf{Var}(\Theta)\). Towards contradiction, suppose \(\neg\mathsf{K}_{x}\psi\in\Theta\). By \(\mathtt{5x}\), \(\mathsf{K}_{x}\neg\mathsf{K}_{x}\psi\in\Theta\). By the definition of \(R_{|x|}\), \(\neg\mathsf{K}_{x}\psi\in\Delta\). Contradiction. * \(n=k+1:\) Suppose the claim holds for \(n=k\), i.e., \(\mathsf{K}_{x}\psi\in\Delta_{k}\). There are again two cases, \(\Delta_{k}R_{|x|}\Delta_{k+1}\) or \(\Delta_{k+1}R_{|x|}\Delta_{k}\), and they can be proved as above. In sum, \(\mathcal{N}_{\Gamma},\Theta,\sigma^{*}\vDash\psi\) for any \(\Theta\) such that \(\Delta R^{*}_{|x|}\Theta\). Therefore, \(\mathcal{N}_{\Gamma},\Delta,\sigma^{*}\vDash\mathsf{K}_{t}\psi\).
It can be easily checked that \(\mathcal{N}_{\Gamma}\) is an epistemic model, i.e., all the \(R^{*}_{|x|}\) are equivalence relations. The following is straightforward: we use some of the new variables while leaving infinitely many new variables still unused. Each \(\mathsf{SELAS}\)-consistent set \(\Gamma^{--}\) can be extended to a consistent set \(\Gamma^{-}\) w.r.t. some infinitely proper sublanguage \(L\) of \(\mathbf{ELAS}^{+}\) such that for each \(a\in\textbf{N}\) there is an \(x\in\textsf{Var}(L)\) such that \(x\approx a\in\Gamma^{-}\). Finally, we can extend it to an MCS \(\Gamma\) w.r.t. \(L\).
\(\mathsf{SELAS}\) is sound and strongly complete over epistemic Kripke models with assignments. Soundness is from Theorem 3. Then given a consistent set \(\Gamma^{-}\), using the above proposition we have a \(\Gamma\). By the Truth Lemma 4 we have a model satisfying \(\Gamma\) and hence \(\Gamma^{-}\). From the above proof, it is not hard to see that we can obtain the completeness of \(\mathsf{SELAS}\) without \(\mathtt{Tx},\mathtt{4x},\mathtt{5x}\) over arbitrary models by some minor modifications of the proof.
## 5 Discussions and future work
In this paper, we proposed a lightweight epistemic language with assignment operators from dynamic logic, which can express various _de re_/_de dicto_ readings of knowledge statements when the references of the names are not commonly known. We gave a complete axiomatisation of the logic over epistemic models with constant domain of agents.
The complexity of the epistemic logic \(\mathsf{SELAS}\) is currently unknown to us, though we conjecture it is decidable due to the very limited use of quantifiers. Under the translation in Section 2, the name-free fragment can be viewed as a guarded fragment of first-order logic with transitive guards [19], which implies decidability. However, the non-rigid names translate into function symbols in the first-order language, which may cause trouble, since the guarded fragment with function symbols in general yields an undecidable logic [9]. We are not that far from the decidability boundary, if not on the wrong side.
To actually design a tableaux method in pursuing the decidability of our logic, we have to handle the difficulties from various sources:
* S5 frame conditions
* equalities
* constant domain
* non-rigid names
* termed modalities
* assignment operators
Some of these issues are already complicated on their own, judging from existing work. The biggest hurdle for termination in a tableau method for S5-based logics like the ones proposed in [5, 14] is to ensure that loops occur within finitely many steps. In our setting, this requires showing that, given a satisfiable formula, we can bound the number of necessary elements in the domain (for non-rigid names) and the number of subformulas we may encounter when building the tableau. The S5 conditions and the assignment operator may force us to always introduce new elements in the domain when creating new successors, while the new elements can essentially create new subformulas if we add new symbols for them in the tableaux. On the other hand, without the transitivity and symmetry conditions, it is possible to bound the number of new elements in the domain and obtain decidability via some finite model property. We leave the details, as well as the exploration of other ideas for decidability such as filtering the canonical model, to a future occasion.
Below we list a few other further directions:
* Model theoretical issues of \(\mathbf{ELAS}\).
* Extension with function symbols.
* Extension with a (termed) common knowledge operator.
* Extension with limited quantifications over agents as in [15].
* Extension to varying domain models, where the existence of all the agents is not commonly known.
Finally, as a general direction, it would be interesting to consider what happens if we replace the standard epistemic logic with our \(\mathbf{ELAS}\) in various existing logical framework extending the standard one.
AcknowledgementThe authors thank Johan van Benthem, Rasmus Rendsvig, and Dominik Klein for pointers on related work. The authors are also grateful to the anonymous reviewers of AiML, whose comments helped in improving the presentation of the paper.⁸ The research for this work was supported by the New Zealand Centre at Peking University and Major Program of the National Social Science Foundation of China (NO. 17ZDA026).
[FOOTNOTE:8][ENDFOOTNOTE]
## References
* [1] Aloni, M., “Quantification under Conceptual Covers,” Ph.D. thesis, University of Amsterdam (2001).
* [2] Aloni, M., _Knowing-who in quantified epistemic logic_, in: H. van Ditmarsch and G. Sandu, editors, _Jaakko Hintikka on Knowledge and Game-Theoretical Semantics_, Springer, 2018 pp. 109–129.
* [3] Corsi, G., _A unified completeness theorem for quantified modal logics_, Journal of Symbolic Logic **67** (2002), pp. 1483–1510.
* [4] Corsi, G. and E. Orlandelli, _Free quantified epistemic logics_, Studia Logica **101** (2013), pp. 1159–1183.
* [5] Fitting, M. and R. L. Mendelsohn, “First-order modal logic,” Synthese Library, Springer, 1998.
* [6] Fitting, M., L. Thalmann and A. Voronkov, _Term-modal logics_, Studia Logica **69** (2001), pp. 133–169.
* [7] Grove, A. J., _Naming and identity in epistemic logic part II: a first-order logic for naming_, Artificial Intelligence **74** (1995), pp. 311–350.
* [8] Grove, A. J. and J. Y. Halpern, _Naming and identity in epistemic logics part I: the propositional case_, Journal of Logic and Computation **3** (1993), pp. 345–378.
* [9] Grädel, E., _On the restraining power of guards_, Journal of Symbolic Logic **64** (1998), pp. 1719–1742.
* [10] Hintikka, J., “Knowledge and Belief: An Introduction to the Logic of the Two Notions,” Cornell University Press, Ithaca N.Y., 1962.
* [11] Holliday, W. H. and J. Perry, _Roles, rigidity, and quantification in epistemic logic_, in: A. Baltag and S. Smets, editors, _Johan van Benthem on Logic and Information Dynamics_, Springer, 2014 pp. 591–629.
* [12] Hughes, G. E. and M. J. Cresswell, “A New Introduction to Modal Logic,” Routledge, 1996.
* [13] Kooi, B., _Dynamic term-modal logic_, in: _Proceedings of LORI-I_, 2007, pp. 173–186.
* [14] Massacci, F., _Strongly analytic tableaux for normal modal logics_, in: A. Bundy, editor, _Automated Deduction — CADE-12_ (1994), pp. 723–737.
* [15] Naumov, P. and J. Tao, _Everyone knows that someone knows: Quantifiers over epistemic agents_, The Review of Symbolic Logic (2018).
* [16] Padmanabha, A. and R. Ramanujam, _The monodic fragment of propositional term modal logic_, Studia Logica (2018).
* [17] Rendsvig, R., “Towards a Theory of Semantic Competence,” Master’s thesis, Roskilde University (2011).
* [18] Rendsvig, R., _Modeling semantic competence: A critical review of Frege’s puzzle about identity_, in: L. D. and S. M., editors, _New Directions in Logic, Language and Computation, ESSLLI 2010, ESSLLI 2011_, LNCS **7415** (2012), pp. 140–157.
* [19] Szwast, W. and L. Tendera, _On the decision problem for the guarded fragment with transitivity_, in: _Proceedings of the 16th Annual IEEE Symposium on Logic in Computer Science_, LICS ’01 (2001), pp. 147–156.
* [20] Thalmann, L., “Term-modal logic and quantifier-free dynamic assignment logic,” Ph.D. thesis, Uppsala University (2000).
* [21] Wang, Y., _A new modal framework for epistemic logic_, in: _Proceedings Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge, TARK 2017, Liverpool, UK, 24-26 July 2017._, 2017, pp. 515–534.
* [22] Wang, Y., _Beyond knowing that: A new generation of epistemic logics_, in: H. van Ditmarsch and G. Sandu, editors, _Jaakko Hintikka on Knowledge and Game-Theoretical Semantics_, Springer, 2018 pp. 499–533.
|
1602.07753 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 32970,
"num_imgs": 3,
"llama3_tokens_count": 9129
} | [
"content_image/1602.07753/x1.png",
"content_image/1602.07753/x2.png",
"content_image/1602.07753/x3.png"
] | # Short-range photoassociation from the inner wall of the lowest triplet potential of \({}^{85}\)Rb\({}_{2}\)
R. A. Carollo¹, J. L. Carini², E. E. Eyler, P. L. Gould, and W. C. Stwalley
Department of Physics, University of Connecticut, Storrs, CT 06269, USA
carollo@phys.uconn.edu
[FOOTNOTE:1][ENDFOOTNOTE]
[FOOTNOTE:2][ENDFOOTNOTE]
February 25, 2024
###### Abstract
Ultracold photoassociation is typically performed at large internuclear separations, where the scattering wavefunction amplitude is large and Franck-Condon overlap is maximized. Recently, work by this group and others on alkali-metal diatomics has shown that photoassociation can efficiently form molecules at short internuclear distance in both homonuclear and heteronuclear dimers. We propose that this short-range photoassociation is due to excitation near the wavefunction amplitude maximum at the inner wall of the lowest triplet potential. We show that Franck-Condon factors from the highest-energy bound state can almost precisely reproduce Franck-Condon factors from a low-energy scattering state, and that both calculations match experimental data from the near-zero positive-energy scattering state with reasonable accuracy. We also show that the corresponding photoassociation from the inner wall of the ground-state singlet potential at much shorter internuclear distance is weaker and undetectable under our current experimental conditions. We predict from Franck-Condon factors that the strongest of these weaker short-range photoassociation transitions are one order of magnitude below our current sensitivity.
†
[FOOTNOTE:†][ENDFOOTNOTE]
_Keywords_: photoassociation, ultracold molecules, transition matrix elements
## 1 Introduction
In recent work, our lab has performed a number of short-range photoassociation (PA) experiments in homonuclear \({}^{85}\)Rb\({}_{2}\). In [1] we studied blue-detuned and short-range photoassociation to the 4 different \(\Omega=0^{+}\), \(0^{-}\), 1, and 2 components of the \(1\,^{3}\Pi_{g}\) state, including various spectra. We also studied other short-range photoassociation spectra to the \(2\,^{1}\Sigma_{g}^{+}\) state [2, 3], including a single rovibrational level in the short-range state (\(v^{\prime}=111\), \(J^{\prime}=5\)) that is perturbed by a single, predominately long-range rovibrational level (\(v^{\prime}=155\), \(J^{\prime}=5\)) of the \(1\,^{1}\Pi_{g,\,\Omega=1}\) state, the two levels being separated by \(\sim 0.017\) cm\({}^{-1}\).
When homonuclear photoassociation is done in the traditional long-range regime, the \(\Sigma_{n}C_{n}/R^{n}\) form of the potential is different in the ground and excited states. For the lowest asymptote of two ground-state atoms, the long-range potential is due to London dispersion forces and takes the form
\[U=-\frac{C_{6}}{R^{6}}-\frac{C_{8}}{R^{8}}-\frac{C_{10}}{R^{10}}\,.\] (1)
These terms correspond to the dipole-dipole, dipole-quadrupole, and quadrupole-quadrupole plus dipole-octupole interactions. The excited state potentials, in addition to \(C_{6}\), \(C_{8}\), and \(C_{10}\) terms, also contain a leading (positive or negative) \(C_{3}/R^{3}\) term (for homonuclear molecules) [4], which dominates at long range.
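As a rough numerical illustration of these functional forms, the short Python sketch below evaluates the ground-state dispersion series above and an excited-state curve with the additional \(C_{3}/R^{3}\) term; the coefficients are unitless placeholders chosen only to show the qualitative behavior, not the actual Rb\({}_{2}\) values.

```python
def ground_long_range(r, c6, c8, c10):
    """Dispersion form of Eq. (1): -C6/R^6 - C8/R^8 - C10/R^10."""
    return -c6 / r**6 - c8 / r**8 - c10 / r**10

def excited_long_range(r, c3, c6, c8, c10):
    """Excited-state form with the additional resonant C3/R^3 term (the sign
    of C3 can be positive or negative); it dominates at large R."""
    return -c3 / r**3 + ground_long_range(r, c6, c8, c10)

# placeholder coefficients, for shape only (not Rb2 values)
for r in (20.0, 50.0, 100.0):
    print(r, ground_long_range(r, 1.0e7, 1.0e8, 1.0e9),
          excited_long_range(r, 1.0e4, 1.0e7, 1.0e8, 1.0e9))
```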
This difference in functional form means that increasing detuning from the excited atomic asymptote reduces photoassociation efficiency. In heteronuclear molecules with no excited-state \(C_{3}\) term [5], efficiency still decreases due to the decreasing amplitude of the wavefunction in the scattering state at the internuclear distance of the excited-state outer turning point. In [6], Pillet _et al._ derive an expression (equation (55)) for the homonuclear photoassociation efficiency and find that, in the limit of small \(\Delta\), the efficiency is proportional to \(\Delta^{-(4J^{\prime}+7)/3}\), where \(\Delta\) is the detuning below the atomic asymptote and \(J^{\prime}\) is the excited state rotational quantum number. This derivation assumes that PA can only occur below the asymptote, and implies that it is only strong enough to be observable near the atomic dissociation asymptote. Although true in many cases, both of these have been demonstrated to have exceptions in our work (in Rb\({}_{2}\) [1, 2]) and that of others (in LiCs [7], RbCs [8, 9, 10], and NaCs [11]). We note that the general approach in [6] could be applied to short-range PA, but the expressions therein would need to be re-derived without restriction to long range.
This traditional view of photoassociation as predominantly a long-range process is based on several factors, including the number of atom pairs that exist at a given internuclear distance in the MOT and the fact that the amplitude of the nuclear wavefunction is small at short range and much larger at long range. However, there are also several mechanisms that provide exceptions to this assumption. One is that there is a significant local maximum in the square of the wavefunction amplitude at the inner wall of the \(a\,^{3}\Sigma_{u}^{+}\) potential in \({}^{85}\)Rb\({}_{2}\), roughly at \(9.7\,a_{0}\), as can be seen in Figure 1. This can enhance the PA rate if the excited state has significant short-range amplitude as well, such as exists behind a short- or intermediate-range potential barrier. As was discussed in [2], there is also a \(g\)-wave shape resonance in the triplet potential because of its centrifugal barrier near \(80\,a_{0}\), which can provide a second source of short-range PA enhancement. Similar considerations apply for the zero-energy wavefunction in the \(a\,^{3}\Sigma^{+}\) state of many other homo- and heteronuclear alkali dimers. A third mechanism for enhancing PA to short-range states is excited-state resonant coupling, which can combine efficient long-range excitation with efficient short-range decay in both homonuclear [3, 12] and heteronuclear [11, 13, 14] cases. A fourth path to efficient formation of short-range states is Feshbach-optimized PA (FOPA) [15, 16].
Here we explore the first of these mechanisms, photoassociation from the inner turning point of the continuum wavefunction. To estimate relative PA rates to various excited-state vibrational levels, we take advantage of the continuity of the absorption (or emission) cross section as one passes through a dissociation limit, which was pointed out by Allison and Dalgarno [17] and Smith [18] some 45 years ago, using as examples the H\({}_{2}\) (a “VUV” alkali metal diatomic) Lyman bands and O\({}_{2}\) Schumann-Runge bands. While [17] used a Morse potential for the excited B state, which is unphysical at long range, the continuity behavior for the \(B\) state of H\({}_{2}\) with correct long-range behavior was published two years later [19], and quantitatively showed continuity at dissociation. In particular, we take advantage of the fact that the short-to-intermediate range behavior of the wavefunction of the highest long-range bound level (readily calculated by standard programs such as LEVEL 8.2 [20]) is very nearly identical to the short-to-intermediate range behavior of the continuum wavefunction at near-zero-energy, i.e. ultracold energies, as shown in Figure 1. Note that for most cases of photoassociation at long range, where the last bound level and near-zero-energy continuum wavefunctions differ, the two Franck-Condon factor (FCF) calculations will no longer agree quantitatively.
A direct consequence is that the ultracold limit of the FCF for free \(\rightarrow\) bound photoassociation occurring at relatively short range is, within a scaling factor, well approximated by the bound \(\rightarrow\) bound FCF for a corresponding transition from the uppermost vibrational level, \(v^{\prime\prime}_{max}\):
\[\left|\left<k^{\prime\prime},J^{\prime\prime}=0|v^{\prime},J^{\prime}=1\right> \right|^{2}\propto\left|\left<v^{\prime\prime}_{max},J^{\prime\prime}=0|v^{ \prime},J^{\prime}=1\right>\right|^{2}\] (2)
where the radial wavefunctions for \(\left|k^{\prime\prime},J^{\prime\prime}=0\right>\) (free) and \(\left|v^{\prime\prime}_{max},J^{\prime\prime}=0\right>\) (bound) are very similar, as shown in Figure 1, as are the FCFs listed in Equation 2. Here the ultracold limit is taken to be \(\sim 120\,\mu\)K, the approximate temperature of our magneto-optical trap.
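The near-identity of the last-bound-level and near-zero-energy wavefunctions at short range is easy to reproduce numerically: propagating the radial equation outward at \(E=0\) and at an energy slightly below threshold gives, after rescaling, essentially the same short-range wavefunction. The Python sketch below does this with a simple Numerov integrator on a model Lennard-Jones potential in reduced units; the potential, grid, and energies are illustrative stand-ins, not the actual \(a\,^{3}\Sigma_{u}^{+}\) curve or the code of [23, 24, 25].

```python
import numpy as np

def model_potential(r, de=1.0, re=1.0):
    """12-6 Lennard-Jones stand-in for the lowest triplet potential."""
    return de * ((re / r)**12 - 2.0 * (re / r)**6)

def numerov_outward(energy, r, v):
    """Outward Numerov propagation of u'' = (V - E) u, in units 2*mu/hbar^2 = 1."""
    h2 = (r[1] - r[0])**2
    f = v - energy
    u = np.zeros_like(r)
    u[0], u[1] = 0.0, 1.0e-10          # start deep inside the repulsive wall
    for i in range(1, len(r) - 1):
        u[i + 1] = ((2.0 * u[i] * (1.0 + 5.0 * h2 * f[i] / 12.0)
                     - u[i - 1] * (1.0 - h2 * f[i - 1] / 12.0))
                    / (1.0 - h2 * f[i + 1] / 12.0))
    return u

r = np.linspace(0.70, 3.0, 4000)
v = model_potential(r)
u_free = numerov_outward(0.0, r, v)        # zero-energy scattering solution
u_near = numerov_outward(-1.0e-4, r, v)    # solution just below threshold
short = slice(0, 2000)                     # short-range part of the grid
scale = (np.dot(u_free[short], u_near[short])
         / np.dot(u_near[short], u_near[short]))
mismatch = (np.max(np.abs(u_free[short] - scale * u_near[short]))
            / np.max(np.abs(u_free[short])))
print(mismatch)    # small: the two shapes nearly coincide at short range
```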
## 2 Photoassociation Model
As stated in Section 1, we often operate under circumstances where traditional assumptions about photoassociation do not apply because of a local short-range maximum in the target-state wavefunction. We thus introduce a simple model to predict the efficiency of photoassociation in the short-range or blue-detuned regions that were formerly considered inaccessible. Our model assumes that the inner turning point of the near-zero-energy free wavefunction, or incoming scattering state, creates a population of closely-spaced atom pairs sufficient to form molecules at relatively short range. If the excited state of interest has significant amplitude at short range, the PA rate can be enhanced. Further, for sufficiently small energy differences, the wavefunction at short range is equivalent both above and below the atomic dissociation asymptote, allowing the highest bound state to be used for FCF calculations. An example of short-range excitation to the \({}^{85}\)Rb\({}_{2}\)\(1\,^{3}\Pi_{g,\,\Omega=1}\) state, which lies above the \(5s_{1/2}+5p_{3/2}\) asymptote, is shown in Figure 1. We believe that this model is an additional new and useful ‘shortcut’ for understanding rovibronic spectroscopy of ultracold molecules [21] and enhancing their formation.
<figure><img src="content_image/1602.07753/x1.png"><figcaption>Figure 1: (Color online) Wavefunctions of the a3Σ+u, v′′=39 vibrational levelof 85Rb2 and the 120 μK scattering state, each with zerorotational/collisional angular momentum, and three rovibrational levels of the13Πg,Ω=1 state, the v′=0, 5, and 8 levels, each with J′=1 are shown. Thev′′=39 wavefunction closely resembles a low-energy continuum wavefunctionthroughout the range of R shown here. The potentials shown are from [22] forthe a3Σ+u state and from [1] (and [17] therein) for the 13Πg,Ω=1 state. Notethat the v′=0 and 5 wavefunctions are similar to harmonic oscillatorwavefunctions, while v′=8, near the top of a potential barrier, is moreasymmetric toward larger distance. For v′=0 (and 2, 4, and even 6), there is asignificant FCF from v′′=39, while it is small for v′=5 (and 3 and 1, as shownin Figure 2). Here, the 120 μK wavefunction is scaled to match the v′′=39wavefunction at short R to demonstrate their similarity. For all other uses,this state was scaled as described in the text. It should be noted that thev′′=39 wavefunction amplitude increases dramatically at longer range out toits maximum at ∼62 a0.</figcaption></figure>
In a time-dependent view, this can be interpreted as meaning that atom pairs spend significant time in close proximity when they collide at short range. This period of slow movement at short range means that there is always a non-negligible fraction of atoms available to interact at these internuclear distances. Since the lowest triplet potential inner wall is at significantly longer range than the inner wall of the singlet \(X\) state, colliding triplet pairs of ultracold atoms are able to access many states of interest that are inaccessible to ultracold colliding pairs of singlet character. By contrast, singlet wavefunctions have rapid, low-amplitude oscillations at most internuclear distances at which excited states are present, drastically reducing the transition probability. A partial exception is in the region of an intermediate-range barrier maximum in the \(B\,^{1}\Pi_{u}\) state in \({}^{85}\)Rb\({}_{2}\), discussed in Section 3.
Thus, any PA transitions that are to be studied at short range must be accessible from the lowest triplet potential. In particular, this means that the transition must be allowed from the lowest triplet state in the appropriate Hund’s case. For pairs of identical ground-state alkali atoms, this implies PA to a triplet _gerade_ state in Hund’s case (a), or a \(0_{g}^{+}\), \(0_{g}^{-}\), \(1_{g}\), or \(2_{g}\) state in Hund’s case (c). Although for heavy alkalis such as \({}^{85}\)Rb the singlet \(\nleftrightarrow\) triplet selection rule is weakened (or equivalently, case (a) quantum numbers are no longer perfectly “good” and case (c) quantum numbers may be more appropriate), the \(g\leftrightarrow u\) selection rule is still strong in a homonuclear system.
A simple way to use this model to calculate relative excitation probabilities from free triplet atoms to a given excited state is to calculate the FCF between the highest bound level of the lowest triplet state and each level of the target excited state. It is slightly more accurate to calculate the square of the overlap integral from the true near-zero-energy (free) state, but it is more convenient to use a bound \(\rightarrow\) bound calculation program such as LEVEL 8.2 [20]. With appropriate scaling, the highest bound vibrational level is nearly identical at short range to the near-zero-energy scattering state, as shown in Figure 1, due to the steepness of the repulsive wall and the similarity in their energies. At our experimental temperature of \(\sim 120\,\mu\)K, the thermal population distribution has a peak at 2.5 MHz, while the \(v^{\prime\prime}=39\) level is bound by 0.007238 cm\({}^{-1}\), or 217 MHz, for a difference of less than \(\sim 220\) MHz. The scattering state has been calculated using the code developed in [23, 24, 25]. Franck-Condon factors were calculated from this box-normalized free wavefunction and were scaled to give the same FCF at \(v^{\prime}=2\) as produced by the bound state calculation. We show a comparison of such an FCF calculation to experimental data in Section 3.
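Given two radial wavefunctions on a common grid, the FCF itself is just the square of their normalized overlap, and a list of FCFs can be rescaled to agree with another calculation at a chosen reference level, as done here at \(v^{\prime}=2\). A minimal Python sketch, with hypothetical arrays standing in for the LEVEL 8.2 or scattering-state output:

```python
import numpy as np

def _trapz(y, r):
    """Plain trapezoidal rule, kept explicit for clarity."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

def franck_condon_factor(r, u_lower, u_upper):
    """|<lower|upper>|^2 for two radial wavefunctions u(r) on a common grid,
    each normalised to unit norm before the overlap is taken."""
    norm_lower = np.sqrt(_trapz(u_lower**2, r))
    norm_upper = np.sqrt(_trapz(u_upper**2, r))
    overlap = _trapz(u_lower * u_upper, r) / (norm_lower * norm_upper)
    return overlap**2

def rescale_to_reference(fcfs, reference, v_ref=2):
    """Scale one set of FCFs so that it matches another at level v_ref,
    removing e.g. the arbitrary box normalisation of a continuum state."""
    factor = reference[v_ref] / fcfs[v_ref]
    return [factor * f for f in fcfs]
```

Because only relative line strengths are compared, a single such scaling factor is all that is needed.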
As with any use of a Franck-Condon factor, the \(R\)-dependence of the transition dipole moment may play an important role that is neglected by our approximation. The \(2\,^{1}\Sigma_{g}^{+}\sim 2\,(0_{g}^{+})\) state studied in [2, 3] is an excellent example. The case (c) transition dipole is constant and large at long range, but at short range (in the region of our experiment) it drops rapidly toward zero [26]. This caused a drop-off in our signal for shorter-range and more deeply-bound vibrational levels, and reflects a transition from the allowed transition in case (c) to a region where case (a) better represents the coupling and the transition is singlet \(\leftrightarrow\) triplet forbidden.
The calculation of FCFs between the \(v^{\prime\prime}=39\) level of the \(a\,^{3}\Sigma_{u}^{+}\) state and the \(v^{\prime}\) levels of the \(1\,^{3}\Pi_{g,\,\Omega=1}\) state is based on the experimentally determined potential of the \(a\,^{3}\Sigma_{u}^{+}\) state [22] and an _ab initio_ potential of the \(1\,^{3}\Pi_{g,\,\Omega=1}\) state calculated by Dulieu and Gerdes that reproduces the experimental vibrational and rotational constants fairly accurately (see Table 1 of [1]). The \(a\,^{3}\Sigma_{u}^{+}\) state potential produces a scattering length of \(a_{T}=-371\)\(a_{0}\) in our calculations, which agrees well with published scattering lengths for \({}^{85}\)Rb\({}_{2}\) in Roberts _et al._ (\(a_{T}=-369\pm 16\)\(a_{0}\) [27]) and van Kempen _et al._ (\(a_{T}=-388\pm 3\)\(a_{0}\) [28]). The electronic transition dipole moment is generally not expected to vary significantly in the small region of overlap of vibrational wavefunctions \(\psi^{\prime}(R)\) and \(\psi^{\prime\prime}(R)\). In this case, the transition dipole moments for \(1\,^{3}\Pi_{g}\gets a\,^{3}\Sigma_{u}^{+}\) (and \(B\,^{1}\Pi_{u}\gets X\,^{1}\Sigma_{g}^{+}\), discussed below) are quite similar, \(\sim 3.5\) a.u. and \(\sim 4\) a.u., respectively, at the internuclear distance of the transition [26].
The potential energy curve used here and plotted in our Figure 1 for the \(a\,^{3}\Sigma_{u}^{+}\) state from [22] is a high-quality experimentally-based potential (note that Figure 1 of [22] does not have the correct R axis values; for example, the Table II value of \(R_{e}=5.07\) Å\(\;=11.5160\,a_{0}\) is correct in our Figure 1, but is \(\sim 10\,a_{0}\) in Figure 1 of [22]). In particular, [22] argues that their results, compared to earlier results, “are significantly improved close to the atomic asymptote by including data on the mixed singlet-triplet levels of this study and data on Feshbach resonances from various other sources.” This near dissociation region is exactly the region of greatest significance for our present study.
## 3 Comparison to Data
To test whether these FCFs can predict the relative efficiency of short-range PA to various levels, we looked at several lines that our group previously detected [1]. The target state is the \(\Omega=1\) component of the \(1\,^{3}\Pi_{g}\) manifold of \({}^{85}\)Rb\({}_{2}\). This state is blue-detuned above the \(5s_{1/2}+5p_{3/2}\) asymptote.
Our experiment was carried out under conditions similar to the original work, and a detailed description can be found in [1, 2]. The molecules were formed in a \({}^{85}\)Rb MOT of typically \(8\times 10^{7}\) atoms and a density of \(1\times 10^{11}\) cm\({}^{-3}\) at \(\sim 120\,\mu\)K. The excitation was performed with a fiber-coupled photoassociation laser (Coherent 899-29 Ti:Sapphire) delivering 650 mW to the experimental chamber. After photoassociation, the molecules rapidly decay to deeply-bound levels of the \(a\,^{3}\Sigma_{u}^{+}\) state and are detected via pulsed ionization. Ions are detected on a discrete-dynode multiplier (ETP model 14150) and spectra are acquired via a boxcar integrator which gates the molecular time-of-flight signal.
To ensure accurate relative line height measurements, detection was done using photoionization with a pulsed 355 nm UV tripled Nd:YAG laser at 3.6 mJ/pulse. This photon energy corresponds to \(\sim 28,169\) cm\({}^{-1}\). Based on the data we reported in [29], the \(v=0\) level of the Rb\({}_{2}^{+}\) ground-state potential is no higher than 27,383.2 cm\({}^{-1}\). Accounting for the \(\sim 234.7\) cm\({}^{-1}\) binding energy of the \(a\,^{3}\Sigma_{u}^{+}\), \(v^{\prime\prime}=0\) level, up to 27,617.9 cm\({}^{-1}\) could be necessary to ionize. Our UV photon energy is \(\sim 551\) cm\({}^{-1}\) above this, and thus all ionization should be single-photon and line strengths should be independent of any intermediate-state resonances. Measured line strengths should therefore reflect the true relative transition probabilities. We note that the PA rates for these lines are too small to be observed via trap loss.
<figure><img src="content_image/1602.07753/x2.png"><figcaption>Figure 2: (Color online) Comparison of FCF calculations to experimental data.For each vibrational level in the excited state, three bars are shown. Thefirst bar (solid red) shows FCFs to the 13Πg,Ω=1 state from the highest boundstate of the lowest triplet potential, a3Σ+u,v′′=39, which closelyapproximates the zero-energy scattering state. The excited state potentialsare from [1]. The second bar (hashed red) is calculated from the 120 μKscattering state, and scaled to match the bound-state calculation at v′=2. Theexperimental data, shown in the third (hashed black) bar, are obtained usingPA from a Ti:Sapphire laser detected by photoionization at 355 nm from atripled Nd:YAG laser, and are also normalized to agree with the bound-stateFCFs at v′=2. The FCFs are calculated for J′′=0 and J′=1.</figcaption></figure>
Each vibrational level was measured and its line height at \(J^{\prime}=3\) recorded. As discussed in Section 2, FCFs were calculated for the same transitions by using the \(a\,^{3}\Sigma_{u}^{+}\), \(v^{\prime\prime}=39\) level as a proxy for the near-zero-energy scattering state. The FCFs from the 120 \(\mu\)K scattering state itself were also calculated. A comparison of these data and the FCF calculations is shown in Figure 2. The experimental data are scaled to match the bound-state FCF calculations at \(v^{\prime}=2\) to better show the quality of the comparison. The scattering-state FCFs are also scaled to the bound-state FCFs to ensure comparable normalization. It should be noted that levels with FCFs as low as \(2.99\times 10^{-6}\) (\(v^{\prime}=3\)) are detected. However, \(v^{\prime}=5\), with a FCF of \(1.70\times 10^{-7}\), is not detected.
The prominent alternation in FCF seen in Figure 2 bears closer examination. The strong barrier in the \(1\,^{3}\Pi_{g,\,\Omega=1}\) state creates a short-range well that differs little from a harmonic potential, and the resulting wavefunctions also closely approximate those of a harmonic oscillator. In this case, the strong maximum of the \(v^{\prime}=0\) wavefunction is replaced by a node in \(v^{\prime}=1\), with two weaker and opposite-signed extrema adjacent. If the FCF from the initial state to \(v^{\prime}=0\) is coming from a localized region of the \(v^{\prime\prime}=39\) wavefunction, the expected result is a sharp reduction (nearly cancellation) in the FCF to \(v^{\prime}=1\). Similarly, at \(v^{\prime}=2\) a new, albeit weaker, extremum will occupy the position of the \(v^{\prime}=0\) maximum, which gives rise to an alternating series of FCFs with gradually decaying contrast. The strong alternation actually observed thus supports the hypothesis that the PA in this experiment is due to the local maximum at the inner turning point of the \(a\,^{3}\Sigma_{u}^{+}\) state.
As further evidence in support of our model, we also scanned the predicted locations of quasibound vibrational levels of the \(B\,^{1}\Pi_{u}\) state, which corresponds to the Hund’s case (c) \(3\,(1_{u})\) state. Much like the \(1\,^{3}\Pi_{g}\) state, the \(B\) state has an intermediate-range barrier (17.3 \(a_{0}\) [30]) and is repulsive at long range, causing vibrational states to be confined to short range. Levels of this state were accurately measured by Amiot and Vergès using optical-optical double resonance and Fourier-transform spectroscopy [30, 31]. If our model is accurate, however, any photoassociation to this state must, by selection rules, originate from free atoms of _gerade_ symmetry, and therefore must come from the \(X\,^{1}\Sigma_{g}^{+}\) state. This state has a very short-range inner wall, and does not give highly-enhanced FCFs for excitation to higher bound states. The relatively larger FCFs high in the \(B\) state come predominantly from the outer turning points at the potential barrier, where the \(X\) state wavefunction is beginning to grow in amplitude.
The FCFs for transitions to \(J^{\prime}=1\) levels of the \(B\,^{1}\Pi_{u}\) state from \(v^{\prime\prime}=122\), \(J^{\prime\prime}=0\) of the \(X\,^{1}\Sigma_{g}^{+}\) state are shown in Figure 3. The potential used to calculate these FCFs was obtained from the “inverted perturbation approach” (IPA) potential in [30], with the top of the barrier and repulsive wall from the _ab initio_ potential in [1]. The _ab initio_ barrier was translated to match the energy and slope of the IPA potential, and the vibrational eigenstates were shown to match experimental values closely. No attempt was made to reproduce the linewidths observed in high rotational levels of \(v^{\prime}=66\) and 67. Even the largest FCF to this state is \(\sim 1\times 10^{-7}\), and the vast majority of FCFs are \(<1\times 10^{-13}\). These are well below the FCFs of levels we are able to detect based on the \(1\,^{3}\Pi_{g}\) state data above. FCFs can be directly compared to make this determination because the transition dipole moments are so similar. We discuss our detection sensitivity below.
<figure><img src="content_image/1602.07753/x3.png"><figcaption>Figure 3: (Color online) FCFs for excitation to the B1Πu state from the X1Σ+gstate. Although the last two levels have dramatically increasing FCFs, itshould be noted that, since they are quasibound, this transition strength isspread over a larger energy range. Widths of low-J levels were notexperimentally observed, although some high-J widths are reported [30].</figcaption></figure>
To briefly summarize, the \(B\) state has a short-range potential well that is amenable to our short-range PA efficiency calculation technique. Due to symmetry considerations, only atom pairs of singlet-_gerade_ character may be excited to this state. As the \(X\) state’s near-zero-energy inner turning point is at shorter range than the classically allowed region, very poor PA efficiency is predicted. In fact, the dominant contribution to the largest FCFs for the \(B\) state come from the outer turning points, as in traditional long-range PA, although these turning points are at the intermediate-range potential barrier.
We have measured the statistics of our laser scan data in the vicinity of the \(v^{\prime}=65\) level of the \(B\) state. Our scans have noise with an average standard deviation of 0.18 ions per shot across several scans. This is easily capable of \(5\sigma\) detection of a single ion per laser shot, and several stray ions are indeed observed within the correct boxcar gate during the scan. These features are not repeatable, and thus cannot be spectroscopic features. Additionally, they show an exponential drop-off characteristic of our real-time boxcar shot averaging rate (typically a 10-shot rolling exponential average). Other than these, no lines are detected near the predicted \(v^{\prime}=65\) energy.
In a similar scan, the \(1\,^{3}\Pi_{g,\,\Omega=1}\), \(v^{\prime}=8\) level is detected. Between lines, the scan background shows a standard deviation of 0.33 ions per shot, comparable to the average mentioned above. The peak line height is 9.18 ions above the baseline. Based on the FCF of \(9.48\times 10^{-6}\) for this level, we are sensitive to FCFs as small as \(9.3\times 10^{-7}\) with average noise levels. Thus a non-detection of \(B\) state levels is consistent with our FCF calculation.
## 4 Conclusion
We have presented a model of short-range photoassociation in an alkali dimer that is both conceptually and computationally simple. It applies when the square of the excited-state target wavefunction has an appreciable local maximum near the internuclear distance of the scattering state’s short-range turning point. In this model, short-range excitation is proportional to the FCF of a transition from the near-zero-energy continuum of the lowest triplet state (or the highest bound vibrational level, which we have shown to produce nearly-identical results), so long as the transition is allowed in the appropriate Hund’s case. Generally, the correct coupling description is Hund’s case (a) or (c), with case (c) likely being more important in heavier alkalis with strong spin-orbit coupling.
We have presented experimental data in \({}^{85}\)Rb\({}_{2}\) using detection via single-photon ionization that should accurately reflect molecule production regardless of vibrational level. The FCF predictions from our model show quite reasonable agreement with the observed \(1\,^{3}\Pi_{g,\,\Omega=1}\) state lines, and correctly predict that lines from the \(B\,^{1}\Pi_{u}\) state should not be observed.
The concepts presented here were initially stimulated by [17] and decades of conversations of William C. Stwalley with Arthur Allison and Alex Dalgarno. This research was funded with support from the National Science Foundation grant numbers PHY-1208317 and PHY-1506244 and Air Force Office of Scientific Research grant number FA9550-09-1-0588.
## References
* [1] Bellos M A, Rahmlow D, Carollo R, Banerjee J, Dulieu O, Gerdes A, Eyler E E, Gould P L and Stwalley W C 2011 _Phys. Chem. Chem. Phys._**13**(42) 18880–18886 URL http://dx.doi.org/10.1039/C1CP21383K
* [2] Bellos M A, Carollo R, Rahmlow D, Banerjee J, Eyler E E, Gould P L and Stwalley W C 2012 _Phys. Rev. A_**86**(3) 033407 URL http://link.aps.org/doi/10.1103/PhysRevA.86.033407
* [3] Carollo R, Bellos M A, Rahmlow D, Banerjee J, Eyler E E, Gould P L and Stwalley W C 2013 _Phys. Rev. A_**87**(2) 022505 URL http://link.aps.org/doi/10.1103/PhysRevA.87.022505
* [4] Le Roy R J, Dattani N S, Coxon J A, Ross A J, Crozet P and Linton C 2009 _The Journal of Chemical Physics_**131** 204309 URL http://scitation.aip.org/content/aip/journal/jcp/131/20/10.1063/1.3264688
* [5] Marinescu M and Sadeghpour H R 1999 _Phys. Rev. A_**59**(1) 390–404 URL http://link.aps.org/doi/10.1103/PhysRevA.59.390
* [6] Pillet P, Crubellier A, Bleton A, Dulieu O, Nosbaum P, Mourachko I and Masnou-Seeuws F 1997 _Journal of Physics B: Atomic, Molecular and Optical Physics_**30** 2801 URL http://stacks.iop.org/0953-4075/30/i=12/a=010
* [7] Deiglmayr J, Grochola A, Repp M, Mörtlbauer K, Glück C, Lange J, Dulieu O, Wester R and Weidemüller M 2008 _Phys. Rev. Lett._**101**(13) 133004 URL http://link.aps.org/doi/10.1103/PhysRevLett.101.133004
* [8] Gabbanini C and Dulieu O 2011 _Phys. Chem. Chem. Phys._**13**(42) 18905–18909 URL http://dx.doi.org/10.1039/C1CP21497G
* [9] Ji Z, Zhang H, Wu J, Yuan J, Yang Y, Zhao Y, Ma J, Wang L, Xiao L and Jia S 2012 _Phys. Rev. A_**85**(1) 013401 URL http://link.aps.org/doi/10.1103/PhysRevA.85.013401
* [10] Bruzewicz C D, Gustavsson M, Shimasaki T and DeMille D 2014 _New Journal of Physics_**16** 023018 URL http://stacks.iop.org/1367-2630/16/i=2/a=023018
* [11] Zabawa P, Wakim A, Haruza M and Bigelow N P 2011 _Phys. Rev. A_**84**(6) 061401 URL http://link.aps.org/doi/10.1103/PhysRevA.84.061401
* [12] Pechkis H K, Wang D, Huang Y, Eyler E E, Gould P L, , Stwalley W C and Koch C P 2007 _Phys. Rev. A_**76** 022504 URL http://link.aps.org/doi/10.1103/PhysRevA.76.022504
* [13] Banerjee J, Rahmlow D, Carollo R, Bellos M, Eyler E E, Gould P L and Stwalley W C 2012 _Phys. Rev. A_**86**(5) 053428 URL http://link.aps.org/doi/10.1103/PhysRevA.86.053428
* [14] Stwalley W C, Banerjee J, Bellos M, Carollo R, Recore M and Mastroianni M 2010 _J. Phys. Chem. A_**114** 81–86 URL http://pubs.acs.org/doi/abs/10.1021/jp901803f
* [15] Pellegrini P, Gacesa M and Côté R 2008 _Phys. Rev. Lett._**101**(5) 053201 URL http://link.aps.org/doi/10.1103/PhysRevLett.101.053201
* [16] Krzyzewski S P, Akin T G, Dizikes J, Morrison M A and Abraham E R I 2015 _Phys. Rev. A_**92**(6) 062714 URL http://link.aps.org/doi/10.1103/PhysRevA.92.062714
* [17] Allison A C and Dalgarno A 1971 _The Journal of Chemical Physics_**55** 4342–4344 URL http://scitation.aip.org/content/aip/journal/jcp/55/9/10.1063/1.1676757
* [18] Smith A L 1971 _The Journal of Chemical Physics_**55** 4344–4350 URL http://scitation.aip.org/content/aip/journal/jcp/55/9/10.1063/1.1676758
* [19] Allison A C and Stwalley W C 1973 _The Journal of Chemical Physics_**58** 5187–5188 URL http://scitation.aip.org/content/aip/journal/jcp/58/11/10.1063/1.1679123
* [20] Le Roy R J 2014 _LEVEL 8.2: A Computer Program for Solving the Radial Schrödinger Equation for Bound and Quasibound Levels_ University of Waterloo Chemical Physics Research Report CP-668 see http://scienide2.uwaterloo.ca/%7erleroy/level/ URL http://scienide2.uwaterloo.ca/%7erleroy/level/
* [21] Stwalley W C, Bellos M, Carollo R, Banerjee J and Bermudez M 2012 _Molecular Physics_**110** 1739–1755 URL http://www.tandfonline.com/doi/abs/10.1080/00268976.2012.676680
* [22] Strauss C, Takekoshi T, Lang F, Winkler K, Grimm R, Hecker Denschlag J and Tiemann E 2010 _Phys. Rev. A_**82**(5) 052514 URL http://link.aps.org/doi/10.1103/PhysRevA.82.052514
* [23] Kallush S and Kosloff R 2006 _Chemical Physics Letters_**433** 221 – 227 ISSN 0009-2614 URL http://www.sciencedirect.com/science/article/pii/S0009261406016940
* [24] Carini J L, Kallush S, Kosloff R and Gould P L 2015 _Phys. Rev. Lett._**115**(17) 173003 URL http://link.aps.org/doi/10.1103/PhysRevLett.115.173003
* [25] Carini J L, Kallush S, Kosloff R and Gould P L 2016 _The Journal of Physical Chemistry A_**120** 3032–3041 URL http://dx.doi.org/10.1021/acs.jpca.5b10088
* [26] Allouche A R and Aubert-Frécon M 2012 _The Journal of Chemical Physics_**136** 114302 URL http://scitation.aip.org/content/aip/journal/jcp/136/11/10.1063/1.3694014
* [27] Roberts J L, Claussen N R, Burke J P, Greene C H, Cornell E A and Wieman C E 1998 _Phys. Rev. Lett._**81**(23) 5109–5112 URL http://link.aps.org/doi/10.1103/PhysRevLett.81.5109
* [28] van Kempen E G M, Kokkelmans S J J M F, Heinzen D J and Verhaar B J 2002 _Phys. Rev. Lett._**88**(9) 093201 URL http://link.aps.org/doi/10.1103/PhysRevLett.88.093201
* [29] Bellos M A, Carollo R, Banerjee J, Ascoli M, Allouche A R, Eyler E E, Gould P L and Stwalley W C 2013 _Phys. Rev. A_**87**(1) 012508 URL http://link.aps.org/doi/10.1103/PhysRevA.87.012508
* [30] Amiot C and Vergès J 1997 _Chemical Physics Letters_**274** 91 – 98 ISSN 0009-2614 URL http://www.sciencedirect.com/science/article/pii/S0009261497006349
* [31] Amiot C 1990 _The Journal of Chemical Physics_**93** 8591–8604 URL http://scitation.aip.org/content/aip/journal/jcp/93/12/10.1063/1.459246
|
0712.2794 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 42312,
"num_imgs": 0,
"llama3_tokens_count": 15064
} | [] | Some Progress in Conformal Geometry
Some Progress in Conformal Geometry
Sun-Yung A. CHANG \({}^{\dagger}\), Jie QING \({}^{\ddagger}\) and Paul YANG \({}^{\dagger}\)
S.-Y.A. Chang, J. Qing and P. Yang
\({}^{\dagger}\) Department of Mathematics, Princeton University, Princeton, NJ 08540, USA chang@math.princeton.edu, yang@math.princeton.edu
\({}^{\ddagger}\) Department of Mathematics, University of California, Santa Cruz,
\(\phantom{{}^{\ddagger}}\) Santa Cruz, CA 95064, USA qing@ucsc.edu
Received August 30, 2007, in final form December 07, 2007; Published online December 17, 2007
This is a survey paper of our current research on the theory of partial differential equations in conformal geometry. Our intention is to describe some of our current works in a rather brief and expository fashion. We are not giving a comprehensive survey on the subject and references cited here are not intended to be complete. We introduce a bubble tree structure to study the degeneration of a class of Yamabe metrics on Bach flat manifolds satisfying some global conformal bounds on compact manifolds of dimension 4. As applications, we establish a gap theorem, a finiteness theorem for diffeomorphism type for this class, and diameter bound of the \(\sigma_{2}\)-metrics in a class of conformal 4-manifolds. For conformally compact Einstein metrics we introduce an eigenfunction compactification. As a consequence we obtain some topological constraints in terms of renormalized volumes.
Bach flat metrics; bubble tree structure; degeneration of metrics; conformally compact; Einstein; renormalized volume
53A30; 53C20; 35J60
_Dedicated to the memory of Thomas Branson_
## 1 Conformal gap and finiteness theorem for a class of closed 4-manifolds
### Introduction
Suppose that \((M^{4},g)\) is a closed 4-manifold. It follows from the positive mass theorem that, for a 4-manifold with positive Yamabe constant,
\[\int_{M}\sigma_{2}dv\leq 16\pi^{2}\]
and equality holds if and only if \((M^{4},g)\) is conformally equivalent to the standard 4-sphere, where
\[\sigma_{2}[g]=\frac{1}{24}R^{2}-\frac{1}{2}|E|^{2},\]
\(R\) is the scalar curvature of \(g\) and \(E\) is the traceless Ricci curvature of \(g\). This is an interesting fact in conformal geometry because the above integral is a conformal invariant like the Yamabe constant.
One may ask, whether there is a constant \(\epsilon_{0}>0\) such that a closed 4-manifold \(M^{4}\) has to be diffeomorphic to \(S^{4}\) if it admits a metric \(g\) with positive Yamabe constant and
\[\int_{M}\sigma_{2}[g]dv_{g}\geq(1-\epsilon)16\pi^{2}\]
for some \(\epsilon<\epsilon_{0}\)? Notice that here the Yamabe invariant for such \([g]\) is automatically close to that for the round 4-sphere. There is an analogous gap theorem of Bray and Neves for Yamabe invariant in dimension 3 [5]. One cannot expect the Yamabe invariant alone to isolate the sphere, and it is more plausible to consider the integral of \(\sigma_{2}\). We will answer the question affirmatively in the class of Bach flat 4-manifolds.
Recall that Riemann curvature tensor decomposes into
\[R_{ijkl}=W_{ijkl}+(A_{ik}g_{jl}-A_{il}g_{jk}-A_{jk}g_{il}+A_{jl}g_{ik}),\]
in dimension 4, where \(\textsl{W}_{ijkl}\) is the Weyl curvature,
\[A_{ij}=\frac{1}{2}\left(R_{ij}-\frac{1}{6}Rg_{ij}\right)\]
is Weyl–Schouten curvature tensor and \(R_{ij}\) is the Ricci curvature tensor. Also recall that the Bach tensor is
\[B_{ij}=W_{kijl,lk}+\frac{1}{2}R_{kl}W_{kijl}.\]
We say that a metric \(g\) is Bach flat if \(B_{ij}=0\). Bach flat metrics are critical metrics for the functional \(\int_{M}|W|^{2}dv\). Bach flatness is conformally invariant in dimension 4. It follows from Chern–Gauss–Bonnet,
\[8\pi^{2}\chi(M^{4})=\int_{M}(\sigma_{2}+|W|^{2})dv,\]
that \(\int_{M}\sigma_{2}dv\) is conformally invariant.
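To spell out the invariance: in dimension 4 the Weyl density is itself pointwise conformally invariant, since for \(\tilde{g}=e^{2w}g\)
\[|W|^{2}_{\tilde{g}}\,dv_{\tilde{g}}=e^{-4w}|W|^{2}_{g}\cdot e^{4w}dv_{g}=|W|^{2}_{g}\,dv_{g},\]
so subtracting \(\int_{M}|W|^{2}dv\) from the topological quantity \(8\pi^{2}\chi(M^{4})\) leaves \(\int_{M}\sigma_{2}dv\) unchanged within a conformal class.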
The gap theorem is as follows:
Suppose that \((M^{4},[g])\) is a Bach flat closed \(4\)-manifold with positive Yamabe constant and that
\[\int_{M}(|W|^{2}dv)[g]\leq\Lambda_{0}\]
for some fixed positive number \(\Lambda_{0}\). Then there is a positive number \(\epsilon_{0}>0\) such that, if
\[\int_{M}\sigma_{2}[g]dv_{g}\geq(1-\epsilon)16\pi^{2}\]
holds for some constant \(\epsilon<\epsilon_{0}\), then \((M^{4},[g])\) is conformally equivalent to the standard \(4\)-sphere.
Our approach is based on the recent work on the compactness of Bach flat metrics on 4-manifolds by Tian and Viaclovsky [15, 16], and by Anderson [2]. Indeed our work relies on a more precise understanding of the bubbling process near points of curvature concentration. For that purpose we develop a bubble tree structure in a sequence of metrics that describes precisely the concentration of curvature. Our method of developing the bubble tree structure is inspired by the work of Anderson and Cheeger [3] on the bubble tree configurations of the degenerations of metrics of bounded Ricci curvature. Our construction is modeled after this work but differs in that our bubble tree is built from the bubbles at points with the smallest scale of concentration up to bubbles with larger scales, while the bubble tree in [3] is constructed from bubbles of large scale down to bubbles with smaller scales. The inductive construction of our bubble tree is modeled on earlier work [4, 12, 14] on the study of concentration of energies in harmonic maps and the scalar curvature equations.
As a consequence of the bubble tree construction we are able to obtain the following finite diffeomorphism theorem:
Suppose that **A** is a collection of Bach flat Riemannian manifolds \((M^{4},g)\) with positive Yamabe constant, satisfying
\[\int_{M}(|W|^{2}dv)[g]\leq\Lambda_{0},\]
for some fixed positive number \(\Lambda_{0}\), and
\[\int_{M}(\sigma_{2}dv)[g]\geq\sigma_{0},\]
for some fixed positive number \(\sigma_{0}\). Then there are only finite many diffeomorphism types in **A**.
It is known that in each conformal class of metrics belonging to the family **A**, there is a metric \(\bar{g}=e^{2w}g\) such that \(\sigma_{2}(A_{\bar{g}})=1\), which we shall call the \(\sigma_{2}\) metric. The bubble tree structure in the degeneration of Yamabe metrics in **A** is also helpful to understand the behavior of the \(\sigma_{2}\)-metrics in **A**. For example:
For each conformal class \(\left[g_{0}\right]\in{\textbf{A}}\), the conformal metric \(g=e^{2w}g_{0}\) satisfying the equation \(\sigma_{2}(g)=1\) has uniformly bounded diameter.
The detailed version of this work has appeared in our paper [6].
### The neck theorem
The main tool we need to develop the bubble tree picture is the neck theorem, which should be compared with the neck theorem in the work of Anderson and Cheeger [3]. Due to the lack of point-wise bounds on Ricci curvature, our version of the neck theorem has a weaker conclusion. But it is sufficient to allow us to construct the bubble tree at each point of curvature concentration.
Let \((M^{4},g)\) be a Riemannian manifold. For a point \(p\in M\), denote by \(B_{r}(p)\) the geodesic ball with radius \(r\) centered at \(p\), \(S_{r}(p)\) the geodesic sphere of radius \(r\) centered at \(p\). Consider the geodesic annulus centered at \(p\):
\[\bar{A}_{r_{1},r_{2}}(p)=\{q\in M:r_{1}\leq\text{dist}(q,p)\leq r_{2}\}.\]
In general, \(\bar{A}_{r_{1},r_{2}}(p)\) may have more than one connected components. We will consider any one component
\[A_{r_{1},r_{2}}(p)\subset\bar{A}_{r_{1},r_{2}}(p)\]
that meets the geodesic sphere of radius \(r_{2}\):
\[A_{r_{1},r_{2}}(p)\bigcap S_{r_{2}}(p)\neq\varnothing.\]
Let \(H^{3}(S_{r}(p))\) be the 3D-Hausdorff measure of the geodesic sphere \(S_{r}(p)\).
Suppose \((M^{4},g)\) is a Bach flat and simply connected \(4\)-manifold with a Yamabe metric of positive Yamabe constant. Let \(p\in M\), \(\alpha\in(0,1)\), \(\epsilon>0\), \(v_{1}>0\), and \(a<\text{\rm dist}(p,\partial M)\). Then there exist positive numbers \(\delta_{0}\), \(c_{2}\), \(n\) depending on \(\epsilon\), \(\alpha\), \(C_{s}\), \(v_{1}\), \(a\) such that the following holds. Let \(A_{r_{1},r_{2}}(p)\) be a connected component of the geodesic annulus in \(M\) such that
\[r_{2}\leq c_{2}a,\qquad r_{1}\leq\delta_{0}r_{2},\]
\[H^{3}(S_{r}(p))\leq v_{1}r^{3},\quad\forall\;r\in[r_{1},100r_{1}],\]
and
\[\int_{A_{r_{1},r_{2}}(p)}|Rm|^{2}dv\leq\delta_{0}.\]
Then \(A_{r_{1},r_{2}}(p)\) is the only such component. In addition, for the only component
\[A_{(\delta_{0}^{-\frac{1}{4}}-\epsilon)r_{1},(\delta_{0}^{\frac{1}{4}}+ \epsilon)r_{2}}(p),\]
which intersects with \(S_{(\delta_{0}^{\frac{1}{4}}+\epsilon)r_{2}}(p)\), there exist some \(\Gamma\subset O(4)\), acting freely on \(S^{3}\), with \(|\Gamma|\leq n\), and a quasi isometry \(\Psi\), with
\[A_{(\delta_{0}^{-\frac{1}{4}}+\epsilon)r_{1},(\delta_{0}^{\frac{1}{4}}- \epsilon)r_{2}}(p)\subset\Psi(C_{\delta_{0}^{-\frac{1}{4}}r_{1},\delta_{0}^{ \frac{1}{4}}r_{2}}(S^{3}/\Gamma))\subset A_{(\delta_{0}^{-\frac{1}{4}}- \epsilon)r_{1},(\delta_{0}^{\frac{1}{4}}+\epsilon)r_{2}}(p)\]
such that for all \(C_{\frac{1}{2}r,r}(S^{3}/\Gamma)\subset C_{\delta_{0}^{-\frac{1}{4}}r_{1}, \delta_{0}^{\frac{1}{4}}r_{2}}(S^{3}/\Gamma)\), in the cone coordinates, one has
\[|(\Psi^{*}(r^{-2}g))_{ij}-\delta_{ij}|_{C^{1,\alpha}}\leq\epsilon.\]
The first step in the proof is to use the Sobolev inequality to show the uniqueness of the connected annulus \(A_{r_{1},r_{2}}(p)\). The second step is to establish the growth of volume of geodesic spheres
\[H^{3}(S_{r}(p))\leq Cr^{3}\]
for all \(r\in[r_{1},\frac{1}{2}r_{2}]\). Here we rely on the work of Tian and Viaclovsky [15, 16], where they analyzed the end structure of Bach-flat, scalar flat manifolds with finite \(L^{2}\) total curvature. The last step is to use the Gromov and Cheeger compactness argument as in the work of Anderson and Cheeger [3] to get the cone structure of the neck.
### Bubble tree construction
In this section we attempt to give a clear picture of what happens at curvature concentration points. We will detect and extract bubbles by locating the centers and scales of curvature concentration.
We will assume here that \((M_{i},g_{i})\) are Bach flat 4-manifolds with positive scalar curvature Yamabe metrics, vanishing first homology, and finite \(L^{2}\) total curvature. Choose \(\delta\) small enough according to the \(\epsilon\)-estimates and the neck theorem in the previous section. Suppose that \(X_{i}\subset M_{i}\) contains a geodesic ball of a fixed radius \(r_{0}\) and
\[\int_{T_{\eta_{0}}(\partial X_{i})}|Rm^{i}|^{2}dv^{i}\leq\frac{\delta}{2},\]
where
\[T_{\eta_{0}}(\partial X_{i})=\{p\in M_{i}:\text{dist}(p,\partial X_{i})<\eta_{ 0}\},\]
for some fixed positive number \(4\eta_{0}<r_{0}\). Define, for \(p\in X_{i}\),
\[s_{i}^{1}(p)=r\qquad\text{such that}\quad\int_{B^{i}_{r}(p)}|Rm^{i}|^{2}dv^{i} =\frac{\delta}{2}.\]
Let
\[p_{i}^{1}=p\qquad\text{such that}\quad s^{1}_{i}(p)=\inf_{B^{i}_{t_{0}}(p_{i}) }s_{i}^{1}(p).\]
We may assume \(\lambda_{i}^{1}=s_{i}^{1}(p_{i}^{1})\to 0\), for otherwise there would be no curvature concentration in \(X_{i}\). We then conclude that \((M_{i},(\lambda_{i}^{1})^{-2}g_{i},p_{i}^{1})\) converges to \((M_{\infty}^{1},g_{\infty}^{1},p_{\infty}^{1})\), which is a Bach flat, scalar flat, complete 4-manifold satisfying the Sobolev inequality, having finite \(L^{2}\) total curvature, and one single end.
We call a Bach flat, scalar flat, complete 4-manifold with the Sobolev inequality, finite \(L^{2}\) total curvature, and a single ALE end a _leaf bubble_, while we will call such space with finitely many isolated irreducible orbifold points an _intermediate bubble_.
Now, we define, for \(p\in X_{i}\setminus B^{i}_{K^{1}\lambda_{i}^{1}}(p_{i}^{1})\),
\[s_{i}^{2}(p)=r\]
such that
\[\int_{B^{i}_{r}(p)\setminus B^{i}_{K^{1}\lambda_{i}^{1}}(p_{i}^{1})}|Rm^{i}|^{ 2}dv^{i}=\frac{\delta}{2}.\]
Let
\[p_{i}^{2}=p\]
such that
\[s^{2}_{i}(p)=\inf_{B^{i}_{r}(p)\setminus B^{i}_{K^{1}\lambda_{i}^{1}}(p_{i}^{1 })}s_{i}^{2}(p).\]
Again let \(\lambda_{i}^{2}=s_{i}^{2}(p_{i}^{2})\to 0\). Otherwise there would be no more curvature concentration. Then
\[\frac{\lambda_{i}^{2}}{\lambda_{i}^{1}}+\frac{\text{\rm dist}(p_{i}^{1},p_{i}^ {2})}{\lambda_{i}^{1}}\to\infty.\]
There are two possibilities:
\[\text{Case 1.}\qquad\frac{\text{dist}(p_{i}^{1},p_{i}^{2})}{ \lambda_{i}^{2}}\to\infty;\]
\[\text{Case 2.}\qquad\frac{\text{dist}(p_{i}^{1},p_{i}^{2})}{ \lambda_{i}^{2}}\leq M^{1}.\]
In Case 1, we certainly also have
\[\frac{\text{dist}(p_{i}^{1},p_{i}^{2})}{\lambda_{i}^{1}}\to\infty.\]
Therefore, in the convergence of the sequence \((M_{i},(\lambda_{i}^{2})^{-2}g_{i},p_{i}^{2})\) the concentration which produces the bubble \((M_{\infty}^{1},g_{\infty}^{1})\) eventually escapes to infinity of \(M_{\infty}^{2}\) and hence is not visible to the bubble \((M_{\infty}^{2},g_{\infty}^{2})\), likewise, in the converging sequence \((M_{i},(\lambda_{i}^{1})^{-2}g_{i},p_{i}^{1})\) one does not see the concentration which produces \((M_{\infty}^{2},g_{\infty}^{2})\). There are at most finite number of such leaf bubbles.
We say two bubbles \((M_{\infty}^{j_{1}},g_{\infty}^{j_{1}})\) and \((M_{\infty}^{j_{2}},g_{\infty}^{j_{2}})\) associated with \((p_{i}^{j_{1}},\lambda_{i}^{j_{1}})\) and \((p_{i}^{j_{2}},\lambda_{i}^{j_{2}})\) are _separable_ if
\[\frac{\text{dist}(p_{i}^{j_{1}},p_{i}^{j_{2}})}{\lambda_{i}^{j_{1}}}\to\infty \qquad\text{and}\qquad\frac{\text{dist}(p_{i}^{j_{1}},p_{i}^{j_{2}})}{\lambda_ {i}^{j_{2}}}\to\infty.\]
In Case 2, one starts to trace intermediate bubbles, which will be called parents of some bubbles. We would like to emphasize a very important point here. One needs the neck theorem to take limits in Gromov–Hausdorff topology to produce the intermediate bubbles. The neck theorem is used to prove that the limit space has only isolated point singularities, which are then proven to be orbifold points.
Suppose that there are several separable bubbles \(\{(M_{\infty}^{j},g_{\infty}^{j})\}_{j\in J}\) associated with \(\{(p_{i}^{j},\lambda_{i}^{j})\}_{j\in J}\). Suppose that there is a concentration detected as \((p_{i}^{k},\lambda_{i}^{k})\) after \(\{(p_{i}^{j},\lambda_{i}^{j})\}_{j\in J}\) such that
\[\frac{\text{\rm dist}(p_{i}^{k},p_{i}^{j})}{\lambda_{i}^{k}}\leq M^{j},\]
therefore
\[\frac{\lambda_{i}^{k}}{\lambda_{i}^{j}}\to\infty\]
for each \(j\in J\). In addition, suppose that \(\{(p_{i}^{j},\lambda_{i}^{j})\}_{j\in J}\) is the maximal collection of such. Then
\((M_{i},(\lambda_{i}^{k})^{-2}g_{i},p_{i}^{k})\) converges in Gromov–Hausdorff topology to an intermediate bubble \((M_{\infty}^{k},g_{\infty}^{k})\). \((M_{\infty}^{k},g_{\infty}^{k})\) is either a parent or a grandparent of all the given bubbles \(\{(M_{\infty}^{j},g_{\infty}^{j})\}_{j\in J}\).
We remark that it is necessary to create some strange intermediate bubbles to handle the inseparable bubbles. This situation does not arise in the degeneration of Einstein metrics. In that case there is a gap theorem for Ricci flat complete orbifolds and there is no curvature concentration at the smooth points due to a simple volume comparison argument, both of which are not yet available in our current situation. We will call those intermediate bubbles _exotic bubbles_.
A _bubble tree_ \(T\) is defined to be a tree whose vertices are bubbles and whose edges are necks from the neck theorem. At each vertex \((M_{\infty}^{j},g_{\infty}^{j})\), its ALE end is connected, via a neck, to its parent towards the _root bubble_ of \(T\), while at finitely many isolated possible orbifold points of \((M_{\infty}^{j},g_{\infty}^{j})\), it is connected, via necks, to its children towards the leaf bubbles of \(T\). We say two bubble trees \(T_{1}\) and \(T_{2}\) are _separable_ if their root bubbles are separable.
To finish, we iterate the process of extracting bubbles; the construction has to terminate after finitely many steps. In summary we have:
Suppose that \((M_{i},g_{i})\) are Bach flat \(4\)-manifolds with positive scalar curvature Yamabe metrics, vanishing first homology, and finite \(L^{2}\) total curvature. Then \((M_{i},g_{i})\) converges to a Bach-flat \(4\)-manifold \((M_{\infty},g_{\infty})\) with finitely many orbifold singularities \(S\). The convergence is strong in \(C^{\infty}\) away from a finite number of points \(B\supset S\). At each point \(b\) in \(B\) there is a bubble tree attached to \(b\).
## 2 Conformally compact Einstein manifolds
### Conformally compact Einstein manifolds
Suppose that \(X^{n+1}\) is a smooth manifold of dimension \(n+1\) with smooth boundary \(\partial X=M^{n}\). A defining function for the boundary \(M^{n}\) in \(X^{n+1}\) is a smooth function \(x\) on \(\bar{X}^{n+1}\) such that
\[\left\{\begin{array}[]{ll}x>0&\text{in $X$};\\ x=0&\text{on $M$};\\ dx\neq 0&\text{on $M$}.\end{array}\right.\]
A Riemannian metric \(g\) on \(X^{n+1}\) is conformally compact if \((\bar{X}^{n+1},x^{2}g)\) is a compact Riemannian manifold with boundary \(M^{n}\) for a defining function \(x\). Conformally compact manifold \((X^{n+1},g)\) carries a well-defined conformal structure on the boundary \(M^{n}\), where each metric \(\hat{g}\) in the class is the restriction of \(\bar{g}=x^{2}g\) to the boundary \(M^{n}\) for a defining function \(x\). We call \((M^{n},[\hat{g}])\) the conformal infinity of the conformally compact manifold \((X^{n+1},g)\). A short computation yields that, given a defining function \(x\),
\[R_{ijkl}[g]=|dx|^{2}_{\bar{g}}(g_{ik}g_{jl}-g_{il}g_{jk})+O(x^{3})\]
in a coordinate \((0,\epsilon)\times M^{n}\subset X^{n+1}\). Therefore, if we assume that \(g\) is also asymptotically hyperbolic, then
\[|dx|^{2}_{\bar{g}}|_{M}=1\]
for any defining function \(x\). If \((X^{n+1},g)\) is a conformally compact manifold and \(\text{Ric}[g]=-ng\), then we call \((X^{n+1},g)\) a conformally compact Einstein manifold.
Given a conformally compact, asymptotically hyperbolic manifold \((X^{n+1},g)\) and a representative \(\hat{g}\) in \([\hat{g}]\) on the conformal infinity \(M^{n}\), there is a uniquely determined defining function \(x\) such that, on \(M\times(0,\epsilon)\) in \(X\), \(g\) has the normal form
\[g=x^{-2}(dx^{2}+g_{x}),\] (1)
where \(g_{x}\) is a 1-parameter family of metrics on \(M\). This is because
Suppose that \((X^{n+1},g)\) is a conformally compact, asymptotically hyperbolic manifold with the conformal infinity \((M,[\hat{g}])\). Then, for any \(\hat{g}\in[\hat{g}]\), there exists a unique defining function \(x\) such that
\[|dx|^{2}_{x^{2}g}=1\]
in a neighborhood of the boundary \([0,\epsilon)\times M\) and
\[x^{2}g|_{M}=\hat{g}.\]
Given a conformally compact Einstein manifold \((X^{n+1},g)\), in the local product coordinates \((0,\epsilon)\times M^{n}\) near the boundary where the metric takes the normal form (1), the Einstein equations split and display some similarity to a second order ordinary differential equation with a regular singular point.
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold with the conformal infinity \((M^{n},[\hat{g}])\) and that \(x\) is the defining function associated with a metric \(\hat{g}\in[\hat{g}]\). Then
\[g_{x}=\hat{g}+g^{(2)}x^{2}+(\text{even powers of $x$})+g^{(n-1)} x^{n-1}+g^{(n)}x^{n}+\cdots,\]
when \(n\) is odd, and
\[g_{x}=\hat{g}+g^{(2)}x^{2}+(\text{even powers of $x$})+g^{(n)}x^ {n}+hx^{n}\log x+\cdots,\]
when \(n\) is even, where:
a) \(g^{(2i)}\) are determined by \(\hat{g}\) for \(2i<n\);
b) \(g^{(n)}\) is traceless when \(n\) is odd;
c) the trace part of \(g^{(n)}\) is determined by \(\hat{g}\) and \(h\) is traceless and determined by \(\hat{g}\);
d) the traceless part of \(g^{(n)}\) is divergence free.
Readers are referred to [10] for more details about the above two lemmas.
### Examples of conformally compact Einstein manifolds
Let us look at some examples.
_a) The hyperbolic spaces_
\[\left(R^{n+1},\frac{(d|x|)^{2}}{1+|x|^{2}}+|x|^{2}d\sigma\right),\]
where \(d\sigma\) is the standard metric on the \(n\)-sphere. We may write
\[g_{H}=s^{-2}\left(ds^{2}+\left(1-\frac{s^{2}}{4}\right)^{2}d\sigma\right),\]
where
\[s=\frac{2}{\sqrt{1+|x|^{2}}+|x|}\]
is a defining function. Hence the conformal infinity is the standard round sphere \((S^{n},d\sigma)\).
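One can verify the defining function directly: since \(\big(\sqrt{1+|x|^{2}}+|x|\big)^{-1}=\sqrt{1+|x|^{2}}-|x|\), we get \(s=2\big(\sqrt{1+|x|^{2}}-|x|\big)\) and therefore
\[\frac{1}{s}-\frac{s}{4}=\frac{\sqrt{1+|x|^{2}}+|x|}{2}-\frac{\sqrt{1+|x|^{2}}-|x|}{2}=|x|,\]
which recovers the radial coordinate \(|x|=(1-\frac{s^{2}}{4})/s\) used in the normal form above.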
_b) The hyperbolic manifolds_
\[\left(S^{1}(\lambda)\times R^{n},(1+r^{2})dt^{2}+\frac{dr^{2}}{1+r^{2}}+r^{2}d \sigma\right).\]
Let
\[r=\frac{1-\frac{s^{2}}{4}}{s}=\sinh\log\frac{2}{s}\]
for a defining function \(s\). Then
\[g_{H}^{0}=s^{-2}\left(ds^{2}+\left(1+\frac{s^{2}}{4}\right)^{2}dt^{2}+\left(1- \frac{s^{2}}{4}\right)^{2}d\sigma\right).\]
Thus the conformal infinity is standard \((S^{1}(\lambda)\times S^{n-1},dt^{2}+d\sigma)\).
_c) AdS-Schwarzschild spaces_
\[\big{(}R^{2}\times S^{2},g_{+1}^{m}\big{)},\]
where
\[g_{+1}^{m}=Vdt^{2}+V^{-1}dr^{2}+r^{2}g_{S^{2}},\qquad V=1+r^{2}-\frac{2m}{r},\]
\(m\) is any positive number, \(r\in[r_{h},+\infty)\), \(t\in S^{1}(\lambda)\) and \((\theta,\phi)\in S^{2}\), and \(r_{h}\) is the positive root for \(1+r^{2}-\frac{2m}{r}=0\). In order for the metric to be smooth at each point where \(S^{1}\) collapses we need \(Vdt^{2}+V^{-1}dr^{2}\) to be smooth at \(r=r_{h}\), i.e.
\[V^{\frac{1}{2}}\frac{d(V^{\frac{1}{2}}2\pi\lambda)}{dr}\Big{|}_{r=r_{h}}=2\pi.\]
Note that its conformal infinity is \((S^{1}(\lambda)\times S^{2},[dt^{2}+d\theta^{2}+\sin^{2}\theta d\phi^{2}])\) and \(S^{1}\) collapses at the totally geodesic \(S^{2}\), which is the so-called horizon. Interestingly, \(\lambda\) does not vary monotonically in \(r_{h}\), while \(r_{h}\) monotonically depends on \(m\). In fact, for each \(0<\lambda<1/\sqrt{3}\), there are two different \(m_{1}\) and \(m_{2}\) which share the same \(\lambda\). Thus, for the same conformal infinity \(S^{1}(\lambda)\times S^{2}\) with \(0<\lambda<1/\sqrt{3}\), there are two non-isometric AdS-Schwarzschild spaces with metrics \(g^{+}_{m_{1}}\) and \(g^{+}_{m_{2}}\) on \(R^{2}\times S^{2}\). These are interesting simple examples of non-uniqueness for conformally compact Einstein metrics.
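The smoothness condition can be made explicit. Since \(V(r_{h})=0\) gives \(2m=r_{h}(1+r_{h}^{2})\), the condition above reads \(\pi\lambda V^{\prime}(r_{h})=2\pi\), i.e.
\[\lambda=\frac{2}{V^{\prime}(r_{h})}=\frac{2r_{h}}{3r_{h}^{2}+1},\]
which attains its maximum \(1/\sqrt{3}\) at \(r_{h}=1/\sqrt{3}\); this is the source of the two values of \(r_{h}\) (hence of \(m\)) for each \(0<\lambda<1/\sqrt{3}\).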
_d) AdS-Kerr spaces_
\[\big{(}CP^{2}\setminus\{p\},g_{\alpha}\big{)},\]
where \(p\) is a point on \(CP^{2}\),
\[g_{\alpha}=E_{\alpha}((r^{2}-1)F_{\alpha}^{-1}dr^{2}+(r^{2}-1)^{ -1}F_{\alpha}(dt+\cos\theta d\phi)^{2}+(r^{2}-1)(d\theta^{2}+\sin^{2}\theta d \phi^{2})),\]
\[E_{\alpha}=\frac{2}{3}\frac{\alpha-2}{\alpha^{2}-1},\qquad F_{ \alpha}=(r-\alpha)((r^{3}-6r+3\alpha^{-1})E_{\alpha}+4(r-\alpha^{-1})),\]
\(r\geq\alpha\), \(t\in S^{1}(\lambda)\), and \((\theta,\phi)\in S^{2}\). For the metric to be smooth at the horizon, the totally geodesic \(S^{2}\), we need to require
\[\sqrt{\frac{F}{E(r^{2}-1)}}\frac{d}{dr}\left(2\pi\lambda\sqrt{\frac{EF}{r^{2}- 1}}\right)=2\pi.\]
Here \((t,\theta,\phi)\) are the coordinates for \(S^{3}\) through the Hopf fibration. The conformal infinity is the Berger sphere with the Hopf fibre of length \(\pi E_{\alpha}\) and the \(S^{2}\) of area \(4\pi E_{\alpha}\). For every \(0<\lambda<(2-\sqrt{3})/3\) there are exactly two \(\alpha\), hence two AdS-Kerr metrics \(g_{\alpha}\). It is interesting to note that \((2-\sqrt{3})/3<1\), so the standard \(S^{3}(1)\) is not included in this family.
One may ask, given a conformal manifold \((M^{n},[\hat{g}])\), is there a conformally compact Einstein manifold \((X^{n+1},g)\) such that \((M^{n},[\hat{g}])\) is the conformal infinity? This in general is a difficult open problem. Graham and Lee in [11] showed that for any conformal structure that is a perturbation of the round one on the sphere \(S^{n}\) there exists a conformally compact Einstein metric on the ball \(B^{n+1}\).
### Conformal compactifications
Given a conformally compact Einstein manifold \((X^{n+1},g)\), what is a good conformal compactification? Let us consider the hyperbolic space. The hyperbolic space \((H^{n+1},g_{H})\) is the hyperboloid
\[\big{\{}(t,x)\in R\times R^{n+1}:-t^{2}+|x|^{2}=-1,t>0\big{\}}\]
in the Minkowski space-time \(R^{1,n+1}\). The stereographic projection via the imaginary south pole gives the Poincaré ball model
\[\left(B^{n+1},\left(\frac{2}{1-|y|^{2}}\right)^{2}|dy|^{2}\right)\]
and replacing the \(x\)-hyperplane by \(z\)-hyperplane tangent to the light cone gives the half-space model
\[\left(R^{n+1}_{+},\frac{|dz|^{2}}{z_{n+1}^{2}}\right),\]
where
\[\frac{1+|y|^{2}}{1-|y|^{2}}=t,\qquad\frac{1}{z_{n+1}}=t-x_{n+1}.\]
Therefore
\[(H^{n+1},t^{-2}g_{H})=(S^{n+1}_{+},g_{S^{n+1}}),\]
\[(H^{n+1},(t+1)^{-2}g_{H})=(B^{n+1},|dy|^{2}),\]
\[(H^{n+1},(t-x_{n+1})^{-2}g_{H})=(R^{n+1}_{+},|dz|^{2}).\]
The interesting fact here is that all coordinate functions \(\{t,x_{1},x_{2},\dots,x_{n+1}\}\) of the Minkowski space-time are eigenfunctions on the hyperboloid. Thus positive eigenfunctions on a conformally compact Einstein manifold are expected to be candidates for good conformal compactifications. This is first observed in [13].
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold and that \(x\) is a special defining function associated with a representative \(\hat{g}\in[\hat{g}]\). Then there always exists a unique positive eigenfunction \(u\) such that
\[\Delta u=(n+1)u\qquad\text{in}\ X\]
and
\[u=\frac{1}{x}+\frac{R[\hat{g}]}{4n(n-1)}x+O(x^{2})\]
near the infinity.
We remark here that, for the hyperbolic space \(H^{n+1}\) and the standard round metric in the infinity, we have
\[t=\frac{1}{x}+\frac{1}{4}x.\]
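This is consistent with the expansion in the previous lemma: for the round metric \(\hat{g}\) on \(S^{n}\) one has \(R[\hat{g}]=n(n-1)\), so the coefficient \(\frac{R[\hat{g}]}{4n(n-1)}\) equals \(\frac{1}{4}\), and in this case the expansion is exact.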
As we expect, positive eigenfunctions indeed give a preferable conformal compactification.
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold, and that \(u\) is the eigenfunction obtained for a Yamabe metric \(\hat{g}\) of the conformal infinity \((M,[\hat{g}])\) in the previous lemma. Then \((X^{n+1},u^{-2}g)\) is a compact manifold with totally geodesic boundary \(M\) and
\[R[u^{-2}g]\geq\frac{n+1}{n-1}R[\hat{g}].\]
As a consequence
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold and its conformal infinity is of positive Yamabe constant. Suppose that \(u\) is the positive eigenfunction associated with the Yamabe metric on the conformal infinity obtained in Lemma 1.3. Then \((X^{n+1},u^{-2}g)\) is a compact manifold with positive scalar curvature and totally geodesic boundary.
The work of Schoen–Yau and Gromov–Lawson then gives some topological obstructions for a conformally compact Einstein manifold to have conformal infinity of positive Yamabe constant. A surprising consequence of the eigenfunction compactifications is the rigidity of the hyperbolic space without assuming a spin structure.
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold with the round sphere as its conformal infinity. Then \((X^{n+1},g)\) is isometric to the hyperbolic space.
### Renormalized volume
We will introduce the renormalized volume, which was first noticed by physicists in their investigations of the holography principles in AdS/CFT. Take a defining function \(x\) associated with a choice of the metric \(\hat{g}\in[\hat{g}]\) on the conformal infinity, then compute, when \(n\) is odd,
\[\text{Vol}(\{x>\epsilon\})=c_{0}\epsilon^{-n}+\text{odd powers of $\epsilon$}+V+o(1),\] (2)
when \(n\) is even,
\[\text{Vol}(\{x>\epsilon\})=c_{0}\epsilon^{-n}+\text{even powers of $\epsilon$}+L\log\frac{1}{\epsilon}+V+o(1).\] (3)
It turns out the numbers \(V\) in odd dimension and \(L\) in even dimension are independent of the choice of the metrics in the class. We will see that \(V\) in even dimension is in fact a conformal anomaly.
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold and that \(\bar{x}\) and \(x\) are two defining functions associated with two representatives in \([\hat{g}]\) on the conformal infinity \((M^{n},[\hat{g}])\). Then
\[\bar{x}=xe^{w}\]
for a function \(w\) on a neighborhood of the boundary \([0,\epsilon)\times M\) whose expansion at \(x=0\) consists of only even powers of \(x\) up through and including \(x^{n+1}\) term.
Suppose that \((X^{n+1},g)\) is a conformally compact Einstein manifold. The \(V\) in (2) when \(n\) is odd and the \(L\) in (3) when \(n\) is even are independent of the choice of representative \(\hat{g}\in[\hat{g}]\) on the conformal infinity \((M^{n},[\hat{g}])\).
Let us calculate the renormalized volume for the examples in Section 2.2.
_a) The hyperbolic space_: We recall
\[(H^{4},g_{H})=\left(B^{4},\left(\frac{2}{1-|y|^{2}}\right)^{2}|dy|^{2}\right),\]
where
\[g_{H}=s^{-2}\left(ds^{2}+\left(1-\frac{s^{2}}{4}\right)^{2}h_{0}\right)\]
and \(h_{0}\) is the round metric on \(S^{3}\). Then
\[\text{vol}(\{s>\epsilon\})=\int_{\epsilon}^{2}\int_{S^{3}}s^{-4}\left(1-\frac{ s^{2}}{4}\right)^{3}d\sigma_{0}ds\]
where \(d\sigma_{0}\) is the volume element for the round unit sphere
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=2\pi^{2}\left(-\frac{1}{3} s^{-3}\Big{|}_{\epsilon}^{2}+\frac{3}{4}s^{-1}\Big{|}_{\epsilon}^{2}+\frac{3}{ 16}(2-\epsilon)-\frac{1}{3\times 64}s^{3}\Big{|}_{\epsilon}^{2}\right)\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=\frac{2\pi^{2}}{3}\epsilon ^{-3}-\frac{3\pi^{2}}{2}\epsilon^{-1}+2\pi^{2}\left(-\frac{1}{3\times 8}+\frac {3}{8}+\frac{3}{8}-\frac{1}{3\times 8}\right)+O(\epsilon)\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=\frac{2\pi^{2}}{3}\epsilon ^{-3}-\frac{3\pi^{2}}{2}\epsilon^{-1}+\frac{4\pi^{2}}{3}+O(\epsilon).\]
Thus
\[V(H^{4},g_{H})=\frac{4\pi^{2}}{3}.\]
_b) The hyperbolic manifold_: We recall
\[\left(S^{1}(\lambda)\times R^{3},(1+r^{2})dt^{2}+\frac{dr^{2}}{1+r^{2}}+r^{2}g _{S^{2}}\right)\]
and
\[g_{H}^{0}=s^{-2}\left(ds^{2}+\left(1-\frac{s^{2}}{4}\right)^{2}(d\theta^{2}+ \sin^{2}\theta d\phi^{2})+\left(1+\frac{s^{2}}{4}\right)^{2}dt^{2}\right).\]
Then
\[\text{vol}(\{s>\epsilon\})=\int_{\epsilon}^{2}\int_{S^{2}}\int_{S^{1}}s^{-4} \left(1-\frac{s^{2}}{4}\right)^{2}\left(1+\frac{s^{2}}{4}\right)d\omega_{0}dtds\]
where \(d\omega_{0}\) stands for the volume element for the round unit sphere \(S^{2}\)
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=8\pi^{2}\lambda\int_{ \epsilon}^{2}s^{-4}\left(1-\frac{s^{2}}{4}-\frac{s^{4}}{16}+\frac{s^{6}}{64} \right)ds\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=\frac{8\pi^{2}}{3}\lambda \epsilon^{-3}-2\pi^{2}\lambda\epsilon^{-1}+8\pi^{2}\lambda\left(-\frac{1}{3 \times 8}+\frac{1}{8}-\frac{1}{8}+\frac{1}{3\times 8}\right)+O(\epsilon)\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=\frac{8\pi^{2}}{3}\lambda \epsilon^{-3}-2\pi^{2}\lambda\epsilon^{-1}+O(\epsilon).\]
Thus
\[V(S^{1}\times R^{3},g_{H}^{0})=0.\]
_c) AdS-Schwarzschild spaces_: We recall on \(S^{2}\times R^{2}\)
\[g_{+1}^{m}=\left(1+r^{2}-\frac{2m}{r}\right)dt^{2}+\frac{dr^{2}}{1+r^{2}-\frac {2m}{r}}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}).\]
First let us find the special defining function, i.e. to have
\[\frac{1}{1+r^{2}-\frac{2m}{r}}dr^{2}=s^{-2}ds^{2}\]
that is, if we denote \(r=\rho/s\), where \(\rho=\rho(s)\),
\[\rho-s\rho^{\prime}=\sqrt{\rho^{2}+s^{2}-2ms^{3}/\rho},\]
and \(\rho(0)=1\). One may solve it in power series
\[\rho=1-\frac{1}{4}s^{2}+\frac{m}{3}s^{3}+\cdots.\]
Then
\[g^{m}_{+1}=s^{-2}\left(ds^{2}+\left(\rho^{2}+s^{2}-\frac{2ms^{3}}{\rho}\right) dt^{2}+\rho^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\right).\]
Note that \(s\in[\epsilon,s_{h}]\) for \(r\in[r_{h},M_{\epsilon}]\),
\[\log s_{h}=\log\epsilon+\int_{r_{h}}^{M_{\epsilon}}\frac{1}{\sqrt{1+r^{2}- \frac{2m}{r}}}dr<+\infty,\]
and
\[M_{\epsilon}=\epsilon^{-1}\rho(\epsilon)=\epsilon^{-1}\left(1-\frac{1}{4}\epsilon^{2}+\frac{m}{3}\epsilon^{3}+\cdots\right).\]
Therefore
\[\text{vol}(\{s>\epsilon\})=\int_{\epsilon}^{s_{h}}\int_{S^{1}(\lambda)}\int_{S^{2}}s^{-4}\sqrt{\rho^{2}+s^{2}-\frac{2ms^{3}}{\rho}}\rho^{2}dtd\sigma_{0}ds\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=8\pi^{2}\lambda\int^{s_{h} }_{\epsilon}s^{-4}\sqrt{\rho^{2}+s^{2}-\frac{2ms^{3}}{\rho}}\rho^{2}ds\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=8\pi^{2}\lambda\int_{r_{h} }^{M}s^{-1}\sqrt{1+r^{2}-\frac{2m}{r}}r^{2}\left(-\frac{ds}{dr}\right)dr\]
\[\phantom{\text{vol}(\{s>\epsilon\})}{}=8\pi^{2}\lambda\int_{r_{h} }^{M}r^{2}dr=\frac{8\pi^{2}\lambda}{3}(M^{3}-r^{3}_{h}).\]
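Here \(M=M_{\epsilon}\). Expanding \(M_{\epsilon}^{3}=\epsilon^{-3}-\frac{3}{4}\epsilon^{-1}+m+O(\epsilon)\), the \(\epsilon\)-independent part of the volume is \(\frac{8\pi^{2}\lambda}{3}(m-r_{h}^{3})\); using the horizon relation \(2m=r_{h}(1+r_{h}^{2})\) and \(\lambda=2r_{h}/(3r_{h}^{2}+1)\) from the smoothness condition,
\[m-r_{h}^{3}=\frac{r_{h}(1-r_{h}^{2})}{2},\qquad\frac{8\pi^{2}\lambda}{3}\cdot\frac{r_{h}(1-r_{h}^{2})}{2}=\frac{8\pi^{2}}{3}\,\frac{r_{h}^{2}(1-r_{h}^{2})}{3r_{h}^{2}+1}.\]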
Thus the renormalized volume
\[V(R^{2}\times S^{2},g^{m}_{+1})=\frac{8\pi^{2}}{3}\frac{r^{2}_{h}(1-r^{2}_{h}) }{3r^{2}_{h}+1},\]
where \(V(R^{2}\times S^{2},g^{m}_{+1})<0\) when \(r_{h}>1\); \(V(R^{2}\times S^{2},g^{m}_{+1})=0\) only when \(r_{h}=1\) or \(0\); and it achieves its maximum value at \(r_{h}=1/\sqrt{3}\)
\[V(R^{2}\times S^{2},g^{m}_{+1})_{\max}=\frac{1}{9}\cdot\frac{4\pi^{2}}{3}\chi( R^{2}\times S^{2}).\]
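Indeed, at \(r_{h}=1/\sqrt{3}\) the formula gives \(\frac{8\pi^{2}}{3}\cdot\frac{(1/3)(2/3)}{2}=\frac{8\pi^{2}}{27}\), which equals \(\frac{1}{9}\cdot\frac{4\pi^{2}}{3}\cdot 2\) with \(\chi(R^{2}\times S^{2})=2\).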
_d) AdS-Kerr spaces_: We will omit the calculation here. The renormalized volume
\[V(\text{CP}^{2}\setminus\{p\},g_{\alpha})=4\pi^{2}E_{\alpha}\left(-\frac{1}{6} E_{\alpha}(\alpha^{3}+3\alpha^{-1})+\frac{2}{3}(\alpha+\alpha^{-1})\right).\]
Clearly, \(V(\text{CP}^{2}\setminus\{p\},g_{\alpha})\) goes to zero when \(\alpha\) goes to \(2\), and \(V(\text{CP}^{2}\setminus\{p\},g_{\alpha})\) goes to \(-\infty\) when \(\alpha\) goes to \(\infty\). One may find the maximum value for the renormalized volume is achieved at \(\alpha=2+\sqrt{3}\). Therefore
\[V(\text{CP}^{2}\setminus\{p\},g_{\alpha})_{\max}=\frac{4\pi^{2}}{3}\cdot\frac{ 2(4-\sqrt{3})}{9}<\frac{1}{2}\cdot\frac{4\pi^{2}}{3}\chi(\text{CP}^{2} \setminus\{p\}).\]
### Renormalized volume and Chern–Gauss–Bonnet formula
We start with the Gauss–Bonnet formula on a surface \((M^{2},g)\)
\[4\pi\chi(M^{2})=\int_{M}Kdv_{g},\]
where \(K\) is the Gaussian curvature of \((M^{2},g)\). The transformation of the Gaussian curvature under a conformal change of metrics \(g_{w}=e^{2w}g\) is governed by the Laplacian as follows:
\[-\Delta_{g}w+K[g]=K[e^{2w}g]e^{2w}.\]
The Gauss–Bonnet formula for a compact surface with boundary \((M^{2},g)\) is
\[4\pi\chi(M)=\int_{M}KdV_{g}+2\int_{\partial M}kd\sigma_{g},\]
where \(k\) is the geodesic curvature for \(\partial M\) in \((M,g)\). The transformation of the geodesic curvature under a conformal change of metric \(g_{w}=e^{2w}g\) is
\[-\partial_{n}w+k[g]=k[e^{2w}g]e^{w},\]
where \(\partial_{n}\) is the inward normal derivative. Notice that
\[-\Delta[e^{2w}g]=e^{-2w}(-\Delta[g]),\qquad-\partial_{n}[e^{2w}g]=e^{-w}(- \partial_{n}[g]),\]
for which we say they are conformally covariant. In four dimension there is a rather complete analogue. We may write the Chern–Gauss–Bonnet formula in the form
\[8\pi^{2}\chi(M^{4})=\int_{M}(|W|^{2}+Q)dV_{g}\]
for closed 4-manifold and
\[8\pi^{2}\chi(M^{4})=\int_{M}(|W|^{2}+Q)dV_{g}+2\int_{\partial M}(L+T)d\sigma_{ g},\]
where \(W\) is the Weyl curvature, \(L\) is a point-wise conformal invariant curvature of \(\partial M\) in \((M,g)\).
\[Q=\frac{1}{6}(R^{2}-3|{\rm Ric}|^{2}-\Delta R),\]
\[T=-\frac{1}{12}\partial_{n}R+\frac{1}{6}RH-R_{\alpha n\beta n}L_{\alpha\beta}+\frac{1}{9}H^{3}-\frac{1}{3}\text{Tr}L^{3}-\frac{1}{3}\tilde{\Delta}H,\]
\(R\) is the scalar curvature, \({\rm Ric}\) is the Ricci curvature, \(L\) is the second fundamental form of \(\partial M\) in \((M,g)\). We know the transformation of \(Q\) under a conformal change metric \(g_{w}=e^{2w}g\) is
\[P_{4}[g]w+Q[g]=Q[e^{2w}g]e^{4w},\]
where
\[P_{4}=(-\Delta)^{2}+\delta\left\{\frac{2}{3}Rg-2\text{Ric}\right\}d\]
is the so-called Paneitz operator, and the transformation of \(T\) is
\[P_{3}[g]w+T[g]=T[e^{2w}g]e^{3w},\]
where
\[P_{3}=\frac{1}{2}\partial_{n}\Delta_{g}-\tilde{\Delta}\partial_{ n}+\frac{2}{3}H\tilde{\Delta}+L_{\alpha\beta}\tilde{\nabla}_{\alpha}\tilde{ \nabla}_{\beta}+\frac{1}{3}\tilde{\nabla}_{\alpha}H\cdot\tilde{\nabla}_{\alpha }+\left(F-\frac{1}{3}R\right)\partial_{n}.\]
We also have
\[P_{4}[e^{2w}g]=e^{-4w}P_{4}[g],\qquad P_{3}[e^{2w}g]=e^{-3w}P_{3}[g].\]
On the other hand, to calculate the renormalized volume in general, for odd \(n\), upon a choice of a special defining function \(x\), one may solve
\[-\Delta v=n\qquad\text{in $X^{n+1}$}\]
for
\[v=\log x+A+Bx^{n},\]
\(A\), \(B\) are even in \(x\), and \(A|_{x=0}=0\). Let
\[B_{n}[g,\hat{g}]=B|_{x=0}.\]
Fefferman and Graham observed
\[V(X^{n+1},g)=\int_{M}B_{n}[g,\hat{g}]dv[\hat{g}].\]
We observe that the function \(v\) in the above is also good in conformal compactifications. For example, given a conformally compact Einstein 4-manifold \((X^{4},g)\), let us consider the compactification \((X^{4},e^{2v}g)\). Then
\[Q_{4}[e^{2v}g]=0\]
and its boundary is totally geodesic in \((X^{4},e^{2v}g)\). Moreover
\[T[e^{2v}g]=3B_{3}[g,\hat{g}].\]
Therefore we obtain easily the following generalized Chern–Gauss–Bonnet formula.
Suppose that \((X^{4},g)\) is a conformally compact Einstein manifold. Then
\[8\pi^{2}\chi(X^{4})=\int_{X^{4}}(|W|^{2}dv)[g]+6V(X^{4},g).\]
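As a quick check, for the hyperbolic space \(X^{4}=B^{4}\) one has \(W=0\), \(\chi(B^{4})=1\) and \(V=4\pi^{2}/3\) from the computation above, and indeed \(6\cdot\frac{4\pi^{2}}{3}=8\pi^{2}=8\pi^{2}\chi(B^{4})\).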
### Topology of conformally compact Einstein 4-manifolds
In the following let us summarize some of our works appeared in [7]. From the generalized Chern–Gauss–Bonnet formula, obviously
\[V\leq\frac{4\pi^{2}}{3}\chi(X)\]
and the equality holds if and only if \((X^{4},g)\) is hyperbolic. Comparing with Chern–Gauss–Bonnet formula for a closed 4-manifold
\[\frac{1}{8\pi^{2}}\int_{M^{4}}(|W|^{2}+\sigma_{2})dv=\chi(M^{4})\]
one sees that the renormalized volume replaces the role of the integral of \(\sigma_{2}\). In the following we will report some results on the topology of a conformally compact Einstein 4-manifold in terms of the size of the renormalized volume relative to the Euler number, which is analogous to the results of Chang–Gursky–Yang [8, 9] on a closed 4-manifold with positive scalar curvature and large integral of \(\sigma_{2}\) relative to the Euler number. The proofs mainly rely on the conformal compactifications discussed earlier, a simple doubling argument and applications of the above mentioned results of Chang–Gursky–Yang [8, 9].
Suppose \((X^{4},g)\) is a conformally compact Einstein \(4\)-manifold with its conformal infinity of positive Yamabe constant and the renormalized volume \(V\) is positive. Then \(H^{1}(X,R)=0\).
Suppose \((X^{4},g)\) is a conformally compact Einstein \(4\)-manifold with conformal infinity of positive Yamabe constant. Then
\[V>\frac{1}{3}\frac{4\pi^{2}}{3}\chi(X)\]
implies that \(H^{2}(X,R)\) vanishes.
A nice way to illustrate the above argument is the following. We may consider the modified Yamabe constant
\[Y^{\lambda}(M,[g])=\inf_{g\in[g]}\frac{\int_{M}(R[g]+\lambda|W^{+}|_{g})dv_{g} }{(\int_{M}dv_{g})^{\frac{n-2}{n}}}.\]
Then, one knows that \((M,[g])\) is of positive \(Y^{\lambda}(M,[g])\) if and only if there is a metric \(g\in[g]\) with \(R+\lambda|W^{+}|>0\). As a consequence of the following Bochner formula
\[\Delta\frac{1}{2}|\omega|^{2}=|\nabla\omega|^{2}-2W^{+}(\omega,\omega)+\frac{1 }{3}R|\omega|^{2}\geq|\nabla\omega|^{2}+(R-2\sqrt{6}|W^{+}|)|\omega|^{2}\]
for any self-dual harmonic 2-form \(\omega\), one easily sees that a closed oriented 4-manifold with \(Y^{-2\sqrt{6}}>0\) has its \(b^{+}_{2}=0\). We also observe
Suppose \((X^{4},g)\) is a conformally compact Einstein \(4\)-manifold with its conformal infinity of positive Yamabe constant and that
\[V>\frac{1}{2}\frac{4\pi^{2}}{3}\chi(X).\]
Then \(X\) is diffeomorphic to \(B^{4}\) and more interestingly \(M\) is diffeomorphic to \(S^{3}\).
The detailed proofs of the above theorems are in our paper [7]. One may recall
\[a)\quad V(H^{4},g_{H})=\frac{4\pi^{2}}{3},\]
\[b)\quad V(S^{1}\times R^{3},g_{H})=0,\]
\[c)\quad V(S^{2}\times R^{2},g_{+}^{m})=\frac{8\pi^{2}}{3}\frac{r ^{2}_{h}(1-r^{2}_{h})}{3r^{2}_{h}+1}\leq\frac{1}{9}\frac{4\pi^{2}}{3}\chi(S^{2 }\times R^{2}),\]
\[d)\quad V(CP^{2}\setminus B,g_{K})\leq\frac{4\pi^{2}}{3}\frac{2( 4-\sqrt{3})}{9}<\frac{1}{3}\frac{4\pi^{2}}{3}\chi(CP^{2}\setminus B).\]
Theorem 2.6 is rather sharp: in cases (c) and (d) the second homology is nontrivial while the renormalized volume is very close to one-third of the maximum.
## References
* [2] Anderson M., Orbifold compactness for spaces of Riemannian metrics and applications, _Math. Ann._**331** (2005), 739–778, math.DG/0312111.
* [3] Anderson M., Cheeger J., Diffeomorphism finiteness for manifolds with Ricci curvature and \(L^{n/2}\)-norm of curvature bounded, _Geom. Funct. Anal._**1** (1991), 231–252.
* [4] Brezis H., Coron J.M., Convergence of solutions of H-systems or how to blow bubbles, _Arch. Ration. Mech. Anal._**89** (1985), 21–56.
* [5] Bray H., Neves A., Classification of prime 3-manifolds with Yamabe invariant larger than \(RP^{3}\), _Ann. of Math. (2)_**159** (2004), 407–424.
* [6] Chang S.-Y.A., Qing J., Yang P., On a conformal gap and finiteness theorem for a class of four-manifolds. _Geom. Funct. Anal._**17** (2007), 404–434, math.DG/0508621.
* [7] Chang S.-Y.A., Qing J., Yang P., On the topology of conformally compact Einstein 4-manifolds, in Noncompact Problems at the Intersection of Geometry, Analysis, and Topology, _Contemp. Math._**350** (2004), 49–61, math.DG/0305085.
* [8] Chang S.-Y.A., Gursky M., Yang P., An equation of Monge–Ampére type in conformal geometry and 4-manifolds of positive Ricci curvature, _Ann. of Math. (2)_**155** (2002), 709–787, math.DG/0409583.
* [9] Chang S.-Y.A., Gursky M., Yang P., An apriori estimate for a fully nonlinear equation on 4-manifolds, _J. D’Analyse Math._**87** (2002), 151–186.
* [10] Graham C.R., Volume and area renormalizations for conformally compact Einstein metrics, in The Proceedings of the 19th Winter School “Geometry and Physics” (1999, Srnì), _Rend. Circ. Mat. Palermo (2)_**63** (2000), suppl., 31–42, math.DG/9909042.
* [11] Graham C.R., Lee J., Einstein metrics with prescribed conformal infinity on the ball, _Adv. Math._**87** (1991), 186–225.
* [12] Qing J., On singularities of the heat flow for harmonic maps from surfaces into spheres, _Comm. Anal. Geom._**3** (1995), 297–315.
* [13] Qing J., On the rigidity for conformally compact Einstein manifolds, _Int. Math. Res. Not._**2003** (2003), 1141–1153, math.DG/0305084.
* [14] Struwe M., Global compactness result for elliptic boundary value problem involving limiting nonlinearities, _Math. Z._**187** (1984), 511–517.
* [15] Tian G., Viaclovsky J., Bach flat asymptotically ALE metrics, _Invent. Math._**160** (2005), 357–415, math.DG/0310302.
* [16] Tian G., Viaclovsky J., Moduli space of critical Riemannian metrics in dimension 4, _Adv. Math._**196** (2005), 346–372, math.DG/0312318.
|
1201.3536 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 5234,
"num_imgs": 0,
"llama3_tokens_count": 1576
} | [] | # What’s in a Fermi Bubble: a quasar episode in the Galactic centre
Kastytis Zubovas\({}^{1}\), Sergei Nayakshin\({}^{1}\), Andrew R. King\({}^{1}\)\({}^{1}\) Dept. of Physics & Astronomy, University of Leicester, Leicester, LE1 7RH, UK; mailto:kastytis.zubovas@le.ac.uk
###### Abstract
_Fermi_ bubbles, the recently observed giant (\(\sim 10\) kpc high) gamma-ray emitting lobes on either side of our Galaxy (Su et al., 2010), appear morphologically connected to the Galactic center, and thus offer a chance to test several models of supermassive black hole (SMBH) evolution, feedback and relation with their host galaxies. We use a physical feedback model (King, 2003, 2010) and novel numerical techniques (Nayakshin et al., 2009) to simulate a short burst of activity in Sgr A\({}^{*}\), the central SMBH of the Milky Way, \(\sim 6\) Myr ago, temporally coincident with a star formation event in the central parsec. We are able to reproduce the bubble morphology and energetics both analytically (Zubovas et al., 2011) and numerically (Zubovas & Nayakshin, in prep). These results provide strong support to the model, which was also used to simulate more extreme environments (Nayakshin & Power, 2010).
The AGN radiation pressure drives a wind with a momentum flux \(\dot{M}_{\rm out}v\simeq L_{\rm Edd}/c\) with \(v\simeq\eta c\simeq 0.1c\), where \(\eta\simeq 0.1\) is the radiative efficiency (King, 2003). This wind shocks against the surrounding gas (perhaps producing \(\gamma\) rays) and pushes it away, forming an outflow. In the Milky Way, the wind shock cannot cool outside \(R_{\rm cool}\sim 10\) pc and hence transfers most of the kinetic energy rate (\(\sim 0.05L_{\rm Edd}\)) to the ambient gas (this is an energy-driven flow). Such an outflow, while driven, moves with a constant velocity \(v_{e}\sim 1000\) km s\({}^{-1}\) (King et al., 2011). Once the quasar switches off, the shell coasts for an order of magnitude longer than the driving phase \(t_{\rm q}\), easily reaching radii of tens of kpc.
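A one-line estimate makes the quoted energy rate explicit: with \(\dot{M}_{\rm out}v\simeq L_{\rm Edd}/c\) and \(v\simeq 0.1c\), the kinetic luminosity of the wind is
\[\frac{1}{2}\dot{M}_{\rm out}v^{2}\simeq\frac{1}{2}\frac{L_{\rm Edd}}{c}v=\frac{v}{2c}L_{\rm Edd}\simeq 0.05\,L_{\rm Edd},\]
which is the rate transferred to the ambient gas in the energy-driven regime.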
The outflow morphology can become non-spherical due to anisotropic matter distribution in the Galaxy, such as the dense gas in the Central Molecular Zone (CMZ) which is too heavy for even an energy-driven outflow to lift. This qualitatively explains the morphology of the _Fermi_ bubbles. We use their observed and inferred properties to constrain the gas fraction (ratio of gas density to background potential density) in the Galaxy halo and the Sgr A\({}^{*}\) outburst duration (see Zubovas et al., 2011, for details):
\[f_{\rm g}\lesssim 7\cdot 10^{-3}l;\;\;t_{\rm q}>2.5\cdot 10^{5}\;{\rm yr}.\] (1)
[Figure 1: simulation snapshots for \(t_{\rm q}=1\) Myr (left) and \(t_{\rm q}=0.3\) Myr (right); see text.]
We test the model numerically, using GADGET with a ’virtual particle’ method of implementing wind feedback (Nayakshin et al., 2009). We embed the SMBH (which produces feedback for a time \(t_{\rm q}\)) and CMZ into a spherically symmetric isothermal halo with \(\sigma=100\) km/s and a constant \(f_{\rm g}\). We vary the free parameters \(t_{\rm q}\) and \(f_{\rm g}\).
Figure 1, left, shows that our model, with \(t_{\rm q}=1\) Myr and \(f_{\rm g}=10^{-3}\), can reproduce the morphology and size of the observed _Fermi_ bubbles. The CMZ is perturbed but not dispersed by the wind and collimates the outflow into two cavities. The total energy content inside the cavities is a small fraction of the input and also agrees with observational constraints. Figure 1, right, shows a simulation with \(t_{\rm q}=0.3\) Myr, which produces bubbles clearly inconsistent with observations. We can thus put tight constraints on both parameters. We also require the CMZ mass to be \(\simeq 10^{8}M_{\odot}\), but its aspect ratio is not important. A physical heating-cooling prescription (Sazonov et al., 2005) does not change the results significantly either. Therefore our findings are quite robust with regard to the uncertainties involved in the initial conditions.
We have shown that our physically motivated SMBH wind feedback model can explain the _Fermi_ bubbles. In addition, the same model works for quasars as their SMBHs establish the \(M-\sigma\) relation and clear the host galaxies of gas (e.g. Nayakshin & Power, 2010), suggesting there is no fundamental difference between the processes that were responsible for forming the galaxies at \(z\gtrsim 2\) and the processes happening in local, mostly dormant, galactic nuclei.
This research used the ALICE High Performance Computing Facility at the University of Leicester. KZ is supported by an STFC studentship.
## References
* King (2003) King, A. 2003, , 596, L27. arXiv:astro-ph/0308342
* King (2010) King, A. R. 2010, , 402, 1516. 0911.1639
* King et al. (2011) King, A. R., Zubovas, K., & Power, C. 2011, , L263+. 1104.3682
* Nayakshin et al. (2009) Nayakshin, S., Cha, S.-H., & Hobbs, A. 2009, , 397, 1314. 0905.2896
* Nayakshin & Power (2010) Nayakshin, S., & Power, C. 2010, , 402, 789. 0911.2434
* Sazonov et al. (2005) Sazonov, S. Y., Ostriker, J. P., & et al. 2005, , 358, 168. arXiv:astro-ph/0411086
* Su et al. (2010) Su, M., Slatyer, T. R., & Finkbeiner, D. P. 2010, , 724, 1044. 1005.5480
* Zubovas et al. (2011) Zubovas, K., King, A. R., & Nayakshin, S. 2011, , 415, L21. 1104.5443
|
1806.04528 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 46167,
"num_imgs": 0,
"llama3_tokens_count": 10965
} | [] | # Online Parallel Portfolio Selection with Heterogeneous Island Model
Štěpán Balcar
_Charles University_
_Faculty of Mathematics and Physics_
Prague, Czech Republic
Stepan.Balcar@mff.cuni.cz
Martin Pilát
_Charles University_
_Faculty of Mathematics and Physics_
Prague, Czech Republic
Martin.Pilat@mff.cuni.cz
###### Abstract
We present an online parallel portfolio selection algorithm based on the island model commonly used for parallelization of evolutionary algorithms. In our case each of the islands runs a different optimization algorithm. The distributed computation is managed by a central planner which periodically changes the running methods during the execution of the algorithm – less successful methods are removed while new instances of more successful methods are added.
We compare different types of planners in the heterogeneous island model among themselves and also to the traditional homogeneous model on a wide set of problems. The tests include experiments with different representations of the individuals and different duration of fitness function evaluations. The results show that heterogeneous models are a more general and universal computational tool compared to homogeneous models.
online portfolio selection, re-planning, hybrid algorithms
## I Introduction
In the past decades many different optimization methods have been proposed to solve various types of problems, including hill-climbing [1], random search [2], simulated annealing [3], tabu search [4], evolutionary algorithms [5], and differential evolution [6]. The methods are based on various paradigms and some of them are better for one problem, while others are better for another. In fact, the most suitable method can even be different in different phases of the optimization. In the beginning of the optimization run, very often methods that are based on the exploration are preferred, while in the later phases, exploitation of already found areas of the search space may be more beneficial.
The algorithm selection problem has been defined by Rice several decades ago as the problem of selecting the best algorithm for each problem instance from a set of instances such that the overall optimization cost is minimized [7]. Similar types of problems are also solved in the area of machine learning, with the goal to select the best machine-learning method for a given dataset. In this case, the problem is called meta-learning [8].
In the recent years, computers with multiple CPU cores have become more common, and that brings an alternative option, how to deal with the problem of algorithm selection. In such a case the so called portfolio algorithms [9] can be used. The portfolio algorithms run multiple (more or less) independent instances of different algorithms in parallel (or sequentially on a single CPU) with the goal to find better solution. Sometimes the instances differ only in random initialization of the algorithm parameters. Quite recently, Lindenauer _et al._ [10] studied the problem of portfolio selection as an extension of the problem of algorithm selection.
In this paper, we combine some of the ideas mentioned above and provide in essence an online algorithm for parallel portfolio selection. The algorithm is based on a heterogeneous island model, which is derived from the homogeneous island model commonly used for the parallelization of evolutionary algorithms. The island model [11] is based on the idea of the life of several isolated populations on different islands evolving in parallel. The islands cooperate with each other in the computation by exchanging the individuals from local populations, thus accelerating the convergence.
In homogeneous island models, each of the islands runs the same algorithm with the same settings, while in the heterogeneous model, the methods used on each island are somehow different. For example, Gong and Fukunaga [12] proposed a heterogeneous model where each island runs an evolutionary algorithm with different settings as an alternative adaptive approach to manual parameter setting. Pilát and Neruda [13] introduced heterogeneous models in the context of multi-objective optimization. Some of the islands run a multi-objective algorithm, while the others run only a single-objective one. The single-objective islands help to improve the solutions of the multi-objective problem. The heterogeneity in this case actually serves as a way of creating a new hybrid evolutionary algorithm.
The method we discuss in this paper combines several techniques mentioned above – it uses the heterogeneous island model. The model runs a number of different stochastic optimization methods that periodically exchange some of the solutions they found. Apart from the optimization methods, there is also a planner that adaptively replaces the under-performing methods by better-performing ones and thus changes the set of algorithms used in the model online. The described system can thus be also considered an online parallel portfolio selection algorithm. From the point of view of evolutionary algorithms, the system is a way how to hybridize the evolutionary algorithm with other optimization methods in a general and modular way.
In this paper, we compare a number of different planners that decide which methods run in the heterogeneous island model at any point in time. The main goal of the paper is to provide a parallel system based on the idea of combining a number of general stochastic optimization methods in order to create a more general method with better performance. In an ideal case, the performance of the combined method is better than the performance of each of the constituting methods separately, but even if the performance is the same, the algorithm can be useful when the same method can be run for multiple different types of problems. In such a case, it removes the need to select the correct algorithm for the problem at hand. The proposed method is evaluated on a wide range of problems including combinatorial optimization, continuous optimization, and the hyper-parameter tuning of a machine-learning method.
Some of the results presented in this paper have already been presented in a short paper [14], namely the results of the P-QI and P-BM planners on the TSP and bin-packing problems. Here, we add eight new different planners, provide more detailed results of the baseline methods, and also present the results on seven new test instances from different fields – vertex cover, continuous optimization, and hyper-parameter tuning in machine learning.
## II Heterogeneous Island Model
```
1:Initialize the methods uniformly on the islands
2:\(t\gets 0\)
3:while termination condition not met do
4: \(I\leftarrow\) obtain information about running methods
5: if there is a method \(M\) that has not run then
6: \(k\leftarrow\) the least useful method
7: \(s\leftarrow\)\(M\)
8: else
9: \(k\leftarrow\) select a method to kill using a planner-spec. rule
10: \(s\leftarrow\) select a method to start using a planner-spec. rule
11: end if
12: Kill method \(k\) and start method \(s\)
13: \(t\gets t+1\)
14: Sleep until next planning iteration
15:end while
```
**Algorithm 1** General overview
In this section, we describe the heterogeneous island model with the re-planning of the methods. In the model, we assume a parallel computational environment with multiple CPU cores. Each CPU core corresponds to (and executes) a single island. Each island then executes a single optimization method. The methods running on different islands can be different, but they need to share the encoding of the candidate solutions, as they exchange their solutions among themselves.
During the run of the optimization, the system logs and processes information regarding the performance of the various methods, such as the quality of the solutions a method shares, how often it improves the overall best solution, or the number of distinct solutions it provides to other methods. The whole system is controlled by a planner that uses the data measured during the execution of the methods in order to find a set of methods that would perform the best for the optimization problem at hand and for the current phase of the optimization.
An overview of the system from the point of view of the planner is given in Algorithm 1. The planner first initializes the islands with different methods. Unless otherwise specified below (only for the random planner), the methods are assigned to the islands in a round-robin manner, i.e. each method is executed the same number of times (if possible). In case the number of different types of methods is larger than the number of islands, those methods that were not executed are given preference to be executed first during re-planning.
After the initialization, the planner runs in a loop. In each iteration of the planner (planning iteration), the planner first obtains the information about the performance from the system and the methods themselves (line 4). If there is a method that has not been executed yet, an instance of the least useful method (according to the planner's philosophy, i.e. selected in the same way as on line 9 below) is removed and replaced by an instance of this method (lines 5-7, 12). Otherwise, the method that shall be removed and the method that shall be started in the given planning iteration are selected in a planner-specific way (lines 8-12). At the end of the planning iteration, the planner removes the instance of the method that was selected to be removed and starts a new instance of the method selected to start. The planner then sleeps and waits for the start of the next iteration.
In the implementation, both the length of each planning iteration and the frequency of communication are time-based. While this makes the results of the experiments dependent on the hardware and on the implementation of the method (compared to an implementation that would be based on e.g. the number of function evaluations), the various optimization methods can run completely independently of the planner and the other methods in a fully asynchronous way and do not need to wait for the other methods to finish their evaluations. It also simplifies the implementation.
We also do not consider the parameters of the optimization methods in any way. While different settings of e.g. an evolutionary algorithm could be considered a different optimization method, it may actually make sense to work with the parameters more explicitly. This is, however, left for future work.
```
1:Create the initial (set of) solutions
2:\(t\gets 0\)
3:while termination condition not met do
4: Generate new solution(s) based on the current state
5: Receive solutions from the other methods in the system
6: Incorporate received solution into own set of solutions
7: Share the best solution
8: Update information for the planners
9:end while
```
**Algorithm 2** General Optimization Method
### _Modification of Methods_
As we already indicated, the common optimization methods must be slightly modified before they can be used in the heterogeneous island model. First of all, the methods must be able to share their best individuals with other methods present in the system, and they must also be able to receive individuals created by other methods and incorporate them into their optimization loop.
We assume the optimization methods used in the heterogeneous island model contain a main loop (e.g. the generational loop in an evolutionary algorithm). In such a case, the modification of the method is simple (cf. Algorithm 2) – at the end of the optimization loop, new individuals are received from the other methods in the system (line 5) and they are incorporated into the optimization (line 6). Then, the best individuals are shared with the rest of the system (line 7).
For all the methods we tested, the addition of a received solution is made in the same way as if the solution had been generated by the method itself. For example, in hill-climbing the solution is accepted if it is better than the current best solution of the method. In such a case, it replaces the solution of this method and the method continues from it. In evolutionary algorithms, the received solutions are added to the population and it is up to the selection whether they survive or not.
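To make the incorporation step concrete, the following minimal Python sketch shows the two strategies just described; it is purely illustrative (the actual system is implemented in Java on top of JADE), and the names and the minimization assumption are ours.

```
# Illustrative sketch only (the system itself is written in Java/JADE).
# A received solution is treated exactly as if the method had generated it;
# `fitness` and the minimization assumption are ours.

def incorporate_hill_climbing(current_best, received, fitness):
    """Hill climbing: keep the received solution only if it is better."""
    return received if fitness(received) < fitness(current_best) else current_best

def incorporate_evolution(population, received):
    """EA: add the received solution to the population; the usual selection
    later decides whether it survives."""
    population.append(received)
    return population
```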
The generation of the initial (set of) solutions (line 1) and of the new candidate solutions (line 4) is completely method-specific and can range from simple random sampling in random search to a complex combination of various genetic operators in evolutionary algorithms.
Apart from the communication, the methods also must be able to provide information to the planner (line 8). Specifically, each method counts how many times other methods provided it with a better solution (for the helper planner described below). Each individual in the system also carries the history of methods that modified it in any way, and this information must be kept updated.
While the modifications (and especially the book-keeping of the information for planners) seem tedious, they can be implemented in a general way and most of the code can be shared by most of the methods.
### _Planners_
For the experiments in this paper, we designed a number of planners that follow (with the exception of the random planner) the general template given in Algorithm 1. All the planners first initialize all the islands with the available methods uniformly, i.e. each method is executed the same number of times. In case there are more methods than islands, the methods that have not been executed have a higher priority to be executed during re-planning. The planners also observe the system, and in case new islands are added during the run of the algorithm, they initialize them with a method.
From the observation of the system, the planner obtains the following features that can be used during re-planning:
1. quantity of improvement – the number of times the given method improved the quality of the best solution,
2. average fitness – the average fitness of solutions shared by the method with other methods,
3. quantity of material – the number of distinct individuals each method shared with the rest of the methods,
4. quality of material – the number of times the method created a solution that is among the best \(N\) solutions for a pre-defined \(N\),
5. helper number – the number of times the method improved the quality of the current best solution of another method, and
6. best solution contribution – the number of times the given method was used in the history of the best solution.
Features 1-3 can be computed directly by observing the solutions shared by the methods in the system. The helper number is computed by each of the methods, and the planner can request this information. In order to compute the best solution contribution feature, each individual carries its history, which lists all the methods that were used to create it.
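A minimal sketch of how such statistics could be gathered is shown below; the class and attribute names are ours, and the real book-keeping in the system may differ.

```
# Hypothetical book-keeping for features 1-4 (quantity of improvement, average
# fitness, quantity and quality of material); minimization is assumed.
from collections import defaultdict

class PlannerStats:
    def __init__(self, top_n=10):
        self.top_n = top_n
        self.improvements = defaultdict(int)    # feature 1
        self.fitness_sum = defaultdict(float)   # feature 2 (divide by count)
        self.count = defaultdict(int)
        self.distinct = defaultdict(set)        # feature 3
        self.top_hits = defaultdict(int)        # feature 4
        self.best = float("inf")
        self.top_fitnesses = []                 # N best fitness values seen

    def on_shared(self, method, solution, fit):
        if fit < self.best:                     # improved the overall best
            self.best = fit
            self.improvements[method] += 1
        self.fitness_sum[method] += fit
        self.count[method] += 1
        self.distinct[method].add(repr(solution))
        self.top_fitnesses = sorted(self.top_fitnesses + [fit])[: self.top_n]
        if fit <= self.top_fitnesses[-1]:       # among the best N solutions
            self.top_hits[method] += 1
```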
In the rest of this section, we describe the details of the planners we propose in this paper.
**Random (P-R)**: the simplest planner, which also serves as a baseline. It initializes the methods randomly and in each planning iteration, it randomly eliminates one method and replaces it with a new randomly chosen method.
**Random with Guaranteed Chance (P-RG)**: a more sophisticated random planner – it ensures each method is executed at least once and, at the same time, it ensures that there are always at least \(M_{min}\) different types of methods. In case the number of different running methods drops under this threshold, a random method is killed and replaced by a method that has not run recently. If there are methods that have not yet run, the planner chooses the method that has the least quantity of improvement and replaces it with a random method that has not run. Otherwise, the planner acts as the random planner.
**Method Description (P-MD)**: is based on the idea that in the initial phases of the optimization, exploration is more important than exploitation. Therefore, it divides the optimization methods into two sets – exploitation and exploration ones. During the initialization, only exploration methods are used, and during the computation, these are gradually replaced by the exploitation methods. Let \(e\) be the number of exploration methods in the system, and \(a\) be the number of all methods in the system. Let \(t\) be the number of the current iteration and \(T\) be the maximum number of iterations. If \(1-\frac{t}{T}<\frac{e}{a}\), then the exploration method that achieved the least quantity of improvement is killed and replaced by the exploitation method that achieved the best average fitness so far (a small sketch of this test is given after the planner list). In the last iterations of the planner, only exploitation methods are used. A newly added method is protected and cannot be killed for the first \(N_{protect}\) iterations. The division of the methods into the two groups is done manually.
**Best Helper (P-BH)**: a planner based on the number of times a method helped another method to improve its solution since the last iteration of the planner. A method \(A\) helped another method \(B\) if the solution received by \(B\) from \(A\) is better than the currently best solution of method \(B\). In each planning iteration, the planner removes the method that helps the least and replaces it with the method that helps the most. This planner does nothing in the first \(N_{init}\) iterations, in order to wait for the performance of the methods to stabilize.
**Best Average Fitness (P-AF)**: uses the history of the distributed computation and removes the method that has sent individuals with the worst average fitness during the last planning iteration, while the method with the best average fitness of outgoing individuals is duplicated. Newly added methods are protected and ensured to run for at least \(N_{protect}\) iterations.
**Quantity of Improvement (P-QI)**: planner uses the number of times each method produces a solution that is better than the current best one. During the initialization, methods are spread uniformly, and in each planning iteration, the method that produced the least number of improvements since the last re-planning is replaced by the method that produced the most improvements since the last re-planning. New methods are also protected for \(N_{protect}\) planning iterations.
**Quantity Of Material (P-QM)**: uses the information on the number of distinct individuals sent by each method. During re-planning, the planner replaces the method that sent the smallest number of different individuals by the method that created the largest number of different individuals. This planner also protects the newly started method for \(N_{protect}\) iterations.
**Best Material (P-BM)**: planner uses the information on the number of times the method provided a solution that is among the top \(N\) solutions overall. In each planning iteration, the planner removes the method with the least number of top solutions since the last re-planning and replaces it with the method with the most top solutions.
**Best Solution Contribution (P-BC)**: planner determines the importance of computational methods using the history of the operations applied to create the best individual. From this history, the planner uses the information about the methods that helped to create the best solution. In each re-planning iteration, the method that is most common in the history of the best individual replaces the method that contributed the least. The planner does nothing in the first \(N_{init}\) iterations.
**Lazy Quantity Of Improvement (P-LQI)**: is similar to the quantity of improvement planner, but removes methods only in case they are not beneficial, i.e. it replaces a method only if it did not make any improvement to the best solution during the last \(N_{patience}\) planning iterations. Such a method is replaced with the one that achieved the greatest number of improvements during the last planning iteration. Newly added methods are protected for \(N_{protect}\) iterations.
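For illustration, the time-based test of the P-MD planner and the "replace the worst by the best" step shared by the score-based planners (P-AF, P-QI, P-QM, P-BM) can be sketched as follows. This is our own minimal rendering, not the actual code of the system, and it assumes that a higher score means a better method.

```
# Sketch of two re-planning rules described above (illustrative only).

def pmd_should_replace(t, T, running, exploration):
    """P-MD: replace an exploration method once 1 - t/T < e/a.
    running: island id -> method name; exploration: set of exploration methods."""
    e = sum(1 for m in running.values() if m in exploration)
    return 1 - t / T < e / len(running)

def score_based_replan(islands, scores, t, protected_until, n_protect):
    """Kill the lowest-scoring unprotected method, start a copy of the best one.
    islands: island id -> method name; scores: method name -> score (higher = better)."""
    free = [i for i in islands if protected_until.get(i, -1) <= t]
    worst = min(free, key=lambda i: scores.get(islands[i], 0.0))
    best = max(scores, key=scores.get)
    islands[worst] = best                    # kill `worst`, start `best`
    protected_until[worst] = t + n_protect   # protect the newly started method
    return islands
```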
## III Experiment settings
In order to test the proposed portfolio selection algorithm, we ran a set of experiments where we compare the different planners to a homogeneous island model running each of the optimization methods and to a heterogeneous island model running all the methods, each in two instances, without re-planning (we also performed the same experiments with the single methods, which are equivalent to the homogeneous case without communication, but the results are generally worse than those with communication, therefore we do not present them here). The experiments are performed on five types of problems – the traveling salesman problem, the bin packing problem, continuous optimization, the vertex cover problem, and the tuning of machine-learning hyper-parameters. These problems use a wide range of encodings and different genetic operators.
In all of the experiments, the parameters of the planners are set the same and they are given in Table I. We used these values of the parameters based on some preliminary experiments. Based on the settings, we can see that each run of the optimization takes approximately 50 minutes on 16 CPUs. The optimization methods share the individuals every 5 seconds. The island model uses a fully connected topology, i.e. each method can directly communicate with any other method. The runs are repeated nine times for each combination of a problem and a planner.
Parameter | Value
---|---
Number of iterations | 50
Iteration length | 60 s
Number of runs | 9
Number of islands | 16
\(N_{init}\) | 5
\(N_{protect}\) | 3
\(N_{patience}\) | 3
\(M_{min}\) | 3
TABLE I: The values of parameters shared by all the experiments (some of them
are used by some planners only)
We have implemented seven different optimization methods – random search (RS), tabu search (TS), hill climbing (HC), simulated annealing (SA), evolutionary algorithm (EA), differential evolution (DE), and a brute force (BF) algorithm that performs a systematic search. All these share the following basic operators:
* generate random solution – used by random search and in initialization of EA and DE,
* generate next solution – used by the brute force method to generate the next solution systematically,
* unary “mutation” operator – used as a mutation in the EA and also by the hill climbing, simulated annealing and tabu search to generate the next solution,
* binary “crossover” operator – used by the EA as a crossover, and
* ternary operator – used in the DE by the differential mutation.
There is no natural implementation of the last ternary operator in some cases, but we created an implementation nonetheless for the sake of completeness. We still call the resulting method “differential evolution”, although the only feature common to this method and the differential evolution is the use of the ternary operator.
Some parameters of the optimization methods are also set the same for all the experiments; these are given in Table II. The other parameters depend on the particular optimization problem and are discussed in the rest of this section.
Algorithm | Parameter | Value
---|---|---
Hill Climbing | # of neighbors | 10
Random Search | ∅ |
Evolution | population size | 10
| mutation rate | 0.9
| crossover rate | 0.1
| selection | bin. tourn.
Brute Force | ∅ |
Tabu Search | tabuModelSize | 50
Simulated Annealing | temperature | 10,000
| coolingRate | 0.002
Differential Evolution | popSize | 50
| F | 1
TABLE II: Parameters of the optimization methods
### _Traveling Salesman Problem_
The goal in the traveling salesman problem (TSP) is to find the shortest Hamiltonian cycle in a complete graph. For the experiments, we used two instances of the problem from the VLSI dataset¹. One of them contains 1,083 cities (denoted as TSP1083 in the rest of this paper) and the other contains 662 cities (TSP662).
The TSP is a typical example of a problem where the solutions are encoded as permutations. The permutation gives the order in which the vertices of the graph should be visited.
In this case, the hill climbing, tabu search and simulated annealing use the 2-opt operator [15] to generate new solutions. The random search method generates new permutations randomly and the brute force algorithm generates the next permutation in each step. The evolutionary algorithm uses the 2-opt operator as a mutation and it additionally uses a single-point crossover [16] – a random crossover point is selected, the initial parts of the individuals are swapped, and the rest of each individual is filled with the remaining numbers from the other parent in the order in which they appear in that parent.
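A 2-opt move simply reverses a random segment of the tour; a possible implementation (ours, for illustration only) is:

```
import random

def two_opt(tour):
    """Reverse the segment between two random positions of the tour."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
```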
We also wanted to use differential evolution and therefore we created a special ternary operator: first, two permutations are subtracted and their difference is added to a third permutation. Then, for each value in the resulting vector, we remember the index of the component on which it is located, and the pairs \((value,index)\) are sorted by value. The sequence of indices then forms a new permutation.
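The following short sketch (with our own naming) illustrates this ternary operator: the element-wise combination \(p_1-p_2+p_3\) is turned back into a permutation by sorting the indices according to the resulting values.

```
def ternary_permutation(p1, p2, p3):
    """Differential-evolution-like operator for permutations, as described above."""
    values = [a - b + c for a, b, c in zip(p1, p2, p3)]
    # sort the (value, index) pairs by value; the indices form the new permutation
    return sorted(range(len(values)), key=lambda i: values[i])
```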
The optimization objective (and the values shown in the results) is the length of the Hamiltonian cycle and should be minimized.
### _Bin-packing Problem_
In the bin-packing problem (BPP), objects of various volumes are packed into bins of limited volume. The goal is to minimize the number of bins used. The solution of such problems is also commonly represented by a permutation that is decoded using the First Fit algorithm [17] in order to compute the objective value, i.e. the number of bins used. For the experiments, we generated a random instance of the BPP with 1,000 objects with volumes between 0 and 1 and bins with the volume of 1.
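For reference, a straightforward First Fit decoder could look as follows (an illustrative sketch, not the code used in the experiments):

```
def first_fit(permutation, volumes, capacity=1.0):
    """Place the objects in the order given by the permutation into the first
    bin they fit; the objective is the number of bins used."""
    remaining = []                       # remaining capacity of each open bin
    for obj in permutation:
        v = volumes[obj]
        for i, rest in enumerate(remaining):
            if v <= rest:
                remaining[i] -= v
                break
        else:
            remaining.append(capacity - v)
    return len(remaining)
```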
The random search, brute force and differential evolution methods generate the individuals in the same way as in the TSP problem. The hill climbing, tabu search, and simulated annealing use a displacement operator that moves a randomly selected number of consecutive values to the end of the permutation; the number of moved values is determined adaptively and is always less than 0.5 percent of all the values in the solution.
The evolutionary algorithm uses the order crossover [16] and a shift mutation that moves a random object to the end of the permutation.
The optimization objective is the number of bins and should be minimized.
### _Continuous Optimization_
The goal of continuous optimization (CO) is to find the minimum of a function \(f:\mathbb{R}^{n}\to\mathbb{R}\) in a multi-dimensional interval \([l_{1},u_{1}]\times\dots\times[l_{n},u_{n}]\). The solution is thus encoded as a vector of \(n\) numbers from this interval. For the experiments in this paper we selected four functions from the BBOB benchmark² – the Büche-Rastrigin function (COf04), the Rosenbrock function (COf08), the Different Powers function (COf14) and the Schaffers function (COf17), all of them in 10-dimensional space. These specific functions were selected in order to evaluate the performance of the algorithm under different conditions regarding the multi-modality and separability of the functions.
The tabu search, hill climbing, and simulated annealing methods generate the new individuals by adding a random number between -0.0025 and 0.0025 to each coordinate in the vector. The evolutionary algorithm uses the same operator as a mutation and additionally uses the weighted average of the coordinates of the vectors with weights randomly generated between 0 and 1. The random search generates random vectors from the whole multi-dimensional interval. The differential evolution uses the common implementation with parameters given in Table II. Finally, the brute force algorithm performs a grid search in the whole multi-dimensional interval – it adds 0.005 to one of the coordinates in every step.
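The two vector operators used by the evolutionary algorithm can be sketched as follows (illustrative only; whether a single weight or one weight per coordinate is used is our assumption):

```
import random

def perturb(x, step=0.0025):
    """Neighbourhood move for HC/TS/SA and the EA mutation."""
    return [xi + random.uniform(-step, step) for xi in x]

def weighted_average_crossover(x, y):
    """EA crossover: per-coordinate weighted average with random weights in [0, 1]."""
    return [w * xi + (1 - w) * yi
            for xi, yi, w in zip(x, y, [random.random() for _ in x])]
```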
The optimization objective is directly the value of the function and should be minimized.
### _Vertex Cover_
In the vertex cover (VC) problem, a graph is given and the goal is to find the smallest set of vertices of the graph, such that all the edges are covered. The solution is thus represented by a subset of vertices forming a valid coverage. The objective is to minimize the number of vertices in the cover. In this case, we used two benchmarks from the BHOSLIB³, one with 1534 vertices (VCfrb59265) and the other with 4000 vertices (VCfrb10040).
A cover is generally constructed by starting from a smaller set of vertices and adding vertices until a valid cover is created. In case of random search, the start is random and the vertices are also added randomly. In the hill climbing, tabu search and simulated annealing methods, five randomly selected vertices are removed and the cover is completed by adding (some of) their neighbors. The brute force algorithm generates all the subsets of the vertices, and for subsets that do not cover the whole graph, the cover is completed by adding random vertices. The evolutionary algorithm uses a mutation that removes three random vertices and finishes the cover by adding (some of) their neighbors. Finally, the differential evolution computes an intersection of two individuals and adds vertices from the third one until a valid cover is formed.
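A minimal sketch of the "remove a few vertices and repair" mutation is given below; the adjacency representation and the repair order are our own choices, not the paper's implementation.

```
import random

def vc_mutation(cover, graph, k=3):
    """Remove k random vertices from the cover, then re-add endpoints of
    uncovered edges until every edge is covered again (graph: vertex -> neighbours)."""
    cover = set(cover)
    cover -= set(random.sample(sorted(cover), min(k, len(cover))))
    for u in graph:
        for v in graph[u]:
            if u not in cover and v not in cover:
                cover.add(v)
    return cover
```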
The optimization objective is the number of vertices in the cover and should be minimized.
### _Hyper-parameter Tuning in ML_
The final type of problem we investigate in this paper is the tuning of the hyper-parameters of machine learning methods (ML), i.e. we are searching for a set of hyper-parameters such that the error rate of the model is minimized. To this end, we use the random forest model as implemented in the Weka framework [18] and optimize its parameters for the wilt dataset from the UCI machine learning repository [19]. The task on this dataset is a classification into two classes based on five attributes. The particular implementation of the random forest has the following parameters, and we search for their optimal values in the given ranges: \(P\in[20,100]\), \(K\in[1,6]\), \(V\in[0.0001,0.5]\), \(U\in\{0,1\}\), \(B\in\{0,1\}\), \(depth\in[1,20]\), \(I\in[20,30]\), and \(batchsize\in[80,120]\).
The tabu search, simulated annealing, and hill climbing use an operator that changes a value by adding \(\pm 1\) in case the parameter is an integer one, and by adding a random number less than 0.005 if the parameter is a real number. The random search generates a random set of the parameters, while the brute force method performs a grid search of the parameters. Evolution uses the operator used in simulated annealing as a mutation and additionally uses a one-point crossover. As all the arguments are numerical, the differential evolution works in the classical sense; the values just need to be rounded for the integer parameters after the operator is applied.
The optimization objective is the error rate of the model and should be minimized.
## IV Results
Optimization method | TSP1083 | TSP662 | BPP1000 | VCfrb59265 | VCfrb10040
---|---|---|---|---|---
Brute force | \(104555^{106075}_{103621}\) | \(50645^{51293}_{49461}\) | \(638.70^{642}_{629}\) | \(82.11^{87}_{77}\) | \(142.11^{144}_{136}\)
Differential evolution | \(81407^{100318}_{19903}\) | \(14573^{26679}_{11343}\) | \(627.55^{629}_{626}\) | \(43.11^{46}_{42}\) | \(96.33^{100}_{88}\)
Evolution | \(22271^{22602}_{22043}\) | \(6686^{6838}_{6504}\) | \(521.66^{524}_{520}\) | \(44.11^{47}_{41}\) | \(82.55^{89}_{78}\)
Hill Climbing | \(5154^{5221}_{5084}\) | \(3044^{3088}_{2968}\) | \(550.88^{555}_{545}\) | \(35.00^{36}_{33}\) | \(68.44^{72}_{65}\)
Random Search | \(100494^{100845}_{100165}\) | \(48661^{48968}_{48163}\) | \(628.77^{631}_{627}\) | \(75.77^{78}_{74}\) | \(141.44^{146}_{137}\)
Simulated annealing | \(14837^{15280}_{14434}\) | \(7610^{7972}_{7363}\) | \(564.88^{567}_{561}\) | \(40.44^{45}_{38}\) | \(80.44^{88}_{75}\)
Tabu search | \(5177^{5239}_{5114}\) | \(2997^{3073}_{2946}\) | \(549.88^{553}_{546}\) | \(34.88^{36}_{34}\) | \(64.55^{66}_{63}\)
Hetero - Static | \(5253^{5313}_{5105}\) | \(3055^{3102}_{2994}\) | \(529.22^{535}_{527}\) | \(35.88^{38}_{34}\) | \(70.11^{72}_{68}\)
Hetero - P-R | \(5378^{5470}_{5281}\) | \(3051^{3112}_{2945}\) | \(529.44^{536}_{527}\) | \(36.22^{37}_{35}\) | \(70.22^{78}_{66}\)
Hetero - P-RG | \(5386^{5591}_{5228}\) | \(3048^{3143}_{2981}\) | \(528.33^{533}_{526}\) | \(36.22^{37}_{35}\) | \(69.22^{73}_{68}\)
Hetero - P-MD | \(20494^{21590}_{19769}\) | \(6521^{6741}_{5764}\) | \(520.33^{523}_{518}\) | \(36.11^{37}_{34}\) | \(70.22^{74}_{68}\)
Hetero - P-LQI | \(5084^{5189}_{5027}\) | \(3011^{3119}_{2942}\) | \(523.66^{527}_{521}\) | \(36.00^{37}_{34}\) | \(69.66^{74}_{67}\)
Hetero - P-AF | \(5058^{5126}_{5002}\) | \(3033^{3089}_{2985}\) | \(528.30^{537}_{525}\) | \(35.77^{38}_{34}\) | \(68.88^{72}_{66}\)
Hetero - P-BH | \(5155^{5284}_{5101}\) | \(3050^{3137}_{2984}\) | \(525.44^{528}_{523}\) | \(36.11^{38}_{35}\) | \(69.77^{76}_{65}\)
Hetero - P-BM | \(4985^{5035}_{4912}\) | \(3014^{3062}_{2947}\) | \(523.22^{527}_{520}\) | \(35.66^{38}_{33}\) | \(70.00^{72}_{68}\)
Hetero - P-QI | \(5022^{5098}_{4951}\) | \(3005^{3080}_{2960}\) | \(525.22^{532}_{522}\) | \(36.22^{37}_{35}\) | \(69.11^{72}_{65}\)
Hetero - P-QM | \(6897^{7684}_{6127}\) | \(3326^{3441}_{3215}\) | \(526.66^{532}_{523}\) | \(36.22^{38}_{34}\) | \(71.11^{73}_{69}\)
Hetero - P-BC | \(5168^{5271}_{5075}\) | \(3039^{3110}_{2962}\) | \(525.66^{532}_{522}\) | \(36.22^{38}_{35}\) | \(68.44^{73}_{65}\)

Optimization method | COf04 | COf08 | COf14 | COf17 | MLWilt
---|---|---|---|---|---
Brute force | \(366.8^{946.2}_{164.0}\) | \(35410^{72908}_{7263}\) | \(17.79^{32.75}_{8.953}\) | \(8.3948^{10.792}_{4.9845}\) | \(0.0540^{0.0539}_{0.0539}\)
Differential evolution | \(5.104^{9.297}_{0.742}\) | \(48.124^{104.33}_{5.5458}\) | \(0.106^{0.182}_{0.036}\) | \(1.3245^{1.7501}_{0.7507}\) | \(0.0128^{0.0132}_{0.0124}\)
Evolution | \(16.25^{24.87}_{10.94}\) | \(0.0013^{0.0015}_{0.0008}\) | \(0.000^{0.000}_{0.000}\) | \(0.9105^{1.9234}_{0.2749}\) | \(0.0128^{0.0134}_{0.0124}\)
Hill Climbing | \(187.4^{476.5}_{61.69}\) | \(0.0011^{0.0018}_{0.0007}\) | \(0.000^{0.000}_{0.000}\) | \(5.4737^{6.3656}_{4.9727}\) | \(0.0138^{0.0143}_{0.0132}\)
Random Search | \(91.065^{100.58}_{81.68}\) | \(1782.9^{2287.9}_{1416.9}\) | \(4.093^{5.145}_{3.376}\) | \(3.7950^{4.4831}_{3.1385}\) | \(0.0132^{0.0134}_{0.0126}\)
Simulated annealing | \(76.01^{147.3}_{19.91}\) | \(5.3058^{8.9798}_{0.1430}\) | \(0.000^{0.001}_{0.000}\) | \(5.8375^{8.7360}_{3.7791}\) | \(0.0135^{0.0141}_{0.0126}\)
Tabu search | \(208.5^{519.3}_{50.74}\) | \(0.0012^{0.0014}_{0.0009}\) | \(0.000^{0.000}_{0.000}\) | \(5.3373^{6.7744}_{4.4686}\) | \(0.0138^{0.0143}_{0.0130}\)
Hetero - Static | \(11.17^{24.87}_{5.970}\) | \(0.0024^{0.0029}_{0.0018}\) | \(0.000^{0.000}_{0.000}\) | \(1.8958^{2.8896}_{0.4755}\) | \(0.0128^{0.0134}_{0.0124}\)
Hetero - P-R | \(2.161^{3.980}_{0.000}\) | \(0.0010^{0.0020}_{0.0004}\) | \(0.000^{0.000}_{0.000}\) | \(1.0786^{1.6476}_{0.3701}\) | \(0.0128^{0.0132}_{0.0124}\)
Hetero - P-RG | \(2.211^{5.970}_{0.000}\) | \(0.0013^{0.0019}_{0.0002}\) | \(0.000^{0.000}_{0.000}\) | \(1.0616^{1.5171}_{0.2756}\) | \(0.0131^{0.0134}_{0.0126}\)
Hetero - P-MD | \(1.686^{13.93}_{0.000}\) | \(0.0018^{0.0022}_{0.0014}\) | \(0.000^{0.000}_{0.000}\) | \(0.8883^{1.5104}_{0.1674}\) | \(0.0128^{0.0132}_{0.0124}\)
Hetero - P-LQI | \(0.994^{2.985}_{0.000}\) | \(0.0019^{0.0026}_{0.0008}\) | \(0.000^{0.000}_{0.000}\) | \(1.5419^{2.7759}_{0.9240}\) | \(0.0130^{0.0132}_{0.0124}\)
Hetero - P-AF | \(1.106^{2.985}_{0.000}\) | \(0.0026^{0.0037}_{0.0018}\) | \(0.000^{0.000}_{0.000}\) | \(1.2920^{2.2167}_{0.4295}\) | \(0.0130^{0.0132}_{0.0124}\)
Hetero - P-BH | \(7.366^{23.58}_{0.000}\) | \(0.0016^{0.0032}_{0.0000}\) | \(0.000^{0.000}_{0.000}\) | \(1.0468^{1.3702}_{0.6861}\) | \(0.0128^{0.0132}_{0.0124}\)
Hetero - P-BM | \(0.884^{2.985}_{0.000}\) | \(0.0026^{0.0037}_{0.0018}\) | \(0.000^{0.000}_{0.000}\) | \(0.9843^{1.4008}_{0.4704}\) | \(0.0130^{0.0132}_{0.0124}\)
Hetero - P-QI | \(1.216^{2.985}_{0.000}\) | \(0.0015^{0.0019}_{0.0012}\) | \(0.000^{0.000}_{0.000}\) | \(1.1269^{1.8565}_{0.6168}\) | \(0.0129^{0.0132}_{0.0124}\)
Hetero - P-QM | \(19.08^{38.80}_{6.965}\) | \(0.0022^{0.0035}_{0.0006}\) | \(0.000^{0.000}_{0.000}\) | \(1.7777^{2.8130}_{0.9309}\) | \(0.0131^{0.0134}_{0.0124}\)
Hetero - P-BC | \(3.850^{14.92}_{0.000}\) | \(0.0034^{0.0054}_{0.0015}\) | \(0.000^{0.000}_{0.000}\) | \(1.1631^{1.5244}_{0.6877}\) | \(0.0129^{0.0132}_{0.0124}\)

TABLE III: Results of the planners. The numbers represent the average value of the objective over nine independent runs; the numbers in superscript and subscript show the maximum and minimum value of the objective achieved in the experiments.
In order to run the experiments, all the above described methods were implemented in the JADE [20] multi-agent framework. It allows for simple implementation of distributed systems. Each of the islands corresponds to a single container running (apart from a few management agents) the optimization method, which is again implemented as another agent. The infrastructure provided by the JADE system makes the implementation of the communication and sharing in the system simple.
The results of the experiments are provided in Table III. It shows, for each of the problems and for each setting of both the homogeneous and heterogeneous islands, the average objective value as well as the minimum and maximum objective over nine independent runs.
We can immediately see that, rather unsurprisingly, for the homogeneous models, different methods work better for different optimization problems. For the TSP and VC problems, the tabu search and hill climbing provide the best results, for the BPP, the best methods are tabu search and evolutionary algorithm. In continuous optimization, the differential evolution and evolutionary algorithm provide good results, and in some cases also hill climbing and tabu search work well. Finally, with the ML problem, the difference between the methods is rather small, still, evolution and differential evolution have the best average of the nine runs.
If we compare the static heterogeneous islands, where each method has two instances and no re-planning is used, we can already see one of the advantages of the heterogeneous models – the results are always close to the results of the best homogeneous method, and in some cases (COf04 and ML) the results are even slightly better. This means that even without re-planning it may make sense to run heterogeneous models instead of the homogeneous ones, in case multiple CPUs are available and parallel optimization is desired. The use of heterogeneous islands seems to remove the need to select the correct optimization algorithm and can in fact be considered an algorithm selection method.
The different planners significantly influence the results. In all cases, the best planner (for a given problem) is better than the static case without re-planning. In fact, the heterogeneous model with the P-BM (best material) planner has at least the same performance as the static heterogeneous model in most cases (except COf08 and ML). A similar observation holds for the P-QI (quantity of improvement) planner.
The heterogeneous island model with re-planning also provides better results than the best homogeneous island model in six of the ten cases (one TSP benchmark, the bin-packing benchmark, and all four of the CO benchmarks). In the other cases, the homogeneous tabu search was better for the other TSP benchmark and for both the vertex cover benchmarks. The differential evolution and the evolutionary algorithms were better for the hyper-parameter tuning benchmark. However, in the latter case, the differences between the methods are negligible. In the cases where the homogeneous islands provided the best results, the difference between the homogeneous island and the best heterogeneous one was small. For example, in the TSP662 benchmark, the tabu search found on average a solution of length 2997, while the P-QI found a solution of length 3005. The biggest difference between the homogeneous islands and the heterogeneous islands was observed on the VCfrb10040 benchmark, where the tabu search got a solution with an average objective of 64.55 compared to 68.44 for the best heterogeneous model.
Planner | TSP1083 | TSP662 | BPP1000 | COf04 | COf08 | COf14 | COf17 | VCfrb10040 | VCfrb59265 | MLWilt | Sum
---|---|---|---|---|---|---|---|---|---|---|---
Static | 0 | 1 | 0 | 0 | 0 | 1 | 2 | 0 | 1 | 4 | 9
P-R | 0 | 2 | 0 | 1 | 7 | 3 | 3 | 1 | 0 | 1 | 18
P-RG | 0 | 3 | 0 | 3 | 4 | 2 | 3 | 0 | 0 | 0 | 15
P-MD | 0 | 0 | 8 | 6 | 1 | 0 | 3 | 0 | 1 | 3 | 22
P-LQI | 4 | 5 | 4 | 2 | 1 | 3 | 0 | 2 | 1 | 1 | 23
P-AF | 5 | 3 | 0 | 4 | 0 | 3 | 1 | 3 | 2 | 1 | 22
P-BH | 0 | 1 | 0 | 1 | 3 | 4 | 4 | 2 | 0 | 3 | 18
P-BM | 9 | 4 | 5 | 4 | 0 | 3 | 3 | 0 | 1 | 1 | 30
P-QI | 6 | 3 | 2 | 2 | 4 | 5 | 3 | 3 | 0 | 1 | 29
P-QM | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 2 | 1 | 6
P-BC | 0 | 2 | 2 | 1 | 1 | 0 | 2 | 4 | 0 | 2 | 14
TABLE IV: The number of times the given planner found a solution in the top
quartile of the solutions found by all the methods.
In order to compare the various planners among themselves, we also computed how many times each planner found a result that is among the top results found overall. We define a result to be a top result if it lies in the top quartile of all the results found by all the methods. The results of this experiment are displayed in Table IV. As we made nine independent runs, the best a planner can achieve is to have all nine results among the top overall. This happened only once, for the P-BM planner in the TSP1083 benchmark. Overall, the P-BM planner and the P-QI planner provide the best results, with the former finding a top result in 30 cases and the latter providing such results in 29 cases. For comparison, the static heterogeneous islands found a top solution only in 9 runs.
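The counting itself is simple; the sketch below shows one possible way to compute the entries of Table IV (tie handling at the quartile boundary is our assumption).

```
import numpy as np

def top_quartile_counts(results):
    """results: {planner: list of 9 objective values} for one benchmark (minimization).
    Returns, per planner, how many runs fall into the best quartile overall."""
    all_values = np.concatenate([np.asarray(v, dtype=float) for v in results.values()])
    threshold = np.percentile(all_values, 25)       # best quartile = lowest 25 %
    return {name: int(np.sum(np.asarray(vals) <= threshold))
            for name, vals in results.items()}
```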
Interestingly, the random planners provide quite good results for the CO benchmarks, where in one case the P-R planner found a solution in the top quartile in 7 of the 9 runs. On the other hand, the random planners failed in most of the other benchmarks. The P-MD planner based on the method description obtained good results in the BP benchmark and in the COf04 benchmark (with 8 and 6 top results, respectively), but in the rest of the benchmarks its performance was rather poor. The P-BC planner is based on the history of the best individual and thus requires the most complex changes in the optimization methods. However, its performance is not very good (comparable to the random planners) and thus it does not seem to be worth the more complex implementation.
## V Conclusion and Future Work
We presented an algorithm for online portfolio selection. The presented model is a generalization of the homogeneous island model to cases where each island runs a different optimization method. As such, it can be considered a way to create hybrid optimization algorithms. Thanks to the modular implementation, methods can be hybridized easily and the only requirement is that they share the encoding of the solution.
We have shown that the heterogeneous island model provides better or comparable results compared to the best method executed in the homogeneous island model. It means that if multiple CPU cores should be used for optimization and the method needs to be selected, it is better to use the heterogeneous model that selects the method automatically than to use the homogeneous model and select the best method in multiple runs. As such, the heterogeneous models can also be considered an algorithm selection method that does not require any interaction with the user.
So far, we have experimented with only seven different optimization methods with fixed parameters. However, in principle, we could use many more methods or different sets of their parameters. While methods with different parameters could be considered completely different methods and used in the same way as described in this paper, it may make sense to define a similarity between the methods and take it into account during the planning. Such extensions are left for future work.
## Acknowledgment
Martin Pilát has been supported by Czech Science Foundation project number 17-17125Y. Štěpán Balcar has been supported by SVV project number SVV-2018-260451.
## References
* [1] C.J. Colbourn and M.J. Colbourn. _Algorithms in Combinatorial Design Theory_. North-Holland Mathematics Studies. Elsevier Science, 1985.
* [2] L. A. Rastrigin. About convergence of random search method in extremal control of multi-parameter systems. _Avtomat. i Telemekh_, 24:1467–1473, 1963.
* [3] Christopher C. Skiścim and Bruce L. Golden. Optimization by simulated annealing: A preliminary computational study for the tsp. In _Proceedings of the 15th Conference on Winter Simulation - Volume 2_, WSC ’83, pages 523–535, Piscataway, NJ, USA, 1983. IEEE Press.
* [4] Fred Glover. Future paths for integer programming and links to artificial intelligence. _Comput. Oper. Res._, 13(5):533–549, May 1986.
* [5] John H. Holland. _Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence_. MIT Press, Cambridge, MA, USA, 1992.
* [6] Kenneth Price, Rainer M. Storn, and Jouni A. Lampinen. _Differential Evolution: A Practical Approach to Global Optimization (Natural Computing Series)_. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005.
* [7] John R Rice. The algorithm selection problem. In _Advances in computers_, volume 15, pages 65–118. Elsevier, 1976.
* [8] Pavel Brazdil, Christophe Giraud-Carrier, Carlos Soares, and Ricardo Vilalta. _Metalearning: Applications to Data Mining_. Springer Publishing Company, Incorporated, 1 edition, 2008.
* [9] Carla P. Gomes and Bart Selman. Algorithm portfolios. _Artificial Intelligence_, 126(1):43 – 62, 2001.
* [10] Marius Lindauer, Holger Hoos, and Frank Hutter. From sequential algorithm selection to parallel portfolio selection. In _International Conference on Learning and Intelligent Optimization_, pages 1–16. Springer, 2015.
* [11] J. Kacprzyk and W. Pedrycz. _Springer Handbook of Computational Intelligence_. Springer Handbooks. Springer Berlin Heidelberg, 2015.
* [12] Y. Gong and A. Fukunaga. Distributed island-model genetic algorithms using heterogeneous parameter settings. In _2011 IEEE Congress of Evolutionary Computation (CEC)_, pages 820–827, June 2011.
* [13] M. Pilát and R. Neruda. Combining multiobjective and single-objective genetic algorithms in heterogeneous island model. In _IEEE Congress on Evolutionary Computation_, pages 1–8, July 2010.
* [14] Štěpán Balcar and Martin Pilát. Heterogeneous island model with re-planning of methods. In _GECCO 2018 (Companion)_, pages 1–2. ACM, 2018. accepted.
* [15] Shen Lin. Computer solutions of the traveling salesman problem. _The Bell System Technical Journal_, 1965.
* [16] A. E. Eiben and James E. Smith. _Introduction to Evolutionary Computing_. Springer Publishing Company, Incorporated, 2nd edition, 2015.
* [17] J.D. Ullman. _The Performance of a Memory Allocation Algorithm_. Technical report (Princeton University. Dept. of Electrical Engineering. Computer Sciences Laboratory). Princeton University, 1971.
* [18] Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. The weka data mining software: An update. _SIGKDD Explor. Newsl._, 11(1):10–18, November 2009.
* [19] M. Lichman. UCI machine learning repository, 2013.
* [20] Fabio Luigi Bellifemine, Giovanni Caire, and Dominic Greenwood. _Developing multi-agent systems with JADE_, volume 7. John Wiley & Sons, 2007.
###### Abstract
The situation in particle physics after the discovery of the Higgs boson is discussed. Is the Standard Model a consistent quantum field theory? Does it describe all experimental data? Are there any indications of physics beyond the SM? Is there another scale except for the EW and the Planck ones? Is the SM of particle physics compatible with Cosmology? New challenges of hadron physics: exotic hadrons and dense hadronic matter. Search for new physics, from the Higgs sector to dark matter, supersymmetry, extra dimensions and compositeness. What do we expect? What are the main targets? We try to answer the main questions and describe the key issues of possible new physics beyond the SM.
**LANDSCAPE VIEW AT THE EDGE OF A MYSTERY**
**D.I. Kazakov**
_Bogoliubov Laboratory of Theoretical Physics,_
_Joint Institute for Nuclear Research, Dubna_
_Alikhanov Institute for Theoretical and Experimental Physics, Moscow_
_Moscow Institute of Physics and Technology, Dolgoprudny_
## 1 Introduction: The Standard Theory
With the launch of the LHC we approached the mystery land that lies beyond the TeV border line. We do not know what is hidden there, and our task is to be prepared and not to miss the new expected or unexpected phenomena. The guiding line here is our knowledge of physics at lower scales, first of all of the Standard Model of fundamental interactions, which to our current understanding seems to be complete. Our search for possible new physics is based on the comparison of experimental data with the SM predictions. From this point of view it is useful to look back at the SM and recall its basic features.
The Standard Model (Theory) is the gauge quantum field theory based on the following main principles:
* Three gauged symmetries \(SU_{c}(3)\times SU_{L}(2)\times U_{Y}(1)\) corresponding to the strong, weak and electromagnetic interactions, respectively;
* Three families of quarks and leptons belonging to the representations (\(3\times 2,3\times 1,1\times 2,1\times 1\)) of the gauge groups;
* The Brout-Englert-Higgs mechanism of spontaneous EW symmetry breaking accompanied with the Higgs boson;
* The CKM and PMNS mixing matrices of flavours;
* The CP violation via the phase factors in these matrices;
* Confinement of quarks and gluons inside hadrons;
* The Baryon and Lepton number conservation;
* The CPT invariance leading to the existence of antimatter.
The ST principles allow:
* Extra families of quarks and leptons — seem to be excluded experimentally already;
* Presence or absence of right-handed neutrino — still unclear;
* Majorana/Dirac nature of neutrino — the Majorana mass is slightly beyond the SM;
* Extra Higgs bosons — Already beyond but in the spirit of the SM.
The main questions to the Standard Theory (ST) can be formulated as:
* Is it self consistent?
* Does it describe all experimental data?
* Are there any indications of physics beyond the SM?
* Is there another scale except for the EW and the Planck ones?
* Is it compatible with Cosmology?
There are also many ”why’s” and ”how’s”:
| Why’s | How’s |
|---|---|
| why \(SU(3)\times SU(2)\times U(1)\)? | how does confinement actually work? |
| why 3 generations? | how does the quark-hadron phase transition happen? |
| why quark-lepton symmetry? | how do neutrinos get a mass? |
| why V-A weak interaction? | how does CP violation occur in the Universe? |
| why L-R asymmetry? | how to protect the SM from would-be heavy scale physics? |
| why B & L conservation? | |
| etc. | |
In what follows we will try to answer the main questions and describe the key issues of possible physics beyond the SM.
## 2 Is the SM consistent quantum field theory?
### Ghosts
A long-known property of the SM is that the running couplings possess Landau ghost poles at high energies. This is true for the U(1) coupling and for the Higgs coupling, but only beyond the Planck scale, as shown in Fig.1.
<figure><img src="content_image/1511.09283/x1.png"><figcaption>Figure 1: The running of the U(1) and the Higgs couplings (left) and theLandau ghost pole (right)</figcaption></figure>
The Landau pole has a wrong-sign residue that indicates the presence of unphysical ghost fields, an intrinsic problem and inconsistency of a theory [1]. The one-loop expression for the hypercharge coupling in the SM
\[\alpha_{1}(Q^{2})=\frac{\alpha_{10}}{1-\frac{41}{10}\frac{\alpha_{10}}{4\pi} \log(Q^{2}/M_{Z}^{2})}\] (1)
possesses the ghost pole at the scale \(Q^{*}=M_{Z}e^{\frac{20\pi}{41\alpha_{10}}}\sim 10^{41}\ GeV\). It is far beyond the Planck scale and one may ignore it assuming that the Planck scale quantum gravity will change the situation. However, quantum gravity is still lacking and the presence of ghosts is intrinsically dangerous independently of the scale where they appear. The situation may change in GUTs due to new heavy fields at the GUT scale. In any case, this requires modification of the ST at VERY high energies.
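As a quick numerical illustration (not part of the original discussion), one can evaluate the pole position directly; the input value \(\alpha_{10}\approx 1/59\) at the \(Z\) mass is an assumption of ours.

```
# Illustrative numerical check of the Landau pole position, Eq. (1).
import math

alpha_10 = 1.0 / 59.0          # assumed GUT-normalised hypercharge coupling at M_Z
M_Z = 91.19                    # GeV
Q_star = M_Z * math.exp(20 * math.pi / (41 * alpha_10))
print(f"Q* ~ {Q_star:.1e} GeV")   # of order 10^41 GeV, far beyond the Planck scale
```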
### Anomalies
As is well known, in quantum theories there may exist anomalies that can ruin the theory. In the SM there is a set of quantum anomalies. A famous example is the triangle chiral anomaly [2]. Its contribution to the electron-neutrino scattering amplitude is shown in Fig.2. It would destroy renormalizability if not cancelled among quarks and leptons. The other anomalies existing in the SM are shown in Fig.3 [3].
<figure><img src="content_image/1511.09283/x3.png"><figcaption>Figure 2: The chiral anomaly diagram in the electron-neutrino scatteringamplitude</figcaption></figure>
<figure><img src="content_image/1511.09283/x4.png"><figcaption>Figure 3: The gauge anomalies in the SM</figcaption></figure>
Fortunately, they are all canceled in the SM for each generation of quarks and leptons, as can be seen from expressions below.
\[TrY_{L}=3(\frac{1}{3}+\frac{1}{3})-1-1=0,\]
\[TrY_{q}=3(\frac{1}{3}+\frac{1}{3}-\frac{4}{3}-(-\frac{2}{3}))=0,\]
\[TrY=3(\frac{1}{3}+\frac{1}{3}-\frac{4}{3}-(-\frac{2}{3}))-1-1-(-2)=0.\]
Thus, the cancellation of anomalies requires the quark-lepton symmetry. Probably, this is a hint towards the Grand Unified Theories.
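The cancellation is a matter of simple arithmetic; the following check (ours, using the hypercharge assignments implicit in the expressions above) reproduces the three vanishing sums.

```
# Arithmetic check of the three hypercharge sums quoted above (one generation).
from fractions import Fraction as F

Yq, Yl = F(1, 3), F(-1)                   # left-handed quark and lepton doublets
YuR, YdR, YeR = F(4, 3), F(-2, 3), F(-2)  # right-handed singlets

TrY_L = 3 * 2 * Yq + 2 * Yl               # colour factor 3 for quarks
TrY_q = 3 * (2 * Yq - YuR - YdR)
TrY = TrY_q + 2 * Yl - YeR
print(TrY_L, TrY_q, TrY)                  # 0 0 0 -> the anomalies cancel
```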
### Vacuum stability
Quantum corrections can make the vacuum unstable. Moreover, the whole construction of the SM may be in trouble, being metastable or even unstable. This is related to the Higgs potential, which at the tree level contains quadratic and quartic terms. Due to radiative corrections, the quartic coupling depends on the scale and might change sign at some scale, thus making the EW vacuum unstable. Indeed, this may happen at a high energy scale, as shown in Fig.4 (left) [4]. The situation crucially depends on the top and Higgs mass values and requires severe fine-tuning and accuracy (see Fig.4 (right)). It seems that we are sitting just at the border line, with the top quark and the Higgs boson masses specially adjusted. However, the account of the next-to-leading order corrections is essential and shifts the position towards the stability region, as can be seen in Fig.5 [5].
<figure><img src="content_image/1511.09283/x5.png"><figcaption>Figure 4: The running of the Higgs coupling (left) and the stability of the EWvacuum as a function of the Higgs and top masses</figcaption></figure>
<figure><img src="content_image/1511.09283/x7.png"><figcaption>Figure 5: The vacuum stability point at the NLO and NNLO</figcaption></figure>
In the case when the EW vacuum is indeed metastable, the right question to ask would be: what is the life-time of the ground state? If it is longer than the life-time of the Universe, it is still fine for the SM. Still, the situation requires some caution. The way out might be new physics at a higher scale. One example is supersymmetry: in this case the scalar potential is \(V_{SUSY}=|F|^{2}+|D|^{2}\geq 0\) [6]. If SUSY is broken, the potential can get negative corrections, though the quartic scalar coupling remains positive. The second example is the extended Higgs sector. Several Higgs fields with several Higgs-like couplings push the smallest coupling up (the potential might also have several minima). The third example is provided by GUTs. In a unified theory the Higgs coupling may be attracted by the gauge coupling, thus stabilizing the potential. Note that in all these cases one has an extension of the SM at high energies.
### Scale stability
New physics at a high energy scale may destroy the EW scale of the SM. Indeed, the masses of quarks and leptons and the masses of gauge bosons in the SM are protected against the radiative corrections originating from heavy new physics due to the gauge invariance. However, this is not true for the Higgs mass. The Higgs sector is not protected by any symmetry. Quantum corrections to the Higgs potential from new physics (see Fig.6) are proportional to the heavy mass squared. These huge corrections would destroy the light Higgs potential and eventually the light EW scale of the SM. This creates a hierarchy problem: the coexistence of the light and heavy scales, \(\frac{m_{H}}{m_{GUT}}\sim 10^{-14}\), requires modification of the SM.
<figure><img src="content_image/1511.09283/x9.png"><figcaption>Figure 6: Radiative correction to the Higgs mass due to heavy particles</figcaption></figure>
<figure><img src="content_image/1511.09283/x10.png"><figcaption>Figure 7: Cancellation of the radiative correction to the Higgs mass due tosuper partners</figcaption></figure>
The way out again might be new physics at a higher scale. Two suggestions are popular. The first one is supersymmetry at the TeV scale. In this case, the unwanted radiative corrections are canceled by the super partners of the corresponding particles, as shown in Fig.7 [7]. This cancellation is true up to the SUSY breaking scale. If \(m_{SUSY}\sim 1\) TeV, the light Higgs mass is protected, which suggests an approximate scale of low energy supersymmetry. If, on the contrary, \(m_{SUSY}\gg 1\) TeV, one has the so-called little hierarchy problem that requires the fine tuning of the SUSY parameters [8].
The other proposal to solve the hierarchy problem is related to the extra dimensional theories. In this case the hierarchy is achieved due to the warp factor which appears while going from the so-called Planck brane to the physical brane (Fig.8).
<figure><img src="content_image/1511.09283/x11.png"><figcaption>Figure 8: The Randall-Sundrum type brane world construction</figcaption></figure>
In the Randall-Sundrum brane world construction [9] the gravity scales at the Planck brane and at the TeV brane are related by
\[M^{2}_{Pl}=\frac{M^{3}}{k}(e^{2k\pi R}-1).\] (2)
As a result, the gravity scale at the TeV brane, \(M\), might be small enough not to create the hierarchy problem. Whether any of these scenarios is realized in Nature is unclear.
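A rough numerical illustration of Eq. (2) (our own numbers, assuming \(M\sim k\)): a modest value \(kR\approx 12\) is already enough to generate the required hierarchy between \(M\) and \(M_{Pl}\).

```
# Illustrative check of the warp-factor hierarchy in Eq. (2), assuming M ~ k.
import math

def planck_to_brane_ratio(kR):
    # M_Pl^2 = (M^3/k)(e^{2 pi k R} - 1)  ->  M_Pl / M ~ sqrt(e^{2 pi k R} - 1)
    return math.sqrt(math.exp(2 * math.pi * kR) - 1)

print(f"kR = 11.7 -> M_Pl/M ~ {planck_to_brane_ratio(11.7):.1e}")   # ~ 10^16
```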
## 3 Does the ST describe all experimental data?
The remarkable success of the SM in describing practically all experimental data in particle physics manifests itself in a pool of EW observables (see Fig.9 left) [10]. Almost everywhere one has agreement within 1-2 standard deviations. The only exception are the forward-backward asymmetries in the LEP data, a long-ignored problem usually attributed to the analysis. The same is true for the flavor observables (see Fig.9 right) [11].
<figure><img src="content_image/1511.09283/x12.png"><figcaption>Figure 9: The pool of the EW data (left) and the flavour observables (right)</figcaption></figure>
One has to admit the very impressive progress achieved in the last decade in the EW and QCD perturbative calculations (see e.g. [12]). This became possible due to the development of new techniques and computer codes for multi-loop and multi-leg calculations. Today the accuracy of theoretical calculations competes with that of experimental data, and further progress is on the way in both cases.
For years, a pain in the neck has remained the almost 3 \(\sigma\) discrepancy in the anomalous magnetic moment of the muon, \(a_{\mu}={(g-2)}/2\), as illustrated in Fig.10 [13]. The attempts to fill the gap with new physics contributions are not very successful due to the heaviness of the experimentally allowed new physics. The reason is that the new physics contributions come from virtual particles in the loop, and these diagrams are suppressed by the inverse mass squared of the intermediate particles. Though this explanation is still possible, the main hopes in resolving the \(a_{\mu}\) puzzle are related to the still inaccurate strong interaction contribution or to the new experiment which is on the way.
<figure><img src="content_image/1511.09283/x14.png"><figcaption>Figure 10: The difference between the experimental and theoretical values ofaμ</figcaption></figure>
The other discrepancy which attracted attention is the value of the CKM mixing matrix element \(V_{ub}\) measured recently[14]. This quantity is slightly different when measured in inclusive and exclusive processes. The resolution of this puzzle presumably lies in the theoretical interpretation.
Looking at the other observables and unsolved problems, one has to mention the strong CP problem, which despite being known for several decades still has not found its solution. The elegant way of resolving it with the help of the axion field still lacks experimental confirmation. The existing models with the axion field allow an almost invisible light particle, leaving small chances for its detection [15]. Another field where new physics might appear is rare decays. Here, despite some hopes connected in particular with the \(B_{s}\to\mu\mu\) decay, everything looks fine for the SM so far [16]. QCD is another huge area of activity. It also looks fine, although the spin crisis related to the spin of the proton is still unresolved. Presumably, it is related to parton distributions. The relatively new activity with generalized parton distributions depending on the momentum transfer opens a new field for checking the SM [17]. At last, neutrino physics attracts much attention in recent and coming years. It seems that the neutrino masses and mixings look fine so far but still need to be clarified. The nature of the neutrino (Dirac or Majorana) remains the major puzzle in this field.
In this situation one may wonder why we talk about new physics at all. Everything seems to be described by the SM. It is useful to look back in history and find an analogy of the modern situation. For this purpose, let us go back to the middle of the 20th century. This was the world of a single generation of particles. Indeed, the world around us is made of the first generation. In the middle of the 20th century we had the following set of elementary particles: the proton, the neutron, the electron and, later, the neutrino. The structure of the atom was described in detail (see Fig.11 left). Who expected new physics to come? And at which scale?
<figure><img src="content_image/1511.09283/atom5.png"><figcaption>Figure 11: The structure of the atom (left) and the families of quarks andleptons and the force carriers (right)</figcaption></figure>
We know what happened next: the muon was discovered in cosmic rays. It was first considered a heavy electron and later was recognized as the beginning of the 2nd generation. Then the \(K\)-meson appeared, the strange particle. The follow-up discoveries of new hadrons at accelerators triggered the invention of the quark model and everything looked OK again. Then some problems with the suppression of the flavour changing transitions appeared, which were resolved with the help of the GIM mechanism [18]. The subsequent discovery of the J/\(\psi\) completed the 2nd generation with the charm quark. This second generation looked artificial and unexplained; however, something was missing, since the CP-violation was discovered and called for an interpretation. The Kobayashi-Maskawa mixing matrix gave a hint for the 3rd generation and here we are. Discoveries of the force carriers and eventually of the Higgs boson already in the 21st century completed our picture, as shown in Fig.11 (right). Only the gravitational force with the graviton as a carrier still stands aside. Let me repeat, who expected this in the middle of the 20th century?
There were, however, unanswered questions. The challenge came from astrophysics and cosmology. Although at an earlier stage of their development, they puzzled particle physics with the major problems of the baryon asymmetry of the Universe and the description of the Dark Matter, known already at that time. Remarkably, these problems are still not resolved within the SM today.
## 4 Is there another scale except for the EW and Planck ones?
The expectations of new physics inevitably lead to the question of the scale. Is there any new scale between the EW and the Planck ones? Many new phenomena assume the existence of their proper scales. The foreseeable panorama of high energy physics is shown in Fig.12 [19].
<figure><img src="content_image/1511.09283/x17.png"><figcaption>Figure 12: The high energy physics panorama and possible scales of the new physics</figcaption></figure>
First of all there is the EW scale. All the masses (except for the top quark and the Higgs boson) lie below this scale and form a random pattern as of today. Then there is \(\Lambda_{QCD}\), which is not a fundamental scale but plays an essential role in strong interactions. Moving down from the Planck scale we have subsequently the hypothetical string scale, the GUT scale, the Majorana scale, the vacuum stability scale, and probably some others like the Peccei-Quinn scale, etc. Somewhere in between is the foreseen SUSY scale. There might also be the scale of extra dimensions, positioned anywhere.
What is true in this picture? Will anything reveal itself at the TeV scale? The future, presumably not so distant, will show us what is correct.
## 5 Is it compatible with Cosmology?
The revolutionary development of cosmology in recent years and the appearance of the Standard Model of cosmology, the \(\Lambda\)CDM model [20], allow one to compare the predictions of the Standard Model of particle physics with those of cosmology where they intersect. The main issues are:
* Baryon asymmetry of the Universe. The ratio of the number of baryons minus the number of anti-baryons in the Universe to the number of photons is given by an approximate formula [21] \[\frac{N(B)-N(\bar{B})}{N_{\gamma}}\sim(6.19\pm 0.14)\times 10^{-10}.\] (3) This number is still not explained in the SM and may require a modification of the SM in the future. It requires larger CP-violation than in the strong sector of the SM, giving some hints toward its leptonic nature.
* Relic abundance of the Dark Matter. According to recent data from the Planck mission [22], the energy balance of the Universe has the following composition \[Ordinary\ Matter=4.9\%,\ Dark\ Matter=26.8\%,\ Dark\ Energy=68.3\%\] (4) The problem of the Dark Matter content is a problem of particle physics and seems to be beyond the SM. We will come back to this point later.
* Number of neutrinos. The recent combined data from the Cosmic Microwave Background (CMB), the baryon acoustic oscillations (BAO), the WMAP polarization data (WP), the Hubble Space Telescope (HST) and the high-l temperature power spectrum (highL) give for the number of neutrinos the value [22] \[N_{eff}(\nu)=3.52\pm 0.47\ \ at\ 95\%\ CL,\] (5) which suits well the SM with 3 generations, assuming quark-lepton symmetry.
* Masses of neutrinos. From the same CMB, WP and HST data plus the gravitational lensing one gets the bound on the neutrino masses [23] \[\sum m_{\nu}<1.11(0.22)\ eV,\] (6) which is even stronger than that from neutrino experiments. These extremely light neutrinos probably give us a hint towards new physics responsible for their smallness, such as the see-saw mechanism and the Majorana nature of the neutrino.
## 6 Hadron physics
Looking back at the SM as the highest achievement in the description of matter we find some problems that were put aside in our race for the highest energy and intensity, namely, the problem of confinement and the problem of hot dense hadronic matter.
### Confinement and exotic hadrons
The understanding of confinement is a challenging problem of particle physics that lies well inside the SM. Is it time to come back to it? We still do not understand how confinement actually works, why colorless states are the only observables, and which bound states exist in Nature. Lattice calculations seem to shed some light on it: we know that in mesons the quark and anti-quark are linked by a gluon string which has a tension and thus provides a linearly growing potential leading to confinement. Trying to break this string, one actually creates a new quark-antiquark pair, thus again obtaining colorless mesons. For baryons the situation is more sophisticated: the strings form a Mercedes-Benz star, with the same result as for mesons. However, it is still unclear how these strings are formed and why they are restricted to colorless states. And even if so, what about other colorless states like the tetraquark, the pentaquark, the hexaquark, etc.? Do they exist in Nature? (see Fig.13 [24]).
<figure><img src="content_image/1511.09283/dibujo20111007_hadrons_and_exotic_hadrons_in_qcd.png"><figcaption>Figure 13: Possible exotic colorless hadrons and newly discovered tetraquarks</figcaption></figure>
According to recent data, pentaquark hadrons have at last been unequivocally discovered at the LHC by the LHCb collaboration [25].
These new states require an adequate description, probably within lattice gauge theories, the holographic approach, or dual gauge theories. Or maybe we are back to analyticity and unitarity?
### Dense hadron matter
Dense hadron matter might well be a new phase of matter with yet unknown properties, which has no name so far. It is known that at high density (high temperature) the usual description of hadron matter is not valid. Hadrons do not exist above the Hagedorn temperature [26]. What happens to a hadron gas at high pressure? How does one reach the new phase? What is the relevant description? The popular phase diagram of hadron matter is shown in Fig.14 [27]. Here \(T\) is the temperature and \(\mu_{B}\) is the baryon chemical potential.
<figure><img src="content_image/1511.09283/PhaseDiag.jpeg"><figcaption>Figure 14: The phase diagram of hadron matter</figcaption></figure>
<figure><img src="content_image/1511.09283/x18.png"><figcaption>Figure 15: The nuclear phase diagram in different representations</figcaption></figure>
Usually, it is assumed that the phase diagram contains several phases with phase transitions and critical points. The high temperature phase is usually referred to as the deconfined one. To check whether this is true, one uses various methods including statistical mechanics, nonequilibrium thermodynamics, hydrodynamics, and dual holographic models. There are several microscopic and macroscopic models [28]. As an example we show below (Fig.15) the nuclear phase diagram in different representations for different parameters [29]. It represents rich new phenomena which still have to be explored.
## 7 Search for new physics
### The Higgs boson
The Higgs boson still remains target #1 in the search for new physics. And though there is no doubt that the discovered particle is a CP-even scalar with all the properties of the Higgs boson, the main question remains: is it the SM Higgs boson or not? Are there alternatives to a single Higgs boson of the SM? The answer is positive. One may consider the singlet, doublet and triplet extensions of the SM, or their combinations [30]. The guiding principle for these extensions is custodial symmetry: an approximate global symmetry \(SU(2)_{L}\times SU(2)_{R}\) exists, broken by the vev down to the diagonal custodial group, \(SU(2)_{L}\times SU(2)_{R}\to SU(2)_{L+R}\). The custodial symmetry of the SM is responsible for the ratio
\[\rho=\frac{M_{W}^{2}}{M_{Z}^{2}\cos^{2}\theta_{W}}=1\] (7)
at the tree level. In the case of various extensions, when the Higgs field(s) transform under \(SU(2)_{L}\times SU(2)_{R}\) as \(\Phi\to L\Phi R\), the \(\rho\)-parameter can be constructed starting from the isospin and hypercharge values of the Higgs multiplets [30]
\[\rho=\frac{\sum\limits_{i=0}^{n}[I_{i}(I_{i}+1)-\frac{1}{4}Y^{2}_{i}]v_{i}}{ \sum\limits_{i=0}^{n}\frac{1}{2}Y^{2}_{i}v_{i}}.\] (8)
For both \(SU(2)\) singlet with \(Y=0\) and \(SU(2)\) doublet with \(Y=\pm 1\) one has \(\rho=1\). Moreover, any number of singlets and doublets respect custodial symmetry at the tree level. This is not so for an arbitrary number of triplets.
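For orientation, these statements can be checked numerically. The short Python sketch below is only an illustration: the multiplet assignments and vev values are made-up placeholders, and it assumes that \(v_{i}\) in Eq. (8) stands for the squared vev of the \(i\)-th multiplet.

```python
# Minimal numerical check of the tree-level rho parameter, Eq. (8), assuming
# v_i there denotes the squared vev of the i-th Higgs multiplet.
def rho(multiplets):
    """multiplets: list of (isospin I, hypercharge Y, vev) tuples."""
    num = sum((I * (I + 1) - 0.25 * Y**2) * v**2 for I, Y, v in multiplets)
    den = sum(0.5 * Y**2 * v**2 for I, Y, v in multiplets)
    return num / den

print(rho([(0.5, 1.0, 246.0)]))                     # one SM-like doublet   -> 1.0
print(rho([(0.5, 1.0, 246.0), (0.5, 1.0, 10.0)]))   # two doublets          -> 1.0
print(rho([(0.5, 1.0, 246.0), (1.0, 2.0, 5.0)]))    # doublet + Y=2 triplet -> below 1
```

Any combination of \(Y=0\) singlets and \(Y=\pm 1\) doublets leaves the ratio at unity, while a triplet with a nonzero vev generically shifts it away from one, in line with the statement above.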
How can one probe whether the Higgs boson is that of the SM? There are two ways to do it. First of all, one has to probe deviations from the SM Higgs couplings (see Fig.16 [31]).
<figure><img src="content_image/1511.09283/HiggsComparisonLC500-300x223.png"><figcaption>Figure 16: The accuracy of the measurement of the Higgs couplings at various accelerators (left) and the required precision to distinguish SUSY from the 2HDM (right)</figcaption></figure>
The name of the game is precision. At the few percent level one can distinguish, for example, the two Higgs doublet model of the MSSM type from the SM [32].
The second way is the direct search for additional scalars. In various extensions one can have extra CP-even, CP-odd and charged Higgs bosons. As an example, we present in Fig.17 the spectrum of the Higgs bosons in supersymmetric models (the MSSM, a two-Higgs-doublet model, and the NMSSM, which adds a singlet).
<figure><img src="content_image/1511.09283/x20.png"><figcaption>Figure 17: The field content and the spectrum in various models of the Higgs sector</figcaption></figure>
It may well be that we have found one of the light states, which may not even be the lightest one. The latter may have very weak couplings and thus not be detectable [33].
Higgs physics has already started. This is a task of vital importance to be fulfilled at the LHC, but it may require an electron-positron collider.
### The Dark matter
Target #2 is the Dark Matter. We know now that the amount of the Dark Matter in the Universe exceeds that of the usual matter by a factor of 5 (see eq.(4)), but we do not know what it is made of. Some possible candidates are:
* Macro objects – not seen in our Galaxy
* New particles:
  – a heavy right-handed neutrino – not favorable but possible
  – in any case, not a particle of the SM
Our best chance to detect the Dark matter particle would be via the weak interaction. The so-called WIMP (weakly interacting massive particle) can be simultaneously detected in three spheres: via annihilation in the halo of our Galaxy (irregularities in cosmic ray spectra), via scattering on a target in underground experiments (recoil of a nucleus) and via creation at accelerators (missing energy) (see Fig.18 (left) [34]).
<figure><img src="content_image/1511.09283/x22.png"><figcaption>Figure 18: Detection of WIMPs in three different spheres (left) and experimental data for the underground experiments and accelerators (right)</figcaption></figure>
This search is already under way, with a negative result so far. The typical plot is shown in Fig.18 (right) [35], where the results of the direct searches and the collider experiments are presented. One can see the complementary nature of these studies, with the advantage of the accelerators at low masses and the advantage of the underground experiments at higher WIMP masses. The latter have already almost reached the neutrino floor, where the background of neutrinos will be prevailing [36]. Apparently, WIMPs are our best chance, though we may have to look elsewhere as well.
### Supersymmetry
Supersymmetry is an obvious target #3. Supersymmetry is a dream of a unified theory of all particles and interactions [37].
<figure><img src="content_image/1511.09283/x23.png"><figcaption>Figure 19: Particle content of Minimal SUSY model</figcaption></figure>
Supersymmetry remains, to this date, a well-motivated, much anticipated extension to the Standard Model of particle physics [38].
With the advent of the LHC a huge new range of SUSY masses is within reach. However, a search is defined by its signature and by its background estimation method. Still, if SUSY is the answer to the naturalness problem, then there must exist light colored particles. The typical spectrum of SUSY particles consistent with the naturalness paradigm is shown in Fig.20 [39]. At the left, it is shown how the scale of SUSY searches has shifted after the first run of the LHC.
<figure><img src="content_image/1511.09283/x24.png"><figcaption>Figure 20: The typical SUSY mass spectrum</figcaption></figure>
Many available supersymmetric models differ mostly in the way supersymmetry is broken. Since this problem has not found an obvious solution, one is left with phenomenological sets of parameters motivated either by a simplification of the parameter space, as in the MSSM with the universality requirement, or by a restricted number of experimental signatures, as in the so-called simplified models.
In both cases the experimental data on direct SUSY searches and the indirect SUSY contributions to rare decays, the relic dark matter abundance, the lightest Higgs mass, etc., push the limits on SUSY masses to the few-TeV scale [40], which makes the observation more problematic. Moreover, pushing the SUSY threshold even further, we start losing the main motivation for low energy SUSY, namely the solution of the hierarchy problem and the unification of the gauge couplings. Note, however, that the conclusions crucially depend on the model applied, as one may see from Fig.21 below [33]. Going from the MSSM to the NMSSM, for example, allows one to incorporate the 125 GeV Higgs mass and still keep light superpartners.
<figure><img src="content_image/1511.09283/1a.png"><figcaption>Figure 21: The SUSY reach of the LHC in the SUSY mass plane for the CMSSM (left) and NMSSM (right)</figcaption></figure>
The absence of a model-independent way of making predictions and analyses makes it difficult to put strict limits on low energy supersymmetry. However, there is a crucial moment now: either we find SUSY at the LHC eventually, or we might have no other chance. Then we have to solve the hierarchy problem some other way! (which way?).
### Extra Dimensions/ Exotics
The extra dimensional approach might be an alternative to low energy supersymmetry or might also include SUSY within the brane world framework. Usually, two main versions of extra dimensions are considered: the compact extra dimensions à la Kaluza-Klein (the ADD scenario [41]) or the large extra dimensions (the Randall-Sundrum scenarios [9]). Schematically, they are shown in Fig.22 [42].
<figure><img src="content_image/1511.09283/x26.png"><figcaption>Figure 22: Compact or large extra dimensions scenarios</figcaption></figure>
These kinds of models demonstrate a significant departure from the Standard Model, since they not only contain new fields and interactions, but the whole framework of renormalizable quantum field theory is left behind. Apparently, this approach requires a new technique which is still to be developed. I will present my view on the extra-dimensional brane world scenario in the form of a dialogue.
Q: Do we really live on a brane?
A: We have to check it.
Q: Do we have good reasons to believe in it?
A: No, but it is appealing.
Q: Why \(D>4\)?
A: String theory loves it.
Q: Is it what we believe in?
A: We believe in BIG deal!
The phenomenology of extra dimensions is quite rich, though it is not linked to any particular scale. Possible experimental manifestations include: the search for a \(Z^{\prime}\) (di-muon events), the search for a \(W^{\prime}\) (single muon/jets), the search for a resonance decaying into \(t\bar{t}\), the search for diboson resonances, and the search for monojets plus invisible particles.
Besides extra dimensions, there are a lot of exotic possibilities. None of them has been found so far, though one cannot say a priori what is realized in Nature. Some common topics are listed below: leptoquarks, long-lived particles, off-pointing photons, excited fermions, contact interactions, etc. The drawback of all these approaches is the lack of real motivation and hence the arbitrariness of the scale of new physics.
### Compositeness
Compositeness is in a sense a natural continuation of the chain of particle physics, starting from the atom and going down to quarks. The question is: moving to higher energies or smaller distances, do we have to stay with the same fundamental particles, or does a new level appear? In answering this question, one first of all looks at the Higgs boson, with an obvious analogy to the \(\pi\)-meson as a pseudo Nambu-Goldstone boson of chiral symmetry. One has in mind a construction in which some global symmetry group \(G\) is broken down to the symmetry group of the Standard Model \(H\) (see Fig.23) [43].
<figure><img src="content_image/1511.09283/x27.png"><figcaption>Figure 23: Breaking of the global group G down to the SM subgroup H</figcaption></figure>
As a result, the Higgs boson becomes a pseudo Nambu-Goldstone particle like the \(\pi\)-meson, and the \(W\) and \(Z\) bosons have an analogy with the vector \(\rho\)-meson. There should also be excited states like \(\pi^{\prime},\pi^{\prime\prime},\rho^{\prime},\rho^{\prime\prime}\), etc.
The advantage of this approach is that there is no artificial scalar field: everything is dictated by the symmetry group. The masses of these states are protected from high-energy contributions, thus eliminating the hierarchy problem.
One can go even further and consider quarks and leptons also as composite states made of some _preons_ [44]. This would require new strong confining interactions. In earlier days, these types of models were referred to as technicolor, or walking technicolor, or extended technicolor. They have their own problems and are now being developed further [45]. The drawbacks of these models are the absence of excited states so far, the problems with the EW phenomenology and the absence of a viable simple scheme. Still, this approach has the right to exist.
## 8 Concluding remarks
The LHC experiments are at the front line of a mystery land. We make the first attempts to look beyond the horizon. We have to be persistent and have to be patient. The main goals are:
* Target #1: The Higgs sector;
* Target #2: The Dark Matter;
* Target #3: The New physics (supersymmetry);
In attempts to achieve these goals one should have in mind that
* The future development of HEP crucially depends on the LHC outcome;
* Complementary searches for dark matter and insights in neutrino physics are of extreme importance;
* The areas that were left behind come to the front: confinement, exotic hadrons, dense hadron matter.
I bet that discoveries will come!
**Acknowledgments** I am grateful to the Organizing Committee of the LHCp Conference for the challenge to give this talk and for the warm hospitality in St. Petersburg.
## References
* [1] L.D. Landau, On quantum field theory, in ”Niels Bohr and the Development of Physics”, London: Pergamon Press, 1955;
* [2] S. Adler, Axial-Vector Vertex in Spinor Electrodynamics, _Phys. Rev._**177** (1969) 2426 ; J.S. Bell and R. Jackiw, A PCAC puzzle: \(\pi^{0}\to\gamma\gamma\) in the \(\sigma\)-model, _Nuovo Cimento_**60A** (1969) 47; S. Adler and W.A. Bardeen, Corrections in the Anomalous Axial-Vector Divergence Equation, _Phys. Rev._**182** (1969) 157.
* [3] M. Peskin and D. Schroeder, ”An Introduction to Quantum Field Theory”, Addison-Wesley Pub. Company, 1995.
* [4] G. Degrassi et al, Higgs mass and vacuum stability in the Standard Model at NNLO, _JHEP_**1208** (2012) 098, e-Print: arXiv:1205.6497.
* [5] J. R. Espinosa, Vacuum Stability and the Higgs Boson, PoS LATTICE2013 (2014) 010, e-Print: arXiv:1311.1970.
* [6] J. Wess, J. Bagger, ”Supersymmetry and Supergravity”, Princeton Univ. Press, 1983.
* [7] A.V. Gladyshev, D.I. Kazakov, Is (Low Energy) SUSY Still Alive?, Lectures at ESHEP-2012, CERN-2014-008.107, e-Print: arXiv:1212.2548.
* [8] A. Birkedal, Z. Chacko, M. K. Gaillard, Little Supersymmetry and the Supersymmetric Little Hierarchy Problem, _JHEP_**0410** (2004) 036, e-Print: arXiv:hep-ph/0404197.
* [9] L. Randall and R. Sundrum, Large Mass Hierarchy from a Small Extra Dimension, _Phys. Rev. Lett._**83** (1999) 3370, e-print: hep-ph/9905221. L. Randall and R. Sundrum, An Alternative to Compactification, _Phys. Rev. Lett._**83** (1999) 4690, e-print: hep-th/9906064.
* [10] G-fitter, A Generic Fitter Project for HEP Model Testing, http://project-gfitter.web.cern.ch; PDG, _Chin.Phys._**38** (2014) 090001.
* [11] CKM-fitter, http://ckmfitter.in2p3.fr, PDG, _Chin.Phys._**38** (2014) 090001.
* [12] Talks at the LHCp2015 Conference: K. Melnikov, Calculating Higgs boson properties in SM; A. Vicini, General status and future prospects of HO corrections; S.Forte, Perturbative QCD at the LHC; S.A.Pozzorini, Electroweak theory at the LHC.
* [13] A. Hoecker and W.J. Marciano, The muon anomalous magnetic moment, http://pdg.lbl.gov/2013/reviews/rpp2013-rev-g-2-muon-anom-mag-moment.pdf
* [14] LHCb Collaboration, Determination of the quark coupling strength \(|Vub|\) using baryonic decays, _Nature Physics_**11** (2015) 743, http://www.nature.com/nphys/journal/v11/n9/full/nphys3415.html
* [15] J. E. Kim, Constraints on very light axions from cavity experiments, _Phys. Rev._**D58** (1998) 055006.
* [16] Talks at the LHCp2015 Conference: C. Bobeth (for CMS and LHCb Collaborations), Theoretical perspective on rare and semi-rare B decays.
* [17] P. Mulders and R. Tangerman, The complete tree-level result up to order 1/Q for polarized deep-inelastic leptoproduction, _Nucl.Phys._**B461** (1996) 197, e-print: hep-ph/9510301. R. Angeles-Martinez et al, Transverse momentum dependent (TMD) parton distribution functions: status and prospects, e-Print: arXiv:1507.05267
* [18] S.L. Glashow, J. Iliopoulos, L. Maiani, Weak Interactions with Lepton Hadron Symmetry, _Phys. Rev._**D2 (7)** (1970) 1285.
* [19] C. Quigg, The Coming Revolutions in Particle Physics, _Scientific American_, February, 2008; http://chemphys.armstrong.edu
* [20] A. Linde, ”An Introduction to Modern Cosmology” (2nd ed.). London: Wiley, 2003; S. Weinberg, ”Cosmology”, Oxford University Press, Oxford, (2008).
* [21] E. W. Kolb and M. S. Turner, ”The Early Universe”, Addison-Wesley (1990).
* [22] Planck 2013 results. XVI. Cosmological parameters, _Astronomy and Astrophysics_**571** (2014) A16, e-print: arXiv:1303.5076.
* [23] Jian-Wei Hu, Rong-Gen Cai, Zong-Kuan Guo, Bin Hu, Cosmological parameter estimation from CMB and X-ray clusters after Planck, _JCAP_**1405** (2014) 020, e-print: arXiv:1401.0717.
* [24] F.R. Villatoro, http://francis.naukas.com/2011/10/08/que-paso-con-los-pentaquarks; http://phys.org/news/2012-01-belle-heavy-exotic-hadrons.html
* [25] R. Aaij et al. (LHCb Collaboration), Observation of \(J/\psi p\) resonances consistent with pentaquark states in \(\Lambda^{0}_{b}\to J/\psi K^{-}p\) decays, _Phys. Rev. Lett._**115** (2015) 072001.
* [26] J.Cleymans and D. Worku, The Hagedorn temperature Revisited, _Mod. Phys. Lett._**A26** (2011) 1197; e-print: arXiv: 1103.1463.
* [27] A. Ohnishi, Phase diagram and heavy-ion collisions: Overview, _Prog.Theor.Phys.Suppl._**193** (2012) 1, e-Print: arXiv:1112.3210 [nucl-th].
* [28] E. Bratkovskaya, Microscopic dynamical models for heavy ion collisions, Lectures at International Summer School ”Dense Matter 2015”, JINR, Dubna, July 2015, http://theor.jinr.ru/ diastp/dm15/
* [29] J. Randrup, Spinodal instabilites at the deconfinement phase transition, Lectures at International Summer School ”Dense Matter 2015”, JINR, Dubna, July 2015, http://theor.jinr.ru/ diastp/dm15/
* [30] M.Spannowsky, Higgs Phenomenology, Lectures at Helmholtz - DIAS International Summer School ”Theory challenges for LHC physics,” JINR, Dubna, July 2015, http://theor.jinr.ru/ calc2015/
* [31] S-Fitter Collaboration, http://groups.lal.in2p3.fr/atlas/files/2013/01/HiggsComparisonLC500.png
* [32] S. Yamashita, Physics at International Linear Collider, 7th ACFA WS, http://hep1.phys.ntu.edu.tw/…/P2-2-Yamashita.pdf
* [33] C. Beskidt, W. de Boer, D.I. Kazakov, A comparison of the Higgs sectors of the CMSSM and NMSSM for a 126 GeV Higgs boson, _Phys. Lett._**B726** (2013) 758, e-Print: arXiv:1308.1333 [hep-ph]
* [34] E.W.Kolb, A Dark Universe: Dark Matter and Dark Energy , CERN Academic Lectures, www.infocobuild.com/…/cosmology-cern.htm
* [35] D. d’Enterria, (CMS Collaboration), CMS physics highlights in the LHC Run 1, PoS Bormio2015 (2015) 027, e-Print: arXiv:1504.06519.
* [36] P.Grothaus, M. Fairbairn, J. Monroe, Directional Dark Matter Detection Beyond the Neutrino Bound _Phys. Rev._**D 90** (2014) 055018, e-print: arXiv:1406.5047 [hep-ph].
* [37] P. Fayet and S. Ferrara, Supersymmetry, _Phys. Rep._**32** (1977) 249; M. F. Sohnius, Introducing Supersymmetry, _Phys. Rep._**128** (1985) 41; H. P. Nilles, Supersymmetry, supergravity and particle physics, _Phys. Rep._**110** (1984) 1; H. E. Haber and G. L. Kane, The search for supersymmetry: Probing physics beyond the standard model, _Phys. Rep._**117** (1985) 75; A. B. Lahanas and D. V. Nanopoulos, The road to no-scale supergravity, _Phys. Rep._**145** (1987) 1.
* [38] H. Baer, X. Tata, ”Weak Scale Supersymmetry”, Cambridge University Press, 2006.
* [39] M.Monaco, M. Pierini, A. Romanino and M. Spinrath, Phenomenology of Minimal Unified Tree Level Gauge Mediation at the LHC, _JHEP_**1307** (2013) 078, arXiv:1302.1305 [hep-ph].
* [40] C. Beskidt, W. de Boer, D.I. Kazakov, Where is SUSY? _JHEP_**1205** (2012) 094, e-Print: arXiv:1202.3366.
* [41] N. Arkani-Hamed, S. Dimopoulos and G. Dvali, _Phys. Lett._**B429** (1998) 263, e-print: hep- ph/9803315; N. Arkani-Hamed, S. Dimopoulos and G. Dvali, _Phys.Rev._**D59** (1999) 086004, e-print: hep- ph/9807344.
* [42] R.Maartens, Brane Cosmology, Rencontres de Moriond, Electro Weak Interactions and Unified Theories 2004, http:/moriond.in2p3.fr/J04/trans/maartens.pdf
* [43] R.Contino, The Higgs as a Composite Nambu-Goldstone Boson, Theoretical Advanced Study Institute in Elementary Particle Physics : Physics of the Large and the Small. (TASI 2009), e-Print: arXiv:1005.4269.
* [44] I.A. D’Souza, C.S. Kalman, ”Preons: Models of Leptons, Quarks and Gauge Bosons as Composite Objects”, World Scientific, 1992
* [45] A. Belyaev, M.S. Brown, R. Foadi, M.T. Frandsen, The Technicolor Higgs in the Light of LHC Data _Phys.Rev._**D90** (2014) 035012 , e-Print: arXiv:1309.2097; M. Antola, S. Di Chiara, K. Tuominen, Ultraviolet Complete Technicolor and Higgs Physics at LHC, _Nucl.Phys._**B899** (2015) 55, e-Print: arXiv:1307.4755.
|
1409.1315 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 85603,
"num_imgs": 10,
"llama3_tokens_count": 31490
} | [
"content_image/1409.1315/x1.png",
"content_image/1409.1315/x3.png",
"content_image/1409.1315/x4.png",
"content_image/1409.1315/x8.png",
"content_image/1409.1315/x9.png",
"content_image/1409.1315/x10.png",
"content_image/1409.1315/x11.png",
"content_image/1409.1315/x14.png",
"content_image/1409.1315/x17.png",
"content_image/1409.1315/x18.png"
] | # Transverse momentum broadening in semi-inclusive deep inelastic scattering at next-to-leading order
Zhong-Bo Kang
zkang@lanl.gov
Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Enke Wang
wangek@iopp.ccnu.edu.cn
Institute of Particle Physics and Key Laboratory of Lepton and Quark Physics (MOE), Central China Normal University, Wuhan 430079, China
Xin-Nian Wang
xnwang@lbl.gov
Institute of Particle Physics and Key Laboratory of Lepton and Quark Physics (MOE), Central China Normal University, Wuhan 430079, China
Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
Hongxi Xing
hxing@lanl.gov
Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Institute of Particle Physics and Key Laboratory of Lepton and Quark Physics (MOE), Central China Normal University, Wuhan 430079, China
February 26, 2024
###### Abstract
Within the framework of higher-twist collinear factorization, transverse momentum broadening for the final hadrons in semi-inclusive deeply inelastic \(e+A\) collisions is studied at the next-to-leading order (NLO) in perturbative QCD. Through explicit calculations of real and virtual corrections at twist-4, the transverse momentum weighted differential cross section due to double scattering is shown to factorize at NLO and can be expressed as a convolution of twist-4 nuclear parton correlation functions, the usual twist-2 fragmentation functions and hard parts which are finite and free of any divergences. A QCD evolution equation is also derived for the renormalized twist-4 quark-gluon correlation function which can be applied to future phenomenological studies of transverse momentum broadening and jet quenching at NLO.
pacs: 12.38.Bx, 12.39.St, 24.85.+p
## I Introduction
Multiple scatterings in deeply inelastic lepton-nucleus scattering, hadron-nucleus and heavy-ion collisions lead to many interesting phenomena that in turn can provide useful tools for diagnosing properties of the cold and hot nuclear medium [1; 2; 3; 4]. The predicted phenomena, such as jet quenching and transverse momentum broadening [6; 7; 8; 9; 10; 5; 11], have been observed in fixed-target experiments at the Deutsches Elektronen-Synchrotron (DESY), Jefferson Lab and Fermilab [12; 13; 14; 15; 16], as well as in the ongoing collider experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) [1; 17; 18]. These phenomena will continue to be the focus of future studies in experiments at the proposed Electron Ion Collider [19].
Many theoretical formalisms have been developed in the study of multiple scatterings which in turn can be used to extract medium properties from the nontrivial nuclear dependence observed in the experiments of high energy collisions with nuclear targets. Significant progress has been made in the past few years, in particular, in the study of parton energy loss [20; 21; 22; 23], radiative corrections to transverse momentum broadening [24; 26; 25], effects of multiple gluon emissions [27; 28; 29], and phenomenological extraction of the jet transport parameter from jet quenching in high-energy heavy-ion collisions [30]. However, so far a complete next-to-leading-order (NLO) calculation in perturbative QCD (pQCD) for either jet quenching or transverse momentum broadening is still lacking, which is essential for more precise extraction of medium properties from experimental data.
One of the approaches in studying effects of multiple scatterings is based on a generalized high-twist factorization theorem [31; 32; 33]. Within such an approach, these effects manifest themselves as power corrections to the differential cross sections, whose main contributions often depend on high-twist matrix elements of the nuclear state that are enhanced by the nuclear size. So far most studies have focused on double parton scatterings and their effect on transverse momentum broadening, which leads to nuclear enhancement in the dijet transverse momentum imbalance in photon-nucleus collisions [34], transverse momentum broadening for single inclusive jet (or hadron) production in semi-inclusive deep inelastic scattering (SIDIS) [35; 36; 37], as well as Drell-Yan lepton pair [35; 38], vector boson [39] and back-to-back particle productions [40; 41] in p+A collisions. Phenomenological studies of experimental data within the high-twist formalism have been quite successful [34; 35; 39; 40], providing us confidence in using multiple scatterings and the phenomenology as a tool to probe the fundamental twist-4 nuclear parton correlation functions and the associated QCD dynamics. All these calculations, however, are based on the picture of the leading-order (LO) “bare” twist-4 factorization without higher order corrections in pQCD and contribute to most of the theoretical uncertainties in the phenomenological studies of jet quenching [42]. More complete NLO calculations with renormalized twist-4 matrix elements and finite corrections are very complex and have not been attempted so far. They are, however, necessary for more accurate predictions and more precise extraction of medium properties from future phenomenological studies of experimental data.
In this paper, we carry out the calculation of the one-loop perturbative corrections to the transverse momentum weighted SIDIS cross section at twist four. Through explicit calculations, we will illustrate the factorization of the transverse momentum weighted cross section of SIDIS and derive the evolution equations for the renormalized twist-four two-parton correlation functions. Such a calculation is an important step towards a full NLO pQCD description of single hadron spectra in SIDIS and transverse momentum broadening within the high-twist formalism. A brief summary of our results has been reported earlier in Ref. [43]. We now provide detailed derivations and discussions in this paper. The rest of our paper is organized as follows. In Sec. II, we introduce our notations and kinematics, and review the LO derivation for transverse momentum broadening. In Sec. III, we perform explicit calculations of NLO contributions at twist four to the transverse-momentum-weighted differential cross section, including quark-gluon and gluon-gluon double scatterings, as well as the interference contributions from single and triple scatterings. In particular, we show the complete cancelation of soft divergences in real and virtual corrections. The remaining collinear divergences can be absorbed into the standard fragmentation function and/or the twist-4 parton correlation function of the nuclear state, which give rise to the factorization scale evolution of these functions. We summarize our paper in Sec. IV.
## II Transverse momentum broadening at leading order
### Notations and kinematics
We start this section by specifying our notations and kinematics in SIDIS. We consider a lepton \(l\) scattering off a large nucleus \(A\),
\[l(L_{1})+A(p)\to l(L_{2})+h(\ell_{h})+X,\] (1)
where \(L_{1}\) and \(L_{2}\) are the four momenta of the incoming and outgoing leptons, \(\ell_{h}\) is the observed hadron momentum, and \(p\) is the momentum per nucleon in the nucleus with the atomic number \(A\). In the approximation of one-photon exchange, the virtual photon momentum is given by \(q=L_{1}-L_{2}\) with the invariant mass \(Q^{2}=-q^{2}\). The usual SIDIS Lorentz invariant variables are defined as follows,
\[x_{B}=\frac{Q^{2}}{2p\cdot q},\qquad y=\frac{p\cdot q}{p\cdot L_ {1}},\qquad z_{h}=\frac{p\cdot\ell_{h}}{p\cdot q}.\] (2)
For later convenience, we also define Mandelstam variables at the partonic level,
\[\hat{s}=(xp+q)^{2},\qquad\hat{t}=(\ell-q)^{2},\qquad\hat{u}=(\ell -xp)^{2},\] (3)
with \(\ell\) the momentum of the final state parton which fragments into the observed hadron \(h\). It is instructive to realize that the transverse momentum \(\ell_{T}\) of the final state parton in the so-called _hadron frame_[44] can be written in terms of Mandelstam variables as
\[\ell_{T}^{2}=\frac{\hat{s}\hat{t}\hat{u}}{(\hat{s}+Q^{2})^{2}}.\] (4)
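As a concrete illustration of Eqs. (2)–(4), the sketch below evaluates the invariants from explicit four-momenta. All numbers (lepton momenta, \(\hat{s}\), the scattering angle) are arbitrary placeholders chosen only to satisfy the stated kinematics; they are not taken from this paper.

```python
# Illustration of the SIDIS invariants, Eqs. (2)-(4), with made-up momenta.
import numpy as np

def dot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

# Eq. (2): x_B and y from lepton and (massless) nucleon momenta
p  = np.array([1.0, 0.0, 0.0, -1.0])                      # target nucleon
L1 = np.array([20.0, 0.0, 0.0, 20.0])                     # incoming lepton
L2 = np.array([8.0, 3.0, 0.0, np.sqrt(8.0**2 - 3.0**2)])  # outgoing lepton, light-like
q  = L1 - L2
Q2 = -dot(q, q)
xB = Q2 / (2.0 * dot(p, q))
y  = dot(p, q) / dot(p, L1)
print(f"Q^2 = {Q2:.2f},  x_B = {xB:.3f},  y = {y:.3f}")

# Eqs. (3)-(4): gamma*(q) + quark(xp) -> parton(ell) + X, set up in the
# photon-parton c.m. frame with both incoming momenta along the z axis
shat, theta = 30.0, 0.8                                   # placeholder values
rs  = np.sqrt(shat)
kz  = (shat + Q2) / (2.0 * rs)                            # fixed by q^2 = -Q^2
xp  = np.array([kz, 0.0, 0.0, kz])
qcm = np.array([rs - kz, 0.0, 0.0, -kz])
ell = 0.5 * rs * np.array([1.0, np.sin(theta), 0.0, np.cos(theta)])

that = dot(qcm - ell, qcm - ell)
uhat = dot(xp - ell, xp - ell)
lT2  = ell[1]**2 + ell[2]**2
print(f"shat + that + uhat = {shat + that + uhat:.3f}   (expect -Q^2 = {-Q2:.3f})")
print(f"lT^2 = {lT2:.3f}   vs  Eq. (4): {shat * that * uhat / (shat + Q2)**2:.3f}")
```

In this configuration one also recovers \(\hat{s}+\hat{t}+\hat{u}=-Q^{2}\), and \(\ell_{T}\) vanishes when the final parton is collinear with either of the incoming momenta.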
The transverse momentum broadening,
\[\Delta\langle\ell_{hT}^{2}\rangle=\langle\ell_{hT}^{2}\rangle_{eA }-\langle\ell_{hT}^{2}\rangle_{eN},\] (5)
is defined as the difference between the average squared transverse momentum of the observed hadron produced on a nuclear target (\(e+A\) collisions) and that on a proton target (\(e+N\) scattering), with \(\langle\ell_{hT}^{2}\rangle\) given by
\[\langle\ell_{hT}^{2}\rangle=\int d\ell_{hT}^{2}\ell_{hT}^{2}\frac {d\sigma}{d{\cal PS}d\ell_{hT}^{2}}\left/\frac{d\sigma}{d{\cal PS}}\right.,\] (6)
where the phase space \(d{\cal PS}=dx_{B}dydz_{h}\). The denominator gives the so-called single hadron differential cross section in SIDIS, which can be written as [45]
\[\frac{d\sigma}{d{\cal PS}}=\frac{\alpha_{em}^{2}}{Q^{2}}\left[Y^{ M}(-g^{\mu\nu})+Y^{L}\frac{4x_{B}^{2}}{Q^{2}}p^{\mu}p^{\nu}\right]\frac{dW_{ \mu\nu}}{dz_{h}},\] (7)
where \(\alpha_{em}\) stands for the fine structure constant, and \(W_{\mu\nu}\) is the hadronic tensor for \(\gamma^{*}+A\to h+X\). Here the term proportional to \(Y^{M}\) is the so-called “metric” contribution, while the term proportional to \(Y^{L}\) is the longitudinal contribution. \(Y^{M}\) and \(Y^{L}\) are closely connected to the photon polarizations with the following expressions,
\[Y^{M}=\frac{1+(1-y)^{2}}{2y},\qquad Y^{L}=1+\frac{4(1-y)+(1-y)^{ 2}}{2y}.\] (8)
The result for the single hadron differential cross section in SIDIS at leading twist-2 is well known. As a warm-up exercise, we also calculate this cross section to NLO. Working in \(n=4-2\epsilon\) dimensions with the \(\overline{\rm MS}\) scheme, our findings are consistent with those in the literature [46; 47; 48]:
\[\frac{d\sigma}{d{\cal PS}}= \sigma_{0}\sum_{q}e_{q}^{2}\int\frac{dx}{x}\frac{dz}{z}f_{q/A}(x, \mu_{f}^{2})D_{h/q}(z,\mu_{f}^{2})\delta(1-\hat{x})\delta(1-\hat{z})\]
\[+\sigma_{0}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dx}{ x}\frac{dz}{z}f_{q/A}(x,\mu_{f}^{2})D_{h/q}(z,\mu_{f}^{2})\Bigg{\{}\ln\left( \frac{Q^{2}}{\mu_{f}^{2}}\right)\left[P_{qq}(\hat{x})\delta(1-\hat{z})+P_{qq}( \hat{z})\delta(1-\hat{x})\right]+H^{NLO}_{T2-qq}\Bigg{\}}\]
\[+\sigma_{0}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dx}{ x}\frac{dz}{z}f_{q/A}(x,\mu_{f}^{2})D_{h/g}(z,\mu_{f}^{2})\Bigg{[}\ln\left( \frac{Q^{2}}{\mu_{f}^{2}}\right)P_{gq}(\hat{z})\delta(1-\hat{x})+H^{NLO}_{T2- qg}\Bigg{]}\]
\[+\sigma_{0}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dx}{ x}\frac{dz}{z}f_{g/A}(x,\mu_{f}^{2})\left[D_{h/q}(z,\mu_{f}^{2})+D_{h/\bar{q}} (z,\mu_{f}^{2})\right]\left[\ln\left(\frac{Q^{2}}{\mu_{f}^{2}}\right)P_{qg}( \hat{x})\delta(1-\hat{z})+H^{NLO}_{T2-gq}\right],\] (9)
where \(\mu_{f}\) is the factorization scale, \(f_{q(g)/A}(x,\mu_{f}^{2})\) is the quark (gluon) distribution function inside the nucleus, and \(D_{h/q(g)}(z,\mu_{f}^{2})\) is the fragmentation function for a quark (gluon) into a hadron \(h\). The detailed expressions for the finite terms \(H^{NLO}_{T2-qq}\), \(H^{NLO}_{T2-qg}\), and \(H^{NLO}_{T2-gq}\) are given in the Appendix by Eqs. (101), (102), and (103), respectively. The other variables are defined as \(\hat{x}=x_{B}/x\) and \(\hat{z}=z_{h}/z\), and \(\sigma_{0}\) is given by
\[\sigma_{0}=\frac{2\pi\alpha_{em}^{2}}{Q^{2}}\frac{1+(1-y)^{2}}{y} (1-\epsilon).\] (10)
In Eq. (9), \(P_{ab}(z)\) is the usual DGLAP splitting kernel for partons \(b\to a\)
\[P_{qq}(z) =C_{F}\left[\frac{1+z^{2}}{(1-z)_{+}}+\frac{3}{2}\delta(1-z) \right],\] (11)
\[P_{gq}(z) =C_{F}\frac{1+(1-z)^{2}}{z},\] (12)
\[P_{qg}(z) =T_{R}\left[z^{2}+(1-z)^{2}\right],\] (13)
where \(C_{F}=(N_{c}^{2}-1)/2N_{c}\) with \(N_{c}=3\) the number of colors, and \(T_{R}=1/2\).
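For concreteness, the kernels of Eqs. (11)–(13) can be coded up directly. The short Python sketch below (not part of the original text) also verifies the standard quark-number and momentum sum rules, handling the plus distribution by the usual endpoint subtraction defined later in Eq. (45).

```python
# LO splitting kernels of Eqs. (11)-(13); the plus and delta pieces of P_qq are
# treated via the standard endpoint subtraction when integrating.
import numpy as np

CF, TR = 4.0 / 3.0, 0.5

def Pqq_reg(z):          # regular part C_F (1+z^2)/(1-z), valid for z < 1
    return CF * (1.0 + z**2) / (1.0 - z)

def Pgq(z):
    return CF * (1.0 + (1.0 - z)**2) / z

def Pqg(z):
    return TR * (z**2 + (1.0 - z)**2)

z  = (np.arange(200000) + 0.5) / 200000      # midpoint grid on (0, 1)
dz = 1.0 / 200000

# quark-number conservation: int_0^1 dz P_qq(z) = 0
# (subtracting 2*CF/(1-z), the z->1 limit of the numerator, implements the plus
#  prescription; the +1.5*CF is the delta-function piece of Eq. (11))
n0 = np.sum(Pqq_reg(z) - 2.0 * CF / (1.0 - z)) * dz + 1.5 * CF
print(f"int dz P_qq(z)         = {n0:+.2e}")   # ~ 0 up to discretization error

# momentum conservation: int_0^1 dz z [P_qq(z) + P_gq(z)] = 0
n1 = (np.sum(z * Pqq_reg(z) - 2.0 * CF / (1.0 - z)) * dz + 1.5 * CF
      + np.sum(z * Pgq(z)) * dz)
print(f"int dz z(P_qq + P_gq)  = {n1:+.2e}")   # ~ 0 up to discretization error
```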
### Transverse momentum broadening: leading order
In a nuclear medium, the outgoing parton in SIDIS may experience additional scatterings with other partons from the nucleus before fragmenting into final observed hadrons. Taking into account these multiple scatterings, one can express the differential cross section for single inclusive hadron production in SIDIS off a nuclear target as a sum of contributions from single, double, and higher multiple scatterings [49],
\[d\sigma=d\sigma^{S}+d\sigma^{D}+\dots\,,\] (14)
where the superscript “\(S\)” (“\(D\)”) indicates the single (double) scattering contribution. In the case of a single scattering as illustrated in Fig. 1(a), the virtual photon interacts with a single parton (quark or gluon) coming from the nucleus to produce a parton which will then fragment into the final observed hadron. On the other hand, in the case of double scatterings as shown in Fig. 1(b), the outgoing parton experiences one additional scattering with another parton (e.g., a gluon in the figure) from the nucleus.
<figure><img src="content_image/1409.1315/x1.png"><figcaption>Figure 1: The general diagrams for single inclusive hadron production in SIDIS in a nuclear medium: (a) single scattering contribution with k=xp; (b) quark-gluon double scattering with the initial parton’s momenta k1=x1p, k2=(x1+x3)p, kg=x2p+kT and k′g=(x2−x3)p+kT, respectively. Here kT is the transverse momentum kick from the nucleus.</figcaption></figure>
It is these additional parton multiple scatterings that lead to the enhancement in the average squared transverse momentum for the observed hadron produced in \(e+A\) collisions. According to Eqs. (5) and (14), the leading contribution to the defined nuclear transverse momentum broadening comes from the double scatterings,
\[\Delta\langle\ell_{hT}^{2}\rangle\approx\frac{d\langle\ell_{hT}^{ 2}\sigma^{D}\rangle}{d{\cal PS}}\left/\frac{d\sigma}{d{\cal PS}}\right.,\qquad \frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle}{d{\cal PS}}\equiv\int d\ell_{hT} ^{2}\ell_{hT}^{2}\frac{d\sigma^{D}}{d{\cal PS}d\ell_{hT}^{2}},\] (15)
where the superscript “\(D\)” indicates again the double-scattering contribution. In other words, the transverse momentum broadening is closely related to the transverse momentum \(\ell_{hT}^{2}\)-weighted differential cross section, i.e., the numerator in the left equation of Eq. (15), which will be the focus of our paper.
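Schematically, extracting the broadening of Eqs. (5) and (15) amounts to comparing \(\ell_{hT}^{2}\)-weighted averages of the \(e+A\) and \(e+N\) spectra. The toy Python sketch below uses made-up exponential spectra as stand-ins for \(d\sigma/d{\cal PS}d\ell_{hT}^{2}\); it only illustrates the bookkeeping of Eq. (6), not any result of this paper.

```python
# Toy illustration of Eqs. (5)-(6): <l_hT^2> as a weighted average of a spectrum.
import numpy as np

lhT2 = np.linspace(0.0, 4.0, 4001)            # l_hT^2 grid in GeV^2
dl   = lhT2[1] - lhT2[0]

def mean_lhT2(spectrum):
    """<l_hT^2> of Eq. (6) for a tabulated dsigma/dl_hT^2 (simple Riemann sum)."""
    norm = np.sum(spectrum) * dl
    return np.sum(lhT2 * spectrum) * dl / norm

# placeholder spectra: the nuclear one is slightly broader than the proton one
spec_eN = np.exp(-lhT2 / 0.25)
spec_eA = np.exp(-lhT2 / 0.29)

broadening = mean_lhT2(spec_eA) - mean_lhT2(spec_eN)   # Eq. (5)
print(f"Delta<l_hT^2> = {broadening:.3f} GeV^2")
```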
<figure><img src="content_image/1409.1315/x3.png"><figcaption>Figure 2: Feynman diagram for double scattering contribution to transverse momentum broadening at leading order. The parton momenta follow the same notation as in Fig. 1 (b).</figcaption></figure>
At leading order, the double-scattering contribution is given by Fig. 2. The \(\ell_{hT}^{2}\)-weighted cross section (specifically the hadronic tensor) can be written as
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle}{dz_{h}}^{\rm(LO)}= \frac{2\pi x_{B}}{Q^{2}}\sum_{q}e_{q}^{2}\int\frac{dx}{x}\frac{dz }{z}D_{h/q}(z)\int d^{n-2}\ell_{hT}\,\ell_{hT}^{2}\,\delta^{n-2}(\ell_{hT}-zk_ {T})\delta(1-{\hat{x}})\delta(1-{\hat{z}})\]
\[\times\int d^{n-2}k_{T}dx_{1}dx_{2}dx_{3}T_{A}(x_{1},x_{2},x_{3}, k_{T})(-g^{\mu\nu})H_{\mu\nu}\left(\{x_{i}\},p,q,\ell,\ell_{h},k_{T}\right) \delta(x_{1}+x_{2}-x),\] (16)
where \(W^{D}=(-g^{\mu\nu})W^{D}_{\mu\nu}\). In arriving at this, we have used the fact that the longitudinal contribution vanishes at LO, and thus the only contribution is the metric term. In Eq. (16), \(\{x_{i}\}=\{x_{1},x_{2},x_{3}\}\) are the independent collinear momentum fractions carried by partons from the nucleus, \(k_{T}\) is a small transverse momentum kick due to the multiple scattering, and the matrix element \(T_{A}(x_{1},x_{2},x_{3},k_{T})\) is defined as
\[T_{A}(x_{1},x_{2},x_{3},k_{T})= \int\frac{dy^{-}}{2\pi}\frac{dy_{1}^{-}}{2\pi}\frac{dy_{2}^{-}}{2 \pi}\frac{d^{2}y_{T}}{(2\pi)^{2}}e^{ix_{1}p^{+}y^{-}}e^{ix_{2}p^{+}(y_{1}^{-}- y_{2}^{-})}e^{ix_{3}p^{+}y_{2}^{-}}e^{ik_{T}\cdot y_{T}}\]
\[\times\frac{1}{2}\langle A|\bar{\psi}_{q}(0)\gamma^{+}A^{+}(y_{2} ^{-},0_{T})A^{+}(y_{1}^{-},y_{T})\psi_{q}(y^{-})|A\rangle.\] (17)
As in Refs. [39; 40; 41], the calculation proceeds by first taking Taylor expansion of the hard part function in \(k_{T}\),
\[H_{\mu\nu}\left(\{x_{i}\},p,q,\ell,\ell_{h},k_{T}\right)=H_{\mu \nu}\left(\{x_{i}\},p,q,\ell,\ell_{h},k_{T}=0\right)+{\cal O}_{\mu\nu}(k_{T}^{ 2}).\] (18)
Note that the term linear in \(k_{T}\) in the above expansion does not contribute to unpolarized SIDIS. Using \(\delta^{n-2}(\ell_{hT}-zk_{T})\) in Eq. (16) to set \(\ell_{hT}=zk_{T}\), one can convert \(k_{T}^{2}A^{+}A^{+}\) to the gauge-covariant gluon field strength \(F^{+}_{\sigma}F^{\sigma+}\) in the matrix element through partial integrations in \(y_{T}\). We further integrate over the momentum fractions \(x_{1},x_{2},x_{3}\) through contour integrations around poles in the hard part \(H_{\mu\nu}\),
\[\frac{1}{(q+x_{1}p)^{2}+i\epsilon} =\frac{x_{B}}{Q^{2}}\frac{1}{x_{1}-x_{B}+i\epsilon},\] (19)
\[\frac{1}{\left(q+(x_{1}+x_{3})p\right)^{2}-i\epsilon} =\frac{x_{B}}{Q^{2}}\frac{1}{x_{1}+x_{3}-x_{B}-i\epsilon}.\] (20)
Together with the phase space \(\delta\)-function \(\delta(x_{1}+x_{2}-x_{B})\) in Eq. (16), we fix \(x_{1}=x_{B}\), \(x_{2}=0\), and \(x_{3}=0\). Finally we have
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle}{dz_{h}}^{\rm(LO)}=\frac{ 2\alpha_{s}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\sum_{q}e_{q}^{2}\int\frac{ dx}{x}T_{qg}(x,0,0)\int\frac{dz}{z}D_{h/q}(z)\delta(1-{\hat{x}})\delta(1-{\hat {z}}),\] (21)
where the twist-4 quark-gluon correlation function \(T_{qg}(x_{1},x_{2},x_{3})\) is given by [10; 39],
\[T_{qg}(x_{1},x_{2},x_{3})= \int\frac{dy^{-}}{2\pi}e^{ix_{1}p^{+}y^{-}}\int\frac{dy_{1}^{-}dy _{2}^{-}}{4\pi}e^{ix_{2}p^{+}(y_{1}^{-}-y_{2}^{-})}e^{ix_{3}p^{+}y_{2}^{-}} \theta(y_{2}^{-})\theta(y_{1}^{-}-y^{-})\]
\[\times\langle A|{\bar{\psi}}_{q}(0)\gamma^{+}F_{\sigma}^{+}(y_{2} ^{-})F^{\sigma+}(y_{1}^{-})\psi_{q}(y^{-})|A\rangle.\] (22)
We thus obtain the double scattering contribution to the \(\ell_{hT}^{2}\)-weighed differential cross section at LO,
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle^{\rm(LO)}}{d{\cal PS }}= \sigma_{h}\sum_{q}e_{q}^{2}\int\frac{dx}{x}T_{qg}(x,0,0)\int\frac {dz}{z}D_{h/q}(z)\delta(1-{\hat{x}})\delta(1-{\hat{z}}),\] (23)
where \(\sigma_{h}=(4\pi^{2}\alpha_{s}z_{h}^{2}/N_{c})\sigma_{0}\), with \(\sigma_{0}\) defined in Eq. (10). The LO transverse momentum broadening is then
\[\Delta\langle\ell_{hT}^{2}\rangle=\left(\frac{4\pi^{2}\alpha_{s}z _{h}^{2}}{N_{c}}\right)\frac{\sum_{q}e_{q}^{2}T_{qg}(x_{B},0,0)D_{h/q}(z_{h})} {\sum_{q}e_{q}^{2}f_{q/A}(x_{B})D_{h/q}(z_{h})},\] (24)
as obtained in previous calculations [36; 35].
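To give a feel for the size of Eq. (24), the sketch below evaluates it with toy inputs. Both the parton distributions/fragmentation functions and the ansatz \(T_{qg}(x,0,0)\simeq\lambda^{2}A^{1/3}f_{q/A}(x)\) (a model assumption frequently adopted in the literature, not derived here) are placeholders; only the structure of the formula is taken from Eq. (24).

```python
# Toy evaluation of the LO broadening, Eq. (24). All inputs are placeholders.
import math

alpha_s, Nc, zh = 0.35, 3.0, 0.5
A, lam2 = 197, 0.01                     # nucleus and lambda^2 [GeV^2], assumed model inputs
eq2 = {"u": 4.0 / 9.0, "d": 1.0 / 9.0}  # squared quark charges (u, d only, for brevity)

def f_qA(q, x):                         # toy nuclear quark distributions
    return {"u": 2.0, "d": 1.0}[q] * x**(-0.3) * (1.0 - x)**3

def D_hq(q, z):                         # toy fragmentation functions
    return (1.0 - z)**1.5 / z

def T_qg(q, x):                         # assumed model for the twist-4 correlation
    return lam2 * A**(1.0 / 3.0) * f_qA(q, x)

xB = 0.1
num = sum(e2 * T_qg(q, xB) * D_hq(q, zh) for q, e2 in eq2.items())
den = sum(e2 * f_qA(q, xB) * D_hq(q, zh) for q, e2 in eq2.items())
dlhT2 = (4.0 * math.pi**2 * alpha_s * zh**2 / Nc) * num / den
print(f"Delta<l_hT^2> (LO) ~ {dlhT2:.3f} GeV^2")
```

With this particular ansatz the PDF and fragmentation-function dependence cancels in the ratio, and the broadening reduces to \((4\pi^{2}\alpha_{s}z_{h}^{2}/N_{c})\lambda^{2}A^{1/3}\), growing like \(A^{1/3}\) as expected for a nuclear-size enhanced twist-4 effect.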
## III Transverse momentum broadening at next-to-leading order
In this section, we present our calculations of NLO contributions to transverse momentum broadening in SIDIS. We first study the virtual-photon-quark (\(\gamma^{*}+q\)) interaction channel, which involves the quark-gluon correlation function \(T_{qg}\) as defined in Eq. (22). We then derive the result for virtual-photon-gluon (\(\gamma^{*}+g\)) channel, which involves the gluon-gluon correlation function \(T_{gg}\) defined in Eq. (88) below. The final result will be presented at the end of this section.
The double-scattering contributions in the nuclear medium manifest themselves as power corrections to the differential cross section. A high-twist factorization formalism was established [31; 32] to systematically extract these contributions. This formalism stems directly from the well-established collinear factorization theorem [31; 32; 33] and has recently been extended to include transverse-momentum-dependent parton distributions [50]. Within such an approach, one carries out collinear expansion of hard parts and reorganizes the final results in terms of power corrections, where the second order expansion gives rise to the twist-4 contribution. In the presence of a large nucleus (\(A\gg 1\)), the dominant contribution comes from the terms associated with the high-twist matrix elements of the nuclear state that are enhanced by the nuclear size. The general formalism for the double-scattering contribution can be written as
\[\frac{dW_{\mu\nu}^{D}}{dz_{h}}= \sum_{q}e_{q}^{2}\int\frac{dz}{z}D_{h/q}(z)\int\frac{dy^{-}}{2\pi }\frac{dy_{1}^{-}}{2\pi}\frac{dy_{2}^{-}}{2\pi}\frac{1}{2}\langle A|\bar{\psi} _{q}(0)\gamma^{+}F_{\sigma}^{+}(y_{2}^{-})F^{\sigma+}(y_{1}^{-})\psi_{q}(y^{-} )|A\rangle\]
\[\times\left[-\frac{1}{2(1-\epsilon)}g^{\alpha\beta}\right]\left[ \frac{1}{2}\frac{\partial^{2}}{\partial k_{T}^{\alpha}\partial k_{T}^{\beta}}{ \overline{H}}_{\mu\nu}(p,q,\ell,\ell_{h},k_{T})\right]_{k_{T}=0},\] (25)
where \({\overline{H}}_{\mu\nu}(p,q,\ell,\ell_{h},k_{T})\) is the Fourier transform of the hard partonic function \(H_{\mu\nu}\left(\{x_{i}\},p,q,\ell,\ell_{h},k_{T}\right)\),
\[{\overline{H}}_{\mu\nu}(p,q,\ell,\ell_{h},k_{T})=\int dx_{1}dx_{2 }dx_{3}e^{ix_{1}p^{+}y^{-}}e^{ix_{2}p^{+}(y_{1}^{-}-y_{2}^{-})}e^{ix_{3}p^{+}y _{2}^{-}}H_{\mu\nu}\left(\{x_{i}\},p,q,\ell,\ell_{h},k_{T}\right).\] (26)
### Quark-Gluon Double Scattering
In this subsection we calculate the double-scattering contribution for the virtual-photon-quark (\(\gamma^{*}+q\)) interaction channel, as illustrated in Fig. 1(b), which involves a quark and a gluon in the initial state. This process will be referred to as quark-gluon double scattering: there is first a hard photon-quark scattering, and then the produced parton undergoes a second scattering with another initial gluon from the nucleus. To simplify our discussion, we classify the secondary scattering as “soft” or “hard” [10; 51; 52], depending on whether the exchanged gluon momentum (either \(k_{g}\) or \(k_{g}^{\prime}\) in Fig. 1(b)) becomes zero or remains finite, respectively, when \(k_{T}\to 0\). The final amplitudes of the cut diagrams contain “soft” and “hard” contributions and their interferences, often referred to as soft-soft, hard-hard, soft-hard and hard-soft contributions. We will first study the central-cut diagrams, which represent the classical double scattering picture; then we compute the virtual contributions, and finally we come back to the asymmetric-cut diagrams, which represent the interference between single and triple scattering processes. As we will show below, both the central-cut diagrams and the virtual contributions contain divergences, while the sum of all the asymmetric-cut diagrams is free of any divergence and only contributes to the NLO finite terms.
#### iii.1.1 Central-cut (real corrections)
For real corrections, there are in total 16 diagrams corresponding to the four different kinds of subprocesses mentioned above: soft-soft double scattering, hard-hard double scattering and the interferences between them, as shown in Fig. 3. Let us take the soft-soft double scattering in Fig. 3(a) as an example to outline the essential steps for calculating the NLO contributions to the transverse momentum broadening; all the other subprocesses can be evaluated in the same manner.
<figure><img src="content_image/1409.1315/x4.png"><figcaption>Figure 3: The central-cut diagrams for (a) soft-soft, (b) hard-hard, (c) soft-hard, and (d) hard-soft double scatterings in SIDIS. The short bars indicate the propagators where the soft pole and hard pole arise. The “H”-blobs represent the hard 2→2 processes as shown in Fig. 4.</figcaption></figure>
<figure><img src="content_image/1409.1315/x8.png"><figcaption>Figure 4: The representations of hard 2→2 processes for (a) photon-quark interaction, and (b) quark-gluon interaction.</figcaption></figure>
To perform the collinear expansion in Eq. (25), we first integrate out the parton momentum fractions \(x_{1}\), \(x_{2}\), and \(x_{3}\) with the help of either the contour integration or the kinematic \(\delta\)-function in the final-state phase space, and then perform the \(k_{T}\)-expansion directly. Starting from the high-twist general formalism as shown in Eq. (25), the \(\ell_{hT}^{2}\)-weighted hadronic tensor can then be written as,
\[\frac{d\langle\ell_{hT}^{2}W_{\mu\nu}^{D}\rangle}{dz_{h}}= \sum_{q}e_{q}^{2}\int\frac{dz}{z}D_{h/q}(z)\ell_{hT}^{2}\left[- \frac{g^{\alpha\beta}}{2(1-\epsilon)}\right]\frac{1}{2}\frac{\partial^{2}}{ \partial k_{T}^{\alpha}\partial k_{T}^{\beta}}\left[\int dx_{1}dx_{2}dx_{3}T_{ qg}(\{x_{i}\})H_{\mu\nu}(\{x_{i}\},p,q,\ell,\ell_{h},k_{T})\right]_{k_{T}=0}.\] (27)
The two propagators which will be used to perform the contour integrals are marked by short bars in Fig. 3(a) and can be expressed as following,
\[\frac{1}{(\ell-x_{2}p-k_{T})^{2}+i\epsilon} =\frac{x}{{\hat{u}}}\frac{1}{x_{2}-x_{D}-i\epsilon},\] (28)
\[\frac{1}{\left(\ell-(x_{2}-x_{3})p-k_{T}\right)^{2}-i\epsilon} =\frac{x}{{\hat{u}}}\frac{1}{x_{2}-x_{3}-x_{D}+i\epsilon}.\] (29)
On the other hand, the two-body final-state phase space integral at central-cut is given by
\[dPS^{(C)}=\frac{1}{8\pi}\left(\frac{4\pi}{Q^{2}}\right)^{ \epsilon}\frac{1}{\Gamma(1-\epsilon)}\int dx\,\delta\left(x_{1}+x_{2}-x-x_{C} \right)\hat{z}^{-\epsilon}(1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1-\hat{x}) ^{-\epsilon},\] (30)
where the \(\delta\)-function \(\delta\left(x_{1}+x_{2}-x-x_{C}\right)\) comes from the on-shell condition for the unobserved final-state gluon. Here the momentum fractions \(x\), \(x_{C}\), and \(x_{D}\) in Eqs. (28), (29) and (30) are given by
\[x=\frac{Q^{2}+2q\cdot\ell}{2p\cdot(q-\ell)},\qquad x_{C}=x\frac{ k_{T}^{2}-2\ell\cdot k_{T}}{{\hat{t}}},\qquad x_{D}=x\frac{2\ell\cdot k_{T}-k_ {T}^{2}}{\hat{u}}.\] (31)
Now we are able to integrate over \(\{x_{i}\}\),
\[\int dx_{1}dx_{2}dx_{3}e^{ix_{1}p^{+}y^{-}}e^{ix_{2}p^{+}(y_{1}^{ -}-y_{2}^{-})}e^{ix_{3}p^{+}y_{2}^{-}}\frac{1}{x_{2}-x_{D}-i\epsilon}\frac{1}{ x_{2}-x_{3}-x_{D}+i\epsilon}\delta\left(x_{1}+x_{2}-x-x_{C}\right)\]
\[=e^{i(x+x_{C}-x_{D})p^{+}y^{-}}e^{ix_{D}p^{+}(y_{1}^{-}-y_{2}^{-} )}(2\pi)^{2}\theta(y_{2}^{-})\theta(y_{1}^{-}-y^{-}).\] (32)
In the above equation, two of the integrations over \(\{x_{i}\}\) are carried out by contour integrations, which lead to the \(\theta\)-functions, indicating the order of the two scatterings. The third integration over \(\{x_{i}\}\) is fixed by the \(\delta\)-function from the final-state phase space. After the integration, the parton momentum fractions \(\{x_{i}\}\) are fixed as follows
\[x_{1}=x+x_{C}-x_{D},\qquad x_{2}=x_{D},\qquad x_{3}=0.\] (33)
As we can see here, in the collinear limit \(k_{T}\to 0\), \(x_{C}=x_{D}=0\) according to Eq. (31). Thus the momentum fraction for the initial quark is finite \(x_{1}=x\), while the momentum fractions for the initial gluons on both sides of the cut-line become zero (\(k_{g}\to 0\) and \(k_{g}^{\prime}\to 0\)). This is why we refer to this process as soft-soft double scattering.
The next critical step, which is the key point in high-twist calculation, is to perform the collinear expansion. With the help of the following identity [53],
\[\frac{\partial^{2}\big{[}T(\{x_{i}\})H_{\mu\nu}(\{x_{i}\},k_{T}) \big{]}}{\partial k_{T}^{\alpha}\partial k_{T}^{\beta}}=\frac{\partial^{2}T}{ \partial x_{i}\partial x_{j}}\left[\frac{\partial x_{i}}{\partial k_{T}^{ \alpha}}\frac{\partial x_{j}}{\partial k_{T}^{\beta}}H_{\mu\nu}\right]+\frac{ \partial T}{\partial x_{i}}\left[\frac{\partial^{2}x_{i}}{\partial k_{T}^{ \alpha}\partial k_{T}^{\beta}}H_{\mu\nu}+\frac{\partial x_{i}}{\partial k_{T}^ {\alpha}}\frac{\partial H_{\mu\nu}}{\partial k_{T}^{\beta}}+\frac{\partial x_{ i}}{\partial k_{T}^{\beta}}\frac{\partial H_{\mu\nu}}{\partial k_{T}^{\alpha}} \right]+T\frac{\partial^{2}H_{\mu\nu}}{\partial k_{T}^{\alpha}\partial k_{T}^{ \beta}},\] (34)
where repeated indices imply summations, we substitute the parton momentum fractions \(\{x_{i}\}\) in Eq. (33), and then carry out the collinear expansion of the hard part. At the end of the day, we have
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle^{ss}_{C}}{dz_{h}}= \frac{2\alpha_{s}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\frac{ \alpha_{s}}{2\pi}\int\frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu ^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\hat{z}^{-\epsilon}( 1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}\]
\[\times\left[x^{2}\frac{d^{2}}{dx^{2}}T_{qg}(x,0,0)D_{2}^{ss}+x \frac{d}{dx}T_{qg}(x,0,0)D_{1}^{ss}+T_{qg}(x,0,0)D_{0}^{ss}\right].\] (35)
Here and throughout the later part of this paper, \(W^{D}\) (\(H\)) stands for the combination of metric and longitudinal contributions obtained by contracting \(W^{D}_{\mu\nu}\) (\(H_{\mu\nu}\)) with \(-g^{\mu\nu}\) and \(p^{\mu}p^{\nu}\) separately. In Eq. (35), \(\mu\) is the mass scale introduced to keep the coupling constant dimensionless, \(g\to g\mu^{\epsilon}\), and the superscript “\(ss\)” represents the soft-soft contribution. There are three terms in Eq. (35): the first two are the derivative terms and the third one is the non-derivative term; they are related to the hard part coefficient function \(H(\{x_{i}\},k_{T})\) as follows,
\[D_{2}^{ss}= \frac{1}{2\hat{z}^{2}}\left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}} \right)^{2}\frac{\ell_{T}^{4}}{(1-\epsilon)^{2}}H,\] (36)
\[D_{1}^{ss}= -\frac{1}{2\hat{z}^{2}}\left(H+\frac{\ell_{T}^{2}}{1-\epsilon} \frac{\partial H}{\partial y_{1}}\right)\frac{\ell_{T}^{2}}{1-\epsilon},\] (37)
\[D_{0}^{ss}= \frac{1}{2\hat{z}^{2}}\left(\frac{1}{4}\frac{\ell_{T}^{2}}{1- \epsilon}\frac{\partial^{2}H}{\partial y_{1}^{2}}-\frac{\partial H}{\partial y _{2}}\right)\frac{\ell_{T}^{2}}{1-\epsilon},\] (38)
where \(y_{1}=\ell\cdot k_{T}\), \(y_{2}=k_{T}^{2}\), and the arguments in \(H\) are suppressed. In arriving at Eq. (35) from Eq. (34), one realizes that only derivatives with respect to (w.r.t.) \(x_{1}\) contribute to the final result; thus we change the partial derivative w.r.t. \(x_{1}\) into the form of a full derivative w.r.t. \(x\) (recall that \(x_{1}\to x\) as \(k_{T}\to 0\)). The first derivative w.r.t. \(x_{2}\) will generate \((y_{1}^{-}-y_{2}^{-})\), and thus, when combined with the matrix element, it vanishes due to the fact that the gluon field strengths commute on the light cone, as explained clearly in [31]. The second derivative w.r.t. \(x_{2}\) gives rise to a “contact” term when combined with the corresponding asymmetric-cut diagrams. The “contact” terms generally do not have nuclear size enhancement and thus we neglect them in our study; see the explanation in Ref. [53] and also the discussion in Sec. III.1.3 below. Finally, since \(x_{3}=0\) is independent of \(k_{T}\), no expansion over \(x_{3}\) is needed.
In the above equations, the hard part coefficients \(D_{i}^{ss}(i=2,1,0)\) are functions of parton Mandelstam variables \(\hat{s},~{}\hat{t}\) and \(\hat{u}\), which can be expressed in terms of \(Q^{2}\), \(\hat{x}\) and \(\hat{z}\) as
\[\hat{s}=\frac{1-\hat{x}}{\hat{x}}Q^{2},\qquad\hat{t}=-\frac{1- \hat{z}}{\hat{x}}Q^{2},\qquad\hat{u}=-\frac{\hat{z}}{\hat{x}}Q^{2}.\] (39)
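As a quick consistency check, these partonic Mandelstam variables satisfy momentum conservation for the \(2\to 2\) partonic process with the off-shell photon,
\[\hat{s}+\hat{t}+\hat{u}=\frac{Q^{2}}{\hat{x}}\Big{[}(1-\hat{x})-(1-\hat{z})-\hat{z}\Big{]}=-Q^{2},\]
and the endpoints correspond to \(\hat{x}\to 1\Leftrightarrow\hat{s}\to 0\) and \(\hat{z}\to 1\Leftrightarrow\hat{t}\to 0\); their overlap is the origin of the double poles encountered below.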
Thus the integrals over \(\hat{x}\) and \(\hat{z}\) contain divergences when \(\hat{x}\to 1\) and \(\hat{z}\to 1\). Note that we do not have to worry about divergences at \(\hat{x}\to 0\) and \(\hat{z}\to 0\), since these points lie outside the physical region (\(\hat{x}>x_{B}\) and \(\hat{z}>z_{h}\)). The main task now is to isolate all the divergences and combine them accordingly. Let us define the following common factor
\[I=\hat{z}^{-\epsilon}(1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1 -\hat{x})^{-\epsilon},\] (40)
which will be used repeatedly below. To perform the \(\epsilon\)-expansion for the hard part coefficients \(I\times D_{i}^{ss}\), we use the following formulas [46]:
\[\hat{z}^{-\epsilon}(1-\hat{z})^{-\epsilon-1}= -\frac{1}{\epsilon}\delta(1-\hat{z})+\frac{1}{(1-\hat{z})_{+}}-\epsilon\left(\frac{\ln(1-\hat{z})}{1-\hat{z}}\right)_{+}-\epsilon\frac{\ln\hat{z}}{1-\hat{z}}+{\cal O}(\epsilon^{2}),\] (41)
\[\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon-1}= -\frac{1}{\epsilon}\delta(1-\hat{x})+\frac{1}{(1-\hat{x})_{+}}-\epsilon\left(\frac{\ln(1-\hat{x})}{1-\hat{x}}\right)_{+}+\epsilon\frac{\ln\hat{x}}{1-\hat{x}}+{\cal O}(\epsilon^{2}),\] (42)
\[\hat{z}^{-\epsilon}(1-\hat{z})^{-\epsilon}= 1-\epsilon\ln\hat{z}-\epsilon\ln(1-\hat{z})+{\cal O}(\epsilon^{2 }),\] (43)
\[\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}= 1+\epsilon\ln\hat{x}-\epsilon\ln(1-\hat{x})+{\cal O}(\epsilon^{2 }),\] (44)
where the usual “plus”-function is defined as
\[\int_{0}^{1}dz\frac{f(z)}{(1-z)_{+}}\equiv\int_{0}^{1}dz\frac{f(z )-f(1)}{1-z}.\] (45)
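As an illustration, Eq. (41) can be checked symbolically by integrating both sides against a smooth test function; the following SymPy sketch does this for the arbitrary choice \(f(z)=z^{2}\), for which the left-hand side integrates to the Euler beta function \(B(3-\epsilon,-\epsilon)\) (the standard dilogarithm value \(\int_{0}^{1}dz\,z^{2}\ln z/(1-z)=5/4-\pi^{2}/6\) is inserted by hand).

```python
import sympy as sp

eps, z = sp.symbols('epsilon z', positive=True)
f = z**2                                  # arbitrary smooth test function, f(1) = 1
f1 = f.subs(z, 1)

# LHS of Eq. (41) integrated against f(z):
#   int_0^1 dz z^(2-eps) (1-z)^(-eps-1) = B(3-eps, -eps)
lhs = sp.gamma(3 - eps)*sp.gamma(-eps)/sp.gamma(3 - 2*eps)

# RHS of Eq. (41) integrated against f(z); the plus-distributions act on f - f(1)
plus_part = sp.cancel((f - f1)/(1 - z))   # = -(1 + z) for f = z**2
rhs = (-f1/eps
       + sp.integrate(plus_part, (z, 0, 1))
       - eps*sp.integrate(plus_part*sp.log(1 - z), (z, 0, 1))
       - eps*(sp.Rational(5, 4) - sp.pi**2/6))   # int_0^1 z^2 ln z/(1-z) dz

print(sp.series(lhs, eps, 0, 2))   # expect -1/eps - 3/2 + (pi**2/6 - 3)*eps + O(eps**2)
print(sp.expand(rhs))              # same coefficients through O(eps)
```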
Finally we have
\[I\times D_{2}^{ss}= -\frac{1}{\epsilon}C_{F}\delta(1-\hat{z})(1-\hat{x})(1+\hat{x}^{2 })+\cdots,\] (46)
\[I\times D_{1}^{ss}= -\frac{1}{\epsilon}C_{F}\delta(1-\hat{z})(4\hat{x}^{3}-5\hat{x}^{ 2}-1)+\cdots,\] (47)
\[I\times D_{0}^{ss}= C_{F}\bigg{[}\frac{2}{\epsilon^{2}}\delta(1-\hat{z})\delta(1- \hat{x})+\frac{4}{\epsilon}\delta(1-\hat{z})\delta(1-\hat{x})-\frac{1}{ \epsilon}\delta(1-\hat{x})\frac{1+\hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}\]
\[-\frac{1}{\epsilon}\delta(1-\hat{z})\frac{1+\hat{x}^{2}(6\hat{x}^ {2}-14\hat{x}+9)}{(1-\hat{x})_{+}}\bigg{]}+\cdots,\] (48)
where the ellipses denote finite contributions. It is instructive to point out that all the divergent terms above come from the metric contribution, not from the longitudinal contribution. However, the longitudinal part does contribute to finite terms. This feature holds true in all the other processes as well. For the divergent pieces associated with derivative terms in the above expression, we further perform integration by parts to convert them into the form of non-derivative terms [47]. We have the final divergent piece in soft-soft double scattering as
\[C_{F}\int_{x_{B}}^{1}\frac{dx}{x}T_{qg}(x,0,0)\left[\frac{2}{ \epsilon^{2}}\delta(1-\hat{x})\delta(1-\hat{z})-\frac{1}{\epsilon}\delta(1- \hat{x})\frac{1+\hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}-\frac{1}{\epsilon} \delta(1-\hat{z})\frac{1+\hat{x}^{2}}{(1-\hat{x})_{+}}\right],\] (49)
where we have used the boundary condition \(T_{qg}(x,0,0)=0\) as \(x\to 1\) in performing the integration by parts, which is valid under the approximation of neglecting the Fermi motion of the nucleon inside the nucleus. From the divergent piece, we can see that soft-soft double scattering contains both soft-collinear and collinear divergences, identified as the double pole \(1/\epsilon^{2}\) and the single pole \(1/\epsilon\), respectively. On the other hand, the finite terms associated with the derivative and non-derivative terms (denoted by the ellipses in Eqs. (46), (47), and (48)) are combined into a single term denoted as \(H_{qg-C}^{ss}\otimes T_{qg}\), with the expression given in Eq. (104) in the Appendix.
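For reference, \(C_{F}\), \(C_{A}\) and \(T_{R}\) appearing here and below are the usual \(SU(N_{c})\) color factors; in the standard conventions (and for \(N_{c}=3\)),
\[C_{F}=\frac{N_{c}^{2}-1}{2N_{c}}=\frac{4}{3},\qquad C_{A}=N_{c}=3,\qquad T_{R}=\frac{1}{2}.\]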
Likewise, we can compute the diagrams of hard-hard double scattering as shown in Fig. 3(b), where the radiated gluon is induced by the secondary quark-nucleus scattering, following the first quark-photon interaction. In this process, the exchanged gluon momenta (\(k_{g}\) and \(k_{g}^{\prime}\)) remain finite in the collinear limit \(k_{T}\to 0\), and thus it is referred to as a hard scattering. Specifically, the two propagators marked by short bars have the following expressions:
\[\frac{1}{(x_{1}p+q)^{2}+i\epsilon} =\frac{x_{B}}{Q^{2}}\frac{1}{x_{1}-x_{B}+i\epsilon},\] (50)
\[\frac{1}{\left[(x_{1}+x_{3})p+q\right]^{2}+i\epsilon} =\frac{x_{B}}{Q^{2}}\frac{1}{x_{1}+x_{3}-x_{B}+i\epsilon}.\] (51)
At the same time, the on-shell condition for the unobserved gluon leads to
\[\delta\left[\left((x_{1}+x_{2})p+k_{T}+q-\ell\right)^{2}\right] \to\delta(x_{1}+x_{2}-x-x_{C}).\] (52)
Thus the contour integrals and the above kinematic \(\delta\)-function fix \(\{x_{i}\}\) as
\[x_{1}=x_{B},\qquad x_{2}=x+x_{C}-x_{B},\qquad x_{3}=0,\] (53)
from which we find that the gluon momenta associated with the second scattering remain finite when \(k_{T}\to 0\) as
\[k_{g}\to(x-x_{B})p,\qquad k_{g}^{\prime}\to(x-x_{B})p,\] (54)
hence the name hard-hard double scattering.
Following the same steps as outlined for soft-soft double scattering, we can write down the contribution from hard-hard double scattering, where only the non-derivative term contributes to the final result:
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle^{hh}_{C}}{dz_{h}}= \frac{\alpha_{s}^{2}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\int \frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu^{2}}{Q^{2}}\right)^{ \epsilon}\frac{1}{\Gamma(1-\epsilon)}\hat{z}^{-\epsilon}(1-\hat{z})^{-\epsilon }\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}\]
\[\times T_{qg}(x_{B},x-x_{B},0)D_{0}^{hh},\] (55)
where the superscript “\(hh\)” represents the hard-hard scattering contribution. Performing the \(\epsilon\)-expansion, we have
\[I\times D_{0}^{hh}= C_{A}\left[\frac{2}{\epsilon^{2}}\delta(1-\hat{z})\delta(1-\hat{ x})-\frac{2}{\epsilon}\delta(1-\hat{x})\frac{1+\hat{z}^{2}}{(1-\hat{z})_{+}} \frac{C_{F}/C_{A}(1-\hat{z})^{2}+\hat{z}}{\hat{z}^{2}}-\frac{2}{\epsilon} \delta(1-\hat{z})\frac{1}{(1-\hat{x})_{+}}+\cdots\right].\] (56)
The finite term denoted by the ellipses comes from the metric part only, with the explicit expression \(H_{qg-C}^{hh}\otimes T_{qg}\) given in Eq. (105) in the Appendix.
Finally let us turn to the interference diagrams between soft and hard scatterings. The calculation is similar, and we only list the final result here. For the soft-hard scattering contributions as shown in Fig. 3(c), we have
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle^{sh}_{C}}{dz_{h}}= \frac{2\alpha_{s}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\frac{ \alpha_{s}}{2\pi}\int\frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu ^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\hat{z}^{-\epsilon}( 1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}\]
\[\times\left[x\frac{d}{dx}T_{qg}(x,0,x_{B}-x)D_{1}^{sh}+x\left. \frac{d}{dx_{2}}T_{qg}(x,x_{2},x_{B}-x)\right|_{x_{2}\to 0}D_{12}^{sh}+T_{qg}( x,0,x_{B}-x)D_{0}^{sh}\right].\] (57)
Again, the divergences in each term of the above equation can be identified as follows,
\[I\times D_{1}^{sh}= -\frac{1}{\epsilon}\frac{C_{A}}{2}\delta(1-\hat{z})(1+\hat{x})+\cdots,\] (58)
\[I\times D_{12}^{sh}= \cdots,\] (59)
\[I\times D_{0}^{sh}= \frac{C_{A}}{2}\bigg{\{}-\frac{2}{\epsilon^{2}}\delta(1-\hat{z}) \delta(1-\hat{x})-\frac{2}{\epsilon}\delta(1-\hat{z})\delta(1-\hat{x})+\frac{1 }{\epsilon}\delta(1-\hat{x})\frac{1+\hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}[ \hat{z}+2C_{F}/C_{A}(1-\hat{z})]\]
\[+\frac{1}{\epsilon}\delta(1-\hat{z})\frac{1+2\hat{x}-\hat {x}^{2}}{(1-\hat{x})_{+}}+\cdots\bigg{\}}.\] (60)
As in the soft-soft double scattering, performing an integration by parts to convert the derivative of the quark-gluon correlation function \(T_{qg}\) into \(T_{qg}\) itself leads to the divergent part
\[C_{A}\int_{x_{B}}^{1}\frac{dx}{x}T_{qg}(x,0,x_{B}-x)\left\{- \frac{1}{\epsilon^{2}}\delta(1-\hat{x})\delta(1-\hat{z})+\frac{1}{\epsilon} \delta(1-\hat{x})\frac{1+\hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}\left[\frac{ \hat{z}}{2}+\frac{C_{F}}{C_{A}}(1-\hat{z})\right]+\frac{1}{\epsilon}\delta(1- \hat{z})\frac{1+\hat{x}}{2(1-\hat{x})_{+}}\right\},\] (61)
while the finite contribution, denoted as \(H_{qg-C}^{sh}\otimes T_{qg}\), is given by Eq. (106) in the Appendix. As in hard-hard double scattering, the finite contribution in soft-hard double scattering comes from the metric contribution only, and the longitudinal part does not contribute. The same holds true for the hard-soft double scattering process.
The hard-soft double scattering process shown in Fig. 3(d) is simply the complex conjugate of the soft-hard double scattering; its contribution can be obtained by replacing the matrix element in the soft-hard process as follows:
\[T_{qg}(x,0,x_{B}-x)\to T_{qg}(x_{B},x-x_{B},x-x_{B}).\] (62)
Therefore, the divergent part in this process is
\[C_{A}\int_{x_{B}}^{1}\frac{dx}{x}T_{qg}(x_{B},x-x_{B},x-x_{B}) \left\{-\frac{1}{\epsilon^{2}}\delta(1-\hat{x})\delta(1-\hat{z})+\frac{1}{ \epsilon}\delta(1-\hat{x})\frac{1+\hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}} \left[\frac{\hat{z}}{2}+\frac{C_{F}}{C_{A}}(1-\hat{z})\right]\right.\]
\[\left.+\frac{1}{ \epsilon}\delta(1-\hat{z})\frac{1+\hat{x}}{2(1-\hat{x})_{+}}\right\},\] (63)
and the finite part, denoted as \(H_{qg-C}^{hs}\otimes T_{qg}\), can be found in Eq. (107) in the Appendix.
Combining all the results from soft-soft, hard-hard, soft-hard and hard-soft contributions, we obtain the result for real corrections from central-cut diagrams,
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle^{\rm(C)}}{d{\cal PS}}= \sigma_{h}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dx}{x }\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu^{2}}{Q^{2}}\right)^{\epsilon} \frac{1}{\Gamma(1-\epsilon)}\Bigg{\{}\frac{2}{\epsilon^{2}}C_{F}\delta(1-\hat{ x})\delta(1-\hat{z})T_{qg}(x,0,0)-\frac{1}{\epsilon}\delta(1-\hat{x})\]
\[\times C_{F}\frac{1+\hat{z}^{2}}{(1-\hat{z})_{+}}T_{qg}(x,0,0)- \frac{1}{\epsilon}\delta(1-\hat{z})\bigg{[}C_{F}\frac{1+\hat{x}^{2}}{(1-\hat{x })_{+}}T_{qg}(x,0,0)+C_{A}\frac{2}{(1-\hat{x})_{+}}T_{qg}(x_{B},x-x_{B},0)\]
\[-\frac{C_{A}}{2}\frac{1+\hat{x}}{(1-\hat{x})_{+}}\big{(}T_{qg}(x, 0,x_{B}-x)+T_{qg}(x_{B},x-x_{B},x-x_{B})\big{)}\bigg{]}+H_{qg}^{C-R}\otimes T_ {qg}\Bigg{\}},\] (64)
where the finite contribution \(H_{qg}^{C-R}\otimes T_{qg}\) has the following form
\[H_{qg}^{C-R}\otimes T_{qg}=H_{qg-C}^{ss}\otimes T_{qg}+H_{qg-C}^ {hh}\otimes T_{qg}+H_{qg-C}^{sh}\otimes T_{qg}+H_{qg-C}^{hs}\otimes T_{qg},\] (65)
with all the terms on the right-hand side given in Eqs. (104), (105), (106), and (107), respectively. It is instructive to point out that even though hard-hard, soft-hard and hard-soft double scatterings all have double-pole \(1/\epsilon^{2}\) terms \(\propto C_{A}\), these cancel among themselves, and thus the remaining \(1/\epsilon^{2}\) terms come entirely from the soft-soft double scattering contribution, which carries a color factor \(C_{F}\) and is exactly opposite to that of the virtual corrections, as we will show in the next subsection.
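This bookkeeping can be read off directly from the divergent pieces quoted above. Since the double poles always come with \(\delta(1-\hat{x})\delta(1-\hat{z})\), which sets \(x=x_{B}\), all four correlation functions collapse to \(T_{qg}(x_{B},0,0)\), and the coefficients of \(\delta(1-\hat{x})\delta(1-\hat{z})T_{qg}(x_{B},0,0)/\epsilon^{2}\) in Eqs. (49), (56), (61) and (63) combine as
\[2C_{F}\,({\rm ss})+2C_{A}\,({\rm hh})-C_{A}\,({\rm sh})-C_{A}\,({\rm hs})=2C_{F},\]
so the \(C_{A}\) double poles indeed cancel among the hard-hard, soft-hard and hard-soft contributions, leaving the \(2C_{F}/\epsilon^{2}\) term shown in Eq. (64).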
#### III.1.2 Virtual corrections
In this subsection, we calculate the virtual corrections in quark-gluon double scattering, which have to be included to ensure unitarity and infrared safety of the final result. The relevant generic Feynman diagrams are shown in Fig. 5, in which the blob is given by Fig. 6. The incoming parton momenta involved in the double scatterings follow the same convention as those in Fig. 1(b), or the LO diagram shown in Fig. 2. In this case, it is important to realize that all the asymmetric-cut diagrams do not contribute to the \(\ell_{hT}^{2}\)-weighted differential cross section. This is because the kinematic \(\delta\)-function \(\delta^{n-2}(\ell_{hT})\) from the final-state phase space leads to \(\int d^{n-2}\ell_{hT}\ell_{hT}^{2}\delta^{n-2}(\ell_{hT})=0\). Thus we only have to consider the central-cut diagrams.
<figure><img src="content_image/1409.1315/x9.png"><figcaption>Figure 5: The virtual diagrams in the calculation of transverse momentumbroadening at NLO in SIDIS. The incoming parton momenta involved in the doublescatterings follow the same convention as in Fig. 1(b), or the LO diagramshown in Fig. 2.</figcaption></figure>
<figure><img src="content_image/1409.1315/x10.png"><figcaption>Figure 6: One-loop corrections to the quark-photon-quark vertex with gluonattachment, corresponding to the blob in Fig. 5.</figcaption></figure>
The two diagrams in Fig. 5 are simply complex conjugates of each other, so they give the same result. The actual calculation is quite involved, containing a significant amount of tensor reduction and integration, but it is conceptually straightforward. The results can be decomposed into two types of color factors, \(C_{F}\) and \(C_{A}\); it turns out that the terms associated with \(C_{A}\) cancel out and only the terms with color factor \(C_{F}\) remain. The final result for the virtual correction is quite simple and has exactly the same structure as the virtual correction at leading twist,
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle}{d{\cal PS}}^{\rm(V)}= \sigma_{h}\frac{\alpha_{s}}{2\pi}\int\frac{dx}{x}T_{qg}(x,0,0) \int\frac{dz}{z}D_{h/q}(z)\delta(1-{\hat{x}})\delta(1-{\hat{z}})\left(\frac{4 \pi\mu^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}C_{F}\left(- \frac{2}{\epsilon^{2}}-\frac{3}{\epsilon}-8\right).\] (66)
A similar structure also appears in the virtual correction to the transverse-momentum-weighted spin-dependent cross section at twist-3 [47; 57]. It is important to note that the soft-collinear divergence (the \(1/\epsilon^{2}\) term) in the above virtual correction should cancel that in the real diagrams in order to establish the NLO collinear factorization at twist-4. We will check this cancellation when we combine the results of all the diagrams. For later convenience, we write out the finite term in the virtual contribution:
\[H_{qg}^{C-V}\otimes T_{qg}=-8C_{F}\delta(1-{\hat{x}})\delta(1-{ \hat{z}})T_{qg}(x,0,0).\] (67)
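Explicitly, the double poles cancel between the real contribution of Eq. (64) and the virtual correction of Eq. (66),
\[\frac{2C_{F}}{\epsilon^{2}}\,\delta(1-\hat{x})\delta(1-\hat{z})\,T_{qg}(x,0,0)+\left(-\frac{2C_{F}}{\epsilon^{2}}\right)\delta(1-\hat{x})\delta(1-\hat{z})\,T_{qg}(x,0,0)=0,\]
so the sum of the real and virtual quark-gluon contributions contains only single poles, as anticipated above.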
#### III.1.3 Asymmetric-cut
We now turn to the asymmetric-cut diagrams, which represent the interferences between single and triple scatterings. They include both left-cut and right-cut diagrams, as shown in Figs. 7 and 8, respectively. Since the two additional gluon scatterings always occur on the same side of the cut, there are no hard-hard scattering contributions. Thus there are only three different kinds of subprocesses for asymmetric-cut diagrams: soft-soft, soft-hard and hard-soft rescatterings.
<figure><img src="content_image/1409.1315/x11.png"><figcaption>Figure 7: The left-cut diagrams for (a) soft-soft, (b) soft-hard, (c) hard-soft rescattering processes in SIDIS. The short bars indicate the propagatorswhere the soft pole or hard pole arises.</figcaption></figure>
<figure><img src="content_image/1409.1315/x14.png"><figcaption>Figure 8: The right-cut diagrams for (a) soft-soft, (b) soft-hard, (c) hard-soft rescattering processes in SIDIS. The short bars indicate the propagatorswhere the soft pole or hard pole arises.</figcaption></figure>
The soft-soft rescatterings of single-triple interference are shown in Figs. 7(a) and 8(a). Let us take Fig. 7(a) as an example, in which the relevant propagators (marked by the short bars) are,
\[\frac{1}{(\ell+x_{2}p+k_{T})^{2}-i\epsilon} =-\frac{x}{\hat{u}}\frac{1}{x_{2}-x_{E}-i\epsilon},\] (68)
\[\frac{1}{(\ell+x_{3}p)^{2}-i\epsilon} =-\frac{x}{\hat{u}}\frac{1}{x_{3}-i\epsilon}.\] (69)
Together with the on-shell condition for the unobserved gluon, which gives \(\delta(x_{1}-x)\), we have
\[x_{1}=x,\qquad x_{2}=x_{E},\qquad x_{3}=0,\] (70)
where \(x_{E}\) is given by
\[x_{E}=\frac{x}{\hat{u}}\left(2\ell\cdot k_{T}+k_{T}^{2}\right).\] (71)
Only \(x_{2}\), which is set to \(x_{E}\) by the pole in the first propagator, depends on \(k_{T}\) and vanishes when \(k_{T}\to 0\). We further find that the hard part coefficient \(H(\{x_{i}\},k_{T})\) is independent of \(k_{T}\), and therefore, according to Eq. (34), all the terms associated with derivatives of the hard part coefficient vanish and only the derivative terms w.r.t. \(x_{2}\) survive. As we have already pointed out when discussing soft-soft double scattering in the central-cut diagrams, the single-derivative term w.r.t. \(x_{2}\) vanishes due to the commutation of gluon field strengths on the light cone. We further find that the double-derivative term w.r.t. \(x_{2}\) leads to a “contact” contribution to the final result. For example, when we combine the soft-soft contributions in central-cut, left-cut and right-cut diagrams, the result is proportional to
\[\propto \int_{-\infty}^{\infty}dy^{-}\int_{-\infty}^{\infty}dy_{1}^{-} \int_{-\infty}^{\infty}dy_{2}^{-}e^{ixp^{+}y^{-}}(y_{1}^{-}-y_{2}^{-})^{2} \langle A|\bar{\psi}_{q}(0)\gamma^{+}F_{\sigma}^{~{}+}(y_{2}^{-})F^{+\sigma}(y _{1}^{-})\psi_{q}(y^{-})|A\rangle\]
\[\times\Big{[}H_{C}(\{x_{i}\},k_{T})\theta(y_{1}^{-}-y^{-})\theta( y_{2}^{-})-H_{L}(\{x_{i}\},k_{T})\theta(y_{1}^{-}-y_{2}^{-})\theta(y_{2}^{-})\]
\[-H_{R}(\{x_{i}\},k_{T})\theta(y_{1}^{-}-y^{-})\theta(y_{2}^{-}-y_ {1}^{-})\Big{]}_{k_{T}\to 0}.\] (72)
Given that
\[H_{C}(\{x_{i}\},k_{T}=0)=H_{L}(\{x_{i}\},k_{T}=0)=H_{R}(\{x_{i} \},k_{T}=0)\equiv H(x,0),\] (73)
we have a combination of \(\theta\)-functions as
\[\Big{[}\theta(y_{1}^{-}-y^{-})\theta(y_{2}^{-})-\theta(y_{1}^{-}- y_{2}^{-})\theta(y_{2}^{-})-\theta(y_{1}^{-}-y^{-})\theta(y_{2}^{-}-y_{1}^{-}) \Big{]},\] (74)
which converts Eq. (72) to
\[-\int_{-\infty}^{\infty}dy^{-}e^{ixp^{+}y^{-}}\int_{0}^{y^{-}}dy_ {1}^{-}\int_{0}^{y_{1}^{-}}dy_{2}^{-}(y_{1}^{-}-y_{2}^{-})^{2}\langle A|\bar{ \psi}_{q}(0)\gamma^{+}F_{\sigma}^{~{}+}(y_{2}^{-})F^{+\sigma}(y_{1}^{-})\psi_{ q}(y^{-})|A\rangle\,H(x,0).\] (75)
In other words, the integration \(\int dy_{1}^{-}\int dy_{2}^{-}\) becomes an ordered integral limited by the value of \(y^{-}\), which is in turn effectively restricted by the rapidly oscillating exponential phase factor \(e^{ixp^{+}y^{-}}\), i.e., \(y^{-}\sim 1/xp^{+}\to 0\) (if \(x\) is not small), which in turn restricts \(y_{1,2}^{-}\to 0\). Physically, this means that all the position integrations in such a term are localized and therefore receive no nuclear size enhancement in the double scattering contribution. Such a term, commonly called a “contact” term, can thus be safely neglected when one considers a large nucleus. Therefore, the contributions from soft-soft rescatterings in asymmetric-cut diagrams can be neglected in our calculation of transverse momentum broadening.
For the soft-hard rescattering contributions in the left-cut diagrams, as shown in Fig. 7(b), we follow the same steps as in the calculation of soft-soft double scattering in the central-cut diagrams and obtain the following result:
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle^{sh}_{L}}{dz_{h}}= -\frac{2\alpha_{s}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\frac{ \alpha_{s}}{2\pi}\int\frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu ^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\hat{z}^{-\epsilon}( 1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}\]
\[\times\left[x\left.\frac{d}{dx_{2}}T_{qg}^{L}(x,x_{2},x_{B}-x) \right|_{x_{2}\to 0}D_{12}^{sh}+T_{qg}^{L}(x,0,x_{B}-x)D_{0}^{sh}\right],\] (76)
where the matrix element \(T_{qg}^{L}\) is given by
\[T_{qg}^{L}(x_{1},x_{2},x_{3})= \int\frac{dy^{-}}{2\pi}e^{ix_{1}p^{+}y^{-}}\int\frac{dy_{1}^{-}dy _{2}^{-}}{4\pi}e^{ix_{2}p^{+}(y_{1}^{-}-y_{2}^{-})}e^{ix_{3}p^{+}y_{2}^{-}} \theta(y_{2}^{-})\theta(y_{1}^{-}-y_{2}^{-})\]
\[\times\langle A|{\bar{\psi}}_{q}(0)\gamma^{+}F_{\sigma}^{+}(y_{2} ^{-})F^{\sigma+}(y_{1}^{-})\psi_{q}(y^{-})|A\rangle,\] (77)
with the \(\theta\)-functions representing the ordering of the rescatterings. In Eq. (76), only the non-derivative term contains a divergence,
\[I\times D_{12}^{sh}= \cdots,\] (78)
\[I\times D_{0}^{sh}= \frac{1}{\epsilon}C_{A}\delta(1-\hat{z})\delta(1-\hat{x})+\cdots,\] (79)
while the finite contribution for soft-hard rescatterings in the left-cut diagrams, denoted as \(H_{qg-L}^{sh}\otimes T_{qg}^{L}\), is given in Eq. (108) in the Appendix.
Similarly, the contribution from hard-soft rescatterings in the left-cut diagrams in Fig. 7(c) is
\[\frac{d\langle\ell_{hT}^{2}W_{\mu\nu}^{D}\rangle^{hs}_{L}}{dz_{h}}= -\frac{2\alpha_{s}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\frac{ \alpha_{s}}{2\pi}\int\frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu ^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\hat{z}^{-\epsilon}( 1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}\]
\[\times\left[T_{qg}^{L}(x,x_{B}-x,x_{B}-x)D_{0\mu\nu}^{hs}\right],\] (80)
where the non-derivative term contains divergences,
\[I\times D_{0\mu\nu}^{hs}= -\frac{1}{\epsilon}C_{A}\delta(1-\hat{z})\delta(1-\hat{x})+\cdots,\] (81)
and the finite contribution, denoted as \(H_{qg-L}^{hs}\otimes T_{qg}^{L}\), can be found in Eq. (109) in the Appendix.
When we combine the soft-hard and hard-soft rescatterings in the left-cut diagrams, the divergences cancel out, since the accompanying \(\delta(1-\hat{x})\) sets \(x=x_{B}\) so that both correlation functions reduce to \(T_{qg}^{L}(x_{B},0,0)\),
\[\frac{1}{\epsilon}C_{A}\delta(1-\hat{z})\delta(1-\hat{x})\left[T_ {qg}^{L}(x,0,x_{B}-x)-T_{qg}^{L}(x,x_{B}-x,x_{B}-x)\right]=0,\] (82)
and only the finite contribution remains, given by \(H_{qg-L}^{sh}\otimes T_{qg}^{L}+H_{qg-L}^{hs}\otimes T_{qg}^{L}\).
The soft-hard and hard-soft rescatterings in the right-cut diagrams as shown in Fig. 8 are the complex conjugate of the ones in the left-cut diagrams, and thus can be obtained directly from the results for diagrams in Fig. 7. By replacing the matrix element in Eq. (108)
\[T_{qg}^{L}(x,0,x_{B}-x)\to T_{qg}^{R}(x_{B},x-x_{B},x-x_{B}),\] (83)
with \(T_{qg}^{R}\) given by
\[T_{qg}^{R}(x_{1},x_{2},x_{3})= \int\frac{dy^{-}}{2\pi}e^{ix_{1}p^{+}y^{-}}\int\frac{dy_{1}^{-}dy _{2}^{-}}{4\pi}e^{ix_{2}p^{+}(y_{1}^{-}-y_{2}^{-})}e^{ix_{3}p^{+}y_{2}^{-}} \theta(y_{2}^{-}-y_{1}^{-})\theta(y_{1}^{-}-y^{-})\]
\[\times\langle A|{\bar{\psi}}_{q}(0)\gamma^{+}F_{\sigma}^{+}(y_{2} ^{-})F^{\sigma+}(y_{1}^{-})\psi_{q}(y^{-})|A\rangle,\] (84)
we obtain the finite contribution from hard-soft rescatterings in the right-cut diagrams, denoted by \(H_{qg-R}^{hs}\otimes T_{qg}^{R}\), as given in Eq. (110). Similarly, the finite contribution from soft-hard rescatterings in the right-cut diagrams, denoted as \(H_{qg-R}^{sh}\otimes T_{qg}^{R}\), is given by Eq. (111).
Combining all contributions from asymmetric-cut diagrams, we find that the final result is free of any divergence,
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle^{\rm(A)}}{d{\cal PS}}= -\sigma_{h}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dx}{ x}\int\frac{dz}{z}D_{h/q}(z)H_{qg}^{A}\otimes T_{qg}^{A},\] (85)
where \(H_{qg}^{A}\otimes T_{qg}^{A}\) is given by
\[H_{qg}^{A}\otimes T_{qg}^{A}=H_{qg-L}^{sh}\otimes T_{qg}^{L}+H_{ qg-L}^{hs}\otimes T_{qg}^{L}+H_{qg-R}^{sh}\otimes T_{qg}^{R}+H_{qg-R}^{hs} \otimes T_{qg}^{R}.\] (86)
Here all the terms on the right-hand side are given by Eqs. (108), (109), (110), and (111), respectively. Note again that the longitudinal contributions from asymmetric-cut diagrams vanish.
### Gluon-Gluon double scattering
In this subsection, we consider the gluon-gluon double scattering process in SIDIS, as shown in Fig. 9, in which two initial gluons from the nucleus participate in the process and the first hard gluon plays the same role as the hard quark in the quark-gluon double scattering. Here, for simplicity, we only consider the situation where a quark fragments into the observed final-state hadron. The inclusion of antiquark fragmentation is straightforward: one simply replaces the fragmentation function \(D_{h/q}(z)\to D_{h/\bar{q}}(z)\).
<figure><img src="content_image/1409.1315/x17.png"><figcaption>Figure 9: The central-cut diagram for soft-soft gluon-gluon double scatteringsin SIDIS. The short bars indicate the propagators where the soft pole arise.The blob with “H” inside represents the hard 2→2 processes as shown in Fig.10.</figcaption></figure>
<figure><img src="content_image/1409.1315/x18.png"><figcaption>Figure 10: The representation of hard 2→2 processes for photon-gluoninteraction.</figcaption></figure>
In gluon-gluon double scattering, we only have soft-soft double scattering, as illustrated by the central-cut diagrams in Fig. 9. The kinematics and pole structure are exactly the same as those in the soft-soft process of quark-gluon double scattering. The calculation is straightforward and the final result turns out to be
\[\frac{d\langle\ell_{hT}^{2}W^{D}\rangle^{ss}_{gg}}{dz_{h}}= \frac{2\alpha_{s}}{N_{c}}z_{h}^{2}(2\pi)^{3}(1-\epsilon)\frac{ \alpha_{s}}{2\pi}\int\frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu ^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\hat{z}^{-\epsilon}( 1-\hat{z})^{-\epsilon}\hat{x}^{\epsilon}(1-\hat{x})^{-\epsilon}\]
\[\times\left[x^{2}\frac{d^{2}}{dx^{2}}T_{gg}(x,0,0)D_{2}^{ss}+x \frac{d}{dx}T_{gg}(x,0,0)D_{1}^{ss}+T_{gg}(x,0,0)D_{0}^{ss}\right],\] (87)
where the gluon-gluon matrix element \(T_{gg}(x,0,0)\) is given by [40]
\[T_{gg}(x,0,0)= \frac{1}{xp^{+}}\int\frac{dy^{-}}{2\pi}\,e^{ixp^{+}y^{-}}\int \frac{dy_{1}^{-}dy_{2}^{-}}{2\pi}\theta(y_{2}^{-})\,\theta(y_{1}^{-}-y^{-}) \langle A|F_{\alpha}^{~{}+}(0)F^{\sigma+}(y_{2}^{-})F^{+}_{~{}\sigma}(y_{1}^{- })F^{+\alpha}(y^{-})|A\rangle\,.\] (88)
The \(\epsilon\)-expansion in Eq. (87) gives
\[I\times D_{2}^{ss}= -\frac{1}{\epsilon}T_{R}\delta(1-\hat{z})(1-\hat{x})^{2}(2\hat{x} ^{2}-2\hat{x}+1)+\cdots,\] (89)
\[I\times D_{1}^{ss}= \frac{1}{\epsilon}T_{R}\delta(1-\hat{z})(1-\hat{x})(1-2\hat{x})(6 \hat{x}^{2}-6\hat{x}+1)+\cdots,\] (90)
\[I\times D_{0}^{ss}= -\frac{1}{\epsilon}T_{R}\delta(1-\hat{z})(1-\hat{x})(1-4\hat{x})( 6\hat{x}^{2}-6\hat{x}+1)+\cdots,\] (91)
which leads to the following divergent piece
\[T_{R}\int_{x_{B}}^{1}\frac{dx}{x}T_{gg}(x,0,0)\left[-\frac{1}{ \epsilon}\delta(1-\hat{z})(2\hat{x}^{2}-2\hat{x}+1)\right]=\left(-\frac{1}{ \epsilon}\right)\delta(1-\hat{z})\int_{x_{B}}^{1}\frac{dx}{x}T_{gg}(x,0,0)P_{ qg}(\hat{x}).\] (92)
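The identification in Eq. (92) uses the algebraic identity \(2\hat{x}^{2}-2\hat{x}+1=\hat{x}^{2}+(1-\hat{x})^{2}\), i.e., the combination multiplying \(T_{R}\) is just the standard gluon-to-quark splitting kernel (presumably in the convention of Eq. (13)),
\[P_{qg}(\hat{x})=T_{R}\left[\hat{x}^{2}+(1-\hat{x})^{2}\right].\]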
The remaining finite contribution from gluon-gluon double scattering, denoted as \(H_{gg}^{C}\otimes T_{gg}\), is written in Eq. (112).
One should in principle also include the asymmetric-cut diagrams in gluon-gluon double scattering. However, these diagrams can be neglected due to the lack of nuclear enhancement, as they only lead to contact contributions. Thus the gluon-gluon double scattering contribution can be written as
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle^{\rm gg}}{d{\cal PS}}= \sigma_{h}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dx}{x}\int\frac{dz}{z}D_{h/q}(z)\left(\frac{4\pi\mu^{2}}{Q^{2}}\right)^{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\bigg{[}-\frac{1}{\epsilon}\delta(1-\hat{z})P_{qg}(\hat{x})T_{gg}(x,0,0)+H_{gg}^{C}\otimes T_{gg}\bigg{]},\] (93)
which comes from the central-cut diagrams only; the finite piece \(H_{gg}^{C}\otimes T_{gg}\) is given in Eq. (112).
### Final result and QCD evolution equation for quark-gluon correlation function
With all the real and virtual corrections given in the previous subsections, we can combine them and present the final result of transverse momentum broadening in SIDIS at NLO. Here the real corrections include both central-cut and asymmetric-cut diagrams for quark-gluon correlation function, and the central-cut diagrams for gluon-gluon correlation function. We show that all the soft divergences cancel out between real and virtual diagrams. This is an important check for any calculation within the collinear factorization formalism [54]. The remaining collinear divergences can be absorbed by the redefinition of either the quark fragmentation function or the quark-gluon correlation function.
First, let us concentrate on the double-pole \(1/\epsilon^{2}\) terms, which represent soft-collinear divergences. We find that they cancel out between real and virtual contributions, see in particular, the real contribution from quark-gluon correlation function in Eq. (64) and the virtual correction in Eq. (66). Thus we are left with only the \(1/\epsilon\) divergences and the finite terms, and they can be written as
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle}{d{\cal PS}}= \sigma_{h}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int\frac{dz}{z }D_{h/q}(z)\int\frac{dx}{x}\Bigg{\{}\left(-\frac{1}{\hat{\epsilon}}+\ln\frac{Q ^{2}}{\mu^{2}}\right)\Big{[}\delta(1-\hat{x})P_{qq}(\hat{z})T_{qg}(x,0,0)+ \delta(1-\hat{z})\Big{(}{\mathcal{P}}_{qg\to qg}\otimes T_{qg}\]
\[+P_{qg}(\hat{x})T_{gg}(x,0,0)\Big{)}\Big{]}+H_{qg}^{C-R}\otimes T _{qg}+H_{qg}^{C-V}\otimes T_{qg}-H_{qg}^{A}\otimes T_{qg}^{A}+H_{gg}^{C} \otimes T_{gg}\Bigg{\}},\] (94)
where the finite corrections in the second line are given by Eqs. (65), (67), (86), and (112), respectively, \(1/\hat{\epsilon}=1/\epsilon-\gamma_{E}+\ln(4\pi)\), \(P_{qq}(\hat{z})\) and \(P_{qg}(\hat{x})\) are the usual quark-to-quark and gluon-to-quark splitting kernels as given in Eqs. (11) and (13), respectively. There is a new term \({\mathcal{P}}_{qg\to qg}\otimes T_{qg}\) defined as,
\[{\mathcal{P}}_{qg\to qg}\otimes T_{qg}\equiv P_{qq}(\hat{x})T_{qg}(x,0,0)+\frac{C_{A}}{2}\bigg{\{}\frac{4}{(1 -\hat{x})_{+}}T_{qg}(x_{B},x-x_{B},0)-\frac{1+\hat{x}}{(1-\hat{x})_{+}}\big{[} T_{qg}(x,0,x_{B}-x)\]
\[+T_{qg}(x_{B},x-x_{B},x-x_{B})\big{]}\bigg{\}}.\] (95)
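One can already anticipate from Eq. (95) the vacuum limit used below: as \(\hat{x}\to 1\) (i.e., \(x\to x_{B}\)) the second and third arguments of all three off-diagonal correlation functions vanish, so they reduce to the diagonal \(T_{qg}(x,0,0)\), and the \(C_{A}\) terms combine as
\[\frac{C_{A}}{2}\left[\frac{4}{(1-\hat{x})_{+}}-\frac{2(1+\hat{x})}{(1-\hat{x})_{+}}\right]=C_{A}\,\frac{1-\hat{x}}{(1-\hat{x})_{+}}\to 0\quad{\rm as}\ \hat{x}\to 1,\]
so that only \(P_{qq}(\hat{x})T_{qg}(x,0,0)\) survives in this limit.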
It is thus clear that the first term, proportional to \(\delta(1-\hat{x})\) in Eq. (94), amounts to just the leading-twist collinear QCD correction to the leading-order quark-to-hadron fragmentation function \(D_{h/q}(z_{h})\):
\[D_{h/q}(z_{h},\mu_{f}^{2})=D_{h/q}(z_{h})-\frac{\alpha_{s}}{2\pi }\left(\frac{1}{\hat{\epsilon}}+\ln\frac{\mu^{2}}{\mu_{f}^{2}}\right)\int_{z_{ h}}^{1}\frac{dz}{z}P_{qq}(\hat{z})D_{h/q}(z),\] (96)
where we have adopted the \(\overline{\rm MS}\) scheme, and \(\mu_{f}\) is the factorization scale for the fragmentation function. The factorization scale \(\mu_{f}\)-dependence leads to the same DGLAP evolution equation for the fragmentation function \(D_{h/q}(z_{h},\mu_{f}^{2})\) as in the single scattering (leading-twist) case.
Following the same collinear factorization procedure, one can absorb the second collinear divergence, which is proportional to \(\delta(1-\hat{z})\) in Eq. (94), into the redefinition of the corresponding quark-gluon correlation function \(T_{qg}(x_{B},0,0)\),
\[T_{qg}(x_{B},0,0,\mu_{f}^{2})=T_{qg}(x_{B},0,0)-\frac{\alpha_{s} }{2\pi}\left(\frac{1}{\hat{\epsilon}}+\ln\frac{\mu^{2}}{\mu_{f}^{2}}\right) \int_{x_{B}}^{1}\frac{dx}{x}\Big{[}{\mathcal{P}}_{qg\to qg}\otimes T_{qg}+P_{ qg}(\hat{x})T_{gg}(x,0,0)\Big{]},\] (97)
where we have chosen the same factorization scale \(\mu_{f}\) as that in the fragmentation function. In principle, they do not have to be the same. The above redefinition leads to a new QCD evolution equation for the “diagonal” quark-gluon correlation function:
\[\mu_{f}^{2}\frac{\partial}{\partial\mu_{f}^{2}}T_{qg}(x_{B},0,0, \mu_{f}^{2})=\frac{\alpha_{s}}{2\pi}\int_{x_{B}}^{1}\frac{dx}{x}\Big{[}{ \mathcal{P}}_{qg\to qg}\otimes T_{qg}+P_{qg}(\hat{x})T_{gg}(x,0,0,\mu_{f}^{2}) \Big{]}.\] (98)
This evolution equation, as it stands, is not closed; this is a common feature of higher-twist parton distributions [47; 57]. Under certain approximations for the functional form in \(x_{i=1,2,3}\) of the two-parton correlation functions, one can obtain a solution to the above evolution equation [58; 59; 60]. According to the analysis of induced gluon spectra in Refs. [10; 61], the interference between soft and hard contributions corresponds to Landau-Pomeranchuk-Migdal (LPM) [62] interference, which suppresses gluon radiation with large formation time \(\tau_{f}\gg R_{A}\). In the high-twist formalism, the formation time of the medium-induced gluon is defined as \(\tau_{f}=1/[x(1-\hat{x})p^{+}]\). Thus the LPM region can be reached by imposing \(\hat{x}\to 1\) in Eq. (98). In this particular kinematic region, the splitting kernel \({\mathcal{P}}_{qg\to qg}\otimes T_{qg}\) reduces to the vacuum one, \(P_{qq}(\hat{x})T_{qg}(x,0,0,\mu_{f}^{2})\), and Eq. (98) becomes exactly the vacuum DGLAP evolution equation for the quark distribution function.
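To illustrate the structure of Eq. (98), the following sketch performs a single evolution step in precisely this \(\hat{x}\to 1\) (vacuum-like) limit, i.e., keeping only the \(P_{qq}\otimes T_{qg}\) piece and dropping the off-diagonal \(C_{A}\) terms and the \(T_{gg}\) mixing. The toy input shape, the fixed coupling and the step size are arbitrary illustrative choices; the standard form \(P_{qq}(z)=C_{F}[(1+z^{2})/(1-z)_{+}+\frac{3}{2}\delta(1-z)]\) is assumed, and the plus-distribution restricted to \([x_{B},1]\) is handled by the usual subtraction.

```python
import numpy as np
from scipy.integrate import quad

CF, alpha_s = 4.0/3.0, 0.3            # fixed coupling, for illustration only

def T(x):
    """Toy diagonal correlation function T_qg(x,0,0); arbitrary shape."""
    return x**-0.5 * (1.0 - x)**3

def dT_dlnmu2(xB):
    """Vacuum-like (xhat -> 1) limit of the right-hand side of Eq. (98):
    (alpha_s/2pi) * int dx/x P_qq(xhat) T(x), with the plus-distribution
    on [xB, 1] implemented by the standard subtraction."""
    def integrand(xh):                # xh = xB/x
        return ((1.0 + xh**2)*T(xB/xh)/xh - 2.0*T(xB)) / (1.0 - xh)
    integral, _ = quad(integrand, xB, 1.0)
    endpoint = (2.0*np.log(1.0 - xB) + 1.5) * T(xB)
    return alpha_s/(2.0*np.pi) * CF * (integral + endpoint)

xB, step = 0.1, 0.5                   # Euler step in ln(mu_f^2)
print(T(xB), T(xB) + step*dT_dlnmu2(xB))
```

A realistic solution would of course require the full kernel of Eq. (95), the coupled \(T_{gg}\) channel, and an initial condition such as Eq. (99).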
Notice that in Eq. (98) the quark-gluon correlation function \(T_{qg}\) is coupled to the gluon-gluon correlation function. In order to solve the evolution equation, one in principle also needs an evolution equation for the gluon-gluon correlation function \(T_{gg}\). However, since we deal with the SIDIS process, in which only \(T_{qg}\) enters at LO as in Eq. (23), our NLO calculation cannot give the evolution equation for \(T_{gg}\). In this case, we have to go beyond NLO to NNLO, or we could study transverse momentum broadening for a process where \(T_{gg}\) enters at LO, e.g., scalar particle production in the gluon-gluon fusion channel in proton-nucleus (\(p+A\)) collisions [63]. In this way we will be able to derive the complete set of evolution equations which couple the \(T_{qg}\) and \(T_{gg}\) correlation functions.
Under the approximation of a large and loosely bound nucleus where one can neglect the momentum and spatial correlations of two nucleons [59], we can express the quark-gluon correlation function \(T_{qg}(x_{B},0,0,\mu_{f}^{2})\) in a factorized form [64],
\[T_{qg}(x_{B},0,0,\mu_{f}^{2})\approx\frac{N_{c}}{4\pi^{2}\alpha_ {\rm s}}f_{q/A}(x_{B},\mu_{f}^{2})\int dy^{-}\hat{q}(\mu_{f}^{2},y^{-}),\] (99)
where \(f_{q/A}(x_{B},\mu_{f}^{2})\) is the standard quark distribution function inside a nucleus and \(\hat{q}(\mu_{f}^{2},y^{-})\) is the jet transport parameter that describes the average transverse momentum transfer squared per unit distance (or per mean free path) in the medium. Thus from Eq. (98) one could in principle determine the factorization scale \(\mu_{f}^{2}\)-dependence of \(\hat{q}(\mu_{f}^{2},y^{-})\). Such QCD evolution of \(\hat{q}(\mu_{f}^{2},y^{-})\) will have important consequences for quantitative studies of jet quenching at NLO. A preliminary study of transverse momentum broadening based on such a formalism, with the \(\hat{q}(\mu_{f}^{2},y^{-})\) evolution incorporated, shows good agreement with experimental data from both \(e+A\) and \(p+A\) collisions [58].
After \(\overline{\rm MS}\) subtraction of the collinear divergences into the fragmentation function \(D_{h/q}(z,\mu_{f}^{2})\) and the twist-4 quark-gluon correlation function \(T_{qg}(x,0,0,\mu_{f}^{2})\), we can express the \(\ell_{hT}^{2}\)-weighted differential cross section up to NLO at twist-4 as,
\[\frac{d\langle\ell_{hT}^{2}\sigma^{D}\rangle}{d{\cal PS}}= \sigma_{h}\sum_{q}e_{q}^{2}\int_{x_{B}}^{1}\frac{dx}{x}T_{qg}(x,0 ,0,\mu_{f}^{2})\int_{z_{h}}^{1}\frac{dz}{z}D_{h/q}(z,\mu_{f}^{2})\delta(1-{ \hat{x}})\delta(1-{\hat{z}})\]
\[+\sigma_{h}\frac{\alpha_{s}}{2\pi}\sum_{q}e_{q}^{2}\int_{z_{h}}^{ 1}\frac{dz}{z}D_{h/q}(z,\mu_{f}^{2})\int_{x_{B}}^{1}\frac{dx}{x}\bigg{\{}\ln \left(\frac{Q^{2}}{\mu_{f}^{2}}\right)\Big{[}\delta(1-\hat{x})P_{qq}(\hat{z})T _{qg}(x,0,0,\mu_{f}^{2})\]
\[+\delta(1-\hat{z})\big{(}{\mathcal{P}}_{qg\to qg}\otimes T_{qg}+P _{qg}(\hat{x})T_{gg}(x,0,0,\mu_{f}^{2})\big{)}\Big{]}\]
\[+H_{qg}^{C-R}\otimes T_{qg}+H_{qg}^{C-V}\otimes T_{qg}-H_{qg}^{A} \otimes T_{qg}^{A}+H_{gg}^{C}\otimes T_{gg}\bigg{\}},\] (100)
which includes the finite NLO corrections to the hard-part coefficient function. Just like the NLO correction to the differential cross section at leading twist in Eq. (9), the finite hard-part coefficient at NLO also depends on the factorization scale; this dependence reduces the overall factorization scale dependence of the cross section when combined with the scale dependence of the fragmentation functions and of the twist-4 quark-gluon correlation function as determined by the evolution equations. Substituting the \(\ell_{hT}^{2}\)-weighted cross section in Eq. (100) and the leading-twist differential cross section in Eq. (9) into Eq. (15), we are able to compute the transverse momentum broadening in SIDIS at NLO, which is the main result of this paper.
Our results in this paper verify for the first time the factorization of the \(\ell_{hT}^{2}\)-weighted differential cross section at twist-4 at NLO. The collinear divergences associated with the quark fragmentation function and the twist-4 quark-gluon correlation function are factorized, and one is left only with finite hard coefficient functions, which also depend on the factorization scale. For a more complete NLO calculation one should also include contributions from double quark scattering [65] and from hadron production via gluon fragmentation; these are left for future publications.
## IV Summary
We have calculated the NLO pQCD corrections to the nuclear transverse momentum broadening in semi-inclusive hadron production in deep inelastic \(e+A\) collisions. Specifically, we have demonstrated in detail how to evaluate the transverse-momentum-weighted differential cross section at twist-4 at NLO. By including contributions from quark-gluon and gluon-gluon double scatterings, as well as the interferences between single and triple scatterings, we have shown explicitly that the soft divergences cancel between real and virtual corrections, and that the remaining collinear divergences can be absorbed into the redefinition (renormalization) of the final-state fragmentation function and of the initial-state twist-4 quark-gluon correlation function, which enables us to identify a DGLAP-type evolution equation for the twist-4 quark-gluon correlation function. After the subtraction of the collinear divergences, the transverse-momentum-weighted cross section can be factorized as a convolution of twist-4 nuclear parton correlation functions, the usual twist-2 fragmentation function, and hard parts which are finite and free of any divergence. With the NLO results for the inclusive cross section and the transverse-momentum-weighted differential cross section in hand, our result can be applied to phenomenological studies of transverse momentum broadening at HERMES, at Jefferson Lab experiments, and at future EIC facilities. Such detailed phenomenological studies will be carried out in a forthcoming paper [58].
We want to emphasize that it is important to perform similar studies for other processes. For example, through an NLO calculation of transverse momentum broadening in Drell-Yan lepton pair production in \(p+A\) collisions, we can verify the collinear factorization at twist-4 and demonstrate the universality of the twist-4 quark-gluon correlation function; this will be published in a separate paper. On the other hand, the extension to scalar particle production through the gluon-gluon fusion channel in \(p+A\) collisions will enable us to study the evolution equation for the twist-4 gluon-gluon correlation function, from which we can derive a complete set of evolution equations for twist-4 parton correlation functions.
## Acknowledgments
This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, under Contract No. DE-AC52-06NA25396 and No. DE-AC02-05CH11231, and within the framework of the JET Collaboration, the National Science Foundation of China under Grants No. 11221504 and No. 10825523, China Ministry of Science and Technology under Grant No. 2014DFG02050, and the Major State Basic Research Development Program in China (No. 2014CB845404).
## Appendix A Complete list of finite terms
In this appendix, we list the finite terms in the leading-twist differential cross section and in the twist-4 \(\ell_{hT}^{2}\)-weighted differential cross section at NLO. The finite terms \(H^{NLO}_{T2-qq}\), \(H^{NLO}_{T2-qg}\), and \(H^{NLO}_{T2-gq}\) for the leading-twist differential cross section at NLO in Eq. (9) can be written as:
\[H^{NLO}_{T2-qq}= C_{F}\bigg{\{}-8\delta(1-\hat{x})\delta(1-\hat{z})+\frac{1+(1- \hat{x}-\hat{z})^{2}}{(1-\hat{x})_{+}(1-\hat{z})_{+}}+\delta(1-\hat{z})\left[( 1+\hat{x}^{2})\left(\frac{\ln(1-\hat{x})}{1-\hat{x}}\right)_{+}-\frac{1+\hat{x }^{2}}{1-\hat{x}}\ln\hat{x}+(1-\hat{x})\right]\]
\[+\delta(1-\hat{x})\left[(1+\hat{z}^{2})\left(\frac{\ln(1-\hat{z}) }{1-\hat{z}}\right)_{+}+\frac{1+\hat{z}^{2}}{1-\hat{z}}\ln\hat{z}+(1-\hat{z}) \right]+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^{2}}2\hat{x}\hat{z}\bigg{\}},\] (101)
\[H^{NLO}_{T2-qg}= \ln\Big{[}\hat{z}(1-\hat{z})\Big{]}P_{gq}(\hat{z})\delta(1-\hat{x })+C_{F}\left[\frac{1+(\hat{x}-\hat{z})^{2}}{\hat{z}(1-\hat{x})_{+}}+\hat{z} \delta(1-\hat{x})+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^{2}}2\hat{x}(1-\hat{z}) \right],\] (102)
\[H^{NLO}_{T2-gq}= \ln\frac{1-\hat{x}}{\hat{x}}P_{qg}(\hat{x})\delta(1-\hat{z})+T_{R }\bigg{[}\frac{2\hat{x}^{2}-2\hat{x}+2\hat{z}^{2}-2\hat{z}+1}{\hat{z}(1-\hat{z })_{+}}+2\hat{x}(1-\hat{x})\delta(1-\hat{z})+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y) ^{2}}\]
\[\times 4\hat{x}(1-\hat{x})\bigg{]}.\] (103)
For the twist-4 weighted differential cross section, besides the finite term for the virtual diagrams given in Eq. (67), there are 9 finite terms. For the central-cut diagrams, we have four finite terms: \(H_{qg-C}^{ss}\otimes T_{qg}\) associated with soft-soft double scattering, \(H_{qg-C}^{hh}\otimes T_{qg}\) associated with hard-hard double scattering, \(H_{qg-C}^{sh}\otimes T_{qg}\) associated with soft-hard double scattering, and \(H_{qg-C}^{hs}\otimes T_{qg}\) associated with hard-soft double scattering. For the asymmetric-cut diagrams, we also have four finite terms: \(H_{qg-L}^{sh}\otimes T_{qg}^{L}\) (\(H_{qg-R}^{sh}\otimes T_{qg}^{R}\)) associated with soft-hard rescattering in the left-cut (right-cut) diagrams, and \(H_{qg-L}^{hs}\otimes T_{qg}^{L}\) (\(H_{qg-R}^{hs}\otimes T_{qg}^{R}\)) associated with hard-soft rescattering in the left-cut (right-cut) diagrams. In addition, we have the finite term for gluon-gluon double scattering, \(H_{gg}^{C}\otimes T_{gg}\). They are given by the following expressions:
\[H_{qg-C}^{ss}\otimes T_{qg}= x^{2}\frac{d^{2}}{dx^{2}}T_{qg}(x,0,0)C_{F}\bigg{\{}\frac{(1- \hat{x})(\hat{x}^{2}+2\hat{x}\hat{z}-2\hat{x}+\hat{z}^{2}-2\hat{z}+2)}{\hat{z} ^{2}(1-\hat{z})_{+}}-\delta(1-\hat{z})(1-\hat{x})\left[2\hat{x}+\ln\frac{\hat{ x}}{1-\hat{x}}(1+\hat{x}^{2})\right]\]
\[+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^ {2}}2\hat{z}\hat{x}(1-\hat{x})^{2}\bigg{\}}\]
\[-x\frac{d}{dx}T_{qg}(x,0,0)C_{F}\bigg{\{}\frac{-4\hat{x}^{3}+\hat {x}^{2}(9-4\hat{z})-6\hat{x}(1-\hat{z})+(\hat{z}-2)\hat{z}+2}{\hat{z}^{2}(1- \hat{z})_{+}}+\delta(1-\hat{z})\bigg{[}(3\hat{x}^{2}-6\hat{x}-1)+\]
\[\ln\frac{\hat{x}}{1-\hat{x}}(1-\hat {x})(4\hat{x}^{3}-5\hat{x}^{2}-1)\bigg{]}+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^{2 }}2\hat{z}\hat{x}(1-\hat{x})(3-4\hat{x})\bigg{\}}\]
\[+T_{qg}(x,0,0)C_{F}\bigg{\{}\frac{2\hat{x}\hat{z}(2\hat{x}^{2}-5 \hat{x}+4)+\hat{x}^{2}(6\hat{x}^{2}-18\hat{x}+19)-8\hat{x}+(1-\hat{z})^{2}+1}{ \hat{z}^{2}(1-\hat{x})_{+}(1-\hat{z})_{+}}+4\delta(1-\hat{x})\delta(1-\hat{z})\]
\[+\delta(1-\hat{z})\left[\left(\frac{\ln(1 -\hat{x})}{1-\hat{x}}\right)_{+}-\frac{\ln\hat{x}}{1-\hat{x}}\right]\left[1+ \hat{x}^{2}(6\hat{x}^{2}-14\hat{x}+9)\right]-\delta(1-\hat{z})\]
\[\times\frac{2\hat{x}^{3}-7\hat{x}^{2}+8 \hat{x}+1}{(1-\hat{x})_{+}}+\delta(1-\hat{x})\left[\left(\frac{\ln(1-\hat{z})} {1-\hat{z}}\right)_{+}+\frac{\ln\hat{z}}{1-\hat{z}}\right]\frac{1+\hat{z}^{2}} {\hat{z}^{2}}\]
\[-\delta(1-\hat{x})\frac{(1+\hat{z})^{2}}{ \hat{z}^{2}(1-\hat{z})_{+}}+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^{2}}4\hat{z}\hat {x}(1-\hat{x})(2-3\hat{x})\bigg{\}},\] (104)
\[H_{qg-C}^{hh}\otimes T_{qg}=\]
\[+\delta(1-\hat{x})\frac{(1-\hat{z}) \left[C_{F}/C_{A}(1-\hat{z})^{2}+\hat{z}\right]}{\hat{z}^{2}}+\frac{(1+\hat{z} ^{2})\left[C_{F}/C_{A}(1-\hat{z})^{2}+\hat{z}\right]}{\hat{z}^{2}(1-\hat{x})_{ +}(1-\hat{z})_{+}}\]
\[\left.\hskip 100.0pt+2\delta(1-\hat{z})\left[\left(\frac{\ln(1- \hat{x})}{1-\hat{x}}\right)_{+}-\frac{\ln\hat{x}}{1-\hat{x}}\right]\right\},\] (105)
\[H_{qg-C}^{sh}\otimes T_{qg}= x\frac{d}{dx}T_{qg}(x,0,x_{B}-x)\frac{C_{A}}{2}\Bigg{\{}\frac{(1 +\hat{x}\hat{z}^{2})[\hat{z}+2C_{F}/C_{A}(1-\hat{z})]}{\hat{z}^{2}(1-\hat{z})_ {+}}-\delta(1-\hat{z})\left(1+\hat{x}\right)\left(1+\ln\frac{\hat{x}}{1-\hat{x }}\right)\Bigg{\}}\]
\[+x\left.\frac{d}{dx_{2}}T_{qg}(x,x_{2},x_{B}-x)\right|_{x_{2}\to 0 }\frac{C_{A}}{2}\bigg{\{}\left(\frac{1}{\hat{z}^{2}}+\hat{x}\right)\left[\hat{ z}+2C_{F}/C_{A}(1-\hat{z})\right]\bigg{\}}\]
\[+T_{qg}(x,0,x_{B}-x)\frac{C_{A}}{2}\Bigg{\{}\frac{(\hat{x}^{2} \hat{z}^{2}-2\hat{x}\hat{z}^{2}-1)[\hat{z}+2C_{F}/C_{A}(1-\hat{z})]}{\hat{z}^{ 2}(1-\hat{x})_{+}(1-\hat{z})_{+}}-2\delta(1-\hat{x})\delta(1-\hat{z})\]
\[+\delta(1-\hat{x})\frac{(\hat{z}^ {3}-\hat{z}^{2}+3\hat{z}-1)+2C_{F}/C_{A}(1-\hat{z})(3+\hat{z}^{3})}{\hat{z}(1- \hat{z})_{+}}\Bigg{\}},\] (106)
\[H_{qg-C}^{hs}\otimes T_{qg}=\]
\[+x\left.\frac{d}{dx_{2}}T_{qg}(x_{B},x_{2},x-x_{B})\right|_{x_{2} \to x-x_{B}}\frac{C_{A}}{2}\left(\frac{1}{\hat{z}^{2}}+\hat{x}\right)\left[ \hat{z}+2C_{F}/C_{A}(1-\hat{z})\right]\]
\[+T_{qg}(x_{B},x-x_{B},x-x_{B})\frac{C_{A}}{2}\Bigg{\{}\frac{(\hat {x}^{2}\hat{z}^{2}-2\hat{x}\hat{z}^{2}-1)[\hat{z}+2C_{F}/C_{A}(1-\hat{z})]}{ \hat{z}^{2}(1-\hat{x})_{+}(1-\hat{z})_{+}}-2\delta(1-\hat{x})\delta(1-\hat{z})\]
\[+\delta(1-\hat{x})\frac {(\hat{z}^{3}-\hat{z}^{2}+3\hat{z}-1)+2C_{F}/C_{A}(1-\hat{z})(3+\hat{z}^{3})}{ \hat{z}(1-\hat{z})_{+}}\Bigg{\}},\] (107)
\[H_{qg-L}^{sh}\otimes T_{qg}^{L}= -x\left.\frac{d}{dx_{2}}T_{qg}^{L}(x,x_{2},x_{B}-x)\right|_{x_{2} \to 0}\frac{C_{A}}{2}\left(\frac{1}{\hat{z}^{2}}+\hat{x}\right)\left[\hat{z}+2 C_{F}/C_{A}(1-\hat{z})\right]\]
\[+T_{qg}^{L}(x,0,x_{B}-x)\frac{C_{A}}{2}\left\{2\delta(1-\hat{x}) \delta(1-\hat{z})-\delta(1-\hat{z})\frac{1+\hat{x}}{(1-\hat{x})_{+}}\right.\]
\[\left.-\delta(1-\hat{x})\frac{1+ \hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}\left[2C_{F}/C_{A}(1-\hat{z})^{2}-( \hat{z}-2)\hat{z}\right]\right\},\] (108)
\[H_{qg-L}^{hs}\otimes T_{qg}^{L}= T_{qg}^{L}(x,x_{B}-x,x_{B}-x)\frac{C_{A}}{2}\left\{-2\delta(1- \hat{x})\delta(1-\hat{z})+\delta(1-\hat{z})\frac{1+\hat{x}}{(1-\hat{x})_{+}}\right.\]
\[\left.+\delta(1-\hat{x})\frac{1+ \hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}\left[2C_{F}/C_{A}(1-\hat{z})^{2}-(2 \hat{z}-1)\right]\right\},\] (109)
\[H_{qg-R}^{hs}\otimes T_{qg}^{R}=\]
\[+T_{qg}^{R}(x_{B},x-x_{B},x-x_{B})\frac{C_{A}}{2}\left\{2\delta(1 -\hat{x})\delta(1-\hat{z})-\delta(1-\hat{z})\frac{1+\hat{x}}{(1-\hat{x})_{+}}\right.\]
\[\left.-\delta(1-\hat{x})\frac{1+ \hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}\left[2C_{F}/C_{A}(1-\hat{z})^{2}-( \hat{z}-2)\hat{z}\right]\right\},\] (110)
\[H_{qg-R}^{sh}\otimes T_{qg}^{R}= T_{qg}^{R}(x_{B},0,x-x_{B})\frac{C_{A}}{2}\left\{-2\delta(1-\hat {x})\delta(1-\hat{z})+\delta(1-\hat{z})\frac{1+\hat{x}}{(1-\hat{x})_{+}}\right.\]
\[\left.+\delta(1-\hat{x})\frac{1+ \hat{z}^{2}}{\hat{z}^{2}(1-\hat{z})_{+}}\left[2C_{F}/C_{A}(1-\hat{z})^{2}-(2 \hat{z}-1)\right]\right\},\] (111)
\[H_{gg}^{C}\otimes T_{gg}= x^{2}\frac{d^{2}}{dx^{2}}T_{gg}(x,0,0)T_{R}\bigg{\{}\frac{(1- \hat{x})^{2}(2\hat{x}^{2}-2\hat{x}+2\hat{z}^{2}-2\hat{z}+1)}{\hat{z}(1-\hat{z} )_{+}}-\delta(1-\hat{z})\ln\frac{\hat{x}}{1-\hat{x}}(1-\hat{x})^{2}(2\hat{x}^{ 2}-2\hat{x}+1)\]
\[-\delta(1-\hat{z})(2\hat{x}^{2}-3 \hat{x}+1)^{2}+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^{2}}4\hat{x}(1-\hat{x})^{3} \bigg{\}}\]
\[-x\frac{d}{dx}T_{gg}(x,0,0)T_{R}\bigg{[}\frac{(1-\hat{x})(1-2\hat {x})(6\hat{x}^{2}-6\hat{x}+2\hat{z}^{2}-2\hat{z}+1)}{\hat{z}(1-\hat{z})_{+}}- \delta(1-\hat{z})2(1-\hat{x})^{2}(12\hat{x}^{2}-7\hat{x}+1)\]
\[-\delta(1-\hat{z})\ln\frac{\hat{x}}{1 -\hat{x}}(1-\hat{x})(1-2\hat{x})(6\hat{x}^{2}-6\hat{x}+1)+\frac{1+4(1-y)+(1-y) ^{2}}{1+(1-y)^{2}}\]
\[\times 12\hat{x}(2\hat{x}-1)(1-\hat{x })^{2}\bigg{]}\]
\[-T_{gg}(x,0,0)T_{R}\bigg{\{}\frac{(1-\hat{x})\left[24\hat{x}^{3}- 30\hat{x}^{2}+2\hat{x}(2\hat{z}^{2}-2\hat{z}+5)-2\hat{z}^{2}+2\hat{z}-1\right] }{\hat{z}(1-\hat{z})_{+}}+\delta(1-\hat{z})\ln\frac{\hat{x}}{1-\hat{x}}(1-\hat {x})\]
\[\times(1-4\hat{x})(6\hat{x}^{2}-6\hat{x}+ 1)-\delta(1-\hat{z})2(1-\hat{x})(24\hat{x}^{3}-33\hat{x}^{2}+11\hat{x}-1)\]
\[+\frac{1+4(1-y)+(1-y)^{2}}{1+(1-y)^{2}}4 \hat{x}(1-\hat{x})(12\hat{x}^{2}-15\hat{x}+4)\bigg{\}}.\] (112)
## References
* (1) M. Gyulassy, I. Vitev, X. -N. Wang and B. -W. Zhang, In *Hwa, R.C. (ed.) et al.: Quark gluon plasma* 123-191.
* (2) N. Armesto, N. Borghini, S. Jeon, U. A. Wiedemann, S. Abreu, V. Akkelin, J. Alam and J. L. Albacete _et al._, J. Phys. G **35**, 054001 (2008).
* (3) C. A. Salgado, J. Alvarez-Muniz, F. Arleo, N. Armesto, M. Botje, M. Cacciari, J. Campbell and C. Carli _et al._, J. Phys. G **39**, 015010 (2012).
* (4) J. L. Albacete, N. Armesto, R. Baier, G. G. Barnafoldi, J. Barrette, S. De, W. -T. Deng and A. Dumitru _et al._, Int. J. Mod. Phys. E **22**, 1330007 (2013).
* (5) I. Vitev and M. Gyulassy, Phys. Rev. Lett. **89**, 252301 (2002).
* (6) M. Gyulassy and X. -N. Wang, Nucl. Phys. B **420**, 583 (1994).
* (7) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne and D. Schiff, Nucl. Phys. B **484**, 265 (1997).
* (8) M. Gyulassy, P. Levai and I. Vitev, Nucl. Phys. B **594**, 371 (2001).
* (9) B. G. Zakharov, JETP Lett. **65**, 615 (1997).
* (10) X. -N. Wang and X. -F. Guo, Nucl. Phys. A **696**, 788 (2001); X. -F. Guo and X. -N. Wang, Phys. Rev. Lett. **85**, 3591 (2000).
* (11) Z. -B. Kang, I. Vitev and H. Xing, Phys. Lett. B **718**, 482 (2012).
* (12) A. Airapetian _et al._ [HERMES Collaboration], Nucl. Phys. B **780**, 1 (2007).
* (13) A. Airapetian _et al._ [HERMES Collaboration], Phys. Lett. B **684**, 114 (2010).
* (14) W. K. Brooks, S. Strauch and K. Tsushima, J. Phys. Conf. Ser. **299**, 012011 (2011).
* (15) A. Accardi, F. Arleo, W. K. Brooks, D. D’Enterria and V. Muccifora, Riv. Nuovo Cim. **32**, 439 (2010).
* (16) P. L. McGaughey, J. M. Moss and J. C. Peng, Ann. Rev. Nucl. Part. Sci. **49**, 217 (1999); J. -C. Peng, AIP Conf. Proc. **494**, 503 (1999); M. B. Johnson, B. Z. Kopeliovich, M. J. Leitch, P. L. McGaughey, J. M. Moss, I. K. Potashnikova and I. Schmidt, Phys. Rev. C **75**, 035206 (2007); M. J. Leitch, Eur. Phys. J. C **43**, 157 (2005).
* (17) See, e.g., B. Muller, J. Schukraft and B. Wyslouch, Ann. Rev. Nucl. Part. Sci. **62**, 361 (2012); and references therein.
* (18) See, e.g., A. Majumder and M. Van Leeuwen, Prog. Part. Nucl. Phys. A **66**, 41 (2011); and references therein.
* (19) A. Accardi, J. L. Albacete, M. Anselmino, N. Armesto, E. C. Aschenauer, A. Bacchetta, D. Boer and W. Brooks _et al._, arXiv:1212.1701 [nucl-ex]; D. Boer, M. Diehl, R. Milner, R. Venugopalan, W. Vogelsang, D. Kaplan, H. Montgomery and S. Vigdor _et al._, arXiv:1108.1713 [nucl-th].
* (20) T. Liou and A. H. Mueller, Phys. Rev. D **89**, 074026 (2014).
* (21) B. Wu, arXiv:1408.5459 [hep-ph].
* (22) J. Huang, Z. -B. Kang and I. Vitev, Phys. Lett. B **726**, 251 (2013).
* (23) Z. B. Kang, R. Lashof-Regas, G. Ovanesyan, P. Saad and I. Vitev, arXiv:1405.2612 [hep-ph].
* (24) A. H. Mueller and S. Munier, Nucl. Phys. A **893**, 43 (2012); T. Liou, A. H. Mueller and B. Wu, Nucl. Phys. A **916**, 102 (2013).
* (25) J. -P. Blaizot and Y. Mehtar-Tani, arXiv:1403.2323 [hep-ph].
* (26) E. Iancu, arXiv:1403.1996 [hep-ph].
* (27) Y. Mehtar-Tani, C. A. Salgado and K. Tywoniuk, Phys. Rev. Lett. **106**, 122002 (2011).
* (28) J. -P. Blaizot, E. Iancu and Y. Mehtar-Tani, Phys. Rev. Lett. **111**, 052001 (2013).
* (29) M. Fickinger, G. Ovanesyan and I. Vitev, JHEP **1307**, 059 (2013).
* (30) K. M. Burke, A. Buzzatti, N. Chang, C. Gale, M. Gyulassy, U. Heinz, S. Jeon and A. Majumder _et al._, Phys. Rev. C **90**, 014909 (2014).
* (31) M. Luo, J. -W. Qiu and G. F. Sterman, Phys. Lett. B **279**, 377 (1992); Phys. Rev. D **50**, 1951 (1994).
* (32) J. -W. Qiu and G. F. Sterman, Nucl. Phys. B **353**, 137 (1991).
* (33) J. C. Collins, D. E. Soper and G. F. Sterman, Adv. Ser. Direct. High Energy Phys. **5**, 1 (1988).
* (34) M. Luo, J. -W. Qiu and G. F. Sterman, Phys. Rev. D **49**, 4493 (1994).
* (35) X. -F. Guo, Phys. Rev. D **58**, 114033 (1998).
* (36) X. Guo and J. -W. Qiu, Phys. Rev. D **61**, 096003 (2000).
* (37) A. Majumder and B. Muller, Phys. Rev. C **77**, 054903 (2008).
* (38) R. J. Fries, Phys. Rev. D **68**, 074013 (2003).
* (39) Z. -B. Kang and J. -W. Qiu, Phys. Rev. D **77**, 114027 (2008).
* (40) Z. -B. Kang, I. Vitev and H. Xing, Phys. Rev. D **85**, 054024 (2012).
* (41) H. Xing, Z. -B. Kang, I. Vitev and E. Wang, Phys. Rev. D **86**, 094010 (2012).
* (42) N. Armesto, B. Cole, C. Gale, W. A. Horowitz, P. Jacobs, S. Jeon, M. van Leeuwen and A. Majumder _et al._, Phys. Rev. C **86**, 064904 (2012).
* (43) Z. -B. Kang, E. Wang, X. -N. Wang and H. Xing, Phys. Rev. Lett. **112**, 102001 (2014); H. Xing, Z. B. Kang, E. Wang and X. N. Wang, arXiv:1407.8506 [hep-ph].
* (44) R. -B. Meng, F. L. Olness and D. E. Soper, Nucl. Phys. B **371**, 79 (1992); Z. -B. Kang and J. -W. Qiu, Phys. Rev. D **78**, 034005 (2008).
* (45) D. Graudenz, Nucl. Phys. B **432**, 351 (1994); A. Daleo, D. de Florian and R. Sassot, Phys. Rev. D **71**, 034013 (2005).
* (46) G. Altarelli, R. K. Ellis and G. Martinelli, Nucl. Phys. B **157**, 461 (1979).
* (47) Z. -B. Kang, I. Vitev and H. Xing, Phys. Rev. D **87**, no. 3, 034024 (2013); Int. J. Mod. Phys. Conf. Ser. **25**, 1460019 (2014).
* (48) D. P. Anderle, F. Ringer and W. Vogelsang, Phys. Rev. D **87**, no. 3, 034014 (2013).
* (49) J. -W. Qiu and G. F. Sterman, Int. J. Mod. Phys. E **12**, 149 (2003).
* (50) Z. T. Liang and X. N. Wang, Phys. Rev. D **75**, 094002 (2007); J. H. Gao, Z. t. Liang and X. N. Wang, Phys. Rev. C **81**, 065211 (2010); Y. H. Song, J. H. Gao, Z. t. Liang and X. N. Wang, Phys. Rev. D **83**, 054010 (2011).
* (51) X. Guo, Phys. Rev. D **58**, 036001 (1998).
* (52) B. -W. Zhang and X. -N. Wang, Nucl. Phys. A **720**, 429 (2003); B. -W. Zhang, E. Wang and X. -N. Wang, Phys. Rev. Lett. **93**, 072301 (2004); Nucl. Phys. A **757**, 493 (2005).
* (53) Z. -B. Kang, I. Vitev and H. Xing, Phys. Rev. D **88**, 054010 (2013).
* (54) This will not necessarily hold true for calculations involving transverse momentum dependent parton distribution functions, in which the soft divergences might not cancel between real and virtual corrections, see, e.g., Refs. [55; 56].
* (55) Z. B. Kang, I. Vitev and H. Xing, Phys. Rev. Lett. **113**, 062002 (2014).
* (56) G. A. Chirilli, B. -W. Xiao and F. Yuan, Phys. Rev. D **86**, 054005 (2012).
* (57) W. Vogelsang and F. Yuan, Phys. Rev. D **79**, 094010 (2009).
* (58) Z. -B. Kang, E. Wang, X. -N. Wang and H. Xing, in preparation.
* (59) J. Osborne and X. -N. Wang, Nucl. Phys. A **710**, 281 (2002).
* (60) Z. -B. Kang and J. -W. Qiu, Phys. Rev. D **79**, 016003 (2009).
* (61) H. Xing, Y. Guo, E. Wang and X. -N. Wang, Nucl. Phys. A **879**, 77 (2012); Nucl. Phys. A **910-911**, 442 (2013).
* (62) L. D. Landau and I. J. Pomeranchuk, Dokl. Akad. Nauk SSSR **92**, 92 (1953); A. B. Migdal, Phys. Rev. **103**, 1811 (1956).
* (63) Z. -B. Kang, E. Wang, X. -N. Wang and H. Xing, in preparation.
* (64) J. Casalderrey-Solana and X. -N. Wang, Phys. Rev. C **77**, 024902 (2008).
* (65) A. Schafer, X. -N. Wang and B. -W. Zhang, Nucl. Phys. A **793**, 128 (2007).
|
1504.05655 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 145450,
"num_imgs": 1,
"llama3_tokens_count": 36373
} | [
"content_image/1504.05655/x1.png"
] | # Online Social Network Analysis: A Survey
of Research Applications in Computer Science
David Burth Kurka
d.kurka@ic.ac.uk
Alan Godoy
godoy@dca.fee.unicamp.br
Fernando J. Von Zuben
vonzuben@dca.fee.unicamp.br University of Campinas, Brazil
###### Abstract
The emergence and popularization of online social networks suddenly made available a large amount of data on social organization, interaction and human behavior. All this information opens new perspectives and challenges for the study of social systems, and is of interest to many fields. Although most online social networks are recent (less than fifteen years old), a vast number of scientific papers has already been published on this topic, dealing with a broad range of analytical methods and applications. This work describes how computational researchers have approached this subject and the methods used to analyze such systems. Based on a wide though non-exhaustive review of the literature, a taxonomy is proposed to classify and describe the different categories of research. Each research category is described and the main works, discoveries and perspectives are highlighted.
**Keywords:**_Online Social Networks, Survey, Computational Research, Machine Learning, Complex Systems_
## 1 Introduction
One of the most revolutionary aspects of the Internet is, beyond the possibility of connecting computers from the entire world, the power to connect people and cultures. More and more, the Internet is used for the development of online social networks (OSNs) – an adaptation of social organizations to the “virtual world”. Currently, OSNs such as _Twitter_¹, _Google+_² and _Facebook_³ have hundreds of millions of users (Ajmera, 2014). Furthermore, the average browsing time inside those services is increasing (Benevenuto et al., 22) and many websites feature some sort of integration with social networking services. Although the effects of such services on personal interactions, cultural and living standards, education and politics are visible, understanding the whole extent of the influence and impact of those services is a challenging task.
The study of social networks is not something new. Since the emergence of the first human societies, social networks have been there forging individual and collective behavior. In the academia, research on social networks can be traced to the first decades of the twentieth century (Rice, 1927), while probably the most influential early work on social network analysis was the seminal paper “Contacts and Influence” (de Sola Pool and Kochen, 1978), written in the 1950’s⁴.
In recent years, however, with the popularization of OSNs, this research subject gained new momentum as new possibilities of study have arisen and plenty of data on social relations and interactions have become available. Even though the most popular OSNs have existed for slightly more than ten years (Facebook was founded in 2004, Twitter in 2006 and Myspace⁵ in 2003), the volume of scientific work having them as subject is considerable. Finding order and sense among all the work produced is becoming a huge task, especially for new researchers, as the amount of produced material accumulates.
With this in mind, this work aims to present an introductory overview of research in online social network analysis, mapping the main areas of research and their perspectives. A comprehensive approach is taken, prioritizing the diversity of applications, but endeavouring to select relevant works and to analyze their actual contributions. Also, although many disciplines have been interested in this topic (it is possible to find related works in psychology, sociology, politics, economics, biology and philosophy, to name a few), the present work will focus predominantly on computational approaches.
This work is structured as follows: in section 2 the main reasons and motivations for OSN research are discussed; in section 3 a proposal for a taxonomy is presented and sections 4, 5 and 6, following the proposed nomenclature, detail the main references and findings for each topic. Finally, in section 7 we conclude by presenting general remarks regarding the current stage of the research and a brief analysis of future perspectives.
## 2 Online social networks as object of study
In this section, we make a brief introduction to the research about online social networks, discussing the reasons why this area is getting a very strong momentum, the kind of data being explored in the field and the computational tools commonly used by researchers to analyze social networks data.
### Why should anyone research OSNs?
The attention given by the media and general public to OSNs can be a good motivation to justify the research in this field. However, from a computational perspective, OSNs present some particularities that must be taken into account, in order to understand researchers interests. The main reasons are listed below:
**Data availability:** every day, a huge amount of information travels through OSNs and much of it is freely available to researchers⁶. The current abundance of data has no precedent in the study of social systems and serves as a basis for computational analysis and scientific work. Due to its large scale, social data can fit in the context of _big data_ research.
**Multiple authorship:** unlike other corpora, the textual content produced in OSNs has different authorial sources. This enhances the information content and diversity of the data collected, presenting various styles, forms, contexts and expression strategies. Thereby, OSNs can be a rich repository of text for natural language processing applications.
**Agent interaction:** every individual user that composes such networks is an agent able to take decisions and interact with other users. These complex interaction dynamics produce effects that puzzle and interest several researchers.
**Temporal dynamics:** the fact that social data is generated continuously over time allows analyses that take into account spatio-temporal processes and transformations, such as topic evolution or collective mobilization.
**Instantaneity:** besides being generated continuously, social data is also provided at every moment, instantaneously. Thus, OSNs typically react in real time to both internal and external stimuli.
**Ubiquity:** following the technological development that increases people’s access to means of communication and information (such as smartphones and tablets), OSN content can be generated virtually anywhere and at any time. Also, data _geolocation_, a feature present in many OSNs, adds new possibilities to the analysis.
### Which networks are explored?
Two main characteristics can be taken into consideration before choosing a network to study: popularity (number of active users) and ease of data access.
Currently, the largest online social network is _Facebook_, with over one billion active users (Facebook, 2014). Although the use of data extracted from Facebook is present in literature (Dow and Friggeri, 2013; Kumar, 2012; Sun et al., 2009), the high proportion of protected content – generally due to users’ privacy settings – severely restricts the analysis using this OSN as source.
_Twitter_, a popular microblogging tool (Cheong and Ray, 2011), can be considered by far the most studied OSN (Rogers, 2013). The existence of a well-defined public interface for software developers⁷ to extract data from the network, the simplicity of its protocol⁸ and the public nature of most of its content can be a good explanation for that. However, since the beginning of the service, rate policies have been created to control the amount of data allowed to be collected by researchers and analysts. This had a direct impact on research, as initial works had access to all the content published in the network, while today’s works are usually limited by those policies (Rogers, 2013).
It is also worth mentioning the existence of Chinese counterpart services for Facebook and Twitter, like Sina-Weibo⁹, the largest one, with more than 500 million registered users (Ong, 2013). Although the usage of those services may differ due to cultural aspects (Asur et al., 2011; Gao et al., 2012), similar lines of inquiry can be developed in both the western and eastern equivalents (e.g.: Guo et al., 2011; Qu et al., 2011; Yang et al., 2012; Bao et al., 2013).
Other web services that integrate social networking features have been the focus of studies. Examples are media sites like YouTube¹⁰(Mislove et al., 2007) and Flickr¹¹(Cha et al., 2009; Kumar et al., 105), and news services as Digg¹²(Lerman and Hogg, 2010; Wu and Huberman, 2007). Research was also made with implicit social networks as email users (Tyler et al., 2005), university pages (Adamic and Adar, 2003, 2005) or blogs (Gruhl et al., 2004), even before the creation of social networking services.
### Computational tools
There are, currently, many computational tools that help in the task of analyzing large social networks, like graph-based databases (e.g.: AllegroGraph¹³ and Neo4J¹⁴), libraries to access online social network APIs (e.g.: Instagram Ruby Gem¹⁵ and Tweepy¹⁶), graph drawing software (e.g.: Graphviz¹⁷ and Tulip¹⁸) and tools for graph manipulation and statistical analysis of networks. The present section, however, will focus only on this last category, as it is more relevant to the kind of analysis conducted in the studies presented in this survey.
Even when considering only tools for graph analysis and manipulation, there are dozens of alternatives, ranging from general purpose graph libraries to advanced commercial tools aimed at specific businesses. For an extensive list of social network analysis software, we refer to Wikipedia’s entry on the subject¹⁹.
When considering applications commonly used in academic works, a division into two groups of tools is clear: (a) graphical user interface (GUI) based stand-alone software, focusing on ease of use by non-programmers, and (b) programming language libraries, which are usually more flexible and offer more functionalities.
In the first group, the most widely adopted tool is Gephi²⁰(Bastian et al., 2009), which is a Java-based open source software licensed under the Common Development and Distribution License (CDDL) and GNU General Public License (GPL). Gephi is able to deal with moderate/small graphs (up to 1 million nodes and edges, according to its website), allowing node/edge filtering. It features diverse algorithms to draw graphs, detect communities, generate random graphs and calculate network metrics, like centrality measures (e.g.: betweenness, closeness and PageRank), diameter, and clustering coefficient. It is also able to deal with temporal information and hierarchical graphs and has support for third-party plugins. In addition to the stand-alone software, Gephi is also available as a Java module through Gephi Toolkit²¹.
Another GUI-based software worth mentioning is Cytoscape²²(Saito et al., 2012), also open source and licensed under the GNU Lesser General Public License (LGPL). Like Gephi, Cytoscape is written in Java and offers graph drawing, community detection algorithms, network metrics and node/edge filtering, and it also supports plugins. Despite being intended for the analysis of biomolecular networks, Cytoscape can be used to analyze graphs from any kind of source, including social networks.
The most adopted and feature-rich libraries in the second group are NetworkX and igraph. Both libraries can handle millions of nodes and edges (Akhtar et al., 2013) and offer advanced algorithms for networks, as checking isomorphisms, searching for connected components, cliques, communities and \(k\)-cores, and calculating dominating and independent sets and minimum spanning trees.
NetworkX²³(Hagberg et al., 2008) is an open source project – under the Berkeley Software Distribution license (BSD) – sponsored by Los Alamos National Lab, which is in active development since 2002. Despite the recurrent addition of new functionalities, it is a very stable library, as it includes extensive unit-testing. NetworkX is fully implemented in Python and is interoperable with NumPy and SciPy, the language’s standard packages for advanced mathematics and scientific computation. It also has remarkable flexibility: nodes can be almost anything – texts, numbers, images and even other graphs – and graphs, nodes and edges can have attributes of any type. The library can deal not only with common graphs, but also with digraphs, multigraphs and dynamic graphs. Among the specific features of NetworkX are a particularly large set of graph generators and a number of special functions for bipartite graphs.
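To make the discussion concrete, the following minimal sketch shows the kind of NetworkX calls used throughout the surveyed works (degree distributions, clustering, connected components and PageRank). The toy friendship graph is made up; real studies would load crawled data instead.

```python
import networkx as nx

# Hypothetical toy friendship graph.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"), ("dave", "eve"),
])

print(nx.degree_histogram(G))               # degree distribution
print(nx.average_clustering(G))             # clustering coefficient
print(nx.number_connected_components(G))    # connected components
print(nx.pagerank(G))                       # PageRank centrality
```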
igraph²⁴(Csardi and Nepusz, 2006) is a performance-oriented graph library written in C with official interfaces for C, Python and R and a third-party binding for Ruby. If on the one hand it is not as flexible as NetworkX, on the other hand it can be even 10 times faster when performing some functions (Akhtar et al., 2013). Many advanced network analysis methods are available in igraph, including classical techniques from sociometry, like dyad and triad census and structural holes scores, and more recent methods, like motif estimation, decomposing a network into graphlets and different algorithms for community detection. As all other tools presented in this section, igraph is an open source project (it is licensed under the GNU GPL).
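As a comparable sketch for igraph’s Python interface, community detection and a clustering measure can be computed as follows; the example uses a classic toy network shipped with the library rather than OSN data.

```python
import igraph as ig

# Zachary's karate club, a small built-in social network.
g = ig.Graph.Famous("Zachary")

clusters = g.community_multilevel()     # Louvain-style community detection
print(len(clusters), "communities")
print(g.transitivity_undirected())      # global clustering coefficient
```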
Two more libraries worth citing are graph-tool²⁵ and NetworKit²⁶, open source frameworks intended to be much faster than mainstream alternatives by making intensive use of parallelism. Both libraries are implemented mostly in C++ and have Python APIs providing broad lists of functionalities, though not as comprehensive as NetworkX and igraph’s. graph-tool (Peixoto, 2014) is licensed under the GNU GPL and is developed since 2006. NetworKit (Staudt et al., 2014) is more recent: it was created in 2013 in the Karlsruhe Institute of Technology, in Germany. It is under the MIT license and is designed to be interoperable with NetworkX. Differently from other libraries, it aims at networks with billions of nodes and edges and is particularly well-suited for high-performance computing.
The libraries discussed here implement a vast range of graph functions. Some of these functions, however, are not available in all tools. We recommend that researchers in need of specific functionalities check the libraries’ documentation, available at their websites. All these libraries are under active development and are well documented. For more complete comparisons between network libraries, we refer to Combe et al. (2010); Akhtar et al. (2013); Staudt et al. (2014).
## 3 Categories of study
In order to simplify the presentation of the wide range of works devoted to the analysis of Online Social Networks, a categorisation of the areas of research is needed. Here we will propose a taxonomy that covers different aspects of this research, structuring all the surveyed works in three main groups: (a) structural analysis, (b) social data analysis and (c) social interaction analysis. Fig. 1 illustrates this structure, with its respective subdivisions.
<figure><img src="content_image/1504.05655/x1.png"><figcaption>Figure 1: Categories of study on Online Social Networks, from a computationalperspective.</figcaption></figure>
_Structural analysis_ is the earliest category of study, since it contemplates initial inquiries about the structure and functionality of social networking services (SNSs) as they were launched. Researchers were interested in simply knowing what those services are and why so many people were being attracted to them. Also, the huge structures that were being formed proved to be worth investigating and comparing to other known networks (such as biological and offline social networks). This area of research is still very active, despite its age.
_Social data analysis_ represents a second branch, in which researchers started to use and analyze what OSNs _produce_. This area exploits the huge amount of rich data produced by OSNs for all kinds of applications. Usually only the data produced by users is considered; the topology of users’ connections and other network features are of little importance.
Finally, _social interaction analysis_, deals with aspects related to the individuals using the SNSs. Using all the rich data provided by OSNs, such as users’ friendships and the record of social relationships, it is possible to observe how users interact on the network and have insights on aspects of human behavior. This category is intrinsically interdisciplinary, as its discoveries relate to other fields of research, such as psychology, sociology and even biology.
We are unaware of other works that propose a taxonomy for the computational study of OSNs in general. However, previous works were made specifically focusing on studies about Twitter. Memon and Alhajj (2010) and Cheong and Ray (2011) tracked papers produced from 2008 to 2010 and found categories very similar to the ones presented above. However, their general classification is based on only two main areas: user domain and message domain. Williams et al. (2013) systematically collected all the research papers since 2011 containing the word “Twitter”, and defined four main aspects: message, user, technology and concept. Message could be related to social data analysis, user to social interaction analysis and technology and concept to structural analysis. However, that work did not further deepen the classification in subcategories.
Another interesting perspective is the study conducted by Rogers (2013), which described the evolution of Twitter and how it has been attracting researchers. According to him, Twitter passed through three phases: Twitter I, when the service was used mainly to connect people, but contained mainly superficial conversations between users; Twitter II, a more mature network, able to promote and organize mobilizations; and Twitter III, a historical valuable big database used to understand society and the recent past.
Of course, we do not expect to achieve consensus with this taxonomy. Imposing categories to any study can be helpful for contextualisation, but can also be misleading and endowed with some degree of arbitrariness. Also, works can belong to more than one category and there can be some intersection between different areas of research. The aim of this survey, therefore, is to serve as an introductory overview of the current status of the field, supported by the proposed taxonomy.
## 4 Structural analysis
Under structural analysis are works that have the structure and operation of OSNs as objects of study. There are many reasons why researchers are interested in the study of a network: to understand how it is composed, to compare its structure to other known networks (especially offline social networks) or to create models of social organization.
Since the end of the last century, studies showed that many real networks have some non-trivial properties, such as small average distances between nodes²⁷(Watts and Strogatz, 1998) and number of connections per node following a power-law²⁸(Barabási, 1999), culminating in the rise of a new area of study named complex networks or network science (Bragin, 2010). Such networks can be found on many areas (Costa et al., 2007), from computer systems to protein interactions and, of course, in social networks. The creation of OSNs and the availability of data, thus, are leveraging this emergent study of complex attributes of OSNs.
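The two properties mentioned above can be illustrated on synthetic graphs; the sketch below uses NetworkX’s standard Watts–Strogatz and Barabási–Albert generators as stand-ins for crawled OSN data.

```python
import networkx as nx

# Small-world model: short average distances despite high clustering.
ws = nx.watts_strogatz_graph(n=2000, k=10, p=0.1)
print(nx.average_shortest_path_length(ws))
print(nx.average_clustering(ws))

# Preferential-attachment model: heavy-tailed (power-law-like) degrees.
ba = nx.barabasi_albert_graph(n=2000, m=3)
top_degrees = sorted((d for _, d in ba.degree()), reverse=True)[:10]
print(top_degrees)   # a few hubs with very high degree
```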
### Topology characterization
Analyzing the topology of a social network can reveal several interesting features about its components and how people organize themselves for different purposes. Extracting network connections from OSNs is much easier than in offline networks, as all the required data is already stored digitally, without the need for explicit knowledge extraction strategies.
Several SNSs had their networks explored and many statistical properties characterized, such as (to name a few):
* General OSN services – Facebook (Kumar, 2012), Orkut²⁹(Ahn et al., 2007; Mislove et al., 2007), Myspace (Ahn et al., 2007), Cyworld³⁰(Ahn et al., 2007; Chun et al., 2008);
* Media sharing services – YouTube, Flickr (Mislove et al., 2007);
* Blogging services – Twitter (Huberman et al., 2008; Kwak et al., 2010), LiveJournal³¹(Mislove et al., 2007);
* Message exchange services – MSN messenger (Leskovec and Horvitz, 2008);
* Location-based networks – Foursquare (Scellato et al., 2011).
In addition to these services, some studies also attempted to characterize the topology of social networks formed implicitly in sites like university web pages (Adamic and Adar, 2003) and email groups (Adamic and Adar, 2005; Tyler et al., 2005).
_What does the network structure reveal?_
One important property revealed by topology characterization is how similar OSNs are to other real networks previously studied. In agreement with what is observed in offline social networks, Mislove et al. (2007) verified the presence of power-law degree distributions and the small-world property in several OSNs. Kwak et al. (2010) discovered, however, that Twitter’s structure does not follow a power-law degree distribution, having an unusually high number of popular users with many followers³², therefore resembling a news network more than a social network.
Using data from MSN Messenger, Leskovec and Horvitz (2008) analyzed the mean distance between users, identifying small-world property in this network and also showing how people with similar interests (same age, language, location and opposite sex) tend to connect and keep frequent communication. Kumar (2012) discovered that 99.91% of Facebook users belong to the same large _connected component_³³ and that friends communities³⁴ can be stunningly dense, compared to the general sparse structure of the whole network. Also, they showed that common age and nationality are relevant to determine social connections.
Network characterization in services with no explicit network also leads to interesting discoveries. By characterizing the network formed by internal links connecting web pages from a university domain, Adamic and Adar (2003) showed possibilities of discovering communities and real-world connections among students. From networks built from email services, Tyler et al. (2005) were able to perceive hidden patterns of collaboration and leadership among users, identifying communities (formal and informal) and leadership roles within the communities.
_Many networks in one network_
An interesting fact is that an OSN may embed more than one network structure. Many SNSs explicitly register users’ relationships, resulting in a _friendship network_. However, from users’ interactions, an implicit _interaction network_ can also be formed, revealing which social connections are actually active and in use (generally a subgraph of the friendship network). Other possible implicit networks are _diffusion networks_, characterized by the course of a content in the network, and _interest networks_, defined by groups of people with similar interests.
By comparing the friendship network to the interaction network on Twitter, Huberman et al. (2008) showed that the latter is much smaller, but more adequate for describing and analyzing social events. Chun et al. (2008) showed how Cyworld’s interaction network can represent real networks more precisely, as its node degree distribution is closer to that of known social networks than the friendship network’s. Wilson et al. (2009) argued that the interaction network can offer a different perspective and different metrics for an OSN (like a larger network diameter and less connected “supernodes”), being suitable for applications like social spam detection and online fraud detection.
Smith et al. (2014) analyzed conversations on Twitter about different topics and identified, from how participants of a topic are connected, the formation of six distinct network structures according to the subject being discussed. These network structures describe different “spaces” of information exchange: from the engaged and intransigent crowds, to the fast content replicating and sharing broadcast networks.
### Use and functionality characterization
Since the rise of SNSs, researchers have been interested in understanding the functionality of those services and how their users could take advantage of them.
_Network formation_
While SNSs were still becoming popular, Backstrom et al. (2006) described how the OSN structure can impact new friendships and community formation. They showed that more densely connected communities are more likely to receive new members and that events, such as a change in the topics of interest of a group, tend to cause transformations in the network topology. Wilkinson (2008) presented a similar discussion, but focusing on networks of peer production services (Wikipedia³⁵, Digg, Bugzilla³⁶ and Essembly³⁷), showing how longer-standing individuals tend to receive new connections, concentrate contributions and remain longer in the network.
Java et al. (2007) described, from an introductory perspective, what Twitter is and the main uses of the service: talking about everyday subjects and finding information. They then showed how coherent communities arise from the aggregation of users with similar interests. Takhteyev et al. (2012) analyzed how users’ geographical distribution affects their links, uncovering a correlation between the existence of a connection between two users and the frequency of airline flights between the cities they live in.
_User profiles_
Network users can be categorised into different classes by their attributes and patterns of behavior. Krishnamurthy et al. (2008) analyzed profiles of almost 100,000 Twitter users and identified three different classes of users: _broadcasters_, with many more followers than followees (e.g.: celebrities); _acquaintances_, with reciprocity in their relationships (e.g.: casual users); and _miscreants_, who follow a much larger number of users than follow them back (e.g.: spammers or stalkers).
Wu et al. (2011) identified “elite” Twitter users (i.e., celebrities, famous bloggers, media and corporation accounts) and evaluated the impact of the content published by them, realizing that half of the URLs that circulate over the network are generated by 20,000 of those “elite” users. Association patterns among those special users were also analyzed, revealing that “elite” users of the same field (e.g.: celebrities or blogs) tend to interact among themselves.
Benevenuto et al. (22) were able to analyze and measure the online activity of users of four SNSs: Orkut, Myspace, hi5³⁸ and LinkedIn³⁹. They discovered that users spend on average 92% of their time on those services just browsing other users’ pages, without posting any content to the network.
_Conversation_
A notable feature of OSNs is the users’ ability to maintain conversations, enabling the organization of mobilization and the creation of enriched content. Kumar et al. (104) elaborated a detailed study of how conversations are created in diverse OSN contexts, finding patterns and particularities that enabled the creation of a simple mathematical model capable of describing the dynamics of the conversations.
Honeycutt and Herring (2009) analyzed how conversation dynamics can occur on Twitter, with users adapting its simple mechanism of message exchange to track and maintain active communication with each other. In the same line, Boyd et al. (2010) explored how _retweets_⁴⁰ can be used to create conversations and involve new users in existing conversations.
Discussing the impact of communication in OSNs, Bernstein et al. (2013) discovered, by analyzing a large amount of log data, the extent of diffusion of content published on Facebook (i.e., how many people read a message posted by a user). They showed that users usually underestimate the reach of their posts, expecting an audience of less than one third of the audience actually reached.
_Network deterioration_
Not only the growth, but also the decline in the use of SNSs has been studied. Kwak et al. (2011) examined details of the _unfriending_ (i.e., unfollowing) behavior on Twitter, showing how frequent it is, using both quantitative and qualitative data obtained through user interviews. Garcia et al. (2013) examined SNSs that suffered an intense decline in user activity (Friendster⁴¹ and LiveJournal), attempting to understand the impact of users’ desertion. The impact of “cascades of users leaving” on network resilience was studied in depth, and a metric was proposed to determine when it is or is not advantageous for users to join a network.
### Anomaly and fraud detection
Another important matter that can be explored through structural analysis is the investigation of the presence of anomalies and frauds within a network. Those incidents range from relatively harmless activities, such as using false accounts to create an artificial number of _likes_ on pages (Beutel et al., 2013; Jiang et al., 2014), to more serious incidents involving political manipulation (Ratkiewicz et al., 2011) or embezzlement (Pandit et al., 2007).
_Anomaly and fraud_
Analysis of OSN’s structure can reveal the presence of anomalies, indicating that users might be acting in suspicious ways.
Akoglu et al. (2010), observing different networks including email and blogs, examined the topology of sub-graphs formed by users’ _1-step_ neighborhood. The empirical analysis shows that some properties of those sub-graphs follow a power-law probability distribution, implying that users presenting sub-graphs with improbable values for those properties are considered anomalies and can be inspected. In the presented results, cases such as corrupt CEOs (emails network) or biased connections (blogs network) were detected using the algorithm.
Another interesting example was brought by Golbeck (2015), who showed that Benford’s law – which predicts the frequency distribution of digits in datasets – can also be used to detect anomalies in OSNs. It is shown that, in data collected from real SNS, properties such as user’s number of posts, number of friends and number of friends-of-friends tend to follow the law. Therefore, the identification of datasets where statistics have a different distribution, can indicate the presence of fraud or suspicious behavior.
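A minimal sketch of this Benford’s-law check, with made-up friend counts: the observed first-digit frequencies are compared with the expected logarithmic distribution (a real analysis would use a much larger sample and a goodness-of-fit test).

```python
import math
from collections import Counter

def benford_expected(digit: int) -> float:
    return math.log10(1 + 1 / digit)

def first_digit_frequencies(values):
    digits = [int(str(abs(v))[0]) for v in values if v != 0]
    counts = Counter(digits)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

friend_counts = [132, 27, 301, 88, 1450, 19, 240, 57, 9, 112]  # toy data
observed = first_digit_frequencies(friend_counts)
for d in range(1, 10):
    print(d, round(observed[d], 3), round(benford_expected(d), 3))
```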
A common and practical form of fraud in Facebook’s network is the use of automated processes to generate _likes_ on the service’s pages as a way of artificially promoting a cause, a business or an individual. In order to detect and avoid this situation, Beutel et al. (2013) proposed a method where a bipartite graph is formed connecting users to the pages they liked and registering the time those connections were made. Then, by analyzing patterns of groups of users who liked the same pages, they were able to detect anomalies and misbehavior.
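The sketch below is a deliberately crude approximation of that idea (not the authors’ algorithm, which also exploits the timing of the likes): accounts whose sets of liked pages are identical are flagged as a weak signal of coordinated behavior. All data is hypothetical.

```python
from collections import defaultdict

likes = {                       # hypothetical user -> liked pages
    "u1": {"pageA", "pageB", "pageC"},
    "u2": {"pageA", "pageB", "pageC"},
    "u3": {"pageA", "pageB", "pageC"},
    "u4": {"pageD"},
}

groups = defaultdict(list)
for user, pages in likes.items():
    groups[frozenset(pages)].append(user)   # group users by identical like-sets

for pages, users in groups.items():
    if len(users) > 2:
        print("suspicious group:", users, "liked", sorted(pages))
```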
A related problem occurs in some SNSs, where fake accounts are used to increase the number of followers of certain users. Ghosh et al. (2012) investigated this problem in Twitter, analyzing over 40,000 accounts suspended by misconduct. They noticed that, linked to the problematic existence of improper accounts in the service, there are also regular users who, in order to increase their social capital, agree to follow back any user who followed them, creating connections between regular and malicious accounts, hindering the detection of malfunctioning accounts.
Working on the same issue, Jiang et al. (2014) analyzed spatio-temporal properties of OSN sections and created measurements of their ‘synchronicity’ – how similar and coordinated the actions of the users in the network section are – and their ‘rarity’ – how the topology of the section compares to the whole network’s structure. The technique was tested on big datasets from Twitter and Sina-Weibo, showing positive results in detecting fraudulent users.
Finally, Jiang et al. (2015) summarised many of these techniques and proposed general axioms and metrics to quantify suspicious behavior in OSNs, presenting a new algorithm using these principles which showed improved performance.
_Spamming behavior_
Another type of fraud occurring in SNS is the presence of user accounts that deliberately send unwanted content (spam) to regular users, abusing the communication channels provided by the services.
This problem on Twitter was tackled by Benevenuto and Magno (2010), who identified users acting as spammers in messages related to topics that generated great mobilization. This was done with the use of machine learning techniques that considered not only the network’s structural properties, but also the textual content of messages. Hu et al. (94) proposed a related approach, discussing the benefits and challenges of using those features in classification tasks.
A similar problem was also investigated on YouTube. Benevenuto et al. (21) used properties extracted from the network, user accounts and posted videos to create a supervised classifier identifying three roles of users: spammers, promoters and legitimate users. O’Callaghan et al. (2012) worked on the identification of spammers in YouTube’s comments using an approach exclusively based on network analysis. For this, a network was built using real data, connecting users to videos whenever comments were present. The formed network’s structure presented repeated topology patterns (_motifs_) that, when categorised, led to the identification of typical structures created by spammer behavior and enabled the systematic identification of suspicious user accounts.
### Representation models
One of the challenges of OSN studies is to create models able to describe with success the structure, events and transformations the network goes through. Different models have been proposed addressing this issue. We discuss some of them below.
_Structure models_
When analyzing the structure of photo sharing OSNs (Flickr and Yahoo!360⁴²), Kumar et al. (105) detected patterns in the network representing different regions: _singletons_ (users without connections), _isolated communities_ (generally around a popular user) and a _giant component_ (users connected to many users). Then, a simple generative model was proposed, able to reproduce the network evolution and recreate the structural patterns observed empirically.
Xiang et al. (2010) worked on building a model able to represent the _intensity_ of social relationships. Instead of having a binary value, each edge between two users in a social graph is calculated as a function of the frequency of interaction among them.
_Spatio-temporal models_
Although graphs are suitable representations for analyzing spatial properties of an OSN, temporal aspects must also be considered in order to represent the transformation processes taking place in a network. Even though observing temporal aspects of OSNs can be challenging (especially due to the huge number of users involved in the processes and to data retrieval restrictions), they can be a valuable source of information.
The temporal evolution of a network was studied by Leskovec et al. (2005), who were able to make interesting empirical observations about the growth of several real networks. They noticed that, contrary to expectations, the addition of new nodes makes the network denser in terms of edges per node, and the average distance between nodes often decreases over time. From those observations, a graph generator model was proposed, able to produce more realistic networks.
Tang et al. (2010) proposed temporal models to describe network transformations, enabling the creation of new metrics, like temporal distance, i.e., the average time taken for an information published by a user to reach other users. Those metrics are complementary to other spatial metrics (such as geodesic distance) and seem to enable new perspectives of analysis of information diffusion processes or network formation.
## 5 Social data analysis
The focus of social data analysis is essentially the _content that is being produced by users_. The data produced in social networks are rich, diverse and abundant, which makes them a relevant source for data science. As will be seen in this section, most of the computational research that employs social data uses it in machine learning problems such as natural language processing (NLP), classification and prediction. In addition to the challenge of building robust algorithms for such purposes, researchers also have the challenge of building scalable computational solutions that can deal with the large amount of data available in those services.
### Sentiment analysis
The textual information produced every day in SNSs like Twitter forms a huge corpus (Pak and Paroubek, 2010), on which natural language processing techniques, such as sentiment analysis, can be used. Applied to OSNs, sentiment analysis has the potential to describe how emotions spread among populations and their effects.
_Taking advantage of corpora particularities_
New sentiment classification strategies can be explored if particularities of the services are taken into account, like Twitter’s short message size, slang, _hashtags_⁴³ and network characteristics. Go et al. (2007) is one of the first attempts in the literature at sentiment classification on Twitter. Text processing techniques were proposed to extract and reduce features, and an algorithm was built reaching over 80% accuracy in classification. Hu et al. (95) also noted that an interesting feature of social data is the presence of _emoticons_, which can be used as labels for machine learning algorithms, helping the classification process.
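A toy sketch of this emoticon-as-label idea (all tweets are made up, and scikit-learn is used here only for illustration, not necessarily the tooling of the cited works): emoticons provide noisy labels, are stripped from the text, and a simple bag-of-words classifier is trained.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["great match today :)", "worst service ever :(",
         "love this new song :)", "so disappointed with the update :("]
labels = [1 if ":)" in p else 0 for p in posts]            # emoticon as noisy label
texts = [p.replace(":)", "").replace(":(", "") for p in posts]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["this is awesome", "truly terrible service"]))
```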
Another interesting element of OSN corpora is the presence of language expressions not always present in formal texts. Using the fact that sentences are commonly followed by descriptive _hashtags_ (like “#irony” or “#not”) that can be used as labels for supervised learning, Culotta (52) and Reyes et al. (2012) worked on learning and detecting sarcasm and irony in text, with positive results.
_Applications_
Sentiment analysis can have many applications. For example, Jansen et al. (2009) and Ghiassi et al. (2013) analyzed how OSN users express sentiments towards different brands, obtaining a measure of approval or disapproval. With the increasing influence of SNSs, this kind of work can be valuable for companies to understand and deal with customer demands. Dodds et al. (2011) and Lansdall-Welfare et al. (2012) developed indicators of happiness among populations, based on the analysis of OSNs texts. With that, they were able to analyze the impact of historical events – such as economic recession (Lansdall-Welfare et al., 2012) – in public opinion, showing an innovative quantification of population welfare.
Deeper analyses take into account not only text classification, but also a study of how sentiment spreads in the network. Hu et al. (96) took advantage of _emotional contagion_ theories (Howard and Gengler, 2001) to help the classification of texts produced by specific users, obtaining better results than traditional algorithms. In a controversial experiment, Kramer et al. (2014) filtered content displayed on Facebook to emphasise positive or negative posts, showing how emotions can be contagious. Although the users subject to the experiment did not present drastic changes in behavior, statistically significant effects were observed.
### Prediction
A valid question to address when dealing with OSNs is how representative the dynamics present in the virtual environment are of the non-virtual world. Supposing that what happens inside SNSs can provide information about external events, researchers have been trying to build predictors in many fields:
* Elections: predict the outcome of elections from OSNs manifestations (Tumasjan et al., 2010);
* Box-office revenue: forecast the popularity (and revenue) of a blockbuster before or just after it comes out (Asur and Huberman, 2010);
* Book sales prediction (Gruhl et al., 2005);
* Disease spread (Culotta, 53; Lampos and Cristianini, 2012);
* Stock market prediction from sentiment analysis (Bollen et al., 27).
However, despite the initial positive results and good perspectives presented in the works above, skepticism about the effectiveness of the proposed methods and their representativeness must be noted, as seen in Gayo-Avello (2013), Wong et al. (2012) and Zhang et al. (2011), which analyzed election forecasts, box-office revenue and stock market predictions, respectively. Those studies showed that the validity of the initial findings can be questioned and that many results cannot be generalized as expected.
### Trending topics detection
Another important focus of research that uses content published in OSNs is the analysis of message exchange dynamics, aiming to detect trends. Although some SNSs, like Twitter, have their own algorithms for trending topics detection, alternative proposals of content detection and organization have been made. According to Guille et al. (2013), there are two main approaches to detect a trending topic in an SNS: message analysis or network analysis.
_Message analysis_
Focusing on the content of messages, Shamma et al. (2011) proposed a simple metric to identify trending topics, analyzing the frequency of words during specific time frames compared to their general frequency (similar to the usual tf-idf (Dillon, 1983) model in NLP). A trending topic happens when there is an abnormal term frequency occurrence. In a creative approach, Weng et al. (2011) considered the frequency of words over time as waveforms. Thus, some messages would contain words whose waveforms resonate together, enabling the identification of emergent topics.
Lu and Yang (2012) went further and developed a method to predict which topics will become popular in the future. Using a strategy originally intended to predict stock markets, this method is able to calculate the _trend momentum_: the difference in frequency of a term between a short and a long time period. In the tests performed, the method was effective, with trends being successfully predicted by the increase in momentum.
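A toy version of these frequency-comparison ideas (the token streams are made up): a term’s share of a short recent window is compared with its share of a longer historical window, and a positive difference signals an emerging trend.

```python
from collections import Counter

def momentum(term, recent_tokens, history_tokens):
    f_short = Counter(recent_tokens)[term] / max(len(recent_tokens), 1)
    f_long = Counter(history_tokens)[term] / max(len(history_tokens), 1)
    return f_short - f_long

recent = "goal goal match goal referee".split()
history = "match weather match news referee match traffic".split()
print(momentum("goal", recent, history))   # positive value => upward trend
```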
_Network analysis_
Moving from a message-based to a network-based approach, Cataldi et al. (2010) used not only the term frequency, but also the authority (calculated using PageRank) of the users posting the observed content. This way, they were able to identify not only trending topics, but also related topics. Takahashi et al. (2014) used exclusively network information to create a probabilistic model of interactions. When anomalies are detected in the interaction pattern, a trending topic can be detected without any text analysis. In their tests, this technique performed at least as well as other text-based techniques, being superior when topic keywords are hard to determine.
_Tracking memes evolution_
Apart from trending topic detection, Leskovec et al. (2009) studied not only the topics created, but also their evolution into new subtopics or derivatives over time, observing the spreading of news for days. The researchers were able to track a common path in the news cycle, with content being first published in traditional media and, a few hours later, the same content appearing in blogs and other online services, resulting in “heartbeat-like” patterns of attention peaks.
### Location and real events detection
In many cases, topics discussed in OSNs are about events that take place in the “real” (or external) world, like political, public or daily life events. Also, as content is often posted from mobile devices, it is common for OSN users to be physically present during those events. Therefore, OSN data can be a valuable resource for recovering data from offline interactions.
_Location_
Information about the geographical location of OSN users is available in many SNSs, especially in _location-based SNSs_, such as Foursquare⁴⁴ and Nearby⁴⁵. Noulas et al. (2011) characterized users’ geographical data present on Foursquare, demonstrating the potential of such data for unprecedented research on human mobility and urban spatiality and for applications such as recommendation systems.
Cho et al. (2011), analyzing social data from both location-based SNSs (Gowalla⁴⁶ and Brightkite⁴⁷) and cell phone towers, found patterns in user mobility and were able to create a predictive model of users’ locations. The analysis reveals that, although people tend to spend most of their time transitioning between routine locations (e.g.: home, work), social connections and the location of friends are also determinant in identifying an individual’s location.
The geolocation of an OSN user can also influence its relationships and content exchanged. Takhteyev et al. (2012) found that groups of people sharing similar cultural or geographical elements, such as language and location, are more likely to be connected in an OSN. Also, the existence of physical connections between places, like the presence of abundant airline flight routes, can be an indicator of social connections. Cheng et al. (2010) also explored this aspect, indicating that it is possible to predict the location of a user exclusively from the content of his/her textual messages, even when this information is not explicitly disclosed.
Showing the potential of social data as a demographic tool, Cranshaw et al. (2012) developed a methodology by which a city can be spatially divided into regions, using data from Foursquare. By comparing the records of users present in different public spaces (Foursquare’s _check-ins_) and the spaces’ geographical locations, an affinity matrix is built, revealing similarities between venues. This matrix can then be clustered, revealing areas of both spatial and social proximity inside cities. These areas, denominated by the authors as _livehoods_, form a relevant and coherent territory demarcation (as revealed by interviews), presenting a valuable alternative to traditional municipal organizational units such as neighborhoods.
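A rough sketch of this affinity-plus-clustering idea (not the authors’ exact pipeline; the check-in matrix is made up): venues are compared by how many users checked in at both, and the resulting affinity matrix is clustered with scikit-learn.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

checkins = np.array([        # rows: users, columns: venues (toy data)
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])
affinity = checkins.T @ checkins     # venue-venue co-visit counts
np.fill_diagonal(affinity, 0)

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)   # venues grouped into livehood-like clusters
```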
_Detecting real events_
Becker et al. (2011) worked on a method to distinguish Twitter messages that refer to real events from those that do not (jokes, spam, memes, etc.) by clustering messages of the same topic and, then, classifying the clusters based on their properties. Psallidas et al. (2013) discussed the challenge of separating, in an OSN, content related to predictable events (e.g.: awards, games, concerts) from those related to unpredictable ones (e.g.: emergencies, disasters, breaking news). Features useful to describe each type of diffusion were evaluated to be used as input to classification algorithms, being effective in large-scale experiments.
Sasahara et al. (2013) analyzed how some topics related to past events spread across the social network, finding some patterns that help in the identification of real event diffusion. According to the authors, diffusion networks of real events have an abrupt and unusual structure (compared to diffusion of other kinds of events), making it possible to create automatic tools to detect them.
_Using real events information_
Hu et al. (2012) studied how a social network is capable of disclosing breaking news even before traditional media. As a case study, they used the fact that the news of Osama Bin Laden’s death was disclosed on OSNs before traditional media, and showed how OSN users take on roles of leadership to efficiently transmit information and influence other users during those events.
Using this ability of quick awareness, Sakaki et al. (2010) and Neubig et al. (2011) proposed automatic methods for detecting earthquakes in Japan, considering network users as _social sensors_. Their results were robust and promising, involving the identification of the earthquake’s centre and trajectory, inference about the safety of people possibly affected and the generation of automatic earthquake alerts faster than official announcement by authorities.
### Social recommendation systems
Another application of OSN data is the possibility of creating social recommendation systems for products or even for content produced by users in the network. In a space with many users and much data, the use of social relationships can improve traditional recommendation systems in both relevance and scalability: users connected by a social relationship usually share many interests, both by homophily⁴⁸ and by contagion, which reduces the amount of data necessary to make accurate recommendations.
_Trust networks_
One practical use of social information in recommendation systems is the synthesis of _trust networks_, which are groups of related users considered to have a valuable opinion on some matters. Generally, a user’s trustworthiness is related to his or her proximity to a reference user.
Walter et al. (2007) described how an OSN can be used to collect information in general and how relationships can help to filter relevant information for each user as trust networks are established. By using exclusively content from users’ neighborhoods, they were able to build recommendation systems as effective as other systems that use information from the whole database. Arazy et al. (2009) created social recommendation systems to evaluate product reputation, building trust networks to weigh the relevance of users’ opinions.
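A minimal sketch in the spirit of these neighborhood-based approaches (data and rating threshold are made up): items highly rated by a user’s direct connections, and not yet seen by the user, are recommended.

```python
friends = {"ana": {"bia", "carlos"}, "bia": {"ana"}, "carlos": {"ana"}}
ratings = {"bia": {"book1": 5, "book2": 2},
           "carlos": {"book2": 4, "book3": 5}}

def recommend(user, friends, ratings, min_score=4):
    seen = set(ratings.get(user, {}))
    scores = {}
    for friend in friends.get(user, set()):
        for item, score in ratings.get(friend, {}).items():
            if item not in seen and score >= min_score:
                scores[item] = max(scores.get(item, 0), score)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana", friends, ratings))   # e.g. ['book1', 'book3', 'book2']
```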
_Improving traditional recommendation systems_
Other uses of OSN data for recommendation systems include the work of Ma et al. (2011), who use relationship data to initialize recommendation systems that have few initial reviews. Also, Yang et al. (2013) created probabilistic models of users’ preferences to make recommendations based on friendship connections. In a more conservative proposal, Liu and Lee (2010) suggested ways to improve existing recommendation systems by including social information, like users’ relationships, and showed how the accuracy of the algorithms may be positively affected.
_Content selection_
A common task for recommendation systems on SNSs is to select relevant content to be displayed to users. Chen et al. (2010) worked on a series of algorithms to recommend content to users, in order to improve Twitter’s usability. They were able to reach a level at which 72% of the content shown was considered interesting, according to feedback from real Twitter users.
Backstrom et al. (2013) worked with Facebook data, analyzing the attention a topic might receive, by predicting the topic’s length and its re-entry rate (i.e., the number of times a user participates in the same topic). This gives a measure of how interesting a topic is and can be used to select and recommend content to users.
## 6 Social interaction analysis
By watching _users diffusing content_, researchers expect to learn more about complex human behavior. The access to data produced by OSNs and the knowledge of how to process and analyze them are enabling computer scientists to join discussions previously exclusive to sociologists or psychologists. This new intersection of fields is known as _computational social science_ (Lazer et al., 2009; Cioffi-Revilla, 2010; Conte et al., 2012).
There are still questions about whether the behavior observed in an OSN can be extrapolated to its users’ offline lives and whether OSN users are representative enough for drawing conclusions about whole societies from their behavior (Boyd, 2010). Even so, there are plenty of phenomena that take place on OSNs that are worth studying, as we will outline in this section.
### Cascading
One of the most widely studied behavioral phenomena that take place in OSNs is the information cascade. Also known as the viral effect, a cascade is characterized by a contagious process in which users, after having contact with a content or a behavior, reproduce it and influence new users to do the same. This decentralized process often causes chain reactions of great proportions, involving many users, and is one of the main mechanisms of information diffusion in social networks.
The unpredictability and the magnitude of this phenomenon attract many researchers trying to interpret and understand the factors behind it. The cascade effect has been studied and characterized in many different SNSs, such as:
* Facebook (Sun et al., 2009; Dow and Friggeri, 2013);
* Google+ (Guerini et al., 2013);
* Second Life⁴⁹(Bakshy et al., 2009);
* Flickr (Cha et al., 2009);
* Twitter and Digg (Lerman and Ghosh, 2010);
* LinkedIn (Anderson et al., 2015).
Goel et al. (2012) alone studied information diffusion in seven different OSN domains, verifying the similarity of cascading properties regardless of the service observed.
_Properties observed_
From the empirical analysis of information cascades on OSNs, some common properties can be observed, as already shown by Goel et al. (2012). A good characterization of many of those properties can be found in Borge-Holthoefer et al. (2013), which gathered results from works that modeled and analyzed cascades.
Among the properties observed, some are highlighted:
* Most cascades have small depth⁵⁰, exhibiting a star-shaped connection graph (a central node connected to many others around it). This was shown by many researchers, such as Leskovec et al. (2007), González-Bailón et al. (2011), Lerman and Ghosh (2010) and Goel et al. (2012).
* In practice, the majority of information diffusion processes that take place in the network are shallow and do not reach many users. Thus, widely scattered cascades turn out to be rare and exceptional events.
* In general, cascades (even large ones) occur in a short period of time. Most reactions to a content posted on an OSN usually happen quickly after it is posted (Centola, 2010; Leskovec et al., 2007) and do not last for a long time (Borge-Holthoefer et al., 2013).
* Any user on the network has potential to start widely scattered cascades. It is shown that different sources of information can conquer space on the network (Bessi et al., 2014), and attempts to measure users’ potential to start a cascade are not conclusive (Bakshy et al., 2011; Borge-Holthoefer et al., 2012) (see section 6.5 on influence for more details).
_Information origins_
Myers et al. (2012) studied sources of information in OSNs. They found that almost one third of the information that travels on Twitter network comes directly from external sources, while the rest comes from other users, through cascades. Tracking a cascading process can be a challenge when the content being propagated may undergo changes. Leskovec et al. (2009) proposed ways to track memes and their derivatives, in a process that can take several days, showing the long transformation process from publication to popularization.
_How topology influences cascades_
The analysis of the network underlying a diffusion is a helpful way to understand a cascading process. Goel et al. (2013), using a dataset of billions of diffusion events on Twitter, analyzed the diffusion networks and proposed a “structural virality” metric, able to measure whether a piece of content spread through a single large broadcast or through many person-to-person generations.
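Concretely, structural virality can be read as the average shortest-path distance over all pairs of nodes in a diffusion tree: a pure broadcast (star) scores low, while a long person-to-person chain scores high. The sketch below computes it for two hypothetical cascades with NetworkX.

```python
import itertools
import networkx as nx

def structural_virality(tree: nx.Graph) -> float:
    n = tree.number_of_nodes()
    dist = dict(nx.all_pairs_shortest_path_length(tree))
    total = sum(dist[u][v] for u, v in itertools.permutations(tree, 2))
    return total / (n * (n - 1))

star = nx.star_graph(6)     # broadcast-like cascade: one node, six re-shares
chain = nx.path_graph(7)    # viral-like cascade: a chain of re-shares
print(structural_virality(star), structural_virality(chain))
```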
One of the most important conclusions of the network analysis, shown by Sun et al. (2009), Ardon et al. (2013) and Weng et al. (2013), is the fact that topics that can reach initially more than one community of users tend to cause larger cascades.
_Cascades from historical events_
Specific events in which SNSs had significant influence, such as political movements and protests, have received special attention in social network analysis. In 2009, following the Iranian presidential elections, many protests took place and their effects could be noticed in SNSs through an increase in diffused information. Zhou et al. (2010) conducted a qualitative study of these cascades, concluding that in general they are shallow (99% of the diffusion trees have depth smaller than three). González-Bailón et al. (2011), based on the diffusion network, analyzed the roles of users and related them to their positions in the network. According to the study, users who are influential in the process of spreading information tend to be more central in the network.
Similar experiments were conducted with the protests that happened in Spain on May 15th, 2011. Borge-Holthoefer et al. (2011) analyzed the diffusion network related to such events and differentiated users that acted as sources of information from users that only consumed it. In a later work, Gonzalez-Bailon et al. (2013) identified four types of users – namely influentials, hidden influentials, broadcasters and common users – that can help in understanding how users behave in cascading processes.
### Predicting cascades
An important motivation for characterizing cascades is to be able to predict how users in a network will behave with regard to a specific piece of content and how this content will spread. This capacity to tell beforehand how many users will see or share a piece of online content can be a source of revenue for advertisers and also a useful tool for governments willing to effectively disseminate information of public interest.
However, the task of predicting the popularity of online content has proven to be extremely difficult (Salganik et al., 2006; Watts, 2012). Two main problems are determinant (Cheng et al., 2014): (a) the definition of which features (if any) determine the size of a cascading process; and (b) the fact that widely spread cascades are rare events (Goel et al., 2012), making it hard to develop and train algorithms with so few positive samples.
Nevertheless, those difficulties were not enough to prevent research in this area, as seen in the many scientific works already published. Also, according to experiments presented by Petrovic et al. (2011), the identification of content likely to be shared is a task manageable by humans, which brings hope to new inquiries. As we will show below, many works have been published on this topic, using a wide range of strategies to tackle these problems.
_Feature selection_
The most important aspects to consider when building machine learning algorithms (such as predictors or classifiers) to analyze cascades are the proper characterization of information diffusion processes and the choice of relevant properties that describe these processes while preserving the existing distinctions among them (Suh et al., 2010). From the literature, we can see that four main classes of features are generally chosen: (a) message features, (b) user features, (c) network features, and (d) temporal features.
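To make this grouping concrete, the sketch below assembles a feature vector organized along the four classes above for a hypothetical cascade record. All field names and values are illustrative assumptions, not features taken from any specific study.

```python
# Illustrative only: a hypothetical feature vector grouped into the four
# classes discussed above (message, user, network, temporal). Field names
# are assumptions for the sake of the example, not from a specific paper.
from dataclasses import dataclass, asdict

@dataclass
class CascadeFeatures:
    # (a) message features
    has_url: bool
    num_hashtags: int
    sentiment: float          # e.g. polarity in [-1, 1]
    # (b) user features
    follower_count: int
    account_age_days: int
    # (c) network features
    communities_reached: int
    diffusion_density: float
    # (d) temporal features
    shares_first_hour: int
    shares_first_day: int

example = CascadeFeatures(True, 2, -0.3, 15000, 900, 3, 0.01, 40, 310)
feature_vector = list(asdict(example).values())  # input to a classifier
```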
_Message features_
Does a textual message posted in an OSN have an intrinsic potential to be shared? Assuming that some content has more potential than others to create cascades, researchers have investigated ways of predicting the future popularity of a message based on text analysis. This kind of investigation is especially interesting when there is a need (or desire) to maximize the audience reached by content posted by a specific user: by adjusting the text to be posted, it would be possible to increase the reach of an author’s message.
This is the aim of the work by Naveed et al. (2011), which found correlations between message content and _retweet_ count on Twitter. Several features were analyzed, such as the presence of URLs, _hashtags_, mentions of other users, punctuation and sentiment. Their conclusion is that messages referring to public content and carrying negative emotions are more likely to be shared. Suh et al. (2010) performed an extensive search for features, covering both message and user characteristics, in a large dataset (74 million posts from Twitter), highlighting the presence of URLs and _hashtags_ as the most relevant message-content factors for predicting cascades.
More creative message descriptors were studied by Hong et al. (2011), who used topic detection algorithms to identify a message’s topic, which was then used as a feature. Tsur and Rappoport (2012) explored other interesting features that can be extracted from a _hashtag_, such as its location within a post or its length in characters or words.
_User features_
It is evident that a popular and influential user is more likely to generate a cascading process than an anonymous user. Therefore, analyzing aspects of the user who shares a message, and possibly of the users who continue this process, can be crucial to building a reliable cascade predictor.
In addition to message features (as discussed above), Suh et al. (2010) also analyzed a set of possible features related to authors, including the number of connections, the number of past messages posted, the number of days since the user’s account was created and the number of messages previously marked as favorite by other users. Their conclusion was that only the number of connections and the age of the account show any sort of correlation with _retweet_ rates. Hong et al. (2011) also suggested other features, namely: the author’s authority according to PageRank (Page et al., 1998), degree distribution, local clustering coefficient⁵¹ and reciprocal links.
[FOOTNOTE:51][ENDFOOTNOTE]
Metrics taking into account properties of the users involved in a diffusion (beyond the author) can also be valuable. Hoang and Lim (2012) introduced a model to predict information virality on Twitter by creating three features: item virality (the rate at which users share a piece of content after receiving it), user virality (the number of connections of the users involved in a diffusion) and user susceptibility (the proportion of content a user has shared in the past). Lerman and Hogg (2010), by observing cascades on Digg, were able to create models that describe the initial behavior of users sharing content, thus allowing them to forecast a cascade’s size. Lee et al. (2014) explored features related to users’ previous behavior, such as average time spent online, the time of day at which the user is more likely to join discussions, and the number of messages sent over time.
_Network features_
The analysis of the network structure where a diffusion takes place is also important to determine the potential range of a cascade.
Weng et al. (2013) explored the importance of characterizing the network, building on the knowledge that diffusions starting in multiple communities are more likely to be larger (Sun et al., 2009; Ardon et al., 2013). The authors then proposed as metrics the number of communities involved in the early diffusion and the amount of message exchange between different communities (inter-community communication).
Kupavskii et al. (2012) examined a set of features to describe a cascade, showing relevant improvements in the prediction task when using network features such as the flow of the cascade (a measure related to the number of users sharing a piece of content and how fast they share it) and the authority within the network formed by users sharing the same message, calculated using PageRank (Page et al., 1998). Ma et al. (2013) used both message and network features to predict the popularity of Twitter _hashtags_. Among the network features adopted are metrics like the ratio between the number of connected components in the network and the number of users that initiated the cascade, the density of the diffusion network⁵² and the diffusion network’s clustering coefficient. Their conclusion is that network features are more effective than message features for predicting the use of _hashtags_.
[FOOTNOTE:52][ENDFOOTNOTE]
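As an illustration of such network features, the sketch below computes a few of the metrics mentioned above (density, clustering coefficient and the ratio of connected components to initiators) with NetworkX on a hypothetical “who shared after whom” graph; the edge data and the set of initiators are made up for the example.

```python
# A minimal sketch of diffusion-network features of the kind discussed above,
# computed with NetworkX on a hypothetical "who shared after whom" graph.
import networkx as nx

# Hypothetical diffusion edges: (earlier sharer, later sharer).
edges = [("a", "b"), ("a", "c"), ("c", "d"), ("e", "f")]
diffusion = nx.Graph(edges)
initiators = {"a", "e"}  # users who posted the content independently

features = {
    "density": nx.density(diffusion),
    "avg_clustering": nx.average_clustering(diffusion),
    "components_per_initiator":
        nx.number_connected_components(diffusion) / len(initiators),
}
print(features)
```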
_Temporal features_
Every cascade process can be represented as a time series, listing the amount of information diffused over time. This time series can be seen as a cascade signature, representing its range, speed and power.
Szabo and Huberman (2010) analyzed the initial diffusion of YouTube and Digg content and, based on the initial time series, forecast the long-term popularity of specific items. They pointed out that only two hours of access data for Digg stories was enough to predict thirty days of popularity, whereas on YouTube ten days of records were needed to estimate the following twenty days.
Cheng et al. (2014) improved this strategy by dividing the original prediction problem into subtasks in which, based on past features, a classifier must estimate whether a piece of content published on Facebook will double its audience or not. In this way, robust, high-performance classifiers can be built.
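The extrapolation idea behind this family of methods can be sketched as a log-linear fit of long-term popularity against early popularity, which is then applied to new items. The code below uses synthetic data and is only a simplified reading of the approach, not the procedure of any particular paper.

```python
# A minimal sketch of the log-linear extrapolation idea behind early-popularity
# prediction: fit log(final popularity) against log(early popularity) on
# historical items, then extrapolate for a new item. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
early = rng.integers(10, 1000, size=200)           # views in the first hours
final = early * rng.lognormal(2.0, 0.3, size=200)  # views after 30 days

# Least-squares fit of log(final) = a * log(early) + b.
a, b = np.polyfit(np.log(early), np.log(final), deg=1)

def predict_final(early_views: int) -> float:
    return float(np.exp(a * np.log(early_views) + b))

print(predict_final(150))  # extrapolated long-term popularity
```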
_What exactly is predicted_
Having presented the features used to describe cascading phenomena, it is worth examining the different approaches to predicting cascades.
Most of the work on this topic tries to estimate the number of users or messages that will join a cascade. Examples are Kupavskii et al. (2012), who predicted the number of messages (_retweets_) a cascade will accumulate, Ma et al. (2013), who predicted the popularity of a new topic (_hashtag_), and Suh et al. (2010), who forecast the rate of users participating in a cascade.
However, some works were simply interested in building binary classifiers to determine whether a piece of content will be shared by any user at all. This is the case of Naveed et al. (2011) and Petrovic et al. (2011). Hong et al. (2011) went a little further and created four categories of cascading (not shared, fewer than 100 shares, fewer than 10000 shares, and above 10000 shares) that can be classified more easily.
Another strategy was used by Morgan (2009), who built a system able to predict which users are inclined to join a cascade. Lee et al. (2014) worked along the same lines, ranking the \(N\) users most inclined to share a message.
### Rumors diffusion
Another area of study involving cascades that has received special attention from the research community is the detection of false information (rumor⁵³) propagation.
[FOOTNOTE:53][ENDFOOTNOTE]
_Characterizing rumors_
Aiming to characterize this phenomenon, Friggeri et al. (2014), with the assistance of a website that documents memes and urban legends (http://snopes.com), mapped the appearance of rumors on the Facebook network, showing that rumor cascades tend to be more popular than generally expected and discussing users’ reactions after acknowledging the falsehood of previously posted messages. Also on Facebook, Bessi et al. (2014) observed how network users accept different sources of information. By analyzing how content from (a) mainstream media, (b) alternative media, and (c) political activism is diffused, they concluded that, regardless of source, every piece of information has the same visibility. This may favor people who share false content, as they potentially have the same power of influence on the network as reliable sources.
_Detecting rumors_
Mendoza et al. (2010), when analyzing the diffusion networks of news related to a natural disaster in Chile, observed that the patterns of rumor spreading differ from those of genuine information. Therefore, in a subsequent work, Castillo et al. (2011) sought automated methods to detect rumors by analyzing features of the posted texts and of the users involved in propagating the information.
Qazvinian et al. (2011) further demonstrated the effectiveness of network- and message-content features for detecting rumors. Despite this positive result, the small number of rumors analyzed (only five) is noticeable, given the quantity of data (10000 posts from Twitter). Gupta et al. (2012) also worked on developing metrics, this time trying to measure the credibility of users, messages and events, resulting in a credibility score for the overall topic being diffused.
_Rumor containment_
From a different perspective, Tripathy et al. (2010) explored ways to contain a rumor cascade once it has been identified. Using techniques inspired by disease immunization, they discussed the importance of quickly identifying rumors and of using anti-rumor agents able to detect such events and spread messages against the rumors. Lastly, Shah and Zaman (2011) aimed to detect the source of a rumor cascade, developing a new topological measure called “rumor centrality” that outperforms traditional metrics in specific cases.
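Rumor centrality on a tree is usually quoted as n! divided by the product of the sizes of the subtrees obtained when the tree is rooted at the candidate source. The sketch below implements that formula on a small hypothetical infected tree; it should be read as an illustration of the idea rather than the authors’ exact algorithm.

```python
# A sketch of rumor centrality on a tree, using the commonly quoted formula
# R(v) = n! / product over nodes u of (size of the subtree rooted at u when
# the tree is rooted at v). The example infected tree is hypothetical.
import math
import networkx as nx

def rumor_centrality(tree: nx.Graph, v) -> int:
    sizes = {}

    def subtree_size(node, parent):
        size = 1
        for nbr in tree[node]:
            if nbr != parent:
                size += subtree_size(nbr, node)
        sizes[node] = size
        return size

    subtree_size(v, None)                      # root the tree at candidate v
    product = math.prod(sizes.values())
    return math.factorial(tree.number_of_nodes()) // product

# Hypothetical infected subgraph (a tree): the node with the highest score
# is the most likely source under this measure.
tree = nx.Graph([(1, 2), (2, 3), (2, 4), (4, 5)])
scores = {node: rumor_centrality(tree, node) for node in tree}
print(max(scores, key=scores.get))  # best candidate source (node 2 here)
```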
### Information diffusion models
One way to understand and study the dynamics of OSNs is to build models that represent user interactions. A reliable representation enables simulations that help explain the events taking place in the network.
_Models paradigms_
Borge-Holthoefer et al. (2013) reviewed the models used to describe cascades in complex networks. According to them, the models can be divided into two main groups: (a) threshold models and (b) epidemic and rumor models. In both approaches, a user’s decision to adopt a certain behavior depends on the neighbors that have already adopted it. In threshold models a user will act only if the proportion of his/her active neighbors exceeds a given threshold; in epidemic and rumor models, on the other hand, active users have a probability of infecting each of their neighbors.
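A minimal sketch of one update step under each paradigm is given below, using NetworkX; the threshold value, the infection probability and the seed set are hypothetical choices made for the example.

```python
# A minimal sketch of one update step under each paradigm described above,
# on an arbitrary NetworkX graph. The threshold and infection probability
# are hypothetical parameter choices.
import random
import networkx as nx

def threshold_step(graph, active, threshold=0.5):
    """Inactive nodes activate if the fraction of active neighbors exceeds the threshold."""
    newly_active = {
        node for node in graph if node not in active
        and graph.degree(node) > 0
        and sum(nbr in active for nbr in graph[node]) / graph.degree(node) > threshold
    }
    return active | newly_active

def epidemic_step(graph, active, p=0.2):
    """Each active node independently infects each inactive neighbor with probability p."""
    newly_active = {
        nbr for node in active for nbr in graph[node]
        if nbr not in active and random.random() < p
    }
    return active | newly_active

g = nx.karate_club_graph()
seeds = {0}
print(len(threshold_step(g, seeds)))  # deterministic given the seed set
print(len(epidemic_step(g, seeds)))   # stochastic outcome
```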
An example of the threshold model is provided by Shakarian et al. (2013), who used it to create a heuristic for identifying users capable of starting a cascade. The method quickly identifies a relatively small set of users able to trigger cascades that cover the whole network, even for large networks with millions of nodes and edges.
Within the epidemic paradigm, Gruhl et al. (2004) created a model for information diffusion in blogs and validated it with real data. They showed that the model faithfully reproduces real behavior: blogs that are influential and popular in reality are also relevant in the model’s diffusion. Golub and Jackson (2010) also showed that the epidemic model is an appropriate way of representing cascades when modeling (the rare⁵⁴) high-depth cascades.
[FOOTNOTE:54][ENDFOOTNOTE]
It is important to notice that the epidemic model, being based on disease propagation, has limitations when describing information contagion, given their different nature. One important distinction is the concept of _complex contagion_ (Centola and Macy, 2007), which states that, for a behavior to be acquired by an individual on social networks, he/she has to be exposed to it through multiple other individuals. This differs from disease infection, where a single contact with a virus is enough to infect a person (_simple contagion_). Romero et al. (150) explored this phenomenon on Twitter, showing that multiple exposures to a subject were determinant for contagion. Weng et al. (2013), however, made a counterpoint, showing that although most content spreads like complex contagion, some can be properly modeled as simple contagion.
In a different approach, Herd et al. (2014) built a model in which, after collecting behavioral data from Twitter, each user receives a probability of posting and a probability of expressing emotions. With this, they created a multi-agent model to simulate the behavior of social networks. By building a model based on messages exchanged during the 2012 United States presidential campaign, the researchers were able to detect which users were most influential in spreading messages. An unexpected conclusion was that removing the ten biggest enthusiasts of Barack Obama’s campaign would have a larger impact on the network than removing Obama himself.
_Model enhancements_
Some enhancements can be proposed to make the models more realistic for the OSN context. This is the case of Weng et al. (2012) and Gonçalves et al. (2011), who considered limitations on the amount of information each user can access and process. This reproduces the fact that many information diffusions on OSNs simply lose strength and disappear, regardless of their content.
Gómez et al. (2013) discussed ways of modeling and processing information diffusion through multiplex networks. A multiplex network is a network with multiple levels, each level representing a different type of relationship between the network nodes. Therefore, a multiplex network is an adequate model to represent online social networks, as OSN users can be connected in multiple ways (e.g.: different topics may generate different dynamics on the network, creating different diffusion networks connecting users). The proposed analysis revealed relevant aspects of the relationship among those multiple processes.
_Inferred paths of propagation_
Another area of interest is determining the paths traveled by messages subject to diffusion. Gomez-Rodriguez et al. (2012) were able to infer the order in which users were “infected” by a piece of content by observing the final infected network. By analyzing the timestamps at which network nodes shared the content, they calculated the most likely structure connecting the nodes. The algorithm was applied to a large database of blog diffusions, achieving high-quality results.
Yang and Leskovec (2010) created a method to model and forecast information diffusion independently of the network structure. For each user of the network, an influence index is estimated as a measure of the number of users infected by him/her over time. Thus, for an initial group of infected users, it is possible to predict how many new users will be infected in the future, even without information about their connections. Moreover, the individual influences can be grouped and used to model the influence dynamics of different classes of users.
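The underlying idea, that the volume of new infections at a given time can be written as the sum of contributions from users infected earlier, can be sketched as follows. The per-user influence curves and infection times are made-up values, and the snippet does not reproduce the original estimation procedure.

```python
# A simplified sketch of per-user influence functions: the number of newly
# infected users at time t is modeled as the sum of contributions from users
# infected earlier. Influence values and infection times are hypothetical.
import numpy as np

# Hypothetical influence curves: contribution k time steps after infection.
influence = {
    "alice": np.array([5.0, 3.0, 1.0]),
    "bob":   np.array([2.0, 1.0, 0.5]),
}
infection_time = {"alice": 0, "bob": 1}

def predicted_new_infections(t: int) -> float:
    total = 0.0
    for user, t_u in infection_time.items():
        lag = t - t_u
        if 0 <= lag < len(influence[user]):
            total += influence[user][lag]
    return total

for t in range(4):
    print(t, predicted_new_infections(t))
```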
### Influence
As already anticipated, another important factor that determines information diffusion in an OSN is the users’ capacity to influence. An influential user can be decisive in starting (or triggering) cascade events, or even in changing people’s opinions and behavior.
_Locating influential users_
Locating an influential individual in a network is not a trivial task. Cha et al. (2010) discussed three metrics aiming to quantify users’ influence in OSNs: number of connections (node degree), number of mentions, and number of messages reshared (_retweets_) by other users. They discuss the most appropriate ways to measure influence, revealing that simple metrics like the number of connections can be misleading indicators of a user’s future influence. Weng et al. (2010) were more optimistic, showing that an adaptation of the PageRank algorithm (Page et al., 1998) can successfully measure influence on such networks.
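As a generic illustration of the PageRank-based approach, the snippet below ranks the accounts of a hypothetical “who follows whom” graph with the NetworkX implementation of PageRank; it is not the specific adaptation proposed by Weng et al. (2010).

```python
# A generic illustration of PageRank-based influence ranking on a hypothetical
# follower graph (edges point from follower to followee), using NetworkX.
import networkx as nx

follows = [("u1", "u3"), ("u2", "u3"), ("u4", "u3"), ("u3", "u5"), ("u5", "u3")]
g = nx.DiGraph(follows)

scores = nx.pagerank(g, alpha=0.85)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # most "influential" accounts first under this proxy
```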
However, Bakshy et al. (2011), analyzing a huge dataset, showed that the theoretical results and metrics are not always confirmed in practice. They argued that, even though it is possible to identify influential users able to repeatedly start widely scattered cascades, determining a priori which users will influence a cascade process is a hard task. Borge-Holthoefer et al. (2012) also analyzed real data in order to identify influential users from the network topology. Although influential users are correctly identified in some cases, there are situations where “badly located” users also turn out to be influential, exceeding expectations.
_Influence effects_
Researchers have also been interested in evaluating the effects of social influence. Bakshy et al. (2009), by examining the adoption rate of user-to-user content transfer in Second Life⁵⁵ among friends and strangers, showed that content sharing among known users usually happens sooner than among strangers, although transactions with strangers can influence and reach a wider audience.
[FOOTNOTE:55][ENDFOOTNOTE]
Stieglitz and Dang-Xuan (2012) analyzed tweets containing political opinions and concluded that texts with more emotional words have a stronger influence on the network, being more likely to be shared. Salathé et al. (2013) discussed how network connections influence opinions and individual sentiment by observing reactions to a new vaccination campaign in the United States. They showed that negative users are more readily accepted by the network and that users connected to opinionated neighbors tend to be discouraged from expressing opinions.
### Network’s influence on behavior
Even though individual users have autonomy, it cannot be denied that social connections influence the formation and evolution of their behaviors and opinions. OSN analysis enables the empirical observation of the effects of social connections on individual behavior, and the development of new models and theories capable of explaining these hypothesized associations.
_Homophily_
A relationship between the topological structure of an OSN and the behavior of its users can often be noticed. In most cases it is not possible to determine what is cause and what is consequence (i.e., whether the topology is a result of user behavior, or the behavior is a consequence of the topology), but the study of one can help in the understanding of the other.
Researchers have identified, in social networks in general, a tendency for users with common interests to be connected to each other (McPherson et al., 2001). This phenomenon is called _homophily_ and is also verified in OSNs. For example, Bollen et al. (2011a), investigating the relationship between emotions and social connections, verified that users considered happy tend to be linked to each other.
Romero et al. (151) investigated the relationship between the (explicit) network of friendship and the (implicit) network of topical affiliations (i.e., the communities formed by users interested in a common topic). They showed that both networks have considerable intersection (users tend to connect to other users with common interests), such that it is possible to predict friendship from _hashtag_ diffusions and also the future popularity of a _hashtag_ from the friends network.
_Users’ information processing capability_
Gonçalves et al. (2011) verified whether users are able to surpass Dunbar’s number⁵⁶ in OSNs, given that users usually have hundreds, or even thousands, of connections in such services. After analyzing message exchanges, they showed that, despite the abundance of social connections in OSNs, users are unable to interact regularly with more peers than Dunbar’s threshold predicts. Grabowicz et al. (2012) studied how the topology affects the type of content transmitted on the network, discussing how users who are not very closely related (intermediary ties) can filter relevant information from several groups, while close relationships (strong ties) can be distracted by a great amount of irrelevant messages.
[FOOTNOTE:56][ENDFOOTNOTE]
_Divergence of opinions in networks_
By examining information diffusion dynamics on OSNs, Romero et al. (150) studied how users do not immediately adopt an opinion or behavior (such as a new political position) upon first contact with the idea, provided by a few initial users. However, if the user is continuously exposed to such content, with many users reinforcing it, the chance of adoption increases. This result was validated on Twitter, where the authors examined how _hashtags_ are diffused and the decisive role of multiple exposures.
Based on the relationships established on Twitter, Golbeck and Hansen (2014) estimated the political preferences of users and analyzed how different political opinions coexist in a social network. Also, using the user database together with the predicted political preferences, they were able to analyze the audience of traditional media sources, classifying them as liberal or conservative. This media classification proved consistent with previous classifications in the literature.
### Self-organization
Some research groups studied how users in OSNs, given the absence of central command and their decentralized communication, are able to self-organize in specific situations.
_Crisis events_
Leysia Palen, Kate Starbird and colleagues (Vieweg et al., 2010; Starbird et al., 2010; Starbird and Palen, 2010, 2011, 2012) conducted in-depth research on how OSNs can help manage information during crisis events, such as popular uprisings, political protests, natural disasters and humanitarian aid missions. The researchers identified that, among the thousands of messages and publications posted during a crisis, mechanisms emerge that deal efficiently with this overload of information. Some of the observed dynamics include content selection, relevance detection and the attribution of roles to specific users. They showed that the largest information cascades during those events tend to involve important content, serving as a way to emphasize content worth being viewed by other users. The network is also able to identify reliable users (such as on-site witnesses) and give relevance to their posts by sharing them more often. Thereby, just by observing the content circulating on SNSs, it is possible to quickly identify the most important or urgent information and even coordinate actions to help and assist people.
_Social curating_
Another self-organizing ability of OSNs is content curation, i.e., the ability to collectively select and filter content relevant to users. This process can happen spontaneously in traditional SNSs or in dedicated services like _Pinterest_⁵⁷ or _Tumblr_⁵⁸, where users collaboratively build collections on diverse subjects, selecting content from the Internet.
[FOOTNOTE:57][ENDFOOTNOTE]
[FOOTNOTE:58][ENDFOOTNOTE]
Liu (2010) explored the skills involved in the curating process, describing seven distinct abilities of a social network, namely: collecting, organizing, preserving, filtering, crafting a story, displaying and facilitating discussions. These skills are compared to actual professional skills (archivist, librarian, preservationist, editor, storyteller, exhibitor and docent, respectively), emphasizing how impressive the network’s ability to promote self-organization is, specializing in and accomplishing complex tasks.
Zhong et al. (2013), in a comprehensive study, described in detail the process and mechanisms of curating on the Last.fm⁵⁹ and Pinterest services, discussing users’ motivations behind it. They also showed that the social curating process assigns value to items differently from centralized strategies, being an important source of opinion and quality assessment. However, the community’s choices can still be biased, especially when dealing with items already popular on the network or previously promoted by the service.
[FOOTNOTE:59][ENDFOOTNOTE]
## 7 Final remarks
In this work, we performed a comprehensive analysis of the research published on online social network analysis from a Computer Science perspective. Different topics of inquiry were distinguished and a taxonomy was proposed to organize them. For each area, we defined the scope of the works included in it and presented some of the most representative works, highlighting the discoveries, discussions and challenges of each field.
As seen in the previous sections, computational research in OSN analysis is wide and diverse, enabling the application of techniques from many fields like graph theory, complex networks, dynamic systems, computational simulation, machine learning, natural language processing, data mining, spatio-temporal modeling, among others.
Although many aspects of the presented areas are still being developed, some general trends in the course of the research could be identified. The simple characterization of OSN structures, much valued in the earliest studies, has progressively been replaced by studies of users’ behavior on the network and the complex dynamics they produce. Works using social data for different purposes are also very common, with the extracted knowledge often considered a valuable proxy for human behavior or opinion.
_Future perspectives_
Predicting the next steps of research on OSNs is a challenging and risky task. It is even rash to predict whether interest in this topic will keep increasing in the years to come. Nonetheless, in the following paragraphs we list some possibilities for new studies that we believe are worth exploring.
Despite the existence of only a few works combining information from multiple social networks, we can notice an increase in the number of theoretical and experimental studies dealing with heterogeneous relationships (e.g.: following, friendship, transportation sharing) from one or more concurrent sources (Gomez-Gardenes et al., 2012; Gómez et al., 2013; Mucha et al., 2010; Sun and Han, 2012). This kind of analysis opens several new roads for research, making it possible to have a more complete overview of how individuals interact and influence each other, to better track the evolution of a piece of information, and to evaluate how specialized the use of different social networks may be (which may help us estimate how representative user behavior in OSNs is), to cite just a few examples. An important aspect to single out is that such data may also be obtained from sources other than OSNs, like surveys, interviews or sensors (e.g.: GPS in smartphones). In fact, we believe that, with the emergence of the “Internet of Things”, this offline data will acquire prominence in computational studies of human behavior.
In addition to the use of different sources for context awareness, a deeper understanding of how networks evolve over time is also a likely subject of future work. Most studies still treat the network structure as a fixed object, ignoring its transformation and plasticity. The limitations of current methods can be seen in information diffusion analysis, for instance, where disregarding when a connection is active may create paths that are not temporally consistent and artificially reduce the distance between individuals. More work is required to understand which transformations take place in each kind of network, their impact on the processes observed in complex systems, and how such processes influence the evolution of the networks themselves.
The knowledge drawn from online social networks may impact not only computer science, but may also provoke a revolution in the social sciences. The burgeoning cross-disciplinary field of computational social science benefits from computational methods, such as multi-agent-based models, network analysis and machine learning, in order to build a fast, data-driven science. The program of this new data-intensive discipline intends to make use of the partially structured data available on the Internet to validate and complement existing social theories, or even to propose new explanations for social phenomena. The use of data from OSNs can not only greatly speed up the currently time-consuming process of gathering social data, but may also improve the reproducibility of research in the social sciences, as every step of the research, from data collection to analysis, may be audited and reproduced by external agents.
_Challenges_
Even though the volume of work analyzing OSNs is significant, the area still presents some open challenges that deserve further attention from researchers.
One initial challenge is associated with the tools and methodologies used. Most approaches in OSN studies (especially social data analysis) focus on characteristics of users or messages, but few take a more systemic view that addresses network effects. Therefore, we believe there is a promising niche to be explored with methods from complex systems and network science, trying to understand, for instance, the roles of topology, homophily, heterogeneity in individual behaviors and collective cognition in such social systems. This kind of research, however, demands tools and strategies yet to be developed and tested. More effort is thus required to build a robust theoretical framework to tackle these problems adequately.
After approximately ten years in the spotlight, OSNs remain a topic of interest for the general media and academia. Buzzwords like “social”, “big data” and “complexity” are increasingly popular, and the number of new scientific papers related to them grows each year. As more discoveries are made, it becomes more difficult to properly select relevant works and validate the new results presented in the literature. One of the main aims of this work is, precisely, to help researchers with the task of organizing and selecting material on OSN analysis.
The lack of ethical considerations in most of the observed works is left as our final remark. Even though we focused on computational approaches to online social networks, the information collected and the knowledge produced by the works we analyzed have direct implications for societies. For example, the theories and methods developed in this research area can potentially be used in harmful ways by authoritarian regimes or abusive advertising campaigns. Privacy is also an important issue: by analyzing public data and behaviors in OSNs, data scientists may uncover implicit information about specific individuals, information that such individuals may never have intended to make public. As OSN analysis is a strongly interdisciplinary field, we believe this is a pressing challenge that must not be overlooked.
## Acknowledgements
The authors sincerely thank Romis Attux, Leonardo Maia and Fabrício Olivetti de França for their kind effort in revising this work and contributing corrections and new insights.
Part of the results presented in this work were obtained through the project “Training in Information Technology”, funded by Samsung Eletronics of Amazonia LTDA., using resources from Law of Informatics (Brazilian Federal Law Number 8.248/91).
## References
* Adamic and Adar (2005) Lada Adamic and Eytan Adar. How to search a social network. _Social Networks_, 27(3):187–203, 2005. ISSN 03788733. doi: 10.1016/j.socnet.2005.01.007.
* Adamic and Adar (2003) Lada a. Adamic and Eytan Adar. Friends and neighbors on the Web. _Social Networks_, 25(3):211–230, 2003. ISSN 03788733. doi: 10.1016/S0378-8733(03)00009-1.
* Ahn et al. (2007) Yong-Yeol Ahn, Seungyeop Han, Haewoon Kwak, Sue Moon, and Hawoong Jeong. Analysis of topological characteristics of huge online social networking services. In _Proceedings of the 16th international conference on World Wide Web - WWW ’07_, page 835, New York, New York, USA, 2007. ACM Press. ISBN 9781595936547. doi: 10.1145/1242572.1242685.
* Ajmera (2014) Harsh Ajmera. Latest Social Media users stats, facts and numbers for 2014, 2014.
* Akhtar et al. (2013) Nadeem Akhtar, Hira Javed, and Geetanjali Sengar. Analysis of facebook social network. In _Proceedings - 5th International Conference on Computational Intelligence and Communication Networks, CICN 2013_, pages 451–454. IEEE, 2013. ISBN 9780768550695. doi: 10.1109/CICN.2013.99.
* Akoglu et al. (2010) Leman Akoglu, Mary McGlohon, and Christos Faloutsos. oddball: Spotting Anomalies in Weighted Graphs. In Mohammed J. Zaki, Jeffrey Xu Yu, B. Ravindran, and Vikram Pudi, editors, _14th Pacific-Asia Conference, PAKDD 2010_, volume 6119 of _Lecture Notes in Computer Science_, pages 410–421. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. ISBN 978-3-642-13672-6. doi: 10.1007/978-3-642-13672-6_40.
* Anderson et al. (2015) Ashton Anderson, Daniel Huttenlocher, Jon Kleinberg, Jure Leskovec, and Mitul Tiwari. Global Diffusion via Cascading Invitations. In _Proceedings of the 24th International Conference on World Wide Web - WWW ’15_, pages 66–76, New York, New York, USA, may 2015. ACM Press. ISBN 9781450334693. doi: 10.1145/2736277.2741672.
* Arazy et al. (2009) Ofer Arazy, Nanda Kumar, and Bracha Shapira. Improving Social Recommender Systems. _IT Professional_, 11(4):38–44, jul 2009. ISSN 1520-9202. doi: 10.1109/MITP.2009.76.
* Ardon et al. (2013) Sebastien Ardon, Amitabha Bagchi, Anirban Mahanti, Amit Ruhela, Aaditeshwar Seth, Rudra Mohan Tripathy, and Sipat Triukose. Spatio-temporal and events based analysis of topic popularity in twitter. In _Proceedings of the 22nd ACM international conference on Conference on information & knowledge management - CIKM ’13_, pages 219–228, New York, New York, USA, nov 2013. ACM Press. ISBN 9781450322638. doi: 10.1145/2505515.2505525.
* Asur and Huberman (2010) Sitaram Asur and Bernardo A. Huberman. Predicting the Future with Social Media. In _2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology_, volume 1, pages 492–499. IEEE, aug 2010. ISBN 978-1-4244-8482-9. doi: 10.1109/WI-IAT.2010.63.
* Asur et al. (2011) Sitaram Asur, Louis Yu, and Bernardo a. Huberman. What Trends in Chinese Social Media. _SSRN Electronic Journal_, 2011. ISSN 1556-5068. doi: 10.2139/ssrn.1888779.
* Backstrom et al. (2006) Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. Group formation in large social networks. In _Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’06_, page 44, New York, New York, USA, 2006. ACM Press. ISBN 1595933395. doi: 10.1145/1150402.1150412.
* Backstrom et al. (2013) Lars Backstrom, Jon Kleinberg, Lillian Lee, and Cristian Danescu-Niculescu-Mizil. Characterizing and curating conversation threads. In _Proceedings of the sixth ACM international conference on Web search and data mining - WSDM ’13_, page 13, New York, New York, USA, 2013. ACM Press. ISBN 9781450318693. doi: 10.1145/2433396.2433401.
* Bakshy et al. (2009) Eytan Bakshy, Brian Karrer, and Lada a. Adamic. Social influence and the diffusion of user-created content. In _Proceedings of the tenth ACM conference on Electronic commerce - EC ’09_, {EC} ’09, page 325, New York, New York, USA, 2009. ACM Press. ISBN 9781605584584. doi: 10.1145/1566374.1566421.
* Bakshy et al. (2011) Eytan Bakshy, Jake M. Hofman, Winter A. Mason, and Duncan J. Watts. Everyone’s an influencer. In _Proceedings of the fourth ACM international conference on Web search and data mining - WSDM ’11_, WSDM ’11, page 65, New York, New York, USA, 2011. ACM Press. ISBN 9781450304931. doi: 10.1145/1935826.1935845.
* Bao et al. (2013) Peng Bao, Hua-Wei Shen, Junming Huang, and Xueqi Cheng. Popularity Prediction in Microblogging Network: A Case Study on Sina Weibo. _arXiv preprint arXiv:1304.4324_, pages 2–3, 2013.
* Barabási (1999) A.-L. Barabási. Emergence of Scaling in Random Networks. _Science_, 286(5439):509–512, oct 1999. ISSN 00368075. doi: 10.1126/science.286.5439.509.
* Bastian et al. (2009) Mathieu Bastian, Sebastien Heymann, and Mathieu Jacomy. Gephi: An Open Source Software for Exploring and Manipulating Networks. _International AAAI Conference on Weblogs and Social Media (ICWSM)_, 2009.
* Becker et al. (2011) Hila Becker, Mor Naaman, and Luis Gravano. Beyond Trending Topics: Real-World Event Identification on Twitter. _International AAAI Conference on Weblogs and Social Media (ICWSM)_, pages 1–17, 2011. doi: 10.1.1.221.2822.
* Benevenuto and Magno (2010) F Benevenuto and G Magno. Detecting spammers on twitter. In _Collaboration, electronic messaging, anti-abuse and spam conference (CEAS)_, volume 6, page 12, 2010.
* Benevenuto et al. (2009a) Fabrício Benevenuto, Tiago Rodrigues, Virgílio Almeida, Jussara Almeida, and Marcos Gonçalves. Detecting spammers and content promoters in online video social networks. In _Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval - SIGIR ’09_, page 620, New York, New York, USA, jul 2009a. ACM Press. ISBN 9781605584836. doi: 10.1145/1571941.1572047.
* Benevenuto et al. (2009b) Fabrício Benevenuto, Tiago Rodrigues, Meeyoung Cha, and Virgílio Almeida. Characterizing user behavior in online social networks. In _Proceedings of the 9th ACM SIGCOMM conference on Internet measurement conference - IMC ’09_, page 49, New York, New York, USA, 2009b. ACM Press. ISBN 9781605587714. doi: 10.1145/1644893.1644900.
* Bernstein et al. (2013) Michael S. Bernstein, Eytan Bakshy, Moira Burke, and Brian Karrer. Quantifying the invisible audience in social networks. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13_, page 21, New York, New York, USA, 2013. ACM Press. ISBN 9781450318990. doi: 10.1145/2470654.2470658.
* Bessi et al. (2014) Alessandro Bessi, Antonio Scala, Luca Rossi, Qian Zhang, and Walter Quattrociocchi. The economy of attention in the age of (mis)information. _Journal of Trust Management_, 1(1):12, dec 2014. ISSN 2196-064X. doi: 10.1186/s40493-014-0012-y.
* Beutel et al. (2013) Alex Beutel, Wanhong Xu, Venkatesan Guruswami, Christopher Palow, and Christos Faloutsos. CopyCatch: stopping group attacks by spotting lockstep behavior in social networks. In _Proceedings of the 22nd international conference on World Wide Web - WWW ’13_, pages 119–130, New York, New York, USA, may 2013. ACM Press. ISBN 9781450320351. doi: 10.1145/2488388.2488400.
* Bollen et al. (2011a) Johan Bollen, Bruno Goncalves, Guangchen Ruan, and Huina Mao. Happiness is assortative in online social networks. _Artificial life_, 17(3):237–251, jan 2011a. ISSN 1064-5462. doi: 10.1162/artl_a_00034.
* Bollen et al. (2011b) Johan Bollen, Huina Mao, and Xiaojun Zeng. Twitter mood predicts the stock market. _Journal of Computational Science_, 2(1):1–8, mar 2011b. ISSN 18777503. doi: 10.1016/j.jocs.2010.12.007.
* Borge-Holthoefer et al. (2011) Javier Borge-Holthoefer, Alejandro Rivero, Iñigo García, Elisa Cauhé, Alfredo Ferrer, Darío Ferrer, David Francos, David Iñiguez, María Pilar Pérez, Gonzalo Ruiz, Francisco Sanz, Fermín Serrano, Cristina Viñas, Alfonso Tarancón, and Yamir Moreno. Structural and dynamical patterns on online social networks: the Spanish May 15th movement as a case study. _PloS one_, 6(8):e23883, jan 2011. ISSN 1932-6203. doi: 10.1371/journal.pone.0023883.
* Borge-Holthoefer et al. (2012) Javier Borge-Holthoefer, Alejandro Rivero, and Yamir Moreno. Locating privileged spreaders on an online social network. _Physical Review E_, 85(6):066123, jun 2012. ISSN 1539-3755. doi: 10.1103/PhysRevE.85.066123.
* Borge-Holthoefer et al. (2013) Javier Borge-Holthoefer, Raquel a Baños, Sandra González-Bailón, and Yamir Moreno. Cascading behaviour in complex socio-technical networks. _Journal of Complex Networks_, 1(1):3–24, apr 2013. ISSN 2051-1310. doi: 10.1093/comnet/cnt006.
* Boyd (2010) Danah Boyd. Big Data: Opportunities for Computational and Social Sciences. http://www.zephoria.org/thoughts/archives/2010/04/17/big-data-opportunities-for-computational-and-social-sciences.html, 2010.
* Boyd et al. (2010) Danah Boyd, Scott Golder, and Gilad Lotan. Tweet, Tweet, Retweet: Conversational Aspects of Retweeting on Twitter. In _2010 43rd Hawaii International Conference on System Sciences_, pages 1–10. IEEE, jan 2010. ISBN 978-1-4244-5509-6. doi: 10.1109/HICSS.2010.412.
* Bragin (2010) John Bragin. _Complexity: A Guided Tour_, volume 13. Oxford University Press, New York, NY, USA, 2010. ISBN 9780195124415. doi: 10.1063/1.3326990.
* Castillo et al. (2011) Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. Information credibility on twitter. In _Proceedings of the 20th international conference on World wide web - WWW ’11_, page 675, New York, New York, USA, 2011. ACM, ACM Press. ISBN 9781450306324. doi: 10.1145/1963405.1963500.
* Cataldi et al. (2010) Mario Cataldi, Luigi Di Caro, and Claudio Schifanella. Emerging topic detection on Twitter based on temporal and social terms evaluation. In _Proceedings of the Tenth International Workshop on Multimedia Data Mining - MDMKDD ’10_, pages 1–10, New York, New York, USA, 2010. ACM Press. ISBN 9781450302203. doi: 10.1145/1814245.1814249.
* Centola (2010) Damon Centola. The spread of behavior in an online social network experiment. _Science (New York, N.Y.)_, 329(5996):1194–7, sep 2010. ISSN 1095-9203. doi: 10.1126/science.1185231.
* Centola and Macy (2007) Damon Centola and Michael Macy. Complex Contagions and the Weakness of Long Ties. _American Journal of Sociology_, 113(3):702–734, nov 2007. ISSN 0002-9602. doi: 10.1086/521848.
* Cha et al. (2009) Meeyoung Cha, Alan Mislove, and Krishna P. Gummadi. A measurement-driven analysis of information propagation in the flickr social network. In _Proceedings of the 18th international conference on World wide web - WWW ’09_, page 721, New York, New York, USA, apr 2009. ACM Press. ISBN 9781605584874. doi: 10.1145/1526709.1526806.
* Cha et al. (2010) Meeyoung Cha, Hamed Haddadi, Fabricio Benevenuto, and Krishna P Gummadi. Measuring User Influence in Twitter: The Million Follower Fallacy. _International AAAI Conference on Weblogs and Social Media (ICWSM)_, 10:10–17, 2010. doi: 10.1.1.167.192.
* Chen et al. (2010) Jilin Chen, Rowan Nairn, Les Nelson, Michael Bernstein, and Ed Chi. Short and tweet: experiments on recommending content from information streams. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’10_, pages 1185–1194, New York, New York, USA, 2010. ACM Press. ISBN 9781605589299. doi: 10.1145/1753326.1753503.
* Cheng et al. (2014) Justin Cheng, Lada Adamic, P. Alex Dow, Jon Michael Kleinberg, and Jure Leskovec. Can cascades be predicted? In _Proceedings of the 23rd international conference on World wide web - WWW ’14_, pages 925–936, New York, New York, USA, 2014. ACM Press. ISBN 9781450327442. doi: 10.1145/2566486.2567997.
* Cheng et al. (2010) Zhiyuan Cheng, James Caverlee, and Kyumin Lee. You are where you tweet. In _Proceedings of the 19th ACM international conference on Information and knowledge management - CIKM ’10_, page 759, New York, New York, USA, oct 2010. ACM Press. ISBN 9781450300995. doi: 10.1145/1871437.1871535.
* Cheong and Ray (2011) Marc Cheong and Sid Ray. A literature review of recent microblogging developments. _Victoria, Australia: Clayton School of Information Technology, Monash University._, 2011.
* Cho et al. (2011) Eunjoon Cho, Seth A. Myers, and Jure Leskovec. Friendship and mobility. In _Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’11_, page 1082, New York, New York, USA, aug 2011. ACM Press. ISBN 9781450308137. doi: 10.1145/2020408.2020579.
* Chun et al. (2008) Hyunwoo Chun, Haewoon Kwak, Young-Ho Eom, Yong-Yeol Ahn, Sue Moon, and Hawoong Jeong. Comparison of online social relations in volume vs interaction. In _Proceedings of the 8th ACM SIGCOMM conference on Internet measurement conference - IMC ’08_, page 57, New York, New York, USA, 2008. ACM Press. ISBN 9781605583341. doi: 10.1145/1452520.1452528.
* Cioffi-Revilla (2010) Claudio Cioffi-Revilla. Computational social science. _Wiley Interdisciplinary Reviews: Computational Statistics_, 2(3):259–271, may 2010. ISSN 19395108. doi: 10.1002/wics.95.
* Combe et al. (2010) David Combe, Christine Largeron, Elod Egyed-Zsigmond, and Mathias Géry. A comparative study of social network analysis tools. In _International Workshop on Web Intelligence and Virtual Enterprises_, volume 2, page 1, 2010.
* Conte et al. (2012) R. Conte, N. Gilbert, G. Bonelli, C. Cioffi-Revilla, G. Deffuant, J. Kertesz, V. Loreto, S. Moat, J. P. Nadal, A. Sanchez, A. Nowak, A. Flache, M. San Miguel, and D. Helbing. Manifesto of computational social science. _The European Physical Journal Special Topics_, 214(1):325–346, dec 2012. ISSN 1951-6355. doi: 10.1140/epjst/e2012-01697-8.
* Costa et al. (2007) Luciano F. Costa, Osvaldo N. Oliveira, Gonzalo Travieso, Francisco A. Rodrigues, Paulino R. Villas Boas, Lucas Antiqueira, Matheus P. Viana, and Luis E. C. da Rocha. Analyzing and Modeling Real-World Phenomena with Complex Networks: A Survey of Applications. _Advances in Physics_, 60(3):103, jun 2007. ISSN 0001-8732. doi: 10.1080/00018732.2011.572452.
* Cranshaw et al. (2012) Justin Cranshaw, Raz Schwartz, Jason I. Hong, and Norman Sadeh. The Livehoods Project: Utilizing Social Media to Understand the Dynamics of a City. _Proceedings of the 6th International AAAI Conference on Weblogs and Social Media (ICWSM)_, pages 58–65, jun 2012.
* Csardi and Nepusz (2006) Gabor Csardi and Tamas Nepusz. The igraph software package for complex network research. _InterJournal_, page 1695, 2006.
* Culotta (2010a) Aron Culotta. Towards detecting influenza epidemics by analyzing Twitter messages. In _Proceedings of the First Workshop on Social Media Analytics - SOMA ’10_, pages 115–122, New York, New York, USA, jul 2010a. ACM Press. ISBN 9781450302173. doi: 10.1145/1964858.1964874.
* Culotta (2010b) Aron Culotta. Detecting influenza outbreaks by analyzing Twitter messages. In _Proceedings of the First Workshop on Social Media Analytics - SOMA ’10_, pages 115–122, New York, New York, USA, 2010b. ACM Press. ISBN 9781450302173. doi: 10.1145/1964858.1964874.
* Daley and Kendall (1965) D. J. Daley and D. G. Kendall. Stochastic Rumours. _IMA Journal of Applied Mathematics_, 1(1):42–55, mar 1965. ISSN 0272-4960. doi: 10.1093/imamat/1.1.42.
* de Sola Pool and Kochen (1978) Ithiel de Sola Pool and Manfred Kochen. Contacts and Influence. _Social Networks_, 1(1):5–51, 1978. ISSN 03788733. doi: 10.1016/0378-8733(78)90011-4.
* Dillon (1983) Martin Dillon. Introduction to modern information retrieval. _Information Processing & Management_, 19(6):402–403, jan 1983. ISSN 03064573. doi: 10.1016/0306-4573(83)90062-6.
* Dodds et al. (2011) Peter Sheridan Dodds, Kameron Decker Harris, Isabel M. Kloumann, Catherine A. Bliss, and Christopher M. Danforth. Temporal patterns of happiness and information in a global social network: hedonometrics and Twitter. _PloS one_, 6(12):e26752, jan 2011. ISSN 1932-6203. doi: 10.1371/journal.pone.0026752.
* Dow and Friggeri (2013) P Alex Dow and Adrien Friggeri. The Anatomy of Large Facebook Cascades. _Proceedings of the 7th International AAAI Conference on Weblogs and Social Media (ICWSM)_, pages 145–154, 2013.
* Dunbar (1992) R.I.M. Dunbar. Neocortex size as a constraint on group size in primates. _Journal of Human Evolution_, 22(6):469–493, jun 1992. ISSN 00472484. doi: 10.1016/0047-2484(92)90081-J.
* Facebook (2014) Facebook. Company Info — Facebook Newsroom, 2014.
* Friggeri et al. (2014) Adrien Friggeri, Lada Adamic, Dean Eckles, and Justin Cheng. Rumor Cascades. _International AAAI Conference on Weblogs and Social Media (ICWSM)_, 2014.
* Gao et al. (2012) Qi Gao, Fabian Abel, Geert Jan Houben, and Yong Yu. A comparative study of users’ microblogging behavior on Sina Weibo and Twitter. In Judith Masthoff, Bamshad Mobasher, Michel C. Desmarais, and Roger Nkambou, editors, _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_, volume 7379 LNCS, pages 88–101, Berlin, Heidelberg, 2012. Springer. ISBN 9783642314537. doi: 10.1007/978-3-642-31454-4_8.
* Garcia et al. (2013) David Garcia, Pavlin Mavrodiev, and Frank Schweitzer. Social resilience in online communities. In _Proceedings of the first ACM conference on Online social networks - COSN ’13_, volume 40, pages 39–50, New York, New York, USA, nov 2013. ACM Press. ISBN 9781450320849. doi: 10.1145/2512938.2512946.
* Gayo-Avello (2013) Daniel Gayo-Avello. A Meta-Analysis of State-of-the-Art Electoral Prediction From Twitter Data. _Social Science Computer Review_, 31(6):649–679, aug 2013. ISSN 0894-4393, 1552-8286. doi: 10.1177/0894439313493979.
* Ghiassi et al. (2013) M. Ghiassi, J. Skinner, and D. Zimbra. Twitter brand sentiment analysis: A hybrid system using n-gram analysis and dynamic artificial neural network. _Expert Systems with Applications_, 40(16):6266–6282, nov 2013. ISSN 09574174. doi: 10.1016/j.eswa.2013.05.057.
* Ghosh et al. (2012) Saptarshi Ghosh, Bimal Viswanath, Farshad Kooti, Naveen Kumar Sharma, Gautam Korlam, Fabricio Benevenuto, Niloy Ganguly, and Krishna Phani Gummadi. Understanding and combating link farming in the twitter social network. In _Proceedings of the 21st international conference on World Wide Web - WWW ’12_, page 61, New York, New York, USA, apr 2012. ACM Press. ISBN 9781450312295. doi: 10.1145/2187836.2187846.
* Girvan and Newman (2002) Michelle Girvan and M. E. J. Newman. Community structure in social and biological networks. _Proceedings of the National Academy of Sciences of the United States of America_, 99(12):7821–6, jun 2002. ISSN 0027-8424. doi: 10.1073/pnas.122653799.
* Go et al. (2007) Alec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision. _CS224N Project Report_, page 12, feb 2007.
* Goel et al. (2012) Sharad Goel, Duncan J. Watts, and Daniel G. Goldstein. The structure of online diffusion networks. In _Proceedings of the 13th ACM Conference on Electronic Commerce - EC ’12_, volume 1, page 623, New York, New York, USA, 2012. ACM Press. ISBN 9781450314152. doi: 10.1145/2229012.2229058.
* Goel et al. (2013) Sharad Goel, Ashton Anderson, Jake Hofman, and Duncan Watts. The structural virality of online diffusion. _Preprint_, 2013.
* Golbeck (2015) Jennifer Golbeck. Benford’s Law Applies to Online Social Networks. _PloS one_, 10(8):e0135169, jan 2015. ISSN 1932-6203. doi: 10.1371/journal.pone.0135169.
* Golbeck and Hansen (2014) Jennifer Golbeck and Derek Hansen. A method for computing political preference among Twitter followers. _Social Networks_, 36:177–184, jan 2014. ISSN 03788733. doi: 10.1016/j.socnet.2013.07.004.
* Golub and Jackson (2010) Benjamin Golub and Matthew O Jackson. Using selection bias to explain the observed structure of Internet diffusions. _Proceedings of the National Academy of Sciences of the United States of America_, 107(24):10833–6, jun 2010. ISSN 1091-6490. doi: 10.1073/pnas.1000814107.
* Gómez et al. (2013) S. Gómez, A. Díaz-Guilera, J. Gómez-Gardeñes, C. J. Pérez-Vicente, Y. Moreno, and A. Arenas. Diffusion Dynamics on Multiplex Networks. _Physical Review Letters_, 110(2):028701, jan 2013. ISSN 0031-9007. doi: 10.1103/PhysRevLett.110.028701.
* Gomez-Gardenes et al. (2012) Jesus Gomez-Gardenes, Irene Reinares, Alex Arenas, and Luis Mario Floría. Evolution of cooperation in multiplex networks. _Scientific reports_, 2:620, jan 2012. ISSN 2045-2322. doi: 10.1038/srep00620.
* Gomez-Rodriguez et al. (2012) Manuel Gomez-Rodriguez, Jure Leskovec, and Andreas Krause. Inferring Networks of Diffusion and Influence. _ACM Transactions on Knowledge Discovery from Data_, 5(4):1–37, feb 2012. ISSN 15564681. doi: 10.1145/2086737.2086741.
* Gonçalves et al. (2011) Bruno Gonçalves, Nicola Perra, and Alessandro Vespignani. Modeling users’ activity on twitter networks: validation of Dunbar’s number. _PloS one_, 6(8):e22656, jan 2011. ISSN 1932-6203. doi: 10.1371/journal.pone.0022656.
* Gonzalez-Bailon et al. (2013) S. Gonzalez-Bailon, Javier Borge-Holthoefer, and Yamir Moreno. Broadcasters and Hidden Influentials in Online Protest Diffusion. _American Behavioral Scientist_, 57(7):943–965, mar 2013. ISSN 0002-7642. doi: 10.1177/0002764213479371.
* González-Bailón et al. (2011) Sandra González-Bailón, Javier Borge-Holthoefer, Alejandro Rivero, and Yamir Moreno. The dynamics of protest recruitment through an online network. _Scientific reports_, 1:197, jan 2011. ISSN 2045-2322. doi: 10.1038/srep00197.
* Grabowicz et al. (2012) Przemyslaw a. Grabowicz, José J. Ramasco, Esteban Moro, Josep M. Pujol, and Victor M. Eguiluz. Social features of online networks: the strength of intermediary ties in online social media. _PloS one_, 7(1):e29358, jan 2012. ISSN 1932-6203. doi: 10.1371/journal.pone.0029358.
|
1703.00259 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 142245,
"num_imgs": 0,
"llama3_tokens_count": 54813
} | [] | We discuss a binary nature of funding impacts. Under some conditions, funding is either a cost or a benefit, i.e., one of the lending/borrowing rates plays no role in pricing derivatives. When different lending/borrowing rates are taken into account, pricing leads to semi-linear BSDEs and PDEs, so the equations must be solved numerically. However, once we can guarantee that only one of the rates affects pricing, we can recover linear equations and derive analytic formulae. Moreover, as a byproduct, our results explain how _debt value adjustment_ (DVA) and funding benefits differ. It is often believed that DVA and funding benefits overlap, but we show that the two components are governed by different mathematical structures of derivative transactions. We will see later that FBA occurs where the payoff is non-increasing, but this relationship becomes weaker as the funding of the underlying assets is shifted to repo markets.
_Key words:_ BSDEs, Malliavin calculus, bilateral contracts, incremental cash-flows, funding benefits
_AMS subject classifications:_ 60H07, 91G20, 91G40
## 1 Introduction
The financial crisis of 2007-2008 forced a re-examination of much of the standard practice for pricing derivatives. During the crisis, the defaults of large firms heightened default risk. Moreover, banks became more reluctant to lend money and, as a result, the gap between the London interbank offered rate (LIBOR) and the overnight indexed swap (OIS) rate widened. In response to the changed market conditions, banks began to make several adjustments to derivative prices. It has long been standard to apply a _credit value adjustment_ (CVA), which corrects derivative prices for the risk of the counterparty's default. On the other hand, _debt value adjustment_ (DVA) is a deduction for the hedger's own default. If one party defaults before the contractual maturity, the promised payments are settled as a close-out amount. However, because of the default, this obligation may not be fully honored, and collateral is typically pledged against this risk. _Funding value adjustment_ (FVA) is the adjustment of derivative prices for the excess funding costs and benefits incurred in maintaining the hedging portfolio and posting collateral.
Taking entity-specific information into account in derivative prices has required many changes to classical pricing theory. First, since the _law of one price_ does not hold, a given price for a contract may be an _arbitrage opportunity_ for one trader but not for others. To address this issue, entity-specific definitions of _arbitrage opportunity_ and fair values were suggested by Bielecki and Rutkowski (9); Bielecki et al. (6); Bichuch et al. (5), and those arguments were applied to various derivatives (Kim et al., 36, 35, for example). In addition, many other pricing methodologies have been developed. Replication pricing under FVA with collateral was introduced by Piterbarg (41); Wu (43) and Li and Wu (37) also discussed replication pricing with CVA and FVA under collateral. For the risk-neutral valuation principle, readers can refer to Brigo et al. (16). Crépey (26, 25) discussed min-variance pricing under default risk, funding spreads, and collateral.
Still, many puzzles remain among the adjustments. In particular, DVA and FVA have their own, yet intertwined, issues. First, there is doubt about reporting DVA, since it implies that a deterioration of the bank's own credit quality can be beneficial to its shareholders. As we will see later, the increase in own default risk can in theory be monetized by buying back the bank's own bonds. However, this is often impossible in practice because the bank must actually default to realize the benefit. Indeed, DVA is accepted by both IFRS and GAAP, but excluded from _common equity tier 1_ capital (CET1), which is a proxy for shareholders' wealth. Second, it is often believed that DVA double-counts funding benefits (see Remark 2.21). FVA has two parts: _funding benefit adjustment_ (FBA) and _funding cost adjustment_ (FCA). Both DVA and FBA may originate from the bank's own default risk. DVA is a deduction from the liabilities of a bank due to its own creditworthiness. On the other hand, FBA, as a counterpart of FCA, may countervail FCA, which also stems from the bank's own default risk. Thus, including both DVA and FBA may inflate the bank's reported profit (see Cameron, 22) as well as deflate the price charged to counterparties. Possibly for these reasons, the seller's DVA is often excluded in derivative transactions.
FVA is more debatable. According to the Modigliani-Miller (MM) theorem, a long-established financial principle, choices of funding should not be considered in pricing. On the other hand, in practice, with the increased interest rates offered by funding desks, traders incur losses unless they recoup the funding costs from counterparties. Indeed, traders feel confident that funding costs are observable in derivative transactions (see Andersen et al., 4). If the traders' belief is true, the inclusion of FVA in derivative prices may be justified by market frictions, which violate the assumptions of the MM theorem (see Modigliani and Miller, 38; Stiglitz, 42). However, as pointed out by Hull and White (32), if FVA really is a determinant of derivative prices, the demand for Treasury bonds becomes mysterious: banks buy bonds that return less than their funding rates.
The above issues will be addressed by the main results of this paper. Our main contribution is to show a binary nature of FVA: for many derivative contracts, FVA is either FCA or \(-\text{FBA}\). For example, we shall see later that, in our model, when buying bonds the trader never enters a borrowing position, i.e., \(\text{FCA}=0\) and \(\text{FVA}=-\text{FBA}\). If we assume that the lending rate is equal to the OIS rate (for example, as in Burgard and Kjaer, 20), we have \(\text{FVA}=0\). This may explain why banks do not require FVA when buying Treasury bonds, while FVA is observed in other derivative transactions.
The switching between funding costs and benefits depends on the structure of the derivative payoffs, the close-out conventions, and the choices of funding, e.g., from a repo market or the treasury department. The structure that determines the binary nature of funding impacts is whether the derivative payoffs are increasing or decreasing in the underlying assets. On the other hand, since DVA is a deduction on derivative payables, DVA occurs where the payoffs are positive. Therefore, DVA and FBA are affected by different mathematical structures of the payoffs. Indeed, it will be shown later by an example that DVA and FBA are clearly separated, and this result is in line with part of the conclusion in Andersen et al. (4) that DVA should be considered in pricing.
The difference between FBA and DVA was also pointed out by Albanese and Andersen (2). It was shown that FBA is larger than DVA mainly because of the effective discounting rate and the absence of a default indicator in the definition, even though FBA and DVA are quite similar. In their case study, FBA was 20% larger than DVA. However, we will see later that FBA and DVA are not even close in some cases (see Table 1, for example). The values of FBA and DVA often resemble each other because of the funding choice commonly made in the literature, namely that all underlying assets can be acquired through repo markets. When available, repo markets are preferred by dealers, but this is not always possible. As we will see later, FBA occurs where the payoff is non-increasing, but the more assets are traded through repo markets, the weaker this relationship becomes. Readers may want to refer to Remark 3.2-(iv, v) in advance.
Our results are also related to accounting with FVA. In FCA/FBA accounting, which is endorsed by some leading banks, DVA is recorded in a Contra-Liability (CL) account and
\[\text{FCA}-(\text{FBA}-\text{DVA})\]
is recorded in a Contra-Asset (CA) account (see Castagna, 23). It has been pointed out that this accounting method engenders a large asset/liability asymmetry. According to the binary nature of funding impacts, the large asymmetry is inevitable since, for some derivative contracts, either FCA\(=0\) or FBA\(=0\), and DVA\(\not=\)FBA, i.e., FCA does not countervail FBA and FBA does not overlap with DVA. FVA/FDA accounting was suggested as an alternative to recover asset/liability symmetry as well as to protect CET1 capital, which is a proxy for shareholders' wealth (see Albanese and Andersen, 2, for example). Readers can refer to Andersen et al. (4) for the issue of protecting shareholders' wealth under funding spreads.
Another benefit of our results is that we can recover linear equations to price derivatives. Because different lending/borrowing rates make the pricing equations semi-linear, analytic solutions are generally not available. Therefore, we need to solve the equations numerically, and the computational cost can become expensive. For an attempt to approximate the FVA of contracts with short maturity, readers may want to refer to Gobet and Pagliarani (30). Moreover, there have been several attempts to find closed-form solutions under funding spreads (Piterbarg, 41; Bichuch et al., 5; Brigo et al., 14). Still, in these arguments, the crucial assumption was equal lending and borrowing rates, which can itself be seen as a strong assumption. However, once we can guarantee that one of the funding rates does not play any role in pricing derivatives, we can assume equal lending and borrowing rates without loss of generality (because the dealer will never switch from one to the other). Put differently, our results make their results widely applicable.
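To make the reduction concrete, consider the simplest possible setting (a hypothetical toy example, not the model of this paper): a single deterministic payoff \(X\) paid at \(T\), funded at a lending rate \(R^{\ell}\) when cash is positive and at a borrowing rate \(R^{b}\) when cash is negative. The backward equation \(\mathop{}\!\mathrm{d}V_{t}=(R^{\ell}V^{+}_{t}-R^{b}V^{-}_{t})\mathop{}\!\mathrm{d}t\), \(V_{T}=X\), is semi-linear, but if \(V\) never changes sign it collapses to linear discounting at a single rate. The short Python sketch below (all function and parameter names are ours) illustrates this numerically.

```python
import numpy as np

# A minimal sketch, not the paper's BSDE: backward pricing of a single
# deterministic payoff X paid at T when surplus cash earns R_l and borrowed
# cash costs R_b.  The backward ODE dV = (R_l*V^+ - R_b*V^-) dt is
# semi-linear, but if V keeps one sign only one rate matters and the value
# collapses to plain exponential discounting.
def price_semilinear(X, T, R_l, R_b, n_steps=10_000):
    dt = T / n_steps
    V = X
    for _ in range(n_steps):
        # explicit backward Euler step for dV = (R_l*V^+ - R_b*V^-) dt
        V = V - (R_l * max(V, 0.0) - R_b * max(-V, 0.0)) * dt
    return V

X, T, R_l, R_b = 100.0, 5.0, 0.02, 0.05
nonlinear = price_semilinear(X, T, R_l, R_b)
linear_l = X * np.exp(-R_l * T)          # linear discounting at the lending rate only
print(nonlinear, linear_l)               # agree up to the time-discretization error
```

With \(X>0\) the value stays positive, so only \(R^{\ell}\) enters; in spirit, this one-sided behaviour is the reduction exploited in the main results for genuine derivative payoffs.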
Even though the binary nature allows us to find analytic formulae for a large class of derivatives, depending on the close-out convention the analytic formulae may or may not be represented in closed form. In this paper, we deal with two close-out conventions: _clean price_ and _replacement cost_. The _clean price_ (resp. _replacement cost_) is the risk-neutral price without (resp. with) value adjustments. Under _clean close-out_, because the pricing measures are not matched, the analytic formulae cannot be represented in closed form. Indeed, to avoid the inconsistency, Bichuch et al. (5) assumed the flexibility to choose a pricing measure for calculating the close-out amount, and Brigo et al. (14) considered only uncollateralized contracts with null cash-flow at default. In the case of _replacement close-out_, the mismatch does not appear because, in our model, the funding rates are tacitly embedded in the _replacement cost_. Therefore, we can provide closed-form solutions under _replacement close-out_.
In our model, we include incremental CVA, DVA, FVA, and _variation margin_. The reference filtration is generated by a Brownian motion, and we progressively enlarge it by the default times of the two parties. The default times are assumed to admit intensities. We do not assume that interest rates are deterministic, so our results can be applied to interest rate derivatives. In the main theorems, we assume that the volatility and the default intensities are deterministic to avoid heavy calculations, but stochastic parameters do not necessarily change the main results. We report one example with non-deterministic intensities in the Appendix.
This paper is organized as follows. In Section 2, we introduce our setup on the filtration and intensities, and construct the _incremental hedging portfolio_. In Section 2.4, we introduce a BSDE to price derivatives on the enlarged filtration. Instead of dealing with this BSDE directly, we define the XVA (x-value adjustment) process in Section 2.3.2, and the XVA process is reduced to a BSDE on the reference filtration as in Crépey and Song (27). Our main results are then provided in Section 3. To prove the main theorems, iterative transformations of the XVA process are needed, and the transformations depend on the close-out conventions. The proofs of the main theorems are reported in the Appendix. In Section 4, examples are examined and we provide a closed-form solution for a stock call option with _replacement close-out_.
## 2 Modeling
### Mathematical Setup
We consider two parties entering a bilaterally cleared contract. We call one party the "hedger" and the other party the "counterparty". We sometimes address the hedger (resp. counterparty) as "she" (resp. "he"). An index \(H\) (resp. \(C\)) will be used to represent the hedger (resp. counterparty). The argument of this paper is conducted from the hedger's point of view. The hedger is a financial firm that holds a portfolio to hedge the exchanged cash-flows of the contract. The counterparty may or may not be a non-financial firm. Note that when we say "dealer", "bank", or "trader", it does not necessarily refer to the hedger, since the counterparty can also be a bank.
Let \((\Omega,\mathcal{G},\mathbb{Q})\) be a probability space, where \(\mathbb{Q}\) is a risk-neutral probability measure. Let \(\mathbb{E}\) denote the expectation under \(\mathbb{Q}\). We consider random times \(\tau^{i}\), \(i\in\{H,C\}\),
\[\tau^{i}\colon(\Omega,\mathcal{G})\to(\mathbb{R}_{+},\mathcal{B}(\mathbb{R}_{+})),\]
which represent the default times of the hedger and counterparty. For \(i\in\{H,C\}\), we assume that \(\mathbb{Q}(\tau^{i}=0)=0\) and \(\mathbb{Q}(\tau^{i}>t)>0,~{}\forall t\in\mathbb{R}_{+}\). We also denote
\[\tau\coloneqq\tau^{H}\wedge\tau^{C},~{}~{}~{}\bar{\tau}\coloneqq \tau\wedge T,\]
where \(T\) is the maturity of the bilateral contract.
Let \(W=(W^{1},\dots,W^{n})\) be a standard \(n\)-dimensional Brownian motion under \(\mathbb{Q}\). Let \(\mathbb{F}=(\mathcal{F}_{t})_{t\geq 0}\) be the _usual natural filtration_ of \((W_{t})_{t\geq 0}\). Then we define
\[\mathbb{G}=(\mathcal{G}_{t})_{t\geq 0}\coloneqq\Big{(}\mathcal{F} _{t}\vee\sigma\big{(}\{\tau^{i}\leq u\}~{}\colon~{}u\leq t,i\in\{H,C\}\big{)} \Big{)}_{t\geq 0}.\]
We call \(\mathbb{F}\) (resp. \(\mathbb{G}\)) the reference filtration (resp. full filtration).
Then we consider a filtered probability space \((\Omega,\mathcal{G},\mathbb{G},\mathbb{Q})\). Note that \(\tau^{i}\), \(i\in\{H,C\}\), are \(\mathbb{G}\)-stopping times but may not be \(\mathbb{F}\)-stopping times. As a convention, for any \(\mathbb{G}\)-progressively measurable process \(u\), \((\mathbb{G},\mathbb{Q})\)-semimartingale \(U\), and \(s\leq t\),
\[\int_{s}^{t}u_{s}\mathop{}\!\mathrm{d}U_{s}\coloneqq\int_{(s,t]}u _{s}\mathop{}\!\mathrm{d}U_{s},\]
where the integral is well-defined. In addition, for any \(\mathbb{G}\)-stopping time \(\theta\) and process \((\xi_{t})_{t\geq 0}\), we denote
\[\xi^{\theta}_{\cdot}\coloneqq\xi_{\cdot\wedge\theta},\]
and when \(\xi_{\theta-}\) exists, \(\Delta\xi_{\theta}\coloneqq\xi_{\theta}-\xi_{\theta-}\). In what follows, for \(i\in\{H,C\}\), \(t\geq 0\), we denote \(G^{i}_{t}\coloneqq\mathbb{Q}(\tau^{i}>t|\mathcal{F}_{t})\), and
\[G_{t}\coloneqq\mathbb{Q}(\tau>t|\mathcal{F}_{t}).\]
The following assumption stands throughout this paper.
**Assumption 2.1**.:
1. \((G_{t})_{t\geq 0}\) _is non-increasing and absolutely continuous with respect to Lebesgue measure._
2. _For any_ \(i\in\{H,C\}\)_, there exists a process_ \(h^{i}\)_, defined as_ \[h^{i}_{t}\coloneqq\lim_{u\downarrow 0}\frac{1}{u}\frac{\mathbb{Q }(t<\tau^{i}\leq t+u,\tau>t|\mathcal{F}_{t})}{\mathbb{Q}(\tau>t|\mathcal{F}_{t })},\] _and the process_ \(M^{i}\)_, given by_ \[M^{i}_{t}\coloneqq\mathds{1}_{\tau^{i}\leq t\wedge\tau}-\int_{0} ^{t\wedge\tau}h^{i}_{s}\mathop{}\!\mathrm{d}s,\] _is a_ \((\mathbb{G},\mathbb{Q})\)_-martingale._
By (i) in Assumption 2.1, there exists an \(\mathbb{F}\)-progressively measurable process \((h^{0}_{t})_{t\geq 0}\) such that
\[h^{0}_{t}=\lim_{u\downarrow 0}\frac{1}{u}\frac{\mathbb{Q}(t<\tau\leq t+u|\mathcal{F}_{t})}{\mathbb{Q}(\tau>t|\mathcal{F}_{t})},\]
and
\[M_{t}\coloneqq\mathds{1}_{\tau\leq t}-\int_{0}^{t\wedge\tau}h^{0 }_{s}\mathop{}\!\mathrm{d}s\]
is also a \((\mathbb{G},\mathbb{Q})\)-martingale. Let us denote \(h\coloneqq h^{H}+h^{C}\). If \(\tau^{H}\) and \(\tau^{C}\) are independent given \(\mathbb{F}\), then \(h^{0}=h\), but this is in general not the case. Moreover, by (i) in Assumption 2.1, \(\tau\) avoids any \(\mathbb{F}\)-stopping time (see Coculescu and Nikeghbali, 24, Corollary 3.4). In other words, for any \(\mathbb{F}\)-stopping time \(\tau^{\mathbb{F}}\),
\[\mathbb{Q}(\tau=\tau^{\mathbb{F}})=0.\] (2.1)
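As an aside, default times with intensities can be sampled by the standard inverse construction \(\tau^{i}=\inf\{t\colon\int_{0}^{t}h^{i}_{s}\mathop{}\!\mathrm{d}s\geq E_{i}\}\) with \(E_{i}\sim\mathrm{Exp}(1)\). The following Python sketch assumes constant, deterministic intensities and independent default times, a special case of the setup above (all names and values are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal sketch assuming constant deterministic intensities h_H, h_C and
# independent default times.  With the inverse construction
# tau^i = inf{t : int_0^t h^i_s ds >= E_i}, E_i ~ Exp(1),
# a constant intensity reduces to an exponential draw with mean 1/h^i.
def sample_default_times(h_H, h_C, n_paths):
    tau_H = rng.exponential(1.0 / h_H, size=n_paths)
    tau_C = rng.exponential(1.0 / h_C, size=n_paths)
    return tau_H, tau_C, np.minimum(tau_H, tau_C)   # last entry: first-to-default time

h_H, h_C, T = 0.02, 0.03, 5.0
tau_H, tau_C, tau = sample_default_times(h_H, h_C, 100_000)

# Monte Carlo check of G_T = Q(tau > T); under independence it equals exp(-(h_H + h_C)*T)
print((tau > T).mean(), np.exp(-(h_H + h_C) * T))
```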
The next lemma is borrowed from Bielecki et al. (7) and Chapter 5 in Bielecki and Rutkowski (8).
**Lemma 2.2**.: _Let \(i\in\{H,C\}\)._
1. _Let_ \(U\) _be an_ \(\mathcal{F}_{s}\)_-measurable, integrable random variable for some_ \(s\geq 0\)_. Then, for any_ \(t\leq s\)_,_ \[\mathbb{E}(\mathds{1}_{s<\tau}U|\mathcal{G}_{t})= \mathds{1}_{t<\tau}G_{t}^{-1}\mathbb{E}(G_{s}U|\mathcal{F}_{t}),\] \[\mathbb{E}(\mathds{1}_{s<\tau^{i}}U|\mathcal{G}_{t})= \mathds{1}_{t<\tau^{i}}(G^{i}_{t})^{-1}\mathbb{E}(G^{i}_{s}U| \mathcal{F}_{t}).\]
2. _Let_ \((U_{t})_{t\geq 0}\) _be a real-valued,_ \(\mathbb{F}\)_-predictable process and_ \(\mathbb{E}|U_{\bar{\tau}}|<\infty\)_. Then,_ \[\mathbb{E}(\mathds{1}_{\tau=\tau^{i}\leq T}U_{\tau}|\mathcal{G}_{ t})=\mathds{1}_{t<\tau}G_{t}^{-1}\mathbb{E}\Big{(}\int_{t}^{T}h^{i}_{s}G_{s}U_ {s}\mathop{}\!\mathrm{d}s\Big{|}\mathcal{F}_{t}\Big{)}.\]
We define spaces of random variables, and stochastic processes as follows.
**Definition 2.3**.: _Let \(m\in\mathbb{N}\) and \(p\geq 2\)._
* \(\mathbb{L}^{p}_{T}\)_: the set of all_ \(\mathcal{F}_{T}\)_-measurable random variables_ \(\xi\)_, such that_ \[\|\xi\|_{p}\coloneqq\mathbb{E}[|\xi|^{p}]^{\frac{1}{p}}<\infty.\]
* \(\mathbb{S}^{p}_{T}\)_: the set of all real-valued,_ \(\mathbb{F}\)_-adapted, càdlàg processes_ \((U_{t})_{t\geq 0}\)_, such that_ \[\|U\|_{\mathbb{S}_{T}^{p}}\coloneqq\mathbb{E}\big{(}\sup\limits_{t\leq T}|U_{t}|^{p}\big{)}^{\frac{1}{p}}<\infty.\]
* \(\mathbb{H}^{p,m}_{T}\)_: the set of all_ \(\mathbb{R}^{m}\)_-valued,_ \(\mathbb{F}\)_-predictable processes_ \((U_{t})_{t\geq 0}\)_, such that_ \[\|U\|_{\mathbb{H}^{p}_{T}}\coloneqq\mathbb{E}\Big{(}\int_{0}^{T} \big{|}U_{t}\big{|}^{p}\mathop{}\!\mathrm{d}t\Big{)}^{\frac{1}{p}}<\infty.\]
* \(\mathbb{H}^{p,m}_{T,loc}\) _: the set of all_ \(\mathbb{R}^{m}\)_-valued,_ \(\mathbb{F}\)_-predictable_ _processes_ \((U_{t})_{t\geq 0}\)_, such that_ \[\int_{0}^{T}\big{|}U_{t}\big{|}^{p}\mathop{}\!\mathrm{d}t<\infty, \quad\text{a.s.}\]
Moreover, we let \(D_{\theta}=(D^{1}_{\theta},\dots,D^{n}_{\theta})\) denote the Malliavin derivative at \(\theta\geq 0\), and \(\mathbb{D}^{1,2}\) denote the set of Malliavin differentiable random variables. For Malliavin calculus, readers can refer to Di Nunno et al. (28) and Section 5.2 in El Karoui et al. (29). In the next section, we describe hedging portfolios under incremental CVA, DVA, FVA, and collateral. This aggregated adjustment is often called XVA. For notational simplicity, when \(n=1\), we denote \(D_{\theta}=D^{1}_{\theta}\), \(W=W^{1}\), and \(\mathbb{H}^{p}_{T}\coloneqq\mathbb{H}^{p,1}_{T}\), \(\mathbb{H}^{p}_{T,loc}\coloneqq\mathbb{H}^{p,1}_{T,loc}\).
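As a standard illustration of this notation (a textbook example, not specific to our model), take \(n=1\) and \(f\in C^{1}_{b}(\mathbb{R})\). If \(\xi=f(W_{T})\), then \(\xi\in\mathbb{D}^{1,2}\) and
\[D_{\theta}\xi=f^{\prime}(W_{T})\,\mathds{1}_{\{\theta\leq T\}},\qquad\theta\geq 0,\]
so that, for instance, \(D_{\theta}W_{T}=\mathds{1}_{\{\theta\leq T\}}\); the chain rule extends to, e.g., \(D_{\theta}\exp(W_{T}-T/2)=\exp(W_{T}-T/2)\,\mathds{1}_{\{\theta\leq T\}}\).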
### BSDEs under Incremental XVA
#### 2.2.1 Cash-flows
We consider a hedger and a counterparty entering a "new" contract which exchanges promised dividends. First, we proceed under the assumption that the two parties have not made any contract before the new one. Later, this assumption will be relaxed by slightly modifying our model so that incremental effects can be considered.
Let \(\textfrak{D}^{N}_{t}\) denote the accumulated amount of the promised dividends up to \(t\geq 0\). We assume that \(\textfrak{D}^{N}\) is an \(\mathbb{F}\)-adapted càdlàg process whose value is determined by an \(n\)-dimensional underlying asset process \(S=(S^{1},\dots,S^{n})\) that follows the stochastic differential equation (SDE):
\[\mathop{}\!\mathrm{d}S^{i}_{t}= r_{t}S^{i}_{t}\mathop{}\!\mathrm{d}t+\sigma^{i}_{t}S^{i}_{t} \mathop{}\!\mathrm{d}W_{t},~{}~{}i\in\{1,\dots,n\},\] (2.2)
for some \(\mathbb{F}\)-progressively measurable processes \(r\) and \((\sigma^{i})^{\top}\in\mathbb{R}^{n}\). In addition, we assume that the \(\mathbb{F}\)-adapted process \(\textfrak{D}^{N}\) is independent of the information of defaults.
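For illustration, the dynamics (2.2) can be simulated exactly when \(r\) and \(\sigma^{i}\) are constant, a special case of the \(\mathbb{F}\)-progressively measurable coefficients allowed above. A minimal Python sketch for a single asset (\(n=1\)), with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal sketch of the risk-neutral dynamics (2.2) for n = 1 with constant
# r and sigma, in which case the SDE has the exact log-normal solution
# S_t = S_0 * exp((r - sigma^2/2) t + sigma W_t).
def simulate_paths(S0, r, sigma, T, n_steps, n_paths):
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    t = dt * np.arange(1, n_steps + 1)
    return S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * W)

S = simulate_paths(S0=100.0, r=0.02, sigma=0.3, T=1.0, n_steps=252, n_paths=50_000)

# Martingale check: E[exp(-r T) S_T] should be close to S_0 = 100
print(np.exp(-0.02 * 1.0) * S[:, -1].mean())
```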
**Remark 2.4**.: _Note that an \(\mathbb{F}\)-adapted process may depend on default risks, since the default intensities are \(\mathbb{F}\)-adapted._
If a default occurs before the maturity \(T\) of the contract, the two parties stop exchanging \(\textfrak{D}^{N}\) and the derivative contract is marked-to-market. The method used to calculate the close-out amount is determined before initiation of the contract and documented in the Credit Support Annex (CSA). Let \(e^{N}_{t}\) denote the close-out amount at \(t\leq T\). In this paper, we deal with two conventions for the close-out amount \(e^{N}\): _clean close-out_ and _replacement close-out_. We postpone the explanation of the conventions to the next section, after we define the hedger's hedging portfolio. As a convention, \(\mathop{}\!\mathrm{d}\textfrak{D}^{N}_{t}\geq 0,~{}e^{N}_{t}\geq 0\) (resp. \(<0\)) means that the hedger pays to (resp. is paid by) the counterparty at \(t\leq T\).
**Example 2.5**.: _If the hedger buys a zero coupon bond of unit notional amount, \(\textfrak{D}^{N}=-\mathds{1}_{\llbracket T,\infty\llbracket}\)._
The obligation to settle \(e^{N}\) may not be fully honored because of the default. To mitigate this risk, the two parties post or receive collateral (often referred to as margin). The amount of collateral posted at \(t\geq 0\) (only for the new contract) is denoted by \(m^{N}_{t}\). We assume that \((m^{N}_{t})_{t\geq 0}\) is an \(\mathbb{F}\)-adapted process. By Assumption 2.1, \(\tau^{i}\), \(i\in\{H,C\}\), are totally inaccessible, which means that the defaults arrive as a surprise. Margins are posted because we do not have full information about the defaults, and this is why \(m^{N}\) is calculated on the observable information \(\mathbb{F}\). The exact form of \(m^{N}\) will be given later, after the conventions for \(e^{N}\) are introduced. We assume that the close-out payment is settled at the moment of default without delay and that \(m^{N}\) is posted continuously. As a convention, \(m^{N}_{t}\geq 0\) (resp. \(<0\)) means that the hedger posts (resp. receives) the collateral at \(t\leq T\).
**Remark 2.6**.: _In practice, there exists a gap between the day of default and the actual settlement. The delay is required to check whether the default really happened, to collect information on the contract, and to find the best candidate to replace the defaulting party (Murphy, 39). Gap risk is the risk from the gap between the close-out amount at the day of settlement and the last day that variation margin is posted. Against gap risk, the two parties post initial margin, which is often calculated by a risk measure. Note that we ignore gap risk and initial margin. If we consider initial margin, we encounter anticipative backward stochastic differential equations (ABSDEs) under replacement close-out. For the main result of this paper, Malliavin calculus for BSDEs will be used. However, to the best of our knowledge, Malliavin differentiability of ABSDEs has not been studied. Moreover, it is a challenging problem to solve ABSDEs numerically (see Agarwal et al., 1). The continuous posting of variation margin can also be seen as a simplification. One may want to model \(m\) as a càdlàg step process. For the discussion, readers may want to refer to Brigo, Liu et al. (18)._
At default, collateral is not exchanged. Thus, we set the collateral amount posted at \(\tau\leq T\) to be \(m^{N}_{\tau-}\). Therefore, the cash-flow at default can be
\[\Delta\textfrak{D}^{N}_{\tau}+e^{N}_{\tau}-m^{N}_{\tau-}.\]
However, it is immaterial whether or not we separate \(\Delta\textfrak{D}^{N}_{\tau}\) from \(e^{N}\) in the modeling, because jumps of \(\mathbb{F}\)-adapted càdlàg processes are exhausted by \(\mathbb{F}\)-stopping times (see He and Yan, 31, Theorem 4.21). Thus, by (2.1),
\[\Delta\textfrak{D}^{N}_{\tau}=0,~{}~{}\text{a.s.}\]
Let \(\mathfrak{C}^{N}\) denote the accumulated cash-flows. Then, for any \(t\leq T\), a.s.,
\[\mathfrak{C}^{N}_{t}\coloneqq\mathds{1}_{\tau>t}\textfrak{D}^{N}_ {t}+\mathds{1}_{\tau\leq t}(\textfrak{D}^{N}_{\tau}+e^{N}_{\tau})-\mathds{1}_{ \tau=\tau^{H}\leq t}L^{H}(e^{N}_{\tau}-m^{N}_{\tau-})^{+}+\mathds{1}_{\tau= \tau^{C}\leq t}L^{C}(e^{N}_{\tau}-m^{N}_{\tau-})^{-},\]
where \(0\leq L^{H}\leq 1\) (resp. \(0\leq L^{C}\leq 1\)) is the loss rate of the hedger (resp. counterparty). Recall that \(\mathfrak{C}^{N}\) is derived under the assumption that the new contract is the first contract between the hedger and the counterparty. Now, we relax this assumption so that we can consider incremental cash-flows.
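For concreteness, the close-out part of \(\mathfrak{C}^{N}\) at \(\tau\) (excluding the accumulated dividends \(\textfrak{D}^{N}_{\tau}\) exchanged before default) can be evaluated pathwise once the quantities at \(\tau\) are known. The Python sketch below follows the sign convention stated above (positive amounts are paid by the hedger); all names and numbers are hypothetical.

```python
# A minimal sketch of the default-time cash-flow in C^N above, excluding the
# accumulated dividends D^N_tau (positive = paid by the hedger).  The hedger's
# own default removes a fraction L_H of the uncollateralized amount she owes,
# (e - m)^+, while the counterparty's default removes a fraction L_C of the
# amount owed to the hedger, (e - m)^-.
def default_cashflow(e_N, m_N_minus, L_H, L_C, hedger_defaults_first):
    u = e_N - m_N_minus                      # uncollateralized close-out amount
    if hedger_defaults_first:
        return e_N - L_H * max(u, 0.0)       # tau = tau^H term
    return e_N + L_C * max(-u, 0.0)          # tau = tau^C term

# Close-out of 10 against collateral 4, 60% loss rate: the hedger pays 10 - 0.6*6 = 6.4
print(default_cashflow(10.0, 4.0, 0.6, 0.6, hedger_defaults_first=True))
```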
#### 2.2.2 Incremental Cash-flows
Assume that the two parties have made contracts, given by some endowed càdlàg \(\mathbb{F}\)-adapted processes \((\textfrak{D}^{E},e^{E},m^{E})\), before initiation of the new contract. If the two parties did not enter the new contract, the cash-flows for the hedger's bank would be
\[\mathfrak{C}^{E}_{t}\coloneqq\mathds{1}_{\tau>t}\textfrak{D}^{E}_ {t}+\mathds{1}_{\tau\leq t}(\textfrak{D}^{E}_{\tau}+e^{E}_{\tau})-\mathds{1}_{ \tau=\tau^{H}\leq t}L^{H}(e^{E}_{\tau}-m^{E}_{\tau-})^{+}+\mathds{1}_{\tau= \tau^{C}\leq t}L^{C}(e^{E}_{\tau}-m^{E}_{\tau-})^{-}.\]
On the other hand, when entering the new contract, the exposure and margin would be \(e^{E}+e^{N}\) and \(m^{E}+m^{N}\), respectively. In this case, the summed cash-flows for the bank are
\[\mathfrak{C}^{S}_{t}\coloneqq \mathds{1}_{\tau>t}(\textfrak{D}^{E}+\textfrak{D}^{N}_{t})+ \mathds{1}_{\tau\leq t}(\textfrak{D}^{E}_{\tau}+\textfrak{D}^{N}_{\tau}+e^{E}_ {\tau}+e^{N}_{\tau})\]
\[-\mathds{1}_{\tau=\tau^{H}\leq t}L^{H}(e^{E}_{\tau}+e^{N}_{\tau}- m^{E}_{\tau-}-m^{N}_{\tau-})^{+}+\mathds{1}_{\tau=\tau^{C}\leq t}L^{C}(e^{E}_{ \tau}+e^{N}_{\tau}-m^{E}_{\tau-}-m^{N}_{\tau-})^{-}.\]
Thus, the amount that should be handled by the hedger to enter the “new” contract can be given by
\[\mathfrak{C}_{t}\coloneqq \mathfrak{C}^{S}_{t}-\mathfrak{C}^{E}_{t}\]
\[= \mathds{1}_{\tau>t}\textfrak{D}^{N}_{t}+\mathds{1}_{\tau\leq t}( \textfrak{D}^{N}_{\tau}+e^{N}_{\tau})-\mathds{1}_{\tau=\tau^{H}\leq t}L^{H} \big{(}(e^{E}_{\tau}+e^{N}_{\tau}-m^{E}_{\tau-}-m^{N}_{\tau-})^{+}-(e^{E}_{ \tau}-m^{E}_{\tau-})^{+}\big{)}\]
\[+\mathds{1}_{\tau=\tau^{C}\leq t}L^{C}\big{(}(e^{E}_{\tau}+e^{N}_ {\tau}-m^{E}_{\tau-}-m^{N}_{\tau-})^{-}-(e^{E}_{\tau}-m^{E}_{\tau-})^{-}\big{)}.\] (2.3)
We denote the cash-flow process after \(\tau\) by \(\Theta\), i.e.,
\[\Theta\coloneqq\mathfrak{C}-(\textfrak{D}^{N})^{\tau}.\] (2.4)
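The incremental loss terms in (2.3) depend only on how the uncollateralized exposure changes when the new contract is added. A minimal Python sketch (hypothetical scalar inputs; positive exposure means the hedger owes) makes the netting effect explicit:

```python
# A minimal sketch of the incremental loss terms in (2.3): differences of the
# positive/negative parts of the uncollateralized exposure with and without
# the new contract.
def incremental_loss_terms(e_E, m_E, e_N, m_N, L_H, L_C):
    old = e_E - m_E                    # pre-existing uncollateralized exposure
    new = old + (e_N - m_N)            # exposure after adding the new contract
    own_default_term = L_H * (max(new, 0.0) - max(old, 0.0))     # multiplies 1_{tau = tau^H}
    cpty_default_term = L_C * (max(-new, 0.0) - max(-old, 0.0))  # multiplies 1_{tau = tau^C}
    return own_default_term, cpty_default_term

# Adding a trade of opposite sign nets the exposure down, so the incremental
# own-default term is negative and the counterparty term vanishes:
print(incremental_loss_terms(e_E=5.0, m_E=0.0, e_N=-3.0, m_N=0.0, L_H=0.6, L_C=0.6))
# -> (-1.8, 0.0)
```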
We assume that the hedger can access the (defaultable) zero coupon bonds of the hedger and the counterparty. Let \(S^{H}\) (resp. \(S^{C}\)) denote the defaultable bond of the hedger (resp. counterparty), where \(S^{H}\) and \(S^{C}\) follow the SDE:
\[\mathop{}\!\mathrm{d}S^{i}_{t}= r_{t}S^{i}_{t}\mathop{}\!\mathrm{d}t+\sigma^{i}_{t}S^{i}_{t} \mathop{}\!\mathrm{d}W_{t}-S^{i}_{t-}\mathop{}\!\mathrm{d}M^{i}_{t},~{}~{}i\in \{H,C\},\] (2.5)
where \((\sigma^{i})^{\top}\in\mathbb{R}^{n}\) are \(\mathbb{F}\)-progressively measurable processes. \(S^{1},\dots,S^{n}\) are used to hedge \(\textfrak{D}^{N}\), while \(S^{H}\) and \(S^{C}\) are used to hedge \(\Theta\). We define the \((n+2)\times n\) matrix \(\sigma\) as
\[\sigma\coloneqq\begin{bmatrix}\sigma^{1}\\ \vdots\\ \sigma^{n}\\ \sigma^{H}\\ \sigma^{C}\end{bmatrix}.\]
Note that the considered financial markets are complete, since there are \(n+2\) sources of randomness, \(W,\tau^{H},\tau^{C}\), and \(n+2\) traded assets \(S,S^{H},S^{C}\).
**Remark 2.7**.: _By (2.2), the underlying assets do not depend on default risk, which means that we do not deal with credit derivatives. For modeling with emphasis on contagion risk, readers can refer to Jiao et al. (34); Bo et al. (12); Brigo, Capponi and Pallavicini (15); Bo et al. (13)._
#### 2.2.3 Accounts and Hedging Strategy
In this section, we introduce several saving accounts and the hedger’s hedging strategy. In what follows, we denote
\[I\coloneqq\{1,2,\dots,n,H,C\}.\] (2.6)
For any \(i\in I\), let \(\eta^{S,i}\) denote the number of units of \(S^{i}\) held by the hedger, and we assume \(\eta^{S,i}\) is \(\mathbb{G}\)-predictable. We denote
\[\varphi\coloneqq (\eta^{S,1},\dots,\eta^{S,n},\eta^{S,H},\eta^{S,C}),\]
\[\pi^{i}\coloneqq \eta^{S,i}S^{i},~{}~{}i\in\{1,\dots,n\},\]
\[\pi^{i}\coloneqq \eta^{S,i}S^{i}_{-},~{}~{}i\in\{H,C\},\]
\[\pi\coloneqq (\pi^{1},\dots,\pi^{n},\pi^{H},\pi^{C}).\]
We call the \((n+2)\)-dimensional \(\mathbb{G}\)-predictable process \(\varphi\) the hedger’s hedging strategy.
**Remark 2.8**.: \(\varphi\) _is chosen to be \(\mathbb{G}\)-adapted only to describe an immediate action taken at default. We shall see later that on \([0,\tau)\), \(\varphi\) is \(\mathbb{F}\)-adapted._
If the collateral is pledged, the posting party is remunerated by the receiving party according to a certain interest rate. When \(m^{N}_{t}\geq 0\) (resp. \(<0\)), the counterparty (resp. hedger) pays the interest rate \(R^{m,\ell}_{t}\) (resp. \(R^{m,b}_{t}\)) at \(t\leq T\). We assume that the collateral is posted as cash and the interest rate is accrued to a margin account of the hedger. We denote the lending and borrowing accounts \(B^{m,\ell}\) and \(B^{m,b}\) respectively, i.e., \(B^{m,i}\), \(i\in\{\ell,b\}\), are given by
\[\mathop{}\!\mathrm{d}B^{m,i}_{t}=R^{m,i}_{t}B^{m,i}_{t}\mathop{} \!\mathrm{d}t.\] (2.7)
Let \(\eta^{m,\ell}\) (resp. \(\eta^{m,b}\)) denote the number of units of \(B^{m,\ell}\) (resp. \(B^{m,b}\)). Then the following equations hold:
\[\eta^{m,\ell}\geq 0,~{}~{}\eta^{m,b}<0,~{}~{}\eta^{m,\ell}\eta^{m ,b}=0,\] (2.8)
\[\eta^{m,\ell}B^{m,\ell}+\eta^{m,b}B^{m,b}=m^{N}.\] (2.9)
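As a small illustration of the margin-account constraints (2.8)-(2.9), the following sketch (function and variable names are ours) splits a given variation margin into units of the lending and borrowing margin accounts; only one of the two accounts is used at a time.

```python
# Sketch of the margin-account bookkeeping (2.8)-(2.9): the variation margin m_N is
# held in units of the lending account B_m_ell when it is positive and of the
# borrowing account B_m_b when it is negative; at most one account is used.

def margin_units(m_N, B_m_ell, B_m_b):
    """Return (eta_m_ell, eta_m_b) with eta_m_ell * B_m_ell + eta_m_b * B_m_b = m_N."""
    if m_N >= 0:
        return m_N / B_m_ell, 0.0
    return 0.0, m_N / B_m_b

eta_ell, eta_b = margin_units(-3.0, B_m_ell=1.02, B_m_b=1.03)
print(eta_ell, eta_b)                    # 0.0, approximately -2.913
print(eta_ell * 1.02 + eta_b * 1.03)     # recovers m_N = -3.0 (up to rounding)
```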
We assume that the _variation margin_\(m\) can be _rehypothecated_, i.e., \(m^{N}\) is used by the hedger to maintain her portfolio.
**Remark 2.9**.: _The margin account for \(m^{E}\) may be dealt with by other dealers, so it is not a part of the hedging portfolio._
Some underlying assets can be traded through repo markets. We denote the set of indices for which a repo market is available by \(\rho\subseteq I\coloneqq\{1,2,\dots,n,H,C\}\). We assume that the borrowing and lending repo market rates are the same, and for \(i\in\rho\), let \(R^{\rho,i}\) denote the repo rate. Moreover, for any \(i\in\rho\), let \(B^{\rho,i}\) denote the account that \(R^{\rho,i}\) accrues, i.e., \(B^{\rho,i}\) follows
\[\mathop{}\!\mathrm{d}B^{\rho,i}_{t}=R^{\rho,i}_{t}B^{\rho,i}_{t} \mathop{}\!\mathrm{d}t.\] (2.10)
For \(i\in\rho\), we denote the number of units of \(B^{\rho,i}\) by \(\eta^{\rho,i}\). Then it follows that for any \(i\in\rho\),
\[\eta^{\rho,i}B^{\rho,i}+\eta^{S,i}S^{i}=0.\] (2.11)
If the hedger has any surplus cash, she can earn the lending rate \(R^{\ell}\), while for borrowing money, she needs to pay the borrowing rate \(R^{b}\). For \(i\in\{\ell,b\}\), let \(B^{i}\) denote the hedger’s funding account and \(\eta^{i}\) denote the number of units of \(B^{i}\). Therefore, it follows that
\[\eta^{\ell}\geq 0,~{}~{}\eta^{b}<0,\] (2.12)
\[\mathop{}\!\mathrm{d}B^{i}_{t}=R^{i}_{t}B^{i}_{t}\mathop{}\! \mathrm{d}t,~{}~{}i\in\{\ell,b\}.\] (2.13)
### Hedger’s Incremental Hedging Portfolio
Now, we are almost ready to define the dealer's incremental portfolio. Another ingredient in defining the hedging portfolio is the incremental funding effect. This effect will be taken into account by imposing conditions on the hedging portfolio; the details follow the next definition.
**Definition 2.10**.: _If \(V=V(\varphi,\mathfrak{C})\), defined for \(t\in\mathbb{R}_{+}\) by_
\[V_{t}=\eta^{\ell}_{t}B^{\ell}_{t}+\eta^{b}_{t}B^{b}_{t}+\eta^{m, \ell}_{t}B^{m,\ell}_{t}+\eta^{m,b}_{t}B^{m,b}_{t}+\sum_{i\in I} \eta^{S,i}_{t}S^{i}_{t}+\sum_{i\in\rho}\eta^{\rho,i}_{t}B^{\rho,i }_{t},\] (2.14)
_satisfies_
\[V_{t}= V_{0}+\sum_{i=\ell,b}\int_{0}^{t\wedge\bar{\tau}} \eta^{i}_{s}\mathop{}\!\mathrm{d}B^{i}_{s}+\sum_{i=\ell,b}\int_{0 }^{t\wedge\bar{\tau}}\eta^{m,i}_{s}\mathop{}\!\mathrm{d}B^{m,i}_{s}+ \sum_{i\in I}\int_{0}^{t\wedge\bar{\tau}}\eta^{S,i}_{s}\mathop{} \!\mathrm{d}S^{i}_{s}\]
\[+\sum_{i\in\rho}\int_{0}^{t\wedge\bar{\tau}}\eta^{ \rho,i}_{s}\mathop{}\!\mathrm{d}B^{\rho,i}_{s}-\mathfrak{C}_{t\wedge\bar{\tau}},\] (2.15)
_for any \(t\in\mathbb{R}_{+}\), then \(V\) is called the hedger’s incremental hedging portfolio._
**Remark 2.11**.: _Note that by (2.3), \(\mathfrak{C}_{t}=\mathfrak{C}_{t\wedge\bar{\tau}}\), \(\forall t\geq 0\), and by (2.15), \(V_{t}=V_{t\wedge\bar{\tau}}\), \(\forall t\geq 0\)._
Our goal is to find a proper price charged to the counterparty and a hedging strategy \(\varphi\) satisfying Definition 2.10 and a certain terminal condition. We seek to impose the terminal condition so that an incremental funding effect is taken into account. The incremental funding effect means the difference between the funding cost/benefit of two choices: entering or not entering the new contract. To explain briefly why the incremental effect is necessary, consider a situation in which the dealer wants to enter a new contract that pushes the hedging portfolio into a borrowing state. If the bank had no business before the contract, the treasury department would finance the dealer at the borrowing rate, and the excessive borrowing cost might be charged to the counterparty. However, if the bank's initial position was in a lending state before the contract, the treasury department should consider the deduction of lending profit rather than an excessive borrowing cost. This activity can be viewed as the dealer borrowing and keeping the bank's initial portfolio to fund the hedging portfolio, and returning it to the treasury at maturity.
To explain the mathematical detail, let \(B^{\epsilon}\) denote the endowed bank’s portfolio without entering the new contract, for some \(\epsilon\in\mathbb{R}\) such that \(B^{\epsilon}_{0}=\epsilon\). We sometimes call \(B^{\epsilon}\)_legacy portfolio_. In reality, \(B^{\epsilon}\) is a massive combination of numerous portfolios. We assume that the _legacy portfolio_ is approximately risk-neutral and grows with respect to their funding rates. Therefore,
\[B^{\epsilon}_{t}\coloneqq \epsilon\exp{\Big{(}\int_{0}^{t}R^{\epsilon}_{s}\mathop{}\! \mathrm{d}s\Big{)}},~{}\forall t\in\mathbb{R}_{+},\]
\[R^{\epsilon}\coloneqq \mathds{1}_{\epsilon\geq 0}R^{\ell}+\mathds{1}_{\epsilon<0}R^{b},\]
and we denote \(s^{\epsilon}\coloneqq R^{\epsilon}-r\).
Now, let us think of \(V\) as the bank’s summed profit/loss and consider two cases. First, if the dealer does not enter the contract exchanging \(\mathfrak{C}\), the bank will have \(B^{\epsilon}_{\bar{\tau}}\) at \(\bar{\tau}\). Second, the dealer can enter the contract with a certain initial price, say \(p\in\mathbb{R}\), for the contract from the counterparty. Then, the bank’s initial wealth is \(\epsilon+p\), namely
\[V_{0}=\epsilon+p.\]
The dealer gains from investing in risky assets and accounts. These revenues are used to pay the cash-flows \(\mathfrak{C}\). Thus, the terminal value \(V_{\bar{\tau}}\) in (2.15) represents the hedging error between the investment and the cash-flows. Recall that we consider complete markets (there are \(n+2\) sources of randomness, \((W,\tau^{H},\tau^{C})\), and \(n+2\) hedging assets \(\pi\)). Therefore, we can expect to find a hedging strategy that replicates the cash-flows with null hedging error up to termination of the contract, either at maturity or at default. For example, if \(\epsilon=0\), we can expect to find \((V,\pi)\) such that \(V_{\bar{\tau}}=0\). However, since the opportunity cost for the bank is the profit/loss from the _legacy portfolio_, the portfolio value at \(\bar{\tau}\) should be compared with \(B^{\epsilon}_{\bar{\tau}}\), i.e., we seek to find \((V,\pi)\) such that
\[V_{\bar{\tau}}=B^{\epsilon}_{\bar{\tau}},\] (2.16)
and the dealer may want to charge
\[p=V_{0}-\epsilon\] (2.17)
to the counterparty. Therefore, by (2.15) together with (2.2), (2.5), (2.7), (2.8), (2.9), (2.10), (2.11), (2.13), we need to solve the following BSDE:
(2.18)
For now, we do not examine the existence and uniqueness of (2.18). The solvability will be examined by a reduced form of (2.18). The financial interpretation of each component in (2.18) will be provided later in the following section. Before the detail, we first discuss the incremental funding impacts by a simple example.
**Example 2.12** (Incremental FVA).: _Let \(n=1\), \(\rho=\emptyset\), \(\mathfrak{D}^{N}=\mathds{1}_{\llbracket T,\infty\llbracket}\xi B_{T}\) for some \(\xi\in\mathbb{L}^{2}_{T}\). We ignore default risks and set \(\pi^{H}=\pi^{C}=m^{N}=0\). In this case, \((V,\pi^{1})\) satisfies the corresponding special case of (2.18)._
_To consider a net profit/loss to the hedger without the legacy portfolio, consider_
\[v\coloneqq B^{-1}(V-B^{\epsilon}),\] (2.19)
_and we denote that \(\tilde{\pi}^{1}\coloneqq B^{-1}\pi^{1}\), \(\tilde{B}^{\epsilon}=B^{-1}B^{\epsilon}\). Then \((v,\tilde{\pi}^{1})\) is given by_
\[v_{t}=\xi-\int_{t}^{T}\Big{[}(v_{s}+\tilde{B}^{\epsilon}_{s}- \tilde{\pi}^{1}_{s})^{+}s^{\ell}_{s}-(v_{s}+\tilde{B}^{\epsilon}_{s}-\tilde{ \pi}^{1}_{s})^{-}s^{b}_{s}-s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s}\Big{]} \mathop{}\!\mathrm{d}s-\int_{t}^{T}\tilde{\pi}^{1}_{s}\sigma^{1}_{s}\mathop{} \!\mathrm{d}W_{s}.\] (2.20)
_Assuming there exists a unique solution \((v,\tilde{\pi}^{1})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2}_{T}\) of (2.20), for any \(t\geq 0\), we define_
\[\text{FBA}^{\Delta}_{t}\coloneqq \mathbb{E}\bigg{[}\int_{t}^{T}\big{[}(v_{s}+\tilde{B}^{\epsilon}_ {s}-\tilde{\pi}^{1}_{s})^{+}s^{\ell}_{s}-(s^{\epsilon}_{s}\tilde{B}^{\epsilon} _{s})^{+}\big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]},\]
\[\text{FCA}^{\Delta}_{t}\coloneqq \mathbb{E}\bigg{[}\int_{t}^{T}\big{[}(v_{s}+\tilde{B}^{\epsilon}_ {s}-\tilde{\pi}^{1}_{s})^{-}s^{b}_{s}-(s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s })^{-}\big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]}.\]
\(\text{FBA}^{\Delta}\) _(resp. \(\text{FCA}^{\Delta}\)) represents the incremental funding benefit (resp. cost) of entering the new contract. Notice that as \(\epsilon\) increases, \(v+\tilde{B}^{\epsilon}-\tilde{\pi}^{1}\) is more likely to be positive. Consider a case in which \(v-\tilde{\pi}^{1}\) is negative but \(v+\tilde{B}^{\epsilon}-\tilde{\pi}^{1}\) is non-negative. If we ignore the incremental effect, i.e., \(\epsilon=0\), the dealer should consider the increased funding cost. In the view of incremental effects, however, the deduction of the funding benefit should instead be included in the derivative price. The dealer also needs to consider the opportunity cost of not entering the new contract, e.g., if \(\epsilon\geq 0\), the lost (discounted) benefit at \(t\leq T\) would be_
\[\tilde{B}^{\epsilon}_{T}-\tilde{B}^{\epsilon}_{t}=\int_{t}^{T} \mathop{}\!\mathrm{d}\tilde{B}^{\epsilon}_{s}=\int_{t}^{T}s^{\epsilon}_{s} \tilde{B}^{\epsilon}_{s}\mathop{}\!\mathrm{d}s=\int_{t}^{T}(s^{\epsilon}_{s} \tilde{B}^{\epsilon}_{s})^{+}\mathop{}\!\mathrm{d}s.\] (2.21)
_The difference between the two impacts is the actual net benefit and cost, \(\text{FBA}^{\Delta}\) and \(\text{FCA}^{\Delta}\), that should be charged to the counterparty. Indeed, by (2.17), (2.19) and (2.20), the dealer would want to charge_
\[p=\mathbb{E}[\xi]-\text{FBA}^{\Delta}_{0}+\text{FCA}^{\Delta}_{0}.\] (2.22)
_In addition, if \(R^{\ell}=R^{b}\), (2.20) becomes_
\[v_{t}=\xi-\int_{t}^{T}(v_{s}-\tilde{\pi}^{1}_{s})s^{\ell}_{s} \mathop{}\!\mathrm{d}s-\int_{t}^{T}\tilde{\pi}^{1}_{s}\sigma^{1}_{s}\mathop{} \!\mathrm{d}W_{s}.\] (2.23)
_Thus, under linear funding models, \(v\) does not depend on \(B^{\epsilon}\)._
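The funding asymmetry discussed in Example 2.12 can be illustrated with a deliberately crude one-period computation (our own simplification; function names are ours and this is not the continuous-time model above): the incremental funding cost of a trade that needs cash depends on whether the legacy position \(\epsilon\) turns a borrowing need into forgone lending profit.

```python
# One-period toy illustration of the incremental funding effect of Example 2.12:
# the bank holds a legacy cash position eps, the new trade requires `need` of funding.

def aggregated_funding_cost(eps, need, s_ell, s_b):
    """One-period funding cost of the combined position eps - need:
    a surplus earns the lending spread s_ell (a benefit, counted negative),
    a shortfall pays the borrowing spread s_b."""
    net = eps - need
    return -s_ell * max(net, 0.0) + s_b * max(-net, 0.0)

def incremental_funding_cost(eps, need, s_ell, s_b):
    """Incremental cost = aggregated cost minus the legacy cost/benefit without the trade."""
    legacy = -s_ell * max(eps, 0.0) + s_b * max(-eps, 0.0)
    return aggregated_funding_cost(eps, need, s_ell, s_b) - legacy

s_ell, s_b = 0.01, 0.03
# With no legacy position the shortfall is financed at the borrowing spread:
print(incremental_funding_cost(0.0, 10.0, s_ell, s_b))    # approximately 0.30
# With eps = 100 the same trade merely forgoes lending profit on 10:
print(incremental_funding_cost(100.0, 10.0, s_ell, s_b))  # approximately 0.10
```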
In what follows, for simplicity, we impose some realistic assumptions on interest rates, and the endowed processes, \(e^{E}\) and \(m^{E}\). In practice, \(R^{m,i}\), \(i\in\{\ell,b\}\), are chosen as Federal funds or EONIA rates, approximately \(r\). In addition, the difference between OIS and repo market rates can be interpreted as the liquidity premium of the repo markets. We assume the repo markets are liquid enough for the difference to be small. Moreover, we assume that OIS rate, \((r_{t})_{t\geq 0}\), is the smallest among all interest rates. In addition, \(e^{E}\) and \(m^{E}\) are given before the new contract, so they are chosen exogenously, i.e., they do not depend on \(V\). These assumptions are summarized as follows.
**Assumption 2.13**.:
1. \(R^{\rho,i}=R^{m,\ell}=R^{m,b}=r\)_, for_ \(i\in\rho\)_._
2. \(R^{\ell}\geq r\) _and_ \(R^{b}\geq r\)_._
3. \(e^{E}\) _and_ \(m^{E}\) _are exogenous processes._
**Remark 2.14**.: _The assumption on repo market rates given in Assumption 2.13-(i) is merely for simplicity in representing (2.18). Mathematically, it does not play any crucial role._
Recall that we have not yet specified the amount of close-out \(e^{N}\) in \(\mathfrak{C}\). In the next section, two important close-out conventions are introduced: _clean price_ and _replacement cost_. _Clean price_ is basically the risk-neutral price of \(\mathfrak{D}^{N}\). By using the SDE that the _clean price_ follows, we can remove \(\mathfrak{D}^{N}\) from \(\mathfrak{C}\). After the elimination of \(\mathfrak{D}^{N}\), the remaining cash-flow is \(\Theta\), which is exchanged only at the single time \(\tau\). Thus, by subtracting the _clean price_ from \(V\), we can recover a standard BSDE with \((\mathbb{G},\mathbb{Q})\)-martingales, \(W\) and \(M^{i}\), \(i\in\{H,C\}\). Then we further reduce the BSDE to a BSDE driven only by the Brownian motion \(W\) by the typical argument of filtration reduction.
#### 2.3.1 Close-out Conventions
When the contract terminates earlier than the contractual maturity by one party's default, the defaulting party should settle the close-out amount. The close-out amount is calculated by a determining party which will act in good faith (ISDA, 33). As stated in ISDA (33, p.15), “the Determining Party may consider any relevant information, including, without limitation, one or more of the following types of information: quotations (either firm or indicative) for replacement transactions supplied by one or more third parties that may take into account the creditworthiness of the Determining Party at the time the quotation is provided and the terms of any relevant documentation …”. The statement leaves some room for interpretation. ISDA recommends considering the creditworthiness of the surviving party, but it is not mandatory. Moreover, it is unclear whether funding costs should be considered in the replacement transaction. Therefore, we consider two close-out conventions: _clean price_ and _replacement cost_. _Clean price_ is the risk-neutral price without XVA. The close-out with _clean price_ is often called _clean close-out_ or _risk-free close-out_. Let \(p^{N}\) denote the _clean price_, i.e.,
\[p^{N}_{t}\coloneqq B_{t}\mathbb{E}\bigg{(}\int_{t}^{T}B_{s}^{-1}\mathop{}\!\mathrm{d}\mathfrak{D}^{N}_{s}\bigg{|}\mathcal{F}_{t}\bigg{)},~{}~{}\forall t\in\mathbb{R}_{+}.\] (2.24)
_Clean close-out_ (or _risk-free close-out_) has often been chosen in the literature (for example, Crépey (26, 25)).
**Remark 2.15**.: Bichuch et al. _(_5_)_ _also considered the expected value of the discounted cash-flows in the absence of default risk, but in the calculation, they assumed flexibility in choosing the discounting rate and probability measure. Indeed, they chose the pricing measure as an equivalent probability measure such that \((B^{i})^{-1}S\), \(i\in\{\ell,b\}\), are martingales. In other words, the amount is “clean price\(+\)FVA”. This choice is made to avoid a mismatch of pricing measures and to obtain a closed-form solution. We will explain this point later with examples._
_Replacement cost_ (or _replacement close-out_) means the price under XVA. In this case, it is not clear which funding rate should be chosen. We assume that the replacing party has similar credit spreads to the hedger. Recall that \(V-B^{\epsilon}\) is the value for calculating the derivative price in view of the hedger. In other words, one may want to choose the value of \(V-B^{\epsilon}\) at the default for the _replacement cost_. However, at the default, there is a jump in \(V\) by \(\pi^{i}\), \(i\in\{H,C\}\). Moreover, \(\pi^{i}\) is retained in \(V\) for the default risk, i.e., the close-out payment. Therefore, if we take
\[e^{N}_{\tau}= V_{\tau}-B^{\epsilon}_{\tau}=V_{\tau-}-B^{\epsilon}_{\tau}+ \Delta V_{\tau}\]
\[= V_{\tau-}-B^{\epsilon}_{\tau}-\mathds{1}_{\tau=\tau^{H}}\pi^{H}_{\tau}-\mathds{1}_{\tau=\tau^{C}}\pi^{C}_{\tau},\]
the defaulting party would pay the remainder after the deduction of the same amount, basically nothing. Hence, for _replacement close-out_, we should set
\[e^{N}_{\tau}=V_{\tau-}-B^{\epsilon}_{\tau}.\] (2.25)
In both conventions, we assume that the collateral is a portion of the close-out amount. More precisely, for \(0\leq L^{m}\leq 1\) (margin loss),
\[m^{N}=(1-L^{m})e^{N}.\] (2.26)
Note that (2.26) is consistent with our financial modeling. Indeed, if there is a \(\mathbb{G}\)-adapted process satisfying (2.18), in our filtration setup \(\mathbb{G}\), there is an \(\mathbb{F}\)-adapted process \(V^{\mathbb{F}}\) such that
\[V=\mathds{1}_{\llbracket 0,\bar{\tau}\llbracket}V^{\mathbb{F}}.\]
Therefore, under _replacement close-out_, the margin process
\[m^{N}= (1-L^{m})e^{N}=(1-L^{m})(V_{-}-B^{\epsilon})\]
\[= (1-L^{m})(V^{\mathbb{F}}_{-}-B^{\epsilon})\]
is \(\mathbb{F}\)-adapted before \(\bar{\tau}\). In what follows, we assume that the endowed margin also follows the same convention as (2.26), i.e.,
\[m^{E}=(1-L^{m})e^{E}.\] (2.27)
**Remark 2.16**.:
1. _The two close-out conventions have pros and cons in financial modeling. Clean price may be disadvantageous to the defaulting party because the default risk of the surviving party is not considered. However, the surviving party's default risk can be negatively affected by the default, especially when the defaulting party has a systemic impact. In that case, replacement close-out may heavily penalize the surviving party. Readers can refer to_ Brigo and Morini _(_19_)_ _for the comparison._
2. _A similar collateral convention was discussed by_ Burgard and Kjaer _(_20_)__. For a BSDE approach to general endogenous collateral, readers can refer to_ Nie and Rutkowski _(_40_)__._
Before further argument, we provide a lemma on properties of _clean price_\(p^{N}\). The following lemma will be used to present an XVA process and simplify the representation of the amount of cash-flow at default \(\Theta\). Readers may want to recall (2.24), the definition of \(p^{N}\), before the following lemma.
**Lemma 2.17**.:
1. \(p^{N}_{T}=0\)_._
2. \(p^{N}_{\bar{\tau}}=\mathds{1}_{\tau\leq T}p^{N}_{\tau}\)_._
3. \(\mathop{}\!\mathrm{d}p^{N}_{t}=r_{t}p^{N}_{t}\mathop{}\!\mathrm{d}t+B_{t}(Z^{N}_{t})^{\top}\mathop{}\!\mathrm{d}W_{t}-\mathop{}\!\mathrm{d}\mathfrak{D}^{N}_{t}\)_,_ \(\forall t\leq T\)_, for some_ \(Z^{N}\in\mathbb{H}^{2,n}_{T,loc}\)_._
4. \(p^{N}_{\tau-}=p^{N}_{\tau}\) _almost surely._
Proof.: (i) is from the definition and (ii) is obtained directly from (i). For (iii), notice that \(B^{-1}p^{N}+\int_{0}^{\cdot}B^{-1}_{s}\mathop{}\!\mathrm{d}\mathfrak{D}^{N}_{s}\) is an \((\mathbb{F},\mathbb{Q})\)-local martingale. Thus, by the (local) martingale representation property, there exists \(Z^{N}\in\mathbb{H}^{2,n}_{T,loc}\) such that for any \(t\),
\[B_{t}^{-1}p^{N}_{t}+\int_{0}^{t}B^{-1}_{s}\mathop{}\!\mathrm{d}\mathfrak{D}^{N}_{s}=\int_{0}^{t}(Z^{N}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s}.\]
Therefore, \(p^{N}_{t}\) follows the SDE:
\[\mathop{}\!\mathrm{d}p^{N}_{t}= r_{t}p^{N}_{t}\mathop{}\!\mathrm{d}t+B_{t}(Z^{N}_{t})^{\top}\mathop{}\!\mathrm{d}W_{t}-\mathop{}\!\mathrm{d}\mathfrak{D}^{N}_{t}.\] (2.28)
By (iii), \(p^{N}\) is an \(\mathbb{F}\)-adapted càdlàg process, but \(\tau\) avoids any \(\mathbb{F}\)-stopping time. Thus, \(\Delta p^{N}_{\tau}=0\) almost surely, equivalently \(p^{N}_{\tau-}=p^{N}_{\tau}\) a.s. ∎
**Remark 2.18**.: _Note that \(Z^{N}\) is the (discounted) delta risk of clean price. For example, consider \(n=1\) and a stock forward contract with exercise price \(K\), and assume \(r,~{}\sigma^{1}\) are deterministic. Then \(B^{-1}p^{N}=B^{-1}S^{1}-B_{T}^{-1}K\). Thus, for \(t<T\),_
\[Z^{N}_{t}=D_{t}(B^{-1}_{t}p^{N}_{t})=\sigma^{1}_{t}B^{-1}_{t}S^{ 1}_{t}.\]
Recall that \(e^{E}\) and \(B^{\epsilon}\) are \(\mathbb{F}\)-adapted, so they do not jump at \(\tau\), i.e., \(e^{E}_{\tau}=e^{E}_{\tau-}\) and \(B^{\epsilon}_{\tau}=B^{\epsilon}_{\tau-}\), a.s. Then, by (2.4) together with (iv) in Lemma2.17, under _clean close-out_, a.s,
\[\Theta_{t}= \mathds{1}_{\tau=\tau^{H}\leq t}\big{[}p^{N}_{\tau-}-L^{H}L^{m} \big{(}(p^{N}_{\tau-}+e^{E}_{\tau-})^{+}-(e^{E}_{\tau-})^{+}\big{)}\big{]}\]
\[+\mathds{1}_{\tau=\tau^{C}\leq t}\big{[}p^{N}_{\tau-}+L^{C}L^{m} \big{(}(p^{N}_{\tau-}+e^{E}_{\tau-})^{-}-(e^{E}_{\tau-})^{-}\big{)}\big{]}.\] (2.29)
On the other hand, under _replacement close-out_,
\[\Theta_{t}= \mathds{1}_{\tau=\tau^{H}\leq t}\big{[}V_{\tau-}-B^{\epsilon}_{ \tau-}-L^{H}L^{m}\big{(}(V_{\tau-}-B^{\epsilon}_{\tau-}+e^{E}_{\tau-})^{+}-(e^ {E}_{\tau-})^{+}\big{)}\big{]}\]
\[+\mathds{1}_{\tau=\tau^{C}\leq t}\big{[}V_{\tau-}-B^{\epsilon}_{ \tau-}+L^{C}L^{m}\big{(}(V_{\tau-}-B^{\epsilon}_{\tau-}+e^{E}_{\tau-})^{-}-(e^ {E}_{\tau-})^{-}\big{)}\big{]}.\] (2.30)
#### 2.3.2 Incremental XVA process
We can remove \(\mathfrak{D}^{N}\) in (2.18) by using (iii) in Lemma 2.17. To this end, we introduce an incremental XVA process. In both close-out conventions, we will deal with the XVA process instead of (2.18). The XVA process is defined by the discounted difference between \(V-B^{\epsilon}\) and \(p^{N}\). Let \(X\) denote the (incremental) XVA process, i.e.,
\[X\coloneqq B^{-1}[V-B^{\epsilon}-(p^{N})^{\bar{\tau}}].\]
Moreover, let \(\tilde{p}^{N}\coloneqq B^{-1}p^{N}\), \(\tilde{\pi}\coloneqq B^{-1}\pi\), \(c\coloneqq B^{-1}m^{N}\), \(\tilde{\Theta}\coloneqq B^{-1}\Theta\), \(\tilde{B}^{\epsilon}\coloneqq B^{-1}B^{\epsilon}\), and
\[s^{i}\coloneqq R^{i}-r,~{}~{}i\in\{\ell,b\}.\]
Note that \(s^{\ell}\) (resp. \(s^{b}\)) represents the lending (resp. borrowing) spread of the hedger. We can easily check that for \(t\geq 0\), \(\mathop{}\!\mathrm{d}\tilde{p}^{N}_{t\wedge\bar{\tau}}=\mathds{1}_{t\leq\bar{ \tau}}\mathop{}\!\mathrm{d}\tilde{p}^{N}_{t}\). Assuming there exists \((V,\pi)\) satisfying (2.18), by applying Itô’s formula to \(X\), we attain that for \(t\leq\bar{\tau}\), \((X,\tilde{\pi})\) follows
(2.31)
Note that under _replacement close-out_,
\[\tilde{\Theta}_{t}= \mathds{1}_{\tau\leq t}\tilde{p}^{N}_{\tau-}+\mathds{1}_{\tau=\tau^{H}\leq t}\big{[}X_{\tau-}-L^{H}L^{m}\big{(}(X_{\tau-}+\tilde{p}^{N}_{\tau-}+\tilde{e}^{E}_{\tau-})^{+}-(\tilde{e}^{E}_{\tau-})^{+}\big{)}\big{]}\]
\[+\mathds{1}_{\tau=\tau^{C}\leq t}\big{[}X_{\tau-}+L^{C}L^{m}\big{(}(X_{\tau-}+\tilde{p}^{N}_{\tau-}+\tilde{e}^{E}_{\tau-})^{-}-(\tilde{e}^{E}_{\tau-})^{-}\big{)}\big{]},\quad\text{where }\tilde{e}^{E}\coloneqq B^{-1}e^{E},\]
while under _clean close-out_, \(\tilde{\Theta}\) is independent of \(X\). In both cases, we denote
\[\Theta(X_{-})\coloneqq\Theta.\]
Moreover, we define \(\Theta^{H}(X_{-})\) and \(\Theta^{C}(X_{-})\) such that
\[\tilde{\Theta}_{\bar{\tau}}-\tilde{p}^{N}_{\bar{\tau}}=-\mathds{1}_{\tau=\tau^{H}\leq T}\big{(}\tilde{\Theta}^{H}_{\tau}(X_{\tau-})-L^{H}L^{m}(\tilde{e}^{E}_{\tau-})^{+}\big{)}+\mathds{1}_{\tau=\tau^{C}\leq T}\big{(}\tilde{\Theta}^{C}_{\tau}(X_{\tau-})-L^{C}L^{m}(\tilde{e}^{E}_{\tau-})^{-}\big{)},\]
where \(\tilde{\Theta}^{i}\coloneqq B^{-1}\Theta^{i}\), \(i\in\{H,C\}\). For example, under _replacement close-out_,
\[\tilde{\Theta}^{H}_{t}(X_{t-})= -X_{t-}+L^{H}L^{m}(X_{t-}+\tilde{p}^{N}_{t-}+\tilde{e}^{E}_{t-})^ {+},\] (2.32)
\[\tilde{\Theta}^{C}_{t}(X_{t-})= X_{t-}+L^{C}L^{m}(X_{t-}+\tilde{p}^{N}_{t-}+\tilde{e}^{E}_{t-})^ {-}.\] (2.33)
while under _clean close-out_,
\[\tilde{\Theta}^{H}_{t}(X_{t-})= L^{H}L^{m}(\tilde{p}^{N}_{t-}+\tilde{e}^{E}_{t-})^{+},\]
\[\tilde{\Theta}^{C}_{t}(X_{t-})= L^{C}L^{m}(\tilde{p}^{N}_{t-}+\tilde{e}^{E}_{t-})^{-}.\]
In the case of _clean close-out_, \(\tilde{\Theta}^{i}\), \(i\in\{H,C\}\), represent the amount of breach of the contract. Recall \(B\) and \(p^{N}\) are independent of \(V\). Therefore, showing the existence and uniqueness of the hedger’s hedging portfolio and hedging strategy, \((V,\pi)\), reduces to investigating the BSDE of the XVA process (2.31). Before examining the solvability, assuming the existence and integrability, we define each component in the incremental XVA and give some remarks on them.
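As a quick illustration, the following minimal sketch (helper names ours) evaluates the discounted close-out exposures of (2.32)-(2.33) and their clean close-out counterparts, making explicit that only the replacement-cost versions depend on the XVA level \(X_{-}\).

```python
# Minimal sketch of the discounted close-out exposures (helper names ours):
# replacement close-out, as in (2.32)-(2.33), versus clean close-out.

def theta_H_replacement(X_minus, p_N, e_E, L_H, L_m):
    return -X_minus + L_H * L_m * max(X_minus + p_N + e_E, 0.0)

def theta_C_replacement(X_minus, p_N, e_E, L_C, L_m):
    return X_minus + L_C * L_m * max(-(X_minus + p_N + e_E), 0.0)

def theta_H_clean(p_N, e_E, L_H, L_m):
    return L_H * L_m * max(p_N + e_E, 0.0)

def theta_C_clean(p_N, e_E, L_C, L_m):
    return L_C * L_m * max(-(p_N + e_E), 0.0)

# Under clean close-out the exposure is independent of the XVA level X_-,
# under replacement close-out it is not:
print(theta_H_clean(p_N=4.0, e_E=1.0, L_H=0.6, L_m=0.5))                      # 1.5
print(theta_H_replacement(X_minus=-0.8, p_N=4.0, e_E=1.0, L_H=0.6, L_m=0.5))  # approximately 2.06
```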
**Definition 2.19**.: _Assume that there exists \((X,\tilde{\pi}\)) satisfying (2.31). Then for \(t<\bar{\tau}\), we define adjustment processes: FCA, FBA, CVA, DVA, and incremental adjustment processes: \(\emph{FBA}^{\Delta}\), \(\emph{FCA}^{\Delta}\), \(\emph{DVA}^{\Delta}\), \(\emph{CVA}^{\Delta}\) processes as follows:_
\[\emph{FCA}_{t}\coloneqq \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}\big{(}X_{s}+\tilde{p}^{N} _{s}+\tilde{B}^{\epsilon}_{s}-c_{s}-\sum_{i\in I\setminus\rho} \tilde{\pi}^{i}_{s}\big{)}^{-}s^{b}_{s}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{ G}_{t}\bigg{]},\] (2.34)
\[\emph{FBA}_{t}\coloneqq \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}\big{(}X_{s}+\tilde{p}^{N} _{s}+\tilde{B}^{\epsilon}_{s}-c_{s}-\sum_{i\in I\setminus\rho} \tilde{\pi}^{i}_{s}\big{)}^{+}s^{\ell}_{s}\mathop{}\!\mathrm{d}s\bigg{|} \mathcal{G}_{t}\bigg{]},\] (2.35)
\[\emph{DVA}_{t}\coloneqq \mathbb{E}\bigg{[}\mathds{1}_{\bar{\tau}=\tau=\tau^{H}}\tilde{ \Theta}^{H}_{\tau}(X_{\tau-})\bigg{|}\mathcal{G}_{t}\bigg{]},\] (2.36)
\[\emph{CVA}_{t}\coloneqq \mathbb{E}\bigg{[}\mathds{1}_{\bar{\tau}=\tau=\tau^{C}}\tilde{ \Theta}^{C}_{\tau}(X_{\tau-})\bigg{|}\mathcal{G}_{t}\bigg{]},\] (2.37)
_and denoting \(\mathcal{O}_{t}\coloneqq\mathbb{E}\big{[}\int_{t}^{\bar{\tau}}s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s}\mathop{}\!\mathrm{d}s\big{|}\mathcal{G}_{t}\big{]}\),_
\[\emph{FCA}^{\Delta}_{t}\coloneqq \emph{FCA}_{t}-\mathcal{O}_{t}^{-},\] (2.38)
\[\emph{FBA}^{\Delta}_{t}\coloneqq \emph{FBA}_{t}-\mathcal{O}_{t}^{+},\] (2.39)
\[\emph{DVA}^{\Delta}_{t}\coloneqq \emph{DVA}_{t}-\mathbb{E}\Big{[}\int_{t}^{\bar{\tau}}L^{H}L^{m}( \tilde{e}^{E}_{s})^{+}\mathop{}\!\mathrm{d}s\Big{|}\mathcal{G}_{t}\Big{]},\] (2.40)
\[\emph{CVA}^{\Delta}_{t}\coloneqq \emph{CVA}_{t}-\mathbb{E}\Big{[}\int_{t}^{\bar{\tau}}L^{C}L^{m}( \tilde{e}^{E}_{s})^{-}\mathop{}\!\mathrm{d}s\Big{|}\mathcal{G}_{t}\Big{]},\] (2.41)
_where (2.34)-(2.41) are well-defined. In this case, we also define_
\[\emph{FVA}\coloneqq \emph{FCA}-\emph{FBA},\]
\[\emph{FVA}^{\Delta}\coloneqq \emph{FCA}^{\Delta}-\emph{FBA}^{\Delta}.\]
**Remark 2.20**.:
1. _Assume that the local-martingales in (_2.31_) are true martingales. Then_ \[X= \text{FVA}^{\Delta}-\text{DVA}^{\Delta}+\text{CVA}^{\Delta}\] \[= \text{FCA}^{\Delta}-\text{FBA}^{\Delta}-\text{DVA}^{\Delta}+\text {CVA}^{\Delta}.\] (2.42)
2. \(\mathcal{O}\) _is the opportunity cost of not entering the new contract. In addition, FCA and FBA are the aggregated funding cost and benefit together with the legacy portfolio. Recalling the definitions_ \[\text{FCA}^{\Delta}= \text{FCA}-\mathcal{O}^{-},\] \[\text{FBA}^{\Delta}= \text{FBA}-\mathcal{O}^{+},\] _the incremental funding impacts are the differences between the aggregated funding adjustments and the opportunity cost._
3. _Under replacement close-out,_ \[\text{FCA}^{\Delta}_{t}= \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}\Big{[}\big{(}L^{m}X_{s}+L ^{m}\tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-\sum_{i\in I \setminus\rho}\tilde{\pi}^{i}_{s}\big{)}^{-}s^{b}_{s}-(s^{\epsilon}_{s}\tilde{ B}^{\epsilon}_{s})^{-}\Big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{G}_{t} \bigg{]},\] (2.43) \[\text{FBA}^{\Delta}_{t}= \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}\Big{[}\big{(}L^{m}X_{s}+L ^{m}\tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-\sum_{i\in I \setminus\rho}\tilde{\pi}^{i}_{s}\big{)}^{+}s^{\ell}_{s}-(s^{\epsilon}_{s} \tilde{B}^{\epsilon}_{s})^{+}\Big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{G}_ {t}\bigg{]},\] (2.44) _while under clean close-out,_ \[\text{FCA}^{\Delta}_{t}= \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}\Big{[}\big{(}X_{s}+L^{m} \tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-\sum_{i\in I\setminus \rho}\tilde{\pi}^{i}_{s}\big{)}^{-}s^{b}_{s}-(s^{\epsilon}_{s}\tilde{B}^{ \epsilon}_{s})^{-}\Big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{G}_{t}\bigg{]},\] (2.45) \[\text{FBA}^{\Delta}_{t}= \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}\Big{[}\big{(}X_{s}+L^{m} \tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-\sum_{i\in I\setminus \rho}\tilde{\pi}^{i}_{s}\big{)}^{+}s^{\ell}_{s}-(s^{\epsilon}_{s}\tilde{B}^{ \epsilon}_{s})^{+}\Big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{G}_{t}\bigg{]}.\] (2.46) _It is often stated that there is no FVA when contracts are fully collateralized. Indeed, European Banking Authority (EBA) requires banks to assess FCA and FBA for derivatives that “are not strongly collateralized”_ _(see_ Cameron_,_ 22_)__. To examine this, assume full collateralization, i.e.,_ \(L^{m}=0\)_, replacement close-out, and_ \(\rho=I\)_. The condition of full repo markets is commonly chosen in literature. Moreover, when_ \(R^{\ell}\geq r\) _and_ \(R^{b}\geq r\)_, we can see that_ \(\text{FCA}^{\Delta}=\text{FBA}^{\Delta}=0\) _from (_2.43_) and (_2.44_). Therefore, based on our model,_ \(\text{FVA}^{\Delta}\) _is not necessary when the close-out amount is the replacement cost and all repo markets are fully liquid. However, even in full collateralization,_ \(\text{FVA}^{\Delta}\) _still exists_ _under clean close-out. This is one of the reasons why replacement close-out should be discussed._
4. _Because of FVA, the BSDE of XVA (_2.31_) becomes semi-linear. When_ \(s^{\ell}\) _and_ \(s^{b}\) _are bounded, the generator is uniformly Lipschitz, so the existence and uniqueness are not hard to obtain. However, we need to solve the BSDE numerically, and this can be costly when a large netting set and a long maturity are considered. There have been several attempts to obtain a closed-form solution. However, for the closed-form solution, it was necessary to assume_ \(s^{\ell}=s^{b}\) _so that one can recover a linear equation as in_ Piterbarg _(_41_)_; Brigo et al. _(_14_)_; Bichuch et al. _(_5_)__. At that stage, it was an assumption, but we will show that we do not need such an assumption by proving that FVA is either_ _FCA_ _or_ \(-\text{FBA}\)_, by obtaining either_ \[X_{t}+\tilde{p}^{N}_{t}+\tilde{B}^{\epsilon}_{t}-c_{t}-\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}_{t}\geq 0\] (2.47) _or_ \[X_{t}+\tilde{p}^{N}_{t}+\tilde{B}^{\epsilon}_{t}-c_{t}-\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}_{t}\leq 0\] (2.48) _for many derivative contracts. Then the BSDE becomes linear regardless of the assumption_ \(s^{\ell}=s^{b}\)_, because one of the spreads does not play any role in solving the BSDE._
5. _Considering different lending/borrowing rates not only makes the BSDEs for replication pricing semi-linear but also makes the associated Hamiltonians non-smooth in optimal investment problems_ _(see_ Bo_,_ 10; Bo and Capponi_,_ 11; Yang et al._,_ 44_, for example)__._
6. _For modeling incremental XVA with capital value adjustment and initial margin, readers may want to refer to_ Albanese et al. _(_3_)__._
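Referring back to item 3 of Remark 2.20: under replacement close-out with \(L^{m}=0\) and \(\rho=I\), the integrands of (2.43) and (2.44) vanish pointwise, because the legacy portfolio accrues exactly the rate that defines \(s^{\epsilon}\). The following sketch (our own numerical check, with our own variable names) evaluates both integrands on a small grid.

```python
# Numerical check of Remark 2.20-(iii): under replacement close-out with full
# collateralization (L_m = 0) and rho = I, the integrands of FCA^Delta in (2.43)
# and of FBA^Delta in (2.44) vanish pointwise.

def pos(x): return max(x, 0.0)
def neg(x): return max(-x, 0.0)

def fca_integrand(B_eps, s_ell, s_b):
    s_eps = s_ell if B_eps >= 0 else s_b            # R_eps = R_ell if eps >= 0, else R_b
    return neg(B_eps) * s_b - neg(s_eps * B_eps)    # (2.43) with L_m = 0, empty sum over I\rho

def fba_integrand(B_eps, s_ell, s_b):
    s_eps = s_ell if B_eps >= 0 else s_b
    return pos(B_eps) * s_ell - pos(s_eps * B_eps)  # (2.44) with L_m = 0, empty sum over I\rho

for B_eps in (-5.0, -0.1, 0.0, 0.1, 5.0):
    for s_ell, s_b in ((0.01, 0.03), (0.0, 0.05)):
        assert abs(fca_integrand(B_eps, s_ell, s_b)) < 1e-12
        assert abs(fba_integrand(B_eps, s_ell, s_b)) < 1e-12
print("integrands of (2.43) and (2.44) vanish when L_m = 0 and rho = I")
```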
**Remark 2.21**.: _It is worth mentioning how FVA and DVA are related. To see this, we briefly review the results of Burgard and Kjaer (20, 21). Let \(n=1\), \(\rho=\{1,C\}\), \(L^{m}=1\), \(e^{N}=p^{N}\). As XVA was not incremental in Burgard and Kjaer (20, 21), in this example, we also set \(e^{E}=0\) and \(\epsilon=0\)._
_First, we guess the amount of \(\tilde{\pi}^{H}\). At \(\tau=\tau^{H}\leq T\), \(X_{\tau^{H}}=-L^{H}(\tilde{p}^{N}_{\tau^{H}-})^{+}\). Therefore,_
\[\Delta X_{\tau^{H}}=-X_{\tau^{H}-}+\tilde{\pi}^{H}_{\tau^{H}}-L^{ H}(\tilde{p}^{N}_{\tau^{H}-})^{+}.\]
_The hedger may want to hedge the jump risk at \(\tau^{H}\) using \(\tilde{\pi}^{H}\) so that \(\Delta X_{\tau^{H}}=0\), i.e., the hedger may choose_
\[\tilde{\pi}^{H}=X_{-}+L^{H}(\tilde{p}^{N})^{+}.\] (2.49)
_Indeed, it will be shown later that (2.49) is the right choice. For now, we accept (2.49)._
_Assuming the local martingales in (2.31) are true martingales, by Definition2.19, for \(t<\bar{\tau}\),_
\[\text{DVA}_{t}= \mathbb{E}\bigg{[}\mathds{1}_{\bar{\tau}=\tau=\tau^{H}}L^{H}(\tilde{p}^{N}_{\tau})^{+}\bigg{|}\mathcal{G}_{t}\bigg{]},\]
\[\text{FCA}_{t}= \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}s^{b}_{s}\big{[}\tilde{p}^ {N}_{s}-L^{H}(\tilde{p}^{N}_{s})^{+}\big{]}^{-}\mathop{}\!\mathrm{d}s\bigg{|} \mathcal{G}_{t}\bigg{]},\]
\[\text{FBA}_{t}= \mathbb{E}\bigg{[}\int_{t}^{\bar{\tau}}s^{\ell}_{s}\big{[}\tilde{p}^{N}_{s}-L^{H}(\tilde{p}^{N}_{s})^{+}\big{]}^{+}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{G}_{t}\bigg{]}.\]
_However, since \(L^{H}\leq 1\),_
\[[\tilde{p}^{N}-L^{H}(\tilde{p}^{N})^{+}]^{-}= (\tilde{p}^{N})^{-},\]
\[\big{[}\tilde{p}^{N}-L^{H}(\tilde{p}^{N})^{+}\big{]}^{+}= (\tilde{p}^{N})^{+}.\]
_Then by Lemma2.2,_
\[\text{DVA}_{t}= \mathbb{E}\bigg{[}\int_{t}^{T}G_{s}h^{H}_{s}L^{H}(\tilde{p}^{N}_{ s})^{+}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]},\]
\[\text{FCA}_{t}= \mathbb{E}\bigg{[}\int_{t}^{T}G_{s}s^{b}_{s}(\tilde{p}^{N}_{s})^{ -}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]},\]
\[\text{FBA}_{t}= \mathbb{E}\bigg{[}\int_{t}^{T}G_{s}s^{\ell}_{s}(\tilde{p}^{N}_{s} )^{+}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]}.\]
_If we assume that the hedger's borrowing rate is higher than \(r\) only because of her own default risk, i.e., there is no liquidity premium, then we can approximate the hedger's borrowing spread as_
\[s^{b}=h^{H}L^{H}.\]
_With this assumption, FCA becomes a counterpart of DVA, i.e.,_
\[\text{DVA}_{t}= \int_{t}^{T}G_{s}h^{H}_{s}L^{H}\mathbb{E}[(\tilde{p}^{N}_{s})^{+} |\mathcal{F}_{t}]\mathop{}\!\mathrm{d}s,\] (2.50)
\[\text{FCA}_{t}= \int_{t}^{T}G_{s}h^{H}_{s}L^{H}\mathbb{E}[(\tilde{p}^{N}_{s})^{-} |\mathcal{F}_{t}]\mathop{}\!\mathrm{d}s.\] (2.51)
_The above two equations show the financial relationship between DVA and FCA. DVA is a benefit to the shareholders because the hedger may default on derivative payables. On the other hand, the bondholders will receive a partial amount of the derivative receivables, namely \((1-L^{H})(\tilde{p}^{N})^{-}\). Therefore, the hedger should compensate the funding provider for the expected loss._
_When the hedger has surplus cash, she can use it to buy back loans that were already issued. In this case, we may expect \(s^{\ell}=h^{H}L^{H}\). This leads to_
\[\text{DVA}=\text{FBA}.\] (2.52)
_It is important to avoid double-counting for both pricing and accounting. However, in this case, recalling (2.42), we seem to have two adjustments with approximately the same value. If (2.52) is valid for many contracts, i.e., DVA overlaps with FBA, then one of them should be ignored._
_Indeed, it is often believed that recording both FBA and DVA in the bank's accounts engenders a double-counting paradox. On the other hand, IFRS and GAAP accept DVA. To remedy this, in FCA/FBA accounting, which is endorsed by some banks, DVA is recorded in a Contra-Liability (CL) account and_
\[\text{FCA}-(\text{FBA}-\text{DVA})\]
_is recorded in a Contra-Asset (CA) account (see Castagna, 23; Albanese and Andersen, 2). However, it has been pointed out that FCA/FBA accounting produces a large asset/liability asymmetry._
_The large asymmetry is partly attributed to the binary nature of FVA. Based on the marginal effect of entering a contract, the FCA term in the CA account does not countervail the FBA term in the CL account. It will be shown later that the binary nature of FVA is related to whether the payoff functions are increasing or decreasing with respect to the underlying assets. On the other hand, because DVA occurs from derivative payables, e.g., \(p^{N}\geq 0\) in (2.50), DVA arises where the payoffs are positive. Therefore, FBA and DVA are affected by different mathematical structures of derivative contracts, i.e., FBA does not overlap with DVA in the CA account. Thus, FCA/FBA accounting inevitably leads to a large asset/liability asymmetry. To avoid the asymmetry, in FVA/FDA accounting, FVA is recorded in the CA account and a funding debt value adjustment (FDA) is recorded in the CL account. FDA is a benefit stemming from the fact that the bank can default on its liability. It was named DVA2 in Hull and White (32). If liquidity is not considered, the value of FVA can be approximated by FDA. For details, readers may want to refer to Albanese and Andersen (2)._
In the next section, we represent (2.31) as a standard form, and reduce it to a BSDE on the reference filtration \(\mathbb{F}\).
### BSDE formulation
For a BSDE representation, we begin this section by considering a family of maps \((\phi_{t})_{t\geq 0}\) such that
\[\phi_{t}~{}\colon~{}\sum_{i\in I}\tilde{\pi}^{i}\sigma^{i}_{t}-Z^{N}_{t}~{}\to~{}\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}.\]
The form of \((\phi_{t})_{t\geq 0}\) varies depending on parameters and accessibility of repo markets. Before giving the general form \((\phi_{t})_{t\geq 0}\), we examine some examples.
**Example 2.22**.: _(i) Consider \(\rho=I\), which is commonly assumed in literature. In this case,_
\[\phi_{t}\colon z\to 0.\] (2.53)
_(ii) Consider \(n=1\), and constant parameters. Then \(S^{1},~{}S^{H},~{}S^{C}\) follow_
\[\mathop{}\!\mathrm{d}S^{1}_{t}= rS^{1}_{t}\mathop{}\!\mathrm{d}t+\sigma^{1}S^{1}_{t}\mathop{}\! \mathrm{d}W_{t},\]
\[\mathop{}\!\mathrm{d}S^{i}_{t}= rS^{i}_{t}\mathop{}\!\mathrm{d}t-S^{i}_{t-}\mathop{}\!\mathrm{d}M^{i}_{t},~{}~{}i\in\{H,C\}.\]
_It follows that \(\sum_{i\in I}\tilde{\pi}^{i}\sigma^{i}=\tilde{\pi}^{1}\sigma^{1}\). When \(\rho=\{H,C\}\),_
\[\phi_{t}\colon z\to(\sigma^{1})^{-1}(z+Z^{N}_{t})=(\sigma^{1})^{- 1}z+(\sigma^{1})^{-1}Z^{N}_{t}.\] (2.54)
_On the other hand, when \(\rho=\{1,C\}\), \(\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}=\tilde{\pi}^{H}\). Thus,_
\[\phi_{t}\colon z\to\tilde{\pi}^{H}_{t}.\] (2.55)
_This case was discussed by Burgard and Kjaer (20)._
_(iii) On the other hand, let us assume that OIS rate is an \(\mathbb{F}\)-adapted process. In addition, we assume that for any \(i\in\{H,C\}\), \((G^{i}_{t})_{t\geq 0}\) is given by_
\[\mathop{}\!\mathrm{d}G^{i}_{t}=-h^{i}_{t}G^{i}_{t}\mathop{}\! \mathrm{d}t,\]
_where \((h^{i}_{t})_{t\geq 0}\) are deterministic processes. We consider non-defaultable and defaultable zero coupon bonds with the same maturity as \(T\), i.e., \(S^{1},~{}S^{H},~{}S^{C}\) are defined as_
\[S^{1}_{t}\coloneqq B_{t}\mathbb{E}\big{[}B_{T}^{-1}\big{|}\mathcal{F}_{t}\big{]},\]
\[S^{i}_{t}\coloneqq B_{t}\mathbb{E}\big{[}\mathds{1}_{\tau^{i}>T}B_{T}^{-1}\big{|} \mathcal{G}_{t}\big{]},~{}~{}i\in\{H,C\}.\]
_By Lemma2.2, \(S^{i}_{t}=\mathds{1}_{t<\tau}B_{t}(G^{i}_{t})^{-1}\mathbb{E}[G^{i}_{T}B_{T}^{- 1}|\mathcal{F}_{t}]\). Since \((G^{i}_{t})_{t\geq 0}\), \(i\in\{H,C\}\), are deterministic,_
\[S^{i}_{t}=\mathds{1}_{t<\tau}(G^{i}_{t})^{-1}G^{i}_{T}S^{1}_{t}.\]
_It follows that for \(t<\tau\), \(\sigma^{1}=\sigma^{H}=\sigma^{C}\). Recall \(\sigma=[\sigma^{1}~{}\dots~{}\sigma^{n}~{}\sigma^{H}~{}\sigma^{C}]^{\top}\). We define \(n\times n\) matrix_
\[\Sigma\coloneqq\begin{bmatrix}\sigma^{1}\\ \vdots\\ \sigma^{n}\end{bmatrix},\]
_and assume that \(\Sigma\) is invertible for any \(t<T\). Then,_
\[(\Sigma^{\top})^{-1}\sigma^{\top}\tilde{\pi}=\begin{bmatrix} \tilde{\pi}^{1}+\tilde{\pi}^{H}+\tilde{\pi}^{C}\\ \tilde{\pi}^{2}\\ \vdots\\ \tilde{\pi}^{n}\end{bmatrix}.\]
_Therefore, \(\mathds{1}^{\top}(\Sigma^{\top})^{-1}\sigma^{\top}\tilde{\pi}=\sum_{i\in I}\tilde{\pi}^{i}\), where \(\mathds{1}\coloneqq(1,\dots,1)^{\top}\in\mathbb{R}^{n}\). Thus, if \(\rho=\emptyset\),_
\[\phi_{t}\colon z\to\mathds{1}^{\top}(\Sigma^{\top}_{t})^{-1}(z+Z^ {N}_{t}),\] (2.56)
_On the other hand, consider \(\rho\not=\emptyset\) and define \(\mathds{1}^{\rho}\in\mathbb{R}^{n}\) by_
\[(\mathds{1}^{\rho})_{i}\coloneqq\left\{\begin{array}[]{r@{}l}0&i \in\rho,\\ 1&i\notin\rho,\end{array}\right.\]
_where \((\mathds{1}^{\rho})_{i}\) denote \(i\)-th component of \(\mathds{1}^{\rho}\). When \(\rho\cap\{1,H,C\}=\emptyset\), \(\phi_{t}\colon z\to(\mathds{1}^{\rho})^{\top}(\Sigma^{\top}_{t})^{-1}(z+Z^{N}_ {t})\). However, if \(\rho=\{1\}\),_
\[\phi_{t}\colon z\to(\mathds{1}^{\rho})^{\top}(\Sigma^{\top}_{t})^ {-1}(z+Z^{N}_{t})+\tilde{\pi}^{H}+\tilde{\pi}^{C}.\] (2.57)
Therefore, the form of the BSDE depends on the choice of model, accessibility of repo markets, etc. However, from (2.53)-(2.57), we can observe that \(\phi\) is linear in \(z,Z^{N},\tilde{\pi}^{H},\tilde{\pi}^{C}\). As we will see later, \(\tilde{\pi}^{H}\) and \(\tilde{\pi}^{C}\) can depend on the solution of the BSDE under _replacement close-out_. For simplicity, we assume that \(\phi\) does not depend on \(\tilde{\pi}^{H}\) and \(\tilde{\pi}^{C}\), as in (2.53), (2.54), and (2.56).
**Assumption 2.23**.: _For some \(n\)-dimensional \(\mathbb{F}\)-progressively measurable process \(\alpha\),_
\[\phi_{t}(z)=\alpha^{\top}_{t}(z+Z^{N}_{t}).\]
**Remark 2.24**.: _Note that we do not exclude the convention commonly used in the literature that assets are traded through repo markets, i.e., \(I=\rho\). In this case, we can set \(\alpha=0\) in Assumption 2.23._
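As an illustration of Assumption 2.23 in the setting of Example 2.22-(iii), the map \(\phi\) reduces to a linear functional of \(z+Z^{N}\) whose coefficient solves a small linear system. The sketch below is our own code (0-based indices stand for the assets \(S^{1},\dots,S^{n}\)); it computes \(\alpha\) and evaluates \(\phi\).

```python
# Illustration of Assumption 2.23 for Example 2.22-(iii): with Sigma the n x n
# diffusion matrix and rho a set of repo-traded assets,
# phi_t(z) = (1^rho)^T (Sigma^T)^{-1} (z + Z^N_t).
import numpy as np

def phi(z, Z_N, Sigma, in_repo=()):
    """Evaluate phi(z) = alpha^T (z + Z_N) with alpha = Sigma^{-1} 1^rho,
    so that alpha^T = (1^rho)^T (Sigma^T)^{-1}."""
    ones = np.ones(Sigma.shape[0])
    ones[list(in_repo)] = 0.0                 # repo-financed components drop out of the sum
    alpha = np.linalg.solve(Sigma, ones)
    return alpha @ (z + Z_N)

Sigma = np.array([[0.2, 0.05],
                  [0.0, 0.30]])
z, Z_N = np.array([0.4, -0.1]), np.array([0.1, 0.0])
print(phi(z, Z_N, Sigma))                 # rho empty
print(phi(z, Z_N, Sigma, in_repo=(0,)))   # asset S^1 financed through its repo market
```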
Now, we denote the generator of (2.31) by \(g^{\mathbb{G}}\), i.e.,
\[g^{\mathbb{G}}_{t}(y,z)\coloneqq-\big{(}y+\tilde{p}^{N}_{t}+ \tilde{B}^{\epsilon}_{t}-c_{t}-\phi_{t}(z)\big{)}^{+}s^{\ell}_{t}+\big{(}y+ \tilde{p}^{N}_{t}+\tilde{B}^{\epsilon}_{t}-c_{t}-\phi_{t}(z)\big{)}^{-}s^{b}_{ t}+s^{\epsilon}_{t}\tilde{B}^{\epsilon}_{t}.\]
In addition, we denote the incremental exposures by \(\Theta^{\Delta,i}\), \(i\in\{H,C\}\), more precisely,
\[\Theta^{\Delta,H}(y)\coloneqq \Theta^{H}(y)-L^{H}L^{m}(e^{E}_{-})^{+},\]
\[\Theta^{\Delta,C}(y)\coloneqq \Theta^{C}(y)-L^{C}L^{m}(e^{E}_{-})^{-}.\]
Let \((Y^{\mathbb{G}},Z^{\mathbb{G}},\tilde{\pi}^{H},\tilde{\pi}^{C})\) denote the solution, in a certain space, of the following BSDE:
\[Y^{\mathbb{G}}_{t}= -\mathds{1}_{\bar{\tau}=\tau=\tau^{H}}\tilde{\Theta}^{\Delta,H}_{ \tau}(Y^{\mathbb{G}}_{\tau-})+\mathds{1}_{\bar{\tau}=\tau=\tau^{C}}\tilde{ \Theta}^{\Delta,C}_{\tau}(Y^{\mathbb{G}}_{\tau-})\]
\[+\int_{t}^{\bar{\tau}}g^{\mathbb{G}}_{s}(Y^{\mathbb{G}}_{s},Z^{ \mathbb{G}}_{s})\mathop{}\!\mathrm{d}s-\int_{t}^{\bar{\tau}}(Z^{\mathbb{G}}_{s })^{\top}\mathop{}\!\mathrm{d}W_{s}+\sum_{i=H,C}\int_{t}^{\bar{ \tau}}\tilde{\pi}^{i}_{s}\mathop{}\!\mathrm{d}M^{i}_{s}.\] (2.58)
Then \((Y^{\mathbb{G}},Z^{\mathbb{G}},\tilde{\pi}^{H},\tilde{\pi}^{C})\) provides \((X,\tilde{\pi})\) as well as \((V,\pi)\). However, instead of directly dealing with (2.58), we will investigate a reduced BSDE on the reference filtration \(\mathbb{F}\). The idea is as follows.
It is a known fact that in the progressively enlarged filtration \(\mathbb{G}\), any \(\mathbb{G}\)-optional (resp. predictable) process has an \(\mathbb{F}\)-optional (resp. predictable) reduction. Therefore, if there exists a solution of (2.58) such that \(Y^{\mathbb{G}}\) is \(\mathbb{G}\)-optional and \(Z^{\mathbb{G}}\) is \(\mathbb{G}\)-predictable, there exists an \(\mathbb{F}\)-adapted pair \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) satisfying
\[Y^{\mathbb{G}}= \mathds{1}_{t<\bar{\tau}}Y^{\mathbb{F}},\] (2.59)
\[Z^{\mathbb{G}}= \mathds{1}_{t\leq\bar{\tau}}Z^{\mathbb{F}}.\] (2.60)
Moreover, we guess that \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) is a solution of a BSDE on the reference filtration \(\mathbb{F}\), i.e., “for some” \(g^{\mathbb{F}}\colon\Omega\times[0,T]\times\mathbb{R}^{n+1}\to\mathbb{R}\),
\[Y^{\mathbb{F}}_{t}=\int_{t}^{T}g^{\mathbb{F}}_{s}(Y^{\mathbb{F}} _{s},Z^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\mathbb{F}}_{s} )^{\top}\mathop{}\!\mathrm{d}W_{s}.\] (2.61)
Then, by the terminal condition \(Y^{\mathbb{G}}_{\bar{\tau}}=-\mathds{1}_{\bar{\tau}=\tau=\tau^{H}}\tilde{ \Theta}^{\Delta,H}_{\tau}(Y^{\mathbb{G}}_{\tau-})+\mathds{1}_{\bar{\tau}=\tau= \tau^{C}}\tilde{\Theta}^{\Delta,C}_{\tau}(Y^{\mathbb{G}}_{\tau-})\), together with (2.59), we can expect that
\[Y^{\mathbb{G}}_{t}=\mathds{1}_{t<\bar{\tau}}Y^{\mathbb{F}}_{t}+ \mathds{1}_{t\geq\bar{\tau}}\big{[}-\mathds{1}_{\bar{\tau}=\tau=\tau^{H}} \tilde{\Theta}^{\Delta,H}_{\tau}(Y^{\mathbb{F}}_{\tau-})+\mathds{1}_{\bar{\tau }=\tau=\tau^{C}}\tilde{\Theta}^{\Delta,C}_{\tau}(Y^{\mathbb{F}}_{\tau-})\big{]}.\] (2.62)
By applying Itô’s formula to (2.62), we can find what the proper \(g^{\mathbb{F}}\) should be. At the end, finding \((V,\pi)\) reduces to investigating the reduced BSDE (2.61). The next proposition explains the detail.
**Proposition 2.25**.: _Assume there exists a unique solution \((Y^{\mathbb{F}},Z^{\mathbb{F}})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) of the following BSDE:_
\[Y^{\mathbb{F}}_{t}=\int_{t}^{T}g^{\mathbb{F}}_{s}(Y^{\mathbb{F}} _{s},Z^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\mathbb{F}}_{s} )^{\top}\mathop{}\!\mathrm{d}W_{s},\] (2.63)
\[g^{\mathbb{F}}_{t}(y,z)\coloneqq g^{\mathbb{G}}_{t}(y,z)-h^{H}_{ t}\tilde{\Theta}^{\Delta,H}_{t}(y)+h^{C}_{t}\tilde{\Theta}^{\Delta,C}_{t}(y)-h _{t}y.\] (2.64)
_Then \((Y^{\mathbb{G}},Z^{\mathbb{G}})\) can solve (2.58) by the following relationships:_
\[Y^{\mathbb{G}}_{t}= \mathds{1}_{t<\bar{\tau}}Y^{\mathbb{F}}_{t}+\mathds{1}_{t\geq\bar {\tau}}\big{[}-\mathds{1}_{\bar{\tau}=\tau=\tau^{H}}\tilde{\Theta}^{\Delta,H}_ {\tau}(Y^{\mathbb{F}}_{\tau-})+\mathds{1}_{\bar{\tau}=\tau=\tau^{C}}\tilde{ \Theta}^{\Delta,C}_{\tau}(Y^{\mathbb{F}}_{\tau-})\big{]},\] (2.65)
\[Z^{\mathbb{G}}_{t}= \mathds{1}_{t<\bar{\tau}}Z^{\mathbb{F}}_{t},\] (2.66)
\[\tilde{\pi}^{H}_{t}= Y^{\mathbb{F}}_{t}+\tilde{\Theta}^{H}_{t}(Y^{\mathbb{F}}_{t-}),\] (2.67)
\[\tilde{\pi}^{C}_{t}= Y^{\mathbb{F}}_{t}-\tilde{\Theta}^{C}_{t}(Y^{\mathbb{F}}_{t-}).\] (2.68)
_Moreover, if (H)-hypothesis holds between \(\mathbb{F}\) and \(\mathbb{G}\), i.e., any \((\mathbb{F},\mathbb{Q})\)-martingale is a \((\mathbb{G},\mathbb{Q})\)-martingale, then (2.65)-(2.68) is the only solution of (2.58)._
Proof.: By Itô’s formula,
\[\mathop{}\!\mathrm{d}(\mathds{1}_{t<\bar{\tau}}Y^{\mathbb{F}}_{t} )=\mathds{1}_{t\leq\bar{\tau}}\mathop{}\!\mathrm{d}Y^{\mathbb{F}}_{t}-\bm{ \delta}_{\bar{\tau}}(\mathop{}\!\mathrm{d}t)Y^{\mathbb{F}}_{\bar{\tau}}= \mathds{1}_{t\leq\bar{\tau}}\mathop{}\!\mathrm{d}Y^{\mathbb{F}}_{ t}-\mathds{1}_{t\leq\bar{\tau}}Y^{\mathbb{F}}_{t}\mathop{}\!\mathrm{d}M_{t}- \mathds{1}_{t\leq\bar{\tau}}h^{0}_{t}Y^{\mathbb{F}}\mathop{}\!\mathrm{d}t,\]
\[= \mathds{1}_{t\leq\bar{\tau}}\mathop{}\!\mathrm{d}Y^{\mathbb{F}}_{ t}-\mathds{1}_{t\leq\bar{\tau}}Y^{\mathbb{F}}_{t}\mathop{}\!\mathrm{d}M^{H}_{t }-\mathds{1}_{t\leq\bar{\tau}}Y^{\mathbb{F}}_{t}\mathop{}\!\mathrm{d}M^{C}_{t} -\mathds{1}_{t\leq\bar{\tau}}h_{t}Y^{\mathbb{F}}\mathop{}\!\mathrm{d}t,\]
and
\[\mathop{}\!\mathrm{d}\Big{(}\mathds{1}_{t\geq\bar{\tau}}\big{[}- \mathds{1}_{\bar{\tau}=\tau=\tau^{H}}\tilde{\Theta}^{\Delta,H}_{\tau}(Y^{ \mathbb{F}}_{\tau-})+ \mathds{1}_{\bar{\tau}=\tau=\tau^{C}}\tilde{\Theta}^{\Delta,C}_{ \tau}(Y^{\mathbb{F}}_{\tau-})\big{]}\Big{)}\]
\[= -\mathds{1}_{\tau=\tau^{H}\leq T}\bm{\delta}_{\tau}(\mathop{}\! \mathrm{d}t)\tilde{\Theta}^{\Delta,H}_{t}(Y^{\mathbb{F}}_{t-})+\mathds{1}_{ \tau=\tau^{C}\leq T}\bm{\delta}_{\tau}(\mathop{}\!\mathrm{d}t)\tilde{\Theta}^{ \Delta,C}_{t}(Y^{\mathbb{F}}_{t-})\]
\[= -\mathds{1}_{t\leq\bar{\tau}}\tilde{\Theta}^{\Delta,H}_{t}(Y^{ \mathbb{F}}_{t-})\mathop{}\!\mathrm{d}M^{H}_{t}-\mathds{1}_{t\leq\bar{\tau}}h_ {t}^{H}\tilde{\Theta}^{\Delta,H}_{t}(Y^{\mathbb{F}}_{t-})\mathop{}\!\mathrm{d}t\]
\[+\mathds{1}_{t\leq\bar{\tau}}\tilde{\Theta}^{\Delta,C}_{t}(Y^{ \mathbb{F}}_{t-})\mathop{}\!\mathrm{d}M^{C}_{t}+\mathds{1}_{t\leq\bar{\tau}}h_ {t}^{C}\tilde{\Theta}^{\Delta,C}_{t}(Y^{\mathbb{F}}_{t-})\mathop{}\!\mathrm{d}t.\]
If we take \(Y^{\mathbb{G}}\) as in (2.65),
\[\mathop{}\!\mathrm{d}Y_{t}^{\mathbb{G}}= \mathds{1}_{t\leq\bar{\tau}}\mathop{}\!\mathrm{d}Y^{\mathbb{F}}_{ t}-\mathds{1}_{t\leq\bar{\tau}}Y^{\mathbb{F}}_{t}\mathop{}\!\mathrm{d}M^{H}_{t }-\mathds{1}_{t\leq\bar{\tau}}Y^{\mathbb{F}}_{t}\mathop{}\!\mathrm{d}M^{C}_{t} -\mathds{1}_{t\leq\bar{\tau}}h_{t}Y^{\mathbb{F}}\mathop{}\!\mathrm{d}t\]
\[-\mathds{1}_{t\leq\bar{\tau}}\tilde{\Theta}^{\Delta,H}_{t}(Y^{ \mathbb{F}}_{t-})\mathop{}\!\mathrm{d}M^{H}_{t}-\mathds{1}_{t\leq\bar{\tau}}h_ {t}^{H}\tilde{\Theta}^{\Delta,H}_{t}(Y^{\mathbb{F}}_{t-})\mathop{}\!\mathrm{d}t\]
\[+\mathds{1}_{t\leq\bar{\tau}}\tilde{\Theta}^{\Delta,C}_{t}(Y^{ \mathbb{F}}_{t-})\mathop{}\!\mathrm{d}M^{C}_{t}+\mathds{1}_{t\leq\bar{\tau}}h_ {t}^{C}\tilde{\Theta}^{\Delta,C}_{t}(Y^{\mathbb{F}}_{t-})\mathop{}\!\mathrm{d}t\]
\[= \mathds{1}_{t\leq\bar{\tau}}\big{[}-g^{\mathbb{F}}_{t}(Y^{\mathbb{F}}_{t},Z^{\mathbb{F}}_{t})-h_{t}Y^{\mathbb{F}}_{t}-h^{H}_{t}\tilde{\Theta}^{\Delta,H}_{t}(Y^{\mathbb{F}}_{t})+h^{C}_{t}\tilde{\Theta}^{\Delta,C}_{t}(Y^{\mathbb{F}}_{t})\big{]}\mathop{}\!\mathrm{d}t+\mathds{1}_{t\leq\bar{\tau}}(Z^{\mathbb{F}}_{t})^{\top}\mathop{}\!\mathrm{d}W_{t}\]
\[-\mathds{1}_{t\leq\bar{\tau}}[Y^{\mathbb{F}}_{t}+\tilde{\Theta}^{\Delta,H}_{t}(Y^{\mathbb{F}}_{t})]\mathop{}\!\mathrm{d}M^{H}_{t}-\mathds{1}_{t\leq\bar{\tau}}[Y^{\mathbb{F}}_{t}-\tilde{\Theta}^{\Delta,C}_{t}(Y^{\mathbb{F}}_{t})]\mathop{}\!\mathrm{d}M^{C}_{t}.\]
Therefore, by (2.64), (2.65)-(2.68) give a solution for (2.58). Moreover, if the (H)-hypothesis holds, the (unique) martingale representation property holds with respect to \(W\) and \(M^{i}\), \(i\in\{H,C\}\). Therefore, by Theorem 4.1 in Crépey and Song (27), if \((Y^{\mathbb{G}},Z^{\mathbb{G}})\) solves (2.58), \(((Y^{\mathbb{G}})^{\tau-},Z^{\mathbb{G}}\mathds{1}_{\llbracket 0,\tau\llbracket})\) solves (2.61) as well. ∎
**Remark 2.26**.: _This reduction argument was also used by Bichuch et al. (5); Crépey (26, 25); Brigo et al. (17)._
**Remark 2.27**.: _(2.67) explains how the bank’s own default can be beneficial to the shareholders. The own default can be monetized by buying back their own bond that becomes cheaper. However, this means that banks can realize the profit only when they actually default. Indeed, DVA is often excluded from Common Equity Tier 1 capital (CET1), which is a proxy of the shareholder’s wealth._
Note that \(hY^{\mathbb{F}}\) in \(g^{\mathbb{F}}\) is an adjustment for an early termination. To see this, let \(\hat{Y}\coloneqq GY^{\mathbb{F}}\), \(\hat{Z}\coloneqq GZ^{\mathbb{F}}\). Then (2.63) becomes
\[\hat{Y}^{\mathbb{F}}_{t}=\int_{t}^{T}G_{s}\big{[}g^{\mathbb{G}}_{ s}(Y^{\mathbb{F}}_{s},Z^{\mathbb{F}}_{s})-h^{H}_{s}\tilde{\Theta}^{\Delta,H}_{ s}(Y^{\mathbb{F}}_{s})+h^{C}_{s}\tilde{\Theta}^{\Delta,C}_{s}(Y^{\mathbb{F}}_{ s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\hat{Z}^{\mathbb{F}}_{s})^{\top} \mathop{}\!\mathrm{d}W_{s}.\]
Moreover, by Definition2.19 and Lemma2.2, if \(Z^{\mathbb{F}}\in\mathbb{H}^{2,n}_{T}\), for \(t<\tau\),
\[\text{FCA}^{\Delta}_{t}= G^{-1}_{t}\mathbb{E}\bigg{[}\int_{t}^{T}G_{s}\Big{[}\big{(}Y^{\mathbb{F}}_{s}+\tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-c_{s}-\phi_{s}(Z^{\mathbb{F}}_{s})\big{)}^{-}s^{b}_{s}-(s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s})^{-}\Big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]},\] (2.69)
\[\text{FBA}^{\Delta}_{t}= G^{-1}_{t}\mathbb{E}\bigg{[}\int_{t}^{T}G_{s}\Big{[}\big{(}Y^{\mathbb{F}}_{s}+\tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-c_{s}-\phi_{s}(Z^{\mathbb{F}}_{s})\big{)}^{+}s^{\ell}_{s}-(s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s})^{+}\Big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]},\] (2.70)
\[\text{DVA}^{\Delta}_{t}= G^{-1}_{t}\mathbb{E}\bigg{[}\int_{t}^{T}G_{s}\tilde{\Theta}^{ \Delta,H}_{s}(Y^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t} \bigg{]},\] (2.71)
\[\text{CVA}^{\Delta}_{t}= G^{-1}_{t}\mathbb{E}\bigg{[}\int_{t}^{T}G_{s}\tilde{\Theta}^{ \Delta,C}_{s}(Y^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t} \bigg{]},\] (2.72)
and \(\text{XVA}=G^{-1}\hat{Y}\). Still, because of the semi-linearity in (2.69) and (2.70), we need to solve (2.61) numerically. More importantly, it will be interesting to investigate how much cancellation between FCA and FBA, and between FBA and DVA, can be expected. We answer these questions in the next section. Based on our model, the answers to both questions are negative.
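Since (2.61) generally has to be solved numerically, the following minimal least-squares Monte Carlo sketch shows one standard backward Euler / regression scheme for a one-dimensional BSDE of the same form as (2.63). It is our illustration only, not the scheme used in this paper; the callable `generator` plays the role of \(g^{\mathbb{F}}\), and in the toy usage a zero driver simply recovers a discounted risk-neutral price.

```python
# Minimal least-squares Monte Carlo sketch for a one-dimensional BSDE
#   Y_t = int_t^T g(s, S_s, Y_s, Z_s) ds - int_t^T Z_s dW_s,
# with conditional expectations approximated by polynomial regression on the state.
import numpy as np

def solve_bsde_lsmc(S, dW, dt, generator, terminal, deg=3):
    """S: (M, N+1) simulated state paths, dW: (M, N) Brownian increments.
    generator(t, s, y, z) is the driver; terminal(s_T) the terminal condition.
    Returns the Monte Carlo estimate of Y_0."""
    M, N = dW.shape
    Y = terminal(S[:, -1])
    for k in range(N - 1, -1, -1):
        basis = np.vander(S[:, k], deg + 1)                       # regression basis at t_k
        # Z_{t_k} ~ E[Y_{t_{k+1}} dW_k | F_{t_k}] / dt
        coef_z = np.linalg.lstsq(basis, Y * dW[:, k] / dt, rcond=None)[0]
        Z = basis @ coef_z
        # Y_{t_k} ~ E[Y_{t_{k+1}} | F_{t_k}] + g(t_k, S_k, Y, Z) dt (explicit scheme)
        coef_y = np.linalg.lstsq(basis, Y, rcond=None)[0]
        Y = basis @ coef_y + generator(k * dt, S[:, k], basis @ coef_y, Z) * dt
    return Y.mean()

# Toy usage: a zero driver recovers the (discounted) risk-neutral price of a call on S.
rng = np.random.default_rng(0)
M, N, T, sigma, S0 = 20000, 50, 1.0, 0.2, 1.0
dt = T / N
dW = rng.standard_normal((M, N)) * np.sqrt(dt)
S = S0 * np.exp(np.cumsum(sigma * dW - 0.5 * sigma**2 * dt, axis=1))
S = np.hstack([np.full((M, 1), S0), S])
print(solve_bsde_lsmc(S, dW, dt, lambda t, s, y, z: 0.0 * y, lambda sT: np.maximum(sT - 1.0, 0.0)))
```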
## 3 Main Results
In this section, we show a binary nature of FVA; FVA is either FCA or -FBA. In other words, either \(\text{FCA}=0\) or \(\text{FBA}=0\). This switching property of FVA is determined by some properties of payoff structures of the derivative contract. Before the main theorem, we explain the idea by an example first.
**Example 3.1** (Stock forward contract with _clean close-out_).: _Consider \(n=1\), \(e^{N}=p^{N}\). For simplicity, we assume all parameters are constant and let the traded assets \((S^{1},S^{H},S^{C})\) be given by_
\[\mathop{}\!\mathrm{d}S^{1}_{t}= r_{t}S^{1}_{t}\mathop{}\!\mathrm{d}t+\sigma^{1}S^{1}_{t}\mathop{ }\!\mathrm{d}W_{t},\]
\[\mathop{}\!\mathrm{d}S^{i}_{t}= r_{t}S^{i}_{t}\mathop{}\!\mathrm{d}t-S^{i}_{t-}\mathop{}\! \mathrm{d}M^{i}_{t},~{}~{}i\in\{H,C\}.\]
_Moreover, we assume that the defaultable bonds can be traded through repo markets, i.e., \(\rho=\{H,C\}\). Then \(\sum_{i\in I}\tilde{\pi}^{i}\sigma^{i}=\tilde{\pi}^{1}\sigma^{1}\) and \(\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}=\tilde{\pi}^{1}\), therefore_
\[\phi(z)=\alpha(z+Z^{N})=(\sigma^{1})^{-1}(z+Z^{N}).\]
_Let \(\mathfrak{D}^{N}=\mathds{1}_{\llbracket T,\infty\llbracket}(S^{1}_{T}-K)\), for some \(K\geq 0\). We denote \(\tilde{S}^{1}\coloneqq B^{-1}S^{1}\). Recall the definition \(\tilde{p}^{N}=B^{-1}p^{N}\) and_
\[\tilde{\Theta}^{\Delta,H}= L^{H}L^{m}\big{(}(\tilde{p}^{N}+\tilde{e}^{E})^{+}-(\tilde{e}^{E })^{+}\big{)},\]
\[\tilde{\Theta}^{\Delta,C}= L^{C}L^{m}\big{(}(\tilde{p}^{N}+\tilde{e}^{E})^{-}-(\tilde{e}^{E })^{-}\big{)}.\]
_Thus, the generator \(g^{\mathbb{F}}\) becomes_
\[g^{\mathbb{F}}_{t}(y,z)= -\big{[}y+L^{m}\tilde{p}^{N}_{t}+\tilde{B}^{\epsilon}_{t}-\alpha( z+Z^{N}_{t})\big{]}^{+}s^{\ell}+\big{[}y+L^{m}\tilde{p}^{N}_{t}+\tilde{B}^{ \epsilon}_{t}-\alpha(z+Z^{N}_{t})\big{]}^{-}s^{b}+s^{\epsilon}\tilde{B}^{ \epsilon}_{t}\]
\[-h^{H}L^{H}L^{m}\big{(}(\tilde{p}^{N}_{t}+\tilde{e}^{E}_{t})^{+}- (\tilde{e}^{E}_{t})^{+}\big{)}+h^{C}L^{C}L^{m}\big{(}(\tilde{p}^{N}_{t}+\tilde {e}^{E}_{t})^{-}-(\tilde{e}^{E}_{t})^{-}\big{)}-hy.\]
_To explain the idea, the hedger should pay \(S^{1}_{T}-K\) at the maturity or \(p^{N}_{t}=S_{t}^{1}-B_{t}B_{T}^{-1}K\) at an early termination \(t<T\). For the payment, she needs to retain \(S^{1}\). To buy \(S^{1}\), the hedger may need to borrow money, so it is expected that \(s^{\ell}\) does not play an important role in maintaining the hedging portfolio. Therefore, we guess_
\[Y^{\mathbb{F}}+L^{m}\tilde{p}^{N}-\alpha(Z^{\mathbb{F}}+Z^{N})\leq 0,~{}~{}\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t-\text{a.s}.\] (3.1)
_Unless the tendency of (3.1) is dominated by the legacy portfolio, we can recover a linear BSDE. For simplicity, we assume that the dealer had big enough exposure to the counterparty before the new contract and the initial exposure dominates the new exposure, i.e., we assume that_
\[\tilde{p}^{N}+\tilde{e}^{E}\geq 0,~{}~{}\tilde{e}^{E}\geq 0,~{}~{ }\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t-\text{a.s}.\] (3.2)
_Then we consider \((Y^{\#},Z^{\#})\) satisfying_
\[Y^{\#}_{t}= -\int_{t}^{T}\bigg{[}\big{(}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s}+ \tilde{B}^{\epsilon}_{s}-\alpha(Z^{\#}_{s}+Z^{N}_{s})\big{)}s^{b}-s^{\epsilon} \tilde{B}^{\epsilon}_{s}+hY^{\#}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[-\int_{t}^{T}h^{H}L^{H}L^{m}\tilde{p}^{N}_{s}\mathop{}\!\mathrm{d }s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s},\]
_Then we will show that_
\[Y^{\#}+L^{m}\tilde{p}^{N}+\tilde{B}^{\epsilon}-\alpha(Z^{\#}+Z^{ N})\leq 0,~{}~{}\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t- \text{a.s}.\] (3.3)
_To show this, we take another transformation, \(V^{\mathbb{F}}\coloneqq Y^{\#}+L^{m}\tilde{p}^{N}+\tilde{B}^{\epsilon}\) and \(\Pi^{\mathbb{F}}\coloneqq Z^{\#}+L^{m}Z^{N}\). Then \((V^{\mathbb{F}},\Pi^{\mathbb{F}})\) is the solution of_
\[V^{\mathbb{F}}_{t}= L^{m}\xi+\tilde{B}^{\epsilon}_{T}+\int_{t}^{T}F_{t}(V^{\mathbb{F }}_{s},\Pi^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{\mathbb{F }}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s},\] (3.4)
_where \(\xi\coloneqq\tilde{S}^{1}_{T}-B^{-1}_{T}K\) and_
\[F_{t}(y,z)\coloneqq-(y-\alpha z)s^{b}-hy+\xi^{b}_{t},\] (3.5)
\[\xi^{b}\coloneqq(h-h^{H}L^{H}L^{m})\tilde{p}^{N}+h\tilde{B}^{ \epsilon}+\alpha(1-L^{m})s^{b}Z^{N}.\]
_Under mild conditions, we can obtain \((V^{\mathbb{F}},\Pi^{\mathbb{F}})\in\mathbb{L}^{2}([0,T]\colon\mathbb{D}^{1,2} \times\mathbb{D}^{1,2})\), i.e., \(\forall t\leq T\), \((V^{\mathbb{F}}_{t},\Pi^{\mathbb{F}}_{t})\in\mathbb{D}^{1,2}\times\mathbb{D}^{ 1,2}\) and_
\[\int_{0}^{T}\Big{(}\|V^{\mathbb{F}}_{t}\|^{2}_{1,2}+\|\Pi^{ \mathbb{F}}_{t}\|^{2}_{1,2}\Big{)}\mathop{}\!\mathrm{d}t<\infty.\] (3.6)
_Moreover, \((D_{t}V^{\mathbb{F}}_{t})_{0\leq t\leq T}\) is a version of \((\Pi^{\mathbb{F}}_{t})_{0\leq t\leq T}\). In addition, note that (3.3) is equivalent to_
\[V^{\mathbb{F}}-(1-L^{m})s^{b}\alpha Z^{N}-\alpha\Pi^{\mathbb{F}} \leq V^{\mathbb{F}}-(1-L^{m})s^{b}\alpha Z^{N}-\alpha DV^{\mathbb{F}}\leq 0,~{ }~{}\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t-\text{a.s}.\] (3.7)
_To show (3.7), for \(\theta\leq t\), let \((V^{\mathbb{F}}_{t,\theta},\Pi^{\mathbb{F}}_{t,\theta})\coloneqq(\alpha D_{ \theta}V^{\mathbb{F}}_{t},\alpha D_{\theta}\Pi^{\mathbb{F}}_{t})\). Therefore, \((V^{\mathbb{F}}_{\cdot,\theta},\Pi^{\mathbb{F}}_{\cdot,\theta})\) is given by_
\[V^{\mathbb{F}}_{t,\theta}= \alpha L^{m}D_{\theta}\xi+\int_{t}^{T}F_{s,\theta}(V^{\mathbb{F}}_{s,\theta},\Pi^{\mathbb{F}}_{s,\theta})\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{\mathbb{F}}_{s,\theta})^{\top}\mathop{}\!\mathrm{d}W_{s},\] (3.8)
\[F_{t,\theta}(y,z)\coloneqq-(y-\alpha z)s_{t}^{b}-h_{t}y+\alpha D _{\theta}\xi^{b}_{t}.\] (3.9)
_Note that \(F_{t,\theta}(y,z)=F_{t}(y,z)+\alpha D_{\theta}\xi^{b}_{t}-\xi^{b}_{t}\). Namely, (3.8) can be written as_
\[V^{\mathbb{F}}_{t,\theta}= \alpha L^{m}D_{\theta}\xi+\int_{t}^{T}\Big{[}F_{s}(V^{\mathbb{F}}_{s,\theta},\Pi^{\mathbb{F}}_{s,\theta})+\alpha D_{\theta}\xi^{b}_{s}-\xi^{b}_{s}\Big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{\mathbb{F}}_{s,\theta})^{\top}\mathop{}\!\mathrm{d}W_{s}.\] (3.10)
_We will show \(V^{\mathbb{F}}_{\cdot}\leq V^{\mathbb{F}}_{\cdot,\theta}\) by comparing (3.4) and (3.8), and it suffices to show that for \(\theta\leq t\),_
\[L^{m}\xi+\tilde{B}^{\epsilon}_{T}-\alpha L^{m}D_{\theta}\xi\leq 0,~{}~{}\text{a.s, }\] (3.11)
\[\xi^{b}-\alpha D_{\theta}\xi^{b}\leq 0,~{}~{}\text{a.s,}\] (3.12)
\[Z^{N}\geq 0,~{}~{}\text{a.s}.\] (3.13)
_We will show that the above inequalities hold if \(\epsilon\) is not too big. More precisely, we assume_
(3.14)
_It is easy to check_
\[\tilde{B}^{\epsilon}_{T}+L^{m}\xi-\alpha L^{m}D_{\theta}\xi= \tilde{B}^{\epsilon}_{T}+L^{m}(\tilde{S}^{1}_{T}-B^{-1}_{T}K)-( \sigma^{1})^{-1}L^{m}D_{\theta}(\tilde{S}^{1}_{T}-B_{T}^{-1}K)\]
\[= \tilde{B}^{\epsilon}_{T}+L^{m}(\tilde{S}^{1}_{T}-B_{T}^{-1}K)-L^{ m}\tilde{S}^{1}_{T}=\tilde{B}^{\epsilon}_{T}-L^{m}B_{T}^{-1}K\leq 0.\]
_Moreover, by Proposition 3.12 in Di Nunno et al. (28),_
\[\tilde{p}^{N}_{t}-\alpha D_{\theta}\tilde{p}^{N}_{t}= \mathbb{E}[\xi|\mathcal{F}_{t}]-\alpha D_{\theta}\mathbb{E}[\xi| \mathcal{F}_{t}]\]
\[= \mathbb{E}[\xi|\mathcal{F}_{t}]-\alpha\mathbb{E}[D_{\theta}\xi| \mathcal{F}_{t}]\]
\[= \mathbb{E}[\xi-\alpha D_{\theta}\xi|\mathcal{F}_{t}].\]
_Moreover, \(Z^{N}=D\tilde{p}^{N}=\sigma^{1}\tilde{S}^{1}\geq 0\). Then, it follows that_
\[\xi^{b}_{t}-\alpha D_{\theta}\xi^{b}_{t}= (h-h^{H}L^{H}L^{m})\mathbb{E}[\xi-\alpha D_{\theta}\xi|\mathcal{F }_{t}]+h\tilde{B}^{\epsilon}_{t}\]
\[= h(\tilde{B}^{\epsilon}_{t}-B_{T}^{-1}K)+h^{H}L^{H}L^{m}B_{T}^{-1}K\] (3.15)
_The right-hand side of (3.15) is non-positive under assumption (3.14), so (3.12) holds. Therefore, by the comparison principle, for \(\theta\leq t\), \(V^{\mathbb{F}}_{t}\leq V^{\mathbb{F}}_{t,\theta}\). In particular, \(V^{\mathbb{F}}_{t}\leq V^{\mathbb{F}}_{t,t}=\alpha D_{t}V^{\mathbb{F}}_{t}=\alpha\Pi^{\mathbb{F}}_{t}\). Thus, (3.3) is guaranteed and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). Moreover, \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) follows the linear BSDE (3.4), so we can find an analytic form of \((Y^{\mathbb{F}},Z^{\mathbb{F}})\), and hence of \((V,\tilde{\pi})\) as well. In addition, \(\text{FBA}=0\). ∎_
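For concreteness, the analytic form mentioned above can be sketched explicitly. The driver (3.5) is affine in \((y,z)\), so the standard representation of linear BSDEs (see, e.g., El Karoui et al. (29)) suggests, under the integrability conditions above,
\[V^{\mathbb{F}}_{t}=A^{-1}_{t}\mathbb{E}^{b}\bigg{[}A_{T}\big{(}L^{m}\xi+\tilde{B}^{\epsilon}_{T}\big{)}+\int_{t}^{T}A_{s}\xi^{b}_{s}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{t}\bigg{]},\qquad A_{t}\coloneqq\exp\Big{(}-\int_{0}^{t}(s^{b}_{u}+h_{u})\mathop{}\!\mathrm{d}u\Big{)},\]
where \(\mathbb{E}^{b}\) denotes expectation under the measure for which \(W^{b}_{t}\coloneqq W_{t}-\int_{0}^{t}\alpha_{s}s^{b}_{s}\mathop{}\!\mathrm{d}s\) is a Brownian motion. This is only a sketch; the measure changes and discounting factors are spelled out in Sections 4.1 and 4.3.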
**Remark 3.2**.:
1. _If_ \(\epsilon\geq\epsilon^{*}\coloneqq K(B^{b}_{T})^{-1}\)_, one may want to consider_ \((Y^{\#},Z^{\#})\) _given by_ \[Y^{\#}_{t}= -\int_{t}^{T}\bigg{[}\big{(}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s}+\tilde{B}^{\epsilon}_{s}-\alpha(Z^{\#}_{s}+Z^{N}_{s})\big{)}s^{\ell}-s^{\epsilon}\tilde{B}^{\epsilon}_{s}+hY^{\#}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\] \[-\int_{t}^{T}h^{H}L^{H}L^{m}\tilde{p}^{N}_{s}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s},\] _and obtain the opposite inequalities of (3.11) and (3.12). In this case,_ \(\text{FCA}=0\)_. Thus, the binary funding impacts depend on the value of the initial portfolio,_ \(\epsilon\)_._
2. _If inequalities such as (3.2) are not satisfied, then instead of (3.1) in the above example, one should consider_ \[\xi^{b}\coloneqq h\tilde{B}^{\epsilon}+\alpha(1-L^{m})s^{b}Z^{N}+h\tilde{p}^{N}-\mathds{1}_{\tilde{p}^{N}+\tilde{e}^{E}\geq 0,\tilde{e}^{E}\geq 0}h^{H}L^{H}L^{m}\tilde{p}^{N}\] \[-\mathds{1}_{\tilde{p}^{N}+\tilde{e}^{E}\leq 0,\tilde{e}^{E}\leq 0}h^{C}L^{C}L^{m}\tilde{p}^{N}-\mathds{1}_{\tilde{p}^{N}+\tilde{e}^{E}\geq 0,\tilde{e}^{E}\leq 0}L^{m}(h^{H}L^{H}\tilde{p}^{N}+(h^{H}L^{H}-h^{C}L^{C})\tilde{e}^{E})\] \[-\mathds{1}_{\tilde{p}^{N}+\tilde{e}^{E}\leq 0,\tilde{e}^{E}\geq 0}L^{m}(h^{C}L^{C}\tilde{p}^{N}+(h^{C}L^{C}-h^{H}L^{H})\tilde{e}^{E}).\] _If the hedger and the counterparty are both major banks with similar credit risks, so that we can assume_ \(h^{H}L^{H}=h^{C}L^{C}\)_, we obtain the same result as in_ Example 3.1_._
3. _Note that in the above example,_ \(\text{DVA}\not=0\)_. Consider the same market conditions but_ \[\textfrak{D}^{N}=\mathds{1}_{\llbracket T,\infty\llbracket}(K-S^{1}_{T}).\] _Namely, the hedger is in a long position in the stock forward contract. We can argue in the same way for the opposite inequality of (3.3). In this case,_ \(\text{FCA}=0\)_, and still_ \(\text{DVA}\not=0\)_. Note that FVA is either_ \(-\)_FBA or FCA, so FCA and FBA can hardly be expected to offset each other to a substantial extent. This binary nature of FVA is a source of the asset/liability asymmetry of FCA/FBA accounting._
4. Example 3.1 _also tells us how and when FBA and DVA differ. FBA, as the counterpart of FCA, reduces FCA and originates from the bank’s default risk. DVA is also a benefit from the bank’s default risk, but it is hard to monetize DVA before the bank actually defaults. For these reasons, it is often believed that FBA and DVA overlap, and DVA is not considered in derivative transactions. However,_ Example 3.1 _shows that FBA and DVA have different mathematical structures. DVA is a benefit from the possibility that the bank may default on its derivative payables; thus, DVA occurs where_ \(\xi\geq 0\)_. On the other hand, FBA occurs where the opposite inequality of (3.11) holds. To understand the meaning of (3.11), we set_ \(\epsilon=0\) _and consider_ \(\xi=\psi(\tilde{S}^{1}_{T})\) _for some smooth function_ \(\psi\colon\mathbb{R}\to\mathbb{R}_{+}\)_. Then, FBA occurs where_ \[\xi-\alpha D_{\theta}\xi\geq 0,\] (3.16) _where_ \(\alpha=(\sigma^{1})^{-1}\) _in_ Example 3.1_. (3.16) can be rewritten as_ \[\psi(\tilde{S}^{1}_{T})-\psi^{\prime}(\tilde{S}^{1}_{T})\tilde{S}^{1}_{T}\geq 0,\] (3.17) _and (3.17) in turn can be rewritten as_ \[\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d}x}\ln{\psi(x)}\leq\frac{1}{x},~{}~{}\forall x\geq 0.\] (3.18) _One sufficient condition for (3.18) is that_ \(\psi\) _is non-increasing. Similarly, consider_ \(\xi=\Psi(\tilde{S}^{1}_{T})\)_, where_ \(\Psi(x)\coloneqq a(x-k)\) _for some_ \(a\in\mathbb{R}\) _and_ \(k>0\)_. In this case, (3.16) is equivalent to_ \(a\leq 0\)_. In summary, DVA occurs where the payoff is positive, while FBA occurs where the payoff is non-increasing. A small numerical check of this classification is sketched after this remark._
5. _Another reason for the belief that FBA overlaps with DVA may be the convention in the literature that all assets can be traded in repo markets, i.e.,_ \(\rho=I\)_. Recall that_ \(\alpha=0\) _when_ \(\rho=I\) _(recall_ Remark 2.24_). When_ \(\alpha=0\)_, by (3.16), FBA also occurs where_ \(\xi\geq 0\)_. However, we cannot guarantee that repo markets are always available. Indeed, there are some difficulties in using equities as repo collateral: the traded amounts of equities are smaller than those of fixed-income securities, and there is no generally accepted method for valuing equities as collateral. For these reasons, equity repo markets are often limited to equity indices and baskets containing many securities._
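The classification in item 4 can be made concrete with a minimal numerical sketch. The Python snippet below applies the two criteria, DVA where the hedger's payable \(\xi\) is positive and FBA where \(\psi(x)-\psi^{\prime}(x)x\geq 0\), to the four basic option positions, with \(\xi\) interpreted as the hedger's discounted payable as a function of \(x=\tilde{S}^{1}_{T}\) (the sign convention of item 3). The strike, grid and tolerance are hypothetical choices for illustration only; the output is consistent with Table 1 in Section 4.

```python
# Hypothetical illustration of Remark 3.2(4): DVA occurs where the hedger's payable
# is positive; FBA occurs where psi(x) - psi'(x)*x >= 0 (conditions (3.16)-(3.17)).
import numpy as np

K = 100.0
# hedger's discounted payable xi as a function of x = discounted stock at maturity
payables = {
    "buy/call":  lambda x: -np.maximum(x - K, 0.0),
    "sell/call": lambda x:  np.maximum(x - K, 0.0),
    "buy/put":   lambda x: -np.maximum(K - x, 0.0),
    "sell/put":  lambda x:  np.maximum(K - x, 0.0),
}

x = np.linspace(1.0, 300.0, 3001)
h, tol = 1e-4, 1e-8
for name, psi in payables.items():
    p = psi(x)
    dp = (psi(x + h) - psi(x - h)) / (2.0 * h)               # finite-difference psi'
    dva = "positive" if np.any(p > tol) else "nil"           # payable positive somewhere
    fba = "positive" if np.any(p - dp * x > tol) else "nil"  # (3.17) holds somewhere
    print(f"{name:9s}  FBA: {fba:8s}  DVA: {dva}")
```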
**Remark 3.3**.: _To the best of our knowledge, Malliavin differentiability of ABSDEs has not been studied. In the case of clean close-out, the absence of initial margin is merely for simplicity. However, when we consider initial margin under replacement close-out, we cannot use the same method as in Example 3.1._
### Main Theorems
In what follows we focus on funding impacts among the incremental effects, so we assume that
\[e^{E}=0,\]
i.e., \(\text{CVA}^{\Delta}=\text{CVA}\) and \(\text{DVA}^{\Delta}=\text{DVA}\). However, \(\epsilon\) may not be zero. Under this restriction, we give the main theorems showing that either FBA\(=0\) or FCA\(=0\). We consider deterministic default intensities, volatility, and funding spreads, but \((r_{t})_{t\geq 0}\) does not need to be deterministic, so the results apply to cases where \((r_{t})_{t\geq 0}\) is a general \(\mathbb{F}\)-adapted process. These assumptions are mainly for simplicity; one case of stochastic default intensities is reported in the Appendix. Moreover, we consider derivatives of European style. For derivatives that have cash-flows at multiple times, we can divide the interval \([0,T]\) according to the times of the cash-flows. For example, if \(\textfrak{D}^{N}_{t}=\sum_{i=1}^{N}\mathds{1}_{T_{i}\leq t}\xi^{i}\), then for \(t\in(T_{i-1},T_{i})\) we consider the following BSDE:
\[Y^{\mathbb{F}}_{t}=Y^{\mathbb{F}}_{T_{i}}+\int_{t}^{T_{i}}g^{ \mathbb{F}}_{s}(Y^{\mathbb{F}}_{s},Z^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s+ \int_{t}^{T_{i}}(Z^{\mathbb{F}}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s}.\] (3.19)
Then we can apply the following theorems to each BSDE (3.19). The idea of the proofs is similar to Example 3.1, and the proofs are given in the Appendix. We start with _clean close-out_.
**Theorem 3.4** (_Clean close-out_).: _We assume \(h^{H}\), \(h^{C}\), \(s^{\ell}\), \(s^{b}\) are deterministic and bounded. Moreover, \(\alpha\) is also deterministic and \(\alpha s^{\ell}\), \(\alpha s^{b}\) are bounded. Consider clean close-out, i.e., \(e^{N}=p^{N}\), and \(B^{-1}\textfrak{D}^{N}=\mathds{1}_{T\leq t}\xi\). In addition, we assume_
\[\xi\in\mathbb{L}_{T}^{2}\cap\mathbb{D}^{1,2},~{}~{}D_{\theta}\xi \in(\mathbb{D}^{1,2})^{n}~{}~{}\forall\theta\leq T,\] (3.20)
\[\mathbb{E}\bigg{[}\int_{0}^{T}\big{|}D_{\theta}\xi\big{|}^{2} \mathop{}\!\mathrm{d}\theta\bigg{]}<\infty,\] (3.21)
\[\mathbb{E}\bigg{[}\int_{0}^{T}\int_{0}^{T}\big{|}D_{t}(D_{\theta} \xi)\big{|}^{2}\mathop{}\!\mathrm{d}\theta\mathop{}\!\mathrm{d}t\bigg{]}<\infty.\] (3.22)
_We assume that \(\mathds{1}_{\tilde{p}^{N}=0}=0\), \(\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t\) a.s. Let_
\[\xi^{b}\coloneqq s^{b}(1-L^{m})\alpha^{\top}Z^{N}+h\tilde{B}^{\epsilon}+(h-h^{H}L ^{H}L^{m})(\tilde{p}^{N})^{+}-(h-h^{C}L^{C}L^{m})(\tilde{p}^{N})^{-},\]
\[\xi^{\ell}\coloneqq s^{\ell}(1-L^{m})\alpha^{\top}Z^{N}+h\tilde{B}^{\epsilon}+(h-h^{ H}L^{H}L^{m})(\tilde{p}^{N})^{+}-(h-h^{C}L^{C}L^{m})(\tilde{p}^{N})^{-}.\]
_(i) Assume that for any \(\theta\leq T\),_
\[L^{m}(\xi-\alpha_{\theta}^{\top}D_{\theta}\xi)+\tilde{B}^{ \epsilon}_{T}\leq 0,~{}~{}\text{a.s},\] (3.23)
\[\alpha^{\top}Z^{N}\geq 0,~{}~{}\text{a.s,}\] (3.24)
\[\xi^{b}-\alpha_{\theta}^{\top}D_{\theta}\xi^{b}\leq 0,~{}~{}\text{a.s,}\] (3.25)
_then there exists \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) that satisfies_
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s}+ \tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\#}_{s})\Big{)}s^{b}_{s}-s^{\epsilon}_{s} \tilde{B}^{\epsilon}_{s}+h_{s}Y^{\#}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}-\big{[}\mathds{1}_{\tilde{p}^{N}_{s}\geq 0}h^{H}_{s }L^{H}L^{m}+\mathds{1}_{\tilde{p}^{N}_{s}<0}h^{C}_{s}L^{C}L^{m}\big{]}\tilde{p }^{N}_{s}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\]
_and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). In particular, FBA=\(0\)._
_(ii) On the other hand, assume that for any \(\theta\leq T\),_
\[L^{m}(\xi-\alpha_{\theta}^{\top}D_{\theta}\xi)+\tilde{B}^{ \epsilon}_{T}\geq 0,~{}~{}\text{a.s,}\] (3.26)
\[\alpha^{\top}Z^{N}\leq 0,~{}~{}\text{a.s,}\] (3.27)
\[\xi^{\ell}-\alpha_{\theta}^{\top}D_{\theta}\xi^{\ell}\geq 0,~{}~{}\text{a.s,}\] (3.28)
_then there exists \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) that satisfies_
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s}+ \tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\#}_{s})\Big{)}s^{\ell}_{s}-s^{\epsilon}_ {s}\tilde{B}^{\epsilon}_{s}+h_{s}Y^{\#}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}-\big{[}\mathds{1}_{\tilde{p}^{N}_{s}\geq 0}h^{H}_{s }L^{H}L^{m}+\mathds{1}_{\tilde{p}^{N}_{s}<0}h^{C}_{s}L^{C}L^{m}\big{]}\tilde{p }^{N}_{s}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\]
_and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). In particular, \(\emph{FCA}=0\)._
_(iii) If the contract is un-collateralized, i.e., \(L^{m}=1\), (3.24) and (3.27) are not required in (i) and (ii)._
Recall that when we consider _replacement close-out_, \(\tilde{\Theta}^{\Delta,i}\), \(i\in\{H,C\}\), depend on \(Y^{\mathbb{F}}\). However, \(\tilde{\Theta}^{\Delta,i}(y)\) is not differentiable in \(y\). We can avoid the irregularity by considering contracts such that either \(\tilde{p}^{N}\geq 0\) or \(\tilde{p}^{N}\leq 0\), \(\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t\) a.s, i.e., options.
**Theorem 3.5** (_Replacement close-out_).: _We assume \(h^{H}\), \(h^{C}\), \(s^{\ell}\), \(s^{b}\) are deterministic and bounded. Moreover \(\alpha\) is also deterministic and \(\alpha s^{\ell}\), \(\alpha s^{b}\) are bounded. Consider replacement close-out, i.e., \(e^{N}=V_{-}-B^{\epsilon}=BY^{\mathbb{F}}+p^{N}\), and \(B^{-1}\textfrak{D}^{N}=\mathds{1}_{T\leq t}\xi\). In addition, we assume_
\[\xi\in\mathbb{L}_{T}^{2}\cap\mathbb{D}^{1,2},\] (3.29)
\[\mathbb{E}\bigg{[}\int_{0}^{T}\big{|}D_{\theta}\xi\big{|}^{2} \mathop{}\!\mathrm{d}\theta\bigg{]}<\infty.\] (3.30)
_We assume that either \(\xi\geq 0\) or \(\xi\leq 0\) a.s, i.e., we consider options._
_(i) Assume that \(\epsilon\leq 0\) and for any \(\theta\leq T\),_
\[L^{m}\xi-\alpha_{\theta}^{\top}D_{\theta}\xi\leq 0,~{}~{}\text{a .s.}\] (3.31)
_If \(\xi\geq 0\) a.s, then there exists a solution \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) that satisfies_
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s }+\tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\#}_{s})\Big{)}s^{b}_{s}-s^{\epsilon}_{ s}\tilde{B}^{\epsilon}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}\big{[}-h^{H}_{s}L^{H}L^{m}(Y^{\#}_{s}+\tilde{p}^{N} _{s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\]
_and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). On the other hand, if \(\xi\leq 0\) a.s, then there exists a solution \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) that satisfies_
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s }+\tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\#}_{s})\Big{)}s^{b}_{s}-s^{\epsilon}_{ s}\tilde{B}^{\epsilon}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}\big{[}-h^{C}_{s}L^{C}L^{m}(Y^{\#}_{s}+\tilde{p}^{N} _{s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\]
_and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). In particular, for both cases, \(\emph{FBA}=0\)._
_(ii) Assume that \(\epsilon\geq 0\) and for any \(\theta\leq T\),_
\[L^{m}\xi-\alpha_{\theta}^{\top}D_{\theta}\xi\geq 0,~{}~{}\text{a .s.}\] (3.32)
_If \(\xi\geq 0\) a.s, then there exists a solution \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) that satisfies_
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s }+\tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\#}_{s})\Big{)}s^{\ell}_{s}-s^{\epsilon }_{s}\tilde{B}^{\epsilon}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}\big{[}-h^{H}_{s}L^{H}L^{m}(Y^{\#}_{s}+\tilde{p}^{N} _{s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\]
_and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). On the other hand, if \(\xi\leq 0\) a.s, then there exists a solution \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) that satisfies_
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s }+\tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\#}_{s})\Big{)}s^{\ell}_{s}-s^{\epsilon }_{s}\tilde{B}^{\epsilon}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}\big{[}-h^{C}_{s}L^{C}L^{m}(Y^{\#}_{s}+\tilde{p}^{N} _{s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\]
_and \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). In particular, for both cases, \(\emph{FCA}=0\)._
## 4 Examples and a Closed-form Solution
Many standard derivatives satisfy the conditions in Theorem 3.4 and Theorem 3.5. We will apply the main theorems to several derivatives and provide a closed-form solution for a call option. In what follows, for \(i\in I\), we denote
\[\tilde{S}^{i}\coloneqq B^{-1}S^{i}.\]
Moreover, recall that in the main theorems, we defined \(\xi\) by
\[\xi\coloneqq B_{T}^{-1}\Delta\textfrak{D}^{N}_{T}.\]
### Clean close-out
Banks buy Treasury bonds that return less than their funding rate. It was argued in Hull and White (32) that this shows that FVA should not be considered in derivative prices. We will show that when buying bonds, FCA\(=0\) for the hedger. Therefore, if we assume
\[R^{\ell}=r,\]
as in Burgard and Kjaer (20), the fair price for the hedger is approximately the same as the bond price obtained by discounting with the Treasury rate. Recall that, in the main theorems, we only assume that \(s^{\ell}\) and \(s^{b}\) are deterministic. As long as the spreads are deterministic, we can apply the theorems to interest rate derivatives.
**Example 4.1** (Buying a Treasury bond).: _Let us consider a hedger buying a Treasury bond with unit notional amount, i.e., \(\textfrak{D}^{N}=-\mathds{1}_{\llbracket T,\infty\llbracket}\). We assume that for \(i\in\{H,C\}\), \(\mathop{}\!\mathrm{d}G^{i}_{t}=-h^{i}_{t}G^{i}_{t}\mathop{}\!\mathrm{d}t\), where \((h^{i}_{t})_{t\geq 0}\) are deterministic processes. Moreover, we assume that the OIS rate \(r\) follows the Vasicek dynamics_
\[\mathop{}\!\mathrm{d}r_{t}=\kappa(\theta-r_{t})\mathop{}\!\mathrm{d}t+\Sigma\mathop{}\!\mathrm{d}W_{t},\]
_for some \(\kappa,\theta,\Sigma>0\). Thus,_
\[\sigma^{1}= \sigma^{H}=\sigma^{C},\]
\[\sigma^{1}_{t}= -\frac{\Sigma[1-e^{-\kappa(T-t)}]}{\kappa}.\]
_Moreover, we assume \(\rho=\emptyset\). Then \(\sum_{i\in I}\tilde{\pi}^{i}\sigma^{i}=\sigma^{1}(\sum_{i\in I}\tilde{\pi}^{i})\). Therefore,_
\[\phi(z)=\alpha(z+Z^{N})=(\sigma^{1})^{-1}(z+Z^{N}).\]
_Since_
\[r_{t}=r_{0}e^{-\kappa t}+\theta(1-e^{-\kappa t})+\Sigma\int_{0}^ {t}e^{-\kappa(t-u)}\mathop{}\!\mathrm{d}W_{u},\]
_by Corollary 3.19 in Di Nunno et al. (28), for any \(\theta\leq t\), \(D_{\theta}r_{t}=\Sigma e^{-\kappa(t-\theta)}\), and it follows that_
\[D_{\theta}B^{-1}_{t}= -B_{t}^{-1}\int_{\theta}^{t}D_{\theta}r_{s}\mathop{}\!\mathrm{d}s\]
\[= -B_{t}^{-1}\Sigma\int_{\theta}^{t}e^{-\kappa(s-\theta)}\mathop{} \!\mathrm{d}s\]
\[= -B_{t}^{-1}\frac{\Sigma[1-e^{-\kappa(t-\theta)}]}{\kappa}.\]
_Recalling that \(\xi=-B^{-1}_{T}\),_
\[\xi-\alpha_{\theta}D_{\theta}\xi= \xi-(\sigma^{1}_{\theta})^{-1}D_{\theta}\xi\]
\[= -B_{T}^{-1}-(\sigma^{1}_{\theta})^{-1}B_{T}^{-1}\frac{\Sigma[1-e^{-\kappa(T-\theta)}]}{\kappa}=0.\]
_It follows that for any \(\theta\leq t\), \(\tilde{p}^{N}_{t}-\alpha_{\theta}D_{\theta}\tilde{p}^{N}_{t}=0\). Moreover,_
\[\alpha_{t}^{\top}Z^{N}_{t}=(\sigma^{1}_{t})^{-1}D_{t}\tilde{p}^{N}_{t}=-\tilde{S}^{1}_{t}<0.\]
_Therefore, by (ii) in Theorem 3.4, FCA\(=0\) when \(\epsilon\geq 0\). Hence, if the initial value of the legacy portfolio is non-negative, the trader does not enter a borrowing position and there is no FVA to be recouped. ∎_
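The closed-form Malliavin derivative used above can be checked numerically. The sketch below simulates one Euler path of the Vasicek dynamics assumed in this example (with hypothetical parameter values), bumps the Brownian increment at a fixed time \(\theta\), and compares the finite-difference sensitivity of \(B_{T}^{-1}\) with the formula \(D_{\theta}B_{T}^{-1}=-B_{T}^{-1}\Sigma[1-e^{-\kappa(T-\theta)}]/\kappa\).

```python
# Finite-difference check of D_theta B_T^{-1} for the Vasicek OIS rate of Example 4.1.
# All parameter values are hypothetical; bumping a single Brownian increment
# approximates the Malliavin derivative in the direction of time theta.
import numpy as np

rng = np.random.default_rng(0)
kappa, theta_bar, Sigma = 0.8, 0.03, 0.01   # hypothetical Vasicek parameters
r0, T, n = 0.02, 5.0, 5000
dt = T / n
grid = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), n)

def B_T_inverse(dW):
    """B_T^{-1} = exp(-int_0^T r_u du) along an Euler path of dr = kappa(theta - r)dt + Sigma dW."""
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        r[i + 1] = r[i] + kappa * (theta_bar - r[i]) * dt + Sigma * dW[i]
    return np.exp(-np.sum(r[:-1]) * dt)

base = B_T_inverse(dW)
j = n // 3                                   # bump time theta = grid[j]
theta = grid[j]
h = 1e-6
dW_bumped = dW.copy()
dW_bumped[j] += h
finite_diff = (B_T_inverse(dW_bumped) - base) / h
closed_form = -base * Sigma * (1.0 - np.exp(-kappa * (T - theta))) / kappa
print(finite_diff, closed_form)              # the two numbers should be close
```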
The next example is a general form of Example 3.1.
**Example 4.2** (Combination of forward contracts).: _Let \(n=1\), \(\rho=\{H,C\}\), and the traded assets are given by_
\[\mathop{}\!\mathrm{d}S^{1}_{t}= rS^{1}_{t}\mathop{}\!\mathrm{d}t+\sigma^{1}_{t}S^{1}_{t}\mathop{ }\!\mathrm{d}W_{t},\]
\[\mathop{}\!\mathrm{d}S^{i}_{t}= rS^{i}_{t}\mathop{}\!\mathrm{d}t+S^{i}_{t-}\mathop{}\!\mathrm{d} M^{i}_{t},~{}i\in\{H,C\}.\]
_Since \(\rho=\{H,C\}\), \(\sum_{i\in I}\tilde{\pi}^{i}\sigma^{i}=\sigma^{1}\tilde{\pi}^{1}\) and \(\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}=\tilde{\pi}^{1}\). Thus,_
\[\phi_{t}(z)= \alpha_{t}(z+Z^{N}_{t}),\]
\[\alpha_{t}= (\sigma^{1}_{t})^{-1}.\]
_We consider a combination of forward contracts: \(\textfrak{D}^{N}_{t}=\mathds{1}_{T\leq t}\sum_{i=1}^{N}\omega_{i}(S^{1}_{T}-K_ {i})\), where \(\omega_{i},K_{i}\in\mathbb{R}\). We assume that parameters are deterministic and_
\[s^{b}\geq 0,~{}~{}h^{H}\geq 0,~{}~{}h^{C}\geq 0,\]
\[\sum^{N}_{i=1}\omega_{i}\leq 0,~{}~{}\sum^{N}_{i=1}\omega_{i}K_{i }\leq 0.\]
_By the definition of clean price,_
\[\tilde{p}^{N}_{t}= \sum_{i=1}^{N}\mathbb{E}\big{[}\omega_{i}B_{T}^{-1}(S^{1}_{T}-K_{ i})\big{|}\mathcal{F}_{t}\big{]}\]
\[= \sum_{i=1}^{N}\omega_{i}(\tilde{S}_{t}-B^{-1}_{T}K_{i}).\]
_Since \(\mathop{}\!\mathrm{d}\tilde{S}^{1}_{t}=\sigma^{1}_{t}\tilde{S}^{1}_{t}\mathop{}\!\mathrm{d}W_{t}\), by Corollary 3.19 in Di Nunno et al. (28), for any \(\theta\leq t\),_
\[D_{\theta}\tilde{S}^{1}_{t}=\int_{\theta}^{t}\sigma_{s}^{1}D_{\theta}\tilde{S}^{1}_{s}\mathop{}\!\mathrm{d}W_{s}+\sigma^{1}_{\theta}\tilde{S}^{1}_{\theta}.\] (4.1)
_Since \(\sigma_{\theta}^{1}\tilde{S}^{1}_{t}\) satisfies (4.1), by the uniqueness,_
\[D_{\theta}\tilde{S}^{1}_{t}=\sigma_{\theta}^{1}\tilde{S}^{1}_{t}.\]
_Moreover, \(\alpha_{t}Z^{N}_{t}=(\sigma_{t}^{1})^{-1}Z^{N}_{t}=\tilde{S}^{1}_{t}\sum_{i} \omega_{i}\leq 0\). By straightforward calculation, for any \(\theta\leq t\),_
\[Z^{N}_{t}-\alpha_{\theta}D_{\theta}Z^{N}_{t}= \tilde{S}^{1}_{t}\sum_{i}\omega_{i}-\alpha_{\theta}D_{\theta}( \tilde{S}^{1}_{t})\sum_{i}\omega_{i}\]
\[= \tilde{S}^{1}_{t}\sum_{i}\omega_{i}-\tilde{S}^{1}_{t}\sum_{i} \omega_{i}=0,\] (4.2)
\[\tilde{p}^{N}_{t}-\alpha_{\theta}D_{\theta}\tilde{p}^{N}_{t}= \sum_{i=1}^{N}\omega_{i}(\tilde{S}_{t}-B^{-1}_{T}K_{i})-\alpha_{ \theta}D_{\theta}\big{[}\sum_{i=1}^{N}\omega_{i}(\tilde{S}_{t}-B^{-1}_{T}K_{i} )\big{]}\]
\[= -B^{-1}_{T}\sum\limits_{i=1}^{N}\omega_{i}K_{i}\leq 0.\] (4.3)
_Recall that, as defined in Theorem 3.4,_
\[\xi= \sum_{i=1}^{N}\omega_{i}(\tilde{S}^{1}_{T}-B_{T}^{-1}K_{i}),\]
\[\xi^{\ell}= s^{\ell}(1-L^{m})\alpha Z^{N}+h\tilde{B}^{\epsilon}+(h-h^{H}L^{H }L^{m})(\tilde{p}^{N})^{+}-(h-h^{C}L^{C}L^{m})(\tilde{p}^{N})^{-}.\]
_Note that \(\mathds{1}_{\tilde{p}^{N}=0}=0\), \(\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t\) a.s. Then, by (4.2) and (4.3), we obtain_
\[L^{m}(\xi-\alpha_{\theta}D_{\theta}\xi)+\tilde{B}^{\epsilon}_{T} =B_{T}^{-1}\big{(}B^{\epsilon}_{T}-L^{m}\sum_{i=1}^{N}\omega_{i}K_{i}\big{)} \geq 0,\]
\[\xi^{\ell}_{t}-\alpha D_{\theta}\xi^{\ell}_{t}=B_{T}^{-1}\bigg{(} hB^{\epsilon}_{t}-\sum_{i}\omega_{i}K_{i}\Big{[}s^{\ell}(1-L^{m})+(h-h^{H}L^{H }L^{m})\mathds{1}_{\tilde{p}^{N}>0}+(h-h^{C}L^{C}L^{m})\mathds{1}_{\tilde{p}^{ N}<0}\Big{]}\bigg{)}\geq 0.\]
_Assume that_
\[\epsilon\geq (B^{b}_{T})^{-1}q\sum_{i}\omega_{i}K_{i},\]
\[q\coloneqq \min\Big{\{}L^{m},~{}\frac{s^{\ell}(1-L^{m})+(h-h^{H}L^{H}L^{m})} {h}\Big{\}}.\] (4.4)
_Then, by (ii) in Theorem 3.4, \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) follows_
\[Y^{\mathbb{F}}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}Y^{\mathbb{F}}_{s}+L^{m}\tilde{p}^{N} _{s}+\tilde{B}^{\epsilon}_{s}-\phi_{s}(Z^{\mathbb{F}}_{s})\Big{)}s^{\ell}_{s}- s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s}+h_{s}Y^{\mathbb{F}}_{s}\bigg{]} \mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}-\big{[}\mathds{1}_{\tilde{p}^{N}_{s}\geq 0}h^{H}_{s }L^{H}L^{m}+\mathds{1}_{\tilde{p}^{N}_{s}<0}h^{C}_{s}L^{C}L^{m}\big{]}\tilde{p }^{N}_{s}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(Z^{\mathbb{F}}_{s})^{\top}\mathop {}\!\mathrm{d}W_{s},\] (4.5)
_and FCA\(=0\). ∎_
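As a quick numerical illustration of Example 4.2, the sketch below checks the sign conditions on the weights and strikes and evaluates the lower bound on \(\epsilon\) in (4.4). The specific weights, strikes, rates and loss parameters are hypothetical, and the identifications \(h=h^{H}+h^{C}\) and \(B^{b}_{T}=e^{(r+s^{b})T}\) (constant rates and spreads) are assumptions made only for this illustration.

```python
# Hypothetical parameter check for Example 4.2 (combination of forward contracts).
import numpy as np

omega = np.array([-1.0, 0.5, -0.3])      # hypothetical forward positions
K     = np.array([100.0, 90.0, 110.0])   # hypothetical strikes
r, s_b, s_l, T = 0.01, 0.03, 0.015, 2.0  # hypothetical OIS rate and funding spreads
h_H, h_C, L_H, L_C, L_m = 0.02, 0.03, 0.6, 0.6, 0.4
h = h_H + h_C                            # assumption on the definition of h

assert omega.sum() <= 0 and (omega * K).sum() <= 0, "sign conditions of Example 4.2 violated"

B_b_T = np.exp((r + s_b) * T)            # assumption: constant borrowing rate r + s_b
q = min(L_m, (s_l * (1.0 - L_m) + (h - h_H * L_H * L_m)) / h)
eps_bound = q * (omega * K).sum() / B_b_T
print(f"q = {q:.4f}; condition (4.4) requires eps >= {eps_bound:.4f}")
```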
To find an analytic form of (4.5), let \(V^{\mathbb{F}}\coloneqq Y^{\mathbb{F}}+\tilde{p}^{N}\), \(\Pi^{\mathbb{F}}\coloneqq Z^{\mathbb{F}}+Z^{N}\), and let \(\mathbb{Q}^{\ell}\) denote an equivalent measure under which each \((B^{\ell})^{-1}S^{i}\) is a \((\mathbb{Q}^{\ell},\mathbb{G})\)-local martingale. In particular,
\[W^{\ell}_{t}\coloneqq W_{t}-\int_{0}^{t}\alpha_{s}s^{\ell}_{s} \mathop{}\!\mathrm{d}s,\]
is an \((\mathbb{F},\mathbb{Q}^{\ell})\)-Brownian motion. Then (4.5) becomes
\[V^{\mathbb{F}}_{t}= \xi+\int_{t}^{T}\Big{[}-(s^{\ell}_{s}+h_{s})V^{\mathbb{F}}_{s}+(s ^{\epsilon}_{s}-s^{\ell}_{s})\tilde{B}^{\epsilon}_{s}+\beta_{s}\tilde{p}^{N}_{ s}\Big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}\Pi^{\mathbb{F}}_{s}\mathop{}\! \mathrm{d}W^{\ell}_{s},\]
\[\beta_{t}\coloneqq (1-L^{m})s^{\ell}_{t}+h_{t}-\mathds{1}_{\tilde{p}^{N}_{t}\geq 0}h^{H}_{t}L^{H}L^{m}-\mathds{1}_{\tilde{p}^{N}_{t}<0}h^{C}_{t}L^{C}L^{m}.\]
Let \(A_{t}\coloneqq\exp{\big{[}-\int_{0}^{t}(s^{\ell}_{s}+h_{s})\mathop{}\!\mathrm{ d}s\big{]}}\). Then
\[V^{\mathbb{F}}_{t}=A^{-1}_{t}\mathbb{E}^{\ell}\bigg{[}A_{T}\xi+ \int_{t}^{T}A_{s}\big{[}(s^{\epsilon}_{s}-s^{\ell}_{s})\tilde{B}^{\epsilon}_{s }+\beta_{s}\tilde{p}^{N}_{s}\big{]}\mathop{}\!\mathrm{d}s\bigg{|}\mathcal{F}_{ t}\bigg{]},\] (4.6)
where \(\mathbb{E}^{\ell}\) is the expectation under \(\mathbb{Q}^{\ell}\). Representation (4.6) reduces the computational cost because it replaces backward simulation with forward simulation: we no longer need to compute conditional expectations at each time step when solving the BSDE numerically. However, the advantage stops there under the _clean close-out_ convention, and we cannot find a closed-form solution of \(V^{\mathbb{F}}\). This is because of the mismatch of the pricing measures in
\[\mathbb{E}^{\ell}[\beta_{s}\tilde{p}^{N}_{s}\big{|}\mathcal{F}_{t }]=\mathbb{E}^{\ell}[\mathbb{E}[\beta_{s}\xi|\mathcal{F}_{s}]\big{|}\mathcal{F }_{t}].\]
To avoid this difficulty, Brigo et al. (14) considered un-collateralized contracts with null cash-flow at defaults. On the other hand, Bichuch et al. (5) assumed that the close-out amount and collateral are calculated by the risk-neutral price under \(\mathbb{Q}^{\ell}\) (or \(\mathbb{Q}^{b}\)), namely
\[e^{N}_{t}= (B^{\ell}_{t})^{-1}\mathbb{E}^{\ell}[(B^{\ell}_{T})^{-1}\zeta| \mathcal{F}_{t}],\] (4.7)
\[\zeta\coloneqq \Delta\textfrak{D}^{N}_{T},\] (4.8)
\[m^{N}_{t}= (1-L^{m})e^{N}_{t}.\] (4.9)
In these cases, the pricing measures are aligned and a closed-form solution is allowed. However, note that (4.7) is
\[\textit{clean price}+\text{"the hedger's FVA"}.\]
As we will see later, when _replacement close-out_ is assumed, the inconsistency of pricing measures does not appear and closed-form solutions are available. Indeed, the hedger’s funding information is already incorporated in \(V_{-}\), so the consistency of pricing measures is inherent in _replacement close-out_.
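Before moving on, here is a minimal sketch of how (4.6) can be evaluated by forward Monte Carlo simulation in the setting of Example 4.2, using the fact that under \(\mathbb{Q}^{\ell}\) the discounted stock drifts at \(s^{\ell}\) (since \(\sigma^{1}\alpha=1\)). All numerical parameter values, the constant-rate assumption \(\tilde{B}^{\epsilon}_{t}=\epsilon e^{s^{b}t}\), and the identification \(h=h^{H}+h^{C}\) are assumptions introduced only for this illustration.

```python
# Hypothetical forward Monte Carlo evaluation of (4.6) for a single short forward.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 200
r, s_l, s_b, s_eps = 0.01, 0.015, 0.03, 0.0          # hypothetical rates and spreads
h_H, h_C, L_H, L_C, L_m = 0.02, 0.03, 0.6, 0.6, 0.4
h = h_H + h_C                                        # assumption on the definition of h
sigma1, S0, T, eps = 0.2, 100.0, 2.0, -0.5
omega, K = np.array([-1.0]), np.array([100.0])       # short one forward on S^1

dt = T / n_steps
t = np.linspace(dt, T, n_steps)
B_T_inv = np.exp(-r * T)
A = np.exp(-(s_l + h) * t)                           # A_t = exp(-int_0^t (s_l + h) du)
B_eps_tilde = eps * np.exp(s_b * t)                  # assumption: tilde B^eps_t = eps e^{s_b t}

# discounted stock under Q^l: d tilde S = s_l tilde S dt + sigma1 tilde S dW^l
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
S_tilde = S0 * np.exp(np.cumsum((s_l - 0.5 * sigma1**2) * dt + sigma1 * dW, axis=1))

p_N = (S_tilde[..., None] - B_T_inv * K).dot(omega)  # clean price tilde p^N_t
beta = (1.0 - L_m) * s_l + h - np.where(p_N >= 0, h_H * L_H * L_m, h_C * L_C * L_m)
xi = p_N[:, -1]                                      # terminal sum_i omega_i (S~_T - B_T^{-1} K_i)

integrand = A * ((s_eps - s_l) * B_eps_tilde + beta * p_N)
V_F0 = np.mean(A[-1] * xi + integrand.sum(axis=1) * dt)
print(f"Monte Carlo estimate of V^F_0: {V_F0:.4f}")
```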
### Replacement Close-out
In the next example, we deal with a non-Markovian case; this illustrates one benefit of the BSDE and Malliavin calculus approach.
**Example 4.3** (Floating strike Asian call option with _replacement close-out_).: _We consider the same market conditions as in Example 4.2 except that_
\[e^{N}=V_{-}-B^{\epsilon}.\] (4.10)
_Moreover, we consider a floating strike Asian call option:_
\[\textfrak{D}^{N}= \mathds{1}_{\llbracket T,\infty\llbracket}B_{T}\xi,\]
\[\xi\coloneqq (\tilde{S}^{1}_{T}-B_{T}^{-1}K\mathbb{I}_{T})^{+},\]
\[\mathbb{I}_{T}\coloneqq \exp{\Big{(}\frac{1}{T}\int_{0}^{T}\ln{(S^{1}_{u})}\mathop{}\! \mathrm{d}u\Big{)}}.\]
_We will show (3.31). By Theorem 3.5 in Di Nunno et al. (28), for \(\theta\leq T\),_
\[\alpha_{\theta}D_{\theta}\mathbb{I}_{T}= \frac{\alpha_{\theta}}{T}\mathbb{I}_{T}\int_{\theta}^{T}D_{\theta }\ln{(S^{1}_{u})}\mathop{}\!\mathrm{d}u\]
\[= \frac{\alpha_{\theta}}{T}\mathbb{I}_{T}\int_{\theta}^{T}(S^{1}_{u })^{-1}D_{\theta}(S^{1}_{u})\mathop{}\!\mathrm{d}u\]
\[= \frac{\alpha_{\theta}}{T}\mathbb{I}_{T}\int_{\theta}^{T}(S^{1}_{u })^{-1}\sigma^{1}_{\theta}S^{1}_{u}\mathop{}\!\mathrm{d}u\]
\[= \frac{T-\theta}{T}\mathbb{I}_{T}.\]
_Since \(\alpha_{\theta}D_{\theta}\tilde{S}^{1}_{T}=\tilde{S}^{1}_{T}\), we attain that_
\[L^{m}\xi-\alpha_{\theta}D_{\theta}\xi= \mathds{1}_{S^{1}_{T}\geq K\mathbb{I}_{T}}\big{[}L^{m}(\tilde{S}^{1}_{T}-B^{-1}_{T}K\mathbb{I}_{T})-\alpha_{\theta}D_{\theta}(\tilde{S}^{1}_{T}-B^{-1}_{T}K\mathbb{I}_{T})\big{]}\]
\[= \mathds{1}_{S^{1}_{T}\geq K\mathbb{I}_{T}}\bigg{[}(L^{m}-1)\tilde{S}^{1}_{T}-B^{-1}_{T}K\mathbb{I}_{T}\Big{(}L^{m}-\frac{T-\theta}{T}\Big{)}\bigg{]}\]
\[\leq \mathds{1}_{S^{1}_{T}\geq K\mathbb{I}_{T}}\big{[}(L^{m}-1)\tilde{ S}^{1}_{T}-B^{-1}_{T}K\mathbb{I}_{T}(L^{m}-1)\big{]}\]
\[= (L^{m}-1)(\tilde{S}^{1}_{T}-B^{-1}_{T}K\mathbb{I}_{T})^{+}\leq 0.\]
_Therefore, by (i) in Theorem 3.5, \(\text{FBA}=0\) when \(\epsilon\leq 0\). ∎_
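The identity \(\alpha_{\theta}D_{\theta}\mathbb{I}_{T}=\frac{T-\theta}{T}\mathbb{I}_{T}\) used above can also be verified numerically by bumping a single Brownian increment, as in the following sketch. All parameter values are hypothetical.

```python
# Finite-difference check of alpha_theta * D_theta I_T = ((T - theta)/T) * I_T (Example 4.3).
import numpy as np

rng = np.random.default_rng(2)
r, sigma1, S0, T, n = 0.01, 0.2, 100.0, 1.0, 4000   # hypothetical parameters
dt = T / n
grid = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(dt), n)

def I_T(dW):
    """Geometric average exp((1/T) int_0^T ln S_u du) along a log-Euler GBM path."""
    logS = np.log(S0) + np.concatenate(([0.0], np.cumsum((r - 0.5 * sigma1**2) * dt + sigma1 * dW)))
    return np.exp(np.sum(logS[:-1]) * dt / T)

base = I_T(dW)
j = n // 4
theta = grid[j]
h = 1e-6
dW_b = dW.copy()
dW_b[j] += h
lhs = (I_T(dW_b) - base) / (h * sigma1)             # ~ alpha_theta * D_theta I_T
rhs = (T - theta) / T * base
print(lhs, rhs)                                     # the two values should agree closely
```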
As the last example, we deal with a bond option.
**Example 4.4** (Bond option with _replacement close-out_).: _We assume the same market conditions as in Example 4.1. Let \(\textfrak{D}^{N}_{t}=\mathds{1}_{T\leq t}(S^{1}_{T,U}-K)^{+}\), where \(K>0\) and \(S^{1}_{\cdot,U}\) is a zero coupon bond with \(U>T\) as its maturity. We consider two defaultable bonds with the same maturity, i.e.,_
\[S^{1}_{t,U}\coloneqq B_{t}\mathbb{E}\big{[}B_{U}^{-1}|\mathcal{F}_{t}\big{]},\]
\[S^{i}_{t,U}\coloneqq\]
_Recall \(\phi(z)=\alpha(z+Z^{N})=(\sigma^{1})^{-1}(z+Z^{N})\), and_
\[\sigma^{1}= \sigma^{H}=\sigma^{C},\]
\[\sigma^{1}_{t}= -\frac{\Sigma[1-e^{-\kappa(U-t)}]}{\kappa}.\]
_Moreover, recall the definition of \(\xi\):_
\[\xi=\mathds{1}_{S^{1}_{T,U}\geq K}(\tilde{S}^{1}_{T,U}-KB^{-1}_{T }).\]
_We can see that_
\[\alpha_{\theta}D_{\theta}(KB^{-1}_{T})= (\sigma_{\theta}^{1})^{-1}KD_{\theta}B_{T}^{-1}\]
\[= -KB_{T}^{-1}(\sigma_{\theta}^{1})^{-1}\frac{\Sigma[1-e^{-\kappa(T -\theta)}]}{\kappa}\]
\[= KB_{T}^{-1}\frac{1-e^{-\kappa(T-\theta)}}{1-e^{-\kappa(U-\theta) }}.\]
_In addition,_
\[\alpha_{\theta}D_{\theta}\tilde{S}^{1}_{T,U}=(\sigma_{\theta}^{1})^{-1}\sigma_{\theta}^{1}\tilde{S}^{1}_{T,U}=\tilde{S}^{1}_{T,U}.\]
_It follows that on \(\{S^{1}_{T,U}\geq K\}\),_
\[L^{m}\xi-\alpha_{\theta}D_{\theta}\xi= L^{m}(\tilde{S}^{1}_{T,U}-KB^{-1}_{T})-\alpha_{\theta}D_{\theta} (\tilde{S}^{1}_{T,U}-KB^{-1}_{T})\]
\[= (L^{m}-1)\tilde{S}^{1}_{T,U}-KB^{-1}_{T}\bigg{(}L^{m}-\frac{1-e^{ -\kappa(T-\theta)}}{1-e^{-\kappa(U-\theta)}}\bigg{)}\]
\[\leq (L^{m}-1)\tilde{S}^{1}_{T,U}-KB^{-1}_{T}(L^{m}-1)\]
\[= (L^{m}-1)(\tilde{S}^{1}_{T,U}-KB^{-1}_{T})\leq 0.\] (4.11)
_Therefore, by (i) in Theorem 3.5, \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) is given by_
\[Y^{\mathbb{F}}_{t}=\int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\mathbb{F }}_{s}+L^{m}\tilde{p}^{N}_{s}-\phi_{s}(Z^{\mathbb{F}}_{s})\Big{)}s^{b}_{s}+h^{ H}_{s}L^{H}L^{m}(Y^{\mathbb{F}}_{s}+\tilde{p}^{N}_{s})\bigg{]}\mathop{}\! \mathrm{d}s-\int_{t}^{T}Z^{\mathbb{F}}_{s}\mathop{}\!\mathrm{d}W_{s},\] (4.12)
_and FBA\(=0\). Note that \(\text{DVA}\not=0\) because \(\xi\geq 0\), but \(\text{FBA}=0\) when \(\epsilon\leq 0\) because \(\xi\) is non-decreasing in \(S^{1}_{T,U}\). ∎_
Table 1 shows the effects of FBA and DVA for the four basic option positions. Only when the hedger sells a put option are FBA and DVA both positive, and even then \(\text{FBA}\not=\text{DVA}\).
position | FBA | DVA
---|---|---
buy/call | positive | nil
sell/call | nil | positive
buy/put | nil | nil
sell/put | positive | positive
Table 1: DVA and FBA with respect to option contracts
### A Closed-form Solution of a Call Option under Replacement Close-out
Under _replacement close-out_, we can find a closed-form solution. As an example, we discuss the solution of a stock call option. Let \(n=1\), \(e^{N}=V_{-}-B^{\epsilon}\), \(\rho=\{H,C\}\), \(\epsilon\leq 0\), \(\textfrak{D}^{N}=\mathds{1}_{\llbracket T,\infty\llbracket}(S^{1}_{T}-K)^{+}\), where
\[\mathop{}\!\mathrm{d}S^{1}_{t}= rS^{1}_{t}\mathop{}\!\mathrm{d}t+\sigma^{1}S^{1}_{t}\mathop{}\! \mathrm{d}W_{t},\]
\[\mathop{}\!\mathrm{d}S^{i}_{t}= rS^{i}_{t}\mathop{}\!\mathrm{d}t+S^{i}_{t-}\mathop{}\!\mathrm{d} M^{i}_{t},~{}i\in\{H,C\},\]
for some constants \(r\) and \(\sigma^{1}\). We also assume that \(s^{b}\), \(h^{H}\) are constant. Recall
\[\xi= B_{T}^{-1}(S^{1}_{T}-K)^{+},\]
\[\phi_{t}(z)= \alpha(z+Z^{N}_{t})\]
\[= (\sigma^{1})^{-1}(z+Z^{N}_{t}).\]
It is easy to check that
\[L^{m}\xi-\alpha_{\theta}D_{\theta}\xi= L^{m}\xi-(\sigma^{1})^{-1}D_{\theta}\xi\]
\[= \mathds{1}_{S^{1}_{T}\geq K}\big{[}(L^{m}-1)\tilde{S}^{1}_{T}-KL^ {m}B_{T}^{-1}\big{]}\leq 0.\]
Therefore, by (i) in Theorem 3.5, FBA\(=0\) and \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) follows
\[Y^{\mathbb{F}}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\mathbb{F}}_{s}+L^{m}\tilde{p }^{N}_{s}-\phi_{s}(Z^{\mathbb{F}}_{s})\Big{)}s^{b}_{s}\bigg{]}\mathop{}\! \mathrm{d}s\]
\[+\int_{t}^{T}\big{[}-h^{H}_{s}L^{H}L^{m}(Y^{\mathbb{F}}_{s}+ \tilde{p}^{N}_{s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}Z^{\mathbb{F}}_{s} \mathop{}\!\mathrm{d}W_{s}.\] (4.13)
Let \(V^{\mathbb{F}}\coloneqq Y^{\mathbb{F}}+\tilde{p}^{N}\), \(\Pi^{\mathbb{F}}\coloneqq Z^{\mathbb{F}}+Z^{N}\). Moreover, let \(\mathbb{Q}^{b}\) denote an equivalent measure that
\[W^{b}_{t}\coloneqq W_{t}-\int_{0}^{t}s^{b}\alpha_{s}\mathop{}\!\mathrm{d}s\] (4.14)
is an \((\mathbb{F},\mathbb{Q}^{b})\)-Brownian motion and \(\mathbb{E}^{b}\) denote the expectation under \(\mathbb{Q}^{b}\). Then (4.13) becomes
\[V^{\mathbb{F}}_{t}=\xi-\int_{t}^{T}(s^{b}+h^{H}L^{H})L^{m}V^{\mathbb{F}}_{s}\mathop{}\!\mathrm{d}s-\int_{t}^{T}\Pi^{\mathbb{F}}_{s}\mathop{}\!\mathrm{d}W^{b}_{s}.\]
Thus, it follows that
\[V^{\mathbb{F}}_{t}= A^{-1}_{t}\mathbb{E}^{b}[A_{T}\xi|\mathcal{F}_{t}].\] (4.15)
\[A_{t}\coloneqq e^{-\beta t},\] (4.16)
\[\beta\coloneqq (s^{b}+h^{H}L^{H})L^{m}.\] (4.17)
Note that there is no inconsistency of pricing measures. This is because we considered the hedger’s funding cost and benefit in the close-out amount, \(e^{N}=V_{-}\). To represent (4.15) in an explicit form, write
\[\xi= (B^{b}_{T})^{-1}B^{b}_{T}B^{-1}_{T}B_{T}\xi\]
\[= e^{s^{b}T}(B^{b}_{T})^{-1}B_{T}\xi.\]
Then (4.15) becomes, for \(t<\bar{\tau}\),
\[V^{\mathbb{F}}_{t}=A_{t}^{-1}A_{T}e^{s^{b}T}(B^{b}_{t})^{-1}B_{t }^{b}\mathbb{E}^{b}[(B^{b}_{T})^{-1}B_{T}\xi|\mathcal{F}_{t}].\]
More explicitly,
\[V_{t}=B_{t}V^{\mathbb{F}}_{t}= \exp\Big{(}\big{[}(1-L^{m})s^{b}-h^{H}L^{H}L^{m}\big{]}(T-t)\Big{)}\mathcal{C}^{b}(t,S^{1}_{t}),\] (4.18)
\[\mathcal{C}^{b}(t,S^{1}_{t})\coloneqq S^{1}_{t}\Phi(d(t,S^{1}_{t}))-Ke^{-R^{b}(T-t)}\Phi\big{(}d(t,S^{ 1}_{t})-\sigma^{1}\sqrt{T-t}\big{)},\] (4.19)
\[\Phi(x)\coloneqq \int_{-\infty}^{x}e^{-y^{2}/2}/\sqrt{2\pi}\mathop{}\!\mathrm{d}y,\] (4.20)
\[d(t,x)\coloneqq \frac{\ln{(S^{1}_{t}/K)}+\big{(}R^{b}+(\sigma^{1})^{2}/2\big{)}(T -t)}{\sigma^{1}\sqrt{T-t}}.\] (4.21)
Note that \(\partial_{s^{\ell}}V_{t}^{\mathbb{F}}=0\). Moreover, in (4.18),
\[(1-L^{m})s^{b}+L^{m}(-h^{H}L^{H})\]
is a weighted sum of \(s^{b}\) and \(-h^{H}L^{H}\). It shows how the effect of DVA is transferred to FCA as \(L^{m}\) changes. As \(L^{m}\) increases, the effect of the funding cost is weakened, since the cost of posting the collateral exceeds the interest remunerated on the collateral, namely the OIS rate. In more detail,
\[\exp{\big{(}-h^{H}L^{H}L^{m}(T-t)\big{)}}\]
is a deduction from DVA, while
\[\exp{\big{(}s^{b}(1-L^{m})(T-t)\big{)}}\]
is a compensation to the hedger for posting the collateral. However, this is not the whole of FCA: the other part of FCA is the cost of acquiring \(S^{1}\), and it is included in \(\mathcal{C}^{b}\).
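The closed form (4.18)-(4.21) is straightforward to implement. The sketch below does so under the reconstructed prefactor \(\exp\{[(1-L^{m})s^{b}-h^{H}L^{H}L^{m}](T-t)\}\) in (4.18); all numerical inputs are hypothetical.

```python
# Sketch of the closed-form call price (4.18)-(4.21) under replacement close-out.
from math import log, sqrt, exp
from statistics import NormalDist

def call_price_replacement_closeout(S, K, t, T, r, s_b, sigma1, h_H, L_H, L_m):
    """Hedger's price V_t of a call option in the FBA = 0 case of Section 4.3."""
    R_b = r + s_b                                  # borrowing rate R^b
    tau = T - t
    Phi = NormalDist().cdf
    d1 = (log(S / K) + (R_b + 0.5 * sigma1**2) * tau) / (sigma1 * sqrt(tau))
    d2 = d1 - sigma1 * sqrt(tau)
    C_b = S * Phi(d1) - K * exp(-R_b * tau) * Phi(d2)            # (4.19)-(4.21)
    factor = exp(((1.0 - L_m) * s_b - h_H * L_H * L_m) * tau)    # reconstructed prefactor in (4.18)
    return factor * C_b

# hypothetical inputs
print(call_price_replacement_closeout(S=100.0, K=100.0, t=0.0, T=1.0,
                                      r=0.01, s_b=0.02, sigma1=0.2,
                                      h_H=0.02, L_H=0.6, L_m=0.4))
```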
## 5 Conclusion
In summary, we discussed the binary nature of FVA. Owing to this binary nature, we can recover linear BSDEs, and analytic solutions are available. In the case of _replacement close-out_, the analytic solution can be represented in closed form. As a byproduct, this feature of FVA explains how FBA and DVA differ. In addition, the result provides an interpretation of why banks buy Treasury bonds even when their funding rates are higher than the OIS rate.
## References
* Agarwal, A., De Marco, S., Gobet, E., López-Salas, J., Noubiagain, F. and Zhou, A. (2018). Numerical approximations of McKean anticipative backward stochastic differential equations arising in variation margin requirements. Available at HAL:01686952.
* Albanese, C. and Andersen, L. (2014). Accounting for OTC derivatives: Funding adjustments and the re-hypothecation option. Available at SSRN:2482955.
* Albanese, C., Chataigner, M. and Crépey, S. (2018). Wealth transfers, indifference pricing, and XVA compression schemes. Available at https://math.maths.univ-evry.fr/crepey/papers/wealth%20transfer-NEW.pdf.
* Andersen, L.B., Duffie, D. and Song, Y. (2017). Funding value adjustments. Available at SSRN:2746010.
* Bichuch, M., Capponi, A. and Sturm, S. (2018). Arbitrage-free XVA. Mathematical Finance 28(2), 582–620.
* Bielecki, T.R., Cialenco, I. and Rutkowski, M. (2018). Arbitrage-free pricing of derivatives in nonlinear market models. Probability, Uncertainty and Quantitative Risk 3, 1–56.
* Bielecki, T.R., Jeanblanc, M. and Rutkowski, M. (2008). Pricing and trading credit default swaps in a hazard process model. The Annals of Applied Probability 18(6), 2495–2529.
* Bielecki, T.R. and Rutkowski, M. (2013). Credit Risk: Modeling, Valuation and Hedging. Springer-Verlag Berlin Heidelberg.
* Bielecki, T.R. and Rutkowski, M. (2015). Valuation and hedging of contracts with funding costs and collateralization. SIAM Journal on Financial Mathematics 6(1), 594–655.
* Bo, L. (2017). Portfolio optimization of credit swap under funding costs. Probability, Uncertainty and Quantitative Risk 2(1), 12.
* Bo, L. and Capponi, A. (2016). Optimal credit investment with borrowing costs. Mathematics of Operations Research 42(2), 546–575.
* Bo, L., Capponi, A. and Ceci, C. (2017). Risk-minimizing hedging of counterparty risk. Available at arXiv:1709.01115.
* Bo, L., Capponi, A. and Chen, P. (2018). Credit portfolio selection with decaying contagion intensities. Mathematical Finance, 1–37.
* Brigo, D., Buescu, C. and Rutkowski, M. (2017). Funding, repo and credit inclusive valuation as modified option pricing. Operations Research Letters 45(6), 665–670.
* Brigo, D., Capponi, A. and Pallavicini, A. (2014). Arbitrage-free bilateral counterparty risk valuation under collateralization and application to credit default swaps. Mathematical Finance 24(1), 125–146.
* Brigo, D., Capponi, A., Pallavicini, A. and Papatheodorou, V. (2011). Collateral margining in arbitrage-free counterparty valuation adjustment including re-hypothecation and netting. Available at SSRN:1744101.
* Brigo, D., Francischello, M. and Pallavicini, A. (2016). Analysis of nonlinear valuation equations under credit and funding effects. In Innovations in Derivatives Markets, 37–52. Springer.
* Brigo, D., Liu, Q., Pallavicini, A. and Sloth, D. (2014). Nonlinear valuation under collateral, credit risk and funding costs: a numerical case study extending Black-Scholes. Available at SSRN:2430696.
* Brigo, D. and Morini, M. (2011). Close-out convention tensions. Risk 12, 74–78.
* Burgard, C. and Kjaer, M. (2010). Partial differential equation representations of derivatives with bilateral counterparty risk and funding costs. Journal of Credit Risk 7(3).
* Burgard, C. and Kjaer, M. (2011). In the balance. Risk 11, 72–75.
* Cameron, M. (2013). The black art of FVA: Banks spark double-counting fears. Risk 4, 15–18.
* Castagna, A. (2011). Funding, liquidity, credit and counterparty risk: Links and implications. Available at SSRN:1855028.
* Coculescu, D. and Nikeghbali, A. (2012). Hazard processes and martingale hazard processes. Mathematical Finance 22(3), 519–537.
* Crépey, S. (2015a). Bilateral counterparty risk under funding constraints—Part II: CVA. Mathematical Finance 25(1), 23–50.
* Crépey, S. (2015b). Bilateral counterparty risk under funding constraints—Part I: Pricing. Mathematical Finance 25(1), 1–22.
* Crépey, S. and Song, S. (2015). BSDEs of counterparty risk. Stochastic Processes and their Applications 125(8), 3023–3052.
* Di Nunno, G., Øksendal, B.K. and Proske, F. (2009). Malliavin Calculus for Lévy Processes with Applications to Finance. Springer-Verlag Berlin Heidelberg.
* El Karoui, N., Peng, S. and Quenez, M.C. (1997). Backward stochastic differential equations in finance. Mathematical Finance 7(1), 1–71.
* Gobet, E. and Pagliarani, S. (2015). Analytical approximations of BSDEs with nonsmooth driver. SIAM Journal on Financial Mathematics 6(1), 919–958.
* He, S.W. and Yan, J.A. (1992). Semimartingale Theory and Stochastic Calculus. Taylor & Francis.
* Hull, J. and White, A. (2012). The FVA debate. Risk 7, 83–85.
* ISDA (2009). ISDA Close-out Amount Protocol. Available at https://www.isda.org.
* Jiao, Y., Kharroubi, I. and Pham, H. (2013). Optimal investment under multiple defaults risk: a BSDE-decomposition approach. The Annals of Applied Probability 23(2), 455–491.
* Kim, E., Nie, T. and Rutkowski, M. (2018a). Arbitrage-free pricing of American options in nonlinear markets. Available at arXiv:1804.10753.
* Kim, E., Nie, T. and Rutkowski, M. (2018b). Arbitrage-free pricing of game options in nonlinear markets. Available at arXiv:1807.05448.
* Li, C. and Wu, L. (2016). FVA and CVA for collateralized trades with re-hypothecation. Wilmott 2016(83), 50–59.
* Modigliani, F. and Miller, M.H. (1958). The cost of capital, corporation finance and the theory of investment. The American Economic Review 48(3), 261–297.
* Murphy, D. (2013). OTC Derivatives: Bilateral Trading and Central Clearing: An Introduction to Regulatory Policy, Market Impact and Systemic Risk. Palgrave Macmillan.
* Nie, T. and Rutkowski, M. (2016). A BSDE approach to fair bilateral pricing under endogenous collateralization. Finance and Stochastics 20(4), 855–900.
* Piterbarg, V. (2010). Funding beyond discounting: collateral agreements and derivatives pricing. Risk 23(2), 97.
* Stiglitz, J.E. (1969). A re-examination of the Modigliani-Miller theorem. The American Economic Review 59(5), 784–793.
* Wu, L. (2015). CVA and FVA to derivatives trades collateralized by cash. International Journal of Theoretical and Applied Finance 18(05), 1550035.
* Yang, Z., Liang, G. and Zhou, C. (2017). Constrained portfolio-consumption strategies with uncertain parameters and borrowing costs. Available at arXiv:1711.02939.
## Appendix A Proofs of the Main Theorems
Proof of Theorem 3.4.: We only prove (i). (ii) can be proved similarly, and (iii) is an easy consequence of (i) and (ii).
(i) By (3.20) and (3.21),
\[\mathbb{E}\bigg{[}\int_{0}^{T}\big{|}\tilde{p}^{N}_{t}\big{|}^{2} \mathop{}\!\mathrm{d}t\bigg{]}= \bigg{[}\int_{0}^{T}\mathbb{E}\big{|}\mathbb{E}[\xi|\mathcal{F}_{ t}]\big{|}^{2}\mathop{}\!\mathrm{d}t\bigg{]}\leq\int_{0}^{T}\mathbb{E}[|\xi|^{ 2}]\mathop{}\!\mathrm{d}t<\infty,\]
\[\mathbb{E}\bigg{[}\int_{0}^{T}\big{|}Z^{N}_{t}\big{|}^{2}\mathop{}\!\mathrm{d}t\bigg{]}= \bigg{[}\int_{0}^{T}\mathbb{E}\big{|}\mathbb{E}[D_{t}\xi|\mathcal{F}_{t}]\big{|}^{2}\mathop{}\!\mathrm{d}t\bigg{]}\leq\int_{0}^{T}\mathbb{E}[|D_{t}\xi|^{2}]\mathop{}\!\mathrm{d}t<\infty.\]
It is easy to see that there exists a unique solution \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) of the following BSDE:
\[Y^{\#}_{t}=\int_{t}^{T}g^{\#}_{s}(Y^{\#}_{s},Z^{\#}_{s})\mathop{ }\!\mathrm{d}s-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s},\]
where
\[g^{\#}_{t}(y,z):=-\big{(}y+L^{m}\tilde{p}^{N}_{t}+\tilde{B}^{\epsilon}_{t}-\phi_{t}(z)\big{)}s^{b}_{t}+s^{\epsilon}_{t}\tilde{B}^{\epsilon}_{t}-h_{t}y-h^{H}_{t}L^{H}L^{m}(\tilde{p}^{N}_{t})^{+}+h^{C}_{t}L^{C}L^{m}(\tilde{p}^{N}_{t})^{-}.\]
We will show that
\[(Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}}),\] (A.1)
where \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) is the solution of (2.63). Because \((Y^{\mathbb{F}},Z^{\mathbb{F}})\) satisfying (2.61) is unique in \(\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\), to prove (A.1), it suffices to show that
\[Y^{\#}+L^{m}\tilde{p}^{N}+\tilde{B}^{\epsilon}-\phi(Z^{\#})\leq 0 ,~{}~{}~{}\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t-\text{a .s}.\] (A.2)
To this end, we introduce another transformation:
\[V^{\mathbb{F}}\coloneqq Y^{\#}+L^{m}\tilde{p}^{N}+\tilde{B}^{ \epsilon},\]
\[\Pi^{\mathbb{F}}\coloneqq Z^{\#}+L^{m}Z^{N}.\]
Then, by (iii) in Lemma 2.17, \((V^{\mathbb{F}},\Pi^{\mathbb{F}})\) satisfies
\[V^{\mathbb{F}}_{t}=L^{m}\xi+\tilde{B}^{\epsilon}_{T}+\int_{t}^{T}F_{s}(V^{\mathbb{F}}_{s},\Pi^{\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{\mathbb{F}}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s},\]
where
\[F_{t}(y,z)\coloneqq g^{\#}_{t}(y-L^{m}\tilde{p}^{N}_{t}-\tilde{B}^{\epsilon},z-L^{m} Z^{N}_{t})\]
\[= -(y-\alpha^{\top}_{t}z)s^{b}_{t}-h_{t}y\]
\[+(1-L^{m})s^{b}_{t}\alpha^{\top}_{t}Z^{N}_{t}+h_{t}\tilde{B}^{ \epsilon}_{t}+(h_{t}-h^{H}_{t}L^{H}L^{m})(\tilde{p}^{N}_{t})^{+}-(h_{t}-h^{C}_ {t}L^{C}L^{m})(\tilde{p}^{N}_{t})^{-}\]
\[= -(y-\alpha^{\top}_{t}z)s^{b}_{t}-h_{t}y+\xi^{b}_{t}.\]
Note that (A.2) is equivalent to
\[V^{\mathbb{F}}-(1-L^{m})s^{b}\alpha^{\top}Z^{N}-\phi(\Pi^{ \mathbb{F}})\leq 0,~{}~{}~{}\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\! \mathrm{d}t-\text{a.s}.\] (A.3)
To show (A.3), we use Malliavin calculus and the comparison principle for BSDEs.
By (3.21) and (3.22),
\[\mathbb{E}\bigg{[}\int_{0}^{T}\int_{0}^{T}\big{|}D_{\theta}\tilde {p}^{N}_{t}\big{|}^{2}\mathop{}\!\mathrm{d}t\mathop{}\!\mathrm{d}\theta\bigg{]}=\]
\[\mathbb{E}\bigg{[}\int_{0}^{T}\int_{0}^{T}\big{|}D_{\theta}Z^{N}_ {t}\big{|}^{2}\mathop{}\!\mathrm{d}t\mathop{}\!\mathrm{d}\theta\bigg{]}= \bigg{[}\int_{0}^{T}\int_{0}^{T}\mathbb{E}\big{|}\mathbb{E}[D_{ \theta}(D_{t}\xi)|\mathcal{F}_{t}]\big{|}^{2}\mathop{}\!\mathrm{d}t\mathop{}\! \mathrm{d}\theta\bigg{]}\leq\int_{0}^{T}\int_{0}^{T}\mathbb{E}|D_{\theta}(D_{t }\xi)|^{2}\mathop{}\!\mathrm{d}t\mathop{}\!\mathrm{d}\theta<\infty.\]
Therefore, by Proposition 5.3 in El Karoui et al. (29), \((V^{\mathbb{F}},\Pi^{\mathbb{F}})\in\mathbb{L}^{2}([0,T]:\mathbb{D}^{1,2}\times(\mathbb{D}^{1,2})^{n})\), and for any \(1\leq i\leq n\), a version of \(\{(D^{i}_{\theta}V^{\mathbb{F}}_{t},D^{i}_{\theta}\Pi^{\mathbb{F}}_{t})|~{}0\leq\theta,t\leq T\}\) is given by
\[D^{i}_{\theta}V^{\mathbb{F}}_{t}= L^{m}D^{i}_{\theta}\xi+\int_{t}^{T}\Big{[}-(D^{i}_{\theta}V^{ \mathbb{F}}_{s}-\alpha^{\top}_{s}D^{i}_{\theta}\Pi^{\mathbb{F}}_{s})s^{b}_{s}- h_{s}D^{i}_{\theta}V^{\mathbb{F}}_{s}+D^{i}_{\theta}\xi^{b}_{s}\Big{]}\mathop{ }\!\mathrm{d}s-\int_{t}^{T}(D^{i}_{\theta}\Pi^{\mathbb{F}}_{s})^{\top}\mathop{ }\!\mathrm{d}W_{s},\] (A.4)
and \(\{D_{t}V^{\mathbb{F}}_{t}\colon~{}0\leq t\leq T\}\) is a version of \(\{\Pi^{\mathbb{F}}_{t}\colon~{}0\leq t\leq T\}\). Let us denote
\[V^{\mathbb{F}}_{t,\theta}\coloneqq \alpha^{\top}_{\theta}(D_{\theta}V^{\mathbb{F}}_{t}),\]
\[\Pi^{\mathbb{F}}_{t,\theta}\coloneqq (D_{\theta}\Pi^{\mathbb{F}}_{t})\alpha_{\theta}.\]
Then \((V^{\mathbb{F}}_{t,\theta},\Pi^{\mathbb{F}}_{t,\theta})\) is given by
\[V^{\mathbb{F}}_{t,\theta}= \alpha^{\top}_{\theta}L^{m}D_{\theta}\xi+\int_{t}^{T}\Big{[}-(V^{\mathbb{F}}_{s,\theta}-\alpha^{\top}_{s}\Pi^{\mathbb{F}}_{s,\theta})s^{b}_{s}-h_{s}V^{\mathbb{F}}_{s,\theta}+\alpha^{\top}_{\theta}D_{\theta}\xi^{b}_{s}\Big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{\mathbb{F}}_{s,\theta})^{\top}\mathop{}\!\mathrm{d}W_{s}.\]
Therefore, by (3.23) and (3.25) together with the comparison principle for BSDEs, we obtain \(V^{\mathbb{F}}_{t,\theta}\geq V^{\mathbb{F}}_{t}\) for any \(\theta\leq t\). Moreover, by (3.24), for any \(\theta\leq t\),
\[V^{\mathbb{F}}_{t}-(1-L^{m})s^{b}_{t}\alpha^{\top}_{t}Z^{N}_{t}-V^{\mathbb{F}}_{t,\theta}\leq V^{\mathbb{F}}_{t}-V^{\mathbb{F}}_{t,\theta}\leq 0,\]
and this implies (A.2) and (A.3). Moreover, by (2.70), FBA\(=0\). ∎
Proof of Theorem 3.5.: As in the proof of Theorem 3.4, we can check the existence, uniqueness, and Malliavin differentiability of the BSDEs. We only explain the transformation and how to apply the comparison principle. Without loss of generality, we assume \(\xi\geq 0\). It follows that \(\tilde{p}^{N}\geq 0\), \(\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t-\text{a.s}\).
(i) We consider a solution \((Y^{\#},Z^{\#})\in\mathbb{S}^{2}_{T}\times\mathbb{H}^{2,n}_{T}\) of the following BSDE:
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\big{(}L^{m}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s }+\tilde{B}^{\epsilon}_{s}-\alpha^{\top}_{s}(Z^{\#}_{s}+Z^{N}_{s})\big{)}s^{b} _{s}-s^{\epsilon}_{s}\tilde{B}^{\epsilon}_{s}+h^{H}_{s}L^{H}L^{m}(Y^{\#}_{s}+ \tilde{p}^{N}_{s})\bigg{]}\mathop{}\!\mathrm{d}s\]
\[-\int_{t}^{T}(Z^{\#}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s}.\]
Take \(V^{\mathbb{F}}\coloneqq Y^{\#}+\tilde{p}^{N}\), \(\Pi^{\mathbb{F}}\coloneqq Z^{\#}+Z^{N}\). Then,
\[V^{\mathbb{F}}_{t}= \xi+\int_{t}^{T}F_{s}(V^{\mathbb{F}}_{s},\Pi^{\mathbb{F}}_{s}) \mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{\mathbb{F}}_{s})^{\top}\mathop{}\! \mathrm{d}W_{s},\] (A.5)
where
\[F_{t}(y,z)\coloneqq-(L^{m}y-\alpha_{s}^{\top}z)s^{b}-h^{H}_{t}L^ {H}L^{m}y\] (A.6)
Since \((0,0)\) is the unique solution of the following BSDE:
\[y_{t}= \int_{t}^{T}-\bigg{[}\big{(}L^{m}_{s}y_{s}-\alpha_{s}^{\top}z_{s} \big{)}s^{b}_{s}+h^{H}_{s}L^{H}L^{m}y_{s}\bigg{]}\mathop{}\!\mathrm{d}s-\int_{ t}^{T}z^{\top}_{s}\mathop{}\!\mathrm{d}W_{s},\] (A.7)
by comparison between (A.5) and (A.7), we can attain that \(V^{\mathbb{F}}\geq 0\), namely
\[Y^{\#}+\tilde{p}^{N}\geq 0.\] (A.8)
Moreover, \((V^{\mathbb{F}},\Pi^{\mathbb{F}})\in\mathbb{L}^{2}([0,T]\colon\mathbb{D}^{1,2} \times(\mathbb{D}^{1,2})^{n})\), and for any \(1\leq i\leq n\), a version of \(\{(D^{i}_{\theta}V^{\mathbb{F}}_{t},D^{i}_{\theta}\Pi^{\mathbb{F}}_{t})|~{}0 \leq\theta,t\leq T\}\) is given by
\[D^{i}_{\theta}V^{\mathbb{F}}_{t}=D^{i}_{\theta}\xi+\int_{t}^{T} \Big{[}-(L^{m}D^{i}_{\theta}V^{\mathbb{F}}_{s}-\alpha^{\top}_{s}D^{i}_{\theta} \Pi^{\mathbb{F}}_{s})s^{b}_{s}-h^{H}_{s}L^{H}L^{m}D_{\theta}^{i}V^{\mathbb{F}} _{s}\Big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}(D^{i}_{\theta}\Pi^{\mathbb{F}}_ {s})^{\top}\mathop{}\!\mathrm{d}W_{s},\]
and \(\{D_{t}V^{\mathbb{F}}_{t}\colon~{}0\leq t\leq T\}\) is a version of \(\{\Pi^{\mathbb{F}}_{t}\colon~{}0\leq t\leq T\}\). Let us denote
\[V^{m}_{t}\coloneqq L^{m}V^{\mathbb{F}}_{t},\]
\[\Pi^{m}_{t}\coloneqq L^{m}\Pi^{\mathbb{F}}_{t},\]
\[V^{\mathbb{F}}_{t,\theta}\coloneqq \alpha^{\top}_{\theta}(D_{\theta}V^{\mathbb{F}}_{t}),\]
\[\Pi^{\mathbb{F}}_{t,\theta}\coloneqq (D_{\theta}\Pi^{\mathbb{F}}_{t})\alpha_{\theta}.\]
Then \((V^{m},\Pi^{m})\) and \((V^{\mathbb{F}}_{t,\theta},\Pi^{\mathbb{F}}_{t,\theta})\) are given by
\[V^{\mathbb{F}}_{t,\theta}= \alpha_{\theta}D_{\theta}\xi+\int_{t}^{T}F_{s}(V^{\mathbb{F}}_{s, \theta},\Pi^{\mathbb{F}}_{s,\theta})\mathop{}\!\mathrm{d}s-\int_{t}^{T}(\Pi^{ \mathbb{F}}_{s,\theta})^{\top}\mathop{}\!\mathrm{d}W_{s},\]
\[V^{m}_{t}= L^{m}\xi+\int_{t}^{T}F_{s}(V^{m}_{s},\Pi^{m}_{s})\mathop{}\! \mathrm{d}s-\int_{t}^{T}(\Pi^{m}_{s})^{\top}\mathop{}\!\mathrm{d}W_{s}.\]
Therefore, by (3.31), \(V^{m}_{t}\leq V^{\mathbb{F}}_{t,\theta}\), for any \(\theta\leq t\). It follows that
\[L^{m}Y^{\#}_{t}+L^{m}\tilde{p}^{N}_{t}+\tilde{B}^{\epsilon}_{t}- \alpha^{\top}_{t}(Z^{\#}_{t}+Z^{N}_{t})\leq L^{m}V^{\mathbb{F}}_{t}-\alpha^{\top}_{t}\Pi^{\mathbb{F}}_{t}\]
\[= L^{m}V^{\mathbb{F}}_{t}-V^{\mathbb{F}}_{t,t}\leq 0.\]
Hence, by uniqueness of \((Y^{\mathbb{F}},Z^{\mathbb{F}})\), we obtain \((Y^{\mathbb{F}},Z^{\mathbb{F}})=(Y^{\#},Z^{\#})\). Moreover, by (2.70), \(\text{FBA}=0\). The proof of (ii) is similar to (i). ∎
## Appendix B Stochastic Intensities
We assume that \(h^{i}\), \(i\in\{H,C\}\), are \(\mathbb{F}\)-adapted processes, but \(\sigma^{H}\) and \(\sigma^{C}\) are deterministic. For simplicity, we set \(\epsilon=0\) and \(e^{E}=0\). Let \(n=2\), \(B_{t}^{-1}\textfrak{D}^{N}_{t}=\mathds{1}_{T\leq t}\xi\) for some \(\xi\geq 0\), and we consider _replacement close-out_. \(\xi\) is determined by an asset \(S^{1}\), but the market is completed by another non-defaultable traded asset \(S^{2}\). We assume
\[\Sigma\coloneqq\begin{bmatrix}\sigma^{1}\\ \sigma^{2}\end{bmatrix}\]
is of full rank. In this case, we cannot expect that the transformation
\[\phi_{t}\colon\sum_{i\in I}(\sigma^{i}_{t})^{\top}\tilde{\pi}^{i} _{t}\to\sum_{i\in I\setminus\rho}\tilde{\pi}^{i}_{t}\]
is independent of \(\tilde{\pi}^{H}\) and \(\tilde{\pi}^{C}\). By (2.67) and (2.68), \(\tilde{\pi}^{i}\), \(i\in\{H,C\}\), are represented by \(Y^{\mathbb{F}}\). Thus, we write \(\phi\) as
\[\phi_{t}(y,z).\] (B.1)
We assume that \(\rho=\{H,C\}\). Then
\[\phi_{t}(y,z)= \alpha_{t}^{\top}\big{(}z+Z^{N}_{t}-(\sigma^{H}_{t})^{\top}\tilde{\pi}^{H}_{t}-(\sigma^{C}_{t})^{\top}\tilde{\pi}^{C}_{t}\big{)}\]
\[= \mathds{1}^{\top}(\Sigma^{\top}_{t})^{-1}\big{[}z+Z^{N}_{t}-( \sigma^{H}_{t})^{\top}L^{H}L^{m}(y+\tilde{p}^{N}_{t})\big{]}.\]
Let us consider
\[Y^{\#}_{t}= \int_{t}^{T}-\bigg{[}\Big{(}L^{m}Y^{\#}_{s}+L^{m}\tilde{p}^{N}_{s}+\sigma^{H}_{s}\alpha_{s}L^{H}L^{m}(Y^{\#}_{s}+\tilde{p}^{N}_{s})-\alpha_{s}^{\top}(Z^{\#}_{s}+Z^{N}_{s})\Big{)}s^{\ell}_{s}\bigg{]}\mathop{}\!\mathrm{d}s\]
\[+\int_{t}^{T}\big{[}-h^{H}_{s}L^{H}L^{m}(Y^{\#}_{s}+\tilde{p}^{N} _{s})\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}Z^{\#}_{s}\mathop{}\!\mathrm{d} W_{s},\]
We will show that \((Y^{\#},Z^{\#})=(Y^{\mathbb{F}},Z^{\mathbb{F}})\). To this end, let \(V^{\mathbb{F}}\coloneqq Y^{\#}+\tilde{p}^{N}\), \(\Pi^{\mathbb{F}}\coloneqq Z^{\#}+Z^{N}\), and \(\beta\coloneqq L^{m}(1+\sigma^{H}\alpha L^{H})\). Then \((V^{\mathbb{F}},\Pi^{\mathbb{F}})\) is given by
\[V^{\mathbb{F}}_{t}=\xi+\int_{t}^{T}F_{s}(V^{\mathbb{F}}_{s},\Pi^ {\mathbb{F}}_{s})\mathop{}\!\mathrm{d}s-\int_{t}^{T}\Pi^{\mathbb{F}}_{s} \mathop{}\!\mathrm{d}W_{s},\] (B.2)
\[F_{t}(y,z)\coloneqq-(\beta_{t}y-\alpha^{\top}_{t}z\big{)}s^{\ell }_{t}-h^{H}_{t}L^{H}L^{m}y.\] (B.3)
It suffices to show
\[\beta V^{\mathbb{F}}-\alpha^{\top}\Pi^{\mathbb{F}}\geq 0,~{}~{}~{ }\mathop{}\!\mathrm{d}\mathbb{Q}\otimes\mathop{}\!\mathrm{d}t-\text{a.s}.\] (B.4)
We denote that for \(\theta\leq t\),
\[V^{\beta}_{t,\theta}\coloneqq\beta_{\theta}V^{\mathbb{F}}_{t},~{ }~{}\Pi^{\beta}_{t,\theta}\coloneqq\beta_{\theta}\Pi^{\mathbb{F}}_{t},\]
\[V^{D}_{t,\theta}\coloneqq\alpha_{\theta}^{\top}D_{\theta}V^{ \mathbb{F}}_{t},~{}~{}\Pi^{D}_{t,\theta}\coloneqq\alpha^{\top}_{\theta}D_{ \theta}\Pi^{\mathbb{F}}_{t}.\]
Then \((V^{\beta}_{t,\theta},\Pi^{\beta}_{t,\theta})\) and \((V^{D}_{t,\theta},\Pi^{D}_{t,\theta})\) are given by
\[V^{\beta}_{t,\theta}= \beta_{\theta}\xi+\int_{t}^{T}F_{s}(V^{\beta}_{s,\theta},\Pi^{ \beta}_{s,\theta})\mathop{}\!\mathrm{d}s-\int_{t}^{T}\Pi^{\beta}_{s,\theta} \mathop{}\!\mathrm{d}W_{s},\]
\[V^{D}_{t,\theta}= \alpha_{\theta}^{\top}D_{\theta}\xi+\int_{t}^{T}\big{[}F_{s}(V^{D }_{s,\theta},\Pi^{D}_{s,\theta})-\alpha_{\theta}^{\top}(D_{\theta}h_{s})L^{H}L ^{m}V^{\mathbb{F}}_{s}\big{]}\mathop{}\!\mathrm{d}s-\int_{t}^{T}\Pi^{D}_{s, \theta}\mathop{}\!\mathrm{d}W_{s},\]
Recall \(\xi\geq 0\). Thus, by (B.2), \(V^{\mathbb{F}}\geq 0\). Therefore, to show (B.4), we need
\[\beta_{\theta}\xi\geq\alpha_{\theta}^{\top}D_{\theta}\xi,\] (B.5)
\[D_{\theta}h^{H}_{t}\geq 0.\] (B.6)
If (B.5) and (B.6) are satisfied, FCA\(=0\).
|
1004.1541 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 21638,
"num_imgs": 3,
"llama3_tokens_count": 6676
} | [
"content_image/1004.1541/x1.png",
"content_image/1004.1541/x2.png",
"content_image/1004.1541/x3.png"
] | # Huge quantum particle number fluctuations in a two-component Bose gas in a double-well potential
Paweł Ziń
Soltan Institute for Nuclear Studies, Hoża 69, 00-681 Warsaw, Poland
Bartłomiej Oleś
Instytut Fizyki imienia Mariana Smoluchowskiego and Mark Kac Complex Systems Research Center, Uniwersytet Jagielloński, ulica Reymonta 4, PL-30-059 Kraków, Poland
Krzysztof Sacha
Instytut Fizyki imienia Mariana Smoluchowskiego and Mark Kac Complex Systems Research Center, Uniwersytet Jagielloński, ulica Reymonta 4, PL-30-059 Kraków, Poland
February 26, 2024
###### Abstract
A two-component Bose gas in a double-well potential with repulsive interactions may undergo a phase separation transition if the inter-species interactions outweigh the intra-species ones. We analyze the transition in the strong interaction limit within the two-mode approximation. The numbers of particles in each potential well are equal and constant. However, at the transition point, the ground state of the system reveals huge fluctuations of the numbers of particles belonging to the different gas components. That is, the probability of observing any mixture of particles in each potential well becomes uniform.
pacs: 03.75.Mn, 03.75.Lm, 64.70.Tg
## I Introduction
Ultra-cold dilute gases of bosonic atoms constitute perfect systems for experimental and theoretical investigations of various phenomena of quantum many body problems [1]. From the viewpoint of quantum computing and interferometry an especially relevant subject is quantum fluctuations [2; 3].
Most experimental studies of fluctuations concentrated on systems of cold atoms in double well [4] and optical lattice potentials [5]. In the former system squeezed states were predicted and produced, with particle number fluctuations (i.e. uncertainties of populations of the potential wells) turning from poissonian to sub-poissonian [6; 7; 8]. The latter system reveals a superfluid to Mott insulator transition [9; 10; 11] with enhanced phase fluctuations but with decreasing particle number fluctuations.
In the present paper we focus on a system where the total particle number is fixed but the occupation of certain single-particle states reveals considerable quantum fluctuations. We are interested in a system where the mean-field theory predicts symmetry breaking [12; 13; 14] and the symmetry-broken solutions are degenerate and form a Hilbert subspace parameterized by a continuous parameter. If the occupation of single-particle states varies strongly as we move within the degenerate subspace, then huge particle number fluctuations can be expected in the exact quantum many-body eigenstates. A degenerate subspace parameterized by a continuous parameter appears in a spin-1 Bose gas with an anti-ferromagnetic interaction [15] or in scalar condensates with solitonic solutions [15; 16; 17]. An attractive single-component Bose gas in a symmetric double-well potential also reveals huge particle number fluctuations, but it constitutes a slightly different example [18; 19]. There the degeneracy is small, i.e. the degenerate subspace is two dimensional, and the particle number fluctuations correspond to random localization of all particles in one of the potential wells in different experimental realizations. In all these examples the correct mean-field theory reduces to the Gross-Pitaevskii equations [1]. In the present paper we consider a Bose gas system where the Gross-Pitaevskii equation is not a correct mean-field description, that is, a two-component Bose gas in a double-well potential in the strong interaction limit.
In Sec. II we present a theoretical model for a two-component Bose gas in a double-well potential. In Sec. III.1 we derive the effective Hamiltonian using second order perturbation theory valid in the strong interaction limit. In Sec. III.2 we analyze its mean-field (classical) limit and identify the phase transition region. It turns out that the mean-field solutions reveal a continuous degeneracy at the transition point. We deduce the exact ground state of the system in Sec. III.3 and show that the particle number fluctuations are indeed huge at the critical point. In Sec. III.4 we estimate the range of parameters where the predicted fluctuations can be observed, and in Sec. IV the results presented in the paper are summarized.
## II The model
The Hamiltonian of a two component Bose gas in a symmetric double well potential, in the tight binding approximation, takes the form of the Bose-Hubbard model
\[\hat{H} = -\ \frac{J}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^ {\dagger}\hat{a}_{1}+\hat{b}_{1}^{\dagger}\hat{b}_{2}+\hat{b}_{2}^{\dagger} \hat{b}_{1}\right)\] (3)
\[+\frac{U}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}^{\dagger}\hat{ a}_{1}\hat{a}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2}\hat{a} _{2}+\hat{b}_{1}^{\dagger}\hat{b}_{1}^{\dagger}\hat{b}_{1}\hat{b}_{1}+\hat{b}_ {2}^{\dagger}\hat{b}_{2}^{\dagger}\hat{b}_{2}\hat{b}_{2}\right)\]
\[+\ U_{ab}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{b}_{1}^{ \dagger}\hat{b}_{1}+\hat{a}_{2}^{\dagger}\hat{a}_{2}\hat{b}_{2}^{\dagger}\hat{ b}_{2}\right),\]
where we have assumed that intra-species interactions are the same in both gas components and they are characterized by a coupling constant \(U\). The parameter \(U_{ab}\) is a coupling constant that describes inter-species interactions while \(J\) stands for the tunneling rate between the two potential wells. We assume also that numbers of particles of each component are equal to \(2N\). Such a choice of the system parameters allows us to perform fully analytical calculations. Analysis of a general case is beyond the scope of the present paper. The Hamiltonian (3) can be transformed to
\[\hat{H} = -\ \frac{J}{2}\left(\hat{a}_{1}^{\dagger}\hat{a}_{2}+\hat{a}_{2}^ {\dagger}\hat{a}_{1}+\hat{b}_{1}^{\dagger}\hat{b}_{2}+\hat{b}_{2}^{\dagger} \hat{b}_{1}\right)\] (6)
\[+\frac{U_{s}}{8}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}-\hat{a}_{2 }^{\dagger}\hat{a}_{2}+\hat{b}_{1}^{\dagger}\hat{b}_{1}-\hat{b}_{2}^{\dagger} \hat{b}_{2}\right)^{2}\]
\[+\frac{U_{d}}{8}\left(\hat{a}_{1}^{\dagger}\hat{a}_{1}-\hat{a}_{2 }^{\dagger}\hat{a}_{2}-\hat{b}_{1}^{\dagger}\hat{b}_{1}+\hat{b}_{2}^{\dagger} \hat{b}_{2}\right)^{2},\]
where \(U_{s}=U+U_{ab}\), \(U_{d}=U-U_{ab}\) and constant terms have been omitted. In the following we consider \(U_{s}\) as the unit of energy.
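Since the two-mode Hamiltonian (3) acts on a finite Fock space, it can be diagonalized directly for small particle numbers. The following is a minimal numerical sketch (in Python, not part of the original text); the particle number and couplings are illustrative only, far smaller than the \(2N=1000\) used later, and \(U_{d}\) is placed at the transition point identified in Sec. III:

```python
import numpy as np

def full_hamiltonian(Ntot, J, U, Uab):
    """Two-mode, two-component Bose-Hubbard Hamiltonian (3).

    Basis states |na1, nb1> with na1, nb1 = 0..Ntot, where the second well
    holds na2 = Ntot - na1 and nb2 = Ntot - nb1 particles (Ntot = 2N).
    """
    dim = (Ntot + 1) ** 2
    idx = lambda na1, nb1: na1 * (Ntot + 1) + nb1
    H = np.zeros((dim, dim))
    for na1 in range(Ntot + 1):
        na2 = Ntot - na1
        for nb1 in range(Ntot + 1):
            nb2 = Ntot - nb1
            i = idx(na1, nb1)
            # intra- and inter-species interactions (diagonal in the Fock basis)
            H[i, i] += 0.5 * U * (na1 * (na1 - 1) + na2 * (na2 - 1)
                                  + nb1 * (nb1 - 1) + nb2 * (nb2 - 1))
            H[i, i] += Uab * (na1 * nb1 + na2 * nb2)
            # tunneling of component a: a1^dag a2 and a2^dag a1
            if na1 < Ntot:
                H[idx(na1 + 1, nb1), i] += -0.5 * J * np.sqrt((na1 + 1) * na2)
            if na1 > 0:
                H[idx(na1 - 1, nb1), i] += -0.5 * J * np.sqrt(na1 * (na2 + 1))
            # tunneling of component b
            if nb1 < Ntot:
                H[idx(na1, nb1 + 1), i] += -0.5 * J * np.sqrt((nb1 + 1) * nb2)
            if nb1 > 0:
                H[idx(na1, nb1 - 1), i] += -0.5 * J * np.sqrt(nb1 * (nb2 + 1))
    return H

# illustrative parameters; Us = U + Uab = 1 is the energy unit, Ud = U - Uab
Ntot, J = 6, 1e-2                  # 2N = 6 particles per component (toy size)
Ud = -2 * J ** 2                   # transition point discussed in Sec. III
U, Uab = (1 + Ud) / 2, (1 - Ud) / 2

w, v = np.linalg.eigh(full_hamiltonian(Ntot, J, U, Uab))
psi0 = v[:, 0]

# distribution of nd = na - nb = na1 - nb1 in the ground state
rho = np.zeros(2 * Ntot + 1)
for na1 in range(Ntot + 1):
    for nb1 in range(Ntot + 1):
        rho[na1 - nb1 + Ntot] += psi0[na1 * (Ntot + 1) + nb1] ** 2
print(np.round(rho, 3))
```

With these parameters the printed distribution of \(n_{d}\) should be close to uniform over the values allowed in the lowest manifold, anticipating the result derived in Sec. III.3.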
## III Perturbation approach
### The second order effective Hamiltonian
We are interested in the strong interaction limit. Therefore, the tunneling part of the Hamiltonian will be considered as a small perturbation. For \(J=0\) the system Hamiltonian has exact eigenstates
\[|N+n_{a},N-n_{a}\rangle|N+n_{b},N-n_{b}\rangle,\] (7)
where \(N+n_{a}\), \(N-n_{a}\) refer to numbers of particles of the component \(a\) in the first and the second potential well, respectively, and similarly for the component \(b\). The energies of such states (remember that \(U_{s}\) is the unit of energy) are
\[E=\frac{1}{2}(n_{a}+n_{b})^{2}+\frac{U_{d}}{2}(n_{a}-n_{b})^{2}.\] (8)
Switching to variables \(n_{s}=n_{a}+n_{b}\), \(n_{d}=n_{a}-n_{b}\) we obtain eigenenergies in a very simple form
\[E=\frac{1}{2}n_{s}^{2}+\frac{U_{d}}{2}n_{d}^{2}.\] (9)
If we assume that the parameters satisfy the condition
\[1\gg|U_{d}|N^{2},\] (10)
then manifolds with different values of \(|n_{s}|\) are separated on the energy scale (see Fig. 1). The lowest energy manifold is related to \(n_{s}=0\) and states within each manifold are labelled by different values of \(n_{d}\).
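As a quick sanity check of condition (10), one can evaluate the spectrum (9) directly; the short sketch below (parameters chosen only for illustration) confirms that the whole \(n_{s}=0\) manifold lies below the \(|n_{s}|=1\) manifold when \(|U_{d}|N^{2}\ll 1\):

```python
import numpy as np

N = 50            # 2N = 100 particles of each component, as in Fig. 1
Ud = -1e-5        # |Ud| N^2 = 0.025 << 1, so condition (10) holds (Us = 1)

def manifold_energies(ns, N, Ud):
    # for fixed ns, nd runs over integers of the same parity with |nd| <= 2N - |ns|
    nd = np.arange(-(2 * N - abs(ns)), 2 * N - abs(ns) + 1, 2)
    return 0.5 * ns ** 2 + 0.5 * Ud * nd ** 2

e0 = manifold_energies(0, N, Ud)
e1 = manifold_energies(1, N, Ud)
print(e0.max() < e1.min())   # True: the manifolds are separated on the energy scale
```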
<figure><img src="content_image/1004.1541/x1.png"><figcaption>Figure 1: (Color online) Energy levels (9) versus Ud for a number of particlesof each component equal 2N=100. Black lines: the lowest energy manifold, i.e.ns=0, red lines: the manifold corresponding to |ns|=1.</figcaption></figure>
Matrix elements of the tunneling part of the Hamiltonian are zero between states of the same manifold. However, this part of the Hamiltonian introduces couplings between different manifolds. In an effective Hamiltonian that describes the lowest manifold of the system the effect of the coupling can be included via the second order perturbation theory. A compact form of the effective Hamiltonian may be obtained if we introduce spin operators
\[\hat{S}_{jx} = \frac{1}{2}(\hat{a}^{\dagger}_{j}\hat{b}_{j}+\hat{b}_{j}^{\dagger }\hat{a}_{j}),\] (11)
\[\hat{S}_{jy} = -\frac{i}{2}(\hat{a}^{\dagger}_{j}\hat{b}_{j}-\hat{b}_{j}^{ \dagger}\hat{a}_{j}),\] (12)
\[\hat{S}_{jz} = \frac{1}{2}(\hat{a}^{\dagger}_{j}\hat{a}_{j}-\hat{b}_{j}^{\dagger }\hat{b}_{j}).\] (13)
States belonging to the lowest manifold (\(n_{s}=0\)) can be written in the Fock basis (7) as
\[|\psi\rangle=\sum_{n=-N}^{N}\psi(n)|N+n,N-n\rangle|N-n,N+n\rangle.\] (14)
The Fock states \(|N+n,N-n\rangle|N-n,N+n\rangle\) are the eigenstates of the \(\hat{\bf S}_{j}^{2}\), \(\hat{S}_{1z}\) and \(\hat{S}_{2z}\) operators with the corresponding eigenvalues \(N(N+1)\), \(n\) and \(-n\), respectively, so the \(n_{s}=0\) manifold can be specified by:
\[\hat{S}_{1z}+\hat{S}_{2z}=0,\ \ \ \ \hat{\bf S}_{j}^{2}=N(N+1).\] (15)
In the second order in \(J\) the effective Hamiltonian that describes the lowest manifold reads [20; 21; 22; 23; 24]
\[\hat{H}_{2}=-2J^{2}\hat{\bf S}_{1}\cdot\hat{\bf S}_{2}+U_{d}(\hat{S}_{1z}^{2}+ \hat{S}_{2z}^{2}).\] (16)
The above Hamiltonian together with the condition (15) defines our problem where each potential well is associated with an angular momentum operator and there is interaction between such subsystems due to tunneling of atoms. Note that eigenstates of the Hamiltonian (16) depend on two parameters only, i.e. \(U_{d}/(2J^{2})\) and \(N\).
### Classical limit
Let us analyze the Hamiltonian (16) [in the manifold defined in (15)] in the classical limit by substituting the spin operators by classical angular momentum components. The condition (15) implies that \(S_{1z}=-S_{2z}\). We are interested in the ground state of the system. The value of the tunneling part of the Hamiltonian
\[-2J^{2}{\bf S}_{1}\cdot{\bf S}_{2}=-2J^{2}(S_{1x}S_{2x}+S_{1y}S_{2y}+S_{1z}S_{ 2z}),\] (17)
is minimal for \(S_{1x}=S_{2x}\), \(S_{1y}=S_{2y}\). For \(U_{d}/(2J^{2})+1<0\) the ground state corresponds to \(|S_{jz}|=N\) which can be related to \(n_{d}=\pm 2N\) (i.e. phase separation occurs where different gas components occupy different potential wells). For \(U_{d}/(2J^{2})+1>0\) the \(z\)-components of the angular momenta disappear in the ground state (\(S_{jz}=0\)) which corresponds to \(n_{d}=0\) (i.e. equal mixture of both components in each potential well). When \(U_{d}/(2J^{2})+1=0\) we deal with the transition point where \(|S_{jz}|\) can be arbitrary provided \(S_{1z}+S_{2z}=0\). Then all values of \(n_{d}\) are equally probable (any mixture of both components in each potential well is equally likely). The transition between the phase separation and miscible regimes is discontinuous.
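This classical picture is easy to verify numerically. In the sketch below (again an illustration, not part of the original analysis), \(s\) denotes the common value \(S_{1z}=-S_{2z}\), and the in-plane components are taken equal and maximal, so that \({\bf S}_{1}\cdot{\bf S}_{2}=N^{2}-2s^{2}\):

```python
import numpy as np

def classical_energy(s, N, J, Ud):
    # S1.S2 = (N^2 - s^2) + s*(-s) = N^2 - 2 s^2 for S1x=S2x, S1y=S2y, S1z=-S2z=s
    return -2 * J ** 2 * (N ** 2 - 2 * s ** 2) + Ud * 2 * s ** 2

N, J = 50.0, 1.0
s = np.linspace(-N, N, 2001)
for eps in (1e-3, -1e-3):                      # eps = Ud/(2 J^2) + 1
    Ud = 2 * J ** 2 * (eps - 1)
    s_min = s[np.argmin(classical_energy(s, N, J, Ud))]
    print(eps, s_min)   # eps > 0: s_min ~ 0 (miscible); eps < 0: |s_min| ~ N (separated)
```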
### Quantum ground state
The analysis of the classical limit suggests that huge particle number fluctuations can be expected in the quantum ground state of the system at the transition point. Let us switch now to quantum analysis of the Hamiltonian (16). For \(U_{d}=-2J^{2}\) the Hamiltonian reads
\[\hat{H}_{2}=J^{2}[(\hat{S}_{1x}-\hat{S}_{2x})^{2}+(\hat{S}_{1y}-\hat{S}_{2y})^ {2}-\hat{\bf S}_{1}^{2}-\hat{\bf S}_{2}^{2}].\] (18)
Applying the unitary (rotation) transformation
\[\hat{S}_{1x}\rightarrow-\hat{S}_{1x}\ \ \ \hat{S}_{1y}\rightarrow-\hat{S}_{1y} \ \ \ \hat{S}_{1z}\rightarrow\hat{S}_{1z},\] (19)
which commutes with \(\hat{S}_{jz}\) and thus leaves the manifold \(\hat{S}_{1z}+\hat{S}_{2z}=0\) invariant, and defining the total spin operator, \(\hat{\bf S}=\hat{\bf S}_{1}+\hat{\bf S}_{2}\), we can rewrite the Hamiltonian (18) in the following form
\[\hat{H}_{2}=J^{2}(\hat{\bf S}^{2}-\hat{\bf S}_{1}^{2}-\hat{\bf S}_{2}^{2}).\] (20)
States \(|j,0\rangle\) with an integer \(j\), where \(j(j+1)\) is an eigenvalue of the \(\hat{\bf S}^{2}\) operator and \(0\) is an eigenvalue of the \(\hat{S}_{z}=\hat{S}_{1z}+\hat{S}_{2z}\) operator, are therefore eigenstates of our problem. The energy spectrum reads
\[E_{j}=J^{2}(j(j+1)-2N(N+1)),\] (21)
with \(0\leq j\leq 2N\) and the ground state solution corresponds to \(j=0\). In the basis (7) it takes the form of
\[|\psi_{0}\rangle=\frac{1}{\sqrt{2N+1}}\sum_{n=-N}^{N}|N+n,N-n\rangle|N-n,N+n\rangle.\] (22)
Huge particle number fluctuations become apparent in Eq. (22) where the ground state turns out to be a uniform superposition of all Fock states belonging to the lowest energy manifold (\(n_{s}=0\)). That is, all values of \(n_{d}\) are equally probable.
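Both the spectrum (21) and the uniform ground state (22) can be checked directly by writing the effective Hamiltonian (16) as a \((2N+1)\times(2N+1)\) matrix in the basis \(|S_{1z}=n,S_{2z}=-n\rangle\); the matrix elements used below follow from the standard angular-momentum ladder operators (a minimal sketch, not taken from the original paper):

```python
import numpy as np

def effective_hamiltonian(N, J, Ud):
    """H2 of Eq. (16) restricted to S1z + S2z = 0, basis n = -N..N (spin N per well)."""
    n = np.arange(-N, N + 1)
    dim = 2 * N + 1
    H = np.zeros((dim, dim))
    # diagonal: -2 J^2 S1z S2z + Ud (S1z^2 + S2z^2) = (2 J^2 + 2 Ud) n^2
    np.fill_diagonal(H, (2 * J ** 2 + 2 * Ud) * n ** 2)
    # off-diagonal: -J^2 (S1+ S2- + S1- S2+) couples n and n + 1
    c = J ** 2 * (N * (N + 1) - n[:-1] * (n[:-1] + 1))
    H[np.arange(dim - 1) + 1, np.arange(dim - 1)] = -c
    H[np.arange(dim - 1), np.arange(dim - 1) + 1] = -c
    return H

N, J = 25, 1.0                                                       # 2N = 50
w, v = np.linalg.eigh(effective_hamiltonian(N, J, Ud=-2 * J ** 2))   # transition point

j = np.arange(2 * N + 1)
print(np.allclose(w, J ** 2 * (j * (j + 1) - 2 * N * (N + 1))))      # spectrum (21)
print(np.allclose(np.abs(v[:, 0]), 1 / np.sqrt(2 * N + 1)))          # ground state (22)
```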
<figure><img src="content_image/1004.1541/x2.png"><figcaption>Figure 2: (Color online) Ground state probability density ρ(nd)=|ψ0(nd)|2 forUd2J2+1=10−3 (a) and Ud2J2+1=−10−3 (b). Solid black lines are related to thenumber of particles in each gas component 2N=50, dashed red lines to 2N=100and dotted-dashed blue lines to 2N=200.</figcaption></figure>
It is interesting to note that we can construct the exact quantum ground state using the superposition of symmetry broken solutions obtained in the classical limit. At the transition point the classical analysis tells us that in the ground state the potential wells are associated with classical angular momenta where orientation of one is given by \((\theta_{1},\phi_{1})=(\theta,\phi)\) and the other one by \((\theta_{2},\phi_{2})=(\pi-\theta,\phi)\) and spherical angles \(\theta\) and \(\phi\) can be arbitrary. The best quantum approximation of a classical angular momentum is a coherent state [25; 26]
\[|\theta_{j},\phi_{j}\rangle = \frac{1}{\sqrt{(2N)!}}\left(\cos\frac{\theta_{j}}{2}e^{i\phi_{j}/ 2}\hat{a}_{j}^{\dagger}\right.\] (24)
\[\left.+\sin\frac{\theta_{j}}{2}e^{-i\phi_{j}/2}\hat{b}_{j}^{ \dagger}\right)^{2N}|0\rangle.\]
If we postulate that the quantum ground state of our system can be approximated by a single tensor product state \(|\theta,\phi\rangle|\pi-\theta,\phi\rangle\) the rotational symmetry of the Hamiltonian (16) will be broken. A rotationally invariant state can be restored if we prepare a uniform superposition of the tensor product states, i.e. by integrating over all solid angles
\[|\psi_{0}\rangle=\frac{\sqrt{2N+1}}{2}\int_{0}^{2\pi}\frac{\mbox{d}\phi}{2\pi} \int_{0}^{\pi}\sin\theta\mbox{d}\theta\,|\theta,\phi\rangle|\pi-\theta,\phi\rangle,\] (25)
where \(\langle\psi_{0}|\psi_{0}\rangle=1\). Substituting Eq. (24) into Eq. (25), integrating over \(\phi\) and employing the identity
\[\int_{-1}^{1}\mbox{d}(\cos\theta)\left(\frac{1+\cos\theta}{2}\right)^{N+n}\left(\frac{1-\cos\theta}{2}\right)^{N-n}=\frac{2}{2N+1}\left[\binom{2N}{N+n}\right]^{-1},\] (26)
that follows from the completeness relation of the coherent states, we restore the quantum ground state (22).
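The integral identity above is a standard Beta-function integral and can also be confirmed symbolically, for example with the following small sympy check over a few sample values of \(N\) and \(n\) (an illustration only):

```python
import sympy as sp

c = sp.symbols('c', real=True)   # c stands for cos(theta)
for N in (3, 6):
    for n in range(-N, N + 1):
        lhs = sp.integrate(((1 + c) / 2) ** (N + n) * ((1 - c) / 2) ** (N - n), (c, -1, 1))
        rhs = sp.Rational(2, 2 * N + 1) / sp.binomial(2 * N, N + n)
        assert sp.simplify(lhs - rhs) == 0
print("Beta-integral identity verified")
```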
We see that at the transition point the classical analysis allows us to construct the exact ground state of the system. However, in the close vicinity of the transition point this analysis is not able to provide a good estimate for the ground state of the system. This is because the classical approach predicts discontinuous transition between the phase separation and miscible regimes but the transition is actually continuous. Starting with the classical ground states and using the coherent states we can construct quantum ground states that depend on \(N\) and on the sign of the parameter \(U_{d}/(2J^{2})+1\) but not on its absolute value. However, the exact diagonalization indicates that there is a range of \(U_{d}/(2J^{2})+1\) where the ground state changes continuously from the miscible to phase separation character. As one can expect this range shrinks with \(N\) because differences between the classical and quantum angular momentum diminish in the large \(N\) limit. This is illustrated in Fig. 2 where we plot \(\rho(n_{d})=|\psi_{0}(n_{d})|^{2}\) for \(U_{d}/(2J^{2})+1=10^{-3}\) and \(-10^{-3}\), i.e. close to the transition point, for different values of \(N\) obtained in numerical diagonalization of the effective Hamiltonian (16).
<figure><img src="content_image/1004.1541/x3.png"><figcaption>Figure 3: (Color online) Ground state probability densities for a number ofparticles in each component 2N=1000, J=5×10−7 and Ud2J2=−0.9999 (a), Ud2J2=−1(b) and Ud2J2=−1.0001 (c). Solid black lines correspond toρ(nd)=∑ns|ψ0(ns,nd)|2 where ψ0(ns,nd) is obtained by diagonalization of (6)while dashed red lines are related to ρ(nd)=|ψ0(nd)|2 with ψ0(nd) obtained bydiagonalization of the effective Hamiltonian (16). In panels (a) and (c) theblack and red lines are hardly distinguishable.</figcaption></figure>
### Validity of the perturbation approach
Our predictions are based on the effective Hamiltonian (16) which is second order in \(J\), and they are correct provided higher order terms can be neglected. The fourth order terms are smaller than \(J^{4}(2N)^{4}\) or \(J^{2}|U_{d}|(2N)^{4}\). At the transition point \(|U_{d}|=2J^{2}\), so if \(J^{4}(2N)^{4}\) is much smaller than the energy gap between the ground and first excited states of the Hamiltonian (20), i.e. \(E_{1}-E_{0}=2J^{2}\), then the higher order terms can be neglected and the system is properly described by the second order Hamiltonian (20). Hence,
\[J^{2}(2N)^{4}\ll 1,\] (28)
is a sufficient condition for the validity of the ground state (22) that describes the huge particle number fluctuations in the system.
We have tested our predictions comparing them with exact numerical calculations. Figure 3 shows probability densities \(\rho(n_{d})\) corresponding to ground states of the system obtained by diagonalization of the full Hamiltonian (6) and the effective Hamiltonian (16) for \(2N=1000\) and \(J=5\times 10^{-7}\). Different panels are related to different values of \(U_{d}\) in the vicinity of the transition point. That is, \(\frac{U_{d}}{2J^{2}}+1=10^{-4}\) corresponds to the miscible regime, \(\frac{U_{d}}{2J^{2}}+1=0\) to the transition point and \(\frac{U_{d}}{2J^{2}}+1=-10^{-4}\) to the phase separation side of the transition point. We can see that on both sides of the transition point the density \(\rho(n_{d})\) is peaked around the classical solutions, while at the transition point it is uniformly distributed. Figure 3 indicates perfect agreement between the perturbation calculations and the exact results even though \(J^{2}(2N)^{4}=0.25\) and thus the condition (28) is barely fulfilled.
## IV Conclusion
In summary, we have analyzed a strongly interacting two-component Bose gas in a double well potential for parameters close to the transition point where the phase separation occurs. The second order effective Hamiltonian allows us to describe the system in the vicinity of the transition point when higher order corrections are negligible. We have shown that, at the transition point, the ground state of the system becomes a uniform superposition of Fock states. That is, the system reveals huge quantum fluctuations of populations of the potential wells.
## Acknowledgment
We are grateful to Maciej Lewenstein for a fruitful discussion and his help in solving the Hamiltonian eigenvalue problem at the transition point. Support within Polish Government scientific funds (for years 2008-2011 – PZ and KS, 2009-2012 – BO) as a research project is acknowledged.
## References
* (1) A. J. Leggett, Rev. Mod. Phys. **73**, 307 (2001).
* (2) T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O’Brien, Nature **464**, 45 (2010).
* (3) J. A. Dunningham and K. Burnett, Phys. Rev. A **70**, 033601 (2004).
* (4) R. Gati and M. K. Oberthaler, J. Phys. B **40**, R61 (2007).
* (5) I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. **80**, 885 (2008).
* (6) J. Esteve, C. Gross, A. Weller, S. Giovanazzi, and M. K. Oberthaler, Nature **455**, 1216 (2008).
* (7) C. Orzel, A. K. Tuchman, M. L. Fenselau, M. Yasuda, and M. K. Kasevich, Science **291**, 2386 (2001).
* (8) S. Choi and N. P. Bigelow, Phys. Rev. A **72**, 033612 (2005).
* (9) D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, and P. Zoller, Phys. Rev. Lett. **81**, 3108 (1998).
* (10) M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Nature **415**, 39 (2002).
* (11) F. Gerbier, S. Foelling, A. Widera, O. Mandel, and I. Bloch, Phys. Rev. Lett. **96**, 090401 (2006).
* (12) J. I. Cirac, M. Lewenstein, K. Molmer, and P. Zoller, Phys. Rev. A **57**, 1208 (1998).
* (13) J. Ruostekoski, M. J. Collett, R. Graham, and Dan. F. Walls, Phys. Rev. A **57**, 511 (1998).
* (14) D. Gordon and C. M. Savage, Phys. Rev. A **59**, 4623 (1999).
* (15) Y. Castin, in _Les Houches Session LXXII, Coherent atomic matter waves 1999_, edited by R. Kaiser, C. Westbrook and F. David, (Springer-Verlag Berlin Heilderberg New York 2001).
* (16) J. Dziarmaga, P. Deuar, and K. Sacha, Phys. Rev. Lett. **105**, 018903 (2010).
* (17) R. V. Mishmash, and L. D. Carr, Phys. Rev. Lett. **105**, 018904 (2010).
* (18) T.-L. Ho and C. V. Ciobanu, J. Low Temp. Phys. **135**, 257 (2004).
* (19) M. W. Jack and M. Yamashita, Phys. Rev. A **71**, 023610 (2005).
* (20) A. B. Kuklov and B. V. Svistunov, Phys. Rev. Lett. **90**, 100401 (2003).
* (21) E. Altman, W. Hofstetter, E. Demler, M. D. Lukin, New J. Phys. **5**, 113 (2003).
* (22) A. Isacsson, M.-C. Cha, K. Sengupta, and S. M. Girvin, Phys. Rev. B **72**, 184507 (2005).
* (23) S. G. Söyler, B. Capogrosso-Sansone, N. V. Prokof’ev, and B. V. Svistunov, arXiv:0811.0397.
* (24) S. Powell, arXiv:0902.1993.
* (25) F.T. Arecchi, E. Courtens, G. Gilmore, H. Thomas, Phys. Rev. A **6**, 2211 (1972).
* (26) R. Glauber, F. Haake, Phys. Rev. A **13**, 357 (1976).
|
1109.3983 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 60525,
"num_imgs": 0,
"llama3_tokens_count": 22798
} | [] | # \(L^{2}\)-estimates for the \(d\)-operator acting on super forms
Aron Lagerberg
A. Lagerberg: Department of Mathematics, Chalmers University of Technology and the University of Göteborg, 412 96, Göteborg, Sweden.
aronl@chalmers.se
###### Abstract.
In the setting of super forms developed in [7], we introduce the notion of \(\mathbb{R}-\)Kähler metrics on \(\mathbb{R}^{n}\). We consider existence theorems and \(L^{2}-\)estimates for the equation \(d\alpha=\beta\), where \(\alpha\) and \(\beta\) are super forms, in the spirit of Hörmander’s \(L^{2}-\)estimates for the \(\bar{\partial}-\)equation on a complex Kähler manifold.
###### Contents
Contents
* 1 Introduction
* 2 Preliminaries
* 3 Comparison with the complex theory
* 4 Primitive super forms
* 5 \(L^{2}\)-estimates for the \(d\)-operator
* 6 The Legendre transform
## 1. Introduction
This article is concerned with introducing the notion of an \(\mathbb{R}-\)Kähler metric on the Euclidean space, \(\mathbb{R}^{n}\). Let us explain the meaning of this statement: on a complex manifold, a hermitian metric induces a \((1,1)-\)form \(\omega\), and the manifold is _Kähler_ if \(d\omega=0\). In [7] the formalism of super forms on \(\mathbb{R}^{n}\) was considered, which enables us to define \((p,q)-\)forms on \(\mathbb{R}^{n}\). In particular, a smooth metric \(g\) on \(\mathbb{R}^{n}\) can be represented by a smooth, positive \((1,1)-\)form \(\omega\), and in analogy with the complex setting, we define the metric \(g\) to be _\(\mathbb{R}-\)Kähler_ if \(d\omega=0\). In this article, our main concern is for the \(d-\)equation for \((p,q)\)-forms on \(\mathbb{R}^{n}\) endowed with a Kähler metric; by this we mean that given a \((p,q)\)-form \(\beta\), we wish to find a \((p-1,q)\)-form \(\alpha\) solving the equation
\[d\alpha=\beta.\]
Under certain hypotheses on \(\beta\), we shall prove existence theorems for this equation using arguments from the technique of \(L^{2}-\)estimates due to Hörmander for the \(\overline{\partial}-\)equation on a complex Kähler manifold (c.f. [6]). This will also give us an \(L^{2}-\)estimate on the solution \(\alpha\) in terms of \(\beta\) on a given \(L^{2}\)-space (depending on the Kähler metric), to be introduced later in this article. As a particular case, we are able to solve the \(d-\)equation for ordinary \(p\)-forms on \(\mathbb{R}^{n}\) together with an \(L^{2}-\)estimate on the solution in terms of the given data. The key point in applying the arguments of Hörmander is to establish a Kodaira-Bochner-Nakano-type identity (c.f [8]) for natural Laplace-operators arising in our setting. We also take the opportunity to introduce, in analogy with the complex case, the theory of primitive super forms. Our hope is that the results developed in this article can be used to establish results in convex analysis. For instance, there are many articles concerned with convex inequalities that utilize \(L^{2}\)-theory (see for instance [3],[1]), and we hope that our approach in this article will give a fruitful addition to the theory already developed.
**Acknowledgements:** I would like to thank my advisor Bo Berndtsson for inspiration and support.
## 2. Preliminaries
In this article, we will consider differential forms in \(\mathbb{R}^{n}\times\mathbb{R}^{n}=\{(x_{1},...,x_{n},\xi_{1},...,\xi_{n})\}\) with coefficients depending only on the variables \((x_{1},...,x_{n})\). Such forms, which we shall call super forms, were considered in the article [7]. We say that \(\alpha\) is a \((p,q)-\)form if
\[\alpha=\sum_{|I|=p,|J|=q}\alpha_{IJ}(x)dx_{I}\wedge d\xi_{J},\]
where we use multi-index notation, and a \(k\)-form is a \((p,q)\)-form with \(p+q=k\). The set of \((p,q)-\)forms whose coefficients are smooth will be denoted by \(\mathcal{E}^{p,q}\). A smooth \((1,1)\)-form \(\omega\) is said to be _positive_ if the coefficient-matrix \((\omega_{ij}(x))_{i,j}\) is positive definite, for each \(x\). Let us define the operator
\[d^{\#}:\mathcal{E}^{p,q}\rightarrow\mathcal{E}^{p,q+1}\]
by letting
\[d^{\#}(\sum_{|I|=p,|J|=q}\alpha_{IJ}(x)dx_{I}\wedge d\xi_{J})=\sum_{i=1}^{n} \sum_{|I|=p,|J|=q}\frac{\partial}{\partial x_{i}}\alpha_{IJ}(x)d\xi_{i}\wedge dx _{I}\wedge d\xi_{J}.\]
The operator \(d:\mathcal{E}^{p,q}\rightarrow\mathcal{E}^{p+1,q}\) is defined as usual, and a form \(\alpha\) is called _closed_ if \(d\alpha=0\). We also define the linear map
\[J:\{(p,q)-\textmd{forms}\}\rightarrow\{(q,p)-\textmd{forms}\}\]
by letting
\[J(\sum_{|I|=p,|J|=q}\alpha_{IJ}(x)dx_{I}\wedge d\xi_{J})=\sum_{|I|=p,|J|=q} \alpha_{IJ}(x)d\xi_{I}\wedge dx_{J}.\]
Observe that this gives \(J^{2}=Id\). The operator \(d^{\#}\) can be written in terms of \(J\) as
\[d^{\#}=J\circ d\circ J.\]
We have the following result (cf. [7]):
**Proposition 2.1**.: _A closed, smooth \((1,1)-\)form \(\omega\) is positive if and only if there exists a convex, smooth function \(f\) such that_
\[\omega=dd^{\#}f.\]
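For a concrete illustration of Proposition 2.1: by the definitions of \(d\) and \(d^{\#}\) above, the coefficient matrix of \(dd^{\#}f\) is simply the Hessian of \(f\), so positivity of the form amounts to pointwise positive definiteness of the Hessian. A small symbolic sketch in Python/sympy follows (the particular choice of \(f\) is only an example):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = x1 ** 2 + x2 ** 2 + sp.exp(x1 + x2)      # a smooth convex function on R^2

# d# f = sum_i (df/dx_i) dxi_i, and applying d differentiates the coefficients,
# so omega = d d# f has coefficient matrix omega_ij = d^2 f / dx_i dx_j.
omega = sp.hessian(f, (x1, x2))

# positivity of omega at sample points via Sylvester's criterion (2x2 case)
for a, b in [(0, 0), (1, -1), (-2, 3)]:
    M = omega.subs({x1: a, x2: b})
    print((a, b), bool(M[0, 0].evalf() > 0) and bool(M.det().evalf() > 0))
```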
Now fix a smooth, positive, and closed \((1,1)\)- form \(\omega\). We shall use the notation
\[\omega_{q}=\omega^{q}/q!.\]
Such a form \(\omega\) induces a metric on \(\mathbb{R}^{n}\) in a natural way: if \(v=(v_{1},...,v_{n}),w=(w_{1},...,w_{n})\in\mathbb{R}^{n}\), then for every \(x\in\mathbb{R}^{n}\), we define
\[(v,w)_{x}=\sum_{i,j=1}^{n}v_{i}w_{j}\omega_{ij}(x),\]
where the functions \(\omega_{ij}\) are defined by \(\omega=\sum_{i,j=1}^{n}\omega_{ij}dx_{i}\wedge d\xi_{j}\). We obtain an induced metric on the space of \((1,0)-\) and \((0,1)-\)forms: if \(\alpha=\sum\alpha_{i}dx_{i}\) then \((\alpha,\alpha)_{x}=\sum\omega^{ij}(x)\alpha_{i}\alpha_{j}\) where \((\omega^{ij})\) denotes the inverse of the matrix \((\omega_{ij})\), and analogously for \((0,1)\)-forms. Using this metric, we would like to define the norm of a \((p,q)\)-form, at a point. Let us fix an orthonormal (with respect to \(\omega\)) coordinate system \((dx_{1},...,dx_{n})\) for the space of \((1,0)\)- forms. If \(\alpha=\sum\alpha_{IJ}dx_{I}\wedge d\xi_{J}\), we define
(2.1) \[|\alpha|^{2}=\sum|\alpha_{IJ}|^{2}.\]
If \(\alpha=\sum_{|I|=p}\alpha_{I}dx_{I}\) and \(\beta=\sum_{|J|=q}\beta_{J}d\xi_{J}\), then
\[|\alpha\wedge\beta|^{2}=\sum_{I,J}\alpha_{I}^{2}\beta_{J}^{2}=|\alpha|^{2}| \beta|^{2}.\]
If we polarize this formula we obtain
(2.2) \[(\alpha\wedge\beta,\alpha^{\prime}\wedge\beta^{\prime})=(\alpha,\alpha^{\prime })(\beta,\beta^{\prime}),\]
with \((p,0)-\)forms \(\alpha,\alpha^{\prime}\) and \((0,q)-\)forms \(\beta\),\(\beta^{\prime}\), and where \((\cdot,\cdot)\) denotes the inner product associated with the norm \(|\cdot|\). Let us show that the definition (2.1) is independent of the choice of orthonormal coordinate system: We begin with the case of \((p,0)\)- forms: Let \(\alpha=\sum_{|I|=p}\alpha_{I}dx_{I}.\) A simple calculation shows that,
\[|\alpha|^{2}\omega_{n}=c_{p}\alpha\wedge J(\alpha)\wedge\omega_{n-p},\]
where \(c_{p}=(-1)^{p(p-1)/2}\), and this expression does not depend on the basis chosen. The number \(c_{p}\) is chosen such that \(dx_{i_{1}}\wedge...\wedge dx_{i_{p}}\wedge d\xi_{i_{1}}\wedge...\wedge d\xi_{i _{p}}=c_{p}\cdot dx_{i_{1}}\wedge d\xi_{i_{1}}\wedge...\wedge dx_{i_{p}}\wedge d \xi_{i_{p}}\). The same calculations hold for \((0,q)\)-forms. Thus, at least for \((p,0)\)- and \((0,q)-\)forms, formula (2.1) does not depend on which orthonormal coordinates we choose. Now, let \((dy_{1},...,dy_{n})\) be another orthonormal basis, and let \(d\zeta_{i}=J(dy_{i})\). If \(\alpha=\sum\alpha_{IJ}dy_{I}\wedge d\zeta_{J}\), then by (2.2) we get that
\[(\alpha,\alpha)=\sum\alpha_{IJ}\alpha_{KL}(dy_{I},dy_{K})(d\zeta_{J},d\zeta_{L }).\]
But by the above, we know that \((\cdot,\cdot)\) does not depend on which orthonormal basis we work with, when applied to \((p,0)-\) or \((0,q)-\)forms. Thus \((dy_{I},dy_{K})\) and \((d\zeta_{J},d\zeta_{L})\) are nonzero, and equal to one, if and only if \(I=K\) and \(J=L\). Hence the definition is independent of which orthonormal basis we use. When we wish to emphasize which metric \(\omega\) the norm and inner product depend on, we will write \(|\cdot|_{\omega}\) and \((\cdot,\cdot)_{\omega}.\)
The Hodge-star in our setting is defined by the relation
(2.3) \[\alpha\wedge*J(\beta)=(\alpha,\beta)\omega_{n}.\]
For an example, if we choose orthonormal coordinates at a point, then in terms of these we have that
\[*dx_{I}\wedge d\xi_{J}=c_{IJ}\cdot dx_{J^{c}}\wedge d\xi_{I^{c}},\]
for a constant \(c_{IJ}=\pm 1\) chosen so that (2.3) is true; here \(I^{c}\) denotes the complementary index of \(I\). We will later investigate the constant \(c_{IJ}\) more carefully.
The integral of an \((n,n)-\)form \(\alpha=\alpha_{0}(x)c_{n}dx\wedge d\xi\), is defined by
(2.4) \[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\alpha_{0}(x)c_{n}dx\wedge d\xi=\int_ {\mathbb{R}^{n}}\alpha_{0}(x)dx,\]
and this gives us an \(L^{2}\)-structure on the space of forms:
\[L_{p,q}^{2}=\{(p,q)-\text{forms}\,\alpha:\int_{\mathbb{R}^{n}\times\mathbb{R}^ {n}}|\alpha|^{2}\omega_{n}<+\infty\}.\]
We will later consider a weighted version of this \(L^{2}-\)space. We remark that in defining the integral (2.4) we have fixed a volume element \(d\xi\) on which the integral thus depends.
## 3. Comparison with the complex theory
In this section, we will consider how super forms correspond to complex forms. Let us begin in the linear setting, that is, we consider only forms at a single point, say \(x_{0}\in\mathbb{R}^{n}\). Let \(\omega\) be an \(\mathbb{R}-\)Kähler form. At the point \(x_{0}\), we choose coordinates \((x_{1},...,x_{n},\xi_{1},...,\xi_{n})\) for \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) such that
\[\omega(x_{0})=\sum_{k=1}^{n}dx_{k}\wedge d\xi_{k}.\]
Since we will consider complex forms as well, we let \((z_{1},...,z_{n})\) be the standard complex coordinates of \(\mathbb{C}^{n}\). We will use the notation,
\[dV_{i}=dx_{i}\wedge d\xi_{i},\,dV_{i}^{\mathbb{C}}=dz_{i}\wedge d\bar{z}_{i}\]
and for a multi-index \(I=(i_{1},...,i_{p})\), we let
\[dV_{I}=dV_{i_{1}}\wedge...\wedge dV_{i_{p}},\,dV_{I}^{\mathbb{C}}=dV_{i_{1}}^{ \mathbb{C}}\wedge...\wedge dV_{i_{p}}^{\mathbb{C}}.\]
Now, define
\[\Theta_{I,J,K}=dx_{J}\wedge d\xi_{K}\wedge dV_{I},\]
for disjoint indices \(I,J,\) and \(K\). We also define the complex form
\[\Theta_{I,J,K}^{\mathbb{C}}=dz_{J}\wedge d\bar{z}_{K}\wedge dV_{I}^{\mathbb{C}}.\]
Every super form \(\alpha\) can at a fixed point be written as a linear combination
\[\alpha=\sum\alpha_{I,J,K}\Theta_{I,J,K},\]
where the coefficients \(\alpha_{I,J,K}\) are real numbers; we define a map \(\mathcal{C}\) which takes super forms to complex forms by,
(3.1) \[\mathcal{C}(\alpha)=\sum\alpha_{I,J,K}\Theta_{I,J,K}^{\mathbb{C}}.\]
The map \(\mathcal{C}\) is linear by definition, and it is also injective, since \(\alpha=0\) is equivalent to \(\mathcal{C}(\alpha)=0\). However, only complex forms of the type (3.1) with real coefficients correspond to a super form, and thus, the correspondence describes an isomorphism between the vector space of super forms at a fixed point and the vector space of complex forms of the form (3.1) with real coefficients \(\alpha_{I,J,K}\). This latter space is, from the complex point of view, not very natural and depends very much on the choice of coordinates. For instance, a generic change of coordinates on the complex side does not leave this space invariant.
The operation of multiplying with the \(\mathbb{R}-\)Kähler form \(\omega\) is sufficiently important to deserve its own notation:
**Definition 3.1**.: We define the operator
\[L:\{k-\text{forms}\}\rightarrow\{(k+2)-\text{forms}\}\]
by letting
\[L(\alpha)=\omega\wedge\alpha,\]
where \(\alpha\) is a \(k-\)form. The dual \(\Lambda\) of the operator \(L\) is defined by,
\[(L(\alpha),\beta)=(\alpha,\Lambda(\beta)),\]
for \(\beta\) a \((k+2)-\)form.
On the complex side, we set the Kähler form to be \(\Omega=\frac{i}{2}\sum_{k=1}^{n}dz_{k}\wedge d\bar{z}_{k}\), with corresponding operator \(L_{\Omega}\). This form \(\Omega\) induces an inner product on the space of complex forms such that the square of the norm of \(\sum\alpha_{I,J,K}\Theta_{I,J,K}^{\mathbb{C}}\) is equal to \(\sum|\alpha_{I,J,K}|^{2}\) in exactly the same way as in formula (2.1). The definitions are made so that, if \(\alpha\) is a super form, then the norm of \(\mathcal{C}(\alpha)\) measured with respect to \(\Omega\), is equal to the norm of \(\alpha\) measured with respect to \(\omega\). Thus the correspondence \(\alpha\leftrightarrow\alpha^{\mathbb{C}}\) is in fact an isometry. We denote by \(\Lambda_{\Omega}\) the dual of \(L_{\Omega}\) with respect to the metric given by \(\Omega\). We have the following:
**Proposition 3.2**.: _Let \(\alpha\) be a \(k-\)form. Then_
(3.2) \[\mathcal{C}(L\alpha)=\frac{2}{i}L_{\Omega}(\mathcal{C}(\alpha)),\]
(3.3) \[\mathcal{C}(\Lambda\alpha)=\frac{i}{2}\Lambda_{\Omega}(\mathcal{C}(\alpha)).\]
_Moreover,_
\[\Lambda\alpha=0\Longleftrightarrow\Lambda_{\Omega}\mathcal{C}(\alpha)=0.\]
Proof.: We let \(I+i\) be the multi index \(I\cup\{i\}\) and \(I-i=I\setminus\{i\}\) which we define to be the empty set if \(i\notin I\). First, the formula (3.2) is immediate. Next, we claim that
(3.4) \[\Lambda\Theta_{L,M,N}=\sum_{j\in L}\Theta_{L-j,M,N},\]
where we use the convention that if an index \(I,J,\) or \(K\) is the empty set, then \(\Theta_{I,J,K}=0\). One realizes this as follows: we have that
\[L\Theta_{I,J,K}=\sum_{\{i\notin I\bigcup J\bigcup K\}}\Theta_{I+i,J,K}.\]
and that \(\Lambda\) is defined by the relation
\[(L(\Theta_{I,J,K}),\Theta_{L,M,N})=(\Theta_{I,J,K},\Lambda(\Theta_{L,M,N})).\]
Here the left hand side is non-zero if and only if there is an \(i\notin I\bigcup J\bigcup K,\) such that \(I+i=L\) and \(J=M\), \(K=N\). In this case, the left hand side is equal to 1, which proves the formula. On the other hand, we have the well known formula (c.f [9], p.21)
\[\Lambda_{\Omega}\Theta_{L,M,N}^{\mathbb{C}}=\frac{2}{i}\sum_{j\in L}\Theta_{L- j,M,N}^{\mathbb{C}}.\]
Thus, using linearity of \(\Lambda,\) we conclude that formula (3.3) holds. The last part follows since \(\Lambda\alpha=0\Longleftrightarrow\mathcal{C}(\Lambda\alpha)=0 \Longleftrightarrow\Lambda_{\Omega}\mathcal{C}(\alpha)=0\). ∎
The Hodge-star \(*_{\Omega}\), acting on complex forms, is defined by the formula
\[v\wedge*_{\Omega}(\bar{v})=|v|^{2}\Omega_{n}.\]
Let
\[N=\{1,2,...,n\},\]
and recall that we defined
\[c_{p}=(-1)^{p(p-1)/2},\]
for each integer \(p\); the number \(c_{p}\) was chosen so that
\[dx_{I}\wedge d\xi_{I}=c_{p}\,dx_{i_{1}}\wedge d\xi_{i_{1}}\wedge....\wedge dx_ {i_{p}}\wedge d\xi_{i_{p}}\]
where \(I=(i_{1},...,i_{p})\). Now, it is well known that (c.f. [9], p.20) that
\[*_{\Omega}dz_{A}\wedge d\bar{z}_{B}\wedge dV_{M}^{\mathbb{C}}=\left[i^{p-q}(-1 )^{k(k-1)/2+m}(-2i)^{k-n}\right]dz_{A}\wedge d\bar{z}_{B}\wedge dV_{M^{{}^{ \prime}}}^{\mathbb{C}},\]
with \(M^{{}^{\prime}}=N\setminus(A\cup B\cup M)\). However, a small calculations reveals that
\[*dx_{A}\wedge d\xi_{B}\wedge dV_{M}=c_{p}c_{q}(-1)^{p+m+pq}dx_{A}\wedge d\xi_{B}\wedge dV_{M^{{}^{\prime}}}.\]
Thus, the real and the complex Hodge stars are related by
(3.5) \[\mathcal{C}(*dx_{A}\wedge d\xi_{B}\wedge dV_{M})=i^{n}2^{n-k}(-1)^{n}\cdot(*_{\Omega}dz_{A}\wedge d\bar{z}_{B}\wedge dV_{M}^{\mathbb{C}})\]
since a straightforward calculation shows that
\[\frac{c_{p}c_{q}(-1)^{p+m+pq}}{i^{p-q}(-1)^{k(k-1)/2+m}(-2i)^{k-n}}=i^{n}2^{n- k}(-1)^{n}.\]
Thus, if \(\alpha\) is a \(k\)-form, then
\[\mathcal{C}(*\alpha)=(i^{n}2^{n-k}(-1)^{n})*_{\Omega}(\mathcal{C}(\alpha)).\]
From the complex theory, if \(v\) is a complex form, there is a relation between \(*^{\Omega}L_{\Omega}^{r}v\) and \(L_{\Omega}^{n-r-k}v\) given by the following theorem (cf. [9], Theorem 2):
**Theorem 3.3**.: _If \(v=\sum_{|I|=p,|J|=q,|M|=m}v_{I,J,M}\Theta_{I,J,M}^{\mathbb{C}}\), then_
\[*_{\Omega}L_{\Omega}^{r}v=i^{p-q}(-1)^{k(k+1)/2}\frac{r!}{(n-k-r)!}L_{\Omega}^ {n-r-k}v.\]
If we apply the above theorem to \(\mathcal{C}(\alpha)\) for a \(k\)-form
\[\alpha=\sum_{|I|=p,|J|=q,|M|=m}\alpha_{I,J,M}\Theta_{I,J,M},\]
using (3.5) and that \(L_{\Omega}^{N}\mathcal{C}(\alpha)=(\frac{i}{2})^{N}\mathcal{C}(L^{N}\alpha),\) we obtain,
\[\frac{1}{i^{n}2^{n-k-2r}(-1)^{n}}*((\frac{i}{2})^{r}L{}^{r}\alpha)=i^{p-q}(-1) ^{k(k+1)/2}\frac{r!}{(n-k-r)!}(\frac{i}{2})^{n-k-r}L{}^{n-r-k}\alpha,\]
which gives us
\[*L{}^{r}\alpha=(-1)^{k(k+1)/2+r+q+m}\frac{r!}{(n-k-r)!}L{}^{n-r-k}\alpha.\]
Thus, we have proved the following:
**Theorem 3.4**.: _If_
\[\alpha=\sum_{|I|=p,|J|=q,|M|=m}\alpha_{I,J,M}\Theta_{I,J,M},\]
_and \(k=p+q+2m\), then_
(3.6) \[*L^{r}\alpha=(-1)^{k(k+1)/2+r+q+m}\frac{r!}{(n-k-r)!}L^{n-r-k}\alpha.\]
Thus far, we have only compared super forms with complex forms in the linear setting, that is, at a fixed point. Let us now extend the map \(\mathcal{C}\) to be defined on super forms on all of \(\mathbb{R}^{n}\). Until this point, there has been no need for a relationship between our real coordinates \((x_{1},...,x_{n},\xi_{1},...,\xi_{n})\) and \((z_{1},...,z_{n})\), but now we make the usual identification \(z_{k}=x_{k}+iy_{k}\) for each \(k=1,...,n\). For
\[\alpha(x)=\sum\alpha_{IJM}(x)\Theta_{IJM},\]
where \(\alpha_{IJM}(\cdot)\) are functions on \(\mathbb{R}^{n}\), we define
\[\mathcal{C}(\alpha)(z)=\sum\alpha_{IJM}(x)\Theta_{IJM}^{\mathbb{C}},\]
where \(x=(z+\bar{z})/2.\)
**Proposition 3.5**.: _For any super form \(\alpha\) we have that_
\[\mathcal{C}(d\alpha)=2\partial\mathcal{C}(\alpha),\]
\[\mathcal{C}(d^{\#}\alpha)=2\bar{\partial}\mathcal{C}(\alpha).\]
Proof.: Let \(\alpha(x)=\sum_{I,J,M}\alpha_{IJM}(x)dx_{I}\wedge d\xi_{J}\wedge dV_{M}.\) Then
\[\mathcal{C}(d\alpha)=\mathcal{C}(\sum_{I,J,M,l}\frac{\partial\alpha_{IJM}(x)}{ \partial x_{l}}dx_{l}\wedge dx_{I}\wedge d\xi_{J}\wedge dV_{M})=\]
\[=\sum_{I,J,M,l}\frac{\partial\alpha_{IJM}(x)}{\partial x_{l}}dz_{l}\wedge dz_{ I}\wedge d\bar{z}_{J}\wedge dV_{M}^{\mathbb{C}}.\]
Since \(\frac{\partial}{\partial z_{l}}=\frac{1}{2}(\frac{\partial}{\partial x_{l}}-i \frac{\partial}{\partial y_{l}})\), we see that
\[\frac{\partial\alpha(x)}{\partial x_{l}}=2\frac{\partial\alpha(x)}{\partial z_ {l}},\]
and thus
\[\mathcal{C}(d\alpha)=2\partial(\mathcal{C}(\alpha)).\]
The formula for \(\bar{\partial}\) follows in the same way. ∎
An important formula in complex analysis is the following (c.f [9], p. 42-44):
**Theorem 3.6**.: _For any complex form \(v\),_
\[[\Lambda_{\Omega},\partial]v=-i*_{\Omega}\partial*_{\Omega}v.\]
Let \(\alpha\) be a \(k-\)form. Then, applying the above theorem, we get
\[[\Lambda_{\Omega},\partial]\mathcal{C}(\alpha)=-i*_{\Omega}\partial*_{\Omega} \mathcal{C}(\alpha).\]
However, by Propositions 3.2 and 3.5, we notice that
\[[\Lambda_{\Omega},\partial]\mathcal{C}(\alpha)=-i\cdot\mathcal{C}([\Lambda,d] \alpha),\]
and, by repeated use of (3.5), keeping in mind that \(d*\alpha\) is a \((2n-k+1)\)-form,
\[*_{\Omega}\partial*_{\Omega}\mathcal{C}(\alpha)=\left((i^{n}2^{n-k}(-1)^{n})^{ -1}\right)*_{\Omega}\partial\mathcal{C}(*\alpha)=\]
\[=\left((i^{n}2^{n-k}(-1)^{n}i^{n}2^{n-(2n-k+1)}(-1)^{n})^{-1}\right)\mathcal{C }(*\frac{1}{2}d(*\alpha))=(-1)^{n}\mathcal{C}(*d*\alpha).\]
This gives us that
\[-i(-1)^{n}\mathcal{C}(*d*\alpha)=-i\cdot\mathcal{C}([\Lambda,d]\alpha),\]
and so we arrive at:
**Theorem 3.7**.: _For any form \(\alpha\) we have_
\[[\Lambda,d]\alpha=(-1)^{n}*d*(\alpha).\]
Let us conclude this section with some elementary observations:
**Lemma 3.8**.: _For any \(k-\)form \(\alpha\) we have_
(3.7) \[**\alpha=(-1)^{n-k}\alpha\]
\[\Lambda J\alpha=-J\Lambda\alpha,\]
_and_
\[*J=(-1)^{n}J*.\]
Proof.: Since every form \(\alpha\) is a linear combination of forms of the type \(dx_{A}\wedge d\xi_{B}\wedge dV_{M}\), we need only to prove the lemma with \(\alpha=dx_{A}\wedge d\xi_{B}\wedge dV_{M}\). One easily calculates
\[*dx_{A}\wedge d\xi_{B}\wedge dV_{M}=c_{p}c_{q}(-1)^{p+m+qp}dx_{A}\wedge d\xi_{ B}\wedge dV_{M^{\prime}},\]
with \(M^{\prime}=\{1,2,...,n\}\setminus A\cup B\cup M,\) and \(p=|A|\), \(q=|B|,\)\(m=|M|\) using the same notation as before. Thus by applying the Hodge-star twice, the form \(dx_{A}\wedge d\xi_{B}\wedge dV_{M}\) will be multiplied by the constant
\[c_{p}^{2}c_{q}^{2}(-1)^{p+m+qp+p+m^{\prime}+pq}=(-1)^{m+n-m-p-q}=(-1)^{n-k},\]
where \(m^{\prime}=|M^{\prime}|=n-p-q-m\), which proves the first formula. The second formula follows by using (3.4). The last formula follows from direct calculations:
\[J*dx_{A}\wedge d\xi_{B}\wedge dV_{M}=c_{p}c_{q}(-1)^{p+m+m^{\prime}}dx_{A} \wedge d\xi_{B}\wedge dV_{M^{\prime}},\]
\[*J(dx_{A}\wedge d\xi_{B}\wedge dV_{M})=c_{p}c_{q}(-1)^{q}dx_{A}\wedge d\xi_{B} \wedge dV_{M^{\prime}}.\]
Thus \(*J\) differs from \(J*\) by the constant
\[c_{p}c_{q}(-1)^{p+m+m^{\prime}}\cdot c_{p}c_{q}(-1)^{q}=(-1)^{n}.\]
∎
Finally, we note the following corollary of Theorem 3.7:
**Corollary 3.9**.: _For a form \(\alpha\), we have,_
\[*d^{\#}*\alpha=(-1)^{n+1}[\Lambda,d^{\#}]\alpha.\]
Proof.: Let us consider the expression \(J[\Lambda,d]J\alpha\). From the definition of \(d^{\#}\) we get:
\[J[\Lambda,d]J\alpha=J\Lambda dJ\alpha-Jd\Lambda J\alpha=J\Lambda Jd^{\#}\alpha +JdJ\Lambda\alpha=\]
\[=-J^{2}\Lambda d^{\#}\alpha+J^{2}d^{\#}\Lambda\alpha=-[\Lambda,d^{\#}]\alpha,\]
where we used that
\[J\Lambda=-\Lambda J.\]
On the other hand, Theorem 3.7 says that
\[J[\Lambda,d]J\alpha=(-1)^{n}J(*d*)J\alpha,\]
and since \(*d*J=J*d^{\#}*\) by applying Lemma 3.8, we have proved that indeed,
\[*d^{\#}*\alpha=(-1)^{n+1}[\Lambda,d^{\#}]\alpha.\]
∎
## 4. Primitive super forms
In this section we take the opportunity to introduce the notion of primitivity for super forms and establish expected results by once again comparing with the complex setting.
**Proposition 4.1**.: _Let \(\alpha\) be a \((p,q)-\)form with \(p+q=k.\) Then_
(4.1) \[[\Lambda,L^{s}]\alpha=C_{k,s}L^{s-1}\alpha,\]
_with_
\[C_{k,s}=s(n-k+1-s).\]
Proof.: The result follows from the complex theory (c.f [9]): if \(v\) is any complex form then
\[[\Lambda_{\Omega},L_{\Omega}^{s}]v=C_{k,s}L_{\Omega}^{s-1}v,\]
and the result follows by letting \(v=\mathcal{C}(\alpha)\) and by repeatedly applying Proposition 3.2. ∎
Let us define an important concept in this setting:
**Definition 4.2**.: A form \(\alpha\) is called _primitive_ if \(\alpha\) satisfies
\[\Lambda\alpha=0.\]
Note that, in view of Proposition 3.2, \(\alpha\) is primitive if and only if \(\mathcal{C}(\alpha)\) is primitive (a complex form \(v\) is primitive if \(\Lambda_{\Omega}v=0\)). The importance of primitive forms is that they are easier to work with than just any arbitrary form, combined with the fact that any form can be decomposed into primitive components in the following sense:
**Proposition 4.3**.: _Let \(\alpha\) be a \(k\)-form. Then we can write \(\alpha\) as_
\[\alpha=\alpha_{0}+L\alpha_{1}+...+L^{s}\alpha_{s},\]
_where each \(\alpha_{j}\) is a primitive \((k-2j)\)-form. Moreover, the terms of the sum are pairwise orthogonal._
Proof.: The result is well known in the complex case (c.f [9]). Thus we know that the formula holds for \(\mathcal{C}(\alpha)\), that is
\[\mathcal{C}(\alpha)=\alpha_{0}^{{}^{\prime}}+L\alpha_{1}^{{}^{\prime}}+...+L^{ s}\alpha_{s}^{{}^{\prime}}\]
where each \(\alpha_{j}^{{}^{\prime}}\) is a primitive, complex, \((k-2j)-\)form. But since \(\mathcal{C}(\alpha)\) has real coefficients we can assume that each \(\alpha_{j}^{{}^{\prime}}\) have real coefficients as well. Thus, each \(\alpha_{j}^{{}^{\prime}}\) is in fact \(\mathcal{C}(\alpha_{j})\) for some super form \(\alpha_{j}\). By Proposition 3.2, each \(\alpha_{j}\) is primitive, and the property of being pairwise orthogonal is immediate since the correspondence \(\alpha\longleftrightarrow\mathcal{C}(\alpha)\) is an isometry. ∎
**Proposition 4.4**.: _Let \(\alpha\) be a \(k-\)form. If_
\[L^{n-k+s}\alpha=0\]
_then_
\[\alpha=\sum_{0\leq j\leq s-1}L^{j}\alpha_{j}\]
_with \(\alpha_{j}\) a primitive \((k-2j)-\)form. Moreover, \(\alpha\) is primitive iff_
\[L^{n-k+1}\alpha=0.\]
Proof.: The formulas are well known in the complex case and translate into our setting in the same way as above. ∎
The main theorem of this section, known in the complex case as the Lefschetz isomorphism Theorem, is given by the following:
**Theorem 4.5**.: _Let \(k\leq n\). Then the operator_
\[L^{n-k}:\{k-\text{forms}\}\rightarrow\{(2n-k)-\text{forms}\}\]
_is an isomorphism._
Proof.: From the complex setting, we know that \((L_{\Omega})^{n-k}\) is an isomorphism, and it is easily verified that \(k\)-forms that are real linear combinations of \(\Theta_{I,J,K}^{\mathbb{C}}\) correspond, via \(L_{\Omega}^{n-k}\), to \((2n-k)\)-forms that are real linear combinations of the same type of degree \((2n-k)\) which establishes the Theorem. ∎
## 5. \(L^{2}\)-estimates for the \(d\)-operator
Let us fix a closed, strictly positive, smooth \((1,1)-\)form \(\omega\). As we have seen, \(\omega\) induces an inner product on the space of forms, so that for \((p,q)-\)forms \(\alpha\) and \(\beta\), the function \(x\mapsto(\alpha,\beta)(x)\), is a function on \(\mathbb{R}^{n}\). We now define the associated \(L^{2}\) inner product:
**Definition 5.1**.: For \((p,q)-\)forms \(\alpha\) and \(\beta\) we define
\[\bigl{\langle}\alpha,\beta\bigr{\rangle}=\int_{\mathbb{R}^{n}\times\mathbb{R}^ {n}}(\alpha,\beta)\omega_{n}\]
and we define the associated norm
\[||\alpha||^{2}=\bigl{\langle}\alpha,\alpha\bigr{\rangle}.\]
Observe that, by (2.3), we have that
(5.1) \[\bigl{\langle}\alpha,\beta\bigr{\rangle}=\int_{\mathbb{R}^{n}\times\mathbb{R}^ {n}}(\alpha,\beta)\omega_{n}=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\alpha \wedge*J(\beta).\]
We previously defined the space \(L_{p,q}^{2}\) as the set of all \((p,q)-\)forms \(\alpha\) whose coefficients are integrable and which satisfy \(||\alpha||^{2}<\infty.\) Moreover, we let
\[L^{2}=\oplus_{p,q=0}^{n}(L_{p,q}^{2}).\]
We consider the operator \(d:L_{p,q}^{2}\to L_{p+1,q}^{2}\) as a closed, densely defined operator, with
\[dom(d)=\{\alpha\in L_{p,q}^{2}:d\alpha\in L_{p+1,q}^{2}\},\]
where \(d\) is taken in the sense of super currents if \(\alpha\) is not smooth (cf. [7]). If \(\alpha\) is an \((n,q)\)-form it is understood that \(d\alpha=0\). By standard arguments, smooth \((p,q)-\)forms with compact support are dense in \(L_{p,q}^{2}\) and each such form is in \(dom(d)\). Thus, \(dom(d)\) is indeed dense in \(L_{p,q}^{2}\).
\[\bigl{\langle}d^{\star}\alpha,\beta\bigr{\rangle}=\bigl{\langle}\alpha,d\beta \bigr{\rangle}\]
for smooth forms \(\alpha,\beta\in L^{2}.\) The dual of \(d^{\#}\) is defined analogously.
**Proposition 5.2**.: _For a smooth, compactly supported form \(\alpha\) we have_
\[d^{\star}\alpha=[\Lambda,d^{\#}]\alpha,\]
_and_
\[(d^{\#})^{\star}\alpha=-[\Lambda,d]\alpha.\]
Proof.: By Theorem 3.7 and Corollary 3.9 it is enough to prove that
\[d^{\star}=(-1)^{n+1}*d^{\#}*\]
and
\[(d^{\#})^{\star}=(-1)^{n}*d*.\]
To this end, let \(\alpha\) be a \(k\)-form and \(\beta\) a \((k+1)\)-form, both smooth and compactly supported. By (5.1) and Stokes’ formula,
\[\bigl{\langle}d\alpha,\beta\bigr{\rangle}=\int d\alpha\wedge*J(\beta)=(-1)^{k+ 1}\int\alpha\wedge d*J(\beta).\]
Since \(d*J(\beta)\) is a \((2n-k)-\)form, we know from Lemma 3.8 that
\[d*J(\beta)=(-1)^{n-(2n-k)}**d*J(\beta)=(-1)^{n-(2n-k)}*J(*d^{\#}*\beta),\]
since \(dJ=Jd^{\#}\). Thus,
\[\bigl{\langle}d\alpha,\beta\bigr{\rangle}=(-1)^{k+1+n-(2n-k)}\int\alpha\wedge* *d*J(\beta)=(-1)^{n+1}\bigl{\langle}\alpha,*d^{\#}*\beta\bigr{\rangle},\]
which proves that
\[d^{\star}\beta=(-1)^{n+1}*d^{\#}*\beta.\]
By Corollary 3.9, we see that indeed
\[d^{\star}=[\Lambda,d^{\#}].\]
The second formula of the Proposition follows in the same way, using Theorem 3.7. ∎
There are two natural Laplace operators in our setting:
**Definition 5.3**.: For \(\alpha\) any smooth and compactly supported form, we define
\[\Box\alpha=dd^{\star}\alpha+d^{\star}d\alpha\]
and
\[\Box^{\#}\alpha=d^{\#}(d^{\#})^{\star}\alpha+(d^{\#})^{\star}d^{\#}\alpha.\]
Our previous work can now be applied to show that these operators are in fact equal:
**Proposition 5.4**.: _For any smooth and compactly supported form \(\alpha\) we have that_
\[\Box\alpha=\Box^{\#}\alpha.\]
Proof.: By Proposition 5.2, we obtain
\[\Box\alpha=(d[\Lambda,d^{\#}]\alpha+[\Lambda,d^{\#}]d\alpha)\]
and
\[\Box^{\#}\alpha=-(d^{\#}[\Lambda,d]\alpha+[\Lambda,d]d^{\#}\alpha).\]
Writing out the terms explicitly one immediately concludes that these expressions are equal. ∎
Let us consider “twisted” versions of these Laplacians:
**Definition 5.5**.: For \(\varphi\) a smooth function, we define
\[d_{\varphi}=e^{\varphi}de^{-\varphi},\]
and
\[d_{\varphi}^{\#}=e^{\varphi}d^{\#}e^{-\varphi}.\]
We define the weighted inner product
\[\bigl{\langle}\alpha,\beta\bigr{\rangle}_{\varphi}=\int_{\mathbb{R}^{n}\times \mathbb{R}^{n}}(\alpha,\beta)e^{-\varphi}\omega_{n},\]
and let \(L_{\varphi}^{2}=\oplus_{0\leq p,q\leq n}(L_{p,q,\varphi}^{2})\) be the space of forms such that
\[||\alpha||_{\varphi}^{2}:=\bigl{\langle}\alpha,\alpha\bigr{\rangle}_{\varphi}<\infty.\]
This is easily seen to be a Hilbert space. We will write \(L_{\varphi}^{2}(\omega)\) when we wish to emphasize which \(\mathbb{R}-\)Kähler metric \(\omega\) we are integrating against in defining \(L_{\varphi}^{2}\). The dual of \(d\) and \(d^{\#}\) with respect to this inner product will be denoted \(d^{*}\) and \((d^{\#})^{*}\) respectively. We can now introduce the “twisted” Laplacians:
\[\Box_{\varphi}\alpha=dd^{*}\alpha+d^{*}d\alpha\]
and
\[\Box_{\varphi}^{\#}\alpha=d_{\varphi}^{\#}(d_{\varphi}^{\#})^{*}\alpha+(d_{ \varphi}^{\#})^{*}d_{\varphi}^{\#}\alpha.\]
Our next task is to relate these Laplacians to each other in the spirit of Proposition 5.4. We begin with the weighted analogue of Proposition 5.2:
**Proposition 5.6**.: _For any smooth, compactly supported form \(\alpha\), the equations_
\[d^{*}\alpha=[\Lambda,d_{\varphi}^{\#}]\alpha,\]
_and_
\[(d_{\varphi}^{\#})^{*}\alpha=-[\Lambda,d]\alpha,\]
_are satisfied._
Proof.: Let \(\alpha\) be a \((p,q)\)-form and \(\beta\) a \((p+1,q)\)-form. Then, we compute
\[\bigl{\langle}d\alpha,\beta\bigr{\rangle}_{\varphi}=\int(d\alpha,\beta)e^{- \varphi}=\int(\alpha,d^{\star}(e^{-\varphi}\beta))=\int(\alpha,e^{\varphi}d^{ \star}(e^{-\varphi}\beta))e^{-\varphi}.\]
By Proposition 5.2 we know that \(d^{\star}=[\Lambda,d^{\#}].\) Inserting this into the last integral, we see that
\[\bigl{\langle}d\alpha,\beta\bigr{\rangle}_{\varphi}=\int(\alpha,e^{\varphi}[ \Lambda,d^{\#}](e^{-\varphi}\beta))e^{-\varphi}=\int(\alpha,[\Lambda,d_{ \varphi}^{\#}]\beta)e^{-\varphi},\]
using that \(\Lambda\) commutes with the operation of multiplying with \(e^{\varphi}.\) But this means precisely that
\[d^{*}\beta=[\Lambda,d_{\varphi}^{\#}]\beta,\]
which proves the first formula. The second one follows in the same way. ∎
**Theorem 5.7**.: _If \(\alpha\) is a smooth and compactly supported form, then_
(5.2) \[\Box_{\varphi}\alpha=\Box_{\varphi}^{\#}\alpha+[dd^{\#}\varphi,\Lambda]\alpha.\]
Proof.: We calculate, using Proposition 5.6,
\[\Box_{\varphi}\alpha-\Box_{\varphi}^{\#}\alpha=dd^{*}\alpha+d^{*}d\alpha-(d_{ \varphi}^{\#}(d_{\varphi}^{\#})^{*}\alpha+(d_{\varphi}^{\#})^{*}d_{\varphi}^{ \#}\alpha)=\]
\[=d[\Lambda,d_{\varphi}^{\#}]\alpha+[\Lambda,d_{\varphi}^{\#}]d\alpha+d_{ \varphi}^{\#}[\Lambda,d]\alpha+[\Lambda,d]d_{\varphi}^{\#}\alpha.\]
Since
\[d_{\varphi}^{\#}\alpha=d^{\#}\alpha-d^{\#}\varphi\wedge\alpha,\]
we get
\[\Box_{\varphi}\alpha-\Box_{\varphi}^{\#}\alpha=\Box\alpha-\Box^{\#}\alpha-d[ \Lambda,d^{\#}\varphi]\alpha-[\Lambda,d^{\#}\varphi]d\alpha-d^{\#}\varphi \wedge[\Lambda,d]\alpha-[\Lambda,d](d^{\#}\varphi\wedge\alpha).\]
Here we identify \(d^{\#}\varphi\) with the operator sending \(\alpha\mapsto d^{\#}\varphi\wedge\alpha\). By Proposition 5.4, we know that the un-weighted Laplace operators satisfy \(\Box\alpha-\Box^{\#}\alpha=0\). Thus,
\[\Box_{\varphi}\alpha-\Box_{\varphi}^{\#}\alpha=-(d[\Lambda,d^{\#}\varphi] \alpha+[\Lambda,d^{\#}\varphi]d\alpha+d^{\#}\varphi\wedge[\Lambda,d]\alpha+[ \Lambda,d](d^{\#}\varphi\wedge\alpha)).\]
Expanding the commutators, we see that
\[\Box_{\varphi}\alpha-\Box_{\varphi}^{\#}\alpha=-\big{(}d(\Lambda(d^{\#}\varphi\wedge\alpha))-d(d^{\#}\varphi\wedge(\Lambda\alpha))+\Lambda(d^{\#}\varphi\wedge d\alpha)-d^{\#}\varphi\wedge(\Lambda d\alpha)+\]
\[+d^{\#}\varphi\wedge(\Lambda d\alpha)-d^{\#}\varphi\wedge(d(\Lambda\alpha))+\Lambda(d(d^{\#}\varphi\wedge\alpha))-d(\Lambda(d^{\#}\varphi\wedge\alpha))\big{)}.\]
Removing the terms which cancel out, we obtain
\[\Box_{\varphi}\alpha-\Box_{\varphi}^{\#}\alpha=d(d^{\#}\varphi\wedge(\Lambda \alpha))-\Lambda(d^{\#}\varphi\wedge d\alpha)-\Lambda(d(d^{\#}\varphi\wedge \alpha)+d^{\#}\varphi\wedge(d(\Lambda\alpha))=\]
\[=dd^{\#}\varphi\wedge(\Lambda\alpha)-d^{\#}\varphi\wedge d(\Lambda\alpha)- \Lambda(d^{\#}\varphi\wedge d\alpha)-\Lambda(dd^{\#}\varphi\wedge\alpha)+ \Lambda(d^{\#}\varphi\wedge d\alpha)+d^{\#}\varphi\wedge(d(\Lambda\alpha))=\]
\[=dd^{\#}\varphi\wedge(\Lambda\alpha)-\Lambda(dd^{\#}\varphi\wedge\alpha)=[dd^{ \#}\varphi,\Lambda]\alpha.\]
Putting everything together, we conclude that
\[\Box_{\varphi}\alpha=\Box_{\varphi}^{\#}\alpha+[dd^{\#}\varphi,\Lambda]\alpha,\]
as desired. ∎
**Example 5.8**.: Let us consider a concrete example of this identity. Let \(n=1\) and let \(f\) be a smooth function with compact support. For the weight function, we choose \(\varphi=x^{2}/2\). Then
\[\Box_{\varphi}f=d^{*}df=d^{*}(f^{\prime}dx)=-f^{\prime\prime}+xf^{\prime}\]
and
\[\Box_{\varphi}^{\#}f=(d_{\varphi}^{\#})^{*}d_{\varphi}^{\#}f=(d_{\varphi}^{\#} )^{*}[(f^{\prime}-xf)d\xi]=-f^{\prime\prime}+f+xf^{\prime}.\]
Thus
\[\Box_{\varphi}f=\Box_{\varphi}^{\#}f-f,\]
and we see that in this case \([dd^{\#}\varphi,\Lambda]=-Id\) as predicted by formula (5.2).
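The example can also be reproduced symbolically. In the sketch below (an illustration only), the one-dimensional actions of \(d^{*}\) and \((d_{\varphi}^{\#})^{*}\) are written out explicitly rather than derived: the first is obtained by integrating by parts against the weight \(e^{-\varphi}\), the second from Proposition 5.6. With these, sympy recovers the two Laplacians above and the relation \([dd^{\#}\varphi,\Lambda]=-Id\) on functions:

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = x ** 2 / 2
f = sp.Function('f')(x)                      # an arbitrary smooth function

# explicit one-dimensional actions (n = 1, flat metric omega = dx ^ dxi):
d_star = lambda g: -sp.diff(g, x) + sp.diff(phi, x) * g      # d* on g dx (weighted adjoint)
dsharp_phi = lambda h: sp.diff(h, x) - sp.diff(phi, x) * h   # coefficient of d_phi^# h
dsharp_phi_star = lambda g: -sp.diff(g, x)                   # (d_phi^#)* on g dxi, from -[Lambda, d] (Prop. 5.6)

box_phi = d_star(sp.diff(f, x))              # Box_phi f = d* d f
box_sharp = dsharp_phi_star(dsharp_phi(f))   # Box_phi^# f

print(sp.simplify(box_phi))                  # -f'' + x f'
print(sp.simplify(box_sharp))                # -f'' + f + x f'
print(sp.simplify(box_phi - box_sharp))      # -f, i.e. [dd# phi, Lambda] = -Id on functions
```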
Let us take the inner product of identity (5.2) against a smooth, compactly supported form \(\alpha\):
\[\left\langle\Box_{\varphi}\alpha,\alpha\right\rangle_{\varphi}=\left\langle\Box_{\varphi}^{\#}\alpha,\alpha\right\rangle_{\varphi}+\left\langle[dd^{\#}\varphi,\Lambda]\alpha,\alpha\right\rangle_{\varphi}.\]
By the definition of the adjoint, this expression gives us the following fundamental identity, which should be compared with the classical Bochner-Kodaira-Nakano identity of complex analysis (c.f [8]):
**Theorem 5.9**.: _For every smooth, compactly supported form \(\alpha\),_
(5.3) \[||d^{*}\alpha||_{\varphi}^{2}+||d\alpha||_{\varphi}^{2}=||(d_{\varphi}^{\#})^{ *}\alpha||_{\varphi}^{2}+||d_{\varphi}^{\#}\alpha||_{\varphi}^{2}+\left\langle [dd^{\#}\varphi,\Lambda]\alpha,\alpha\right\rangle_{\varphi}.\]
By the following fundamental theorem of functional analysis, such an equality can be used to prove the existence of solutions of the \(d-\)equation (c.f. [6]):
**Theorem 5.10**.: _Let \(E\) and \(F\) be two Hilbert spaces, equipped with norms \(||\cdot||_{E}\) and \(||\cdot||_{F}\) and let \(\mathcal{H}\) be a closed subspace of \(F\). Let \(\mathcal{L}:E\to F\) be a closed, densely defined operator such that \(dom(\mathcal{L}^{*})\) is dense in \(F\), and that \(Range(\mathcal{L})\subset\mathcal{H}\) If, for each \(\alpha\in dom(\mathcal{L}^{*})\cap\mathcal{H}\), the inequality_
(5.4) \[||\mathcal{L}^{*}\alpha||_{E}^{2}\geq c||\alpha||_{F}^{2}\]
_is satisfied for some fixed constant \(c>0\), then we can find an element \(\beta\in E\) such that_
\[\mathcal{L}\beta=\alpha\]
_and_
\[||\beta||_{E}^{2}\leq c^{-1}||\alpha||_{F}^{2}.\]
To apply the above theorem to the operator \(d\), we thus need to show an inequality of the type (5.4) with \(\mathcal{L}=d\). Let us begin with:
**Proposition 5.11**.: _Let \(\alpha\in dom(d^{*})\cap dom(d)\). Then, if the inequality_
(5.5) \[||d\beta||_{\varphi}^{2}+||d^{*}\beta||_{\varphi}^{2}\geq c||\beta||_{\varphi} ^{2}\]
_holds for all smooth, compactly supported forms \(\beta\) with \(c>0\), then_
(5.6) \[||d\alpha||_{\varphi}^{2}+||d^{*}\alpha||_{\varphi}^{2}\geq c||\alpha||_{ \varphi}^{2}.\]
Proof.: We can of course assume that \(d^{*}\alpha\in L_{p-1,q}^{2}\), since otherwise the inequality trivially holds, and the condition \(\alpha\in dom(d)\) means precisely that \(d\alpha\in L_{p+1,q}^{2}\). First, we show that if the inequality (5.5) holds for \(\beta\in dom(d^{*})\cap dom(d)\) with compact support, then the desired inequality (5.6) holds. Indeed, let \(\chi_{R}\) be a smooth bump function which is 1 on the ball defined by \(\{x\in\mathbb{R}^{n}:|x|\leq R\}\) and vanishes outside \(\{x\in\mathbb{R}^{n}:|x|\leq 2R\}\). Then it is easy to see that \(\chi_{R}\cdot\alpha\in dom(d^{*})\). By assumption, we know that
\[||d(\chi_{R}\cdot\alpha)||_{\varphi}^{2}+||d^{*}(\chi_{R}\cdot\alpha)||_{ \varphi}^{2}\geq c||\chi_{R}\cdot\alpha||_{\varphi}^{2}.\]
But \(d(\chi_{R}\alpha)=d\chi_{R}\wedge\alpha+\chi_{R}d\alpha\), so
\[||d(\chi_{R}\cdot\alpha)||_{\varphi}\leq||d\chi_{R}\wedge\alpha||_{\varphi}+|| \chi_{R}d\alpha||_{\varphi}.\]
The first term satisfies
\[||d\chi_{R}\wedge\alpha||_{\varphi}^{2}=\int|d\chi_{R}\wedge\alpha|^{2}e^{- \varphi}\omega_{n}\leq\int|d\chi_{R}|^{2}|\alpha|^{2}e^{-\varphi}\omega_{n},\]
and this term tends to zero by the dominated convergence theorem, since \(|d\chi_{R}(x)|\to 0\) pointwise as \(R\rightarrow\infty\), \(|d\chi_{R}|\) can be chosen uniformly bounded in \(R\), and \(|\alpha|^{2}e^{-\varphi}\) is integrable. Since \(d\alpha\) belongs to \(L^{2}\), the second term tends to \(||d\alpha||_{\varphi}\), and thus we see that
\[\lim_{R\rightarrow\infty}||d(\chi_{R}\cdot\alpha)||_{\varphi}^{2}\leq||d\alpha ||_{\varphi}^{2}.\]
For the term \(d^{*}(\chi_{R}\cdot\alpha),\) a straightforward calculation reveals that
\[d^{*}(\chi_{R}\cdot\alpha)=\pm*d^{\#}\chi_{R}\wedge(*\alpha)\pm\chi_{R}d^{*}\alpha,\]
and consequently,
\[||d^{*}(\chi_{R}\cdot\alpha)||_{\varphi}\leq||d^{\#}\chi_{R}\wedge(*\alpha)||_ {\varphi}+||\chi_{R}d^{*}\alpha||_{\varphi}.\]
By the same argument as above this implies that
\[\lim_{R\rightarrow\infty}||d^{*}(\chi_{R}\cdot\alpha)||_{\varphi}^{2}\leq||d^{ *}\alpha||_{\varphi}^{2}.\]
Combining these observations, we obtain
\[||d\alpha||_{\varphi}^{2}+||d^{*}\alpha||_{\varphi}^{2}\geq c\lim_{R \rightarrow\infty}||\chi_{R}\cdot\alpha||_{\varphi}^{2}=c||\alpha||_{\varphi}^ {2},\]
as desired. The proof will thus be complete if we show that the hypothesis of the proposition implies that the inequality (5.6) holds for every \(\alpha\in dom(d^{*})\cap dom(d)\) with compact support. But for such an \(\alpha\), if we let \(\psi_{\epsilon}\) be an approximation of the identity, it is not hard to show that
\[\alpha*\psi_{\epsilon}\in dom(d^{*}),\]
and moreover, as \(\epsilon\to 0\),
\[||d(\alpha*\psi_{\epsilon})||_{\varphi}^{2}\rightarrow||d\alpha||_{\varphi}^{2}.\]
Since \(d^{*}\) is a first order differential operator with smooth coefficients, the same holds true for \(d^{*}\) in view of Friedrichs' lemma (c.f. [5] or [6], Lemma 1.2.2, which applies analogously in our setting), that is,
\[||d^{*}(\alpha*\psi_{\epsilon})||_{\varphi}^{2}\rightarrow||d^{*}\alpha||_{ \varphi}^{2},\]
as \(\epsilon\to 0\). Thus, since we know that (5.5) holds with \(\beta=\alpha*\psi_{\epsilon}\), we see that the inequality (5.6) holds, and we are done. ∎
Now, let \(\varphi\) be a smooth convex function such that \(dd^{\#}\varphi\geq\epsilon\omega\) for some fixed \(\epsilon>0\). We claim that this implies that
\[\left\langle[dd^{\#}\varphi,\Lambda]\alpha,\alpha\right\rangle\geq\epsilon p|| \alpha||^{2},\]
for \(\alpha\) a \((p,n)-\)form. Indeed, by standard linear algebra, we can at each fixed point \(x_{0}\) find orthogonal coordinates in which \(\omega_{x_{0}}=\sum_{i=1}^{n}dx_{i}\wedge d\xi_{i}\) and \(dd^{\#}\varphi_{x_{0}}=\sum_{i=1}^{n}\lambda_{i}dx_{i}\wedge d\xi_{i}\), where the \(\lambda_{i}\) are ordered in such a way that \(\lambda_{1}\leq\ldots\leq\lambda_{n}\). Since this holds for any point \(x_{0}\), we can for each \(i\) consider \(\lambda_{i}\) as a function on \(\mathbb{R}^{n}\) which will depend continuously on the point \(x_{0}\), and by the assumption on \(\varphi\) we will have that \(\lambda_{n}(x)\geq\ldots\geq\lambda_{1}(x)\geq\epsilon\) for all \(x\). For a \((p,n)-\)form
\[\alpha=\sum_{|I|=p}\alpha_{I}dx_{I}\wedge d\xi,\]
a calculation reveals (c.f. [4], p. 69) that the pointwise inner product at the point \(x_{0}\) satisfies
\[([dd^{\#}\varphi,\Lambda]\alpha,\alpha)_{x_{0}}=\sum_{|I|=p}\Big(\sum_{i\in I}\lambda_{i}(x_{0})\Big)|\alpha_{I}|^{2}(x_{0})\geq(\lambda_{1}(x_{0})+...+\lambda_{p}(x_{0}))|\alpha|^{2}(x_{0}).\]
Since \(x_{0}\) was arbitrary we infer that
\[\left\langle[dd^{\#}\varphi,\Lambda]\alpha,\alpha\right\rangle\geq p\epsilon|| \alpha||^{2},\]
and thus, by (5.3), we obtain the inequality
(5.7) \[||d^{*}\alpha||_{\varphi}^{2}+||d\alpha||_{\varphi}^{2}\geq p\epsilon||\alpha| |_{\varphi}^{2},\]
for every smooth, compactly supported form \(\alpha.\) By Proposition 5.11 the above inequality will then hold for every \(\alpha\in dom(d)\cap dom(d^{*}).\) If moreover \(d\alpha=0\), this implies that
\[||d^{*}\alpha||_{\varphi}^{2}\geq p\epsilon||\alpha||_{\varphi}^{2},\]
and so we can apply Theorem 5.10 with \(\mathcal{H}=ker(d)\) (which is a closed subspace) to obtain:
**Theorem 5.12**.: _Let \(\omega\) be an \(\mathbb{R}-\)Kähler form and let \(\varphi\) be a smooth function such that \(dd^{\#}\varphi\geq\epsilon\omega\) for some \(\epsilon>0\). If \(\beta\in L_{p,n,\varphi}^{2}\) satisfies that \(d\beta=0\) and if \(p\geq 1\), then we can find an \(\alpha\in L_{p-1,n,\varphi}^{2}\) such that_
\[d\alpha=\beta\]
_and_
\[||\alpha||_{\varphi}^{2}\leq\frac{1}{p\epsilon}||\beta||_{\varphi}^{2}.\]
If we instead let the \(\mathbb{R}\)-Kähler form \(\omega\) be given by \(\omega=dd^{\#}\varphi\) for some smooth, convex function \(\varphi\), then Proposition 4.1 tells us that \([dd^{\#}\varphi,\Lambda]\alpha=(k-n)\alpha\), if \(\alpha\) is a \(k\)-form. Thus,
(5.8) \[||d^{*}\alpha||_{\varphi}^{2}+||d\alpha||_{\varphi}^{2}\geq(k-n)||\alpha||_{ \varphi}^{2},\]
for every smooth, compactly supported form \(\alpha\) and we get the following result:
**Theorem 5.13**.: _Assume that \(\omega=dd^{\#}\varphi>0\) for a smooth convex function \(\varphi\). If \(\beta\in L_{p,q,\varphi}^{2}\) is a \(k-\)form with \(k>n\) and \(p\geq 1\) such that \(d\beta=0\), then we can find an \(\alpha\in L_{p-1,q,\varphi}^{2}\) such that_
\[d\alpha=\beta\]
_and_
\[||\alpha||_{\varphi}^{2}\leq\frac{1}{k-n}||\beta||_{\varphi}^{2}.\]
Now, if \(\tilde{\beta}\) is a closed \((p,n)\)-form with \(p\geq 1\), we can, by virtue of the above theorem, solve the equation
\[d\tilde{\alpha}=\tilde{\beta}\]
with the estimate
\[||\tilde{\alpha}||_{\varphi}^{2}\leq\frac{1}{p}||\tilde{\beta}||_{\varphi}^{2}.\]
Let us write this in coordinates: if \(\tilde{\alpha}=\alpha\wedge d\xi\) with \(\alpha=\sum_{|K|=p-1}\alpha_{K}dx_{K},\) and \(\tilde{\beta}=\beta\wedge d\xi\) with \(\beta=\sum_{|L|=p}\beta_{L}dx_{L},\) then
\[\int|\alpha|_{dd^{\#}\varphi}^{2}e^{-\varphi}c_{n}dx\wedge d\xi\leq\frac{1}{p}\int|\beta|_{dd^{\#}\varphi}^{2}e^{-\varphi}c_{n}dx\wedge d\xi.\]
This follows since \(|\tilde{\alpha}|_{dd^{\#}\varphi}^{2}=|\alpha|_{dd^{\#}\varphi}^{2}\det(\omega_{ij})^{-1},\) and similarly for \(|\tilde{\beta}|_{dd^{\#}\varphi}^{2}\). Let us reverse this argument: if \(\beta\) is a closed \((p,0)\)-form such that \(\int|\beta|_{dd^{\#}\varphi}^{2}e^{-\varphi}dx\wedge d\xi<+\infty\), we can consider the \((p,n)\) form \(\tilde{\beta}=\beta\wedge d\xi\), which will also be closed. Then
\[|\tilde{\beta}|_{dd^{\#}\varphi}^{2}=|\beta|_{dd^{\#}\varphi}^{2}\det(\omega_{ ij})^{-1},\]
and by the above we have that \(\tilde{\beta}\in L_{\varphi}^{2}\). Thus we can solve \(d\tilde{\alpha}=\tilde{\beta}\), for some \((p-1,n)\)- form \(\tilde{\alpha}\). But, \(\tilde{\alpha}=\alpha\wedge d\xi\) for some \((p-1,0)\)-form \(\alpha,\) and we must have \(d\alpha=\beta.\) Thus we arrive at:
**Theorem 5.14**.: _For a closed \((p,0)\)-form \(\beta\) such that \(\int_{\mathbb{R}^{n}}|\beta|_{dd^{\#}\varphi}^{2}e^{-\varphi}dx<\infty\), we can solve_
\[d\alpha=\beta,\]
_with_
(5.9) \[\int_{\mathbb{R}^{n}}|\alpha|_{dd^{\#}\varphi}^{2}e^{-\varphi}dx\leq\frac{1}{p }\int_{\mathbb{R}^{n}}|\beta|_{dd^{\#}\varphi}^{2}e^{-\varphi}dx.\]
It is interesting to note that when \(p=1\) the left-hand side of (5.9) does not depend on the Kähler metric \(dd^{\#}\varphi\): in that case \(\alpha\) is a \((0,0)\)-form, i.e. a function, so \(|\alpha|_{dd^{\#}\varphi}^{2}=|\alpha|^{2}\).
_Remark 5.15_.: Let us explain briefly how our setting is related to solving the \(\overline{\partial}-\)equation on a holomorphic line bundle \(L\) over a compact Kähler manifold \(X\) (see [4] or [2] for a detailed account). Let \(\varphi\) be a metric on \(L\), inducing a hermitian structure on \(L\), and let \(\nabla\) be the Chern connection of \(L\). Strictly speaking, \(\varphi\) is a collection of smooth functions \(\{\varphi_{i}\}\), each defined on an open set \(U_{i}\subset X\) corresponding to a trivialization of \(L\). If \(s\in H^{0}(X,L)\), then the norm of \(s\) is locally given by \(x\mapsto|s_{i}|^{2}(x)e^{-\varphi_{i}(x)}\), where \(s_{i}\) is a local representative of \(s\) using the trivialization of \(L\). We can thus perceive \(|s|^{2}e^{-\varphi}\) as a globally defined function on \(X\). As is well known, we can write the connection \(\nabla\) as \(\nabla=\nabla^{\prime}+\nabla^{\prime\prime}\), where \(\nabla^{\prime\prime}=\overline{\partial}\), and we can consider the duals of \(\nabla^{\prime}\) and \(\nabla^{\prime\prime}\) with respect to the metric \(\varphi\), and denote them by \((\nabla^{\prime})^{*}\) and \((\nabla^{\prime\prime})^{*}\). Then the classical Bochner-Kodaira-Nakano identity states that
\[\nabla^{\prime}(\nabla^{\prime})^{*}+(\nabla^{\prime})^{*}\nabla^{\prime}= \nabla^{\prime\prime}(\nabla^{\prime\prime})^{*}+(\nabla^{\prime\prime})^{*} \nabla^{\prime\prime}+[dd^{c}\varphi,\Lambda],\]
where one can show that \(dd^{c}\varphi\) is the curvature operator associated with \(\nabla.\) In the same way as in this article, this identity can be used to show the solvability of the \(\bar{\partial}-\)equation (the argument is basically due to Hörmander [6]):
**Theorem 5.16**.: (Hörmander, [6]) _Let \(\beta\) be a \((p,q)-\)form with values in \(L\) such that \(\overline{\partial}\beta=0\) and \(\int_{X}|\beta|^{2}e^{-\varphi}dV_{X}<+\infty\), and assume we can find a metric \(\varphi\) on \(L\) such that \(([dd^{c}\varphi,\Lambda]\alpha,\alpha)\geq c||\alpha||^{2}\) for each compactly supported \((p,q)\)-form \(\alpha\) with values in \(L\). Then we can solve the equation_
\[\overline{\partial}\alpha=\beta\]
_with_
\[\int_{X}|\alpha|^{2}e^{-\varphi}dV_{X}\leq\frac{1}{c}\int_{X}|\beta|^{2}e^{- \varphi}dV_{X},\]
_where \(dV_{X}\) is the volume element on \(X\)._
Now, let us instead consider the Laplace operator \(\Box_{\varphi}\): if \(k>n\) and \(\alpha\) is a \(k\)-form in \(L_{\varphi}^{2}\) we know that (5.8) holds, that is
\[||d\alpha||^{2}+||d^{*}\alpha||^{2}\geq(k-n)||\alpha||^{2},\]
under the assumption that the metric in question is \(dd^{\#}\varphi\). As we have already seen, the left-hand side is equal to \(\left\langle\Box_{\varphi}\alpha,\alpha\right\rangle,\) and by the Cauchy-Schwarz inequality
\[\left\langle\Box_{\varphi}\alpha,\alpha\right\rangle\leq||\Box_{\varphi}\alpha ||\cdot||\alpha||.\]
Thus
\[||\Box_{\varphi}\alpha||\geq(k-n)||\alpha||.\]
Since \(\Box_{\varphi}\) is self-adjoint we can apply Theorem 5.10 to obtain:
**Theorem 5.17**.: _With the notation and assumptions above, we can for each \(d-\)closed \(k\)-form \(\beta\) solve the equation_
\[\Box_{\varphi}\alpha=\beta,\]
_with_
\[||\alpha||_{\varphi}^{2}\leq\frac{1}{(k-n)^{2}}||\beta||_{\varphi}^{2}.\]
## 6. The Legendre transform
We recall the definition and some properties of the Legendre transform:
**Definition 6.1**.: Let \(f\) be a convex function on \(\mathbb{R}^{n}\). The Legendre transform of \(f\) is given by
(6.1) \[f^{*}(y)=\sup_{x\in\mathbb{R}^{n}}(x\cdot y-f(x)),\]
for \(y\in\mathbb{R}^{n}\).
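For orientation, here are two standard examples, both of which follow directly from the definition and are included only as an illustration. The quadratic weight is self-dual, and powers of the norm are exchanged with their conjugate powers:
\[f(x)=\frac{|x|^{2}}{2}\ \Longrightarrow\ f^{*}(y)=\frac{|y|^{2}}{2},\qquad f(x)=\frac{|x|^{r}}{r}\ (r>1)\ \Longrightarrow\ f^{*}(y)=\frac{|y|^{s}}{s},\quad\frac{1}{r}+\frac{1}{s}=1.\]
More generally, if \(f(x)=\frac{1}{2}\left\langle Ax,x\right\rangle\) for a symmetric positive definite matrix \(A\), then \(f^{*}(y)=\frac{1}{2}\left\langle A^{-1}y,y\right\rangle\), and \(\nabla f\) and \(\nabla f^{*}\) are inverse linear maps.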
The Legendre transform of a convex function is again convex, and \((f^{*})^{*}=f\). Let us assume that \(f\) is smooth. Then the supremum in (6.1) (if it is not equal to \(+\infty\), which we always shall assume in this section) is achieved at a point \(x(y)\) for which
\[y=\nabla f(x(y)),\]
where \(\nabla f\) denotes the gradient of \(f\); to see this, simply differentiate the expression inside the supremum. Thus
\[f^{*}(y)=x(y)\cdot y-f(x(y)).\]
By a small calculation this implies that if \(y=\nabla f(x(y))\) as above, then
\[\nabla f^{*}(y)=x.\]
Thus, if we consider the map \(\psi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), given by
\[\psi(x)=\nabla f(x),\]
then
\[\psi^{-1}(y)=\nabla f^{*}(y).\]
For any smooth map \(\psi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), the pullback \(\psi^{\star}\) of a \((p,0)\)-form is just the regular pullback of a differential \(p\)-form, and we extend \(\psi^{\star}\) to act on \((p,q)\)-forms by requiring it to be \(J\)-linear, that is
\[\psi^{\star}J(\alpha)=J(\psi^{\star}\alpha),\]
for any \((p,q)-\)form \(\alpha.\) This makes sense since, in coordinates, it amounts to
\[\psi^{\star}(\alpha_{IJ}(x)dx_{I}\wedge d\xi_{J})=\alpha_{IJ}(\psi(x))\psi^{ \star}dx_{I}\wedge J(\psi^{\star}dx_{J}),\]
and every \((p,q)\)-form can be written as a linear combination of such forms.
**Proposition 6.2**.: _Let \(\psi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be a diffeomorphism. Then any integrable \((n,n)-\)form \(\alpha\) satisfies,_
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\alpha=\int_{\mathbb{R}^{n}\times \mathbb{R}^{n}}\psi^{\star}(\alpha)/det(D\psi).\]
Proof.: This is a simple consequence of the usual change of variable formula for \(n\)-forms on \(\mathbb{R}^{n}\): let \(\alpha=\alpha_{0}(x)c_{n}dx\wedge d\xi\). Then \(\psi^{\star}\alpha(x)=(\alpha_{0}\circ\psi)(x)\cdot(det(D\psi)(x))^{2}c_{n}dx \wedge d\xi.\) Thus
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\psi^{\star}(\alpha)/det(D\psi)=\int_ {\mathbb{R}^{n}\times\mathbb{R}^{n}}(\alpha_{0}\circ\psi)(x)\cdot det(D\psi)(x )c_{n}dx\wedge d\xi=\]
\[=\int_{\mathbb{R}^{n}}(\alpha_{0}\circ\psi)(x)\cdot det(D\psi)(x)dx=\int_{ \mathbb{R}^{n}}\alpha_{0}(x)dx=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\alpha.\]
∎
Now, let \(\varphi\) be a smooth, strictly convex function, and associate to \(\varphi\) the \(\mathbb{R}-\)Kähler form
\[\omega^{\varphi}=dd^{\#}\varphi=\sum_{i,j=1}^{n}\varphi_{ij}dx_{i}\wedge d\xi_ {j},\]
with \(\varphi_{ij}=\frac{\partial^{2}\varphi}{\partial x_{i}\partial x_{j}}\). Let also \(\psi=\nabla\varphi^{*}\) which is a diffeomorphism. Since \(\nabla\varphi^{*}\) and \(\nabla\varphi\) are inverse to each other by the above, we have \((\varphi_{ij}^{*})=(\varphi^{ij})\) where \((\varphi^{ij})\) denotes the inverse of the matrix \((\varphi_{ij})\). Thus, since \(\psi^{\star}dx_{i}=\sum_{k=1}^{n}\varphi_{ik}^{*}dx_{k}\) and \(\psi^{\star}d\xi_{i}=\sum_{k=1}^{n}\varphi_{ik}^{*}d\xi_{k}\), we conclude that
\[\psi^{\star}\omega^{\varphi}=\sum_{i,j,k,l=1}^{n}\varphi_{ij}\varphi^{ik} \varphi^{jl}dx_{k}\wedge d\xi_{l}=\sum_{j,l=1}^{n}\varphi^{jl}dx_{j}\wedge d \xi_{l}.\]
Here we used that the matrix \((\varphi_{ij})\) is symmetric so that \(\sum_{i=1}^{n}\varphi_{ij}\varphi^{ik}=\delta_{jk}\). Thus, we obtain:
(6.2) \[\psi^{\star}\omega^{\varphi}=\omega^{\varphi^{*}}.\]
Recall from section 2 that the norm of a \((p,0)\)-form \(\alpha\) satisfies the relation
\[|\alpha|_{dd^{\#}\varphi}^{2}\omega_{n}^{\varphi}=c_{p}\,\alpha\wedge J(\alpha )\wedge\omega_{n-p}^{\varphi}.\]
Applying \(\psi^{\star}\) to both sides of this equality and using (6.2), give us
\[\psi^{\star}(|\alpha|_{dd^{\#}\varphi}^{2}\omega_{n}^{\varphi})=c_{p}\,\psi^{ \star}\alpha\wedge J(\psi^{\star}\alpha)\wedge\omega_{n-p}^{\varphi^{*}}.\]
This in turn is equivalent to
\[|\alpha|_{dd^{\#}\varphi}^{2}(\psi(x))\omega_{n}^{\varphi^{*}}=|\psi^{\star} \alpha|_{dd^{\#}\varphi^{*}}^{2}(x)\omega_{n}^{\varphi^{*}},\]
and we have proved:
**Proposition 6.3**.: _Let \(\alpha\) be a \((p,0)-\)form. Then, at any point \(x\),_
\[|\psi^{\star}\alpha|_{dd^{\#}\varphi^{*}}^{2}(x)=|\alpha|_{dd^{\#}\varphi}^{2} (\psi(x)).\]
Let us consider the integral
\[\int|\alpha|^{2}e^{-\varphi}dx\wedge d\xi;\]
we shall see how this integral transforms under the Legendre transform of \(\varphi\), under the additional assumption that \(\varphi\) is \(r-\)homogeneous, that is, when for each \(y\in\mathbb{R}^{n},\)
(6.3) \[\varphi(ty)=t^{r}\varphi(y),\]
for \(t\geq 0\). Differentiating the relation (6.3) with respect to \(t\) and evaluating at \(t=1\) tells us that
\[y\cdot(\nabla\varphi)(y)=r\varphi(y).\]
This result is sometimes referred to as Euler’s theorem on homogeneous functions. Moreover, we know that
\[\varphi^{*}(x)=y\cdot(\nabla\varphi)(y)-\varphi(y)\]
where \(y\) is such that \(x=\nabla\varphi(y)\). Thus,
\[\varphi^{*}(x)=(r-1)\varphi(y).\]
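As a sanity check of these relations (a standard computation included only for illustration), take \(\varphi(y)=\frac{|y|^{r}}{r}\) with \(r>1\), which is \(r\)-homogeneous and smooth away from the origin. Then \(\nabla\varphi(y)=|y|^{r-2}y\), so
\[y\cdot\nabla\varphi(y)=|y|^{r}=r\varphi(y),\]
in accordance with Euler's theorem, and if \(x=\nabla\varphi(y)\) then \(|x|=|y|^{r-1}\), so that
\[\varphi^{*}(x)=\frac{|x|^{s}}{s}=\frac{r-1}{r}|y|^{r}=(r-1)\varphi(y),\qquad s=\frac{r}{r-1},\]
as claimed above.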
Furthermore,
\[\psi^{\star}(dx\wedge d\xi)=det(\varphi_{ij}^{*})^{2}dx\wedge d\xi.\]
Thus, by Proposition 6.2 and Proposition 6.3, we obtain
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{\omega^{\varphi}}^{2}e^{- \varphi}c_{n}dx\wedge d\xi=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\psi^{ \star}(|\alpha|_{\omega^{\varphi}}^{2}e^{-\varphi}c_{n}dx\wedge d\xi)/det(D \psi)=\]
\[=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\psi^{\star}\alpha|_{\omega^{ \varphi^{*}}}^{2}e^{-\frac{\varphi^{*}}{r-1}}det(\varphi_{ij}^{*})c_{n}dx \wedge d\xi=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\psi^{\star}\alpha|_{ \omega^{\varphi^{*}}}^{2}e^{-\frac{\varphi^{*}(y)}{r-1}}\omega_{n}^{\varphi^{* }}.\]
Let us record this result as a proposition:
**Proposition 6.4**.: _Let \(\varphi\) be an \(r\)-homogeneous, convex and smooth function, and let \(\psi=\nabla\varphi^{*}\). Then any integrable \((p,0)\)-form \(\alpha\) satisfies_
(6.4) \[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{\omega^{\varphi}}^{2}e^{- \varphi}c_{n}dx\wedge d\xi=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\psi^{*} \alpha|_{\omega^{\varphi^{*}}}^{2}e^{-\frac{\varphi^{*}(y)}{r-1}}\omega_{n}^{ \varphi^{*}}.\]
Under these circumstances we can prove the following:
**Proposition 6.5**.: _Let \(\beta\in L_{\phi}^{2}(dd^{\#}\phi)\) be a \(d\)-closed \((p,0)\)-form, with \(\phi\) an \(r\)-homogeneous, convex and smooth function, with \(r>1\). Then we can solve_
\[d\alpha=\beta\]
_with the estimate_
(6.5) \[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{\omega^{\phi}}^{2}e^{-\phi} \omega_{n}^{\phi}\leq\frac{1}{p(r-1)}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}} |\beta|_{\omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi}.\]
Proof.: If \(s=\frac{r}{r-1}\), then it is well known that \(\phi^{*}\) is an \(s\)-homogeneous function. Thus, if we let
\[\varphi=(r-1)^{1-r}\phi^{*},\]
then \(\varphi\) is \(s-\)homogeneous, and satisfies \(\frac{\varphi^{*}}{r-1}=\phi\). To simplify notation we let \(\tau=(r-1)^{1-r}\). If \(\alpha\) is a \((p,0)\)-form, then
\[|\alpha|_{\omega^{\varphi}}^{2}=\tau^{-p}|\alpha|_{\omega^{\phi^{*}}}^{2},\]
\[|\psi^{\star}\alpha|_{\omega^{\varphi^{*}}}^{2}=(r-1)^{-p}|\psi^{\star}\alpha| _{\omega^{\phi}}^{2},\]
\[\omega_{n}^{\varphi^{*}}=(r-1)^{n}\omega_{n}^{\phi};\]
thus, with \(\psi=\nabla\varphi^{*}\) formula (6.4) transforms into
(6.6) \[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{\omega^{\phi^{*}}}^{2}e^{- \tau\phi^{*}}c_{n}dx\wedge d\xi=(r-1)^{n-pr}\int_{\mathbb{R}^{n}\times\mathbb{ R}^{n}}|\psi^{\star}\alpha|_{\omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi}.\]
Now let us consider the form \(\gamma=(\psi^{-1})^{*}\beta\). Then, since \(\psi^{*}(\psi^{-1})^{*}\beta=\beta,\) we see that \(\psi^{*}\gamma\in L_{\phi}^{2}(dd^{\#}\phi)\), and by the above the form \(\gamma\) satisfies
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\gamma|_{\omega^{\phi^{*}}}^{2}e^{- \tau\phi^{*}}c_{n}dx\wedge d\xi<+\infty.\]
Moreover, \(\gamma\) is \(d\)-closed, since \(d\psi^{*}=\psi^{*}d\). Thus, by Theorem 5.14 we can find a \((p-1,0)\)- form \(\eta\) such that
\[d\eta=\gamma\]
and
\[\tau^{-(p-1)}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\eta|_{\omega^{\phi^{*} }}^{2}e^{-\tau\phi^{*}}c_{n}dx\wedge d\xi\leq\]
\[\leq\frac{1}{p}\tau^{-p}\int|\gamma|_{\omega^{\phi^{*}}}^{2}e^{-\tau\phi^{*}}c _{n}dx\wedge d\xi,\]
where we used that
\[|\alpha|_{\omega^{\tau\phi^{*}}}^{2}=\tau^{-p}|\alpha|_{\omega^{\phi^{*}}}^{2},\]
for any \((p,0)-\)form \(\alpha.\) If we apply formula (6.6) to this inequality, with \(\alpha=\psi^{\star}\eta\), then since \(\psi^{\star}\gamma=\beta\) we see that
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{\omega^{\phi}}^{2}e^{-\phi} \omega_{n}^{\phi}\leq\frac{1}{p(r-1)}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}} |\beta|_{\omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi}.\]
Also,
\[d\alpha=d\psi^{\star}\eta=\psi^{\star}\gamma=\beta,\]
which concludes the proof. ∎
While the estimate of the above proposition is similar to that of Theorem 5.14, it does not seem to follow from the previous formalism in a direct way.
From section 5 we know that in order to obtain existence theorems for the \(d-\)operator we need to examine the commutator term \([dd^{\#}\varphi,\Lambda]\), and we used in that section the choice \(\omega=dd^{\#}\varphi\) for \(\varphi\) a smooth, strictly convex function, to obtain \([dd^{\#}\varphi,\Lambda]\alpha=(k-n)\alpha\), for \(\alpha\) a \(k\)-form. Let us instead assume that \(\phi\) is a smooth, strictly _concave_ function. Then \(-dd^{\#}\phi\) is a closed positive \((1,1)\)- form and we can let \(\omega^{\phi}=-dd^{\#}\phi\) be our Kähler form. In this situation we see that
\[[dd^{\#}\phi,\Lambda]\alpha=(n-k)\alpha,\]
and thus, we have
\[||d^{*}\alpha||_{\phi}^{2}+||d\alpha||_{\phi}^{2}\geq(n-k)||\alpha||_{\phi}^{2},\]
for every smooth, compactly supported form \(\alpha\), where \(||\alpha||_{\phi}^{2}=\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{ \omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi},\) and the dual of \(d\) is with respect to this norm. Applying Theorem 5.10 we obtain the following:
**Proposition 6.6**.: _Let \(\phi\) be a smooth, strictly concave function. Then, for every closed \((p,0)\)-form \(\beta\in L_{\phi}^{2}\) with \(1\leq p<n\), we can solve the equation_
\[d\alpha=\beta,\]
_with_
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha|_{\omega^{\phi}}^{2}e^{-\phi} \omega_{n}^{\phi}\leq\frac{1}{n-p}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}| \beta|_{\omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi}.\]
It is interesting to compare this result with Proposition 6.5: let \(\varphi\) be \(2-\)homogeneous and let \(\beta\) be a closed \((p,0)\)-form with \(\beta\in L_{\varphi}^{2}\cap L_{\phi}^{2}.\) Then we can find solutions \(\alpha_{1}\) and \(\alpha_{2}\) to \(d\alpha=\beta\) such that
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha_{1}|_{\omega^{\varphi}}^{2}e^{-\varphi}\omega_{n}^{\varphi}\leq\frac{1}{p}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\beta|_{\omega^{\varphi}}^{2}e^{-\varphi}\omega_{n}^{\varphi}\]
and
\[\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\alpha_{2}|_{\omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi}\leq\frac{1}{n-p}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}|\beta|_{\omega^{\phi}}^{2}e^{-\phi}\omega_{n}^{\phi}.\]
Thus, we can solve the equation \(d\alpha=\beta\) with fundamentally different estimates on the solutions: in one case the weight \(\varphi\) is convex and in the other \(\phi\) is concave.
## References
* [1] B. Berndtsson. Prékopa’s theorem and Kiselman’s minimum principle for plurisubharmonic functions. _Math. Ann. 312, no. 4_, 1998.
* [2] B. Berndtsson. An introduction to things \(\overline{\partial}\). _Lecture notes, Park City Mathematics Institute/Institute for Advanced Study_, 2008.
* [3] D. Cordero-Erausquin, M. Fradelizi, and B. Maurey. The (B) conjecture for the Gaussian measure of dilates of symmetric convex sets and related problems. _Journal of Functional Analysis, Volume 214, Issue 2_, 2004.
* [4] J.-P. Demailly. Introduction to Hodge theory. _SMF/AMS Texts and Monographs, Volume 8_, 1996.
* [5] K. Friedrichs. The identity of weak and strong extensions of differential operators. _Trans. Amer. Math. Soc., 55, 132-151_, 1944.
* [6] L. Hörmander. \(L^{2}\) estimates and existence theorems for the \(\bar{\partial}\) operator. _Acta Mathematica, Volume 113, Number 1, 89-152_, 1965.
* [7] A. Lagerberg. Super currents and tropical geometry. _Mathematische Zeitschrift, 10.1007/s00209-010-0837-8_, 2011.
* [8] S. Nakano. On complex analytic vector bundles. _J. Math. Soc. Japan, Number 7, 1-12_, 1955.
* [9] A. Weil. Introduction à l'étude des variétés kählériennes. _Publications de l'Institut de Mathématique de l'Université de Nancago, VI. Actualités Sci. Ind. no. 1267, Hermann, Paris_, 1958.
|
1705.02758 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 38550,
"num_imgs": 5,
"llama3_tokens_count": 11258
} | [
"content_image/1705.02758/x1.png",
"content_image/1705.02758/x2.png",
"content_image/1705.02758/x3.png",
"content_image/1705.02758/x9.png",
"content_image/1705.02758/x10.png"
] | # Deep Descriptor Transforming for Image Co-Localization†
Xiu-Shen Wei\({}^{1}\), Chen-Lin Zhang\({}^{1}\), Yao Li\({}^{2}\), Chen-Wei Xie\({}^{1}\), Jianxin Wu\({}^{1}\), Chunhua Shen\({}^{2}\), Zhi-Hua Zhou\({}^{1}\)
\({}^{1}\)National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
\({}^{2}\)The University of Adelaide, Adelaide, Australia
{weixs,zhangcl,xiecw,wujx,zhouzh}@lamda.nju.edu.cn, {yao.li01,chunhua.shen}@adelaide.edu.au
###### Abstract
Reusable model design becomes desirable with the rapid expansion of machine learning applications. In this paper, we focus on the reusability of pre-trained deep convolutional models. Specifically, different from treating pre-trained models as feature extractors, we reveal more treasures beneath convolutional layers, i.e., the convolutional activations could act as a detector for the common object in the image co-localization problem. We propose a simple but effective method, named Deep Descriptor Transforming (DDT), for evaluating the correlations of descriptors and then obtaining the category-consistent regions, which can accurately locate the common object in a set of images. Empirical studies validate the effectiveness of the proposed DDT method. On benchmark image co-localization datasets, DDT consistently outperforms existing state-of-the-art methods by a large margin. Moreover, DDT also demonstrates good generalization ability for unseen categories and robustness for dealing with noisy data.
## 1 Introduction
Model reuse [32] attempts to construct a model by utilizing existing available models, mostly trained for other tasks, rather than building a model from scratch. Particularly in deep learning, since deep convolutional neural networks have achieved great success in various tasks involving images, videos, texts and more, several studies have the flavor of reusing deep models pre-trained on ImageNet [18].
In machine learning, the Fixed Model Reuse scheme [27] was proposed recently for using the sophisticated fixed model/features from a well-trained deep model, rather than transferring with pre-trained weights. In computer vision, pre-trained models on ImageNet have also been successfully adapted to various uses, e.g., as universal feature extractors [25, 13], object proposal generators [7], etc. In particular, [26] proposed the SCDA method to utilize pre-trained models for both localizing a single fine-grained object (e.g., birds of different species) in each image and retrieving fine-grained images of the same classes/species in an unsupervised fashion.
<figure><img src="content_image/1705.02758/x1.png"><figcaption>Figure 1: Pipeline of the proposed DDT method for image co-localization. Inthis instance, the goal is to localize the _airplane_ within each image. Notethat, there might be few noisy images in the image set. (Best viewed incolor.)</figcaption></figure>
In this paper, we reveal that the convolutional activations can be a detector for the _common object_ in image co-localization. Image co-localization is a fundamental computer vision problem, which simultaneously localizes objects of the same category across a set of distinct images. Specifically, we propose a simple but effective method named DDT (Deep Descriptor Transforming) for image co-localization. In DDT, the deep convolutional descriptors extracted from pre-trained models are transformed into a new space, where it can evaluate the correlations between these descriptors. By leveraging the correlations among the image set, the common object inside these images can be located automatically without additional supervision signals. The pipeline of DDT is shown in Fig. 1. To our best knowledge, this is _the first work_ to demonstrate the possibility of convolutional activations/descriptors in pre-trained models _being able to act as a detector for the common object._
Experimental results show that DDT significantly outperforms existing state-of-the-art methods, including image co-localization and weakly supervised object localization, in both the deep learning and hand-crafted feature scenarios. Besides, we empirically show that DDT has a good generalization ability for unseen images apart from ImageNet. More importantly, the proposed method is robust, because DDT can also detect the noisy images which do not contain the common object.
## 2 Related work
### CNN model reuse
Reusability has been emphasized by [32] as a crucial characteristic of the new concept of _learnware_. It would be ideal if models can be reused in scenarios that are very different from their original training scenarios. Particularly, with the breakthrough in image classification using Convolutional Neural Networks (CNN), pre-trained CNN models trained for one task (e.g., recognition) have also been applied to domains different from their original purposes (e.g., for describing texture or finding object proposals [7]). However, such adaptations of pre-trained models still require further annotations in the new domain (e.g., image labels). In contrast, DDT deals with the image co-localization problem in an unsupervised setting.
Coincidentally, several recent works also shed light on CNN pre-trained model reuse in the unsupervised setting, e.g., SCDA [26]. SCDA is proposed for handling the fine-grained image retrieval task, where it uses pre-trained models (from ImageNet, which is not fine-grained) to locate main objects in fine-grained images. It is the most related work to ours, even though SCDA is not for image co-localization. Different from our DDT, SCDA assumes that each image contains only one object of interest and that objects from other categories do not exist. Thus, SCDA locates the object using cues from this _single_ image assumption. Apparently, it cannot work well for images containing diverse objects (cf. Table 2 and Table 3), and also cannot handle data noise (cf. Sec. 4.5).
### Image co-localization
Image co-localization is a fundamental problem in computer vision, which requires discovering the common object emerging in only positive sets of example images (without any negative examples or further supervision). Image co-localization shares some similarities with image co-segmentation [31, 12, 10]. Instead of generating a precise segmentation of the related objects in each image, co-localization methods aim to return a bounding box around the object. Moreover, co-segmentation has a strong assumption that _every_ image contains the object of interest, and hence is unable to handle noisy images.
Additionally, co-localization is also related to weakly supervised object localization (WSOL) [30, 1, 24, 21]. But the key difference between them is that WSOL requires manually-labeled negative images whereas co-localization does not. Thus, WSOL methods could achieve better localization performance than co-localization methods. However, our DDT performs comparably with state-of-the-art WSOL methods and even outperforms them (cf. Table 4).
Recently, there are also several co-localization methods based on pre-trained models, e.g., [13, 24]. But these methods just treated pre-trained models as simple feature extractors to extract the fully connected representations, which did not leverage the original correlations between deep descriptors among convolutional layers. Moreover, these methods also needed object proposals as a part of their object discovery, which made them highly dependent on the quality of the object proposals. In addition, almost all the previous co-localization methods cannot handle noisy data, except for [22].
Compared with previous works, our DDT is unsupervised, without utilizing bounding boxes, additional image labels or redundant object proposals. Images only need one forward run through a pre-trained model. Then, efficient deep descriptor transforming is employed for obtaining the category-consistent image regions. DDT is very easy to implement, and surprisingly has good generalization ability and robustness.
## 3 The proposed method
### Preliminary
The following notations are used in the rest of this paper. The term “feature map” indicates the convolution results of one channel; the term “activations” indicates feature maps of all channels in a convolution layer; and the term “descriptor” indicates the \(d\)-dimensional component vector of activations.
Given an input image \(I\) of size \(H\times W\), the activations of a convolution layer are formulated as an order-3 tensor \(T\) with \(h\times w\times d\) elements. \(T\) can be considered as having \(h\times w\) cells and each cell contains one \(d\)-dimensional deep descriptor. For the \(n\)-th image, we denote its corresponding deep descriptors as \(X^{n}=\left\{\bm{x}^{n}_{\left(i,j\right)}\in\mathcal{R}^{d}\right\}\), where \(\left(i,j\right)\) is a particular cell (\(i\in\left\{1,\ldots,h\right\},j\in\left\{1,\ldots,w\right\}\)) and \(n\in\left\{1,\ldots,N\right\}\).
### SCDA recap
Since SCDA [26] is the most related work to ours, we hereby present a recap of this method. SCDA is proposed for dealing with the fine-grained image retrieval problem. It employs pre-trained models to select the meaningful deep descriptors by localizing the main object in fine-grained images in an unsupervised manner. In SCDA, it is assumed that each image contains only one main object of interest and no objects from other categories. Thus, the object localization strategy is based on the activation tensor of a _single_ image.
Concretely, for an image, the activation tensor is added up through the depth direction. Thus, the \(h\times w\times d\) 3-D tensor becomes an \(h\times w\) 2-D matrix, which is called the “aggregation map” in SCDA. Then, the mean value \(\bar{a}\) of the aggregation map is regarded as the threshold for localizing the object. If the activation response in the position \(\left(i,j\right)\) of the aggregation map is larger than \(\bar{a}\), it indicates the object might appear in that position.
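For concreteness, the following is a minimal NumPy sketch of this aggregation-and-thresholding step. It assumes the activations of one image are already available as an \(h\times w\times d\) array; the function name `scda_mask` is ours, and the sketch only mirrors the description above, not the original SCDA implementation.

```python
import numpy as np

def scda_mask(activations):
    """SCDA-style localization mask from one image's activations.

    activations: array of shape (h, w, d) -- the order-3 tensor T.
    Returns a boolean (h, w) mask marking positions whose aggregated
    response exceeds the mean of the aggregation map.
    """
    # Sum over the depth (channel) direction -> h x w "aggregation map".
    aggregation_map = activations.sum(axis=-1)
    # The mean value of the aggregation map serves as the threshold.
    threshold = aggregation_map.mean()
    return aggregation_map > threshold
```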
### Deep descriptor transforming (DDT)
What distinguishes DDT from SCDA is that we can leverage the correlations beneath the whole _image set_, instead of a _single_ image. Additionally, different from weakly supervised object localization, we have neither the image labels nor the negative image sets used in WSOL, so the only information we can use comes from the pre-trained models. Here, we transform the deep descriptors in convolutional layers to mine the hidden information for co-localizing common objects.
Principal component analysis (PCA) [15] is a statistical procedure, which uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of linearly uncorrelated variables (i.e., the principal components). This transformation is defined in such a way that the first principal component has the largest possible variance, and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to all the preceding components.
<figure><img src="content_image/1705.02758/x2.png"><figcaption>Figure 2: Examples from twelve randomly sampled classes of _VOC 2007_. Thefirst column of each subfigure are produced by SCDA, the second column are byour DDT. The red vertical lines in the histogram plots indicate thecorresponding thresholds for localizing objects. The selected regions inimages are highlighted in red. (Best viewed in color and zoomed in.)</figcaption></figure>
PCA is widely used in machine learning and computer vision for dimension reduction [2, 8, 28, 5], noise reduction [29, 14] and so on. Specifically, in this paper, we utilize PCA as projection directions for transforming these deep descriptors \(\{\bm{x}_{\left(i,j\right)}\}\) to evaluate their correlations. Then, on each projection direction, the corresponding principal component’s values are treated as the cues for image co-localization, especially the first principal component. Thanks to the property of this kind of transforming, DDT is also able to handle data noise.
In DDT, for a set of \(N\) images containing objects from the same category, we first collect the corresponding convolutional descriptors (\(X^{1},\ldots,X^{N}\)) by feeding them into a pre-trained CNN model. Then, the mean vector of all the descriptors is calculated by:
\[\bar{\bm{x}}=\frac{1}{K}\sum_{n}\sum_{i,j}\bm{x}_{\left(i,j\right)}^{n}\,,\] (1)
where \(K=h\times w\times N\). Note that, here we assume each image has the same number of deep descriptors (i.e., \(h\times w\)) for presentation clarity. Our proposed method, however, can handle input images with arbitrary resolutions.
Then, after obtaining the covariance matrix:
\[{\rm{Cov}}(\bm{x})=\frac{1}{K}\sum_{n}\sum_{i,j}(\bm{x}_{\left(i,j\right)}^{n} -\bar{\bm{x}})(\bm{x}_{\left(i,j\right)}^{n}-\bar{\bm{x}})^{\top}\,,\] (2)
we can get the eigenvectors \(\bm{\xi}_{1},\ldots,\bm{\xi}_{d}\) of \({\rm Cov}(\bm{x})\) which correspond to the sorted eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{d}\geq 0\).
As aforementioned, since the first principal component has the largest variance, we take the eigenvector \(\bm{\xi}_{1}\) corresponding to the largest eigenvalue as the main projection direction. For the deep descriptor at a particular position \(\left(i,j\right)\) of an image, its first principal component \(p^{1}\) is calculated as follows:
\[p_{(i,j)}^{1}=\bm{\xi}^{\top}_{1}\left(\bm{x}_{\left(i,j\right)}-\bar{\bm{x}} \right)\,.\] (3)
According to their spatial locations, all \(p_{(i,j)}^{1}\) from an image are combined into a 2-D matrix whose dimensions are \(h\times w\). We call that matrix as _indicator matrix_:
\[P^{1}=\left[\begin{array}[]{cccc}p_{(1,1)}^{1}&p_{(1,2)}^{1}&\ldots&p_{(1,w)}^ {1}\\ p_{(2,1)}^{1}&p_{(2,2)}^{1}&\ldots&p_{(2,w)}^{1}\\ \vdots&\vdots&\ddots&\vdots\\ p_{(h,1)}^{1}&p_{(h,2)}^{1}&\ldots&p_{(h,w)}^{1}\end{array}\right]\,.\] (4)
\(P^{1}\) contains positive (negative) values which can reflect the positive (negative) correlations of these deep descriptors. The larger the absolute value is, the higher the positive (negative) correlation will be. Because \(\bm{\xi}_{1}\) is obtained from all \(N\) images, the positive correlation indicates the _common characteristic_ across the \(N\) images. Specifically, in the image co-localization scenario, this positive correlation indeed indicates the _common object_ inside these images.
Therefore, the value zero can be used as a natural threshold for dividing \(P^{1}\) of one image into two parts: one part has positive values indicating the common object, and the other part has negative values representing the background and objects that rarely appear. In addition, if \(P^{1}\) of an image has no positive value at all, it indicates that no common object exists in that image, which can be used for detecting noisy images. In practice, \(P^{1}\) is resized by nearest-neighbor interpolation, such that its size is the same as that of the input image. Meanwhile, we collect the largest connected component of the positive regions of \(P^{1}\) (as is done in [26]). Based on these positive correlation values and the zero threshold, the minimum rectangular bounding box which contains the largest connected component of positive regions is returned as our object co-localization prediction.
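To make the pipeline above concrete, the following is a minimal NumPy/SciPy sketch of the transforming-and-localization steps (Eqs. (1)-(4) plus the zero-threshold and largest-connected-component post-processing). It assumes the convolutional descriptors have already been extracted as a list of \(h_{n}\times w_{n}\times d\) arrays, one per image; the function and variable names (`ddt_indicator_matrices`, `ddt_box`, `descriptor_maps`) are ours, and the sketch is only illustrative rather than the original implementation.

```python
import numpy as np
from scipy import ndimage

def ddt_indicator_matrices(descriptor_maps):
    """descriptor_maps: list of (h_n, w_n, d) arrays, one per image.
    Returns one indicator matrix P^1 of shape (h_n, w_n) per image."""
    d = descriptor_maps[0].shape[-1]
    # Stack the descriptors of the whole image set: (K, d) with K = sum_n h_n * w_n.
    X = np.concatenate([m.reshape(-1, d) for m in descriptor_maps], axis=0)
    x_mean = X.mean(axis=0)                    # Eq. (1): mean descriptor
    Xc = X - x_mean
    cov = Xc.T @ Xc / Xc.shape[0]              # Eq. (2): covariance matrix
    _, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    xi1 = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
    # Note: the sign of xi1 is only determined up to +/-1; in practice it may need
    # to be flipped so that the common object corresponds to positive values.
    # Eqs. (3)-(4): project every descriptor onto xi1, kept in spatial layout.
    return [(m - x_mean) @ xi1 for m in descriptor_maps]

def ddt_box(P1):
    """Bounding box (feature-map coordinates) of the largest connected positive
    region of P^1; returns None when no positive value exists (noisy image)."""
    positive = P1 > 0                          # zero is the natural threshold
    if not positive.any():
        return None
    labels, num = ndimage.label(positive)
    sizes = ndimage.sum(positive, labels, index=range(1, num + 1))
    largest = labels == (1 + int(np.argmax(sizes)))
    rows, cols = np.where(largest)
    # (top, left, bottom, right); rescale by the network stride (or resize P^1
    # to the image resolution with nearest-neighbor interpolation) in practice.
    return rows.min(), cols.min(), rows.max(), cols.max()
```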
### Discussions and analyses
In this section, we investigate the effectiveness of DDT by comparing it with SCDA.
As shown in Fig. 2, the object localization regions of SCDA and DDT are highlighted in red. Because SCDA only considers the information from a single image, in Fig. 2 (a), “bike”, “person” and even “guide-board” are all detected as main objects. Furthermore, we normalize the values (all positive) of the aggregation map of SCDA into the scale of \(\left[0,1\right]\), and calculate the mean value (which is taken as the object localization threshold in SCDA). The histogram of the normalized values in the aggregation map is also shown in that figure. The red vertical line corresponds to the threshold. We can find that, beyond the threshold, there are still many values. This explains why SCDA highlights more regions.
DDT, in contrast, leverages the whole image set to transform these deep descriptors into \(P^{1}\). Thus, for the _bicycle_ class, DDT can accurately locate the “bicycle” object. The histogram is also drawn. But \(P^{1}\) has both positive and negative values. We normalize \(P^{1}\) into the \(\left[-1,1\right]\) scale this time. Apparently, few values are larger than the DDT threshold (i.e., 0). More importantly, many values are close to \(-1\), which indicates a strong negative correlation. This observation validates the effectiveness of DDT in image co-localization. As another example shown in Fig. 2 (b), SCDA even wrongly locates “person” in the image belonging to the _diningtable_ class, whereas DDT correctly and accurately locates the “diningtable” image region. In Fig. 2, more examples are presented. In that figure, some failure cases can also be found, e.g., the _chair_ class in Fig. 2 (g).
In addition, the normalized \(P^{1}\) can also be used as localization probability scores. Combining it with conditional random field techniques might produce more accurate object boundaries. Thus, DDT can be modified slightly in that way, and then address the co-segmentation problem. More importantly, different from other co-segmentation methods, DDT can detect noisy images while other methods cannot.
## 4 Experiments
In this section, we first introduce the evaluation metric and datasets used in image co-localization. Then, we compare the empirical results of our DDT with other state-of-the-arts on these datasets. The computational cost of DDT is reported too. Moreover, the results in Sec. 4.4 and Sec. 4.5 illustrate the generalization ability and robustness of the proposed method. Finally, our further study in Sec. 4.6 reveals DDT might deal with part-based image co-localization, which is a novel and challenging problem.
In our experiments, the images keep their original resolutions. For the pre-trained deep model, the publicly available VGG-19 model [20] is employed to extract deep convolution descriptors from the last convolution layer (before \({\rm pool}_{5}\)). We use the open-source library MatConvNet [23] for conducting experiments. All the experiments are run on a computer with an Intel Xeon E5-2660 v3, 500G main memory, and a K80 GPU.
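For readers who want to reproduce the descriptor extraction without MatConvNet, the snippet below shows one possible way to obtain the \(h\times w\times d\) activations of the last VGG-19 convolution block with torchvision; this is our own illustrative substitute rather than the original MatConvNet pipeline, and the layer indexing (as well as `pretrained=True` vs. the newer `weights=` argument) should be double-checked against the specific torchvision version.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# VGG-19 convolutional trunk; dropping the final max-pooling layer keeps the
# activations of the last convolution block (the layer used before pool5).
vgg = models.vgg19(pretrained=True).features[:-1].eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_descriptors(image_path):
    """Return an (h, w, d) array of deep descriptors for one image,
    keeping the image at its original resolution."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feat = vgg(preprocess(img).unsqueeze(0))      # shape (1, d, h, w)
    return feat.squeeze(0).permute(1, 2, 0).numpy()   # shape (h, w, d)
```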
### Evaluation metric and datasets
Following previous image co-localization works [13, 3, 22], we take the correct localization (CorLoc) metric for evaluating the proposed method. CorLoc is defined as the percentage of images correctly localized according to the PASCAL-criterion [6]: \(\frac{{\rm area}(B_{\rm p}\cap B_{\rm gt})}{{\rm area}(B_{\rm p}\cup B_{\rm gt })}>0.5\), where \(B_{\rm p}\) is the predicted bounding box and \(B_{\rm gt}\) is the ground-truth bounding box. All CorLoc results are reported in percentages.
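A minimal implementation of this criterion is given below; boxes are assumed to be axis-aligned and given as (left, top, right, bottom) in pixels, which is our own convention for this sketch.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (left, top, right, bottom)."""
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def corloc(predicted, ground_truth):
    """Percentage of images localized correctly under the PASCAL criterion (IoU > 0.5)."""
    hits = [iou(p, g) > 0.5 for p, g in zip(predicted, ground_truth)]
    return 100.0 * sum(hits) / len(hits)
```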
Our experiments are conducted on four challenging datasets commonly used in image co-localization, i.e., the _Object Discovery_ dataset [17], the _PASCAL VOC 2007_ / _VOC 2012_ dataset [6] and the _ImageNet Subsets_ [13].
Methods | _Airplane_ | _Car_ | _Horse_ | Mean
---|---|---|---|---
[Joulin et al.2010] | 32.93 | 66.29 | 54.84 | 51.35
[Joulin et al.2012] | 57.32 | 64.04 | 52.69 | 58.02
[Rubinstein et al.2013] | 74.39 | 87.64 | 63.44 | 75.16
[Tang et al.2014] | 71.95 | 93.26 | 64.52 | 76.58
SCDA | 87.80 | 86.52 | 75.37 | 83.20
[Cho et al.2015] | 82.93 | 94.38 | 75.27 | 84.19
Our DDT | 91.46 | 95.51 | 77.42 | 88.13
Table 1: Comparisons of CorLoc on _Object Discovery_.
For experiments on the _VOC_ datasets, we follow [3, 13, 11] to use all images in the _trainval_ set (excluding images that only contain object instances annotated as _difficult_ or _truncated_). For _Object Discovery_, we use the 100-image subset following [17, 3] in order to make an appropriate comparison with other methods.
In addition, _Object Discovery_ has 18%, 11% and 7% noisy images in the _Airplane_, _Car_ and _Horse_ categories, respectively. These noisy images contain no object belonging to their category, as the third image shown in Fig. 1. Particularly, in Sec. 4.5, we quantitatively measure the ability of our proposed DDT to identify these noisy images.
To further investigate the generalization ability of DDT, _ImageNet Subsets_ [13] are used, which contain six subsets/categories. These subsets are held-out categories from the 1000-label ILSVRC classification [18]. That is to say, these subsets are “unseen” by pre-trained CNN models. Experimental results in Sec. 4.4 show that DDT is insensitive to the object category.
Methods | _aero_ | _bike_ | _bird_ | _boat_ | _bottle_ | _bus_ | _car_ | _cat_ | _chair_ | _cow_ | _table_ | _dog_ | _horse_ | _mbike_ | _person_ | _plant_ | _sheep_ | _sofa_ | _train_ | _tv_ | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
[Joulin et al.2014] | 32.8 | 17.3 | 20.9 | 18.2 | 4.5 | 26.9 | 32.7 | 41.0 | 5.8 | 29.1 | 34.5 | 31.6 | 26.1 | 40.4 | 17.9 | 11.8 | 25.0 | 27.5 | 35.6 | 12.1 | 24.6
SCDA | 54.4 | 27.2 | 43.4 | 13.5 | 2.8 | 39.3 | 44.5 | 48.0 | 6.2 | 32.0 | 16.3 | 49.8 | 51.5 | 49.7 | 7.7 | 6.1 | 22.1 | 22.6 | 46.4 | 6.1 | 29.5
[Cho et al.2015] | 50.3 | 42.8 | 30.0 | 18.5 | 4.0 | 62.3 | 64.5 | 42.5 | 8.6 | 49.0 | 12.2 | 44.0 | 64.1 | 57.2 | 15.3 | 9.4 | 30.9 | 34.0 | 61.6 | 31.5 | 36.6
[Li et al.2016] | 73.1 | 45.0 | 43.4 | 27.7 | 6.8 | 53.3 | 58.3 | 45.0 | 6.2 | 48.0 | 14.3 | 47.3 | 69.4 | 66.8 | 24.3 | 12.8 | 51.5 | 25.5 | 65.2 | 16.8 | 40.0
Our DDT | 67.3 | 63.3 | 61.3 | 22.7 | 8.5 | 64.8 | 57.0 | 80.5 | 9.4 | 49.0 | 22.5 | 72.6 | 73.8 | 69.0 | 7.2 | 15.0 | 35.3 | 54.7 | 75.0 | 29.4 | 46.9
Table 2: Comparisons of the CorLoc metric with state-of-the-art co-localization methods on _VOC 2007_.
Methods | _aero_ | _bike_ | _bird_ | _boat_ | _bottle_ | _bus_ | _car_ | _cat_ | _chair_ | _cow_ | _table_ | _dog_ | _horse_ | _mbike_ | _person_ | _plant_ | _sheep_ | _sofa_ | _train_ | _tv_ | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
SCDA | 60.8 | 41.7 | 38.6 | 21.8 | 7.4 | 67.6 | 38.8 | 57.4 | 16.0 | 34.0 | 23.9 | 53.8 | 47.3 | 54.8 | 7.9 | 9.9 | 25.3 | 23.2 | 50.2 | 10.1 | 34.5
[Cho et al.2015] | 57.0 | 41.2 | 36.0 | 26.9 | 5.0 | 81.1 | 54.6 | 50.9 | 18.2 | 54.0 | 31.2 | 44.9 | 61.8 | 48.0 | 13.0 | 11.7 | 51.4 | 45.3 | 64.6 | 39.2 | 41.8
[Li et al.2016] | 65.7 | 57.8 | 47.9 | 28.9 | 6.0 | 74.9 | 48.4 | 48.4 | 14.6 | 54.4 | 23.9 | 50.2 | 69.9 | 68.4 | 24.0 | 14.2 | 52.7 | 30.9 | 72.4 | 21.6 | 43.8
Our DDT | 76.7 | 67.1 | 57.9 | 30.5 | 13.0 | 81.9 | 48.3 | 75.7 | 18.4 | 48.8 | 27.5 | 71.8 | 66.8 | 73.7 | 6.1 | 18.5 | 38.0 | 54.7 | 78.6 | 34.6 | 49.4
Table 3: Comparisons of the CorLoc metric with state-of-the-art co-localization methods on _VOC 2012_.
### Comparisons with state-of-the-arts
#### 4.2.1 Comparisons to image co-localization methods
We first compare the results of DDT to state-of-the-art methods (including SCDA) on _Object Discovery_ in Table 1. For SCDA, we also use VGG-19 to extract the convolution descriptors and perform experiments. As shown in that table, DDT outperforms the other methods by about 4% in the mean CorLoc metric. Especially for the _airplane_ class, it is about 10% higher than that of [3]. In addition, note that the images of each category in this dataset contain only one object; thus, SCDA can perform well.
The images of _VOC 2007_ and _2012_ contain diverse objects per image, which makes these datasets more challenging than _Object Discovery_. The comparisons of the CorLoc metric on these two datasets are reported in Table 2 and Table 3, respectively. It is clear that on average our DDT outperforms the previous state-of-the-art methods (based on deep learning) by a large margin on both datasets. Moreover, DDT works well on localizing small common objects, e.g., “bottle” and “chair”. In addition, because most images of these datasets contain multiple objects, which violates SCDA’s assumption, SCDA performs poorly in this complicated setting. For fair comparisons, we also use VGG-19 to extract the fully connected representations of the object proposals in [13], and then perform the remaining processes of their method (the source codes are provided by the authors). As aforementioned, due to the high dependence on the quality of object proposals, their mean CorLoc metric with VGG-19 is 41.9% and 45.6% on _VOC 2007_ and _2012_, respectively. The improvements are limited, and the performance is still significantly worse than ours.
#### 4.2.2 Comparisons to weakly supervised localization methods
To further verify the effectiveness of DDT, we also compare it with some state-of-the-art methods for weakly supervised object localization. Table 4 illustrates these empirical results on _VOC 2007_. Particularly, DDT achieves 46.9% on average, which is higher than most WSOL methods in the literature. But it still has a small gap (0.8% lower) with that of [24], which is also a deep learning based approach. This is understandable as we do _not_ use any negative data for co-localization. Meanwhile, our DDT can be easily extended to handle negative data and thus perform WSOL. Moreover, DDT can handle noisy data (cf. Sec. 4.5), whereas existing WSOL methods are not designed to deal with noise.
Methods | Neg. | _aero_ | _bike_ | _bird_ | _boat_ | _bottle_ | _bus_ | _car_ | _cat_ | _chair_ | _cow_ | _table_ | _dog_ | _horse_ | _mbike_ | _person_ | _plant_ | _sheep_ | _sofa_ | _train_ | _tv_ | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
[Shi et al.2013] | ✓ | 67.3 | 54.4 | 34.3 | 17.8 | 1.3 | 46.6 | 60.7 | 68.9 | 2.5 | 32.4 | 16.2 | 58.9 | 51.5 | 64.6 | 18.2 | 3.1 | 20.9 | 34.7 | 63.4 | 5.9 | 36.2
[Cinbis et al.2015] | ✓ | 56.6 | 58.3 | 28.4 | 20.7 | 6.8 | 54.9 | 69.1 | 20.8 | 9.2 | 50.5 | 10.2 | 29.0 | 58.0 | 64.9 | 36.7 | 18.7 | 56.5 | 13.2 | 54.9 | 59.4 | 38.8
[Wang et al.2015] | ✓ | 37.7 | 58.8 | 39.0 | 4.7 | 4.0 | 48.4 | 70.0 | 63.7 | 9.0 | 54.2 | 33.3 | 37.4 | 61.6 | 57.6 | 30.1 | 31.7 | 32.4 | 52.8 | 49.0 | 27.8 | 40.2
[Bilen et al.2015] | ✓ | 66.4 | 59.3 | 42.7 | 20.4 | 21.3 | 63.4 | 74.3 | 59.6 | 21.1 | 58.2 | 14.0 | 38.5 | 49.5 | 60.0 | 19.8 | 39.2 | 41.7 | 30.1 | 50.2 | 44.1 | 43.7
[Ren et al.2016] | ✓ | 79.2 | 56.9 | 46.0 | 12.2 | 15.7 | 58.4 | 71.4 | 48.6 | 7.2 | 69.9 | 16.7 | 47.4 | 44.2 | 75.5 | 41.2 | 39.6 | 47.4 | 32.2 | 49.8 | 18.6 | 43.9
[Wang et al.2014] | ✓ | 80.1 | 63.9 | 51.5 | 14.9 | 21.0 | 55.7 | 74.2 | 43.5 | 26.2 | 53.4 | 16.3 | 56.7 | 58.3 | 69.5 | 14.1 | 38.3 | 58.8 | 47.2 | 49.1 | 60.9 | 47.7
Our DDT | | 67.3 | 63.3 | 61.3 | 22.7 | 8.5 | 64.8 | 57.0 | 80.5 | 9.4 | 49.0 | 22.5 | 72.6 | 73.8 | 69.0 | 7.2 | 15.0 | 35.3 | 54.7 | 75.0 | 29.4 | 46.9
Table 4: Comparisons of the CorLoc metric with weakly supervised object localization methods on _VOC 2007_. Note that the “✓” in the “Neg.” column indicates that these WSOL methods require access to a negative image set, whereas our DDT does not.
### Computational costs of DDT
Here, we take all 171 images in the _aeroplane_ category of _VOC 2007_ as an example to report the computational costs. The average image resolution of the 171 images is \(350\times 498\). The computational time of DDT has two main components: one is for feature extraction, the other is for deep descriptor transforming. Because we just need the first principal component, the transforming time on all the 120,941 descriptors of 512-d is only 5.7 seconds. The average descriptor extraction time is 0.18 second/image on GPU and 0.86 second/image on CPU, respectively. This shows the efficiency of the proposed DDT method in real-world applications.
### Unseen classes apart from ImageNet
In order to justify the generalization ability of DDT, we also conduct experiments on some images (of six subsets) disjoint from the images in ImageNet. Note that the six categories of these images are unseen by pre-trained models. The six subsets were provided in [13]. Table 5 presents the CorLoc metric on these subsets. Our DDT (69.1% on average) still significantly outperforms other methods on all categories, especially for some difficult object categories, e.g., _rake_ and _wheelchair_. In addition, the mean CorLoc metric of [13] based on VGG-19 is 51.6% on this dataset.
Methods | _Chipmunk_ | _Rhino_ | _Stoat_ | _Racoon_ | _Rake_ | _Wheelchair_ | Mean
---|---|---|---|---|---|---|---
[Cho et al.2015] | 26.6 | 81.8 | 44.2 | 30.1 | 8.3 | 35.3 | 37.7
SCDA | 32.3 | 71.6 | 52.9 | 34.0 | 7.6 | 28.3 | 37.8
[Li et al.2016] | 44.9 | 81.8 | 67.3 | 41.8 | 14.5 | 39.3 | 48.3
Our DDT | 70.3 | 93.2 | 80.8 | 71.8 | 30.3 | 68.2 | 69.1
Table 5: Comparisons of CorLoc on image sets disjoint from ImageNet.
Furthermore, in Fig. 3, several successful predictions by DDT and also some failure cases on this dataset are provided. In particular, for “rake” (“wheelchair”), even though a large portion of images in these two categories contain both people and rakes (wheelchairs), our DDT could still accurately locate the common object in all the images, i.e., rakes (wheelchairs), and ignore people. This observation validates the effectiveness (especially for the high CorLoc metric on _rake_ and _wheelchair_) of our method from the qualitative perspective.
<figure><img src="content_image/1705.02758/x3.png"><figcaption>(a) _Chipmunk_</figcaption></figure>
### Detecting noisy images
In this section, we quantitatively present the ability of DDT to identify noisy images. As aforementioned, in _Object Discovery_, there are 18%, 11% and 7% noisy images in the corresponding categories. In our DDT, the number of positive values in \(P^{1}\) can be interpreted as a detection score. The lower the number is, the more likely the image is a noisy one. In particular, no positive value at all in \(P^{1}\) marks the image as definitely a noisy image. For each category in that dataset, the ROC curve is shown in Fig. 4, which measures how well the methods detect noisy images. In the literature, only the method in [22] (i.e., the Image-Box model in that paper) could solve image co-localization with noisy data. From these figures, it is apparent that, in image co-localization, our DDT has significantly better performance in detecting noisy images than Image-Box (whose noise detection results are obtained by re-running the publicly available code released by the authors). Meanwhile, our mean CorLoc metric without noise is about 12% higher than theirs on _Object Discovery_, cf. Table 1.
<figure><img src="content_image/1705.02758/x9.png"><figcaption>Figure 4: ROC curves illustrating the effectiveness of our DDT at identifyingnoisy images on the _Object Discovery_ dataset. The curves in red line are theROC curves of DDT. The curves in blue dashed line present the method in [Tang_et al_., 2014].</figcaption></figure>
### Further study
In the above, DDT only utilizes the information of the first principal component, i.e., \(P^{1}\). How about the others, e.g., the second principal component \(P^{2}\)? In Fig. 5, we show four images containing dogs and the visualization of their \(P^{1}\) and \(P^{2}\). Through these figures, it is apparent that \(P^{1}\) can locate the whole common object. However, \(P^{2}\) interestingly separates the head region from the torso region. Meanwhile, these two meaningful regions can be easily distinguished from the background. These observations inspire us to use DDT for the more challenging _part-based_ image co-localization task in the future, which has never been touched before.
<figure><img src="content_image/1705.02758/x10.png"><figcaption>Figure 5: Four images belonging to the _dog_ category of _VOC 2007_ withvisualization of their indicator matrices P1 and P2. In visualization figures,warm colors indicate positive values, and cool colors present negative. (Bestviewed in color.)</figcaption></figure>
## 5 Conclusions
Pre-trained models are widely used in diverse applications in machine learning and computer vision. However, the treasures beneath pre-trained models are not exploited sufficiently. In this paper, we proposed Deep Descriptor Transforming (DDT) for image co-localization. DDT indeed revealed another kind of reusability of deep pre-trained networks, i.e., convolutional activations/descriptors can act as a common object detector. It offers further understanding of and insights into CNNs. Besides, our proposed DDT method is easy to implement, and it achieved strong image co-localization performance. Moreover, the generalization ability and robustness of DDT ensure its effectiveness and powerful reusability in real-world applications.
DDT also has potential in applications such as video-based unsupervised object discovery. In addition, robust PCA is a promising ingredient for DDT to further improve the CorLoc metric. Furthermore, the interesting observations in Sec. 4.6 make the more challenging but intriguing part-based image co-localization problem a natural direction for future work.
## References
* [Bilen _et al._2015] H. Bilen, M. Pedersoli, and T. Tuytelaars. Weakly supervised object detection with convex clustering. In _CVPR_, pages 1081–1089, 2015.
* [Chen _et al._2013] M. Chen, W. Li, W. Zhang, and X.-G. Wang. Dimensionality reduction with generalized linear models. In _IJCAI_, pages 1267–1272, 2013.
* [Cho _et al._2015] M. Cho, S. Kwak, C. Schmid, and J. Ponce. Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals. In _CVPR_, pages 1201–1210, 2015.
* [Cinbis _et al._2015] R. G. Cinbis, J. J. Verbeek, and C. Schmid. Multi-fold MIL training for weakly supervised object localization. In _CVPR_, pages 2409–2416, 2015.
* [Davidson2009] I. Davidson. Knowledge driven dimension reduction for clustering. In _IJCAI_, pages 1034–1039, 2009.
* [Everingham _et al._2015] M. Everingham, S. M. Ali Eslami, L. Van Gool, C. K. L. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes challenge: A retrospective. _IJCV_, 111(1):98–136, 2015.
* [Ghodrati _et al._2015] A. Ghodrati, A. Diba, M. Pedersoli, T. Tuytelaars, and L. Van Gool. Deepproposal: Hunting objects by cascading deep convolutional layers. In _ICCV_, pages 2578–2586, 2015.
* [Gu _et al._2011] Q.-Q. Gu, Z.-H. Li, and J.-W. Han. Joint feature selection and subspace learning. In _IJCAI_, pages 1294–1299, 2011.
* [Joulin _et al._2010] A. Joulin, F. Bach, and J. Ponce. Discriminative clustering for image co-segmentation. In _ICCV_, pages 2984–2991, 2010.
* [Joulin _et al._2012] A. Joulin, F. Bach, and J. Ponce. Multi-class co-segmentation. In _ICCV_, pages 139–150, 2012.
* [Joulin _et al._2014] A. Joulin, K. Tang, and L. Fei-Fei. Efficient image and video co-localization with Frank-Wolfe algorithm. In _ECCV_, pages 253–268, 2014.
* [Kim _et al._2011] G. Kim, E. P. Xing, L. Fei-Fei, and T. Kanade. Distributed co-segmentation via submodular optimization on anisotropic diffusion. In _ICCV_, pages 169–176, 2011.
* [Li _et al._2016] Y. Li, L. Liu, C. Shen, and A. V. D. Hengel. Image co-localization by mimicking a good detector’s confidence score distribution. In _ECCV_, pages 19–34, 2016.
* [Nie _et al._2011] F.-P. Nie, H. Huang, C. Ding, D.-J. Luo, and H. Wang. Robust principal component analysis with non-greedy \(\ell_{1}\)-norm maximization. In _IJCAI_, pages 1433–1438, 2011.
* [Pearson1901] K. Pearson. On lines and planes of closest fit to systems of points in space. _Philosophical Magazine_, 2(11):559–572, 1901.
* [Ren _et al._2016] W. Ren, K. Huang, D. Tao, and T. Tan. Weakly supervised large scale object localization with multiple instance learning and bag splitting. _IEEE TPAMI_, 38(2):405–416, 2016.
* [Rubinstein _et al._2013] M. Rubinstein, A. Joulin, J. Kopf, and C. Liu. Unsupervised joint object discovery and segmentation in internet images. In _CVPR_, pages 1939–1946, 2013.
* [Russakovsky _et al._2015] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. _IJCV_, 115(3):211–252, 2015.
* [Shi _et al._2013] Z. Shi, T. M. Hospedales, and T. Xiang. Bayesian joint topic modelling for weakly supervised object localisation. In _ICCV_, pages 2984–2991, 2013.
* [Simonyan and Zisserman2015] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In _ICLR_, pages 1–14, 2015.
* [Siva and Xiang2011] P. Siva and T. Xiang. Weakly supervised object detector learning with model drift detection. In _CVPR_, pages 343–350, 2011.
* [Tang _et al._2014] K. Tang, A. Joulin, L. Li, and L. Fei-Fei. Co-localization in real-world images. In _CVPR_, pages 1464–1471, 2014.
* [Vedaldi and Lenc2015] A. Vedaldi and K. Lenc. MatConvNet - convolutional neural networks for MATLAB. In _ACM MM_, pages 689–692, 2015.
* [Wang _et al._2014] C. Wang, W. Ren, K. Huang, and T. Tan. Weakly supervised object localization with latent category learning. In _ECCV_, pages 431–445, 2014.
* [Wang _et al._2015] X. Wang, Z. Zhu, C. Yao, and X. Bai. Relaxed multiple-instance SVM with application to object discovery. In _ICCV_, pages 1224–1232, 2015.
* [Wei _et al._2017] X.-S. Wei, J.-H. Luo, J. Wu, and Z.-H. Zhou. Selective convolutional descriptor aggregation for fine-grained image retrieval. _IEEE TIP_, 26(6):2868–2881, 2017.
* [Yang _et al._2017] Y. Yang, D.-C. Zhan, Y. Fan, Y. Jiang, and Z.-H. Zhou. Deep learning for fixed model reuse. In _AAAI_, pages 2831–2837, 2017.
* [Zhang _et al._2009] T. Zhang, D. Tao, X. Li, and J. Yang. Patch alignment for dimensionality reduction. _IEEE TKDE_, 21(9):1299–1313, 2009.
* [Zhang _et al._2013] L. Zhang, L. Zhang, D. Tao, and X. Huang. Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction. _IEEE TGRS_, 51(1):242–256, 2013.
* [Zhang _et al._2016] D. Zhang, D. Meng, L. Zhao, and J. Han. Bridging saliency detection to weakly supervised object detection based on self-paced curriculum. In _IJCAI_, pages 3538–3544, 2016.
* [Zhao and Fu2015] H. Zhao and Y. Fu. Semantic single video segmentation with robust graph representation. In _IJCAI_, pages 2219–2225, 2015.
* [Zhou2016] Z.-H. Zhou. Learnware: On the future of machine learning. _Frontiers of Computer Science_, 10(4):589–590, 2016.
|
1201.2027 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 22707,
"num_imgs": 3,
"llama3_tokens_count": 5954
} | [
"content_image/1201.2027/x1.png",
"content_image/1201.2027/x2.png",
"content_image/1201.2027/x3.png"
] | # Cooperative rectification in confined Brownian ratchets
Paolo Malgaretti
paolomalgaretti@ffn.ub.es
Department de Fisica Fonamental, Universitat de Barcelona, Spain
Ignacio Pagonabarraga
Department de Fisica Fonamental, Universitat de Barcelona, Spain
J. Miguel Rubí
Department de Fisica Fonamental, Universitat de Barcelona, Spain
February 19, 2024
###### Abstract
We analyze the rectified motion of a Brownian particle in a confined environment. We show the emergence of strong cooperativity between the inherent rectification of the ratchet mechanism and the entropic bias of the fluctuations caused by spatial confinement. Net particle transport may develop even in situations where neither the ratchet nor the geometric restrictions alone give rise to particle motion. The combined rectification effects can lead to bidirectional transport depending on particle size, resulting in a new route for segregation. The reported mechanism can be used to control transport in mesostructures and nanodevices in which particles move in a reduced space.
Molecular motor, Brownian ratchet, Entropic barrier, Rectification.

PACS: 05.40.Jc, 81.07.Nb, 87.16.Ka, 05.10.Gg

Brownian motors, identified in a variety of conditions ranging from biological Thomas et al. (2010) to synthetic systems Hänggi and Marchesoni (2009), extract work from thermal fluctuations under out-of-equilibrium conditions. In particular, Brownian ratchets rectify thermal fluctuations through their interaction with a periodic asymmetric potential (ratchet) in a non-equilibrium environment. They constitute a reference class of Brownian motors and have been widely used to understand how molecular Thomas et al. (2010); Astumian (2010) as well as artificial Allison and Abbott (2002); Zhu et al. (2004); Linke et al. (1999) motors operate. Geometrical constraints provide an alternative means to rectify thermal fluctuations due to the confinement they impose, which reduces the system's capability to explore space. Modulations in the available region lead to gradients in the system's effective free energy, inducing a local bias in its diffusion that can promote a macroscopic net velocity for asymmetric channel profiles Rubí and Reguera (2010) or under applied alternating fields Wambaugh et al. (1999). Geometric barriers constitute a common feature at small scales; they are found in a variety of systems, including molecular transport in zeolites Barrer (1978), ionic channels Calero et al. (2011), or microfluidic devices Dagdug et al. (2011); Altintas et al. (2009), where their shape explains, for example, the magnitude of the rectifying electric signal observed experimentally Martens et al. (2011).
Brownian motors usually operate in spatially restricted environments, where thermal rectification is affected by the geometrical constraints. Understanding this interplay is therefore relevant in a variety of experimental situations, ranging from micrometric systems, like microfluidic devices Dagdug et al. (2011); Altintas et al. (2009) or colloids moving in optical tweezer arrays Lee et al. (2005), to nanometric conditions, as realized with molecular motors Astumian (2010); Thomas et al. (2010), down to the atomic scale, where optical trapping allows the manipulation of cold atoms Zelan et al. (2011).
In this Letter, we will show that the interplay between a Brownian ratchet and the geometrical constraints it experiences strongly affects the out-of-equilibrium dynamics of small particles and promotes cooperative particle transport even when neither the Brownian ratchet nor the geometrical confinement rectifies on its own. We will clarify that such a net current results from the cooperative rectification of thermal fluctuations by the Brownian ratchet and the geometrical constraint, and that it has a significant signal-to-noise ratio that allows such motion to be tested experimentally on length scales comparable to a few ratchet periods. We will also show how the interplay between Brownian and confinement rectification can lead to a significant enhancement of the rectified mean velocity and, depending on particle size, to velocity reversal, providing a novel mechanism for particle segregation at the micro- and nano-scale.
<figure><img src="content_image/1201.2027/x1.png"><figcaption>Figure 1: Brownian ratchet and entropic barriers. A: local biased diffusiondue to confinement, described in terms of an effective entropic potential,S(x). B: Two-state model for a Brownian ratchet: In state1 particles slidealong the potential V1 and jump with rate ω1,2 to state2 where they diffuseuntil they jump back with rate ω2,1. The jump rates operate in differentregions along the potential period, breaking detailed balance. C: A Brownianmotor moving in a confined environment will be sensitive to the free energy A1(solid) generated by ratchet potential V1 (dot-dashed) and entropic potentialgenerated by the channel shown in panel A (dashed).</figcaption></figure>
In order to identify the generic features underlying the cooperative rectification provided by spatial confinement and Brownian ratcheting, we will analyze the dynamics of a single particle of radius \(R\) moving in a channel with variable half-width \(h(x)\), where \(x\) stands for the position along the channel longitudinal axis, as sketched in Fig. 1. Rather than analyzing explicitly the diffusion of a particle in such a channel under the action of a ratchet potential with the appropriate boundary conditions, it proves insightful to take advantage of the geometry of the channel and describe the particle dynamics in terms of its displacement along the channel longitudinal axis, including the channel boundary through an entropic potential \(\tilde{S}(x)=k_{B}TS(x)=k_{B}T\ln[2(h(x)-R)/R]\) Reguera and Rubí (2001) to which the particle is subject (\(k_{B}\) stands for the Boltzmann constant and \(T\) is the absolute temperature). Such an approximation, known as Fick-Jacobs Zwanzig (1992); Reguera et al. (2006); Kalinay and Percus (2008), is exact for a uniform channel, while its regime of validity for gently varying confining geometries has been explicitly elucidated Burada et al. (2007). This approach has proven very fruitful for understanding particle transport in a variety of confined systems Reguera and Rubí (2001); Calero et al. (2011).
To address the impact of entropic restrictions on the motion of Brownian motors, we will analyze both a flashing ratchet Reimann (2002), a generic model for the rectified motion of colloidal particles, and the two-state ratchet Jülicher et al. (1997), which models the rectified motion of molecular motors along biofilaments and effectively accounts for the mechanochemical coupling characteristic of molecular motors Qian (2004).
A colloidal particle subject to a periodic external potential \(V_{1}(x)\), expressed in units of the thermal energy \(k_{B}T\), behaves as a flashing ratchet when the random force breaks detailed balance Lee et al. (2005). This can be simply achieved with a Gaussian white noise whose second-moment amplitude is \(g(x)=\sqrt{D(x)+Q\left(\partial_{x}V_{1}(x)\right)^{2}}\) Reimann (2002), where \(Q\) quantifies Brownian rectification. The particle density \(p(x)\) then obeys
\[\frac{\partial p}{\partial t}=\frac{\partial}{\partial x}\left\{\frac{1}{2} \left[D(x)+Q\left(\frac{\partial}{\partial x}V_{1}\right)^{2}\right]\frac{ \partial p}{\partial x}+D(x)p\frac{\partial A_{1}}{\partial x}\right\},\]
where the dimensionless free energy \(A_{1}(x)=V_{1}(x)-S(x)\), includes the entropic potential the particle is subject to due to the change in the number of available states as the channel width varies. The channel corrugation induces a position-dependent diffusion coefficient, \(D(x)\) which depends on the channel section, \(h(x)\) Reguera and Rubí (2001).
The two-state ratchet model constitutes a standard, simple framework to describe molecular motor motion. As sketched in Fig. 1, a Brownian particle jumps between two states, \(i=1,2\), which determine under which potential, \(V_{i=1,2}\), it moves Jülicher et al. (1997). A choice of the jumping rates \(\omega_{12,21}\) that breaks detailed balance, jointly with an asymmetric potential \(V_{1}(x)\), determines the average molecular motor velocity \(v_{0}\neq 0\). The conformational changes of the molecular motors introduce an additional scale which competes with rectification and geometrical confinement. Infinitely processive molecular motors always remain attached to the filament along which they move and are affected by the geometrical restrictions only while displacing along the filament; accordingly, we choose a channel-independent binding rate \(\omega_{21,p}(x)=k_{21}\). In contrast, highly non-processive molecular motors detach frequently from the biofilament and diffuse away, leading to a channel-driven binding rate \(\omega_{21,np}(x)=k_{21}/h(x)\). The interaction between the molecular motor and the biofilament is chosen, for specificity, as
\[V_{1}(x)=V_{0}\left[\sin\left(2\pi x\right)+\lambda\sin\left(4\pi x\right)\right],\qquad\partial_{x}V_{2}=0\] (1)
in units of \(k_{B}T\), where the position along the filament, \(x\), is expressed in units of the ratchet period, \(L\). \(\lambda\) determines the asymmetry of the ratchet potential, \(V_{1}\), while \(V_{2}\) ensures free diffusion in state \(i=2\). Motors jump to the free state only in a region of width \(\delta\) around the minima of \(V_{1}\), with rate \(\omega_{12}=k_{12}\). Accordingly, the motor densities, \(p_{1},p_{2}\) along the channel follow Jülicher et al. (1997)
\[\begin{aligned}\partial_{t}p_{1}(x)+\partial_{x}J_{1}&=-\omega_{12}(x)p_{1}(x)+\omega_{21}(x)p_{2}(x)\\ \partial_{t}p_{2}(x)+\partial_{x}J_{2}&=\omega_{12}(x)p_{1}(x)-\omega_{21}(x)p_{2}(x)\end{aligned}\]
where \(J_{1,2}(x)=-D(x)\big(\partial_{x}p_{1,2}(x)+p_{1,2}(x)\partial_{x}A_{1,2}(x)\big)\) stand for the current densities in each of the two states in which the motor moves. Depending on the motor internal state, two dimensionless free energies, \(A_{1,2}(x)=V_{1,2}(x)-S(x)\), account for the interplay between the biofilament interaction and the channel constraints.
<figure><img src="content_image/1201.2027/x2.png"><figcaption>Figure 2: Rectification of a Brownian motor in a symmetric ratchet andsymmetric corrugated channel. Panel A,B: velocity, in units of D0/L, whereD0=kBT/6πηR and L stands for the potential period, of a flashing ratchet(squares) or a processive (circles), non-processive (triangles) motor movingaccording to the two-state model as a function of the phase-shift Δϕ fordifferent values of the parameter β/γ=0.7,0.8,0.9 (the larger the symbol size,the larger β) and fixed γ being the energetic potential ΔV1=4.0 and ΔQ=10 forthe flashing ratchet. Panel C: free energy profile for different Δϕ; thedotted box is the region where ω12≠0, the vertical dashed-dotted lines markthe position of the maxima of A1(x) in the absence of rectification. PanelD-E: rectified velocity as a function of the entropic barrier height uponvariation of R (filled points) and β (empty points) for flashing ratchet(squares) or a processive (circles), non-processive (triangles) motor movingaccording to the two-state model. Bigger points stand for bigger Δϕ beingΔϕ=0.1,0.2 for the non-processive two-state model and flashing ratchet,Δϕ=0.1,0.3 for the processive two-state model.</figcaption></figure>
The mean particle velocity is computed from the numerical solution of the Fokker-Planck equations stated above; for the flashing ratchet it reads \(v_{0}=LZ^{-1}\left(1-e^{\phi(L)}\right)\), and for the two-state model \(v_{0}=(J_{1}+J_{2})L\), where \(Z=\int_{0}^{L}dx\frac{e^{-\phi(x)}}{g(x)}\int_{x}^{x+L}\frac{e^{\phi(y)}}{g(y)}dy\) and \(\phi(x)=\int_{0}^{x}\frac{D(y)V^{\prime}_{1}(y)}{g(y)^{2}}dy\).
To analyze the interplay between the ratchet potential and the entropic constraints, we will discuss a channel with the same periodicity as the ratchet. In particular, we consider that the channel half-width obeys
\[h(x)=\gamma+\beta\left[\sin\left(2\pi x+\Delta\phi\right)+\Lambda\sin\left(4 \pi x+\Delta\phi\right)\right]\] (2)
where \(\Delta\phi\) accounts for the phase difference between the ratchet and the entropic potentials, while \(\Lambda\) quantifies the channel asymmetry. In turn, \(\gamma\) and \(\beta\), together with the particle radius \(R\), control the entropic barrier height, because the maximum change in entropy reads \(\Delta S_{m}=\ln\frac{h(x)_{max}-R}{h(x)_{min}-R}\), where \(h(x)-R\) is the effective half-width to which a tracer of radius \(R\) is sensitive.
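As a simple numerical illustration of how the closed-form velocity quoted above can be evaluated for the channel of Eq. (2), the sketch below integrates \(\phi(x)\), \(Z\) and \(v_{0}\) by quadrature. The parameter values are purely illustrative, and the Reguera and Rubí form assumed for the position-dependent diffusivity \(D(x)\) is our own choice; neither corresponds to the values used in the figures.

```python
import numpy as np

N = 2000                              # grid points per ratchet period (L = 1)
x = np.linspace(0.0, 2.0, 2*N + 1)    # two periods, so [x, x+L] stays on-grid
dx = x[1] - x[0]

# Illustrative parameter choices: symmetric ratchet and symmetric channel.
V0, lam = 4.0, 0.0                    # ratchet amplitude (kB T) and asymmetry
gamma, beta, Lam, R, dphase = 1.0, 0.35, 0.0, 0.2, 0.2   # channel of Eq. (2)
Q = 10.0                              # multiplicative-noise (rectification) parameter

V1 = V0*(np.sin(2*np.pi*x) + lam*np.sin(4*np.pi*x))
h = gamma + beta*(np.sin(2*np.pi*x + dphase) + Lam*np.sin(4*np.pi*x + dphase))
# Assumed Reguera-Rubi form for the position-dependent diffusivity (D0 = 1).
D = 1.0/(1.0 + np.gradient(h, dx)**2)**(1.0/3.0)
g = np.sqrt(D + Q*np.gradient(V1, dx)**2)

def cumtrapz(f):
    """Cumulative trapezoidal integral of f on the uniform grid x."""
    return np.concatenate([[0.0], np.cumsum(0.5*(f[1:] + f[:-1])*dx)])

phi = cumtrapz(D*np.gradient(V1, dx)/g**2)
w_minus, w_plus = np.exp(-phi)/g, np.exp(phi)/g

W = cumtrapz(w_plus)
inner = W[N:] - W[:N + 1]             # int_x^{x+L} e^{phi}/g dy for x in [0, L]
Zf = w_minus[:N + 1]*inner
Z = np.sum(0.5*(Zf[1:] + Zf[:-1])*dx) # outer integral over one period

v0 = (1.0 - np.exp(phi[N]))/Z         # v0 = L Z^{-1}(1 - e^{phi(L)}), L = 1
print(f"mean velocity v0 = {v0:.4g} (units of D0/L) at phase shift {dphase}")
```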
When both the channel and the ratchet are symmetric, \(\lambda=\Lambda=0\), neither can rectify on its own; nevertheless, the interplay of these two mechanisms leads to a net current whose magnitude, controlled by the phase shift \(\Delta\phi\), depends on the relative position of the ratchet along the channel, as shown in Fig. 2. Cooperative rectification emerges from the entropy-driven asymmetry of the hopping rates from a minimum of the effective free energy, \(A_{1}(x)\), to its closest minimum. Comparing panels A and B with panel C of Fig. 2 shows that rectification develops only when the minimum of the free energy \(A_{1}(x)\) is not equidistant from the free energy maxima, i.e. for \(\Delta\phi\neq n\pi\) (\(n=0,1\)). While for the flashing ratchet the net current is always in the direction of the shortest path the particle has to diffuse to overcome the free energy barrier, the intrinsic mechanism of the two-state model leads to more involved dynamics. The interplay between internal motor reorganization and the geometrically varying environment can lead to qualitatively new scenarios, such as the reversal of the direction of motion of processive motors, as clearly shown in panel E of Fig. 2. This scenario, sensitive to the effective cross section felt by the motor as quantified by the parameter \(\beta\), cannot be obtained with flashing ratchets or with alternative ratcheting mechanisms which do not incorporate the internal morphological changes of the displacing particle.
Even though the free energy, \(A_{1}(x)\), depends explicitly on the channel section, \(h(x)\), panels D and E of Fig. 2 show that the dependence of the rectified motion of a confined particle on the channel asymmetry, \(\beta\) (open points), and on the particle size, \(R\) (filled points), is essentially captured when expressing the rectified velocity in terms of the maximum entropy difference (or channel aperture), \(\Delta S_{m}=\ln\frac{\gamma-R+\beta}{\gamma-R-\beta}\). Only for very small values of \(\Delta S_{m}\), approaching the limit of validity of the Fick-Jacobs approximation, which prescribes a faster thermalization along the radial direction with respect to the longitudinal convection¹, do the details of the channel shape and particle size quantitatively affect the net particle velocity. Therefore, the relative aperture of the channel, quantified by \(\Delta S_{m}\), captures the main dependence of the particle velocity emerging from cooperative rectification and determines, for all the ratchet models explored, the optimal regime for cooperative rectified motion.
<figure><img src="content_image/1201.2027/x3.png"><figcaption>Figure 3: Rectification of a Brownian motor in an asymmetric ratchet andsymmetric corrugated channel. Panel A-B: velocity is normalized by v0, that isthe intrinsic velocity of a particle with the same radius R under the actionof the ratchet in the case of a flat channel with the same volume, symbols andparameters values as in figure 2. C-D: velocity as a function of β (openpoints) or R (fileed points); points as in fgure2. Δϕ=0.2,0.3,0.5,0.6 for theflashing ratchet and Δϕ=0.1,0.8,0.9 for the two-state model, the larger thesymbols the bigger Δϕ.</figcaption></figure>
The rectified velocities displayed in Fig. 2 can be of order \(D_{0}/L\) for a ratchet potential of magnitude \(4k_{B}T\), i.e. in experimentally achievable regimes. In fact, for optically driven colloids, ratchet potential amplitudes one order of magnitude larger than the thermal energy can be achieved by tuning the laser beam intensity, while the height of the energy barrier molecular motors are subject to due to ATP hydrolysis can be as much as \(\sim 10k_{B}T\) Howard (2001).
For asymmetric ratchets, \(\lambda\neq 0\), active transport leads to net rectification even for a symmetric, \(\Lambda=0\), corrugated channel. The particle's sensitivity to the channel shape now leads to a strong enhancement of the mean particle velocity. Moreover, panels A and B of Fig. 3 show that confinement allows particles to move against the underlying ratchet rectification. The inset of panel C and panel D of Fig. 3 emphasize the non-monotonic behavior of the velocity with respect to the entropy barrier, leading to maxima of the velocity enhancement and inversion. Hence, \(\Delta S_{m}\) allows one to identify, generically, the optimal regime for rectified transport. Such behavior is stronger for a flashing ratchet than for molecular motors, leading to velocities up to 40 times larger than their unconfined counterparts. Since \(\Delta S_{m}\) depends on the tracer size (filled points), panels C and D of Fig. 3 indicate a novel mechanism for particle segregation based on particle size: depending on the phase shift, \(\Delta\phi\), bigger particles can move faster or slower than smaller ones and, by fine-tuning the parameters, they can also be trapped or even move in the opposite direction. Finally, if particles move in the presence of both an asymmetric ratchet potential and channel corrugation (\(\lambda\neq 0,\Lambda\neq 0\)), the strong enhancement in the rectified particle velocity is preserved, and both the inversion of the direction of motion and the mechanism for particle-size segregation are generically observed for appropriate values of the ratchet parameters.
In conclusion, we have studied the cooperative rectification between geometrical constraints and Brownian ratchets as a new mechanism for active transport in confined environments and have shown that their interplay profoundly affects the net motion of small particles. We have clarified the physical origin of such rectification, which may take place even when neither the entropic nor the ratchet mechanism alone can lead to net particle motion, elucidating the role played by the biased diffusion generated by the geometrical constraint. The rectified velocity can be detectable in experimentally feasible situations and can be strongly enhanced by increasing the amplitude of the ratchet barrier in the two-state model or the multiplicative noise parameter \(Q\) in the flashing ratchet. In the presence of ratchet rectification, the entropic constraints modulate the velocity in a particle-size-dependent manner that leads to regimes of maximum velocity enhancement and velocity reversal, providing a new mechanism for particle segregation in confined environments. Although cooperative rectification relies on the phase difference between the ratchet potential and the corrugated channel, the particle velocity vanishes only when the two are in registry. According to Fig. 2, a probability distribution of phase shifts, \(p(\Delta\phi)\), will still lead to rectification whenever \(\sqrt{\langle\Delta\phi^{2}\rangle}\ll\pi\), with a magnitude that will depend on \(\langle\Delta\phi\rangle\). For broader phase-shift distributions, \(\sqrt{\langle\Delta\phi^{2}\rangle}\sim\pi\), cooperative rectification will survive only for asymmetric ratchets, with \(\lambda\neq 0\), and will be generically observed when both the entropic and ratchet potentials are asymmetric, \(\lambda\neq 0,\Lambda\neq 0\). The cooperative mechanism described is robust, and can be exploited to control the active transport of particles in ionic channels Calero et al. (2011) or nuclear pores, of confined colloids Lee et al. (2005), or of nanobeads in microfluidic devices.
We acknowledge the Dirección General de Investigación (Spain) and DURSI project for financial support under projects FIS 2008-04386 and 2009SGR-634, respectively. J.M. Rubí acknowledges financial support from Generalitat de Catalunya under program Icrea Academia
## References
* Thomas et al. (2010) G. Thomas, J. Prost, P. Martin, and J.-F. Joanny, Curr. Op. in Cell Biol. pp. 1–7 (2010).
* Hänggi and Marchesoni (2009) P. Hänggi and F. Marchesoni, Rev. Mod. Phys. **81**, 387 (2009).
* Astumian (2010) R. D. Astumian, Biophys. J. **98**, 2401 (2010).
* Allison and Abbott (2002) A. Allison and D. Abbott, Microelectronics Journal **33**, 235 (2002).
* Zhu et al. (2004) B. Y. Zhu, F. Marchesoni, and F. Nori, Phys. Rev. Lett. **92**, 180602 (2004).
* Linke et al. (1999) H. Linke, H. Xu, A. Lo, W. Sheng, A. Svensson, P. Omling, P. E. Lindelof, R. Newbury, and R. P. Taylor, Physica B **272**, 61 (1999).
* Rubí and Reguera (2010) J. M. Rubí and D. Reguera, Chem. Phys. **375**, 518 (2010).
* Wambaugh et al. (1999) J. F. Wambaugh, C. Reichhardt, C. J. Olson, F. Marchesoni, and F. Nori, Phys. Rev. Lett. **83**, 5106 (1999).
* Barrer (1978) R. M. Barrer, _Zeolites and Clay Minerals as Sorbents and Molecular Sieves_ (Academic, New York, 1978).
* Calero et al. (2011) C. Calero, J. Faraudo, and M. Aguilella-Arzo, Phys. Rev. E **83**, 021908 (2011).
* Dagdug et al. (2011) L. Dagdug, A. M. Berezhkovskii, Y. A. Makhnovskii, V. Y. Zitsereman, and S. Bezrukov, J. Chem. Phys. **134**, 101102 (2011).
* Altintas et al. (2009) E. Altintas, E. Sarajlic, F. K. Bohringerb, and H. Fujita, Sensors and Actuators A **154**, 123 (2009).
* Martens et al. (2011) S. Martens, G. Schmidt, L. Schimansky-Geier, and P. Hänggi, Phys. Rev. E **83**, 051135 (2011).
* Lee et al. (2005) S.-H. Lee, K. Ladavac, M. Polin, and D. Grier, Phys. Rev. Lett. **94**, 110601 (2005).
* Zelan et al. (2011) M. Zelan, H. Hagman, G. Labaigt, S. Jonsell, and C. Dion, Phys. Rev. E **83**, 020102 (2011).
* Reguera and Rubí (2001) D. Reguera and J. M. Rubí, Phys. Rev. E **64**, 1 (2001).
* Zwanzig (1992) R. Zwanzig, J. Phys. Chem. **96**, 3926 (1992).
* Reguera et al. (2006) D. Reguera, G. Schmid, P. S. Burada, J. M. Rubí, P. Reimann, and P. Hänggi, Phys. Rev. Lett. **96**, 130603 (2006).
* Kalinay and Percus (2008) P. Kalinay and J. K. Percus, Physical Review E **78**, 021103 (2008).
* Burada et al. (2007) P. S. Burada, G. Schmid, D. Reguera, J. M. Rubí, and P. Hänggi, Physical Review E p. 051111 (2007).
* Reimann (2002) P. Reimann, Phys. Rep. **361**, 57 (2002).
* Jülicher et al. (1997) F. Jülicher, A. Ajdari, and J. Prost, Rev. Mod. Phys. **69**, 1269 (1997).
* Qian (2004) H. Qian, Phys. Rev. E **69**, 012901 (2004).
* Howard (2001) J. Howard, _Mechanics of Motor Proteins and the Cytoskeleton_ (Sinauer, Sunderland, 2001).
|
1609.04620 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 32123,
"num_imgs": 10,
"llama3_tokens_count": 8505
} | [
"content_image/1609.04620/price_example.png",
"content_image/1609.04620/CGR.png",
"content_image/1609.04620/Gtrue.png",
"content_image/1609.04620/bins_clean_15.png",
"content_image/1609.04620/Cint.png",
"content_image/1609.04620/Neff_all.png",
"content_image/1609.04620/Neff_cumsum.png",
"content_image/1609.04620/monthly_Reff.png",
"content_image/1609.04620/on_off_sef.png",
"content_image/1609.04620/trade_distribution.png"
] | # Price impact without order book: A study of the OTC credit index market
Z Eisler\({}^{1}\) and J-P Bouchaud\({}^{1}\)
\({}^{1}\) Capital Fund Management, 23 rue de l’Université, 75007 Paris
zoltan.eisler@cfm.fr
February 19, 2024
###### Abstract
We present a study of price impact in the over-the-counter credit index market, where no limit order book is used. Contracts are traded via dealers that compete for the orders of clients. Despite this distinct microstructure, we successfully apply the propagator technique to estimate the price impact of individual transactions. Because orders are typically split less than in multilateral markets, impact is observed to be mainly permanent, in line with theoretical expectations. A simple method is presented to correct for errors in our classification of trades as buys or sells. We find a very significant, temporary increase in order flow correlations during late 2015 and early 2016, which we attribute to increased order splitting or herding among investors. We also find indications that orders advertised to fewer dealers may have lower price impact. Quantitative results are compatible with earlier findings in other, more classical markets, further supporting the argument that price impact is a universal phenomenon, to a large degree independent of market microstructure.
## 1 Introduction
### Price impact
Liquidity in financial markets is an elusive concept, with many definitions in existence. From a practical point of view, however, one of its most important metrics is the response of price to buying and selling. This reaction is called _price impact_, and it has been treated in a long series of empirical papers (see e.g. Hasbrouck [1991], Bouchaud et al. [2004], Almgren et al. [2005], Moro et al. [2009], Bouchaud et al. [2009], Tóth et al. [2011] and refs. therein). One of the most important findings is that impact is not only mechanical but dynamic, meaning that it cannot be described exclusively by the revealed supply or demand at any given time – say the content of the visible limit order book [Weber and Rosenow, 2005]. It is rather related to underlying “latent” supply and demand [Tóth et al., 2011, Donier et al., 2015] which correspond to the intentions of market participants, and which manifest themselves over time. Most of the recent studies of impact have been carried out on transparent, listed markets. Nevertheless, many aspects of price impact appear to be universal, i.e. common to many asset classes, even to exotic ones [Donier and Bonart, 2015, Tóth et al., 2016].
In this paper, we will continue the exploration by presenting the price impact of individual transactions in credit indices, where trading does not take place in limit order books. The remainder of this section introduces these products, briefly surveys the relevant literature, and presents the data used for our study. Section 2 then describes a naive approach to calculate impact based on a standard propagator technique. Section 3 looks at the effect of misclassification between buy and sell trades, and corrects the resulting biases in our results. Section 4 discusses a temporary pattern of increased order splitting and higher impact observed in the data. Section 5 finds indications that the impact of trades increases with the number of dealers involved. Finally Section 6 concludes.
### The credit index market
Today the credit index market is fairly mature. The most liquid derivative products are proposed by Markit; we will look at four of them: two US-based indices, CDX IG (for Investment Grade) and CDX HY (for High Yield), and their European counterparts, iTraxx Europe and iTraxx Crossover, respectively. These correspond to baskets of CDSs [Oehmke and Zawadowski, 2014], each of which represents an insurance on bonds of a given corporate issuer within the respective grade and geographical zone. The instruments are standardized, their mechanics very much resemble futures contracts, and they roll once every 6 months. At the time of the roll a new maturity is issued; these are called “series”, for example S23, S24 and S25 for iTraxx Europe. On any given day most of the liquidity is concentrated in the most recently issued five-year series of each index; in the following we will only study these (see Table 1).
One particularity of this market – as opposed to stocks or futures – is that it is purely _over-the-counter_ (OTC), currently _without_ any liquid limit order book. Information is fragmented: there is no single, central source of tradable prices; instead, client orders pass through a large number of dealers. The latter usually do provide indicative bid/ask prices, but the actual trades are mostly done via Request for Quotes (RFQ), see also Hendershott and Madhavan [2015]. This means that the client (liquidity taker) auctions off its trades to the liquidity providers, by sending them information about the conditions of the deal (which product, buy or sell, size) either electronically or by voice, receiving competing quotes in return, and taking the best price.
To counteract the bilateral design of OTC markets, which favors opacity, the Dodd-Frank Act has mandated several changes. Among them are obligatory post-trade reporting, and the creation of Swap Execution Facilities (SEFs), which provide an organized framework for dealing in eligible OTC instruments. Today a large portion of trades is required to go through SEFs, whose volume is predominantly done via electronic RFQ. Even though they provide order books, those are – for the moment – empty.
### Literature review
Credit trading has received considerably less attention than equities or futures, and most studies have been done by or in collaboration with regulators who have privileged access to non-anonymous data. Gehde-Trapp et al. [2015] study records from the German Bundesbank regarding single-name CDS issues. They find significant price impact using a model where the effect of each trade is permanent. Shachar [2012] of the New York Federal Reserve defines buy/sell orders by assuming that the initiator of the trades is the end-user (as opposed to the dealer), and focuses to a large extent on the inventory management of dealers. The study finds evidence of ”hot-potato” trading [Lyons, 1997] whereby an initial client trade changes hands among dealers several times, while its effect is being gradually incorporated into the price. Loon and Zhong [2016] of the Securities and Exchange Commission focus on the same credit indices as our study. Their work takes a policy-maker’s point of view, and argues that the wider transparency created by the Dodd-Frank Act has improved several metrics of liquidity. They focus particularly on the transitory period during the introduction of the reform, whereas we will consider more recent data where market structure is already relatively stable. Finally, Hendershott and Madhavan [2015] analyze both electronic and voice trading in single-name CDS. Most notably they identify price impact related to information leakage, especially when the client requests prices from many dealers, and even if he finally decides not to trade.
### The dataset
Our period of study is 17 June 2015 – 31 August 2016. For the four products we have recorded a semi-realtime (several updates per minute) indicative data feed via a service called CBBT (Composite Bloomberg Bond Trader). This represents a continuous, electronic poll of recent executable prices from dealers. Nevertheless, it is not a bid or ask price, only an indicative level around which one expects to be able to transact. At some time \(t\) the indicative price is quoted as a credit spread \(s_{t}\), which is the annualized insurance premium in basis points.¹ In the following we will express all prices as basis points of the typical credit spread itself, meaning
\[m_{t}=10^{4}\times\frac{s_{t}}{\left\langle s_{t}\right\rangle},\]
where \(\left\langle\cdot\right\rangle\) denotes a time average.
Anonymous information about trades is also available from a different source: _trade repositories_ mandated by regulation. We have used the records of two such organizations.² These include a substantial part of all trades with credit spread, timestamp, volume and other additional information.
While the data are rich and relatively clean, the two sources (prices and trades) are independent, and there is no _a priori_ reason for perfect synchronization between the two.
Product code | Avg. intertrade time [sec] | Neff | R(ℓ=30) [bps] | G(ℓ=30) [bps]
---|---|---|---|---
CDX HY S24 | 75 | 2.3 | 5.8 | 3.4
CDX HY S25 | 66 | 14.2 | 51.2 | 3.9
CDX HY S26 | 90 | 1.7 | 7.1 | 4.2
CDX IG S24 | 113 | 1.6 | 11.8 | 7.5
CDX IG S25 | 92 | 9.8 | 55.0 | 4.9
CDX IG S26 | 119 | 2.8 | 10.0 | 5.3
iTraxx Crossover S23 | 128 | 1.5 | 13.4 | 9.3
iTraxx Crossover S24 | 106 | 5.7 | 65.4 | 6.6
iTraxx Crossover S25 | 199 | 1.2 | 19.9 | 10.6
iTraxx Europe S23 | 171 | 1.4 | 15.8 | 10.1
iTraxx Europe S24 | 132 | 7.1 | 67.1 | 8.4
iTraxx Europe S25 | 181 | 2.0 | 21.4 | 10.2
Table 1: Summary of the different credit index products studied. Notice that
the value of G is much more stable than that of R across different series of
the same product.
<figure><img src="content_image/1609.04620/price_example.png"><figcaption>Figure 1: An example of the time evolution of indicative credit spread and itsvalue reported for trades. We show the product iTraxx Europe S25 on 31 August2016.</figcaption></figure>
## 2 Naive propagators
Price impact is often analyzed in the context of linear models, where the market price \(m_{t}\) just before trade \(t\) is written as a linear combination of the time dependent impact of past trades [Bouchaud et al., 2004]:
\[m_{t}=\sum_{t^{\prime}<t}\left[{G}(t-t^{\prime})\epsilon_{t^{\prime}}+\eta_{t^ {\prime}}\right]+m_{-\infty}.\] (1)
\(\epsilon_{t^{\prime}}\) is the sign of the trade at time \(t^{\prime}\) (\(+\) for buyer-initiated, \(-\) for seller-initiated trades), and \(\eta_{t^{\prime}}\) is an independent noise term. \({G}(\ell)\) is called the ‘propagator’, and it describes how the price at time \(t\) is modified _due to_ the trade at \(t-\ell\). In equity and futures markets, this propagator is found to decay with time, i.e. a large part of price impact is transient rather than permanent (for a recent study of the long-term behaviour of \(G(\ell)\) in equities, see Brokmann et al. [2015]).
In order to calibrate the model (1) one calculates the _response function_\({\cal R}(\ell)\), which is defined as
\[{\cal R}(\ell)=\langle(m_{t+\ell}-m_{t})\cdot\epsilon_{t}\rangle,\] (2)
and which quantifies the price move _after_ a trade, but not necessarily _due to_ the trade. One then measures the autocorrelation of order signs, which is customarily defined as
\[C(\ell)=\langle\epsilon_{t}\epsilon_{t+\ell}\rangle.\] (3)
And finally one solves the linear equation [Bouchaud et al., 2006]
\[{\cal R}(\ell)=\sum_{0<n\leq\ell}{G}(n)C(\ell-n)+\sum_{n>0}[{G}(n+\ell)-G(n)]C (n)\] (4)
to map out the numerical value of \(G(\ell)\).³
To guess the sign of trades one often relies on some heuristic. If we denote the price of trade \(t\) by \(p_{t}\), then simply
\[\epsilon_{t}=\mathrm{sign}(p_{t}-m_{t}).\] (5)
An example of transaction and reference prices is shown in figure 1.
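To make the calibration pipeline of Eqs. (2)-(5) concrete, the following sketch estimates \(C(\ell)\) and \({\cal R}(\ell)\) from a series of reference prices, transaction prices and detected signs, and then solves a truncated version of the linear system (4) for \(G(\ell)\). The input series below are synthetic stand-ins, and the truncation of the second sum in (4) at the maximum lag, with \(G(n+\ell)\) beyond that horizon approximated by its plateau value, is our own choice.

```python
import numpy as np

rng = np.random.default_rng(0)
T, Lmax = 20000, 30

# Synthetic stand-in data: true signs, a price with permanent impact, and
# transaction prices around it. In practice m comes from the indicative feed
# and p (with timestamps) from the trade repositories.
eps_true = np.sign(rng.standard_normal(T))
m = np.concatenate([[0.0],
                    np.cumsum(0.5*eps_true + 0.3*rng.standard_normal(T))[:-1]])
p = m + 0.5*eps_true + 0.1*rng.standard_normal(T)

eps = np.sign(p - m)                                    # Eq. (5): sign heuristic

C = np.ones(Lmax + 1)                                   # Eq. (3), with C(0) = 1
for l in range(1, Lmax + 1):
    C[l] = np.mean(eps[l:]*eps[:-l])
R = np.array([np.mean((m[l:] - m[:-l])*eps[:-l])        # Eq. (2)
              for l in range(1, Lmax + 1)])

# Eq. (4) as a linear system A G = R for G(1..Lmax); the infinite sum over n is
# truncated at Lmax, and G(n+l) beyond the horizon is replaced by G(Lmax),
# which is reasonable here because the propagator saturates at large lags.
A = np.zeros((Lmax, Lmax))
for l in range(1, Lmax + 1):
    for n in range(1, l + 1):
        A[l - 1, n - 1] += C[l - n]
    for n in range(1, Lmax + 1):
        A[l - 1, min(n + l, Lmax) - 1] += C[n]
        A[l - 1, n - 1] -= C[n]
G = np.linalg.solve(A, R)
print("N_eff =", C.sum(), "  G(1), G(30) =", G[0], G[-1])
```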
As one can see from figure 2, the shape of \(C(\ell)\) is well fitted by a stretched exponential. This is true for most individual products and on average across them, and it is in contrast with earlier studies in order book markets, where \(C(\ell)\) rather decays as a slow, power-law function [Bouchaud et al., 2009]. In the latter, liquidity at good prices is often small [Bouchaud et al., 2006], so large orders tend to be sliced, producing a long-range autocorrelation of small trades. In OTC markets clients are encouraged to request deals that correspond to their full liquidity needs, so that after the trade is done, the dealer offloading the inventory just acquired will not have to compete for liquidity with the same client. Since trades are bilateral, the dealer knows the identity of the client, and can reward or penalize it by adjusting the bid-ask spread according to any adverse selection perceived on earlier deals [Osler et al., 2016].
Since the order sign process is not long range correlated, its autocorrelation function is integrable and one can therefore define an _effective number of correlated orders_ via
\[N_{\mathrm{eff}}=\sum_{\ell=0}^{\infty}C(\ell).\]
This value varies between \(1.2\) and \(14.2\), as reported in Table 1. These differences are related to the time period covered by each series; we will study this point further in Section 4.
We can calculate the propagator via (4); its average across products is given in figure 3. Bouchaud et al. [2004] have shown that if \(C(\ell)\) has a power-law form, then the propagator should itself decay over time as a power-law in order to maintain the efficiency of prices. In other words, impact is mostly _transient_ in that case. On the other hand, since our \(C(\ell)\) is short ranged, the same argument predicts that \(G(\ell)\) should tend to a constant for large \(\ell\), corresponding to a non-zero _permanent impact_ component. This is indeed what we observe, see figure 3.
Calculating \(G\) from \({\cal R}\) involves inverting a matrix whose elements are related to \(C\). This operation amplifies the noise in the correlations, and so it is useful to also give approximate formulas that avoid this. If we know that \(G\) is increasing before converging to a fixed value, and \(C\) is positive, then from (4) one can find two bounds on \(G(\ell)\). In the limit when \(C(\ell)\) reaches zero much more quickly than \(G(\ell)\) goes to its asymptotic value, we can write for large \(\ell\) that
\[(2N_{\mathrm{eff}}-1)^{-1}{\cal R}(\ell)\leq G(\ell).\] (6)
Conversely, if we assume that \(G\) saturates immediately, then \({\cal R}\) will be maximal, and for large \(\ell\) we get
\[G(\ell)\leq N_{\mathrm{eff}}^{-1}{\cal R}(\ell).\] (7)
Figure 3 shows these bounds which are not excessively wide, as well as the real \(G\) calculated numerically.
In reality the propagator increases steeply but continuously in the initial period, a feature not observed in previous studies. This makes sense in the absence of a central order book: the information that someone bought or sold takes a finite time of \(5-10\) trades to diffuse in the market and to get incorporated into the price.
Beyond the propagators which give a microscopic description of price moves, one can also look at a more aggregate characterization by dividing the data into \(15\) minute bins. Then one can calculate in each bin \(b\) the net signed notional defined as
\[I_{b}=\sum_{t\in b}\epsilon_{t}Q_{t},\]
where the sum runs over trades in the bin, and \(Q_{t}\) is the notional value of trade \(t\). As a function of this quantity one can calculate the average price change \(r_{b}=m_{b}-m_{b-1}\) over the bin. Figure 4 confirms a strong correlation, similar results can be obtained regardless of the precise time scale. Note the concavity of this plot, also observed in many other markets with order books (see e.g. [Bouchaud et al., 2009], figure 2.5). Although we do not have enough statistics to test on our trades the \(\sqrt{Q}\) impact law universally observed in all markets studied so far, we believe that the concave shape seen in figure 4 is compatible with the general “latent liquidity” idea of Tóth et al. [2011], Donier et al. [2015].
<figure><img src="content_image/1609.04620/CGR.png"><figcaption>Figure 2: (left) Average of the sign autocorrelation C(ℓ), and the stretchedexponential fit a×exp(−bℓ)ν, with a=0.43, b=0.16 and ν=2/3. (right) Average ofthe response function R(ℓ), and the stretched exponential fit a×[1−exp(−bℓ)ν],with a=33.9, b=0.068 and ν=0.88.</figcaption></figure>
<figure><img src="content_image/1609.04620/Gtrue.png"><figcaption>Figure 3: Average value across products of various quantities: (red points)Propagator G(ℓ), (red line) fits of the former quantity with the samestretched exponential form as above. (red shaded area) Bounds on G(ℓ) based on(6) and (7). Note that as expected these are only valid for large ℓ. (blueline) The true, bias-adjusted propagator Gtrue(ℓ) calculated with the fits ofCtrue(ℓ) and Rtrue(ℓ), and divided by a factor 3 for better readability.</figcaption></figure>
<figure><img src="content_image/1609.04620/bins_clean_15.png"><figcaption>Figure 4: The average return in 15-minute windows normalized by their absolutemean (rb/⟨|rb|⟩), as a function of the imbalance normalized by its ownstandard deviation (Ib/std(Ib)). Data points have been separated into 30groups according to their rank on the horizontal axis, we show the averagereturns in each group. Outliers outside the horizontal range [−3,+3] have beendiscarded. The purple lines correspond to individual products, and the redpoints to all products together. The dashed line is a linear fit with slope1/3. Note the clear concavity of the average curve as the volume imbalanceincreases.</figcaption></figure>
## 3 Correction for the noise in order signs
In the previous section we have confirmed the existence of price impact in OTC credit indices. However, the magnitude of the effect remains to be validated for the following reason. It is known that the heuristic (5) for identifying order signs does not always give correct results. If \(\epsilon_{t}\) is incorrect, naturally all expectation values calculated from it will be incorrect as well. In order to verify such biases in our results, we are going to use a proprietary dataset including 252 trades executed over the same period by our firm (CFM).
As opposed to the detected trade sign \(\epsilon_{t}\), let us introduce the notation \(\epsilon^{\mathrm{true}}_{t}\) for the true sign of the same order, which is not a priori known, except for those of CFM. It is also convenient to introduce an auxiliary variable \(q_{t}\) that is \(1\) when we classified the trade correctly, and \(0\) when we did not. This way
\[\epsilon_{t}=q_{t}\epsilon^{\mathrm{true}}_{t}+(1-q_{t})\times(-\epsilon^{ \mathrm{true}}_{t})\equiv\epsilon^{\mathrm{true}}_{t}(2q_{t}-1).\] (8)
If we look at the above mentioned CFM trades, the rate of correct classification, described by the average \(\left\langle q_{t}\right\rangle_{t\in\mathrm{CFM}}\), is only 72%. This value is constant within noise level across different months in the sample.
As a first step we would like to show that basic correlations of the detected order signs \(\epsilon_{t}\) are the same in the CFM subset of trades and the rest. Let us compare \(C(\ell)=\langle\epsilon_{t}\epsilon_{t+\ell}\rangle\) with the subsample average
\[C_{\mathrm{CFM}}(\ell)=[\langle\epsilon_{t}\epsilon_{t+\ell}\rangle_{t\in \mathrm{CFM}}+\langle\epsilon_{t}\epsilon_{t+\ell}\rangle_{t+\ell\in\mathrm{ CFM}}]/2,\]
where we conditioned on at least one of the trades belonging to CFM. Figure 5 shows that there is a fair match, especially for short lags. This gives an indication that other statistics calculated on CFM trades may be approximately similar to the whole market, and can be used in the following.
Let us now look at how misclassification biases our earlier calculations. Let us define
\[C_{\mathrm{true}}(\ell)=\left\langle\epsilon^{\mathrm{true}}_{t} \epsilon^{\mathrm{true}}_{t+\ell}\right\rangle=\left\langle\epsilon^{\mathrm{ true}}_{t}\epsilon_{t+\ell}\right\rangle+\left\langle\epsilon^{\mathrm{true}}_ {t}(\epsilon^{\mathrm{true}}_{t+\ell}-\epsilon_{t+\ell})\right\rangle.\] (9)
We can readily measure the first term when \(t\in\mathrm{CFM}\), whereas for \(\ell\geq 1\) the second term can be rewritten as
\[\left\langle\epsilon^{\mathrm{true}}_{t}(\epsilon^{\mathrm{true}}_{t+\ell}-\epsilon_{t+\ell})\right\rangle=-\left\langle\epsilon^{\mathrm{true}}_{t}\epsilon^{\mathrm{true}}_{t+\ell}\cdot 2(q_{t+\ell}-1)\right\rangle\approx-\left\langle\epsilon^{\mathrm{true}}_{t}\epsilon^{\mathrm{true}}_{t+\ell}\right\rangle\cdot 2(\left\langle q_{t+\ell}\right\rangle-1)=-2C_{\mathrm{true}}(\ell)\cdot(\left\langle q_{t}\right\rangle-1).\] (10)
For the approximation step we assumed that whether or not we make a mistake in identification is independent of the two-point product of true signs. After reorganization and assuming that we can use CFM trades in part of the correlation, we get
\[C_{\mathrm{true}}(\ell)=\frac{\left\langle\epsilon^{\mathrm{true}}_{t}\epsilon_{t+\ell}\right\rangle_{t\in\mathrm{CFM}}}{2\langle q_{t}\rangle-1}.\] (11)
This estimation of \(C_{\mathrm{true}}(\ell)\) is shown in figure 5.
As for the response function, one can define its “true” variant as
\[{\cal R}^{\mathrm{true}}(\ell)=\langle(m_{t+\ell}-m_{t})\cdot\epsilon^{\mathrm{true}}_{t}\rangle.\] (12)
This is related to the response with detected order signs as
\[{\cal R}(\ell)=\langle(m_{t+\ell}-m_{t})\epsilon_{t}\rangle=\langle(m_{t+\ell}-m_{t})\times(2q_{t}-1)\epsilon^{\mathrm{true}}_{t}\rangle\approx(2\langle q_{t}\rangle-1){\cal R}^{\mathrm{true}}(\ell).\] (13)
In the approximation step we neglect the correlation of identification error and future price change. Finally:
\[{\cal R}^{\mathrm{true}}(\ell)=\frac{{\cal R}(\ell)}{2\langle q_{ t}\rangle-1}.\] (14)
One can then define a true propagator \(G^{\mathrm{true}}(\ell)\) via (4) by inserting \({\cal R}^{\mathrm{true}}(\ell)\) and \(C^{\mathrm{true}}(\ell)\). Using (11) and (14) to approximate the latter, one obtains the numerical value of the true propagator averaged over all products, see figure 3. This shows that \(G^{\mathrm{true}}(\ell)\approx 3G(\ell)\).
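In practice the correction amounts to a simple rescaling. A minimal sketch, assuming that the correct-classification rate measured on our own trades (about 72%) generalizes to the whole market:

```python
Q_MEAN = 0.72                      # correct-classification rate from our trades

def r_true(r_naive, q=Q_MEAN):
    """Eq. (14): the naive response is attenuated by a factor (2<q> - 1)."""
    return r_naive / (2.0*q - 1.0)

def c_true(c_cross_cfm, q=Q_MEAN):
    """Eq. (11): C_true(l) from the correlation between known true signs and
    detected signs, de-biased by the same factor."""
    return c_cross_cfm / (2.0*q - 1.0)

# Example: a naive response of 10 bps corresponds to roughly 23 bps once corrected.
print(r_true(10.0))
```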
<figure><img src="content_image/1609.04620/Cint.png"><figcaption>Figure 5: Cumulative sign autocorrelation functions. The estimated curvecorresponds to (11).</figcaption></figure>
## 4 Seasonal patterns in order splitting
We now revisit our results to study their variation across time. Figure 6 shows the measured value of \(N_{\mathrm{eff}}\) in monthly windows. One can see that order correlations intensify at the turn of the year, while at the same time credit spreads climb to a local maximum. Liquidity itself remains roughly constant. In figure 7 we offer a more detailed view of the effect, by showing the cumulative autocorrelation \(\sum_{\ell^{\prime}=0}^{\ell}C(\ell^{\prime})\) for each month separately (averaged over all products). We do not see much structure in periods of low correlation, while in periods of high correlation \(C\) remains strongly positive up to 10–20 trades. Note that the misclassification variable \(q_{t}\) is reasonably stationary, so this seasonality is too strong to result from an incorrect classification of trades.
<figure><img src="content_image/1609.04620/Neff_all.png"><figcaption>Figure 6: The points in color show the effective number of correlated tradesfor each product, measured in 1-month periods. The dashed black linerepresents the credit spread shifted and normalized such that the data spansthe interval [0, 1], this is an average over the four products.</figcaption></figure>
<figure><img src="content_image/1609.04620/Neff_cumsum.png"><figcaption>Figure 7: The cumulative trade sign autocorrelation ∑ℓℓ′=0C(ℓ′), averagedacross products. Each curve corresponds to a 1-month period.</figcaption></figure>
An explanation could be given in the context of the ”hot-potato” theory of Lyons [1997], advocated for credit markets in Shachar [2012]. Clients are expected to trade the full required size in a single deal, so further orders (and hence \(N_{\mathrm{eff}}>1\)) could come from the subsequent inter-dealer exchange of risk. In periods of difficult markets, liquidity becomes ”recycled”, as it takes longer for a position to find a final counterparty to warehouse the risk. This, however, does not explain the increase of \({\cal R}\) shown in figure 8. The response is amplified in proportion to the correlations, which means that these additional trades have full impact and likely cause a net variation of dealer inventory [Shachar, 2012]. Hence this is more likely the signature of increased real order splitting or herding among clients, in the spirit of recent papers on transparent markets [Bouchaud et al., 2009].
<figure><img src="content_image/1609.04620/monthly_Reff.png"><figcaption>Figure 8: The points in show the response function R after 30 trades for theeach product, measured in 1-month periods.</figcaption></figure>
## 5 Comparison of on-SEF and off-SEF trades
Loon and Zhong [2016] compare various forms of trading, and argue that in the transitory period after the enactment of Dodd-Frank, increased market transparency has led to lower trading costs and price impact. Their data dates from 2013, when uncleared and off-SEF trading were still commonplace. In our more recent dataset only \(11\%\) of all trades are off-SEF, and in terms of transparency we no longer expect much difference. More importantly though, on-SEF it is mandatory to have _at least_ three competing brokers when requesting a price, whereas on electronic platforms for off-SEF this is _at most_ three brokers. It is common lore among traders that increased competition might reduce instantaneous costs, but as Hendershott and Madhavan [2015] also show, it leads to higher information leakage, and thus more impact.
It is straightforward to extend (1) to a case where trades are classified into discrete categories \(\pi\) [Eisler et al., 2011], each of which has its own propagator \(G_{\pi}\). If each trade \(t^{\prime}\) falls into category \(\pi_{t^{\prime}}\), then
\[m_{t}=\sum_{t^{\prime}<t}\left[G_{\mathrm{\pi_{t^{\prime}}}}(t-t^{\prime}) \epsilon_{t^{\prime}}+\eta_{t^{\prime}}\right]+m_{-\infty}.\] (15)
We use this technique to separate on-SEF and off-SEF execution; the naive impact kernels and the corresponding theoretical bounds are shown in figure 9. Indeed, despite the high noise level, we find that off-SEF trades have significantly lower impact than on-SEF ones. This is not a result of the orders themselves being smaller: the mean size and the shape of the distribution are nearly identical, see figure 10. We rather see this as support for the theory of Hendershott and Madhavan [2015] that price impact grows with the number of dealers involved.
<figure><img src="content_image/1609.04620/on_off_sef.png"><figcaption>Figure 9: Naive propagators for on-SEF and off-SEF trades. The shaded areascorrespond to the theoretical bounds derived from the multi-propagatorequivalents of (6) and (7), which are only expected to be valid for large ℓ.</figcaption></figure>
<figure><img src="content_image/1609.04620/trade_distribution.png"><figcaption>Figure 10: The distribution of order sizes on-SEF and off-SEF, close to anexponential. Both are normalized by their (nearly identical) respective means,which are shown in the legend.</figcaption></figure>
## 6 Conclusion
At first sight, the microstructure of the OTC credit index market seems different from that of equity and futures markets, as it is centered around dealers, without a central order book. However, we find that from the point of view of order flow and price impact, the differences are only quantitative. Client orders are much less split, and as a consequence the impact of an isolated order, as expressed by \(G\), reaches a permanent plateau. The numerical value of the impact, after correcting for the imperfect identification of order signs, is of the same order of magnitude as the bid-ask spreads in our daily trading experience. This is in line with what is expected based on theoretical arguments about the break-even costs of market making Wyart et al. [2008]. The propagator takes \(5-10\) trades, or more than \(15\) minutes in real time, to reach its final level. Because the market is very fragmented, it takes this long for the effect of a trade to become fully incorporated in the price.
Qualitatively the behavior is in line with what was observed for other, more frequently studied products. This finding gives further support to the argument that price impact is a universal phenomenon, and it behaves similarly in classical markets and more “exotic” ones such as Bitcoin Donier and Bonart [2015], options Tóth et al. [2016] and now credit.
## Acknowledgments
The authors thank Panos Aliferis, Iacopo Mastromatteo and Bence Tóth for their ideas and critical input.
## References
* Almgren et al. [2005] R. Almgren, C. Thum, E. Hauptmann, and H. Li. Direct estimation of equity market impact. _Risk_, 18(7):58–62, 2005.
* Bouchaud et al. [2004] J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart. Fluctuations and response in financial markets: the subtle nature of “random” price changes. _Quantitative Finance_, 4(2):176–190, 2004.
* Bouchaud et al. [2006] J.-P. Bouchaud, J. Kockelkoren, and M. Potters. Random walks, liquidity molasses and critical response in financial markets. _Quantitative Finance_, 6(02):115–123, 2006.
* Bouchaud et al. [2009] J. P. Bouchaud, J. D. Farmer, and F. Lillo. How markets slowly digest changes in supply and demand. In T. Hens and K. R. Schenk-Hoppé, editors, _Handbook of Financial Markets: Dynamics and Evolution_, pages 57–160. North-Holland, Amsterdam, 2009.
* Brokmann et al. [2015] X. Brokmann, E. Serie, J. Kockelkoren, and J.-P. Bouchaud. Slow decay of impact in equity markets. _Market Microstructure and Liquidity_, 1(02):1550007, 2015.
* Donier and Bonart [2015] J. Donier and J. Bonart. A million metaorder analysis of market impact on the Bitcoin. _Market Microstructure and Liquidity_, 1(02):1550008, 2015.
* Donier et al. [2015] J. Donier, J. Bonart, I. Mastromatteo, and J.-P. Bouchaud. A fully consistent, minimal model for non-linear market impact. _Quantitative Finance_, 15(7):1109–1121, 2015.
* Eisler et al. [2011] Z. Eisler, J.-P. Bouchaud, and J. Kockelkoren. Models for the impact of all order book events. _Available at SSRN 1888105_, 2011.
* Gehde-Trapp et al. [2015] M. Gehde-Trapp, Y. Gündüz, and J. Nasev. The liquidity premium in CDS transaction prices: Do frictions matter? _Journal of Banking & Finance_, 61:184–205, 2015.
* Hasbrouck [1991] J. Hasbrouck. Measuring the information content of stock trades. _The Journal of Finance_, 46(1):179–207, 1991.
* Hendershott and Madhavan [2015] T. Hendershott and A. Madhavan. Click or call? auction versus search in the over-the-counter market. _The Journal of Finance_, 70(1):419–447, 2015.
* Loon and Zhong [2016] Y. C. Loon and Z. K. Zhong. Does Dodd-Frank affect OTC transaction costs and liquidity? evidence from real-time CDS trade reports. _Journal of Financial Economics_, 119(3):645–672, 2016.
* Lyons [1997] R. K. Lyons. A simultaneous trade model of the foreign exchange hot potato. _Journal of International Economics_, 42(3):275–298, 1997.
* Moro et al. [2009] E. Moro, J. Vicente, L. G. Moyano, A. Gerig, J. D. Farmer, G. Vaglica, F. Lillo, and R. N. Mantegna. Market impact and trading profile of hidden orders in stock markets. _Physical Review E_, 80(6):066102, 2009.
* Oehmke and Zawadowski [2014] M. Oehmke and A. Zawadowski. The anatomy of the CDS market. _Available at SSRN 2023108_, 2014.
* Osler et al. [2016] C. Osler, G. Bjonnes, N. Kathitziotis, et al. Bid-ask spreads in OTC markets. Technical report, 2016.
* Shachar [2012] O. Shachar. Exposing the exposed: Intermediation capacity in the credit default swap market. _Federal Reserve Bank of New York Working Paper_, 2012.
* Tóth et al. [2011] B. Tóth, Y. Lemperiere, C. Deremble, J. De Lataillade, J. Kockelkoren, and J.-P. Bouchaud. Anomalous price impact and the critical nature of liquidity in financial markets. _Physical Review X_, 1(2):021006, 2011.
* Tóth et al. [2016] B. Tóth, Z. Eisler, and J.-P. Bouchaud. The square-root impact law also holds for option markets. _arXiv:1602.03043_, 2016.
* Weber and Rosenow [2005] P. Weber and B. Rosenow. Order book approach to price impact. _Quantitative Finance_, 5(4):357–364, 2005.
* Wyart et al. [2008] M. Wyart, J.-P. Bouchaud, J. Kockelkoren, M. Potters, and M. Vettorazzo. Relation between bid–ask spread, impact and volatility in order-driven markets. _Quantitative Finance_, 8(1):41–57, 2008.
|
1909.12973 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 48479,
"num_imgs": 0,
"llama3_tokens_count": 14156
} | [] | # Evolution of Vehicle Network on a Highway
Gleb Dubosarskii, Serguei Primak, and Xianbin Wang
G. Dubosarskii, S. Primak and X. Wang are with the Department of Electrical and Computer Engineering, Western University, London, Ontario, Canada, N6A 5B9 (e-mail: gdubosar@uwo.ca; slprimak@uwo.ca; xianbin.wang@uwo.ca). Manuscript received November 30, 2018; revised April 1, 2019.
###### Abstract
One of the challenges in the investigation of vehicular networks is predicting the network state with respect to both short-term and long-term evolutionary changes. This paper analyzes a case in which vehicles are located on a straight road, and the connectivity state between two consecutive cars is determined by a two-state Markov chain model. The transition probabilities of the considered model are explicitly expressed in terms of known parameters of the network using the Wang-Moayeri model. Within the presented model, the network evolution is described in terms of determinative parameters, such as the average link duration, the average cluster lifetime, and the probability of cluster existence between two fixed moments of time. In support of the theoretically obtained probability distributions, the results of numerical simulations are provided.
Vehicular network, clustering, network evolution, link duration.
## I Introduction
Vehicular Ad hoc Networks (VANETs) have become one of the frontier topics of research [1]–[19] over the last decade due to the anticipated mass deployment of self-driving vehicles. A VANET consists of a set of fast-moving vehicles equipped with sensing, communication and infotainment systems. This turns the neighboring connected vehicles into a moving wireless network that enables vehicles to connect with each other and share safety and entertainment related content. Self-driving vehicles are expected to significantly reduce the number of road accidents and traffic congestion, improve road safety, and further enable intelligent transportation.
Vehicle networking is a relatively new field that allows for the application of emerging technologies such as machine learning, which is widely used in many areas from image processing to financial data analysis. Based on extensive statistics, a neural network is capable of predicting and classifying data, and therefore it is a powerful tool for solving a variety of different problems. Machine learning techniques have applications in the optimization of information transmission between vehicles [1], [2], improving transportation safety [3], and reducing transmission delays [4]. The applications of machine learning in the aforementioned areas, as well as in network congestion control, wireless resource management, and load balancing, are discussed in a comprehensive survey [5]. It includes not only classical deep learning algorithms used for predicting the system's state from available data, but also modern developments in the field of reinforcement learning, which make it possible to find strategies leading to long-term rewards in cases where a problem can be reformulated as a game. Reinforcement learning algorithms demonstrate their superiority over previous approaches in resource management and resource allocation tasks, and allow for a reduction in computational complexity.
In recent years, significant progress has been made in the area of MAC protocols for vehicular networks [6]–[11]. Due to the dynamic topology of the network, the development of robust and efficient transmission of information in Vehicle-to-Vehicle (V2V) and Vehicle-to-RSU (V2R) scenarios becomes highly significant. One important issue here is avoiding collisions, where more than one node transmits in the same time interval, because resending the same packets causes delays, which are not acceptable for delay-sensitive applications. To overcome this problem, near-collision-free MAC protocols that adapt to different traffic flow densities have been proposed.
MAC protocols are intrinsically complicated, which makes them almost impossible to analyze analytically from a statistical point of view. For practical applications, it is important to estimate the average cluster size of the network and the probability of multihop connectivity between two vehicles. To address these challenges, in the case of a straight road (see articles [12]–[16]), simplified models are proposed, making it possible to derive explicit expressions for the aforementioned fundamental network characteristics and the probability of full connectivity. In [17], the authors consider a more complicated case, where vehicle networks are moving along two perpendicular roads towards an intersection. They derive formulas for the outage probability and the transmission probability, assuming that the intervehicle distance has a Poisson distribution.
However, these studies are limited to a single moment in time, while it is important to investigate how connectivity changes over time. In particular, it is important to know what the average link duration is. In the articles [18] and [19], explicit formulas for the link duration are obtained under different assumptions. In [18], the average link duration is investigated assuming a two-way road, and that vehicles only transmit messages to other vehicles moving in the opposite direction. In [19], the average link duration is calculated in the case of one-way traffic under the assumption that speed increases linearly until it reaches the speed limit and then remains constant.
Network evolution is a stochastic process. Therefore, we can predict the state of the system at the next moment of time only with a certain probability, which explains the application of probability theory to the analysis of network evolution. The channel interruption probability is small but positive, leading to an increase in the probability of disconnection over time. This article is aimed at describing the evolution of the network in terms of the probability of maintaining a connection between two consecutive vehicles and between the vehicles forming a cluster, and at investigating how rapidly this probability decreases. We derive explicit expressions for the probabilities of link duration, cluster existence over a certain amount of time, and the probability of cluster existence between two fixed moments of time. The obtained results related to cluster evolution are new, and they provide insight into dynamical changes in the network.
It is convenient to investigate the connectivity properties of the system not at every moment, but only on a discrete uniform time mesh. This means that the difference between two consecutive moments of time has a fixed duration \(\Delta t\). Let \(t\) be the current moment of time. The Markov model determines the probability of the event that at the moment of time \(t+\Delta t\) the consecutive vehicles can establish a connection, based on the communication state at the moment \(t\). This probability is explicitly expressed in terms of macro parameters of the system [20], [21]. Using the state-transition matrix of the Markov process, it is possible to express the desired connection probabilities in terms of the matrix coefficients, thereby allowing the calculation of the required connectivity characteristics.
We also consider a stability characteristic of the connection between two consecutive cars, the \(\omega\)-stable connection. This type of connection guarantees that the time between consecutive connections does not exceed \(\omega\Delta t\) (\(\Delta t\) is a timestep). In other words, this weakened condition means that at some moments of time the vehicles may fail to establish a connection, but they connect at least once within each time interval of length \(\omega\Delta t\). This type of communication is closer to actual operating conditions, where a connection may disappear for short periods of time, but it is important to ensure that it is regularly reestablished. We derive recurrent equations for calculating the probabilities and verify the results using simulations.
The article is organized as follows. Section II describes the parameters and the structure of the considered network model. Section III investigates the statistics of node-to-node connectivity between cars assuming a fading channel between the nodes. The detailed mathematical derivations are summarized in Section V, and a brief summary is given in Section IV. The derived expressions are verified through a number of simulations in Sections VI and VII.
## II Network model
We consider the following probabilistic model that describes the evolution of the network over time. It is assumed that there are two states of connection, _Good_ and _Bad_. In the _Good_ state, neighboring cars can establish a connection, while the _Bad_ state corresponds to the case where neighboring cars cannot connect with each other. We suppose that initially the system is in the equilibrium state; the initial probabilities of connection are calculated in Section V. We consider the system evolution process with timestep \(\Delta t\) (introduced at the end of the previous section). Let \(p\) be the probability that connected neighbouring cars cannot establish a connection at the next moment of time; in other words, \(p\) is the probability that at the next moment of time the system moves from the _Good_ state to the _Bad_ state. By analogy, let \(q\) be the probability that at the next moment the system moves from the _Bad_ state to the _Good_ state. Therefore, connectivity between two consecutive cars can be described by the two-state Markov model depicted below (letters **G** and **B** denote the _Good_ and the _Bad_ states, respectively). The explicit values of the parameters \(p\) and \(q\) are given in the next section.
Fig 1. Markov diagram of the connectivity process with the _Good_ and the _Bad_ states
## III Probabilistic connectivity model
Everywhere in the article it is assumed that all \(n\) vehicles move along a one-way road with the same constant speed \(v\). Each car can establish a connection only with its closest front and back neighbours. We assume a Rayleigh fading channel between every pair of cars. Consequently, the amplitude of the received signal is exponentially distributed with pdf
\[p(x)=\frac{1}{\lambda}e^{-x/\lambda},\] (1)
where \(\lambda\) represents the average SNR (signal-to-noise ratio) over the fading channel. We use the Wang-Moayeri model [21] to determine the values of the parameters \(p\) and \(q\). According to this model, we determine the state of the system by comparing the signal amplitude \(A\) with the threshold \(\overline{A}\). If \(A\geq\overline{A}\), then we assume that the system is in the _Good_ state; otherwise it is in the _Bad_ state. We denote by \(p_{B}\) the probability of the _Bad_ state; therefore,
\[p_{B}=\int_{0}^{\overline{A}}p(x)dx=\int_{0}^{\overline{A}}\frac{1}{\lambda}e^ {-x/\lambda}dx=1-e^{-\overline{A}/\lambda}.\] (2)
The probability of the _Good_ state is given by the formula
\[p_{G}=1-p_{B}=e^{-\overline{A}/\lambda}.\] (3)
Assuming the Clarke model, the following formula for the level crossing rate is obtained in [21]:
\[LCR(x)=\sqrt{\frac{2\pi x}{\lambda}}f_{D}e^{-x/\lambda},\] (4)
where \(f_{D}\) is the Doppler shift. The following two explicit formulas for the parameters \(p\) and \(q\) (introduced in the previous section) are also derived:
\[p=\frac{LCR(\overline{A})}{Rp_{G}}=\frac{\sqrt{\frac{2\pi\overline{A}}{\lambda }}f_{D}}{R},\] (5)
\[q=\frac{LCR(\overline{A})}{Rp_{B}}=\frac{\sqrt{\frac{2\pi\overline{A}}{\lambda }}f_{D}e^{-\overline{A}/\lambda}}{R(1-e^{-\overline{A}/\lambda})},\] (6)
where \(R\) is the symbol rate. Taking into account (5), (6), and the following formula for the maximum Doppler shift ¹:
\[f_{D}=\frac{vf_{c}}{c},\] (7)
where \(f_{c}\) and \(c\) are transmitted frequency and velocity of light respectively, we obtain
\[p=\frac{LCR(\overline{A})}{Rp_{G}}=\frac{\sqrt{\frac{2\pi\overline{A}}{\lambda }}vf_{c}}{Rc},\] (8)
\[q=\frac{LCR(\overline{A})}{Rp_{B}}=\frac{\sqrt{\frac{2\pi\overline{A}}{\lambda }}vf_{c}e^{-\overline{A}/\lambda}}{Rc(1-e^{-\overline{A}/\lambda})}.\] (9)
The timestep \(\Delta t\) (see the description of the timestep in Section I) satisfies the Nyquist–Shannon criterion
\[\Delta t\leq\frac{1}{2f_{D}}.\] (10)
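For illustration, the short sketch below evaluates formulas (2)–(10) numerically; the default parameter values are the ones used later in Section VI, and the function name is ours.

```
import math

def markov_parameters(v_kmh, f_c=3.9e9, R=1e5, snr_ratio=0.1, c=3e8):
    """Transition probabilities of the two-state Markov link model.
    v_kmh: vehicle speed in km/h; f_c: carrier frequency [Hz];
    R: symbol rate [symbols/s]; snr_ratio: threshold ratio A_bar/lambda."""
    v = v_kmh / 3.6                        # speed in m/s
    f_D = v * f_c / c                      # maximum Doppler shift, eq. (7)
    p_B = 1.0 - math.exp(-snr_ratio)       # probability of the Bad state, eq. (2)
    p_G = math.exp(-snr_ratio)             # probability of the Good state, eq. (3)
    lcr_over_pG = math.sqrt(2.0 * math.pi * snr_ratio) * f_D   # LCR(A_bar)/p_G, cf. (4)-(5)
    p = lcr_over_pG / R                    # Good -> Bad transition, eq. (8)
    q = lcr_over_pG * p_G / (R * p_B)      # Bad -> Good transition, eq. (9)
    dt_max = 1.0 / (2.0 * f_D)             # Nyquist bound on the timestep, eq. (10)
    return p, q, dt_max

# For v = 30 km/h this gives p and q close to the values 8.5e-4 and 8.1e-3 quoted in Section VI.
```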
## IV Main results
Let \(p_{G}(m)\) be the probability that at the moment \(m\Delta t\) two fixed consecutive cars can establish a connection. We assume that the system is initially in the equilibrium state (for more details see Section V). As shown in that section, the probability \(p_{G}(m)\) of a successful connection between two consecutive cars at the moment of time \(m\Delta t\) satisfies the following formula:
\[p_{G}(m)=\frac{q}{p+q},\] (11)
where parameters \(p\) and \(q\) are calculated by the formulas (8) and (9). Thus, probability \(p_{G}(m)\) does not depend on \(m\), and, therefore, we denote it by \(p_{G}\).
Let \(P_{twocars}(m)\) denote the probability that a connection between two cars, once established, has duration \(m\Delta t\). The probability \(P_{twocars}(m)\) is given by the formula
\[P_{twocars}(m)=(1-p)^{m-1}p.\] (12)
The average link duration \(\overline{T}_{twocars}\) and its variance \(\sigma_{twocars}^{2}\) of the distribution (12) are determined by the formulas
\[\overline{T}_{twocars}=\frac{\Delta t}{p},\] (13)
\[\sigma_{twocars}^{2}=\frac{(1-p)\Delta t^{2}}{p^{2}}.\] (14)
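As a minimal numerical illustration of (12)–(14), one may use the following helper functions; the parameter values in the final comment are taken from Section VI.

```
def link_duration_pmf(m, p):
    """P_twocars(m): probability that a link, once established, lasts exactly m*dt, eq. (12)."""
    return (1.0 - p) ** (m - 1) * p

def link_duration_moments(p, dt):
    """Average link duration and its variance, eqs. (13)-(14)."""
    mean = dt / p
    var = (1.0 - p) * dt ** 2 / p ** 2
    return mean, var

# Example: p = 8.5e-4 and dt = 9e-4 s give an average link duration of about 1.06 s.
```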
**Definition.** A _cluster_ is a group of vehicles such that any two cars in the group can communicate with each other, possibly through other vehicles of the cluster.
In our model, each vehicle connects only to the closest forward and backward cars. Therefore, in the framework of our model, clusters are formed by several consecutive cars. In Fig. 2 below, five cars are depicted that form two clusters. The first cluster is formed by cars 1, 2, 3 and the second cluster is composed of cars 4 and 5. Significantly, cars 1 and 3 cannot communicate directly, and the only communication path for them is through car 2, while vehicles 3 and 4 are disconnected. We prove that the connectivity probability is given by the formula \(q/(p+q)\). Therefore, the connectivity state between two consecutive vehicles depends on the parameters listed in Section III. It is worth mentioning that this probability is close to \(1\) (stable connection) if \(p\approx 0\) and \(q\approx 1\).
We derive the probability \(P_{clust}(m)\) that the cluster formed by the cars \(k,k+1,\ldots,k+s\) once being formed exists exactly time \(m\Delta t\). Let us introduce parameter \(\gamma\) by the formula
\[\gamma=\begin{cases}2,&\mbox{if }1<k\mbox{ and }k+s<n,\\ 1,&\mbox{if }k=1\mbox{ or }k+s=n,\mbox{ but not both, }\\ 0,&\mbox{if }k=1\mbox{ and }k+s=n.\end{cases}\] (15)
The probability \(P_{clust}(m)\) satisfies the formula
\[P_{clust}(m)=(1-p)^{s(m-1)}(1-q)^{\gamma(m-1)}(1-(1-p)^{s}(1-q)^{\gamma}).\] (16)
Fig 2. Network of 5 cars that form two clusters.
The average cluster lifetime \(\overline{T}_{clust}\) and the variance \(\sigma_{clust}^{2}\) of the distribution (16) can be obtained by the formulas
\[\overline{T}_{clust}=\frac{\Delta t}{1-(1-p)^{s}(1-q)^{\gamma}}.\] (17)
\[\sigma_{clust}^{2}=\frac{(1-q)^{\gamma}(1-p)^{s}\Delta t^{2}}{(1-(1-q)^{\gamma }(1-p)^{s})^{2}},\] (18)
where \(\gamma\) is determined in (15).
We study not only the cluster lifetime, but also the probability that a cluster exists between two given moments of time. More precisely, we derive the following formula for the probability \(P_{clust}(m,l)\) that the cluster consisting of cars \(k,k+1,\ldots,k+s\) is formed at the moment of time \(m\Delta t\), exists until \(l\Delta t\) (\(l\geq m\)), and does not exist at the moment \((l+1)\Delta t\):
\[P_{clust}(m,l)=\left(p_{G}^{s}(1-p_{G})^{\gamma}-(1-p)^{s}p_{G}^{s}(1-p_{G})^{\gamma}(1-q)^{\gamma}\right)(1-p)^{s(l-m)}(1-q)^{\gamma(l-m)}\left(1-(1-p)^{s}(1-q)^{\gamma}\right).\] (19)
where \(p_{G}\) and \(\gamma\) are given by the formulas (11) and (15).
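A small sketch evaluating (16) and (19) directly; the boundary factor \(\gamma\) is handled as in (15), and the helper names are ours.

```
def gamma_factor(k, s, n):
    """Number of boundary neighbours of the cluster k,...,k+s, eq. (15)."""
    return int(k > 1) + int(k + s < n)

def cluster_lifetime_pmf(m, p, q, s, gamma):
    """P_clust(m): the cluster lives exactly m*dt once formed, eq. (16)."""
    stay = (1.0 - p) ** s * (1.0 - q) ** gamma
    return stay ** (m - 1) * (1.0 - stay)

def cluster_between_pmf(m, l, p, q, s, gamma):
    """P_clust(m, l): the cluster is formed at m*dt, survives until l*dt,
    and ceases to exist at (l+1)*dt, eq. (19)."""
    p_G = q / (p + q)
    stay = (1.0 - p) ** s * (1.0 - q) ** gamma
    formed = p_G ** s * (1.0 - p_G) ** gamma * (1.0 - stay)
    return formed * stay ** (l - m) * (1.0 - stay)
```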
Finally, we analyse the property of \(\omega\)-stability of a connection, defined as the ability of two vehicles to establish a connection within every time interval of length \(\omega\Delta t\). More formally, we assume that a connection is \(\omega\)-stable between the moments \(m\Delta t\) and \(l\Delta t\) if there is at least one successful connection not later than the moment \((m+\omega)\Delta t\), the time difference between every two consecutive connections does not exceed \(\omega\Delta t\), and the last connection is established at the moment \((l-\omega)\Delta t\) or later. At the end of Section V, we derive an algorithm for finding the probability that the connection between two cars is \(\omega\)-stable on a pre-selected interval. The algorithm has linear complexity and recurrently calculates the probability under the assumption of known parameters \(p\) and \(q\).
## V Mathematical derivations
### _Probability of two cars being connected at the moment \(m\Delta t\)_
Let \(p_{G}(m)\) denote the probability that there is a connection between two fixed consecutive cars at the moment of time \(m\Delta t\), and \(p_{B}(m)\) the probability that there is no connection at \(m\Delta t\). Let us find the limiting distributions \(p_{G}(\infty)=\lim_{m\to\infty}p_{G}(m)\) and \(p_{B}(\infty)=\lim_{m\to\infty}p_{B}(m)\). According to the Markov model (see Fig. 1), we have
\[p_{G}(m)=p_{G}(m-1)(1-p)+p_{B}(m-1)q,\] (20)
\[p_{G}(m)+p_{B}(m)=1.\] (21)
When \(m\to\infty\), the equations (20) and (21) take the following form
\[p_{G}(\infty)=p_{G}(\infty)(1-p)+p_{B}(\infty)q,\] (22)
\[p_{G}(\infty)+p_{B}(\infty)=1.\] (23)
Solving system of linear equations (22) and (23), we obtain
\[p_{G}(\infty)=\frac{q}{p+q},\] (24)
\[p_{B}(\infty)=\frac{p}{p+q}.\] (25)
We assume that at the moment when we observe the system, it has already reached its limiting distribution; therefore, the probabilities at every moment \(m\Delta t\) are equal to the limiting probabilities:
\[p_{G}(m)=\frac{q}{p+q},\] (26)
\[p_{B}(m)=\frac{p}{p+q}.\] (27)
Thus, we denote them \(p_{G}\) and \(p_{B}\).
### _Distribution of link duration_
Let us consider two consecutive cars. We find the probability \(P_{twocars}(m)\) of the event that the connection lifetime between these cars is exactly \(m\Delta t\). In other words, if the connection between the cars is first established at the moment of time \(\Delta t\), we calculate the probability that the cars keep the connection up to the time instant \(m\Delta t\), and at the time \((m+1)\Delta t\) the connection cannot be established. This probability is given by the formula
\[P_{twocars}(m)=(1-p)^{m-1}p,\] (28)
since at the time \(\Delta t\) the system is in the _Good_ state, and the probability that it remains in the _Good_ state at the moments \(2\Delta t\), \(3\Delta t,\ldots m\Delta t\) is \((1-p)^{m-1}\) (see Markov diagram on Fig. 1), and the probability that the system changes the state from _Good_ to _Bad_ at the time instant \((m+1)\Delta t\) is \(p\).
Using relation (28), we can calculate the average link duration \(\overline{T}_{twocars}\) and variance \(\sigma_{twocars}^{2}\) of the distribution by the formulas
\[\overline{T}_{twocars}=\sum_{l=1}^{\infty}l\Delta tP_{twocars}(l)=\sum_{l=1}^{ \infty}l\Delta t(1-p)^{l-1}p=\frac{\Delta t}{p},\] (29)
\[\sigma_{twocars}^{2}=\sum_{l=1}^{\infty}(l\Delta t)^{2}P_{twocars }(l)-\overline{T}_{twocars}^{2}\\ =\sum_{l=1}^{\infty}l^{2}\Delta t^{2}(1-p)^{l-1}p-\frac{\Delta t^ {2}}{p^{2}}\\ =\Big{(}\frac{2-p}{p^{2}}-\frac{1}{p^{2}}\Big{)}\Delta t^{2}= \frac{(1-p)\Delta t^{2}}{p^{2}}.\] (30)
### _Distribution of a cluster lifetime_
In this section we find the probability \(P_{clust}(m)\) that the existence duration of a cluster composed of cars \(k\), \(k+1,\ldots,k+s\) equals exactly \(m\Delta t\). We may assume that the cluster exists from the moment \(\Delta t\) until the moment \(m\Delta t\) and does not exist at the moment \((m+1)\Delta t\). The probability \(P_{clust}(m)\) satisfies the formula
\[P_{clust}(m)=(1-p)^{s(m-1)}(1-q)^{\gamma(m-1)}(1-(1-p)^{s}(1-q)^{\gamma}),\] (31)
where \(\gamma\) is defined by (15).
Let us derive the equation (31). We consider only the case where \(k>1\) and \(k+s<n\), because in other cases derivations are similar. The probability \(P_{clust}(m)\) can be calculated by the formula
\[P_{clust}(m)=P_{1}P_{2},\] (32)
where \(P_{1}\) is a probability that the cluster exists at the moments \(2\Delta t,\ldots m\Delta t\) under the condition that it exists at the moment \(\Delta t\), and \(P_{2}\) is a probability that the cluster does not exist at the moment \((m+1)\Delta t\) under the assumption that it exists at the moment \(m\Delta t\). First, we calculate the probability \(P_{1}\). The fact that the cluster \(k,k+1,\ldots,k+s\) exists at the moment \(\Delta t\) simply means that at this moment there is a connection between every pair of cars \((k,k+1)\), \({(k+1,k+2)}\), \(\ldots\), \({(k+s-1,k+s)}\) and there is no connection between pairs of cars \((k,k-1)\) and \((k+s,k+s+1)\). The probability that all pairs of cars \((k,k+1)\), \({(k+1,k+2)}\), \(\ldots\), \({(k+s-1,k+s)}\) preserve a connection at the moment \(2\Delta t\) is \((1-p)^{s}\) (see Markov diagram on Fig. 1). The probability that the pairs of cars \((k,k-1)\) and \((k+s,k+s+1)\) continue to be disconnected at the moment \(2\Delta t\) is \((1-q)^{2}\). Thus, the probability that the cluster exists at the moment \(2\Delta t\) is \((1-p)^{s}(1-q)^{2}\). Continuing these derivations until the moment \(m\Delta t\), we find the probability \(P_{1}\) given as follows:
\[P_{1}=(1-p)^{s(m-1)}(1-q)^{2(m-1)},\] (33)
since there are \(m-1\) transitions between steps \(\Delta t\) and \(m\Delta t\) and at each step we need to multiply the answer by the transitional probability \((1-p)^{s}(1-q)^{2}\). Finally, we derive the probability \(P_{2}\). The cluster should cease to exist at the moment \((m+1)\Delta t\). The probability that the cluster continues to exist at the next moment is \((1-p)^{s}(1-q)^{2}\), therefore, the probability that it does not exist is
\[P_{2}=1-(1-p)^{s}(1-q)^{2}.\] (34)
Formulas (32)–(34) prove the equation (31).
The average time of cluster existence \(\overline{T}_{clust}\) and its variance \(\sigma_{clust}^{2}\) are determined by the following formulas:
\[\overline{T}_{clust}=\sum_{l=1}^{\infty}l\Delta tP_{clust}(l)\\ =\sum_{l=1}^{\infty}l\Delta t(1-p)^{s(l-1)}(1-q)^{\gamma(l-1)}(1- (1-p)^{s}(1-q)^{\gamma})\\ =\frac{\Delta t}{1-(1-p)^{s}(1-q)^{\gamma}}.\] (35)
\[\sigma_{clust}^{2}=\sum_{l=1}^{\infty}(l\Delta t)^{2}P_{clust}(l) -\overline{T}_{clust}^{2}\\ =\frac{(1+(1-p)^{s}(1-q)^{\gamma})\Delta t^{2}}{(1-(1-p)^{s}(1-q) ^{\gamma})^{2}}-\Big{(}\frac{\Delta t}{1-(1-p)^{s}(1-q)^{\gamma}}\Big{)}^{2}\\ =\frac{(1-q)^{\gamma}(1-p)^{s}\Delta t^{2}}{(1-(1-q)^{\gamma}(1-p )^{s})^{2}}.\] (36)
### _Probability of cluster existence between fixed moments of time_
In this section, we derive the probability of cluster existence between two particular moments of time. We consider only the case where \(k>1\) and \(k+s<n\), because in the other cases the derivations are similar. However, before solving the problem, we find the probability \(P_{clustbeg}(m)\) that at the time \(m\Delta t\) the cluster consisting of cars \(k,k+1,\ldots,k+s\) is formed, and it does not exist at the moment \((m-1)\Delta t\). Let us denote the probability of the event that at the moment \(m\Delta t\) there is a cluster consisting of cars \(k,k+1\), \(\ldots,k+s\) by \(P_{1}\), and the probability that the cluster \(k,k+1\), \(\ldots,k+s\) exists at the moments \((m-1)\Delta t\) and \(m\Delta t\) by \(P_{2}\). Therefore, the probability \(P_{clustbeg}(m)\) is given as follows:
\[P_{clustbeg}(m)=P_{1}-P_{2}.\] (37)
Next, we calculate each of the probabilities \(P_{1}\) and \(P_{2}\) separately. Firstly, we establish that
\[P_{1}=p_{G}^{s}(1-p_{G})^{2}.\] (38)
Indeed, if the cluster \(k,k+1\), \(\ldots,k+s\) exists at the moment \(m\Delta t\) then the pairs of cars \((k,k+1)\), \((k+1,k+2),\ldots,(k+s-1,k+s)\) are connected (each with the probability \(p_{G}\)). Thus, the probability of this event is \(p_{G}^{s}\), since there are \(s\) such pairs. Additionally, there is no connection between the pairs of cars \((k,k-1)\) and \((k+s\), \(k+s+1)\) (each of disconnections occurs with the probability \(1-p_{G}\)). Multiplying the aforementioned probabilities, we get the relation (38). Regarding the probability \(P_{2}\), we prove the formula
\[P_{2}=(1-p)^{s}p_{G}^{s}(1-p_{G})^{2}(1-q)^{2}.\] (39)
The probability \(P_{2}\) can be represented as follows:
\[P_{2}=P_{3}P_{4},\] (40)
where \(P_{3}\) is a probability that the cluster \(k,k+1,\ldots,k+s\) exists at the moment \((m-1)\Delta t\) and \(P_{4}\) is a probability that it continues to exist at the moment of time \(m\Delta t\) under the assumption that it exists at the moment \((m-1)\Delta t\). Repeating derivations for the probability \(P_{1}\) in the case of the time instant \((m-1)\Delta t\), we have
\[P_{3}=P_{1}=p_{G}^{s}(1-p_{G})^{2}.\] (41)
Next, we prove the relation
\[P_{4}=(1-p)^{s}(1-q)^{2}.\] (42)
Suppose, the cluster exists at the moment of time \((m-1)\Delta t\). The probability that a connection between any pair of the consecutive cars of the cluster is preserved at the moment \(m\Delta t\) equals \(1-p\). Consequently, the probability that the connection between all \(s\) pairs of cars is preserved equals \((1-p)^{s}\). The pairs of cars \((k-1,k)\) and \((k+s,k+s+1)\) cannot establish a connection at the moment \((m-1)\Delta t\), the probability that these pairs remain disconnected is \((1-q)^{2}\). Multiplying two probabilities \((1-p)^{s}\) and \((1-q)^{2}\), we derive the formula (42).
From (38)–(42), we conclude that
\[P_{clustbeg}(m)=p_{G}^{s}(1-p_{G})^{2}\\ -(1-p)^{s}p_{G}^{s}(1-p_{G})^{2}(1-q)^{2}.\] (43)
Using the obtained probabilities, we can find the probability \(P_{clust}(m,l)\) that the cluster \(k,k+1,\ldots k+s\) exists between moments of time \(m\Delta t\) and \(l\Delta t\)\((l\geq m)\). In other words, it is the probability that the cluster exists from the time instant \(m\Delta t\) to \(l\Delta t\)\((l\geq m)\), and does not exist at the moments \((m-1)\Delta t\) and \((l+1)\Delta t\). The probability \(P_{clust}(m,l)\) satisfies the formula
\[P_{clust}(m,l)=(p_{G}^{s}(1-p_{G})^{2}\\ -(1-p)^{s}p_{G}^{s}(1-p_{G})^{2}(1-q)^{2})\\ \times(1-p)^{s(l-m)}(1-q)^{2(l-m)}(1-(1-p)^{s}(1-q)^{2}).\] (44)
The probability \(P_{clust}(m,l)\) can be decomposed in the product of three terms as follows:
\[P_{clust}(m,l)=P_{clustbeg}(m)P_{5}P_{6},\] (45)
where \(P_{5}\) is a probability that the cluster exists until the moment \(l\Delta t\), and \(P_{6}\) is a probability that the cluster ceases to exist at the moment \((l+1)\Delta t\) assuming that it exists at the moment \(l\Delta t\). First, we consider the probability \(P_{5}\) and prove the formula
\[P_{5}=(1-p)^{s(l-m)}(1-q)^{2(l-m)}.\] (46)
By analogy with the derivation of (33), we deduce that the probability that the connection established between the pairs of cars \((k,k+1)\), \((k+1,k+2),\ldots,(k+s-1,k+s)\) at the moment of time \(m\Delta t\) is preserved at the time instances \((m+1)\Delta t,\ldots,l\Delta t\) equals \((1-p)^{s(l-m)}\). Following the same logic as in (33), we derive that the probability that the pairs of cars \((k-1,k)\) and \((k+s,k+s+1)\) do not communicate at the moments of time \((m+1)\Delta t,\ldots,l\Delta t\), under the condition that the connection is not established at the moment \(m\Delta t\), equals \((1-q)^{2(l-m)}\). The probability \(P_{5}\) can be calculated as the product of these probabilities and, therefore, (46) is proven. By repeating the steps of the proof of equation (34), we conclude that
\[P_{6}=1-(1-p)^{s}(1-q)^{2}.\] (47)
From the formulas (43), (45)–(47), we derive (44).
### \(\omega\)_-stable connection_
**Definition** A connection between moments of time \(m\) and \(l\) (\(m\leq l\)) is \(\omega\)-stable if the time difference between every two consecutive connections does not exceed time \(\omega\Delta t\). Additionally we assume that there exists at least one successful connection established not later than \((m+\omega)\Delta t\), and the last connection is established at the moment of time \((l-\omega)\Delta t\) or later.
It does not make sense to consider \(\omega\)-stable connection if \(l<m+\omega\), because in this case every connection is \(\omega\)-stable. Therefore, we assume that \(l\geq m+\omega\). Also, we suppose that \(\omega\) is an integer number and \(\omega\geq 2\).
In this section, we find the probability \(P_{\omega}(m,l)\) that the connection between two consecutive cars is \(\omega\)-stable between the moments \(m\Delta t\) and \(l\Delta t\). Let us introduce the function \(h(a,b)\) equal to the probability that at the moment of time \(a\Delta t\) the last connection was established at the time instant \(b\Delta t\), and the function \(g(a)\) equal to the probability that there is no connection on the time interval \([m\Delta t,a\Delta t]\). To derive the function \(g(a)\) explicitly, we should multiply the probability \((1-p_{G})\) that at the moment \(m\Delta t\) there is no connection by the probability \((1-q)^{a-m}\) that at the moments \((m+1)\Delta t,\ldots,a\Delta t\) there is no connection as well. Therefore, the function \(g(a)\) has the form
\[g(a)=\begin{cases}(1-p_{G})(1-q)^{a-m},&\mbox{if }a<m+\omega\\ 0,&\mbox{if }a\geq m+\omega.\end{cases}\] (48)
The following recurrences take place:
\[h(a,b)=0,b<a-\omega,\] (49)
\[h(a,b)=h(a-1,b)(1-q),a-\omega\leq b<a-1,\] (50)
\[h(a,a-1)=h(a-1,a-1)p,\] (51)
\[h(a,a)=h(a-1,a-1)(1-p)+\sum_{r=a-\omega}^{a-2}h(a-1,r)q+g(a-1)q.\] (52)
The formula (49) holds, since for \(b<a-\omega\), the probability \(h(a,b)\) equals zero due to the fact that the time difference between the last connection at the moment of time \(b\Delta t\) and the next connection exceeds \(\omega\Delta t\). The formula (50) holds, because if \(b<a-1\) then at the moment \((a-1)\Delta t\) the system is in the _Bad_ state, and the probability that it remains in this state is \(1-q\). The formula (51) is derived by applying the same logic. Finally, in (52), the recursive formula for the probability that the connection is established at the time instant \(a\Delta t\) is obtained. Due to the \(\omega\)-stability of the connection, the previous time of the connection lies between \((a-\omega)\Delta t\) and \((a-1)\Delta t\). The summand \(h(a-1,a-1)(1-p)\) occurs in the formula (52), since it is the probability that at the moment of time \((a-1)\Delta t\) the system is in the _Good_ state and remains in this state up to the moment of time \(a\Delta t\). The sum \(\sum_{r=a-\omega}^{a-2}h(a-1,r)q\) contains probabilities \(h(a-1,r)\) that the time instant \(r\Delta t\) is the last moment of time when the system is in the _Good_ state. It means that the system is in the _Bad_ state at the moment \((a-1)\Delta t\) and moves to the _Good_ state at the moment \(a\Delta t\). This explains the multiplication of the terms \(h(a-1,r)\) by \(q\).
The probability \(P_{\omega}(m,l)\) can be calculated by the formula
\[P_{\omega}(m,l)=\sum_{r=l-\omega}^{l}h(l,r),\] (53)
because the last moment of time before \(l\Delta t\) when the system is in the _Good_ state, can be only \(l\Delta t,(l-1)\Delta t,\ldots\), \((l-\omega)\Delta t\).
The following formulas hold:
\[h(m,m)=p_{G},\] (54)
\[h(m+1,m+1)=p_{G},\] (55)
where \(p_{G}\) is determined by the formula (24). The equality (54) holds, since the probability that a connection is established at the moment \(m\Delta t\) is exactly \(p_{G}\); the formula (55) is obtained similarly, taking into account that \(\omega\geq 2\).
Under condition of \(a>b\), application of the relations (50) and (51) provides the expression
\[h(a,b)=h(a-1,b)(1-q)=h(a-2,b)(1-q)^{2}=\ldots\\ =h(b+1,b)(1-q)^{a-b-1}=h(b,b)(1-q)^{a-b-1}p.\] (56)
From (52) and (56), we obtain
\[h(a,a)=h(a-1,a-1)(1-p)\\ +\sum_{r=a-\omega}^{a-2}h(a-1,r)q+g(a-1)q\\ =h(a-1,a-1)(1-p)\\ +\sum_{r=a-\omega}^{a-2}h(r,r)(1-q)^{a-r-2}pq+g(a-1)q.\] (57)
By substituting \(a-1\) instead of \(a\) into (57), we derive
\[h(a-1,a-1)\\ =h(a-2,a-2)(1-p)\\ +\sum_{r=a-\omega-1}^{a-3}h(r,r)(1-q)^{a-r-3}pq+g(a-2)q.\] (58)
Multiplying (58) by \((1-q)\), we get the following equality:
\[h(a-1,a-1)(1-q)=h(a-2,a-2)(1-p)(1-q)\\ +\sum_{r=a-\omega-1}^{a-3}h(r,r)(1-q)^{a-r-2}pq+g(a-2)q(1-q).\] (59)
Subtracting (59) from (57) gives us
\[h(a,a)-h(a-1,a-1)(1-q)=h(a-1,a-1)(1-p)\\ -h(a-2,a-2)(1-p)(1-q)+h(a-2,a-2)pq\\ -h(a-\omega-1,a-\omega-1)(1-q)^{\omega-1}pq\\ +g(a-1)q-g(a-2)q(1-q).\] (60)
After simplifying (60), we finally derive
\[h(a,a)=h(a-1,a-1)(2-p-q)\\ -h(a-2,a-2)(1-p-q)\\ -h(a-\omega-1,a-\omega-1)(1-q)^{\omega-1}pq\\ +g(a-1)q-g(a-2)q(1-q).\] (61)
Let us introduce the function \(f(a)\) by the formula
\[f(a)=h(a,a).\] (62)
Expressions (61) and (62) provide the relation
\[f(a)=f(a-1)(2-p-q)-f(a-2)(1-p-q)\\ -f(a-\omega-1)(1-q)^{\omega-1}pq+g(a-1)q-g(a-2)q(1-q).\] (63)
Our next goal is to express the probability \(P_{\omega}(m,l)\) in terms of the function \(f(a)\). Substitution \(a=l+1\) in (52) gives us
\[h(l+1,l+1)=h(l,l)(1-p)+q\sum_{r=l+1-\omega}^{l-1}h(l,r)+g(l)q.\] (64)
Subtracting (53) multiplied by \(q\) from (64) produces the following equality:
\[h(l+1,l+1)-qP_{\omega}(m,l)\\ =h(l,l)(1-p)-h(l,l)q-h(l,l-\omega)q+g(l)q.\] (65)
Applying (56) to (65), we get
\[h(l+1,l+1)-qP_{\omega}(m,l)=h(l,l)(1-p-q)\\ -h(l-\omega,l-\omega)(1-q)^{\omega-1}pq+g(l)q.\] (66)
Therefore, taking into account (62), we derive
\[P_{\omega}(m,l)=\frac{1}{q}\Big{\{}f(l+1)-f(l)(1-p-q)\\ +f(l-\omega)(1-q)^{\omega-1}pq-g(l)q\Big{\}}.\] (67)
From (54) and (55), we obtain
\[f(m)=p_{G},\ f(m+1)=p_{G}.\] (68)
Combining the previous formulas, we derive the algorithm below for calculating the probability \(P_{\omega}(m,l)\). In lines 2, 3, 4 and 7, the two arrays \(f\) and \(g\) are declared and initialized to zero. Lines 5, 6 and 9 follow from the formulas (68) and (48), respectively. Lines 12–14 are derived from (63), and the return value in line 17 is obtained by the formula (67).
```
1:function Probability of \(\omega\)-stable connection(\(m,l,\omega,p,q\))
2: \(f=ARRAY[1..l+1];\)
3: \(g=ARRAY[1..l+1];\)
4: \(f=\mathbf{0};\)
5: \(f(m)=\frac{q}{p+q};\)
6: \(f(m+1)=\frac{q}{p+q};\)
7: \(g=\mathbf{0};\)
8: for \(a=m\) to \(m+\omega-1\) do
9: \(g(a)=\frac{p}{p+q}(1-q)^{a-m};\)
10: end for
11: for \(a=m+2\) to \(l+1\) do
12:
\[f(a)=f(a-1)(2-p-q)\\ -f(a-2)(1-p-q)\\ +g(a-1)q-g(a-2)q(1-q);\]
13: if \(a-\omega-1\geq m\) then
14: \(f(a)=f(a)-f(a-\omega-1)(1-q)^{\omega-1}pq;\)
15: end if
16: end for
17: return
\[\frac{1}{q}\Big{\{}f(l+1)-f(l)(1-p-q)\\ +f(l-\omega)(1-q)^{\omega-1}pq-g(l)q\Big{\}};\]
18:end function
```
Function for calculating the probability \(P_{\omega}(m,l)\)
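The pseudocode above translates directly into Python; the sketch below mirrors the recurrences (48), (63) and (67), keeping the paper's 1-based indexing by allocating one extra array slot. It is meant as an illustration of the algorithm rather than optimized code.

```
import numpy as np

def omega_stable_probability(m, l, omega, p, q):
    """Probability that the link between two consecutive cars is omega-stable
    on the interval [m*dt, l*dt], following recurrences (48), (63) and (67)."""
    assert l >= m + omega and omega >= 2
    p_G = q / (p + q)              # equilibrium probability of the Good state, eq. (24)
    p_B = p / (p + q)              # equilibrium probability of the Bad state, eq. (25)

    f = np.zeros(l + 2)            # f[a] = h(a, a); indices run up to l+1
    g = np.zeros(l + 2)            # g[a] = probability of no connection on [m*dt, a*dt]
    f[m] = p_G                     # eq. (68)
    f[m + 1] = p_G
    for a in range(m, m + omega):
        g[a] = p_B * (1.0 - q) ** (a - m)      # eq. (48); zero for a >= m + omega

    for a in range(m + 2, l + 2):
        f[a] = (f[a - 1] * (2.0 - p - q) - f[a - 2] * (1.0 - p - q)
                + g[a - 1] * q - g[a - 2] * q * (1.0 - q))      # eq. (63)
        if a - omega - 1 >= m:
            f[a] -= f[a - omega - 1] * (1.0 - q) ** (omega - 1) * p * q

    # eq. (67)
    return (f[l + 1] - f[l] * (1.0 - p - q)
            + f[l - omega] * (1.0 - q) ** (omega - 1) * p * q - g[l] * q) / q
```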
## VI Simulations
In order to verify the theoretical development we conduct a number of numerical simulations with the following parameters:
\[R=10^{5}\ \text{symbols/sec},\quad f_{c}=3.9\ \text{GHz},\quad\overline{A}/\lambda=0.1.\]
The speed of cars \(v\) takes the following values:
\[v=30,60,90\text{ km/h}.\]
The Doppler frequency shift is calculated by the formula
\[f_{D}=\frac{vf_{c}}{c}.\]
For the speeds \(v=30,60,90\text{ km/h}\), the Doppler frequency \(f_{D}\) equals \(108,217,325\text{ Hz}\), respectively. The timestep \(\Delta t\) between two consecutive transmissions should satisfy the Nyquist–Shannon criterion
\[\Delta t\leq\frac{1}{2f_{D}}.\]
We assume that \(\Delta t\) is given by the formula
\[\Delta t=\frac{1}{10f_{D}}.\]
For speeds \(v=30,60,90\text{ km/h}\), values of the parameter \(\Delta t\) are \(9\times 10^{-4},4.6\times 10^{-4},3\times 10^{-4}\) seconds, respectively. Under these assumptions from (5) and (6), we get the following values of parameters \(p\) and \(q\) for speeds \(v=30,60,90\text{ km/h}\):
\[p=8.5\times 10^{-4},1.7\times 10^{-3},2.5\times 10^{-3},\]
\[q=8.1\times 10^{-3},1.6\times 10^{-2},2.5\times 10^{-2}.\]
We assume that the network consists of \(n=10\) vehicles.
We use a logarithmic scale for all graphs below. We consider the probabilities of the event that the system remains connected throughout the time interval \([0,t]\). By this, we mean that the connection is established at each moment of time on a discrete time grid during the interval \([0,t]\). If we increase the value of the parameter \(t\), then we increase the number of time moments at which the vehicles should remain connected, so the probability of this event decreases. In other words, the considered probabilities are monotonically decreasing functions of the parameter \(t\). All graphs appear as straight lines, since the probabilities decrease exponentially with time, which on a logarithmic scale is represented by linearly decreasing straight lines. We compare the results obtained by the formulas with the results of numerical simulations. The simulation results are obtained by generating the evolution of the vehicle network on the road many times, counting the number of cases in which the required characteristic of the network occurs, and dividing this number by the number of trials. The values of the parameters \(p\) and \(q\) are so small (about \(10^{-3}\)–\(10^{-4}\)) that the probabilities computed by the formulas (12), (16) and (19) become negligible. This makes it impossible to achieve an appropriate simulation precision in a reasonable time. Therefore, we do not depict the corresponding simulation results in Figs. 3, 4 and 5.
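The Monte Carlo procedure described above can be sketched as follows for the link duration: simulate the two-state chain for a single link many times and compare the empirical frequencies with formula (12). The trial count and parameter values are illustrative.

```
import random

def simulate_link_durations(p, trials=100000, max_steps=2000, seed=0):
    """Empirical distribution of the link duration for the two-state Markov chain.
    Each trial starts in the Good state (link just established) and counts the
    number of steps until the first Good -> Bad transition, cf. eq. (12).
    Durations are truncated at max_steps."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        duration = 1
        while duration < max_steps and rng.random() >= p:
            duration += 1
        counts[duration] = counts.get(duration, 0) + 1
    return {d: c / trials for d, c in sorted(counts.items())}

# Check against (12): for p = 0.02 the empirical frequency of duration m
# should be close to (1 - p) ** (m - 1) * p.
```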
Fig. 3 presents graphs of the distribution (12) of the link duration between two consecutive cars for \(v=30,60,90\) km/h.
Fig 3. Graphs of the distribution (12) of the link duration between two consecutive cars for \(v=30,60,90\) km/h
Fig. 4 shows the distribution (16) of the lifetime of the cluster that consists of cars 2, 3 and 4 for \(v=30,60,90\) km/h.
To illustrate decreasing of the probability (19) of the cluster existence between fixed moments of time \(m\Delta t\) and \(l\Delta t\), we fix the value of the parameter \(m=2\) and vary the value of \(l=m,m+1,\ldots\). We assume that the cluster consists of cars with numbers 2, 3 and 4. The graphs of the obtained functions of parameter \(l\) are shown on the Fig. 5 below.
Fig. 6 demonstrates both the simulation results and the probabilities computed by the algorithm from Section V-E. We fix an initial moment of time \(m\Delta t\), assume that \(m=2\), and vary \(l=m,m+1,\ldots\). Simulation results are obtained after 50000 simulated realizations of the car evolution process.
Fig 4. Graphs of the probability \(P_{clust}(m)\) given by the formula (16) for \(v=30,60,90\) km/h
Fig 5. Graphs of the probability \(P_{clust}(r,t)\) of the cluster existence between times given by the formula (19).
Fig 6. Graphs of numerical simulation of the probability of 3-stable connection between times \(m=2\) and \(l\) with variable \(l\), and the probability \(P_{\omega}(m,l)\) returned by algorithm from the section V-E.
## VII Simulations for large values of parameters \(p\) and \(q\)
In this section, we perform simulations that confirm the correctness of the formulas (12), (16) and (19) for increased values of the parameters \(p\) and \(q\), which are of the order of \(10^{-2}\). The results are presented in Figs. 7, 8 and 9 below. The graphs match each other very closely, which confirms the correctness of the above-mentioned formulas.
Fig. 7 presents two graphs of the numerical simulation and probability of the link duration \(P_{twocars}\) given by the formula (12) for \(p=0.02\), \(q=0.02\), \(\Delta t=0.01\) and \(10^{5}\) iterations.
Below on Fig. 8, the simulation results of the cluster lifetime duration are presented for the cluster that consists of cars \(2\), \(3\) and \(4\), and also, the graph of the predicted distribution \(P_{clust}(m)\) for \(p=0.02\),\(q=0.02\), \(\Delta t=0.01\) and \(10^{5}\) iterations.
Fig 7. Graphs of the numerical simulation of the distribution of the connection lifetime duration between two consecutive cars and distribution (12).
Fig 8. Graphs of the numerical simulations of the cluster lifetime and the probability \(P_{clust}(m)\).
Simulation results of the probability of the cluster existence between times \(15\Delta t\) and \(15\Delta t,16\Delta t\ldots 30\Delta t\) are shown on Fig 9. Thus, we assume value of \(m=15\) to be constant and change the value of the parameter \(l=15,16,\ldots 30\). Fig. 9 depicts the graph of the probability (19) for \(p=0.05\), \(q=0.05\), \(\Delta t=0.01\) and \(10^{6}\) iterations.
Fig 9. Graphs of the numerical simulation of the probability of the cluster existence between times \(15\Delta t\) and \(15\Delta t,16\Delta t\ldots 30\Delta t\) and \(P_{clust}(m,l)\).
## VIII Conclusion
In this article, we consider the evolution of a vehicle network on a highway and assume that the connection between each pair of consecutive cars can be established with a certain probability. We derive the probability distributions that describe the evolution of such characteristics of the network as the distribution of the link duration between each pair of cars, the cluster lifetime, etc. All the derivations are performed under the assumption that the connectivity model is described by the two-state Markov chain model, where the parameters of the model are explicitly expressed through the parameters of the network.
## References
* [1] R.K. Kansla, K. Bansal, ”A Machine Learning Approach to Beaconing for Vehicular Ad Hoc Network (A Review)”. _IOSR J. Comput. Eng._, vol. 16, no. 4, pp. 52–55, 2014.
* [2] N. Taherkhani, S. Pierre, ”Centralized and Localized Data Congestion Control Strategy for Vehicular Ad Hoc Networks Using a Machine Learning Clustering Algorithm”, _IEEE Trans. Intell. Transp. Syst._, vol. 17, no. 11, pp. 3275 – 3285, 2016.
* [3] Z. Peng, S. Gao, Z. Li, B. Xiao, Y. Qian, ”Vehicle Safety Improvement through Deep Learning and Mobile Sensing”, _IEEE Netw._, vol. 32, no. 4, pp. 28 – 33, 2018.
* [4] Q. Zheng, K. Zheng, H. Zhang, V. C. M. Leung, ”Delay-Optimal Virtualized Radio Resource Scheduling in Software-Defined Vehicular Networks via Stochastic Learning”, _IEEE Trans. Veh. Technol._, vol. 65, no. 10, pp. 7857 – 7867, 2016.
* [5] H. Ye, L. Liang, G. Ye Li, J. Kim, L. Lu, M. Wu, ”Machine Learning for Vehicular Networks: Recent Advances and Application Examples”, _IEEE Veh. Technol. Mag._, vol. 13, no. 2, pp. 94 – 101, 2018.
* [6] V. Nguyen, T. Oo, P. Chuan, C. Hong ”An Efficient Time Slot Acquisition on the Hybrid TDMA/CSMA Multichannel MAC in VANETs”, _IEEE Commun. Lett._, vol. 20, no. 5, pp. 970 – 973, 2016.
* [7] Z. Tianjiao, Z. Qi, ”Game-based TDMA MAC protocol for vehicular network”,_IEEE J. Commun. Netw._, vol. 19, no. 3, pp. 209 – 217, 2017.
* [8] C. Shao, S. Leng, Y. Zhang, A. Vinel, M. Jonsson, ”Performance Analysis of Connectivity Probability and Connectivity-Aware MAC Protocol Design for Platoon-Based VANETs”, _IEEE Trans. Veh. Technol._, vol. 64, no. 12, pp. 5596 – 5609, 2015.
* [9] Y. Cao, H. Zhang, D. Wu, D. Yuan, ”OGCMAC: A Novel OFDM Based Group Contention MAC for VANET Control Channel”, _IEEE Trans. Wireless Commun._, vol. 16, pp. 5796 – 5809, no. 7, 2017.
* [10] S. Bharati, W. Zhuang, ”CAH-MAC: Cooperative ADHOC MAC for Vehicular Networks”, _IEEE J. Sel. Areas Commun._, vol. 31, no. 9, pp. 470 – 479, 2013.
* [11] Y. Yao, K. Zhang, X. Zhou, ”A Flexible Multi-Channel Coordination MAC Protocol for Vehicular Ad Hoc Networks”, _IEEE Commun. Lett._, vol 21, no. 6, pp. 1305 – 1308, 2017.
* [12] B. Wang, T. Adams, W. Jin, Q. Meng, ”The process of information propagation in a traffic stream with a general vehicle headway: A revisit”, _Transp. Research Emerg. Technol. C_, vol. 18, no. 3, pp. 367–375, 2010.
* [13] A. Babu and V. Muhammed Ajeer, ”Analytical model for connectivity of vehicular ad hoc networks in the presence of channel randomness,” _Int. J. Commun. Syst._, vol. 26, no. 7, pp. 927–946, 2011.
* [14] S. Kwon, Y. Kim, and N. B. Shroff, ”Analysis of connectivity and capacity in 1-d vehicle-to-vehicle networks,” _IEEE Trans. Wireless Commun._, vol. 15, no. 12, pp. 8182–8194, 2016.
* [15] H. Wang, R. P. Liu, W. Ni, W. Chen, I. B. Collings, ”VANET Modeling and Clustering Design Under Practical Traffic, Channel and Mobility Conditions”, _IEEE Trans. Commun._, vol. 63, no. 3, pp. 870 – 881, 2015.
* [16] G. Dubosarskii, S. Primak, X. Wang, ”On the higher order statistics of car clustering in vehicle communications networks on a road,” in Proc. _28th IEEE Int. Symp. Pers., Indoor Mobile Radio Commun. (PIMRC)_, 2017.
* [17] S. T. Hasson, Z. Y. Hasan, ”Roads clustering approach’s in VANET models”, in Proc. _Conf. New Trends Inf. Commun. Technol. Appl. (NTICT)_, pp. 316 – 321, 2017.
* [18] A. Kesting, M. Treiber, D. Helbing ”Connectivity Statistics of Store-and-Forward Intervehicle Communication”, _IEEE Trans. Intell. Transp. Syst._, vol. 11, no. 1, pp. 172 – 181, 2010.
* [19] G. Yan and S. Olariu, ”A Probabilistic Analysis of Link Duration in Vehicular Ad Hoc Networks”, _IEEE Trans. Intell. Transp. Syst._, vol. 12, no. 4, pp. 1227 – 1236, 2011.
* [20] H. Wang, N. Moayeri, ”Finite-state Markov channel — a useful model for radio communication channels”, _IEEE Trans. Veh. Technol._, vol 44, no. 1, 163–171, 1995.
* [21] S. Primak, V. Kontorovich, V. Lyandres, _Stochastic methods and their applications to communications: stochastic differential equations approach._ 2005, p. 434.
\begin{tabular}{c c}
& Gleb Dubosarskii received his B.E. and M.S. degrees in Applied Mathematics from the Ural Federal University in 2009 and 2011, respectively, and his Candidate of Science Degree in Approximation Theory from the Krasovskii Institute of Mathematics and Mechanics of the Ural Branch of the Russian Academy of Sciences in 2014. Currently, he is a Ph.D. student in the Department of Electrical and Computer Engineering at Western University, Canada. His current research interests are in the areas of communications, machine learning and mathematical modeling. \\
\end{tabular}
\begin{tabular}{c c}
& Serguei L. Primak (S’94-M’97) received the MSEE degree from St. Petersburg University of Telecommunications, St. Petersburg, Russia, in 1991 and the PhD degree in electrical engineering from the Ben-Gurion University of the Negev, Beer-Sheva, Israel, in 1996. Currently, he is a lecturer and post-doctoral fellow with the University of Western Ontario, London, Ontario, Canada. His current interests are in the field of ultrawideband radar applications, random signal generation, modeling of wave propagation in a city, time-frequency analysis, and inverse problems of electromagnetics. He is a member of the IEEE. \\
\end{tabular}
\begin{tabular}{c c}
& Xianbin Wang(S’98-M’99-SM’06-F’17) is a Professor and Tier 1 Canada Research Chair at Western University, Canada. He received his Ph.D. degree in electrical and computer engineering from National University of Singapore in 2001. Prior to joining Western, he was with Communications Research Centre Canada (CRC) as a Research Scientist/Senior Research Scientist between July 2002 and Dec. 2007. From Jan. 2001 to July 2002, he was a system designer at STMicroelectronics. His current research interests include 5G technologies, Internet-of-Things, communications security, machine learning and locationing technologies. Dr. Wang has over 350 peer-reviewed journal and conference papers, in addition to 29 granted and pending patents and several standard contributions. Dr. Wang is a Fellow of Canadian Academy of Engineering, a Fellow of IEEE and an IEEE Distinguished Lecturer. He has received many awards and recognitions, including Canada Research Chair, CRC Presidents Excellence Award, Canadian Federal Government Public Service Award, Ontario Early Researcher Award and six IEEE Best Paper Awards. He currently serves as an Editor/Associate Editor for IEEE Transactions on Communications, IEEE Transactions on Broadcasting, and IEEE Transactions on Vehicular Technology and He was also an Associate Editor for IEEE Transactions on Wireless Communications between 2007 and 2011, and IEEE Wireless Communications Letters between 2011 and 2016. Dr. Wang was involved in many IEEE conferences including GLOBECOM, ICC, VTC, PIMRC, WCNC and CWIT, in different roles such as symposium chair, tutorial instructor, track chair, session chair and TPC co-chair. \\
\end{tabular}
|
1611.01641 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 255793,
"num_imgs": 1,
"llama3_tokens_count": 103872
} | [
"content_image/1611.01641/newton.png"
] | # Finite dimensional invariant KAM tori for tame vector fields
**L. Corsi\({}^{*}\), R. Feola\({}^{**}\), M. Procesi\({}^{\dagger}\) \({}^{*}\) Georgia Institute of Technology, Atlanta, lcorsi6@math.gatech.edu; \({}^{**}\) SISSA, Trieste, rfeola@sissa.it; \({}^{\dagger}\) Università di Roma Tre, procesi@mat.uniroma3.it**
###### Abstract
We discuss a Nash-Moser/ KAM algorithm for the construction of invariant tori for _tame_ vector fields. Similar algorithms have been studied widely both in finite and infinite dimensional contexts: we are particularly interested in the second case where tameness properties of the vector fields become very important. We focus on the formal aspects of the algorithm and particularly on the minimal hypotheses needed for convergence. We discuss various applications where we show how our algorithm allows to reduce to solving only linear forced equations. We remark that our algorithm works at the same time in analytic and Sobolev class.
###### Contents
* 1 Introduction
* 2 Functional setting and main result * 2.1 The Phase Space * 2.2 Polynomial decomposition * 2.3 Normal form decomposition * 2.4 Main result
* 3 Triangular decomposition and Mel’nikov conditions
* 4 Applications * 4.1 Example 1: Reversible Nash-Moser. * 4.2 Example 2: Hamiltonian KAM/Nash-Moser. * 4.3 Example 3: Reversible KAM/Nash-Moser. * 4.4 Example 4: KAM with reducibility. * 4.5 Application to the NLS. * 4.6 Some comments
* 5 Proof of the result * 5.1 The KAM step * 5.2 Proof of Theorem 2.25: iterative scheme.
* A Smooth functions and vector fields on the torus
* B Properties of Tame and regular vector fields * B.1 Proof of Lemma 4.8
* C Proof of Proposition 3.5
* D Time analytic case
## 1 Introduction
The aim of this paper is to provide a general and flexible approach to study the existence, for finite or infinite dimensional dynamical systems, of finite dimensional invariant tori carrying a quasi-periodic flow. To this purpose we discuss an iterative scheme for finding invariant tori for the dynamics of a vector field \(F=N_{0}+G\), where \(N_{0}\) is a linear vector field which admits an invariant torus and \(G\) is a _perturbation_.
By an _invariant torus_ of \(\dot{u}=F(u)\), with \(u\in E\) a Banach space, we mean an embedding \(\mathds{T}^{d}\to E\), which is invariant under the dynamics of \(F\), i.e. the vector field \(F\) is tangent to the embedded torus. Similarly an _analytic_ invariant torus is a map \(\mathds{T}_{s}^{d}\to E\) (with \(s>0\)) where \(\mathds{T}^{d}_{s}\) is a _thickened_ torus¹.
It is very reasonable to work in the setting of the classical Moser scheme of [1], namely \(F\) acts on a product space \((\theta,y,w)\) where \(\theta\in\mathds{T}^{d}_{s},y\in\mathds{C}^{d_{1}}\) while \(w\in\ell_{a,p}\), a separable scale of Hilbert spaces. The variables \(\theta\) appear naturally as a parametrization for the invariant torus of \(N_{0}\). The \(y\) variables are constants of motion for \(N_{0}\); in applications they naturally appear as “conjugated” to \(\theta\), for instance in the Hamiltonian setting they come from the symplectic structure. The variables \(w\) describe the dynamics in the directions orthogonal to the torus. The main example that we have in mind is²
\[N_{0}=\omega^{(0)}\cdot\partial_{\theta}+\Lambda^{(0)}w\partial_{w}\] (1.1)
where \(\omega^{(0)}\in\mathds{R}^{d}\) is a constant vector while \(\Lambda^{(0)}\) is a block-diagonal skew self–adjoint operator, independent of \(\theta\). Note that \(N_{0}\) has the invariant torus \(y=0,w=0\), where the vector field reduces to \(\dot{\varphi}=\omega^{(0)}\).
Regarding the normal variables \(w\), we do not need to specify \(\ell_{a,p}\) but only give some properties, see Hypothesis 2.1, which essentially amount to requiring that \(\ell_{a,p}\) is a weighted sequence space³ where \(a\geq 0\) is an exponential weight while \(p>0\) is polynomial, for example
\[w=\{w_{j}\}_{j\in{\mathcal{I}}\subseteq\mathds{N}}\,,\quad w_{j}\in\mathds{C}\,,\quad\|w\|_{a,p}^{2}:=\sum_{j\in{\mathcal{I}}\subseteq\mathds{N}}\lambda_{j}^{2p}e^{2a\lambda_{j}}|w_{j}|^{2}\,,\quad 0<\lambda_{j}\leq\lambda_{j+1}\leq\ldots\,,\quad\lambda_{j}\to\infty.\]
Note that if \({\mathcal{I}}\) is a finite set our space is finite dimensional.
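As a concrete illustration (not part of the formal setting), the following minimal Python sketch evaluates a truncated version of the weighted norm \(\|w\|_{a,p}\) above for the illustrative choice \(\lambda_{j}=j\); the sequence, the weights and the truncation length are all hypothetical and only meant to show how the exponential weight \(a\) and the polynomial weight \(p\) enter.

```python
import numpy as np

def weighted_norm(w, lam, a, p):
    """Truncated (a, p)-weighted sequence norm:
    ||w||_{a,p}^2 = sum_j lam_j^{2p} * exp(2*a*lam_j) * |w_j|^2."""
    w = np.asarray(w, dtype=complex)
    lam = np.asarray(lam, dtype=float)
    return np.sqrt(np.sum(lam**(2.0 * p) * np.exp(2.0 * a * lam) * np.abs(w)**2))

# Illustrative sequence: lam_j = j and w_j = 1/j^3 on the first 200 modes.
lam = np.arange(1, 201, dtype=float)
w = 1.0 / lam**3

print(weighted_norm(w, lam, a=0.0, p=1.0))   # purely polynomial (Sobolev-type) weight
print(weighted_norm(w, lam, a=0.01, p=1.0))  # adding a small exponential weight
```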
The existence of an invariant torus in the variables \((\theta,y,w)\) means the existence of a map \(\mathds{T}^{d}_{s}\overset{h}{\to}\mathds{C}^{d_{1}}\times\ell_{a,p}\) of the form \(\theta\mapsto h(\theta)=(h^{(y)}(\theta),h^{(w)}(\theta))\) such that
\[{\mathcal{F}}(F,h)\equiv{\mathcal{F}}(h):=F^{(\mathtt{v})}(\theta,h^{(y)}( \theta),h^{(w)}(\theta))-\partial_{\theta}h^{(\mathtt{v})}\cdot F^{(\theta)}( \theta,h^{(y)}(\theta),h^{(w)}(\theta))=0,\quad\mathtt{v}=y,w,\] (1.2)
hence it coincides with the search for zeros of the functional \({\mathcal{F}}\) with unknown \(h\). Here \(h\) lives in some Banach space, say \(H^{q}(\mathds{T}^{d}_{s};\mathds{C}^{d_{1}}\times\ell_{a,p})\) on which \({\mathcal{F}}\) is at least differentiable. Note moreover that \({\mathcal{F}}(N_{0},0)=0\) trivially. Since we are in a perturbative setting, once one has the torus embedding, one can study the dynamics of the variables \(\theta\) restricted to the torus and look for a change of variables \(\varphi\mapsto\theta(\varphi)\) which conjugates the dynamics of \(\theta\) to the linear dynamics \(\dot{\varphi}=\omega\) with \(\omega\sim\omega^{(0)}\) a rationally independent vector. Obviously this could be done directly by looking for a _quasi-periodic solution_ i.e. a map
\[h:\varphi\to(h^{(\theta)}(\varphi),h^{(y)}(\varphi),h^{(w)}(\varphi)),\quad h \in H^{q}(\mathds{T}^{d}_{s};\mathds{C}^{d}\times\mathds{C}^{d_{1}}\times\ell_ {a,p}).\]
which solves the functional equation
\[{\mathcal{F}}(h):=F(h^{(\theta)}(\varphi),h^{(y)}(\varphi),h^{(w)}(\varphi))- \omega\cdot\partial_{\varphi}h=0.\]
If we take \(\ell_{a,p}=\emptyset\), \(d_{1}=d\), this is the classical KAM framework of Kolmogorov [2], Arnold [3] and Moser [4]; see also [5, 6, 7].
Even in the simplest setting, equation (1.2) cannot be solved by the classical Implicit Function Theorem. In fact, typically the operator \({\mathcal{F}}\) in (1.2) linearized at \(F=N_{0}\), \(h=0\) is not invertible on \(H^{q}(\mathds{T}^{d}_{s};\mathds{C}^{d_{1}}\times\ell_{a,p})\), since its spectrum accumulates at zero (the so-called small divisors). To overcome this problem one can use a _Nash-Moser_ iterative scheme in order to find a sequence of approximate solutions rapidly converging to the true solution. The fast convergence is used to control the loss of regularity due to the small divisors. Such schemes are adaptations to Banach spaces of the Newton method for finding zeros of functions, see [8].
<figure><img src="content_image/1611.01641/newton.png"><figcaption>Figure 1.1: Three steps of the Newton algorithm \(h_{n+1}:=h_{n}-(d_{h}{\mathcal{F}}(h_{n}))^{-1}[{\mathcal{F}}(h_{n})]\)</figcaption></figure>
In order to run an algorithm of this type one must be able to control the linearized operator in a neighborhood of the expected solution; see Figure 1.1. Due to the presence of small divisors it is not possible to invert such an operator as a map from a Sobolev space to itself (not even the operator linearized about zero). However, since the Newton scheme is quadratic, one may accept that \((d_{h}{\mathcal{F}})^{-1}\) is well defined as an unbounded “tame” operator, provided that one has a good control on the loss of regularity.
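For the reader who prefers a concrete picture, here is a minimal sketch (not taken from the paper) of the Newton iteration of Figure 1.1 for a scalar model functional; the quadratic shrinking of the residual at each step is exactly the fast convergence that is used to absorb the loss of regularity in the Nash-Moser scheme. The model \(F(h)=\cos h-h\) is of course only a hypothetical stand-in for the functional \({\mathcal{F}}\).

```python
import numpy as np

# Toy scalar model of the Newton step  h_{n+1} = h_n - (d_h F(h_n))^{-1} F(h_n).
F = lambda h: np.cos(h) - h           # model functional, zero near h ~ 0.739
dF = lambda h: -np.sin(h) - 1.0       # its derivative (always invertible here)

h = 1.0                               # initial guess (the "approximate torus")
for n in range(5):
    h = h - F(h) / dF(h)
    print(n, abs(F(h)))               # residual roughly squares at each step
```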
Of course, while lower bounds on the eigenvalues are in general not sufficient to achieve such control, one surely needs to avoid zero or “too small” eigenvalues. To this end one typically uses parameter modulation: precisely, one assumes that \(\omega^{(0)},\Lambda^{(0)}\) depend (non trivially and with some regularity) on some parameters \(\xi\in\mathds{R}^{k}\) for some \(k\). Unfortunately, equations from physics may come with no such external parameters, so that one needs to extract them from the initial data; however we shall not address this issue here.
An equivalent approach to the Nash-Moser scheme is to find a change of coordinates
\[(y,w)\rightsquigarrow(\tilde{y}+h^{(y)}(\theta),\tilde{w}+h^{(w)}(\theta))\]
such that the push forward of the vector field has an invariant torus at \(\tilde{y}=0\), \(\tilde{w}=0\). Clearly the equation for \(h\) is always (1.2), hence the solvability conditions on the inverse of the linearized operator appear also in this context.
Following this general strategy one can look for different types of changes of variables such that the push forward of the vector field has a particularly simple form, and the existence of an invariant torus follows trivially. For instance in classical KAM Theorems the idea is to look for an _affine_ change of variables such that not only \(y=0,w=0\) is an invariant torus but the vector field \(F\) linearized in the directions normal to the torus is diagonal. This means that in the quadratic scheme one not only performs a translation but also a linear change of variables which approximately diagonalizes the linearized operator at each step. Naturally this makes the inversion of \(d_{h}{\mathcal{F}}\) much simpler. It is well known that it is possible to diagonalize a finite dimensional matrix if it has distinct eigenvalues. Then, in order to diagonalize the linearized operator at an approximate solution, one asks for lower bounds on the differences of the eigenvalues (the so-called “Second Mel’nikov” conditions). Under these assumptions the bounds on the inverse follow by just imposing lower bounds on the eigenvalues (the so-called First Mel’nikov conditions). This requirement, together with some structural hypotheses on the system (Hamiltonianity, reversibility, …), provides existence and linear stability of the possible solution. More generally, if one wants to cancel nonlinear terms in the vector field one needs to add some more restrictive conditions on the linear combinations of the eigenvalues (higher order Mel’nikov conditions). Naturally such conditions are not necessary for the invertibility; actually in many applications they cannot be imposed (already in the case of the NLS on the circle the eigenvalues are double) and typically in a Nash-Moser scheme they are not required.
Quadratic algorithms for the construction of finite dimensional invariant tori have been used in the literature both in the finite dimensional setting, see for instance [9] and references therein, and in the infinite dimensional one. Starting from [10, 11], this problem has been widely studied in the context of _Hamiltonian_ PDEs; a certainly non exhaustive list of classical results is for instance [12, 13, 14, 15, 16, 17, 18, 19, 20, 21], in which either the KAM scheme or the Lyapunov-Schmidt decomposition and the Newton iteration method are used.
The aforementioned literature is mostly restricted to semilinear PDEs on the circle. More recently other extensions have been considered as well, such as cases where the spatial variable is higher dimensional, or where the perturbation is unbounded. Regarding the higher dimensional cases, besides the classical papers by Bourgain, we mention [22, 23, 24, 25, 26, 27, 28, 29, 30], where manifolds other than the torus are also considered.
The first results with unbounded perturbations can be found in [18, 31, 32, 33, 34, 35]; in these papers the authors follow the classical KAM approach, which is based on the so-called “second Mel’nikov” conditions. Unfortunately this approach fails in the case of _quasi-linear_ PDEs (i.e. when the nonlinearity contains derivatives of the same order as the linear part). This problem has been overcome in [36, 37, 38] for the periodic case, and in [39, 40], first for forced and then also for autonomous cases; see also [41, 42, 43, 44]. The key idea of these papers is to incorporate into the reducibility scheme techniques coming from pseudo-differential calculus. The extensions to the autonomous cases are based on the ideas developed in [45], where the Hamiltonian structure is exploited in order to extend results on forced equations to autonomous ones.
In all the results mentioned so far, the authors deal with two problems: the convergence of the iterative scheme, and the invertibility of the linearized operator in a neighborhood of the expected solution. Typically these problems are faced at the same time, by giving some non-degeneracy conditions on the spectrum of the linearized operator in order to get estimates on its inverse. However, while the bounds on the linearized operator clearly depend on the specific equation one is dealing with, the convergence of the scheme is commonly believed to be adjustable case by case.
Our purpose is to separate the problems which rely only on abstract properties of the vector fields from those depending on the particular equation under study.
Our point of view is to look for a change of coordinates, say \(\Psi\), such that the push-forward of the vector field has an invariant torus at the origin. In fact all the results described above can be interpreted in this way. Typically one chooses a priori a _group_ \(\mathcal{G}\) of changes of coordinates in which one looks for \(\Psi\). Then, for many choices of \(\mathcal{G}\), one may impose smallness conditions on the perturbation (depending on the choice of \(\mathcal{G}\)) and perform an iterative scheme which produces \(\Psi\), provided that the parameters \(\xi\) belong to some _Cantor like_ set (again depending on the choice of \(\mathcal{G}\)). In this paper we impose some mild conditions on the group \(\mathcal{G}\) under which an iterative algorithm can be performed; we then explicitly state the _smallness conditions_ and the conditions on the parameters. In Section 4 we show some particularly relevant choices of \(\mathcal{G}\) (in fact we always describe the _algebra_ which generates \(\mathcal{G}\)) and the resulting _Cantor like_ sets. In this way, in order to apply our theorem to a particular vector field, one has to: first choose a group, then verify the smallness conditions and finally check that the _Cantor like_ set is not empty. As might be intuitive, the simpler the structure of \(\mathcal{G}\), the more complicated it is to prove that the resulting _Cantor like_ set is not empty.
The present paper is mainly inspired by the approach in [45], but we follow a strategy more similar to the one of [1]. In particular this allows us to cover also non-Hamiltonian cases, which require different techniques with respect to [45]; compare Subsections 4.2 and 4.3. We essentially produce an algorithm which interpolates between a Nash-Moser scheme and a KAM scheme. On the one hand we exploit the functional approach of the Nash-Moser scheme, which allows us to use and preserve the “PDE structure” of the problems; on the other hand we leave the freedom of choosing a convenient set of coordinates during the iteration (which is typical of a KAM scheme). This allows us to deal with more general classes of vector fields and with analytic nonlinearities. This last point is particularly interesting in applications to quasi-linear PDEs, where the only results in the literature are for Sobolev regularity; see [46]. In fact, we develop a formalism which allows us to cover cases with either analytic or Sobolev regularity, by exploiting the properties of _tame_ vector fields, introduced in Definition 2.13 and discussed in Appendix B.
Another feature of our algorithm is related to the “smallness conditions”; see Constraints 2.21 and D.1 and the assumption (2.54). Clearly in every application the smallness is given by the problem, and one needs to adjust the algorithm accordingly. Again, while this is commonly believed to be easy to achieve, our point of view is the opposite, i.e. to find the mildest possible conditions that allow the algorithm to converge, and to use them only when necessary. Of course if on the one hand this makes the conditions intricate, on the other hand it gives more flexibility to the algorithm.
_Description of the paper._ Let us discuss more precisely the aim of the present paper. We consider vector fields of the form
\[F=N_{0}+G\] (1.3)
where \(N_{0}=\omega^{(0)}\cdot\partial_{\theta}+\Lambda^{(0)}w\partial_{w}\) and \(G\) is a _perturbation_, i.e. (1.3) admits an approximately invariant torus. Note that this does not necessarily mean that \(G\) is small, but only that it is approximately tangent to the torus. Recall that \(\xi\in{\mathcal{O}}_{0}\subset\mathds{R}^{k}\) is a vector of parameters.
Then the idea, which goes back to Moser [1], is to find a change of coordinates such that in the new coordinates the system (1.3) takes the form
\[F=\omega\cdot\partial_{\theta}+\Lambda^{(0)}w\partial_{w}+\tilde{G}(\theta,y,w)\] (1.4)
with \(\omega\sim\omega^{(0)}\), the average of \(\tilde{G}(\theta,0,0)\) is zero and \(\tilde{G}^{(\mathtt{v})}(\theta,0,0)\equiv 0\) for \(\mathtt{v}=y,w\). More precisely in our main Theorem 2.25 we prove the convergence of an iterative algorithm which provides a change of variables transforming (1.3) into (1.4), for all choices of \(\xi\) in some explicitly defined set \({\mathcal{O}}_{\infty}\) (which however might be empty).
The changes of variables are not defined uniquely, and one can specify the problem by, for instance, identifying further terms in the Taylor expansion of \(\tilde{G}\) w.r.t. the variables \(y\) and \(w\) which one wants to set to zero. Of course different choices of changes of variables modify the set \({\mathcal{O}}_{\infty}\), so that in the applications it is not obvious which is the best choice. In fact, finding the setting in which one is able to prove that \({\mathcal{O}}_{\infty}\) is non-empty, and possibly of positive measure, is the most difficult part in the applications. We do not address this problem at all. Our aim is instead to study very general classes of changes of variables and to find general hypotheses on the functional setting, the vector field under study and the terms of the Taylor series that one wants to cancel, under which such an algorithm can be run, producing an explicit set \({\mathcal{O}}_{\infty}\).
In particular in our phase space \(\mathds{T}^{d}_{s}\times\mathds{C}^{d_{1}}\times\ell_{a,p}\) we _do not distinguish_ the cases where either \(s\) or \(a\) are equal to zero (Sobolev cases) from the analytic cases. In the same spirit we do not require that the vector field is analytic but only that it is \(C^{q}\) for some large finite \(q\). The key ingredients of the paper are the following.
**Tame Vector fields**. We require that \(F\) is \(C^{k}\)-_tame up to order \(q\)_, see Definition 2.13, namely it is tame together with its Taylor expansion up to finite order \(k\) w.r.t. \(y,w\), and it is regular up to order \(q\) in \(\theta\), see Subsection 2.2. We make this definition quantitative by denoting a _tameness constant_ for \(G\) by \(C_{\vec{v},p}(G)\); here \(\vec{v}\) contains all the information relative to the domain of definition of \(G\), while \(p\) gives the Sobolev regularity. In Appendix B we describe some properties of tame vector fields which we believe are interesting in their own right. Finally, our vector fields are not necessarily bounded; instead they may lose some regularity, namely we allow
\[F:\mathds{T}^{d}_{s}\times\big{\{}y\in\mathds{C}^{d_{1}}\,:\,|y|_{1}<r^{ \mathtt{s}}\big{\}}\times\big{\{}w\in\ell_{a,p+\nu}\,:\,\|w\|_{a,{\mathfrak{p} }_{1}}<r\big{\}}\to\mathds{C}^{d}\times\mathds{C}^{d_{1}}\times\ell_{a,p}\] (1.5)
for some fixed \(\nu\geq 0\). The properties we require are very general and are satisfied by a large class of PDEs; for instance, it is well known that they hold for a large class of composition operators on Sobolev spaces; see [4, 47].
**The \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\) decomposition.** We choose a subspace of polynomials of maximal degree \(\mathtt{n}\), which we call \({\mathcal{X}}\), containing all the terms we want to “cancel out” from \(G\). This space contains the algebra of the changes of variables we shall apply. Clearly the subspace \({\mathcal{X}}\) must be chosen so that a vector field with no terms in \({\mathcal{X}}\) possesses an invariant torus. In order to identify the part of \(F\) belonging to \({\mathcal{X}}\) we Taylor-expand it about \(y=0\), \(w=0\): since \(F\) is assumed to be a \(C^{q}\) vector field, this obviously requires that \(q\) is larger than \(\mathtt{n}\), i.e. the maximal degree of the monomials in \({\mathcal{X}}\). With some abuse of notation (see Definition 2.11 and the comments before it) we denote this operation as a projection \(\Pi_{{\mathcal{X}}}F\).
We also define a space of polynomial vector fields \({\mathcal{N}}\) (which does not intersect \({\mathcal{X}}\)) such that \(N_{0}\in{\mathcal{N}}\). We allow a lot of freedom in the choice of \({\mathcal{N}}\), provided that it satisfies some rather general hypotheses; in particular, all vector fields in \({\mathcal{N}}\) should have an invariant torus at zero, and \({\mathcal{N}}\) should contain the unperturbed vector field \(N_{0}\). In fact we shall require a stronger condition on \(N_{0}\), i.e. that it is diagonal; see the example in (1.1) and Definition 2.20 for a precise formulation.
Our space of \(C^{k}\)-tame vector fields is then decomposed uniquely as \({\mathcal{X}}\oplus{\mathcal{N}}\oplus{\mathcal{R}}\), and we may write
\[(\mathds{1}-\Pi_{\mathcal{X}})F=N+R\]
where \(N=\Pi_{\mathcal{N}}F\) while \(R=\Pi_{\mathcal{R}}F\) is a remainder.
**The Invariant subspace \({\mathcal{E}}\).** We choose a space of vector fields \({\mathcal{E}}\) (see Definition 2.19) where we want our algorithm to run. Such a space appears naturally in the applications, where usually one deals with special structures (such as a Hamiltonian or reversible structure) that one wishes to preserve throughout the algorithm. As one might expect, the choice of the space \({\mathcal{E}}\) influences the set \({\mathcal{O}}_{\infty}\). In the applications to PDEs the choice of the space \({\mathcal{E}}\) is often quite subtle: we give some examples in Section 4.
**Regular vector fields.** We choose a subspace of polynomial vector fields, whose elements we call _regular vector fields_, and endow it with a Hilbert norm \(|\cdot|_{\vec{v},p}\), with the only restriction that they should satisfy the set of properties detailed in Definition 2.18; for instance, we require that all regular vector fields are tame with tameness constant equal to the norm \(|\cdot|_{\vec{v},p}\), and moreover that for \(p={\mathfrak{p}}_{1}\) this tameness constant is _sharp_. Throughout our algorithm we shall apply close-to-identity changes of variables which preserve \({\mathcal{E}}\) and are generated by such vector fields (this is the group \(\mathcal{G}\) of changes of variables). This latter condition can probably be weakened but, we believe, not in a substantial way; on the other hand it is very convenient throughout the algorithm.
**Our Goal**.: _We fix any decomposition \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\), any space \({\mathcal{E}}\) and any space of regular vector fields \({\mathcal{A}}\) provided that they satisfy Definitions 2.17, 2.19 and 2.18. We assume that \(F=N_{0}+G\) belongs to \({\mathcal{E}}\), is \(C^{\mathtt{n}+2}\) tame (the value of \(\mathtt{n}\) being fixed by \({\mathcal{E}}\)) and that \(\Pi_{\mathcal{X}}F=\Pi_{{\mathcal{X}}}G\) is appropriately small while \((\mathds{1}-\Pi_{{\mathcal{X}}})F\) is “controlled”. We look for a change of coordinates \({\mathcal{H}}_{\infty}\) such that for all \(\xi\in{\mathcal{O}}_{\infty}\) one has⁴_
[FOOTNOTE:4][ENDFOOTNOTE]
\[\Pi_{{\mathcal{X}}}({\mathcal{H}}_{\infty})_{*}F\equiv 0\,.\]
At the purely formal level, a change of coordinates \(\Phi\) generated by a bounded vector field \(g\), transforms \(F\) into
\[\Phi_{*}(F)\sim F+[g,F]+O(g^{2})\sim e^{[g,\cdot]}F.\] (1.6)
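As an illustration of the formal expansion (1.6) in the simplest possible situation (not part of the argument, and purely a finite dimensional toy model): for linear vector fields \(F(x)=Ax\) and \(g(x)=Bx\) on \(\mathds{C}^{n}\), the time-1 flow of \(g\) is \(x\mapsto e^{B}x\) and the push-forward of \(F\) is \(x\mapsto e^{B}Ae^{-B}x\), which coincides with the Lie series \(\sum_{k\geq 0}\mathrm{ad}_{B}^{k}(A)/k!\), where \(\mathrm{ad}_{B}(A):=BA-AB\) is the matrix commutator. The short numerical check below, with hypothetical random matrices, verifies this identity.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # linear vector field F(x) = A x
B = 0.1 * rng.standard_normal((4, 4))    # "small" generator g(x) = B x

pushforward = expm(B) @ A @ expm(-B)     # push-forward of F under the time-1 flow of g

# Lie series  e^{ad_B}(A) = sum_k ad_B^k(A) / k!   with ad_B(A) = BA - AB
lie_series, term, fact = np.zeros_like(A), A.copy(), 1.0
for k in range(25):
    lie_series += term / fact
    term = B @ term - term @ B           # apply ad_B once more
    fact *= (k + 1)

print(np.max(np.abs(pushforward - lie_series)))   # ~ machine precision
```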
We find the change of variable we look for via an iterative algorithm; at each step we need \(\Phi\) to be such that \(\Pi_{\mathcal{X}}(\Phi_{*}F)=0\) up to negligible terms, so we need to find \(g\) such that
\[\Pi_{\mathcal{X}}(F+[g,F])=0\,;\]
in other words we need to invert the operator \(\Pi_{\mathcal{X}}[F,\cdot]\). Since one expects \(g\sim X:=\Pi_{{\mathcal{X}}}F\) (which is assumed to be small) then, at least formally, the term \([g,\Pi_{{\mathcal{X}}}F]\) is negligible and one needs to solve
\[\Pi_{\mathcal{X}}([(1-\Pi_{\mathcal{X}})F,g])-X:=u\] (1.7)
with \(g\sim X\) and \(u\sim X^{2}\). Equation (1.7) is called _homological equation_ and in order to solve it one needs the “approximate invertibility” for the operator
\[{\mathfrak{A}}(\cdot):=\Pi_{\mathcal{X}}[(1-\Pi_{\mathcal{X}})F,\cdot].\] (1.8)
Then the iteration is achieved by setting \(\Psi_{0}:=\mathds{1}\) and
\[\Psi_{n}:=\Phi^{1}_{g_{n}}\circ\Psi_{n-1}\,,\quad F_{n}:=(\Psi_{n})_{*}F\] (1.9)
where \(\Phi^{1}_{g_{n}}\) is the time-\(1\) flow map generated by the vector field \(g_{n}\) which in turn solves the homological equation
\[\Pi_{\mathcal{X}}([(1-\Pi_{\mathcal{X}})F_{n-1},g_{n}])-X_{n-1}:=u_{n}.\] (1.10)
Since we need to preserve the structure, namely we need that at each step \(F_{n}\in{\mathcal{E}}\), the change of variables \(\Phi^{1}_{g_{n}}\) should preserve \({\mathcal{E}}\) at each step. In fact we require that \(g_{n}\) is a _regular_ vector field.
In order to pass from the formal level to the convergence of the scheme we need to prove that \(g_{n}\) and \(u_{n}\) satisfy appropriate bounds.
**Homological equation.** We say that a set of parameters \({\mathcal{O}}\) satisfies the homological equation (for \((F,K,\vec{v}^{0},\rho)\)) if there exist \(g,u\) which satisfy (1.7) with appropriate bounds (depending on the parameters \(K,\vec{v}^{0},\rho\); here \(\vec{v}^{0}\) controls the domain of definition, \(\rho\) controls the size of the change of variables and \(K\) is an _ultraviolet cut-off_), see Definition 2.23. Since \(F\) is a merely differentiable vector field the bounds are delicate, because expressions like (1.6) may be meaningless in the sense that, apparently, the new vector field \(F_{n+1}\) is less regular than \(F_{n}\), and hence it is not obvious that one can iterate the procedure. Indeed the commutator loses derivatives and thus one cannot use Lie series expansions in order to describe the change of variables. However one can use the Lie series expansion formula on polynomials, such as \(\Pi_{{\mathcal{X}}}\Phi_{*}F\). We show that, provided \(X\) is small while \(R\) and \(N-N_{0}\) are appropriately bounded, we obtain a converging KAM algorithm.
Note that the smallness of \(X\) implies the existence of a sufficiently good approximate solution; on the other hand, we only need very little control on \((\mathds{1}-\Pi_{\mathcal{X}})F_{n}\), which results in very weak (but quite cumbersome) assumptions on \(R\) and especially on \(N-N_{0}\); see (1.12) and Remark 2.22.
**Compatible changes of variables.** It is interesting to notice that at each step of the iterative scheme (1.9) we may apply any change of variables \({\mathcal{L}}_{n}\) with the only condition that it does not modify the bounds, i.e. \(\Pi_{\mathcal{X}}({\mathcal{L}}_{n})_{\star}F_{n-1}\sim\Pi_{\mathcal{X}}F_{n-1}\) (and the same for the other projections). We formalize this idea in Definition 2.24 where we introduce the changes of variables compatible with \((F,K,\vec{v}^{0},\rho)\). Then we may set
\[{\mathcal{H}}_{n}=\Phi^{1}_{g_{n}}\circ{\mathcal{L}}_{n}\circ{\mathcal{H}}_{n- 1}\,,\quad F_{n}:=({\mathcal{H}}_{n})_{*}F\] (1.11)
and the algorithm is still convergent.
This observation is essentially tautological, but it might well be possible that in a new set of coordinates it is simpler to invert the operator \({\mathfrak{A}}_{n}:=\Pi_{\mathcal{X}}{\rm ad}((1-\Pi_{\mathcal{X}})F_{n-1})\) (for instance \({\mathfrak{A}}_{n}\) may be diagonal up to a negligible remainder, see Subsection 4.4). Note that since the approximate invertibility of \({\mathfrak{A}}_{n}\) is in principle independent of the coordinates (provided the change does not lose regularity), if one knows that a change of variables simplifying \({\mathfrak{A}}_{n}\) exists, there might be no need to apply it in order to deduce bounds on the approximate inverse. This is in fact the strategy of the papers [39, 40], where the authors study fully nonlinear equations and prove existence of quasi-periodic solutions with Sobolev regularity. On the other hand one might modify the definition of the subspace \({\mathcal{X}}\) in such a way that \({\mathcal{L}}_{n}\) is the time-one flow of a regular bounded vector field in \({\mathcal{X}}\). The best strategy clearly depends on the application, so we leave the \({\mathcal{L}}_{n}\) as an extra degree of freedom. We can summarize our result as follows; for the notations we refer to the informal definitions written above.
**Theorem**.: _Fix \(\nu\geq 0\) as in (1.5), and fix a decomposition \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\) , a subspace \({\mathcal{E}}\) and a space of regular vector fields \({\mathcal{A}}\). Fix parameters \(\varepsilon_{0},{\mathtt{R}}_{0},{\mathtt{G}}_{0},{\mathfrak{p}}_{2}\) satisfying appropriate constraints. Let \(N_{0}\) be a diagonal vector field and consider a vector field_
\[F_{0}:=N_{0}+G_{0}\in{\mathcal{E}}\]
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\)._
_Fix \(\gamma_{0}>0\) and assume that_
\[\gamma_{0}^{-1}C_{\vec{v}_{0},{\mathfrak{p}}_{2}}(G_{0})\leq{\mathtt{G}}_{0}\, ,\quad\gamma_{0}^{-1}C_{\vec{v}_{0},{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{ \perp}G_{0})\leq{\mathtt{R}}_{0}\,,\quad\gamma_{0}^{-1}|\Pi_{{\mathcal{X}}}G_{ 0}|_{\vec{v}_{0},{\mathfrak{p}}_{1}}\leq\varepsilon_{0}\,,\quad\gamma_{0}^{-1} |\Pi_{{\mathcal{X}}}G_{0}|_{\vec{v}_{0},{\mathfrak{p}}_{2}}\leq{\mathtt{R}}_{0 }\,,\] (1.12)
_here \(C_{\vec{v},p}(G)\) is a tameness constant, while \(|\cdot|_{\vec{v},p}\) is the norm on regular vector fields._
_For all_ \(n\geq 0\) _we define recursively changes of variables_ \({\mathcal{L}}_{n},\Phi_{n}\) _and compact sets_ \({\mathcal{O}}_{n}\) _as follows._
_Set_ \({\mathcal{H}}_{-1}={\mathcal{H}}_{0}=\Phi_{0}={\mathcal{L}}_{0}=\mathds{1}\)_, and for_ \(0\leq j\leq n-1\) _set recursively_ \({\mathcal{H}}_{j}=\Phi_{j}\circ{\mathcal{L}}_{j}\circ{\mathcal{H}}_{j-1}\) _and_ \(F_{j}:=({\mathcal{H}}_{j})_{*}F_{0}:=N_{0}+G_{j}\)_. Let_ \({\mathcal{L}}_{n}\) _be any_ compatible change of variables _for_ \((F_{n-1},K_{n-1},\vec{v}_{n-1},\rho_{n-1})\) _and_ \({\mathcal{O}}_{n}\) _be any compact set_
\[{\mathcal{O}}_{n}\subseteq{\mathcal{O}}_{n-1}\,,\]
_which_ satisfies the homological equation _for_ \((({\mathcal{L}}_{n})_{*}F_{n-1},K_{n-1},\vec{v}^{\text{\tiny 0}}_{n-1},\rho_{n -1})\)_, let_ \(g_{n}\) _be the solution of the homological equation and_ \(\Phi_{n}\) _the time-1 flow map generated by_ \(g_{n}\)_._
_Then the sequence_ \({\mathcal{H}}_{n}\) _converges for all_ \(\xi\in{\mathcal{O}}_{0}\) _to some change of variables_
\[{{\mathcal{H}}}_{\infty}={{\mathcal{H}}}_{\infty}(\xi):D_{{a_{0}},p}({s_{0}}/{2},{r_{0}}/{2})\longrightarrow D_{\frac{a_{0}}{2},p}({s_{0}},{r_{0}})\]
_such that defining \(F_{\infty}:=({\mathcal{H}}_{\infty})_{*}F_{0}\) one has_
\[\Pi_{\mathcal{X}}F_{\infty}=0\quad\forall\xi\in{\mathcal{O}}_{\infty}:=\bigcap _{n\geq 0}{\mathcal{O}}_{n}\]
_and_
\[\gamma_{0}^{-1}C_{\vec{v}_{\infty},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}F_{ \infty}-N_{0})\leq 2{\mathtt{G}}_{0},\quad\gamma_{0}^{-1}C_{\vec{v}_{\infty},{ \mathfrak{p}}_{1}}(\Pi_{{\mathcal{R}}}F_{\infty})\leq 2{\mathtt{R}}_{0}\]
_with \(\vec{v}_{\infty}:=(\gamma_{0}/2,{\mathcal{O}}_{\infty},s_{0}/2,a_{0}/2)\)._
While the scheme is quite general, as a drawback the set of parameters \(\xi\) for which the invariant torus exists is defined in a very complicated way, in terms of the approximate invertibility of the operators \({\mathfrak{A}}_{n}\).
In order to get a simpler description of the good parameters we may require that \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\) is a “triangular decomposition”, i.e. there exists a decomposition \({\mathcal{X}}=\oplus_{j=1}^{\mathtt{b}}{\mathcal{X}}_{j}\) such that for all \(N\in{\mathcal{N}}\) the action of the operator \({\mathfrak{N}}:=\Pi_{{\mathcal{X}}}[N,\cdot]\) is block diagonal, while for all \(R\in{\mathcal{R}}\) the action of \({\mathfrak{R}}:=\Pi_{{\mathcal{X}}}[R,\cdot]\) is strictly upper triangular, see Definition 3.1. In Proposition 3.5 we show that under such hypotheses the problem of solving the homological equation (1.10) is reduced to proving the approximate invertibility of \({\mathfrak{N}}\), the so-called _Mel’nikov conditions_, which is typically much simpler to analyze in the applications. Indeed solving (1.10) amounts to inverting \({\mathfrak{A}}\), but since \({\mathfrak{R}}\) is strictly upper triangular and \({\mathfrak{N}}\) is block diagonal, the Neumann series
\[{\mathfrak{A}}^{-1}=({\mathfrak{N}}+{\mathfrak{R}})^{-1}={\mathfrak{N}}^{-1} \sum_{j}(-1)^{j}({\mathfrak{N}}^{-1}{\mathfrak{R}})^{j}\]
is a finite sum.
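The following minimal numerical sketch (not part of the proof, with hypothetical finite dimensional data) illustrates why the series terminates: if \({\mathfrak{N}}\) is represented by an invertible diagonal matrix and \({\mathfrak{R}}\) by a strictly upper triangular one, then \({\mathfrak{N}}^{-1}{\mathfrak{R}}\) is nilpotent and only finitely many terms of the Neumann series are nonzero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
N = np.diag(rng.uniform(0.5, 2.0, n))            # diagonal, eigenvalues bounded away from 0
R = np.triu(rng.standard_normal((n, n)), k=1)    # strictly upper triangular

Ninv = np.diag(1.0 / np.diag(N))
M = Ninv @ R                                     # nilpotent: M^n = 0

# Neumann series for (N + R)^{-1}; only the first n terms are nonzero.
inv_series = np.zeros((n, n))
term = np.eye(n)
for j in range(n):
    inv_series += (-1)**j * term @ Ninv          # (-1)^j (N^{-1} R)^j N^{-1}
    term = term @ M

print(np.max(np.abs(inv_series @ (N + R) - np.eye(n))))   # ~ 0, i.e. (N + R)^{-1} recovered
```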
Note that in order to produce a triangular decomposition one can associate degrees to \(w,y\) (say resp. \(1\) and \(\mathtt{d}>0\)) in order to induce a degree decomposition (see Remark 3.2) which gives the space of polynomial vector fields a graded Lie algebra structure. By Lemma 3.3, triangularity is achieved when \({\mathcal{N}}\) contains only terms with degree \(0\) and \({\mathcal{R}}\) contains only terms with degree \(>0\), while \({\mathcal{X}}=\oplus_{j=1}^{\mathtt{b}}{\mathcal{X}}_{j}\) is the decomposition of \({\mathcal{X}}\) into subspaces of increasing degree. Note that this can be done for any choice of \(\mathtt{d}\).
Another natural way to understand the nature of the subspaces \({\mathcal{X}},{\mathcal{N}},{\mathcal{R}}\) is through “rescalings”. This means introducing a special degree decomposition in which the degree of \(y\) is the same exponent \(\mathtt{s}\) used in the definition of the domains in the phase space, see (1.5). This degree decomposition separates terms which behave differently under a rescaling of the order of magnitude of the domains \(r\to\lambda r\). Note that in this way \({\mathcal{X}}\) contains the terms with negative scaling, \({\mathcal{N}}\) contains the terms with scaling zero and finally \({\mathcal{R}}\) contains the terms with positive scaling.
Typically the invertibility of \({\mathfrak{N}}\) relies on non-degeneracy conditions on the eigenvalues, which provide lower bounds on the small divisors. The size \(\gamma_{0}\) of such denominators is essentially given by the problem. A vector field is considered a perturbation if its size is small with respect to the size of the small divisors. Hence, in giving the smallness conditions on \(G\), one must find a balance between the size of the remainder \(R\), which can be made small by a rescaling, and the size of the \({\mathcal{X}}\) component, which grows under the rescaling. The polynomial vector fields in \({\mathcal{N}}\) are more delicate: indeed some such terms do not change under rescaling. Of course \(\Pi_{{\mathcal{N}}}G\) should be much smaller than \(N_{0}\), but in fact one does not need that \(\Pi_{{\mathcal{N}}}G\) is small with respect to \(\gamma_{0}\), provided that \(\Pi_{\mathcal{X}}G\) is sufficiently small. This further justifies why the smallness conditions on the vector field \(G\) are imposed separately on each term. We refer the reader to Subsection 4.6 for more details.
In Section 4 we discuss various applications and examples, and we show how our algorithm allows one to interpolate between the Nash-Moser algorithm and the classical KAM one. Example 1 is the classic Nash-Moser approach, where one fixes the subspace \({\mathcal{X}}\) to be as simple as possible.
In Example 2 we study Hamiltonian vector fields, and exploit the Hamiltonian structure in order to simplify the Mel’nikov conditions; this is a reinterpretation, in our setting, of the strategy of [45]. We conclude subsection 4.2 by collecting our results into Theorem 4.11.
In Example 3 we only assume that our vector field is reversible, and we simplify the Mel’nikov conditions by making an appropriate choice of the sets \({\mathcal{X}},{\mathcal{N}},{\mathcal{R}}\); this is a new result which we believe should enable us to prove existence of quasi-periodic solutions in various settings, see [46] for the case of the fully nonlinear NLS on the circle. We conclude subsection 4.3 by collecting our results into Theorem 4.16.
In Examples 2 and 3 our formulation essentially decouples the dynamics on the approximate invariant torus, which is given by the equations for \(\theta\) and \(y\), from the dynamics in the normal direction \(w\). More precisely, in these cases the invertibility of \({\mathfrak{N}}_{n}\) follows from the following conditions:
* The frequency vector \(\omega_{n}(\xi):=\langle F_{n}^{(\theta)}(\theta,0,0)\rangle\) needs to be diophantine;
* The operator \({\mathfrak{L}}_{n}:=\omega_{n}\cdot\partial_{\theta}+d_{w}F^{(w)}(\theta,0,0)\) acting on \(H^{p}(\mathds{T}_{s}^{d},\ell_{a,p})\) must be “approximatively invertible”.
Note that \({\mathfrak{L}}_{n}\) has the same form as the linearized operator of a forced equation, namely the case where \(F^{(\theta)}=\omega\cdot\partial_{\theta}\) and the frequency vector \(\omega\) plays the rôle of an external parameter. Moreover, if \(F\) is a vector field coming from a PDE (possibly after one step of Birkhoff Normal Form), then \({\mathfrak{L}}_{n}\) differs from the linearization of a composition operator by a finite rank term; this is an essential property in the study of quasi-linear PDEs.
In Example 4 we prove a KAM theorem for a class of Hamiltonian vector fields corresponding to the classical paper [15], but requiring only finite differentiability and imposing milder smallness conditions, comparable to those of [48]. We conclude subsection 4.4 by stating a result on the existence of reducible invariant tori; this is the finitely differentiable version of [15, 48].
Finally in subsection 4.5 we discuss how to apply Examples 2 and 4 to an NLS with Fourier multipliers, respectively in the case of a Lie group and of \([0,\pi]\) with Dirichlet boundary conditions.
**Acknowledgements**. We thank L. Biasco, F. Giuliani, J. Massetti and R. Montalto for useful comments on the manuscript. This research was supported by the European Research Council under FP7 “Hamiltonian PDEs and small divisor problems: a dynamical systems approach”, by PRIN2012 “Variational and perturbative aspects of non linear differential problems”, and by McMaster University.
## 2 Functional setting and main result
In this paragraph we introduce all the relevant notation and tools we need. In particular we define our phase space, the vector fields we will deal with, and the type of changes of variables we need in order to perform our KAM algorithm, as explained in the introduction.
### The Phase Space
Our starting point is an infinite dimensional space with a product structure \(V_{a,p}:=\mathds{C}^{d}\times\mathds{C}^{d_{1}}\times\ell_{a,p}\). Here \(\ell_{a,p}\) is a scale of separable Hilbert spaces endowed with norms \(\|\cdot\|_{a,p}\); in particular this means that \(\|f\|_{a,p}\leq\|f\|_{a^{\prime},p^{\prime}}\) if \((a,p)\leq(a^{\prime},p^{\prime})\) lexicographically.
**Hypothesis 2.1**.: _The space \(\ell_{0,0}\) is endowed with a bilinear scalar product_
\[f,g\in\ell_{0,0}\mapsto f\cdot g\in\mathds{C}.\]
_The scalar product identifies the dual \(\ell_{a,p}^{*}\) with \(\ell_{-a,-p}\) and is such that_
\[\|w\|_{0,0}^{2}=w\cdot\overline{w}\,,\quad|g\cdot f|\leq\|g\|_{a,p}\|f\|_{-a,-p}\,,\quad|g\cdot f|\leq\|g\|_{0,0}\|f\|_{0,0}\leq\|g\|_{a,p}\|f\|_{0,0}.\] (2.1)
_We denote the set of variables \(\mathtt{V}:=\big{\{}\theta_{1},\ldots,\theta_{d},y_{1},\ldots,y_{d_{1}},w\big{\}}\). Moreover we make the following assumption on the scale \(\ell_{a,p}\). We assume that there is a non-decreasing family \((\ell_{K})_{K\geq 0}\) of closed subspaces of \(\ell_{a,p}\) such that \(\cup_{K\geq 0}\ell_{K}\) is dense in \(\ell_{a,p}\) for any \(p\geq 0\), and that there are projectors_
\[\Pi_{\ell_{K}}:\ell_{0,0}\to\ell_{K},\quad\Pi_{\ell_{K}}^{\perp}:=\mathds{1}- \Pi_{\ell_{K}},\] (2.2)
_such that for all \(p,\alpha,\beta\geq 0\) there exists a constant \(\mathtt{C}=\mathtt{C}(a,p,\alpha,\beta)\) such that one has_
\[\|\Pi_{\ell_{K}}w\|_{a+\alpha,p+\beta}\leq\mathtt{C}e^{\alpha K}K ^{\beta}\|w\|_{a,p}\;\;\forall w\in\ell_{a,p},\] (2.3a)
\[\|\Pi_{\ell_{K}}^{\perp}w\|_{a,p}\leq\mathtt{C}e^{-\alpha K}K^{- \beta}\|w\|_{a+\alpha,p+\beta},\;\;\;\forall w\in\ell_{a+\alpha,p+\beta}.\] (2.3b)
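As a concrete (and purely illustrative, not from the paper) instance of (2.3a)-(2.3b), take \(\ell_{K}\) to be the span of the modes with \(\lambda_{j}\leq K\) and \(\Pi_{\ell_{K}}\) the corresponding truncation; then gaining weight on the low modes costs at most a factor \(e^{\alpha K}K^{\beta}\), while the tail is small by the same factor. The hypothetical choices \(\lambda_{j}=j\) and \(w_{j}=e^{-0.05j}/j^{2}\) below are only meant to make the check run.

```python
import numpy as np

lam = np.arange(1.0, 401.0)             # illustrative lambda_j = j, truncated
w = np.exp(-0.05 * lam) / lam**2        # a sequence lying in all the spaces used below

def norm(v, a, p):
    """Truncated (a, p)-weighted norm, as in the definition of ell_{a,p}."""
    return np.sqrt(np.sum(lam**(2.0 * p) * np.exp(2.0 * a * lam) * np.abs(v)**2))

K, a, p, alpha, beta = 50.0, 0.0, 1.0, 0.02, 2.0
low, high = lam <= K, lam > K

# (2.3a): ||Pi_K w||_{a+alpha, p+beta} <= C e^{alpha K} K^beta ||w||_{a,p}   (here C = 1)
print(norm(np.where(low, w, 0.0), a + alpha, p + beta)
      <= np.exp(alpha * K) * K**beta * norm(w, a, p))

# (2.3b): ||Pi_K^perp w||_{a,p} <= C e^{-alpha K} K^{-beta} ||w||_{a+alpha, p+beta}
print(norm(np.where(high, w, 0.0), a, p)
      <= np.exp(-alpha * K) * K**(-beta) * norm(w, a + alpha, p + beta))
```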
We shall need two parameters, \({\mathfrak{p}}_{0}<{\mathfrak{p}}_{1}\). Precisely \({\mathfrak{p}}_{0}>d/2\) is needed in order to have the Sobolev embedding and thus the algebra properties, while \({\mathfrak{p}}_{1}\) will be chosen very large and is needed in order to define the phase space.
**Definition 2.1** (Phase space).: _Given \({\mathfrak{p}}_{1}\) large enough, we consider the toroidal domain_
\[\mathds{T}^{d}_{s}\times D_{a,p}(r):=\mathds{T}^{d}_{s}\times B_{r^{2}}\times\boldsymbol{B}_{r,a,p,{\mathfrak{p}}_{1}}\subset V_{a,p}\,,\] (2.4)
_where_
\[\mathds{T}^{d}_{s}:=\big{\{}\theta\in\mathds{C}^{d}\,:\,{\rm Re}( \theta)\in\mathds{T}^{d},\ \max_{h=1,\ldots,d}|{\rm Im}\,\theta_{h}|<s\big{\}}\,,\]
_and we denote by \(\mathds{T}^{d}:=(\mathds{R}/2\pi\mathds{Z})^{d}\) the \(d\)-dimensional torus._
**Remark 2.2**.: _Note that \(\boldsymbol{B}_{r,a,p,{\mathfrak{p}}_{1}}\) is the intersection of \(\ell_{a,p}\) with the ball of radius \(r\) in \(\ell_{a,{\mathfrak{p}}_{1}}\), thus our phase space clearly depends on the parameter \({\mathfrak{p}}_{1}\). We drop it in the notations since it will be fixed once and for all, while \(a,p,s,r\) vary throughout the algorithm and we carefully need to keep track of them._
Fix some numbers \(s_{0},a_{0}\geq 0\) and \(r_{0}>0\), and let \(s\leq s_{0}\), \(a,a^{\prime}\leq a_{0}\), \(r\leq r_{0}\), \(p,p^{\prime}\geq{\mathfrak{p}}_{0}\). We endow the space \(V_{a,p}\) with the following norm: for \(u=(u^{(\theta)},u^{(y)},u^{(w)})\in\mathds{T}^{d}_{s}\times D_{a,p}(r)\)
\[\|u\|_{V_{a,p}}=\frac{1}{\max\{1,s_{0}\}}\|u^{(\theta)}\|_{\mathds{C}^{d}}+\frac{1}{r_{0}^{\mathtt{s}}}\|u^{(y)}\|_{\mathds{C}^{d_{1}}}+\frac{1}{r_{0}}\|u^{(w)}\|_{\ell_{a,p}}.\] (2.5)
Now consider maps
\[f:\mathds{T}^{d}_{s}\times D_{a^{\prime},p^{\prime}}(r) \to V_{a,p}\] (2.6)
\[(\theta,y,w) \to(f^{(\theta)}(\theta,y,w),f^{(y)}(\theta,y,w),f^{(w)}(\theta,y ,w))\,,\]
with
\[f^{(\mathtt{v})}(\theta,y,w)=\sum_{l\in\mathds{Z}^{d}}f^{(\mathtt{v})}_{l}(y,w )e^{{\rm i}l\cdot\theta}\,,\quad\mathtt{v}\in{\mathtt{V}}\,,\]
where \(f^{(\mathtt{v})}(\theta,y,w)\in\mathds{C}\) for \(\mathtt{v}=\theta_{i},y_{i}\) while \(f^{(w)}(\theta,y,w)\in\ell_{a,p}\). We shall use also the notation \(f^{(\theta)}(\theta,y,w)\in\mathds{C}^{d}\), \(f^{(y)}(\theta,y,w)\in\mathds{C}^{d_{1}}\).
**Remark 2.3**.: _We think of these maps as families of torus embeddings from \(\mathds{T}^{d}_{s}\) into \(V_{a,p}\) depending parametrically on \(y,w\in D_{a^{\prime},p^{\prime}}(r)\), and this is the reason behind the choice of the norm; see below._
We define a norm (pointwise on \(y,w\)) by setting
\[\|f\|_{s,a,p}^{2}:=\|f^{(\theta)}\|^{2}_{s,p}+\|f^{(y)}\|^{2}_{s,p}+\|f^{(w)} \|^{2}_{s,a,p}\] (2.7)
where
\[\|f^{(\theta)}\|_{s,p}:=\left\{\begin{aligned} &\frac{1}{s_{0}} \sup_{i=1,\ldots,d}\|f^{(\theta_{i})}(\cdot,y,w)\|_{H^{p}(\mathds{T}^{d}_{s})} \quad s\leq s_{0}\neq 0\\ &\sup_{i=1,\ldots,d}\|f^{(\theta_{i})}(\cdot,y,w)\|_{H^{p}( \mathds{T}^{d}_{s})}\,,\quad s=s_{0}=0\end{aligned}\right.\] (2.8)
\[\|f^{(y)}\|_{s,p}:=\frac{1}{r_{0}^{\mathtt{s}}}\sum_{i=1}^{d_{1}}\|f^{(y_{i})} (\cdot,y,w)\|_{H^{p}(\mathds{T}^{d}_{s})}\] (2.9)
\[\|f^{(w)}\|_{s,a,p}:=\frac{1}{r_{0}}\left(\|f^{(w)}\|_{H^{p}(\mathds{T}^{d}_{s };\ell_{a,{\mathfrak{p}}_{0}})}+\|f^{(w)}\|_{{H^{{\mathfrak{p}}_{0}}(\mathds{T }^{d}_{s};\ell_{a,p})}}\right),\] (2.10)
where \(H^{p}(\mathds{T}^{d}_{s})=H^{p}(\mathds{T}^{d}_{s};\mathds{C})\) is the standard space of analytic functions in the strip of size \(s\) which are Sobolev on the boundary with norm
\[\|u(\cdot)\|_{H^{p}(\mathds{T}^{d}_{s})}^{2}:=\sum_{l\in\mathds{Z}^{d}}|u_{l}| ^{2}e^{2s|l|}\langle l\rangle^{2p},\qquad\langle l\rangle:=\max\{1,|l|\}.\] (2.11)
If \(s=0\), clearly \(H^{p}(\mathds{T}_{s}^{d})=H^{p}(\mathds{T}^{d})\) is the standard Sobolev space. More generally, given a Banach space \(E\), we denote by \(H^{p}(\mathds{T}^{d}_{s};E)\) the space of analytic functions in the strip of size \(s\) which are Sobolev in \(\theta\) on the boundary with values in \(E\), endowed with the natural norm. Note that trivially \(\|\partial_{\theta}^{p^{\prime}}u\|_{H^{p}(\mathds{T}^{d}_{s})}=\|u\|_{H^{p+p^{\prime}}(\mathds{T}^{d}_{s})}\).
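For concreteness, the following minimal sketch (illustrative only, not from the paper) evaluates the norm (2.11) for \(d=1\) directly from Fourier coefficients; the example function \(u(\theta)=1/(2-\cos\theta)\) is a hypothetical choice whose coefficients decay like \(\rho^{|l|}\) with \(\rho=2-\sqrt{3}\), so it belongs to \(H^{p}(\mathds{T}_{s})\) for every \(s<-\log\rho\approx 1.31\).

```python
import numpy as np

def Hp_strip_norm(u_hat, ells, s, p):
    """||u||_{H^p(T_s)} from Fourier coefficients u_hat at integer modes ells,
    i.e. sqrt( sum_l |u_l|^2 e^{2 s |l|} <l>^{2p} ) with <l> = max(1, |l|)."""
    bracket = np.maximum(1, np.abs(ells))
    return np.sqrt(np.sum(np.abs(u_hat)**2 * np.exp(2.0 * s * np.abs(ells)) * bracket**(2.0 * p)))

# Fourier coefficients of u(theta) = 1/(2 - cos(theta)):  u_l = rho^{|l|} / sqrt(3).
ells = np.arange(-60, 61)
rho = 2.0 - np.sqrt(3.0)
u_hat = rho**np.abs(ells) / np.sqrt(3.0)

print(Hp_strip_norm(u_hat, ells, s=0.0, p=2.0))   # plain Sobolev norm (s = 0)
print(Hp_strip_norm(u_hat, ells, s=0.5, p=2.0))   # analytic norm on a strip of width 0.5
```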
**Remark 2.4**.: _We can interpret (2.10) as follows. We associate \(f^{(w)}\) with a function of \(\theta\) defined as_
\[{\mathtt{f}}_{p}(\theta):=\sum_{l\in\mathds{Z}^{d}}\|f^{(w)}_{l}(y,w)\|_{a,p}e ^{{\rm i}\theta\cdot l},\] (2.12)
_and then we have_
\[\|f^{(w)}\|_{s,a,p}=\|{\mathtt{f}}_{{\mathfrak{p}}_{0}}\|_{H^{p}(\mathds{T}^{d }_{s})}+\|{\mathtt{f}}_{p}\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s})}.\]
**Remark 2.5**.: _If \(\ell_{a,p}=H^{p}(\mathds{T}^{r}_{a})\) then fixing \({\mathfrak{p}}_{0}\geq(d+r)/2\) we have that \(\|\cdot\|_{s,a,p}\) in (2.10) is equivalent to \(\|\cdot\|_{H^{p}(\mathds{T}^{d}_{s}\times\mathds{T}^{r}_{a})}\)_
By recalling the definition in (2.5) we have that
\[\|f\|_{s,a,p}\sim\|f\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d};V_{a,p})\cap{H^{ p}(\mathds{T}^{d};V_{a,{\mathfrak{p}}_{0}})}}:=\|f\|_{H^{p}(\mathds{T}^{d};V_{ a,{\mathfrak{p}}_{0}})}+\|f\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d};V_{a,p})}\] (2.13)
**Remark 2.6**.: _Formula (2.7) depends on the point \((y,w)\), hence it is not a norm for vector fields; this is very natural in the context of Sobolev regularity. Indeed in the scale of domains \(\mathds{T}^{d}_{s}\times D_{a,p}(r)\) one controls only the \({\mathfrak{p}}_{1}\) norm of \(w\) (see Definition 2.1), and hence there is no reason why one should have_
\[\sup_{(y,w)\in D_{a,p}(r)}\|f\|_{s,a,p}<\infty\,.\]
_Naturally if one fixes \(p={\mathfrak{p}}_{1}\) one may define as norm of \(F\) the quantity \(\sup_{(y,w)\in D_{a,{\mathfrak{p}}_{1}}(r)}\|F\|_{s,a,{\mathfrak{p}}_{1}}\)._
_The motivation for choosing the norm (2.7) is the following. Along the algorithm we need to control commutators of vector fields. In the analytic case, i.e. if \(s_{0}\neq 0\), one may keep \(p\) fixed and control the derivatives via Cauchy estimates by reducing the analyticity, so the phase space can be defined in terms of the fixed \(p\). However, since we do not want to add the hypothesis \(s_{0}\neq 0\), we have to leave \(p\) as a parameter and use tameness properties of the vector field (see Definition 2.13) as in the Sobolev Nash-Moser schemes._
It is clear that any \(f\) as in (2.6) can be identified with an “unbounded” vector field by writing
\[f\leftrightarrow\sum_{\mathtt{v}\in{\mathtt{V}}}f^{(\mathtt{v})}(\theta,y,w) \partial_{\mathtt{v}},\] (2.14)
where the symbol \(f^{(\mathtt{v})}(\theta,y,w)\partial_{\mathtt{v}}\) has the obvious meaning for \(\mathtt{v}=\theta_{i},y_{i}\), while for \(\mathtt{v}=w\) it is defined through its action on differentiable functions \(G:\ell_{a,p}\to\mathds{C}\) as
\[f^{(w)}(\theta,y,w)\partial_{w}G:=d_{w}G[f^{(w)}(\theta,y,w)].\]
Similarly, provided that \(|f^{(\theta)}(\theta,y,w)|\) is small for all \((\theta,y,w)\in\mathds{T}^{d}_{s}\times D_{a,p}(r)\) we may lift \(f\) to a map
\[\Phi:=(\theta+f^{(\theta)},y+f^{(y)},w+f^{(w)}):\,\mathds{T}^{d}_{s}\times D_{ a^{\prime},p^{\prime}}\to\mathds{T}^{d}_{s_{1}}\times\mathds{C}^{d_{1}}\times \ell_{a,p}\,,\quad\text{for some}\;s_{1}\geq s\,,\] (2.15)
and if we set \(\|\theta\|_{s,a,p}:=1\) we can write
\[\|\Phi^{(\mathtt{v})}\|_{s,a,p}=\|\mathtt{v}\|_{s,a,p}+\|f^{(\mathtt{v})}\|_{s ,a,p}\,,\;\mathtt{v}=\theta,y,w\,.\]
Note that
\[\|y\|_{s,a,p}=r_{0}^{-\mathtt{s}}|y|_{1}\,,\quad\,\|w\|_{s,a,p}=r_{0}^{-1}\|w \|_{a,p}.\]
**Remark 2.7**.: _There exists \({\mathtt{c}}={\mathtt{c}}(d)\) such that if \(\|f\|_{s,a,{\mathfrak{p}}_{1}}\leq{\mathtt{c}}\rho\) one has_
\[\Phi:\mathds{T}^{d}_{s}\times D_{a+\rho a_{0},p}(r)\to\mathds{T}^{d}_{s+\rho s _{0}}\times D_{a,p}(r+\rho r_{0}).\]
We are interested in vector fields defined on a scale of Hilbert spaces; precisely we shall fix \(\rho,\nu,q\geq 0\) and consider vector fields
\[F:\mathds{T}^{d}_{s}\times D_{a+\rho a_{0},p+\nu}(r)\times{\mathcal{O}}\to V_{ a,p}\,,\] (2.16)
for some \(s<s_{0}\), \(a+\rho a_{0}\leq a_{0}\), \(r\leq r_{0}\) and all \(p+\nu\leq q\). Moreover we require that \({\mathfrak{p}}_{1}\) in Definition 2.1 satisfies \({\mathfrak{p}}_{1}\geq{\mathfrak{p}}_{0}+\nu+1\).
**Definition 2.8**.: _Fix \(0\leq\rho,{\varrho}\leq 1/2\), and consider two differentiable maps \(\Phi=\mathds{1}+f\), \(\Psi=\mathds{1}+g\) as in (2.15) such that for all \(p\geq{\mathfrak{p}}_{0}\), \(2\rho s_{0}\leq s\leq s_{0}\), \(2\rho r_{0}\leq r\leq r_{0}\) and \(0\leq a\leq a_{0}(1-2{\varrho})\) one has_
\[\Phi,\Psi:\mathds{T}^{d}_{s-\rho s_{0}}\times D_{a+{\varrho}a_{0},p}(r-\rho r_ {0})\to\mathds{T}^{d}_{s}\times D_{a,p}(r).\] (2.17)
_If_
\[\mathds{1}=\Psi\circ\Phi:\mathds{T}^{d}_{s-2\rho s_{0}}\times D_{ a+2{\varrho}a_{0},p} (r-2\rho r_{0}) \longrightarrow \ \mathds{T}^{d}_{s}\times D_{a,p}(r)\] (2.18)
\[\ (\theta,y,w) \longmapsto \ (\theta,y,w)\]
_we say that \(\Psi\) is a left inverse of \(\Phi\) and write \({\Phi}^{-1}:=\Psi\)._
_Moreover fix \(\nu\geq 0\), \(0\leq{\varrho}^{\prime}\leq 1/2\). Then for any \(F:\mathds{T}^{d}_{s}\times D_{a+{\varrho}^{\prime}a_{0},p+\nu}(r)\to V_{a,p}\), with \(0\leq a\leq a_{0}(1-2{\varrho}-{\varrho}^{\prime})\), we define the “pushforward” of \(F\) as_
\[\Phi_{*}F:=d\Phi({\Phi}^{-1})[F({\Phi}^{-1})]:\mathds{T}^{d}_{s-2\rho s_{0}} \times D_{a+(2{\varrho}+{\varrho}^{\prime})a_{0},p+\nu}(r-2\rho r_{0})\to V_{a ,p}\,.\] (2.19)
We need to introduce parameters \(\xi\in{\mathcal{O}}_{0}\), a compact set in \(\mathds{R}^{\mathfrak{d}}\). Given any compact \({\mathcal{O}}\subseteq{\mathcal{O}}_{0}\) we consider Lipschitz families of vector fields
\[F:\mathds{T}^{d}_{s}\times D_{a^{\prime},p^{\prime}}(r)\times{\mathcal{O}}\to V _{a,p}\,,\] (2.20)
and say that they are _bounded_ vector fields when \(p=p^{\prime}\) and \(a=a^{\prime}\). Given a positive number \(\gamma\) we introduce the weighted Lipschitz norm
\[\|F\|_{\vec{v},p}=\|F\|_{\gamma,{\mathcal{O}},s,a,p}:=\sup_{\xi\in{\mathcal{O} }}\|F(\xi)\|_{s,a,p}+\gamma\sup_{\xi\neq\eta\in{\mathcal{O}}}\frac{\|F(\xi)-F( \eta)\|_{s,a,p-1}}{|\xi-\eta|}\,.\] (2.21)
and we shall drop the labels \(\vec{v}=(\gamma,{\mathcal{O}},s,a)\) when this does not cause confusion. More generally, given a Banach space \(E\), we define the Lipschitz norm as
\[\|F\|_{\gamma,{\mathcal{O}},E}:=\sup_{\xi\in{\mathcal{O}}}\|F(\xi)\|_{E}+ \gamma\sup_{\xi\neq\eta\in{\mathcal{O}}}\frac{\|F(\xi)-F(\eta)\|_{E}}{|\xi- \eta|}\,.\] (2.22)
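As a purely illustrative aside (not from the paper), the following sketch evaluates a discrete version of the weighted Lipschitz norm (2.22) for a family \(\xi\mapsto F(\xi)\) sampled on a finite grid of scalar parameters, with the Banach norm \(\|\cdot\|_{E}\) replaced by a Euclidean norm; the grid, the family and the value of \(\gamma\) are hypothetical.

```python
import numpy as np

def lipschitz_norm(F_samples, xis, gamma):
    """Discrete analogue of sup_xi ||F(xi)|| + gamma * sup_{xi != eta} ||F(xi)-F(eta)|| / |xi-eta|."""
    sup_norm = max(np.linalg.norm(F) for F in F_samples)
    lip = max(np.linalg.norm(F_samples[i] - F_samples[j]) / abs(xis[i] - xis[j])
              for i in range(len(xis)) for j in range(len(xis)) if i != j)
    return sup_norm + gamma * lip

xis = np.linspace(0.0, 1.0, 11)                      # hypothetical parameter grid
F_samples = [np.array([np.sin(x), x**2]) for x in xis]
print(lipschitz_norm(F_samples, xis, gamma=0.1))
```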
**Remark 2.9**.: _Note that in some applications one might need to assume a higher regularity in \(\xi\). In this case it is convenient to define the weighted \(q_{1}\)-norm_
\[\|F\|_{\vec{v},p}=\|F\|_{\gamma,{\mathcal{O}},s,a,p}:=\sum_{\begin{subarray}{c }h\in\mathds{N}^{\mathfrak{d}}\\ |h|\leq q_{1}\end{subarray}}\gamma^{|h|}\sup_{\xi\in{\mathcal{O}}}\|\partial_{ \xi}^{h}F(\xi)\|_{s,a,p-|h|}.\]
_where the derivatives are intended in the sense of Whitney._
_Throughout the paper we shall always use the Lipschitz norm (2.21), although all the properties hold verbatim also for the \(q_{1}\)-norm._
**Definition 2.10**.: _We shall denote by \({\mathcal{V}}_{\vec{v},p}\), with \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\), the space of vector fields as in (2.16) with \({\varrho}=0\). By slight abuse of notation we denote the norm by \(\|\cdot\|_{\gamma,{\mathcal{O}},s,a,p}=\|\cdot\|_{\vec{v},p}\)._
### Polynomial decomposition
In \({\mathcal{V}}_{\vec{v},p}\) we identify the closed _monomial_ subspaces
\[\mathcal{V}^{(\mathtt{v},0)}:=\{f\in{\mathcal{V}}_{\vec{v},p}\,: \;f=f^{(\mathtt{v},0)}(\theta)\partial_{\mathtt{v}}\}\,,\quad\mathtt{v}\in \mathtt{V}\] (2.23)
\[\mathcal{V}^{(\mathtt{v},\mathtt{v}^{\prime})}:=\{f\in{\mathcal{V }}_{\vec{v},p}\,:\;f=f^{(\mathtt{v},\mathtt{v}^{\prime})}(\theta)[\mathtt{v}^{ \prime}]\partial_{\mathtt{v}}\}\,,\quad\mathtt{v}\in\mathtt{V}\,,\quad\mathtt{ v}^{\prime}=y_{1},\ldots,y_{d_{1}},w\]
and analogously \(\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}:=\{f\in{\mathcal{V}}_{\vec{v},p}\,:\;f=f^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}(\theta)[\mathtt{v}_{1},\dots,\mathtt{v}_{k}]\partial_{\mathtt{v}}\}\) for higher degrees, where \(\mathtt{v}_{1},\ldots,\mathtt{v}_{k}\in{\mathtt{U}}:=\{y_{1},\ldots,y_{d_{1}},w\}\) are ordered and \(f^{(\mathtt{v},\mathtt{v}_{1},\ldots,\mathtt{v}_{k})}\) is a multilinear form symmetric w.r.t. repeated variables.
As said after (2.6) it will be convenient to use also vector notation so that, for instance
\[f^{(y,y)}(\theta)y\cdot\partial_{y}\in{\mathcal{V}}^{(y,y)}=\bigoplus_{i\leq j =1,\ldots,d_{1}}{\mathcal{V}}^{(y_{i},y_{j})}\]
with \(f^{(y,y)}(\theta)\) a \(d_{1}\times d_{1}\) matrix.
Note that the polynomial vector fields of (maximal) degree \(k\) are
\[{\mathcal{P}}_{k}:=\bigoplus_{\mathtt{v}\in{\mathtt{V}}}\,\bigoplus_{j=0}^{k} \bigoplus_{(\mathtt{v}_{1},\ldots,\mathtt{v}_{j})\in{\mathtt{U}}}\mathcal{V}^{ (\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j})}\,,\]
so that, given a polynomial \(F\in{\mathcal{P}}_{k}\) we may define its “projection” onto a monomial subspace \(\Pi_{\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j})}}\) in the natural way.
Since we are not working on spaces of polynomials, but on vector fields with finite regularity, we need some more notation. Given a \(C^{k+1}\) vector field \(F\in{\mathcal{V}}_{\vec{v},p}\), we introduce the notation
\[F^{(\mathtt{v},0)}(\theta):=F^{(\mathtt{v})}(\theta,0,0),\quad F^{(\mathtt{v}, \mathtt{v}^{\prime})}(\theta)[\cdot]:=d_{\mathtt{v}^{\prime}}F^{(\mathtt{v})}( \theta,0,0)[\cdot],\quad\mathtt{v}\in\mathtt{V}\,,\quad\mathtt{v}^{\prime}=y_{ 1},\ldots,y_{d_{1}},w\] (2.24)
\[F^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}(\theta)[\cdot,\dots,\cdot ]=\frac{1}{\alpha(\mathtt{v}_{1},\ldots,\mathtt{v}_{k})!}\Big{(}\prod_{i=1}^{k }d_{\mathtt{v}_{i}}\Big{)}F^{(\mathtt{v})}(\theta,0,0)[\cdot,\dots,\cdot]\,, \quad\mathtt{v}\in\mathtt{V}\,,\quad\mathtt{v}_{i}=y_{1},\ldots,y_{d_{1}},w\,,\]
where we assume that \(\mathtt{v}_{1},\ldots,\mathtt{v}_{k}\) are ordered and the \((d_{1}+1)\)-dimensional vector \(\alpha(\mathtt{v}_{1},\ldots,\mathtt{v}_{k})\) denotes the multiplicity of each component.
By the Taylor approximation formula, any vector field in \({\mathcal{V}}_{\vec{v},p}\) which is \(C^{k+1}\) in \(y,w\) may be written in a unique way as the sum of its Taylor polynomial in \({\mathcal{P}}_{k}\) plus a \(C^{k+1}\) (in \(y,w\)) vector field with a zero of order at least \(k+1\) at \(y=0\), \(w=0\). We think of this as a direct sum of vector spaces and introduce the notation
\[\Pi_{{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}}F:=F^{( \mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}(\theta)[\mathtt{v}_{1},\dots, \mathtt{v}_{k}]\,,\] (2.25)
we refer to such operators as _projections_.
**Definition 2.11**.: _We identify the vector fields in \({\mathcal{V}}_{\vec{v},p}\) which are \(C^{k+1}\) in \(y,w\), with the direct sum_
\[{\mathcal{W}}_{\vec{v},p}^{(k)}={\mathcal{P}}_{k}\oplus{\mathcal{R}}_{k}\,,\]
_where \({\mathcal{R}}_{k}\) is the space of \(C^{k+1}\) (in \(y,w\)) vector fields with a zero of order at least \(k+1\) at \(y=0\), \(w=0\). On \({\mathcal{W}}_{\vec{v},p}^{(k)}\) we induce the natural norm for direct sums, namely for_
\[f=\sum_{\mathtt{v}\in{\mathtt{V}}}\,\sum_{j=0}^{k}\sum_{(\mathtt{v}_{1},\ldots,\mathtt{v}_{j})\in{\mathtt{U}}}f^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j})}(\theta)[\mathtt{v}_{1},\dots,\mathtt{v}_{j}]\partial_{\mathtt{v}}+f_{{\mathcal{R}}_{k}}\,,\qquad f_{{\mathcal{R}}_{k}}\in{\mathcal{R}}_{k}\,,\]
_we set_
\[\|f\|_{\vec{v},p}^{(k)}:=\sum_{\mathtt{v}\in{\mathtt{V}}}\,\sum_{j=0}^{k}\sum_ {(\mathtt{v}_{1},\ldots,\mathtt{v}_{j})\in{\mathtt{U}}}\|f^{(\mathtt{v}, \mathtt{v}_{1},\dots,\mathtt{v}_{j})}(\cdot)[\mathtt{v}_{1},\dots,\mathtt{v}_{ j}]\|_{\vec{v},p}+\|f_{{\mathcal{R}}_{k}}\|_{\vec{v},p}\,.\] (2.26)
**Remark 2.12**.: _Note that with this definition if \(k=\infty\) we are considering analytic maps with the norm_
\[\sum_{\mathtt{v}\in{\mathtt{V}}}\,\sum_{j=0}^{\infty}\sum_{(\mathtt{v}_{1}, \ldots,\mathtt{v}_{j})\in{\mathtt{U}}}\|f^{(\mathtt{v},\mathtt{v}_{1},\dots, \mathtt{v}_{j})}(\cdot)[\mathtt{v}_{1},\dots,\mathtt{v}_{j}]\|_{\vec{v},p}.\]
We can and shall introduce in the natural way the polynomial subspaces and the norm (2.26) also for maps \(\Phi=(\theta+f^{(\theta)},y+f^{(y)},w+f^{(w)})\) with
\[\Phi:\mathds{T}^{d}_{s}\times D_{a^{\prime},p^{\prime}}(r)\times{\mathcal{O}} \to\mathds{T}^{d}_{s_{1}}\times D_{a,p}(r_{1})\,,\]
since the Taylor formula holds also for functions of this kind.
We also denote
\[\langle{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\cdots,\mathtt{v}_{k})}\rangle:=\{f\in{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\ldots,\mathtt{v}_{k})}\!:f=\langle f^{(\mathtt{v},\mathtt{v}_{1},\ldots,\mathtt{v}_{k})}\rangle\cdot\partial_{\mathtt{v}}\},\] (2.27)
\[{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\cdots,\mathtt{v}_{k})}_{0}:=\{f\in{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\cdots,\mathtt{v}_{k})}\!:f=(f^{(\mathtt{v},\mathtt{v}_{1},\cdots,\mathtt{v}_{k})}-\langle f^{(\mathtt{v},\mathtt{v}_{1},\cdots,\mathtt{v}_{k})}\rangle)\cdot\partial_{\mathtt{v}}\}\,,\]
where \(\langle f\rangle:=\fint_{\mathds{T}^{d}}f(\theta)d\theta\).
**Tame vector fields**. We now define vector fields behaving “tamely” when composed with maps \(\Phi\). Let us fix a degree \(\mathtt{n}\in\mathds{N}\) and thus a norm
\[\|f\|_{\vec{v},p}=\|f\|_{\vec{v},p}^{({\mathtt{n}})}\,.\] (2.28)
**Definition 2.13**.: _Fix a large \(q\geq{\mathfrak{p}}_{1}\), \(k\geq 0\) and a set \({\mathcal{O}}\). Consider a \(C^{k+\mathtt{n}+1}\) vector field \(F\) as in (2.16)._
_We say that \(F\) is \(C^{k}\)-tame (up to order \(q\)) if there exists a scale of constants \(C_{\vec{v},p}(F)\), with \(C_{\vec{v},p}(F)\leq C_{\vec{v},p_{1}}(F)\) for \(p\leq p_{1}\), such that the following holds._
_For all \({\mathfrak{p}}_{0}\leq p\leq p_{1}\leq q\) consider any \(C^{{\mathtt{n}}+1}\) map \(\Phi=(\theta+f^{(\theta)},y+f^{(y)},w+f^{(w)})\) with \(\|f\|_{\vec{v}^{\prime},{\mathfrak{p}}_{1}}<1/2\) and_
\[\Phi:\mathds{T}^{d}_{s^{\prime}}\times D_{a_{1},p_{1}}(r^{\prime})\times{ \mathcal{O}}\to\mathds{T}^{d}_{s}\times D_{a,p+\nu}(r)\,,\quad\text{ for some }\quad r^{\prime}\leq r\,,s^{\prime}\leq s;\]
_and denote \(\vec{v}^{\prime}=(\gamma,{\mathcal{O}},s^{\prime},a,r^{\prime})\). Then for any \(m=0,\ldots,k\) and any \(m\) vector fields_
\[h_{1},\dots,h_{m}:\;\mathds{T}^{d}_{s^{\prime}}\times D_{a_{1},p_{1}}(r^{ \prime})\times{\mathcal{O}}\to V_{a,p+\nu},\]
_one has_
_for all \((y,w)\in D_{a_{1},p_{1}}(r^{\prime})\) and \(p\leq q\). Here \(d_{\mathtt{U}}F\) is the differential of \(F\) w.r.t. the variables \({\mathtt{U}}:=\{y_{1},\ldots,y_{d_{1}},w\}\) and the norm is the one defined in (2.28)._
_We say that a_ bounded _vector field_ \(F\) _is tame if the conditions (T_\({}_{m}\)_) above hold with_ \(\nu=0\)_. We call_ \(C_{\vec{v},p}(F)\) _the_ \(p\)_-_tameness constants of \(F\)_._
**Remark 2.14**.: _Note that two regularity indices appear in the definition above: \(k\), the maximal regularity in \(y,w\), and \(q\), the regularity in \(\theta\)._
**Remark 2.15**.: _Definition 2.13 is quite natural if one has to deal with functions and vector fields which are merely differentiable. In order to clarify what we have in mind we consider the following example. Let \(L\) be a linear operator_
\[L:H^{p}(\mathds{T}^{d})\to H^{p}(\mathds{T}^{d})\,.\]
_In principle there is no reason for \(L\) to satisfy a bound like_
\[\|Lu\|_{p}\lessdot\|L\|_{{\mathcal{L}},p}\|u\|_{{\mathfrak{p}}_{0}}+\|L\|_{{ \mathcal{L}},{\mathfrak{p}}_{0}}\|u\|_{p}\] (2.29)
_where \(\|\cdot\|_{{\mathcal{L}},p}\) is the \(H^{p}\)-operator norm. However if \(L=M_{a}\) is a multiplication operator, i.e. \(M_{a}u=au\) for some \(a\in H^{p}(\mathds{T}^{d})\) then it is well known that_
\[\|M_{a}u\|_{p}\leq\kappa_{p}(\|a\|_{p}\|u\|_{{\mathfrak{p}}_{0}}+\|a\|_{{ \mathfrak{p}}_{0}}\|u\|_{p})\]
_which is (2.29) since \(\|a\|_{p}=\|M_{a}\|_{{\mathcal{L}},p}\). In this case we may set, for all \(p\leq q\), \(C_{p}(M_{a})=\kappa_{q}\|a\|_{p}\), where \(q\) is the highest possible regularity. This is of course a trivial (though very common in the applications) example in which the tameness constants and the operator norm coincide; we preferred to introduce Definition 2.13 since it describes the most general class in which we are able to prove our result._
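A minimal numerical sketch of this tame estimate on \(\mathds{T}^{1}\) (illustrative only; the random trigonometric polynomials, the bandwidth and the choice \({\mathfrak{p}}_{0}=1\), \(p=4\) are all hypothetical) is the following: the ratio between \(\|au\|_{p}\) and \(\|a\|_{p}\|u\|_{{\mathfrak{p}}_{0}}+\|a\|_{{\mathfrak{p}}_{0}}\|u\|_{p}\) stays of order one, even when \(a\) and \(u\) have sizeable high-frequency content.

```python
import numpy as np

rng = np.random.default_rng(2)
M, grid = 8, 64                          # bandwidth M; grid fine enough to avoid aliasing
theta = 2 * np.pi * np.arange(grid) / grid

def trig_poly(coeffs):
    """Real trigonometric polynomial with the given (2M+1) random coefficients."""
    ells = np.arange(-M, M + 1)
    return np.real(sum(c * np.exp(1j * l * theta) for c, l in zip(coeffs, ells)))

def norm_p(f_vals, p):
    """Sobolev norm ||f||_p computed from the discrete Fourier coefficients."""
    f_hat = np.fft.fft(f_vals) / grid
    ells = np.fft.fftfreq(grid, d=1.0 / grid)
    return np.sqrt(np.sum(np.abs(f_hat)**2 * np.maximum(1.0, np.abs(ells))**(2.0 * p)))

p0, p = 1.0, 4.0
for _ in range(3):
    a = trig_poly(rng.standard_normal(2 * M + 1))
    u = trig_poly(rng.standard_normal(2 * M + 1))
    lhs = norm_p(a * u, p)
    rhs = norm_p(a, p) * norm_p(u, p0) + norm_p(a, p0) * norm_p(u, p)
    print(lhs / rhs)                      # stays O(1): the tame product estimate
```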
**Remark 2.16**.: _It is trivial to note that, given a sequence \(C_{\vec{v},p}(F)\) of tameness constants for the field \(F\), then any increasing sequence \(\tilde{C}_{\vec{v},p}(F)\) such that \(C_{\vec{v},p}(F)\leq\tilde{C}_{\vec{v},p}(F)\) for any \(p\) is a possible choice of tameness constants for \(F\). This leads to the natural question of finding a sharp sequence which then could be used as norm. Throughout the paper we shall write \(C_{\vec{v},p}(F)\leq C\) to mean that the tameness constants of \(F\) can (and shall) be chosen in order to satisfy the bound._
### Normal form decomposition
**Definition 2.17** (\(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\)**-decomposition)**.: _Let \({\mathcal{N}},{\mathcal{X}}\subseteq{\mathcal{P}}^{(\mathtt{n})}\) have the following properties:_
* _if_ \({\mathcal{N}}\cap\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j}) }\neq\emptyset\) _then either_ \({\mathcal{N}}\cap\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j}) }=\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j})}\) _or_ \({\mathcal{N}}\cap\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j}) }=\langle\mathcal{V}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{j})}\rangle\) _for all_ \(j\leq{\mathtt{n}}\)_;_
* _one has_ \({\mathcal{V}}^{(\mathtt{v},0)}\subset{\mathcal{X}}\) _for_ \(\mathtt{v}=y,w\)_._
_We then decompose_
\[{\mathcal{W}}_{\vec{v},p}=C^{2{\mathtt{n}}+3}\cap{\mathcal{W}}^{({\mathtt{n}}) }_{\vec{v},p}:={\mathcal{N}}\oplus{\mathcal{X}}\oplus{\mathcal{R}}\]
_where \(C^{2{\mathtt{n}}+3}\) is the set of vector fields with \((2{\mathtt{n}}+3)\)-regularity in \(y,w\), \({\mathcal{R}}\) contains all of \({\mathcal{R}}_{\mathtt{n}}\) and all the polynomials generated by monomials not in \({\mathcal{N}}\oplus{\mathcal{X}}\). We shall denote \(\Pi_{\mathcal{R}}:=\mathds{1}-\Pi_{\mathcal{N}}-\Pi_{\mathcal{X}}\) and more generally for \({\mathcal{S}}={\mathcal{N}},{\mathcal{X}},{\mathcal{R}}\) we shall denote \(\Pi^{\perp}_{{\mathcal{S}}}:=\mathds{1}-\Pi_{\mathcal{S}}\)._
The following Definition is rather involved since we are trying to make our result as general as possible. However, in the applications we have in mind it turns out that one can choose \({\mathcal{A}}_{s,a,p}\) satisfying the properties of the Definition below in an explicit and natural way; see Section 4.
**Definition 2.18** (**Regular vector fields)**.: _Given a subset \({\mathcal{A}}_{s,a,p}\subset{\mathcal{P}}^{(\mathtt{n})}\) of polynomial vector fields \(f:\mathds{T}^{d}_{s}\times D_{a,p+\nu}(r)\to V_{a,p}\) we say that \({\mathcal{A}}\) is a space of regular vector fields if the following holds._
_Given a compact set \({\mathcal{O}}\subseteq{\mathcal{O}}_{0}\) we denote by \({\mathcal{A}}_{\vec{v},p}\) with \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) the set of Lipschitz families \({\mathcal{O}}\to{\mathcal{A}}_{s,a,p}\). We require that \({\mathcal{A}}_{s,a,p}\) is a scale of Hilbert spaces w.r.t. a norm \(|\cdot|_{s,a,p}=|\cdot|_{s,a,p,\nu}\) and we denote by \(|\cdot|_{\vec{v},p}\) the corresponding \(\gamma\)-weighted Lipschitz norm._
1. \({\mathcal{V}}^{(\mathtt{v},0)}\in{\mathcal{A}}_{\vec{v},p}\) _for_ \(\mathtt{v}=y,w\)_, while either_ \({\mathcal{A}}_{\vec{v},p}^{(\theta)}=\emptyset\) _or_ \({\mathcal{A}}_{\vec{v},p}^{(\theta)}={\mathcal{V}}^{(\theta,0)}_{0}\)_._
2. _All_ \(f\in{\mathcal{A}}_{\vec{v},q}\) _are_ \(C^{k}\) _tame up to order_ \(q\) _for all_ \(k\)_, with tameness constants_ \[C_{\vec{v},p}(f)=\mathtt{C}\,|f|_{\vec{v},p}.\] (2.30) _for some_ \(\mathtt{C}\) _depending on_ \({\mathfrak{p}}_{0}\)_, on the dimensions_ \(d,d_{1}\) _and on the maximal regularity_ \(q\)_. Moreover_ \(|\cdot|_{{\mathfrak{p}}_{1}}\) _is a_ sharp _tameness constant, namely there exists a_ \(\mathtt{c}\) _depending on_ \({\mathfrak{p}}_{0},{\mathfrak{p}}_{1},d,d_{1}\) _such that_ \[|f|_{\vec{v},{\mathfrak{p}}_{1}}\leq\mathtt{c}\,C_{\vec{v},{\mathfrak{p}}_{1}}(f)\] (2.31) _for any_ \(f\) _and any tameness constant_ \(C_{\vec{v},{\mathfrak{p}}_{1}}(f)\)_._
3. _For_ \(K>1\) _there exists smoothing projection operators_ \(\Pi_{K}:{\mathcal{A}}_{\vec{v},p}\to{\mathcal{A}}_{\vec{v},p}\) _such that_ \(\Pi_{K}^{2}=\Pi_{K}\)_,_ _for_ \(p_{1}\geq 0\)_, one has_ \[\qquad|\Pi_{K}F|_{\vec{v},p+p_{1}}\leq\mathtt{C}K^{{p_{1}}}|F|_{ \vec{v},p}\] (2.32) \[\qquad|F-\Pi_{K}F|_{\vec{v},p}\leq\mathtt{C}K^{{-p_{1}}}|F|_{\vec {v},p+p_{1}}\] (2.33) _finally if_ \(C_{\vec{v},p}(F)\) _is any tameness constant for_ \(F\) _then we may choose a tameness constant such that_ \[\qquad C_{\vec{v},p+p_{1}}(\Pi_{K}F)\leq\mathtt{C}K^{{p_{1}}}C_{\vec{v},p}(F)\qquad\] (2.34) _We denote by_ \(E^{(K)}\) _the subspace where_ \(\Pi_{K}E^{(K)}=E^{(K)}\)_._
4. _Let_ \({\mathcal{B}}\) _be the set of bounded vector fields in_ \({\mathcal{A}}_{s,a,p}\ni f:\mathds{T}^{d}_{s}\times D_{a,p}(r)\to V_{a,p}\) _with the corresponding norm_ \(|\cdot|_{s,a,p,0}\)_. For all_ \(f\in{\mathcal{B}}\) _such that_ \[|f|_{\vec{v},{\mathfrak{p}}_{1}}\leq{\mathtt{c}}{\rho},\] (2.35) _with_ \(\rho>0\) _small enough, the following holds:_ _(i) The map_ \(\Phi:=\mathds{1}+f\) _is such that_ \[\Phi:\mathds{T}^{d}_{s}\times D_{a,p}(r)\times{\mathcal{O}}\longrightarrow \mathds{T}^{d}_{s+\rho s_{0}}\times D_{a,p}(r+\rho r_{0}).\] (2.36) _(ii) There exists a vector field_ \(h\in{\mathcal{B}}\) _such that_ * \(|h|_{\vec{v}_{1},p}\leq 2|f|_{\vec{v},p}\)_,_ _the map_ \(\Psi:=\mathds{1}+h\) _is such that_ \[\Psi:\mathds{T}^{d}_{s-\rho s_{0}}\times D_{a,p}(r-\rho r_{0})\times{\mathcal{ O}}\to\mathds{T}^{d}_{s}\times D_{a,p}(r).\] (2.37) * _for all_ \((\theta,y,w)\in\mathds{T}^{d}_{s-2\rho s_{0}}\times D_{a,{\mathfrak{p}}_{1}}(r -2\rho r_{0})\) _one has_ \[\Psi\circ\Phi(\theta,y,w)=(\theta,y,w).\] (2.38)
5. _Given any regular bounded vector field_ \(g\in\mathcal{B}\)_,_ \(p\geq{\mathfrak{p}}_{1}\) _with_ \(|g|_{\vec{v},{\mathfrak{p}}_{1}}\leq{\mathtt{c}}\rho\)_, then for_ \(0\leq t\leq 1\) _there exists_ \(f_{t}\in\mathcal{B}\) _such that the time_\(-t\) _map of the flow of_ \(g\) _is of the form_ \(\mathds{1}+f_{t}\)_; moreover we have_ \(|f_{t}|_{\vec{v},p}\leq 2|g|_{\vec{v}_{1},p}\) _where_ \(\vec{v}_{1}=(\gamma,{\mathcal{O}},s-\rho s_{0},a,r)\)_._
**Definition 2.19**.: _Consider a subspace \(\mathcal{E}\) of \({\mathcal{V}}_{\vec{v},p}\). We say that \({\mathcal{E}}\) is compatible with the \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\)-decomposition if_
* _any_ \(F\in{\mathcal{E}}\cap{\mathcal{X}}\) _is a regular vector field;_
* _for any_ \(F\in{\mathcal{E}}\cap{\mathcal{P}}_{\mathtt{n}}\) _one has_ \(\Pi_{{\mathcal{U}}}F\in{\mathcal{E}}\) _for_ \({\mathcal{U}}={\mathcal{N}},{\mathcal{X}},E^{(K)}\)_;_
* _denoting_ \[{{\mathcal{B}}}_{\mathcal{E}}:=\Big{\{}f\in{\mathcal{X}}\cap{\mathcal{B}}\,:\; \Phi_{f}^{t}\mbox{ is }\;\mathcal{E}\;{\mbox{ preserving for all }}t\in[0,1] \Big{\}}\subset{\mathcal{X}}\cap{\mathcal{B}}\,,\] (2.39) _one has_ \[\forall g\in{\mathcal{B}}_{\mathcal{E}},F\in{\mathcal{E}}:\quad[g,F]\in{ \mathcal{E}}\,,\quad\forall g,h\in{\mathcal{B}}_{\mathcal{E}}:\quad\Pi_{ \mathcal{X}}[g,h]\in{\mathcal{B}}_{\mathcal{E}}\,.\] (2.40)
**Definition 2.20** (**Normal form)**.: _We say that \(N_{0}\in{\mathcal{N}}\cap{\mathcal{E}}\) is a diagonal vector field if for all \(K>1\)_
\[{\rm ad}(N_{0})\Pi_{E^{(K)}}\Pi_{\mathcal{X}}=\Pi_{E^{(K)}}\Pi_{\mathcal{X}}{ \rm ad}(N_{0})\,,\quad\mbox{on ${\mathcal{B}}_{\mathcal{E}}$.}\] (2.41)
### Main result
Let us fix once and for all a \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\)-decomposition. Before stating the result we need to introduce parameters (which shall depend on the application) fulfilling the following constraints.
**Constraint 2.21** (**The exponents)**.: _We fix parameters \(\varepsilon_{0},{\mathtt{R}}_{0},{\mathtt{G}}_{0},\mu,\nu,\eta,\chi,\alpha, \kappa_{1},\kappa_{2},\kappa_{3},{\mathfrak{p}}_{2}\) such that the following holds._
* \(0<\varepsilon_{0}\leq{\mathtt{R}}_{0}\leq{\mathtt{G}}_{0}\) _with_ \(\varepsilon_{0}{\mathtt{G}}_{0}^{3},\varepsilon_{0}{\mathtt{G}}_{0}^{2}{ \mathtt{R}}_{0}^{-1}<1\)_._
* \(\mu,\nu,\kappa_{3}\geq 0\)_,_ \({\mathfrak{p}}_{2}>{\mathfrak{p}}_{1}\)_,_ \(0\leq\alpha<1\)_,_ \(1<\chi<2\) _such that_ \(\alpha\chi<1\)_._
* _Setting_ \(\kappa_{0}:=\mu+\nu+4\) _and_ \(\Delta{\mathfrak{p}}:={\mathfrak{p}}_{2}-{\mathfrak{p}}_{1}\) _one has_ \[\kappa_{1} >\max(\frac{\kappa_{0}+\kappa_{3}}{\chi},\frac{\kappa_{0}}{\chi-1 })\,,\] (2.42a) \[\kappa_{2} >\max\big{(}\frac{2\kappa_{0}}{2-\chi},\frac{1}{1-\alpha\chi}((1+ \alpha)\kappa_{0}+2\max(\kappa_{1},\kappa_{3})-\chi\kappa_{1})\big{)}\,,\] (2.42b) \[\eta >\mu+(\chi-1)\kappa_{2}+1\,,\] (2.42c) \[\Delta{\mathfrak{p}} >\max\big{(}\kappa_{0}+\chi\kappa_{2}+\max(\kappa_{1},\kappa_{3}) ,\frac{1}{1-\alpha}(\kappa_{0}+(\chi-1)\kappa_{2}+\max(\kappa_{1},\kappa_{3})) \big{)}\,,\] (2.42d) \[\alpha\Delta{\mathfrak{p}} \leq\kappa_{2}+\chi\kappa_{1}-\kappa_{0}-\max(\kappa_{1},\kappa_{ 3})\,.\] (2.42e)
* _there exists_ \(K_{0}>1\) _such that_ \[\log K_{0}\geq\frac{1}{\log\chi}C,\] (2.43) _with_ \(C\) _a given function of_ \(\mu,\nu,\eta,\alpha,\kappa_{1},\kappa_{2},\kappa_{3},{\mathfrak{p}}_{2}\)_, and moreover_ \[{\mathtt{G}}_{0}^{2}{\mathtt{R}}_{0}^{-1}\varepsilon_{0}K_{0}^{\kappa_{0}}\max(1,{\mathtt{R}}_{0}{\mathtt{G}}_{0}K_{0}^{\kappa_{0}+(\chi-1)\kappa_{2}})<1\,,\] (2.44a) \[\max(K_{0}^{\kappa_{1}},\varepsilon_{0}K_{0}^{\kappa_{3}})K_{0}^{\kappa_{0}-\Delta{\mathfrak{p}}+(\chi-1)\kappa_{2}}{\mathtt{G}}_{0}\varepsilon_{0}^{-1}\max\Big{(}1,{\mathtt{R}}_{0},\varepsilon_{0}{\mathtt{G}}_{0}K_{0}^{\alpha\Delta{\mathfrak{p}}}\Big{)}\leq 1\,,\] (2.44b) \[\max(K_{0}^{\kappa_{1}},\varepsilon_{0}K_{0}^{\kappa_{3}})K_{0}^{\kappa_{0}-\chi\kappa_{1}}{\mathtt{G}}_{0}{\mathtt{R}}_{0}^{-1}\max\Big{(}{\mathtt{R}}_{0},\varepsilon_{0}{\mathtt{G}}_{0}K_{0}^{\alpha\Delta{\mathfrak{p}}}\Big{)}\leq 1\,.\] (2.44c)
**Remark 2.22**.: _In the applications the constants \({\mathtt{G}}_{0},{\mathtt{R}}_{0},\varepsilon_{0}\) in Constraint 2.21 are given by the problem under study and typically they depend parametrically on \({\rm diam}({\mathcal{O}}_{0})\sim\gamma\); then one wishes to show that for \(\gamma\) small enough it is possible to choose all other parameters in order to fulfill Constraint 2.21. Often this implies requiring that \(K_{0}\to\infty\) as \(\gamma\to 0\). In order to highlight this dependence one often uses \(\varepsilon_{0}\) as parameter and introduces \(\mathtt{g},\mathtt{r}\) such that_
\[{\mathtt{G}}_{0}\sim\varepsilon_{0}^{\mathtt{g}}\,,\quad{\mathtt{R}}_{0}\sim \varepsilon_{0}^{\mathtt{r}}\,,\quad\text{ with }\quad\mathtt{g}\leq\mathtt{r} \leq 1\,,\quad\min\{1+3\mathtt{g},1+2\mathtt{g}-\mathtt{r}\}>0\,.\] (2.45)
_Then given \(\kappa_{0},\kappa_{3}\) one looks for \(\alpha,\chi,\kappa_{1},\kappa_{2},{\mathfrak{p}}_{2}\) satisfying (2.42) and, setting \(K_{0}=\varepsilon_{0}^{-\mathtt{a}}\), the constraints (2.44) become constraints on \(\mathtt{a}\). Another typical procedure is to write \({\mathtt{G}}_{0},{\mathtt{R}}_{0},\varepsilon_{0}\) as powers of \(K_{0}\), see paragraph 4.6. Note that in (2.45) we need \(1+3\mathtt{g}>0\) but in principle we allow \(\mathtt{g}<0\); same for \(\mathtt{r}\). This means in particular that \({\mathtt{G}}_{0}\) and \({\mathtt{R}}_{0}\) might be very large._
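For instance, both \(\mathtt{g}=\mathtt{r}=1\) (i.e. \({\mathtt{G}}_{0}\sim{\mathtt{R}}_{0}\sim\varepsilon_{0}\)) and \(\mathtt{g}=-1/4\), \(\mathtt{r}=0\) fulfill (2.45): in the first case \(1+3\mathtt{g}=4\) and \(1+2\mathtt{g}-\mathtt{r}=2\), in the second \(1+3\mathtt{g}=1/4\) and \(1+2\mathtt{g}-\mathtt{r}=1/2\), all positive. The second choice corresponds to \({\mathtt{G}}_{0}\sim\varepsilon_{0}^{-1/4}\), which illustrates the last sentence of Remark 2.22: \({\mathtt{G}}_{0}\) may indeed be large.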
**Definition 2.23** (**Homological equation)**.: _Let \(\gamma>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) i.e._
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). We say \(\mathcal{O}\) satisfies the homological equation, for \((F,K,\vec{v}^{\text{\tiny 0}},\rho)\) if the following holds._
_1. For all_ \(\xi\in\mathcal{O}\) _one has_ \(F(\xi)\in{\mathcal{E}}\) _and_ \(|\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{2}-1}\leq\mathtt{C}\,C_{\vec{v },{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{\perp}G)\)_._
_2. there exist a bounded regular vector field_ \(g\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\cap E^{(K)}\) _such that_
* \(g\in{\mathcal{B}}_{\mathcal{E}}\) _for all_ \(\xi\in{\mathcal{O}}\)_,_
* _one has_ \(|g|_{\vec{v}^{\text{\tiny 0}},{\mathfrak{p}}_{1}}\leq\mathtt{C}|g|_{\vec{v},{ \mathfrak{p}}_{1}}\leq{\mathtt{c}}\rho\) _and for_ \({\mathfrak{p}}_{1}\leq p\leq{\mathfrak{p}}_{2}\)__ \[|g|_{\vec{v},p}\leq\gamma^{-1}K^{\mu}(|\Pi_{K}\Pi_{{\mathcal{X}}}G|_{\vec{v},p }+K^{\alpha(p-{\mathfrak{p}}_{1})}|\Pi_{K}\Pi_{{\mathcal{X}}}G|_{\vec{v},{ \mathfrak{p}}_{1}}\gamma^{-1}C_{{\vec{v},p}}(G))\,,\] (2.46) \[|\Pi_{\mathcal{X}}[\Pi_{{\mathcal{X}}}^{\perp}G,g]|_{\vec{v},p-1}\leq C_{\vec{ v},p+1}(G)|g|_{\vec{v},{\mathfrak{p}}_{1}}+C_{\vec{v},{\mathfrak{p}}_{1}}(G)|g |_{\vec{v},p+\nu+1}\]
* _setting_ \(u:=\Pi_{K}\Pi_{{\mathcal{X}}}({\rm ad}(\Pi_{{\mathcal{X}}}^{\perp}F)[g]-F),\) _one has_ \[|u|_{\vec{v},{\mathfrak{p}}_{1}} \leq\varepsilon_{0}\gamma^{-1}K^{-\eta+\mu}C_{\vec{v},{\mathfrak{p}}_{1}}(G)|\Pi_{K}\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{1}},\] (2.47) \[|u|_{\vec{v},{\mathfrak{p}}_{2}} \leq\gamma^{-1}K^{\mu}\left(|\Pi_{K}\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{2}}C_{\vec{v},{\mathfrak{p}}_{1}}(G)+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}|\Pi_{K}\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{1}}C_{\vec{v},{\mathfrak{p}}_{2}}(G)\right);\]
* _setting_ \(\vec{v}^{\prime}=(\gamma,{\mathcal{O}},s-\rho s_{0},a,r-\rho r_{0})\)_, and letting_ \(\Phi\) _be the change of variables generated by_ \(g\)_, one has that the estimate_ (2.48) _holds._
**Definition 2.24** (**Compatible changes of variables)**.: _Let the parameters in Constraint 2.21 be fixed. Fix also \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\), \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\) with \({\mathcal{O}}\subseteq{\mathcal{O}}_{0}\) a compact set, parameters \(K\geq K_{0},\rho<1\). Consider a vector field \(F=N_{0}+G\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\) and such that,_
\[F\in{\mathcal{E}}\quad\forall\xi\in{\mathcal{O}}\,,\qquad|\Pi_{{\mathcal{X}}}G |_{\vec{v},{\mathfrak{p}}_{2}-1}\leq\mathtt{C}C_{\vec{v},{\mathfrak{p}}_{2}}( \Pi_{{\mathcal{N}}}^{\perp}G).\]
_We say that a left invertible \({\mathcal{E}}\)-preserving change of variables_
\[{\mathcal{L}},{\mathcal{L}}^{-1}:\mathds{T}^{d}_{s}\times D_{a,{\mathfrak{p}}_ {1}}(r)\times{\mathcal{O}}_{0}\to\mathds{T}^{d}_{s+\rho s_{0}}\times D_{a-\rho a _{0},{\mathfrak{p}}_{1}}(r+\rho r_{0})\]
_is compatible with \((F,K,\vec{v},\rho)\) if the following holds:_
* \({\mathcal{L}}\) _is “close to identity”, i.e. denoting_ \(\vec{v}^{\text{\tiny 0}}_{1}:=(\gamma,{\mathcal{O}}_{0},s-\rho s_{0},a-\rho a_ {0},r-\rho r_{0})\) _one has_ \[\|({\mathcal{L}}-\mathds{1})h\|_{\vec{v}^{\text{\tiny 0}}_{1},{ \mathfrak{p}}_{1}}\leq\mathtt{C}\varepsilon_{0}K^{-1}\|h\|_{\vec{v}^{\text{ \tiny 0}},{\mathfrak{p}}_{1}}\,.\] (2.49)
* \({\mathcal{L}}_{*}\) _conjugates the_ \(C^{\mathtt{n}+2}\)_-tame vector field_ \(F\) _to the vector field_ \(\hat{F}:={\mathcal{L}}_{*}F=N_{0}+\hat{G}\) _which is_ \(C^{\mathtt{n}+2}\)_-tame; moreover denoting_ \(\vec{v}_{2}:=(\gamma,{\mathcal{O}},s-2\rho s_{0},a-2\rho a_{0},r-2\rho r_{0})\) _one may choose the tameness constants of_ \(\hat{G}\) _so that_ \[C_{\vec{v}_{2},{\mathfrak{p}}_{1}}(\hat{G})\leq C_{\vec{v},{ \mathfrak{p}}_{1}}(G)(1+\varepsilon_{0}K^{-1})\,,\] (2.50) \[C_{\vec{v}_{2},{\mathfrak{p}}_{2}}(\hat{G})\leq\mathtt{C}\big{(} C_{\vec{v},{\mathfrak{p}}_{2}}(G)+\varepsilon_{0}K^{\kappa_{3}}C_{\vec{v},{ \mathfrak{p}}_{1}}(G)\big{)}\]
* \({\mathcal{L}}_{*}\) _“preserves the_ \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\)_-decomposition”, namely one has_ \[\Pi_{\mathcal{N}}^{\perp}({\mathcal{L}}_{*}\Pi_{\mathcal{N}}F)=0\,,\quad\qquad \Pi_{{\mathcal{X}}}({\mathcal{L}}_{*}\Pi_{{\mathcal{X}}}^{\perp}F)=0\,.\] (2.51)
Given \(\gamma_{0}>0\) we set for \(n\geq 0\)
\[\quad\mathtt{G}_{n}=\mathtt{G}_{0}(1+\sum_{j=1}^{n}2^{-j}),\quad \mathtt{R}_{n}=\mathtt{R}_{0}(1+\sum_{j=1}^{n}2^{-j}),\quad K_{n}=(K_{0})^{ \chi^{n}},\,\gamma_{n}=\gamma_{n-1}(1-\frac{1}{2^{n+2}}),\] (2.52)
\[a_{n}=a_{0}(1-\frac{1}{2}\sum_{j=1}^{n}2^{-j})\,,\quad r_{n}=r_{ 0}(1-\frac{1}{2}\sum_{j=1}^{n}2^{-j}),\quad s_{n}=s_{0}(1-\frac{1}{2}\sum_{j=1 }^{n}2^{-j})\,,\]
\[\Pi_{n}:=\Pi^{(K_{n})}\,,\quad\Pi_{n}^{\perp}:=\mathds{1}-\Pi_{n} \,,E_{n}=E^{(K_{n})},\quad\rho_{n}:=\frac{1}{2^{n+5}}\]
Finally, for all \(n\geq 0\) we denote \(\vec{v}_{n}=(\gamma_{n},{\mathcal{O}}_{n},s_{n},a_{n},r_{n})\), \(\vec{v}^{\text{\tiny 0}}_{n}=(\gamma_{n},{\mathcal{O}}_{0},s_{n},a_{n},r_{n})\).
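Note, as an elementary consequence of (2.52), that \({\mathtt{G}}_{n}\leq 2{\mathtt{G}}_{0}\), \({\mathtt{R}}_{n}\leq 2{\mathtt{R}}_{0}\), \(a_{n}>a_{0}/2\), \(r_{n}>r_{0}/2\), \(s_{n}>s_{0}/2\) for all \(n\), while \(\gamma_{n}\geq\gamma_{0}\prod_{j\geq 1}(1-2^{-(j+2)})\geq\frac{3}{4}\gamma_{0}>\gamma_{0}/2\); in other words all the domains and weights stay uniformly bounded away from degeneracy along the iteration, which is consistent with the parameters \(\gamma_{0}/2,s_{0}/2,a_{0}/2\) appearing in \(\vec{v}_{\infty}\) in Theorem 2.25 below.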
We reformulate our main result, stated in the Introduction, in a more precise way. This is useful for applications, where one needs to have information on the sequence of vector fields \(F_{n}\) and on the changes of variables \({\mathcal{H}}_{n}\) in order to prove that the set \({\mathcal{O}}_{\infty}\) is not empty.
**Theorem 2.25** (**Abstract KAM)**.: _Fix a decomposition and a subspace \({\mathcal{E}}\) as in Definitions 2.17 and 2.19. Fix parameters \(\varepsilon_{0},{\mathtt{R}}_{0},{\mathtt{G}}_{0},\mu,\nu,\eta,\chi,\alpha, \kappa_{1},\kappa_{2},\kappa_{3},{\mathfrak{p}}_{2}\) satisfying Constraint 2.21. Let \(N_{0}\) be a diagonal vector field as in Definition 2.20 and consider a vector field_
\[F_{0}:=N_{0}+G_{0}\in{\mathcal{E}}\cap{\mathcal{W}}_{\vec{v}_{0},p}\] (2.53)
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\)._
_Fix \(\gamma_{0}>0\) and assume that_
\[\gamma_{0}^{-1}C_{\vec{v}_{0},{\mathfrak{p}}_{2}}(G_{0})\leq{\mathtt{G}}_{0}\, ,\quad\gamma_{0}^{-1}C_{\vec{v}_{0},{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{ \perp}G_{0})\leq{\mathtt{R}}_{0}\,,\quad\gamma_{0}^{-1}|\Pi_{{\mathcal{X}}}G_{ 0}|_{\vec{v}_{0},{\mathfrak{p}}_{1}}\leq\varepsilon_{0}\,,\quad\gamma_{0}^{-1} |\Pi_{{\mathcal{X}}}G_{0}|_{\vec{v}_{0},{\mathfrak{p}}_{2}}\leq{\mathtt{R}}_{0 }\,.\] (2.54)
_For all \(n\geq 0\) we define recursively changes of variables \({\mathcal{L}}_{n},\Phi_{n}\) and compact sets \({\mathcal{O}}_{n}\) as follows._
_Set \({\mathcal{H}}_{-1}={\mathcal{H}}_{0}=\Phi_{0}={\mathcal{L}}_{0}=\mathds{1}\), and for \(0\leq j\leq n-1\) set recursively \({\mathcal{H}}_{j}=\Phi_{j}\circ{\mathcal{L}}_{j}\circ{\mathcal{H}}_{j-1}\) and \(F_{j}:=({\mathcal{H}}_{j})_{*}F_{0}:=N_{0}+G_{j}\). Let \({\mathcal{L}}_{n}\) be any change of variables compatible with \((F_{n-1},K_{n-1},\vec{v}_{n-1},\rho_{n-1})\), and \({\mathcal{O}}_{n}\) be any compact set_
\[{\mathcal{O}}_{n}\subseteq{\mathcal{O}}_{n-1}\,,\] (2.55)
_which satisfies the homological equation for \((({\mathcal{L}}_{n})_{*}F_{n-1},K_{n-1},\vec{v}^{\text{\tiny 0}}_{n-1},\rho_{n -1})\). For \(n>0\) let \(g_{n}\) be the regular vector field defined in item (2) of Definition 2.23 and set \(\Phi_{n}\) the time-1 flow map generated by \(g_{n}\)._
_Then \(\Phi_{n}\) is left invertible and \(F_{n}:=(\Phi_{n}\circ{\mathcal{L}}_{n})_{*}F_{n-1}\in{\mathcal{W}}_{\vec{v}^{ \text{\tiny 0}}_{n},p}\) is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). Moreover the following holds._
* _Setting_ \(G_{n}=F_{n}-N_{0}\) _then_ \[\Gamma_{n,{\mathfrak{p}}_{1}} :=\gamma_{n}^{-1}C_{\vec{v}_{n},{\mathfrak{p}}_{1}}(G_{n})\leq{ \mathtt{G}}_{n},\quad\Gamma_{n,{\mathfrak{p}}_{2}}:=\gamma_{n}^{-1}C_{\vec{v}_ {n},{\mathfrak{p}}_{2}}(G_{n})\leq{\mathtt{G}}_{0}K_{n}^{\kappa_{1}},\] (2.56) \[\Theta_{n,{\mathfrak{p}}_{1}} :=\gamma_{n}^{-1}C_{\vec{v}_{n},{\mathfrak{p}}_{1}}(\Pi_{{ \mathcal{N}}}^{\perp}G_{n})\leq{\mathtt{R}}_{n},\quad\Theta_{n,{\mathfrak{p}}_ {2}}:=\gamma_{n}^{-1}C_{\vec{v}_{n},{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{ \perp}G_{n})\leq{\mathtt{R}}_{0}K_{n}^{\kappa_{1}}\] \[\delta_{n} :=\gamma_{n}^{-1}|\Pi_{{\mathcal{X}}}G_{n}|_{\vec{v}_{n},{ \mathfrak{p}}_{1}}\leq K_{0}^{\kappa_{2}}\varepsilon_{0}K_{n}^{-\kappa_{2}}, \quad\gamma_{n}^{-1}|\Pi_{{\mathcal{X}}}G_{n}|_{\vec{v}_{n},{\mathfrak{p}}_{2} }\leq{\mathtt{R}}_{0}K_{n}^{\kappa_{1}}\] \[|g_{n}|_{\vec{u}_{n},{\mathfrak{p}}_{1}} \leq K_{0}^{\kappa_{2}}\varepsilon_{0}{\mathtt{G}}_{0}K_{n-1}^{- \kappa_{2}+\mu+1},\quad|g_{n}|_{\vec{u}_{n},{\mathfrak{p}}_{2}}\leq{\mathtt{R} }_{0}{\mathtt{G}}_{0}^{-1}K_{n-1}^{-\nu-1+\chi\kappa_{1}}\] _where_ \(\vec{u}_{n}=(\gamma_{n},{\mathcal{O}}_{n},s_{n}+12\rho_{n}s_{0},a_{n}+12\rho_{ n}a_{0},r_{n}+12\rho_{n}r_{0})\)_._
* _The sequence_ \({\mathcal{H}}_{n}\) _converges for all_ \(\xi\in{\mathcal{O}}_{0}\) _to some change of variables_ \[{{\mathcal{H}}}_{\infty}={{\mathcal{H}}}_{\infty}(\xi):D_{{a_{0}},p}({s_{0}}/{ 2},{r_{0}}/{2})\longrightarrow D_{\frac{a_{0}}{2},p}({s_{0}},{r_{0}}).\] (2.57)
* _Defining_ \(F_{\infty}:=({\mathcal{H}}_{\infty})_{*}F_{0}\) _one has_ \[\Pi_{\mathcal{X}}F_{\infty}=0\quad\forall\xi\in{\mathcal{O}}_{\infty}:=\bigcap _{n\geq 0}{\mathcal{O}}_{n}\] (2.58) _and_ \[\gamma_{0}^{-1}C_{\vec{v}_{\infty},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}F_{ \infty}-N_{0})\leq 2{\mathtt{G}}_{0},\quad\gamma_{0}^{-1}C_{\vec{v}_{\infty},{ \mathfrak{p}}_{1}}(\Pi_{{\mathcal{R}}}F_{\infty})\leq 2{\mathtt{R}}_{0}\] _with_ \(\vec{v}_{\infty}:=(\gamma_{0}/2,{\mathcal{O}}_{\infty},s_{0}/2,a_{0}/2)\)_._
Proof.: The proof of this result is deferred to Section 5. ∎
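Note, as a direct illustration of the bounds (2.56), that the \({\mathcal{X}}\)-component of \(G_{n}\) is super-exponentially small: since \(K_{n}=K_{0}^{\chi^{n}}\) with \(\chi>1\) and \(K_{0}>1\), one has
\[\delta_{n}\leq K_{0}^{\kappa_{2}}\varepsilon_{0}K_{n}^{-\kappa_{2}}=\varepsilon_{0}K_{0}^{\kappa_{2}(1-\chi^{n})}\longrightarrow 0\quad\text{as }n\to\infty\,,\]
which is what drives the convergence of the scheme towards \(\Pi_{\mathcal{X}}F_{\infty}=0\) in (2.58).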
**Remark 2.26**.: _Note that if one makes the further assumption that \(s_{0}>0\), the smallness conditions as well as the definition of the set of parameters in Definition 2.23 simplify drastically: in particular one may choose \({\mathfrak{p}}_{2}={\mathfrak{p}}_{1}\). We are not making this assumption because our aim was to have a unified proof; however we discuss the time analytic case in Appendix D for completeness._
## 3 Triangular decomposition and Mel’nikov conditions
In most applications one may redefine the sets on which one can solve the homological equation in a more direct way, by introducing the so-called _Mel’nikov conditions_.
We start by introducing some notation.
**Definition 3.1** (**Triangular decomposition)**.: _We say that a decomposition \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\) is triangular if \({\mathcal{X}}\) admits a block decomposition_
\[{\mathcal{X}}=\bigoplus_{j=1}^{\mathtt{b}}{\mathcal{X}}_{j}\] (3.1)
_such that for all \(N\in{\mathcal{N}},R\in{\mathcal{R}}\) setting_
\[{\mathfrak{N}}:=\Pi_{{\mathcal{X}}}{\rm ad}(N)\,,\quad{\mathfrak{R}}:=\Pi_{{ \mathcal{X}}}{\rm ad}(R)\] (3.2)
_then \({\mathfrak{N}}\) is block diagonal and \({\mathfrak{R}}\) is strictly upper triangular, i.e._
\[{\mathfrak{N}}:\;{\mathcal{X}}_{i}\to{\mathcal{X}}_{i}\,,\quad{\mathfrak{R}}: \;{\mathcal{X}}_{i}\to\bigoplus_{j>i}{\mathcal{X}}_{j}\,.\]
**Remark 3.2**.: _In order to construct a triangular decomposition one generally associates some degree to the variables:_
\[{\rm deg}(\theta)=0\,,\quad{\rm deg}(w)=1\,,\quad{\rm deg}(y)=\mathtt{d},\]
_this automatically fixes the degree of a monomial vector field as_
\[{\rm deg}(y^{j}e^{{\rm i}\theta\cdot\ell}w^{\alpha}\partial_{\mathtt{v}})=j \mathtt{d}+|\alpha|-{\rm deg}(\mathtt{v}),\]
_moreover one verifies that if \(g\) has degree \(d_{1}\) and \(f\) degree \(d_{2}\) then \([f,g]\) has degree \(d_{1}+d_{2}\). Finally we remark that \({\mathcal{V}}^{(\mathtt{v},0)}\) has negative degree for \(\mathtt{v}=y,w\) and degree equal to zero for \(\mathtt{v}=\theta\); in the same way \({\mathcal{V}}^{(\mathtt{v},\mathtt{v})}\) always has degree zero for \(\mathtt{v}=y,w\), while \({\mathcal{V}}^{(\theta,\mathtt{v})}\) has positive degree. Then to a polynomial we may associate its minimal and maximal degree. Similarly, if a \(C^{k+1}\) function has zero projection on all the spaces \({\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\cdots,\mathtt{v}_{h})}\) with \(h\leq k\) and degree \(\leq d\), we call \(d\) its minimal degree. In many applications it is convenient to place all monomials of degree \(\leq 0\) in \({\mathcal{N}}\oplus{\mathcal{X}}\) and all those of positive degree in \({\mathcal{R}}\)._
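For instance, with the convention above one computes directly: \(e^{{\rm i}\theta\cdot\ell}\partial_{y_{i}}\in{\mathcal{V}}^{(y,0)}\) has degree \(-\mathtt{d}\), monomials in \({\mathcal{V}}^{(w,w)}\) have degree \(1-1=0\), and \(y_{i}\partial_{\theta_{k}}\in{\mathcal{V}}^{(\theta,y)}\) has degree \(\mathtt{d}\). Moreover the commutator \([e^{{\rm i}\theta\cdot\ell}\partial_{y_{1}},\,y_{1}\partial_{\theta_{1}}]\) is a combination of \(e^{{\rm i}\theta\cdot\ell}\partial_{\theta_{1}}\) and \({\rm i}\ell_{1}e^{{\rm i}\theta\cdot\ell}y_{1}\partial_{y_{1}}\), both of degree \(0=-\mathtt{d}+\mathtt{d}\), in agreement with the additivity of the degree under \([\cdot,\cdot]\).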
**Lemma 3.3**.: _Any decomposition such that \({\mathcal{N}}\) contains only polynomials of degree zero and \({\mathcal{R}}\) contains only terms of minimal degree \(>0\) is triangular w.r.t. the degree decomposition of \({\mathcal{X}}=\oplus_{j}{\mathcal{X}}_{j}\) where the \({\mathcal{X}}_{j}\) are spaces of homogeneous polynomials with increasing degree \(\mathtt{d}_{j}\)._
Proof.: Given any polynomial \(P\) of positive minimal degree the operator \({\rm ad}P\) has positive degree, namely its action on any polynomial increases its minimal degree. Now for any \(N\in{\mathcal{N}}\), \(\Pi_{\mathcal{X}}{\rm ad}N\) preserves the degree and thus maps each \({\mathcal{X}}_{j}\) into itself. By definition the maximal degree in \({\mathcal{X}}\) is given by \(\mathtt{d}_{b}\). Hence if \(f\) is a tame function with minimal degree \(>\mathtt{d}_{b}-\max(1,\mathtt{d})\), the operator \(\Pi_{\mathcal{X}}{\rm ad}f\) vanishes on \({\mathcal{X}}\); this is just a property of the polynomial subspaces in \({\mathcal{R}}\). Since all \(R\in{\mathcal{R}}\) have positive degree, \(\Pi_{\mathcal{X}}{\rm ad}R\) is obviously strictly upper triangular. ∎
Once we have a triangular decomposition we introduce the following notion.
**Definition 3.4** (**Mel’nikov conditions)**.: _Let \(\gamma,\mu_{1}>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) i.e._
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). We say \(\mathcal{O}\) satisfies the Mel’nikov conditions for \((F,K,\vec{v}^{\text{\tiny 0}})\) if the following holds._
_1. For all_ \(\xi\in\mathcal{O}\) _one has_ \(F(\xi)\in{\mathcal{E}}\) _and_ \(|\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{2}-1}\leq\mathtt{C}\,C_{\vec{v },{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{\perp}G)\)_._
_2. Setting_ \({\mathfrak{N}}:=\Pi_{K}\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F)\) _for all_ \(\xi\in{\mathcal{O}}\) _there exists a block-diagonal operator_ \({\mathfrak{W}}:E^{(K)}\cap{\mathcal{X}}\cap{\mathcal{E}}\to E^{(K)}\cap{ \mathcal{B}}_{\mathcal{E}}\) _such that for any vector field_ \(X\in E^{(K)}\cap{\mathcal{X}}\cap{\mathcal{E}}\)__
* _one has_ \[|{\mathfrak{W}}X|_{\vec{v},p}\leq\gamma^{-1}K^{\mu_{1}}\big{(}|X|_{\vec{v},p}+K^{\alpha(p-{\mathfrak{p}}_{1})}|X|_{\vec{v},{\mathfrak{p}}_{1}}\,\gamma^{-1}C_{\vec{v},p}(G)\big{)}\,;\] (3.3)
* _setting_ \(u:=(\Pi_{K}{\rm ad}(\Pi_{{\mathcal{N}}}F)[{\mathfrak{W}}X]-X)\) _one has_ \[|u|_{\vec{v},{\mathfrak{p}}_{1}} \leq\varepsilon_{0}\gamma^{-1}K^{-\eta+\mu_{1}}C_{\vec{v},{\mathfrak{p}}_{1}}(G)|X|_{\vec{v},{\mathfrak{p}}_{1}},\] (3.4) \[|u|_{\vec{v},{\mathfrak{p}}_{2}} \leq\gamma^{-1}K^{\mu_{1}}\left(|X|_{\vec{v},{\mathfrak{p}}_{2}}C_{\vec{v},{\mathfrak{p}}_{1}}(G)+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}|X|_{\vec{v},{\mathfrak{p}}_{1}}C_{\vec{v},{\mathfrak{p}}_{2}}(G)\right).\]
Then we have the following result.
**Proposition 3.5** (**Homological equation)**.: _Let \(\gamma>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) i.e._
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). Assume that \(\gamma\sim{\rm diam}{\mathcal{O}}_{0}\) and set_
\[\Gamma_{p}:=\gamma^{-1}C_{\vec{v},p}(G),\quad\Theta_{p}:=\gamma^{-1}C_{\vec{v} ,p}(\Pi_{{\mathcal{N}}}^{\perp}G).\] (3.5)
_Assume finally that for any \(f\in{\mathcal{B}}_{\mathcal{E}}\)_
\[|\Pi_{\mathcal{X}}[\Pi_{\mathcal{R}}G,f]|_{\vec{v},p-1}\leq C_{ \vec{v},p+1}(\Pi_{{\mathcal{N}}}^{\perp}G)|f|_{\vec{v},{\mathfrak{p}}_{1}}+C_{ \vec{v},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}^{\perp}G)|f|_{\vec{v},p+\nu+1},\] (3.6)
\[|\Pi_{\mathcal{X}}[\Pi_{\mathcal{N}}G,f]|_{\vec{v},p-1}\leq C_{ \vec{v},p+1}(G)|f|_{\vec{v},{\mathfrak{p}}_{1}}+C_{\vec{v},{\mathfrak{p}}_{1}} (G)|f|_{\vec{v},p+\nu+1},\]
_If \({\mathcal{O}}\) satisfies the Mel’nikov conditions of Definition 3.4 for \((F,K,\vec{v}^{\text{\tiny 0}})\) then \({\mathcal{O}}\) satisfies items 1. and 2.a-b-c of Definition 2.23 provided that we fix_
\[\mu=(b+1)(\mu_{1}+\nu+1)+\mathtt{t}\] (3.7)
_where \(\mathtt{t}>0\) is such that_
\[(1+\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}}))^{b}(1+\Gamma_{{ \mathfrak{p}}_{1}})\leq K_{0}^{\mathtt{t}}.\]
Proof.: The proof is deferred to Appendix C. ∎
## 4 Applications
In order to use the _Mel’nikov conditions_ in Definition 3.4 instead of the _homological equation_ in Definition 2.23 in Theorem 2.25 we need to prove that item \(2.d\) of Definition 2.23 also holds. This latter point depends strongly on the application, so we discuss it in various examples.
Clearly the simplest possible case is \(\ell_{a,p}=0\) or a finite dimensional space. In any case we need to work in some subspace \({\mathcal{E}}\) endowed with some structure (say reversible or Hamiltonian). To this purpose we restrict \(\ell_{a,p}\) as follows.
**Definition 4.1**.: _We assume that \(\ell_{a,p}\) has a product structure \(\ell_{a,p}=h_{a,p}\times h_{a,p}\) with \(w=(z^{+},z^{-})\) and \(h_{a,p}\) is a scale of Hilbert spaces w.r.t. a norm \(\|\cdot\|_{a,p}\) satisfying (2.1). Moreover we assume that the subspaces \(\ell_{K}\) have a product structure as well \(\ell_{K}=h_{K}\times h_{K}\) with the \(h_{K}\) satisfying Hypothesis 2.1._
### Example 1: Reversible Nash-Moser.
Let us first discuss the “minimal choice”, i.e. where in all the definitions we make the simplest possible choices.
Clearly the minimal choice for \({\mathcal{X}}\) is
\[{\mathcal{X}}:={\mathcal{V}}^{(y,0)}\oplus{\mathcal{V}}^{(w,0)},\] (4.1)
whereas for \({\mathcal{N}}\) one can make for instance the classical choice
\[{\mathcal{N}}:={\mathcal{V}}^{(\theta,0)}\oplus{\mathcal{V}}^{(w,w)}\oplus{ \mathcal{V}}^{(y,y)}\oplus{\mathcal{V}}^{(y,w)}\oplus{\mathcal{V}}^{(w,y)}.\] (4.2)
The decomposition (4.1) and (4.2) is _trivially_ triangular, see Definition 3.1, provided that we set \(\mathtt{b}=1\) since for any \(R\in{\mathcal{R}},X\in{\mathcal{X}}\) one has
\[\Pi_{\mathcal{X}}[R,X]=0\,.\] (4.3)
Note that it is a degree decomposition with \(\mathtt{d}(y)=1\), where \({\mathcal{N}}\) is generated by all the monomials of degree zero and \({\mathcal{X}}\) is generated by all those of negative degree.
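Indeed, with \({\rm deg}(y)=1\) every monomial not in \({\mathcal{N}}\oplus{\mathcal{X}}\) has degree at least one, so the terms in \({\mathcal{R}}\) have positive minimal degree; hence for \(R\in{\mathcal{R}}\) and \(X\in{\mathcal{X}}\) the commutator \([R,X]\) has minimal degree \(\geq 1+(-1)=0\), while \(\Pi_{\mathcal{X}}\) selects precisely the components of degree \(-1\). This is a quick way of seeing (4.3).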
We choose the regular vector fields as \({\mathcal{A}}={\mathcal{X}}\), by setting for \(f=\big{(}0,f^{(y)}(\theta),f^{(w)}(\theta)\big{)}\),
\[|f|_{\vec{v},p}:=\|f\|^{(1)}_{\vec{v},p}=\|f\|_{\vec{v},p},\]
with the projectors \(\Pi_{K}\) defined as
\[(\Pi_{K}f^{(y,0)})(\theta):=\sum_{|\ell|\leq K}f^{(y,0)}_{\ell}e^ {{\rm i}\ell\cdot\theta},\] (4.4)
\[(\Pi_{K}f^{(w,0)})(\theta):=\sum_{|\ell|\leq K}\Pi_{\ell_{K}}f^{( w,0)}_{\ell}e^{{\rm i}\ell\cdot\theta}.\]
**Lemma 4.2**.: _The regular vector fields defined above satisfy all the properties of Definition 2.18; moreover the norm \(|f|_{\vec{v},p}\) is a sharp tameness constant for all \(p\geq{\mathfrak{p}}_{1}\)._
Proof.: Item \((1)\) is trivial and the bound (2.30) follows essentially by an explicit computation (see the proof of Lemma B.7 for more details). Now by Definition 2.13 we have that the bounds \((T_{m})\) hold for any change of variables \(\Phi\) and any \(y,w\). Hence for \(\Phi\equiv\mathds{1}\) and \(y=0=w\), for any \(p\geq{\mathfrak{p}}_{0}\), one has
\[|f|_{\vec{v},p}=\|f\|_{\vec{v},p}=\|f(\Phi)\|_{\vec{v},p}\leq C_{\vec{v},p}(f) +C_{\vec{v},{\mathfrak{p}}_{0}}(f)\|\Phi\|_{\vec{v},p}\leq\mathtt{c}C_{\vec{v} ,p}(f),\] (4.5)
where the last inequality holds since \(\|\Phi\|_{\vec{v},p}\equiv 1\) independently of \(p\) (recall that the map \(\Phi\) is evaluated at \(w=y=0\)). This means that \(|f|_{\vec{v},p}\) is a sharp tameness constant: this trivially implies that item \((2)\) in Definition 2.18 holds. Let us check item \((3)\). Recall the definition of the projectors in (4.4) and of the norm in (2.7); for \(\mathtt{v}=\theta,y\) one has
\[\|\Pi_{K}f\|^{2}_{s+s_{1},a,p+p^{\prime}}=\sum_{|l|\leq K}|f^{(\mathtt{v},0)}_{l}|^{2}\langle l\rangle^{2(p+p^{\prime})}e^{2|l|(s+s_{1})}\leq K^{2p^{\prime}}e^{2Ks_{1}}\sum_{|l|\leq K}|f^{(\mathtt{v},0)}_{l}|^{2}\langle l\rangle^{2p}e^{2|l|s}\leq K^{2p^{\prime}}e^{2Ks_{1}}\|f^{(\mathtt{v},0)}\|^{2}_{s,a,p}.\] (4.6)
The latter bound also holds for the norm (2.21); hence (taking \(s_{1}=0\)) the estimate (2.32) holds. Similarly (2.32) holds also for \(\mathtt{v}=w\). Moreover for \(\mathtt{v}=w\) we can write
\[(\mathds{1}-\Pi_{K})f^{(w,0)}(\theta)=\sum_{|l|>K}e^{{\rm i}\ell\cdot\theta}f_ {l}^{(w,0)}+\sum_{|l|\leq K}(\mathds{1}-\Pi_{\ell_{K}})f_{l}^{(w,0)}e^{{\rm i} \ell\cdot\theta},\]
hence one has for \(p,p^{\prime}\in\mathds{N}\)
\[\|(\mathds{1}-\Pi_{K})f^{(w,0)}(\theta)\|^{2}_{s,a,p} \leq\sum_{|l|>K}\langle l\rangle^{2p}\|f_{l}^{(w,0)}\|^{2}_{a,{ \mathfrak{p}}_{0}}e^{2s|l|}+\sum_{|l|>K}\langle l\rangle^{2p}e^{2s|l|}\|( \mathds{1}-\Pi_{\ell_{K}})f^{(w,0)}_{l}\|_{a,{\mathfrak{p}}_{0}}^{2}\] (4.7)
\[+\sum_{|l|>K}\langle l\rangle^{2{\mathfrak{p}}_{0}}\|(\mathds{1}- \Pi_{\ell_{K}})f_{l}^{(w,0)}\|^{2}_{a,p}e^{2s|l|}+\sum_{l\in\mathds{Z}}\langle l \rangle^{2{\mathfrak{p}}_{0}}e^{2s|l|}\|(\mathds{1}-\Pi_{\ell_{K}})f^{(w,0)}_{ l}\|_{a,p}^{2}\]
\[+\sum_{|l|>K}\langle l\rangle^{2{\mathfrak{p}}_{0}}\|\Pi_{\ell_{K }}f_{l}^{(w,0)}\|^{2}_{a,p}e^{2s|l|}+\sum_{|l|\leq K}\langle l\rangle^{2p}e^{2 s|l|}\|(\mathds{1}-\Pi_{\ell_{K}})f^{(w,0)}_{l}\|_{a,{\mathfrak{p}}_{0}}^{2}\]
\[\leq 2K^{-2p^{\prime}}\sum_{l\in\mathds{Z}}\langle l\rangle^{2(p+ p^{\prime})}\|f_{l}^{(w,0)}\|^{2}_{a,{\mathfrak{p}}_{0}}e^{2s|l|}\]
\[+2K^{-2p^{\prime}}\sum_{l\in\mathds{Z}}\langle l\rangle^{2{ \mathfrak{p}}_{0}}\|f_{l}^{(w,0)}\|^{2}_{a,p+p^{\prime}}e^{2s|l|}\]
\[+\mathtt{c}\sum_{l\in\mathds{Z}}\langle l\rangle^{2(p+p^{\prime}) }K^{-2(p^{\prime}+p)}K^{2{\mathfrak{p}}_{0}}K^{2(p-{\mathfrak{p}}_{0})}\|f_{l} ^{(w,0)}\|^{2}_{a,{\mathfrak{p}}_{0}}e^{2s|l|}\]
\[+\mathtt{c}\sum_{l\in\mathds{Z}}\langle l\rangle^{2{\mathfrak{p}} _{0}}e^{2s|l|}K^{2(p-{\mathfrak{p}}_{0})}K^{-2(p+p^{\prime}-{\mathfrak{p}}_{0} )}\|f^{(w,0)}_{l}\|_{a,p+p^{\prime}}^{2}\]
\[\leq CK^{-2p^{\prime}}\|f^{(w,0)}\|^{2}_{s,a,p+p^{\prime}},\]
and the latter bound also holds for the norm (2.21). Similar bounds hold also for \(\mathtt{v}=\theta,y\), hence (2.33) holds. Condition (2.34) is trivial. Finally, items \(4\) and \(5\) can be checked easily since the maps generated by vector fields in \({\mathcal{A}}\) are simply translations. ∎
By looking at the homological equation it is clear that the minimal requirement for the vector field is that \(F^{(y)}(\theta,y,0)=-F^{(y)}(-\theta,y,0)\), otherwise even when \(\ell_{a,p}=\emptyset\) one can easily produce examples in which invariant tori do not exist.
Following Sevryuk (see for instance [7] and references therein) one expects to require that the vector field satisfies some appropriate symmetry: this can be stated by saying that the vector field is reversible w.r.t. some involution. Naturally one also needs the "unperturbed vector field" \(N_{0}\) to be reversible w.r.t. the chosen involution, being the vector field that identifies the approximately invariant torus. In the applications to PDEs, one typically deals with \(N_{0}\) of the form
\[N_{0}=\omega^{(0)}\cdot\partial_{\theta}+{\rm i}\Lambda^{(0)}w\partial_{w}= \omega^{(0)}\cdot\partial_{\theta}+{\rm i}\Omega^{(0)}z^{+}\partial_{z^{+}}-{ \rm i}\Omega^{(0)}z^{-}\partial_{z^{-}}\] (4.8)
with \(\Omega^{(0)}\) a linear operator which is \(\theta\)-independent and block-diagonal w.r.t. all the \(h_{K}\). Thus \(N_{0}\) is a diagonal operator as in Definition 2.20. Unfortunately such \(N_{0}\) is not reversible w.r.t. the “simple” involution \((\theta,y,w)\to(-\theta,y,w)\), but it is reversible w.r.t. the involution \(S:(\theta,y,(z^{+},z^{-}))\to(-\theta,y,(z^{-},z^{+}))\). Therefore, as for the subspace \({\mathcal{E}}\) we choose
\[{\mathcal{E}}={\mathcal{E}}^{(0)}:=\left\{F\in{\mathcal{V}}_{\vec{v},p}:\quad \begin{pmatrix}F^{(\theta)}(-\theta,y,(z^{-},z^{+}))\\ F^{(y)}(-\theta,y,(z^{-},z^{+}))\\ F^{(z^{+})}(-\theta,y,(z^{-},z^{+}))\\ F^{(z^{-})}(-\theta,y,(z^{-},z^{+}))\end{pmatrix}=-\begin{pmatrix}-F^{(\theta) }(\theta,y,(z^{+},z^{-}))\\ F^{(y)}(\theta,y,(z^{+},z^{-}))\\ F^{(z^{-})}(\theta,y,(z^{+},z^{-}))\\ F^{(z^{+})}(\theta,y,(z^{+},z^{-}))\end{pmatrix}\right\}\,,\] (4.9)
i.e. the vector fields which are reversible w.r.t. the involution \(S\).
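As a simple check, the diagonal vector field \(N_{0}\) in (4.8) belongs to \({\mathcal{E}}^{(0)}\): its \(\theta\)-component \(\omega^{(0)}\) is constant (hence even in \(\theta\)), its \(y\)-component vanishes, and for the normal components one has \(N_{0}^{(z^{+})}(-\theta,y,(z^{-},z^{+}))={\rm i}\Omega^{(0)}z^{-}=-\big(-{\rm i}\Omega^{(0)}z^{-}\big)=-N_{0}^{(z^{-})}(\theta,y,(z^{+},z^{-}))\), and similarly for the \(z^{-}\)-component; this is the reversibility of \(N_{0}\) w.r.t. \(S\) claimed above.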
The conditions of Definitions 2.17 and 2.19 are trivially fulfilled with \(\mathtt{n}=1\) and
\[{\mathcal{B}}_{\mathcal{E}}:=\{g=(0,g^{(y)}(\theta),g^{(w)}(\theta))\,:\qquad g ^{(y)}(-\theta)=g^{(y)}(\theta)\,,\quad g^{(z^{+})}(-\theta)=g^{(z^{-})}( \theta)\}\,.\] (4.10)
Now consider a vector field of the form \({\mathcal{E}}\ni F=N_{0}+G\); our aim is to apply Theorem 2.25 to \(F\) provided that \(G\) is sufficiently small. The simplest possible choice of compatible change of variables is \({\mathcal{L}}_{n}:=\mathds{1}\) for all \(n\). With such choices, our scheme is the standard Nash-Moser algorithm to find solutions of the torus embedding equation (1.2). Indeed each \(\Phi_{n}\) is a translation in the \(y,w\) direction
\[y\to y+g_{n}^{(y)}(\theta)\,,\quad w\to w+g_{n}^{(w)}(\theta)\]
so that
\[{\mathcal{H}}_{n}:\;y\to y+h_{n}^{(y)}(\theta)\,,\quad w\to w+h_{n}^{(w)}( \theta)\,,\quad h_{n}=\sum_{j=0}^{n}g_{j}\,,\] (4.11)
\[F_{n}=F(\theta,y+h_{n}^{(y)},w+h_{n}^{(w)})-\partial_{\theta}h_{n}\cdot F^{( \theta)}(\theta,y+h_{n}^{(y)},w+h_{n}^{(w)}).\]
Note that \(h_{n}\) is simply an _approximate solution_ for the torus embedding equation (1.2), indeed one has that \(F_{n}^{(y)}(\theta,0,0),F_{n}^{(w)}(\theta,0,0)\to 0\) as \(n\to\infty\).
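Note also that the convergence of \(h_{n}\) can be read off directly from (2.56): by (2.42b) and \(1<\chi<2\) one has \(\kappa_{2}>2\kappa_{0}>\mu+1\), so the bound \(|g_{n}|_{\vec{u}_{n},{\mathfrak{p}}_{1}}\leq K_{0}^{\kappa_{2}}\varepsilon_{0}{\mathtt{G}}_{0}K_{n-1}^{-\kappa_{2}+\mu+1}\) is summable in \(n\) (indeed super-exponentially small, since \(K_{n-1}=K_{0}^{\chi^{n-1}}\)), consistently with the convergence of \({\mathcal{H}}_{n}\) stated in Theorem 2.25.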
The difference with the standard Nash-Moser algorithm is therefore only in the point of view: instead of looking for a torus embedding, we are looking for a translation in the \(y,w\) variables which puts the embedding to zero.
With the above assumptions, and assuming also that the smallness conditions (2.54) are satisfied, we can apply Theorem 2.25. Now we show that in this case the set of parameters satisfying the Mel’nikov conditions of Definition 3.4 also satisfies the homological equation of Definition 2.23.
**Proposition 4.3**.: _Let \(\gamma>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider the vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) with (see (4.8))_
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_which is \(C^{3}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). Assume that \(\gamma\sim{\rm diam}{\mathcal{O}}_{0}\) and \(F\in{\mathcal{E}}\) defined in (4.9). If \({\mathcal{O}}\) satisfies the Mel’nikov conditions of Definition 3.4 for \((F,K,\vec{v}^{\text{\tiny 0}})\) then \({\mathcal{O}}\) satisfies the homological equation of Definition 2.23 provided that we fix \(\mu=\mu_{1}\)._
Proof.: We note that (4.3) implies \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{X}}^{\perp}G)\)= \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}G)\) so we may set \(g={\mathfrak{W}}\,\Pi_{K}\Pi_{\mathcal{X}}G\in{\mathcal{B}}_{\mathcal{E}}\) for \(\xi\in{\mathcal{O}}\). It is easily seen that the first of (2.46) follows from (3.3). As for the second equation we use the sharpness of \(|\cdot|_{\vec{v},p}\). Indeed by Lemma B.1 we know that \(C_{\vec{v},p+1}(G)|g|_{\vec{v},{\mathfrak{p}}_{1}}+C_{\vec{v},{\mathfrak{p}}_{ 1}}(G)|g|_{\vec{v},p+\nu+1}\) is a tameness constant for \([\Pi_{X}^{\perp}G,g]\). Then the bound follows by Lemma 4.2. Regarding 2.c, one simply notes that formula (3.4) implies (2.47).
Finally in order to prove 2.d, we start by showing that the inequality (2.48) holds by substituting the l.h.s. with a tameness constant \(C_{\vec{v},{\mathfrak{p}}_{2}-1}(\Pi_{{\mathcal{X}}}\Phi_{*}F)\); this follows from Lemmata B.3, B.1, B.6 and Remark B.2, see the proof of estimate (5.12) for more details. Therefore, the bound (2.48) follows from the sharpness of \(|\cdot|_{\vec{v},p}\) for any \(p\). ∎
By Proposition 4.3 the set \({\mathcal{O}}_{\infty}\) of Theorem 2.25 contains the intersection over \(n\) of the sets in which the Mel’nikov conditions are satisfied for \((F_{n},K_{n},\vec{v}_{n}^{\text{\tiny 0}})\), therefore we now analyze the Mel’nikov conditions. The operator \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F_{n})\) has the form
\[\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F_{n})=\big{(}F^{(\theta)}_{n}( \theta,0,0)\cdot\partial_{\theta}\big{)}\,\mathds{1}+\begin{pmatrix}F^{(y,y)}_ {n}(\theta)&F^{(y,w)}_{n}(\theta)\\ F^{(w,y)}_{n}(\theta)&F^{(w,w)}_{n}(\theta)\end{pmatrix}\,.\] (4.12)
Recall that the \(F^{(\mathtt{v}_{i},\mathtt{v}_{j})}\) are defined in (2.24). Note that this operator maps \({\mathcal{B}}_{\mathcal{E}}\) into \({\mathcal{X}}\cap{\mathcal{E}}\). Finding \({\mathfrak{W}}\) satisfying (3.3) and (3.4) is now equivalent to finding an approximate inverse for a \(K_{n}\)-truncation of (4.12), which unfortunately seems to be a rather delicate question.
A possible simplification occurs if instead of (4.2) we consider the decomposition (recall (2.27))
\[{\mathcal{N}}:=\langle{\mathcal{V}}^{(\theta,0)}\rangle\oplus{\mathcal{V}}^{(w ,w)}\oplus{\mathcal{V}}^{(y,y)}\oplus{\mathcal{V}}^{(y,w)}\oplus{\mathcal{V}}^ {(w,y)}\ \quad{\mathcal{X}}={\mathcal{A}}:={\mathcal{V}}^{(y,0)}\oplus{ \mathcal{V}}^{(w,0)}\oplus{\mathcal{V}}^{(\theta,0)}_{0}\,,\]
and leave \({\mathcal{E}}\) unchanged; it is easily seen that the equivalent of Lemma 4.2 holds, and that
\[{\mathcal{B}}_{\mathcal{E}}:=\{g=(g^{(\theta)}(\theta),g^{(y)}(\theta),g^{(w)} (\theta))\,:\;g^{(\theta)}(-\theta)=-g^{(\theta)}(\theta)\,,\;g^{(y)}(-\theta) =g^{(y)}(\theta)\,,\;g^{(z^{+})}(-\theta)=g^{(z^{-})}(\theta)\}\,.\] (4.13)
Note that in this case we would obtain a stronger result, since the dynamics on the model torus would be linear.
We divide \({\mathcal{X}}\) by degree decomposition as in (3.1), with \(\mathtt{b}=2\) and \({\mathcal{X}}_{1}={\mathcal{V}}^{(y,0)}\oplus{\mathcal{V}}^{(w,0)}\), \({\mathcal{X}}_{2}={\mathcal{V}}^{(\theta,0)}_{0}\); this decomposition is triangular by Remark 3.2 and the equivalent of Proposition 4.3 holds.
As before we fix \({\mathcal{L}}_{n}=\mathds{1}\) for all \(n\). Now the maps \(\Phi_{n}\) are a translation in the \(y,w\) direction composed with a torus diffeomorphism; hence they are of the form
\[\theta\to\theta+h_{n}^{(\theta)}(\theta),\quad y\to y+h_{n}^{(y)}(\theta)\,, \quad w\to w+h_{n}^{(w)}(\theta)\] (4.14)
defined in such a way that \(F_{n}^{(\theta,0)}(\theta)=\omega^{(n)}+O(|g_{n}|)\) (here \(\omega^{(n)}\) is the average of \(F_{n}^{(\theta,0)}(\theta)\) w.r.t. \(\theta\)).
Regarding the Mel’nikov conditions we have that, by definition, \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F_{n})\) is block-diagonal on \({\mathcal{X}}={\mathcal{X}}_{1}\oplus{\mathcal{X}}_{2}\) and its action on \({\mathcal{X}}_{1}\) is of the form
\[\big{(}\omega^{(n)}\cdot\partial_{\theta}\big{)}\,\mathds{1}+\begin{pmatrix}F^ {(y,y)}_{n}(\theta)&F^{(y,w)}_{n}(\theta)\\ F^{(w,y)}_{n}(\theta)&F^{(w,w)}_{n}(\theta)\end{pmatrix}\,,\] (4.15)
while the action on \({\mathcal{X}}_{2}\) is simply \(\omega^{(n)}\cdot\partial_{\theta}\). Thus the Mel’nikov conditions (3.3), (3.4) on the component \({\mathcal{X}}_{2}\) amount to requiring that \(\omega^{(n)}\) is \((\gamma,\tau)\)-diophantine up to order \(K_{n}\). All the difficulty is now reduced to inverting (4.15).
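As a sketch of this last point (here \(\tau>0\) denotes the Diophantine exponent, not used elsewhere in this discussion): for \(X=X^{(\theta)}(\theta)\partial_{\theta}\in E^{(K_{n})}\cap{\mathcal{X}}_{2}\) the equation \(\omega^{(n)}\cdot\partial_{\theta}\,g^{(\theta)}=X^{(\theta)}\) is solved mode by mode by \(g^{(\theta)}_{\ell}=X^{(\theta)}_{\ell}/({\rm i}\,\omega^{(n)}\cdot\ell)\) for \(0<|\ell|\leq K_{n}\) (recall that \(X^{(\theta)}\) has zero average). If \(|\omega^{(n)}\cdot\ell|\geq\gamma\langle\ell\rangle^{-\tau}\) for \(0<|\ell|\leq K_{n}\), then \(|g^{(\theta)}_{\ell}|\leq\gamma^{-1}\langle\ell\rangle^{\tau}|X^{(\theta)}_{\ell}|\), whence \(\|g^{(\theta)}\|_{s,a,p}\leq\mathtt{C}\,\gamma^{-1}K_{n}^{\tau}\|X^{(\theta)}\|_{s,a,p}\), a bound of the form (3.3); moreover, if \(X^{(\theta)}\) is even in \(\theta\) then \(g^{(\theta)}\) is odd, so \(g\in{\mathcal{B}}_{\mathcal{E}}\) as required by (4.13).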
Note that, under the same Diophantine hypotheses on \(\omega^{(n)}\), the operator (4.12) can be reduced to the form (4.15) by choosing at each step \(n\) the change of variables \({\mathcal{L}}_{n}\) to be the torus diffeomorphism which reduces \(F_{n}^{(\theta)}(\theta,0,0)\) to its mean value. Of course one needs to verify that the \({\mathcal{L}}_{n}\) are in fact a sequence of compatible changes of variables as in Definition 2.24.
If we assume that the subspaces \(\ell_{K}\) of Hypothesis 2.1 are finite dimensional then the invertibility of \(\Pi_{K_{n}}\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F_{n})\Pi_{K_{n}}\) can be imposed by requiring that its eigenvalues are non-zero (the so-called first Mel’nikov condition); however, unless \(\ell_{K}\) is uniformly bounded (i.e. when \(\ell_{a,p}\) is a finite dimensional space) it is not at all trivial to obtain from such a condition the bounds (3.3) and (3.4).
To the best of our knowledge the only examples in which one has enough control on (4.15) as it is, are the forced cases, i.e. when \(F^{(\theta)}=\omega_{0}\) and there are no \(y\) variables, that is \(d_{1}=0\). In this case one can use the so-called multiscale approach; see for instance [49, 25, 28]. Otherwise one needs a more refined decomposition; see below.
### Example 2: Hamiltonian KAM/Nash-Moser.
The following section is essentially a reformulation in our notations of the approach proposed in [45]. We start by remarking that when the vector field (2.53) is Hamiltonian, it is natural to apply to it only symplectic changes of variables: this amounts to completing the maps introduced in (4.14) to symplectic ones. We now describe our procedure and at the end of the subsection we state the Theorem with an application to the NLS equation.
**Definition 4.4** (**Symplectic structure)**.: _Recall that we assumed \(\ell_{a,p}\) to have the product structure of Definition 4.1. We endow the phase space with the symplectic structure \(d\theta\wedge dy+{\rm i}dz^{+}\wedge dz^{-}\)._
We consider the decomposition
\[{\mathcal{N}}:=\langle{\mathcal{V}}^{(\theta,0)}\rangle\oplus{\mathcal{V}}^{(w ,w)}\oplus{\mathcal{V}}^{(y,w,w)}\,,\quad{\mathcal{X}}:={\mathcal{V}}_{0}^{( \theta,0)}\oplus{\mathcal{V}}^{(y,0)}\oplus{\mathcal{V}}^{(y,y)}\oplus{ \mathcal{V}}^{(y,w)}\oplus{\mathcal{V}}^{(w,0)}\,.\] (4.16)
This decomposition satisfies (2.17) with \(\mathtt{n}=2\) and it is a degree decomposition with \({\rm deg}(y)=2\). Using the notation of (3.1) we have \(\mathtt{b}=3\) and
\[{\mathcal{X}}_{1}={\mathcal{V}}^{(y,0)}\,,\quad{\mathcal{X}}_{2}={\mathcal{V}} ^{(y,w)}\oplus{\mathcal{V}}^{(w,0)}\,,\quad{\mathcal{X}}_{3}={\mathcal{V}}_{0} ^{(\theta,0)}\oplus{\mathcal{V}}^{(y,y)}.\]
We remark that if one wants to solve the torus embedding equation taking advantage of the Hamiltonian structure, then the decomposition (4.16) appears naturally, since it is the minimal decomposition containing (4.2) and preserving the Hamiltonian structure. More precisely, given a change of coordinates as in (4.14), completing it to a symplectic one produces an element of \({\mathcal{X}}\) in (4.16).
Note that, by (i) of Definition 2.19, the bigger the set \({\mathcal{X}}\), the more delicate the choice of \({\mathcal{A}}\).
**Definition 4.5** (**Finite rank vector fields)**.: _We consider vector fields \(f:\mathds{T}^{d}_{s}\times D_{a,p+\nu}(r)\to V_{a,p}\) of the form_
\[f=\sum_{v\in\mathtt{V}}f^{(v,0)}\partial_{v}+(f^{(y,y)}y+f^{(y,w )}\cdot w)\cdot\partial_{y}\,,\] (4.17)
\[f^{(y_{i},w)}\in H^{p}(\mathds{T}^{d}_{s};\ell_{-a,-{\mathfrak{p }}_{0}-\nu})\cap H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s};\ell_{-a,p-{ \mathfrak{p}}_{1}-{\mathfrak{p}}_{0}-\nu})\,,\quad\langle f^{(\theta,0)} \rangle=0\]
_and we set for \(p\geq{\mathfrak{p}}_{1}\)_
\[|f|_{s,a,p}: =\sum_{u=\theta,y,w}\|f^{(u,0)}\|_{s,a,p}+\max_{i,j=1,\dots,d_{1} }\|f^{(y_{i},y_{j})}\|_{s,a,p}+\] (4.18)
\[+\frac{1}{r_{0}^{\mathtt{s}}} \max_{i=1,\ldots,d_{1}}\left(\|f^{(y_{i},w)}\|_{H^{p}(\mathds{T}^ {d}_{s};\ell_{-a,-{\mathfrak{p}}_{0}-\nu})}+\|f^{(y_{i},w)}\|_{H^{{\mathfrak{p }}_{0}}(\mathds{T}^{d}_{s};\ell_{-a,p-{\mathfrak{p}}_{1}-{\mathfrak{p}}_{0}- \nu})}\right)\]
_We say that \(f\) is of finite rank if \(|f|_{s,a,p}<\infty\). We denote by \({\mathcal{A}}_{s,a,p}\) the space of finite rank vector fields. Given a compact set \({\mathcal{O}}\subseteq{\mathcal{O}}_{0}\) we denote by \({\mathcal{A}}_{\vec{v},p}\) with \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) the set of Lipschitz families \({\mathcal{O}}\to{\mathcal{A}}_{s,a,p}\) with the corresponding \(\gamma\)-weighted Lipschitz norm which we denote by \(|\cdot|_{\vec{v},p}\)._
**Remark 4.6**.: _Note that \({\mathcal{V}}^{(y,w)}\) is not contained in the set of finite rank vector fields; indeed in general, by the identification of \(\ell_{a,p}^{*}\) with \(\ell_{-a,-p}\), any \(g\in{\mathcal{V}}^{(y_{i},w)}\) can be written as \(g^{(y_{i},w)}(\theta)\cdot w\,\partial_{y_{i}}\) where_
\[g^{(y_{i},w)}\in H^{p}(\mathds{T}^{d}_{s},\ell_{-a,-{\mathfrak{p}}_{0}-\nu}) \cap H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s},\ell_{-a,-p-\nu})\,.\]
_On the other hand (4.17) is a stronger condition. Our – notationally quite unpleasant – choice of \(\ell_{-a,p-{\mathfrak{p}}_{1}-{\mathfrak{p}}_{0}-\nu}\) is needed in order to verify condition (2.31) in Definition 2.18._
**Definition 4.7**.: _Given \(K>0\) and a vector field \(f\in{\mathcal{A}}\) we define the projection \(\Pi_{K}f\) as_
\[(\Pi_{K}f^{(\mathtt{v},0)})(\theta):=\sum_{|\ell|\leq K}f^{( \mathtt{v},0)}_{\ell}e^{{\rm i}\ell\cdot\theta},\qquad\mathtt{v}=\theta,y\,,\] (4.19)
\[(\Pi_{K}f^{(w,0)})(\theta):=\sum_{|\ell|\leq K}\Pi_{\ell_{K}}f^{( w,0)}_{\ell}e^{{\rm i}\ell\cdot\theta},\quad(\Pi_{K})f^{(y_{i},y_{j})}(\theta) :=\sum_{|\ell|\leq K}f^{(y_{i},y_{j})}_{\ell}e^{{\rm i}\ell\cdot\theta},i,j=1, \ldots,d\,,\]
\[(\Pi_{K}f^{(y_{i},w)})(\theta):=\sum_{|\ell|\leq K}\Pi_{\ell_{K}} f^{(y_{i},w)}_{\ell}e^{{\rm i}\ell\cdot\theta}\,,\]
_and we define \(E^{(K)}\) as the subspace of \({\mathcal{A}}_{\vec{v},p}\) where \(\Pi_{K}\) acts as the identity._
**Lemma 4.8**.: _The finite rank vector fields of Definition 4.5 satisfy all the conditions of Definition 2.18._
Proof.: The Hilbert structure comes from the fact that (4.18) is defined by using the norm of a Hilbert space on each component. Item \(1\) follows by the definition, while formula (2.30) of item \(2\) is proved in Lemma B.7. To prove the bound (2.31) in item \(2\) we reason as follows. Let us study the \((y,w)\) component, since the others are trivial. For \(\Phi=\mathds{1}\) one has that
\[\|f^{(y,w)}\circ\Phi\|_{s,a,{\mathfrak{p}}_{1}}=\max_{i=1,\ldots,d}\frac{1}{r_ {0}^{\mathtt{s}}}\sum_{l\in\mathds{Z}}\langle l\rangle^{2{\mathfrak{p}}_{1}}|f _{l}^{y_{i},w}\cdot w|^{2}\leq\max_{i=1,\ldots,d}\frac{1}{r_{0}^{\mathtt{s}}} \sum_{l\in\mathds{Z}}\langle l\rangle^{2{\mathfrak{p}}_{1}}\|f^{(y_{i},w)}_{l} \|^{2}_{-a,-{\mathfrak{p}}_{0}-\nu}\|w\|^{2}_{a,{\mathfrak{p}}_{0}+\nu}\] (4.20)
using the Cauchy-Schwarz inequality. By the sharpness of the latter inequality we deduce that any tameness constant must be larger than the right hand side of (4.20), which in turn is bounded from below by \(\frac{1}{2}|f^{(y,w)}|_{s,a,{\mathfrak{p}}_{1}}\).
Items \(4,5\) are proved in Lemmata B.8 and B.9 in the Appendix.
Finally we need to show that item \(3\) holds. Given a vector field \(f\in{\mathcal{A}}_{\vec{v},p}\), the components \(f^{(\mathtt{v},0)}\) are discussed in (4.6) and (4.7), and the component \(f^{(y,y)}\) can be treated in the same way. By the definition of the projectors in Definition 4.7, the last component \((y,w)\) behaves essentially as the component \((w,0)\). Hence again the smoothing bounds hold by reasoning as in (4.6) and (4.7). ∎
We choose (\(J\) is the standard symplectic matrix)
\[{\mathcal{E}}={\mathcal{E}}^{(0)}_{\rm Ham}:=\Big{\{}F\in{\mathcal{V}}_{\vec{v },p}:\;F=(\partial_{y}H,-\partial_{\theta}H,{\rm i}J\partial_{w}H)\,,\quad H( \theta,y,z^{+},\overline{z^{+}})\in\mathds{R}\Big{\}},\] (4.21)
while the regular vector fields are given by Definition 4.5. Note that by construction \({\mathcal{E}}^{(0)}_{\rm Ham}\cap{\mathcal{X}}\equiv{\mathcal{A}}\), indeed the condition \(J\partial_{w}H(\theta,y,w)=F^{(w)}(\theta,y,w)\in\ell_{a,p}\) implies that \(\partial_{w}F^{(y)}:=-\partial_{w}\partial_{\theta}H(\theta,y,w)\in\ell_{a,p}\) as well. Then \({\mathcal{B}}_{\mathcal{E}}\) is the space of regular Hamiltonian vector fields. The conditions of Definitions 2.17 and 2.19 are trivially fulfilled. Note that the degree decomposition preserves the Hamiltonian structure.
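For later use we record an elementary identity: if \(F=(\partial_{y}H,-\partial_{\theta}H,{\rm i}J\partial_{w}H)\in{\mathcal{E}}^{(0)}_{\rm Ham}\), then, since \(J^{2}=-\mathds{1}\) and mixed partial derivatives commute,
\[d_{w}F^{(y)}(\theta,0,0)=-\partial_{w}\partial_{\theta}H(\theta,0,0)=J^{2}\partial_{\theta}\partial_{w}H(\theta,0,0)=-{\rm i}J\,\partial_{\theta}F^{(w)}(\theta,0,0)\,,\]
i.e. the relation \(f^{(y,w)}=-{\rm i}J\partial_{\theta}f^{(w,0)}\) used in the proof of Lemma 4.9 below.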
**Lemma 4.9**.: _Consider a tame vector field \(f\in{\mathcal{A}}_{\vec{v},p}\cap{\mathcal{E}}\) (i.e. regular vector field according to Definition 4.5 which is Hamiltonian). There exists a \(\mathtt{c}\) (depending at most on \({\mathfrak{p}}_{0}\) and on the dimensions \(d,d_{1}\)) such that for any tameness constant_
\[|f|_{\vec{v},p}\leq\mathtt{c}\,C_{\vec{v},p+1}(f)\] (4.22)
_for any \(p\geq{\mathfrak{p}}_{1}\)._
Proof.: On the components \((\mathtt{v},0)\), \(\mathtt{v}=\theta,y,w\), and \((y,y)\) the bound (4.22) is proved in Lemma 4.2. Let us study the \((y,w)\)-component. First recall that, since we are in a Hamiltonian setting, one has \(f^{(y,w)}(\theta)=-iJ\partial_{\theta}f^{(w,0)}(\theta)\). Hence for \(\Phi\equiv\mathds{1}\) and \(y=0=w\) one has, for any \(p\geq{\mathfrak{p}}_{0}\), that
\[|f^{(y,w)}(\theta)|_{\vec{v},p}\leq|\partial_{\theta}f^{(w,0)}(\theta)|_{\vec{ v},p}\leq|f^{(w,0)}(\theta)|_{\vec{v},p+1}=\|f^{(w,0)}\circ{\Phi}\|_{\vec{v},p +1}\leq\mathtt{c}C_{\vec{v},p+1}(f).\] (4.23)
Therefore the assertion follows. ∎
As in Subsection 4.1 we now relate the Mel’nikov conditions to the homological equation.
**Proposition 4.10**.: _Let \(\gamma>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\cap{\mathcal{E}}^{(0)}_{\rm Ham}\) of the form_
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_where \({\mathcal{E}}^{(0)}_{\rm Ham}\) is defined in (4.21) and \(N_{0}\) is defined in (4.8) with \(\Omega^{(0)}\) self-adjoint. Assume that \(F\) is \(C^{4}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). Assume that \(\gamma\sim{\rm diam}{\mathcal{O}}_{0}\) and set_
\[\Gamma_{p}:=\gamma^{-1}C_{\vec{v},p}(G),\quad\Theta_{p}:=\gamma^{-1}C_{\vec{v} ,p}(\Pi_{{\mathcal{N}}}^{\perp}G).\] (4.24)
_If \({\mathcal{O}}\) satisfies the Mel’nikov conditions of Definition 3.4 for \((F,K,\vec{v}^{\text{\tiny 0}})\) then \({\mathcal{O}}\) satisfies the homological equation of Definition 2.23 provided that we fix parameters \(\mu\) and \(\mathtt{t}\) as in (3.7)._
Proof.: We wish to apply Proposition 3.5 in order to prove that items 1. and 2.a-b-c of Definition 2.23 are satisfied for \({\mathcal{O}}\) satisfying the Mel’nikov conditions. In order to do so we need to prove (3.6). The desired bounds follow from Lemma 4.9 and from the bounds (B.1) on tameness constants of commutators. We now prove item 2.d. We claim that there exists a choice of a tameness constant \(C_{\vec{v},{\mathfrak{p}}_{2}-1}(\Pi_{{\mathcal{X}}}\Phi_{*}F)\) which satisfies (2.48). Indeed this follows from Lemmata B.3, B.1, B.6 and Remark B.2, see the proof of estimate (5.12) for more details. The bound (2.48) follows from Lemma 4.9.
In fact one may prove (2.48) directly (obtaining a slightly better bound). Let \(\Phi=\mathds{1}+f\) be the time one flow map of the field \(g\) defined by Proposition 3.5 and \(\Phi^{-1}=\mathds{1}+\tilde{f}\) its inverse. Note that \(f\) is of the form (4.17) and \(\Pi_{{\mathcal{X}}}\Phi_{*}F=\Pi_{{\mathcal{X}}}F\circ{\Phi^{-1}}+df[F\circ{\Phi^{-1}}]\). The bound on the first summand follows by item 1. The only non trivial term in the second summand is the one involving \(d_{w}F^{(w)}\).
By the Hamiltonian structure the operator
\[i\sigma_{3}d_{w}F^{(w)}\,,\quad\sigma_{3}:=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\]
is self-adjoint; hence the bound (2.48) follows from the tame estimates on \(F\) and the fact that \(f^{(y,w)}\in\ell_{-a,-{\mathfrak{p}}_{0}-\nu}\). The assertion follows. ∎
Again, under the same assumptions as in Proposition 4.10 and of course assuming the smallness conditions (2.54), we can apply Theorem 2.25; by Proposition 4.10 the set \({\mathcal{O}}_{\infty}\) of Theorem 2.25 contains the intersection over \(n\) of the sets \(\mathcal{C}_{n}\) in which the Mel’nikov conditions are satisfied for \((F_{n},K_{n},\vec{v}_{n}^{\text{\tiny 0}})\), therefore we now analyze the Mel’nikov conditions.
The operator \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F_{n})\), restricted to the blocks \({\mathcal{X}}_{1},{\mathcal{X}}_{3}\), coincides with the operator \((\omega^{(n)}\cdot\partial_{\theta})\,\mathds{1}\), while on the block \({\mathcal{X}}_{2}={\mathcal{V}}^{(w,0)}\oplus{\mathcal{V}}^{(y,w)}\) we get
\[\begin{pmatrix}\big{(}\omega^{(n)}\cdot\partial_{\theta}\big{)}\,\mathds{1}-F^ {(w,w)}_{n}(\theta)&0\\ -F^{(y,w,w)}_{n}(\theta)&\big{(}\omega^{(n)}\cdot\partial_{\theta}\big{)}\, \mathds{1}+\big{(}F^{(w,w)}_{n}(\theta)\big{)}^{*}\end{pmatrix}\,.\] (4.25)
Note that the operators appearing on the diagonal of (4.25) are \(i\sigma_{3}\) times a self-adjoint operator, moreover the whole operator maps Hamiltonian vector fields into Hamiltonian vector fields.
As before, the Mel’nikov conditions (3.3), (3.4) on the components \({\mathcal{X}}_{1},{\mathcal{X}}_{3}\) amount to requiring that \(\omega^{(n)}\) is \((\gamma,\tau)\)-diophantine up to order \(K_{n}\).
In conclusion we have proved the following Theorem.
**Theorem 4.11**.: _Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\cap{\mathcal{E}}^{(0)}_{\rm Ham}\) of the form_
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_where \({\mathcal{E}}^{(0)}_{\rm Ham}\) is defined in (4.21) and \(N_{0}\) is defined in (4.8) with \(\Omega^{(0)}\) self-adjoint. Assume that \(F\) is \(C^{4}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). Fix \(\gamma>0\) such that \(\gamma\sim{\rm diam}{\mathcal{O}}_{0}\) and assume that \(G\) satisfies the smallness conditions (2.54) of Theorem 2.25. Then there exists an invariant torus for \(F\) provided that \(\xi\) belongs to the set \({\mathcal{O}}_{\infty}\) of Theorem 2.25. Finally \({\mathcal{O}}_{\infty}\) contains \(\bigcap_{n}\mathcal{C}_{n}\) where \(\mathcal{C}_{n}\) is the set of \(\xi\) such that \(\omega_{n}\) is \((\gamma,\tau)-\)diophantine and the matrix \(\mathfrak{N}_{n}\) in (4.25) is approximately invertible with tame bounds like (3.3) and (3.4)._
We claim that, in the applications, proving the approximate invertibility of (4.25) is significantly simpler than proving the invertibility of (4.15).
### Example 3: Reversible KAM/Nash-Moser.
We now wish to obtain a triangular decomposition for the Melnikov conditions as in (4.25) but without restricting to Hamiltonian vector fields. To this purpose we set
\[{\mathcal{N}}:={\mathcal{V}}^{(\theta,0)}\oplus{\mathcal{V}}^{(w,w)}\,,\quad{ \mathcal{X}}:={\mathcal{V}}^{(y,0)}\oplus{\mathcal{V}}^{(y,y)}\oplus{\mathcal{ V}}^{(y,w)}\oplus{\mathcal{V}}^{(w,0)}.\] (4.26)
or
\[{\mathcal{N}}:=\langle{\mathcal{V}}^{(\theta,0)}\rangle\oplus{\mathcal{V}}^{(w ,w)}\,,\quad{\mathcal{X}}:={\mathcal{V}}^{(\theta,0)}_{0}\oplus{\mathcal{V}}^{ (y,0)}\oplus{\mathcal{V}}^{(y,y)}\oplus{\mathcal{V}}^{(y,w)}\oplus{\mathcal{V} }^{(w,0)}.\] (4.27)
Such choices are compatible with Definition 2.17 with \(\mathtt{n}=1\). Note that both cases come from a degree decomposition provided that we fix \(1<{\rm deg}(y)<2\); therefore they are trivially triangular. Now the degree decomposition of (3.1), say in case (4.27), reads \(\mathtt{b}=4\) and gives \({\mathcal{X}}_{1}={\mathcal{V}}^{(y,0)}\), \({\mathcal{X}}_{2}={\mathcal{V}}^{(w,0)}\), \({\mathcal{X}}_{3}={\mathcal{V}}^{(y,w)}\) and \({\mathcal{X}}_{4}={\mathcal{V}}^{(\theta,0)}_{0}\oplus{\mathcal{V}}^{(y,y)}\).
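To see that this list is indeed ordered by increasing degree, here is a minimal check, assuming (as for the scaling degree introduced below in the comments on the smallness condition) that the degree of a monomial \(y^{j}e^{{\rm i}\theta\cdot\ell}w^{\alpha}\partial_{\mathtt{v}}\) is \({\rm deg}(y)j+|\alpha|-{\rm deg}(\mathtt{v})\) with \({\rm deg}(\theta)=0\) and \({\rm deg}(w)=1\):
\[{\rm deg}({\mathcal{X}}_{1})=-{\rm deg}(y)\in(-2,-1)\,,\quad{\rm deg}({\mathcal{X}}_{2})=-1\,,\quad{\rm deg}({\mathcal{X}}_{3})=1-{\rm deg}(y)\in(-1,0)\,,\quad{\rm deg}({\mathcal{X}}_{4})=0\,,\]
so that, precisely because \(1<{\rm deg}(y)<2\), the blocks \({\mathcal{X}}_{1},\ldots,{\mathcal{X}}_{4}\) appear in increasing degree and the decomposition is triangular.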
We define the space of regular vector fields \({\mathcal{A}}_{\vec{v},p}\) as the “finite rank vector fields” of Definition 4.5 and introduce the smoothing operator \(\Pi_{K}\) as in Definition 4.7. By Lemma 4.8 such vector fields satisfy all the conditions of Definition 2.18.
Regarding the choice of \({\mathcal{E}}\), we require the reversibility condition (4.9); moreover, in order to satisfy conditions (i) and (iii) of Definition 2.19 we set
\[{\mathcal{E}}={\mathcal{E}}^{(1)}:=\Big{\{}F\in{\mathcal{E}}^{(0)}:\;d_{w}F^{( y)}(\theta,y,w)\in H^{p}(\mathds{T}^{d}_{s};\ell_{-a,-{\mathfrak{p}}_{0}-\nu}) \cap H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s};\ell_{-a,p-{\mathfrak{p}}_{1}-{ \mathfrak{p}}_{0}-\nu})\,,\;\Big{\}}.\] (4.28)
If we set \({\mathcal{L}}_{n}=\mathds{1}\) as before, we get changes of variables of the form
\[y\to y+h^{(y,0)}(\theta)+h^{(y,y)}(\theta)y+h^{(y,w)}(\theta)\cdot w\,,\quad w \to w+h^{(w,0)}(\theta)\,,\quad\theta\to\theta+h^{(\theta,0)}(\theta)\]
i.e. the changes of variables (4.14) of Example 1, composed with a \((y,w)\)-linear change of variable of finite rank. Note that a regular \(g\in{\mathcal{X}}\) is in \({\mathcal{B}}_{\mathcal{E}}\) if it satisfies (4.13).
On \({{\mathcal{E}}}^{(1)}\) we give a slightly stronger definition of tame vector field.
**Definition 4.12**.: _We say that a \(C^{3}-\)tame vector field \(F\in{\mathcal{E}}^{(1)}\) is “adjoint-tame” if there exists a choice of tameness constants \(C_{\vec{v},p}(F)\) such that, for any \(\Phi\) generated by \(g\in{\mathcal{B}}_{{\mathcal{E}}}\) and for any \(h\) as in Definition 2.13, the adjoint of \(d_{\mathtt{U}}F(\Phi)\) is tame and satisfies the following bounds. Setting_
\[X^{p}:=H^{p+\nu}(\mathds{T}^{d}_{s};\mathds{C}^{d_{1}}\times\ell_{-a,-{ \mathfrak{p}}_{0}-\nu})\cap H^{{\mathfrak{p}}_{0}+\nu}(\mathds{T}^{d}_{s}; \mathds{C}^{d_{1}}\times\ell_{-a,p-{\mathfrak{p}}_{1}-{\mathfrak{p}}_{0}-\nu})\]
_and_
\[Y^{p}:=H^{p}(\mathds{T}^{d}_{s};V_{-a,-{\mathfrak{p}}_{0}})\cap H^{{\mathfrak{ p}}_{0}}(\mathds{T}^{d}_{s};V_{-a,p-{\mathfrak{p}}_{1}-{\mathfrak{p}}_{0}})\]
_for \(p\geq{\mathfrak{p}}_{1}\), one has (see formula (2.22))_
(4.29)
We have the following Lemmata.
**Lemma 4.13**.: _Consider a regular vector field \(f\in{\mathcal{A}}_{\vec{v},p}\). Then \(f\) satisfies (4.29) with \(C_{\vec{v},p}(f)=|f|_{\vec{v},p}\). Moreover there exists a \(\mathtt{c}\) (depending at most on \({\mathfrak{p}}_{0}\) and on the dimensions \(d,d_{1}\)) such that for any tameness constant satisfying (4.29) one has_
\[|f|_{\vec{v},p}\leq\mathtt{c}\,C_{\vec{v},p}(f)\] (4.30)
_for any \(p\geq{\mathfrak{p}}_{1}\)._
Proof.: We only sketch the proof of the Lemma when \(\Phi\) is the identity: the general case is essentially identical due to the simple structure of \({\mathcal{A}}_{\vec{v},p}\) and \({\mathcal{B}}\). The only non-trivial components are \(f^{(y_{i},w)}\cdot w\). The adjoint of the differential is then the map \(\lambda\to f^{(y_{i},w)}\lambda\) with \(\lambda\in H^{p}(\mathds{T}^{d}_{s})\). The result follows by the definition of \(|\cdot|_{\vec{v},p}\). ∎
**Lemma 4.14**.: _The adjoint-tame vector fields are closed with respect to close to identity changes of variables \(\Phi=\mathds{1}+f\) generated by \(\psi\in{\mathcal{B}}_{{\mathcal{E}}}\). In particular, setting \(F_{+}=\Phi_{*}F\), one has that the tameness constants in (B.7) satisfy condition \((T1)^{*}\)._
_Proof._: Fix a vector field \(F:\mathds{T}^{d}_{s}\times D_{a,p+\nu}(r)\times{\mathcal{O}}\to V_{a,p}\) which is \(C^{3}\)–tame. By Remark 2.7, we know that if \(|f|_{\vec{v},{\mathfrak{p}}_{1}}=\mathtt{c}\rho\) with \(\mathtt{c}\) small enough then there exists \(\Phi^{-1}=\mathds{1}+\tilde{f}\) with \(|\tilde{f}|_{\vec{v},p}\sim|f|_{\vec{v},p}\sim|\psi|_{\vec{v},p}\), and, by Lemma B.3, \(F_{+}:=\Phi_{*}F:\mathds{T}^{d}_{s-2\rho s_{0}}\times D_{a,p+\nu}(r-2\rho r_{0})\times{\mathcal{O}}\to V_{a-2\rho a_{0},p}\) is \(C^{3}\)–tame up to order \(q-\nu-1\), with scale of constants
\[C_{\vec{v}_{2},p}(F_{+})\leq(1+\rho)\Big{(}C_{\vec{v},p}(F)+C_{\vec{v},{ \mathfrak{p}}_{0}}(F)C_{\vec{v}_{1},p+\nu+1}(f)\Big{)},\] (4.31)
where \(\vec{v}:=(\lambda,{\mathcal{O}},s,a,r)\), \(\vec{v}_{1}:=(\lambda,{\mathcal{O}},s-\rho s_{0},a-\rho a_{0},r-\rho r_{0})\) and \(\vec{v}_{2}:=(\lambda,{\mathcal{O}},s-2\rho s_{0},a-2\rho a_{0},r-2\rho r_{0})\). Now, by Lemma 4.13, since \(f,\tilde{f}\) are “regular” vector fields, they are also “adjoint-tame”.
Consider a transformation \(\Gamma\) generated by \(g\in{\mathcal{B}}_{{\mathcal{E}}}\). We need to check that \((d_{\mathtt{U}}F_{+}(\Gamma))^{*}[h]\) satisfies (4.29) with \(C_{\vec{v},p}(F)\rightsquigarrow C_{\vec{v}_{2},p}(F_{+})\).
One can write \(F_{+}=F\circ{\Phi^{-1}}+df(\Phi)[F\circ\Phi^{-1}]\) and study the two summands separately. First note that \(\Phi\circ\Gamma=\Psi=\mathds{1}+k\) with \(k\in{\mathcal{B}}\) such that
\[|k|_{\vec{v}_{1},p}\leq|\psi|_{\vec{v},p}|g|_{\vec{v},{\mathfrak{p}}_{1}}+| \psi|_{\vec{v},{\mathfrak{p}}_{1}}|g|_{\vec{v},p}.\]
In particular \(k\) is “adjoint-tame” since it is regular. On the other hand if one has two linear (in \(y,w\)) vector fields \(A\) and \(B\) which are “adjoint-tame”, then \(AB\) is clearly “adjoint-tame”. Now one can write \((F_{+}\circ{\Gamma})=F(\Phi^{-1}\circ{\Gamma})+df(\Phi\circ{\Gamma})[F(\Phi^{- 1}\circ\Gamma)]\). Let us study for instance the first summand. Essentially (4.29) follows by the chain rule, the property \((T1)^{*}\) on \(F\), and the tame estimates on the differential of \(k\) and on its adjoint.
One has that \(d_{\mathtt{U}}\big{(}F(\Phi^{-1}\circ\Gamma)\big{)}=d_{\mathtt{U}}F(\Phi^{-1}\circ\Gamma)\,d_{\mathtt{U}}(\Phi^{-1}\circ\Gamma)\). The estimate (4.29) follows by \((T1)^{*}\) on the field \(F\) and the estimates on \(k\). Just as an example, consider the term coming from the differential of \(df[F\circ\Phi^{-1}]\) which is, for \(i=1,\ldots,d_{1}\), the operator \(f^{(y_{i},w)}\cdot d_{w}F^{(w)}(\theta,y,w)[\cdot]\). For \(h\in H^{p}(\mathds{T}^{d}_{s};\mathds{C})\) we have the estimate
\[\|(d_{w}F^{(w)}(\Psi))^{*}f^{(y_{i},w)}h\|_{\gamma,{\mathcal{O}}, H^{p+\nu}(\mathds{T}^{d}_{s};\ell_{-a,-{\mathfrak{p}}_{0}-\nu})\cap H^{{ \mathfrak{p}}_{0}+\nu}(\mathds{T}^{d}_{s};\ell_{-a,p-{\mathfrak{p}}_{1}-{ \mathfrak{p}}_{0}-\nu})}\stackrel{{(T1)^{*}}}{{\leq}}\] (4.32)
\[\leq(C_{\vec{v},p}(F)+C_{\vec{v},{\mathfrak{p}}_{0}}(F)|k|_{\vec{ v},p+\nu})\|f^{(y_{i},w)}h\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s}; \mathds{C})}+C_{\vec{v},{\mathfrak{p}}_{0}}(F)(1+|k|_{\vec{v},{\mathfrak{p}}_{ 1}})\|f^{(y_{i},w)}h\|_{H^{p}(\mathds{T}^{d}_{s};\mathds{C})}\]
\[\leq(C_{\vec{v},p}(F)+C_{\vec{v},{\mathfrak{p}}_{0}}(F)|k|_{\vec{ v},p+\nu})|\psi|_{\vec{v},{\mathfrak{p}}_{0}}\|h\|_{H^{{\mathfrak{p}}_{0}}( \mathds{T}^{d}_{s};\mathds{C})}+\]
\[+C_{\vec{v},{\mathfrak{p}}_{0}}(F)(1+|k|_{\vec{v},{\mathfrak{p}}_ {1}})(\|h\|_{H^{p}(\mathds{T}^{d}_{s};\mathds{C})}|\psi|_{\vec{v},{\mathfrak{p }}_{1}}+\|h\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s};\mathds{C})}|\psi|_{ \vec{v},p})\]
\[\leq C_{\vec{v},{\mathfrak{p}}_{0}}(F)(1+2|\psi|_{\vec{v},{ \mathfrak{p}}_{1}}|g|_{\vec{v},{\mathfrak{p}}_{1}})|\psi|_{\vec{v},{\mathfrak{ p}}_{1}}\|h\|_{H^{p}(\mathds{T}^{d}_{s};\mathds{C})}\]
\[+\|h\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s};\mathds{C})} \Big{[}C_{\vec{v},p}(F)|\psi|_{\vec{v},{\mathfrak{p}}_{1}}+C_{\vec{v},{ \mathfrak{p}}_{0}}(F)|\psi|_{\vec{v},p+\nu}+C_{\vec{v},{\mathfrak{p}}_{0}}(F)| \psi|_{\vec{v},{\mathfrak{p}}_{1}}|g|_{\vec{v},p+\nu}\Big{]}.\]
∎
**Proposition 4.15**.: _Let \(\gamma>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\cap{\mathcal{E}}^{(1)}\) of the form_
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_where \({\mathcal{E}}^{(1)}\) is defined in (4.28) and \(N_{0}\) is defined in (4.8) with \(\Omega^{(0)}\) self-adjoint. Assume that \(F\) is \(C^{3}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\) and adjoint-tame. Assume that \(\gamma\sim{\rm diam}{\mathcal{O}}_{0}\) and set_
\[\Gamma_{p}:=\gamma^{-1}C_{\vec{v},p}(G),\quad\Theta_{p}:=\gamma^{-1}C_{\vec{v} ,p}(\Pi_{{\mathcal{N}}}^{\perp}G).\] (4.33)
_If \({\mathcal{O}}\) satisfies the Mel’nikov conditions of Definition 3.4 for \((F,K,\vec{v}^{\text{\tiny 0}})\) then \({\mathcal{O}}\) satisfies the homological equation of Definition 2.23 provided that we fix parameters \(\mu\) and \(\mathtt{t}\) as in (3.7). Moreover \(\Phi_{*}F\) (defined in (2.48)) is adjoint-tame._
Proof.: We wish to apply Proposition 3.5 in order to prove that items 1. and 2.a-b-c of Definition 2.23 are satisfied. In order to do so we need to prove (3.6). One has that (3.6) follows directly by using the adjoint-tameness estimates on \(\Pi_{{\mathcal{R}}}G\) and \(\Pi_{{\mathcal{N}}}G\) as done in the proof of Lemma 4.14. Regarding item 2.d, to prove estimate (2.48) one can use the adjoint-tameness of \(G\) to get the bound for the term \(\Phi_{*}G\). The term \(\Phi_{*}N_{0}\) must be treated as done in (5.31) of Proposition 5.1 by using the norm \(|\cdot|_{\vec{v},p}\) instead of the tameness constant. This can be done using (3.6) and the fact that \(g\) in Definition 2.23 satisfies the homological equation (2.47). The adjoint-tameness of \(\Phi_{*}F\) follows by Lemma 4.14 since \(g\in{\mathcal{B}}_{{\mathcal{E}}}\) by definition. ∎
As in the previous examples, under the same assumptions as in Proposition 4.15 and of course assuming (2.54), we can apply Theorem 2.25; by Proposition 4.15 the set \({\mathcal{O}}_{\infty}\) of Theorem 2.25 contains the intersection over \(n\) of the sets in which the Mel’nikov conditions are satisfied for \((F_{n},K_{n},\vec{v}_{n}^{\text{\tiny 0}})\), therefore we now analyze the Mel’nikov conditions (3.4) in this case.
The operator \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F_{n})\) decomposes as follows: we get the operator \((\omega^{(n)}\cdot\partial_{\theta})\,\mathds{1}\) on the blocks \({\mathcal{X}}_{1},{\mathcal{X}}_{4}\), while on the blocks \({\mathcal{X}}_{2},{\mathcal{X}}_{3}\) we get
\[\big{(}\omega^{(n)}\cdot\partial_{\theta}\big{)}\,\mathds{1}-F^{(w,w)}_{n}( \theta)\,,\quad\big{(}\omega^{(n)}\cdot\partial_{\theta}\big{)}\,\mathds{1}+ \Big{(}F^{(w,w)}(\theta)\Big{)}^{*}\] (4.34)
respectively. As in the previous example the Diophantine condition on \(\omega^{(n)}\) is used in order to solve the homological equations on \({\mathcal{X}}_{1},{\mathcal{X}}_{4}\). In conclusion we have proved the following Theorem, which is the analogue of Theorem 4.11 in the reversible case, simply requiring less regularity for \(F\), adjoint-tameness, and of course the reversible structure instead of the Hamiltonian one.
**Theorem 4.16**.: _Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\cap{\mathcal{E}}^{(1)}\) of the form_
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_where \({\mathcal{E}}^{(1)}\) is defined in (4.28) and \(N_{0}\) is defined in (4.8) with \(\Omega^{(0)}\) self-adjoint. Assume that \(F\) is \(C^{3}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\) and adjoint-tame. Fix \(\gamma>0\) such that \(\gamma\sim{\rm diam}{\mathcal{O}}_{0}\) and assume that \(G\) satisfies the smallness conditions (2.54) of Theorem 2.25. Then there exists an invariant torus for \(F\) provided that \(\xi\) belongs to the set \({\mathcal{O}}_{\infty}\) of Theorem 2.25. Finally \({\mathcal{O}}_{\infty}\) contains \(\bigcap_{n}\mathcal{C}_{n}\) where \(\mathcal{C}_{n}\) is the set of \(\xi\) such that \(\omega_{n}\) is \((\gamma,\tau)-\)diophantine and the matrices (4.34) are approximately invertible with tame bounds like (3.3) and (3.4)._
As in the previous example one could also apply our KAM scheme with the decomposition (4.2) but using as changes of variables operators \({\mathcal{L}}_{n}\) which block diagonalize (4.12) into (4.34).
### Example 4: KAM with reducibility.
Up to now we have just reduced our problem to the inversion of (4.25) or (4.34). Inverting such operators is not trivial and requires some subtle multiscale arguments as discussed in Example 2, subsection 4.5 (see also [45]).
Clearly a major simplification would appear if we were able to diagonalize (4.34)-(4.25). This is indeed the classical KAM approach (see [4], [15]) but it requires much stronger non-resonance conditions, i.e. the second Mel’nikov conditions. How to use reducibility in order to prove bounds for operators of the form (4.34)-(4.25) for nonlinear PDEs on a circle has been discussed in various papers, see [40], [39], [41]. Here we briefly recall the main point in the simplest possible context. We consider a Hamiltonian case and assume that we work with the decomposition (4.16). Let us suppose that in Definition 4.4 we have
\[h_{a,p}=\oplus_{j\in\mathds{N}}h_{j},\quad\ell_{a,p}=\oplus_{j\in\mathds{N}} \ell_{j}\,,\quad\ell_{j}=h_{j}\times h_{j}\,,\quad w_{j}=(z_{j}^{+},z_{j}^{-})\]
with \(h_{j}\) finite dimensional subspaces, for simplicity suppose them one-dimensional. Then one may introduce finite dimensional monomial subspaces \({\mathcal{V}}^{(\mathtt{v},z^{\sigma_{1}}_{j_{1}},\cdots,z^{\sigma_{k}}_{j_{k}})}\), with \(\sigma_{i}=\pm 1\).
For a linear operator \(A(\theta)\in{\mathcal{L}}(\ell_{a,p},\ell_{a,p})\) one considers its block decomposition \(\{A_{j}^{i}\}_{i,j\in\mathds{Z}^{r}}\) and the off-diagonal decay norm
\[(|A|_{s,a,p}^{\rm dec})^{2}:=\sum_{h=(h_{1},h_{2})\in\mathds{N}\times\mathds{Z }^{d}}\langle h\rangle^{2p}e^{2(a|h_{1}|+s|h_{2}|)}\sup_{j\in\mathds{N}}|A_{j+ h_{1}}^{j}(h_{2})|^{2}\] (4.35)
where \(|A_{j}^{i}|\) is the operator norm on \({\mathcal{L}}(\ell_{i},\ell_{j})\) . Then we consider the corresponding weighted Lipschitz norm which we denote \(|\cdot|^{\rm dec}_{\vec{v},p}\). This gives a special role to diagonal \(\theta\)-independent vector fields, so we define
\[{\mathcal{N}}_{0}:=\langle{\mathcal{V}}^{(\theta,0)}\rangle\bigoplus_{j,\sigma }\langle{\mathcal{V}}^{(z^{\sigma}_{j},z^{\sigma}_{j})}\rangle\cap{\mathcal{V} }^{(w,w)}\,,\]
Then one can choose \({\mathcal{E}}\) as
\[{\mathcal{E}}^{(2)}_{\rm Ham}:=\{F\in{\mathcal{E}}^{(0)}_{\rm Ham}:\;\quad d_{ w}F^{(w)}(\theta,y,u)=D+M\}\] (4.36)
with \(D\) diagonal and \(M\) a bounded operator with finite \(|\cdot|^{\rm dec}_{\vec{v},p}\) norm. We are in the framework of [15] or [48], but we are **not** requiring analyticity; therefore we need some tameness properties, which we ensure by choosing the norm \(|\cdot|^{\rm dec}_{\vec{v},p}\). Let us briefly recall the notation. We consider the Hamiltonian vector field
\[F_{0}=\omega_{0}\cdot\partial_{\theta}+{\rm i}\sum_{j}\Omega_{j}^{(0)}z^{+}_{j }\partial_{z^{+}_{j}}-{\rm i}\sum_{j}\Omega_{j}^{(0)}z^{-}_{j}\partial_{z^{-}_ {j}}+G_{0}(\xi,y,\theta,w)\]
with the following assumptions.
1. _Non-degeneracy_. We require that for all \(j\neq k\), \(\ell\in\mathds{Z}^{d}\) \[\big{|}\{\xi\in{\mathcal{O}}_{0}:\omega_{0}\cdot\ell+\Omega^{(0)}_{j}\pm\Omega_{k}^{(0)}=0\}\big{|}=0\,,\qquad\Omega^{(0)}_{j}\pm\Omega_{k}^{(0)}\neq 0\quad\forall\xi\in{\mathcal{O}}_{0}\,\] (4.37)
2. _Frequency Asymptotics_. We assume that \(\xi\to\omega^{(0)}(\xi)\) is a lipeomorphism and \[|\omega(\xi)|,|\omega(\xi)|^{\rm Lip},|\Omega^{(0)}_{j}(\xi)-j^{\nu}|,|\Omega^ {(0)}_{j}(\xi)|^{\rm Lip}<M\,,\quad|\xi(\omega)|^{\rm Lip}\leq L\,,\quad \forall\xi\in{\mathcal{O}}_{0}\] for some \(\nu>1\).
3. _Regularity_. We require \(G_{0}\in{\mathcal{E}}^{(2)}_{\rm Ham}\) with \(D=0\); more precisely, \(G_{0}\) is a \(C^{4}\)-tame bounded Hamiltonian vector field with \(\Pi_{{\mathcal{N}}_{0}}G_{0}=0\), well defined and Lipschitz for \(\xi\in{\mathcal{O}}_{0}\), a compact set of positive measure.
4. _Smallness_. In Constraint 2.21 fix \[\alpha=0\,,\quad\chi=3/2\,,\quad\kappa_{1}=2\kappa_{0}+1\,,\quad\kappa_{3}=3\kappa_{0}+1\,,\quad\kappa_{2}=4\kappa_{0}+1\,,\] \[\eta=\mu+2\kappa_{2}+3\,,\quad\Delta{\mathfrak{p}}=9\kappa_{0}+3\,,\quad{\mathfrak{p}}_{1}=\frac{d+2}{2}+\nu+1\,;\] this leaves \(\mu\) as the only free parameter. Suppose that \({\mathtt{G}}_{0}\sim{\mathtt{R}}_{0}\sim 1\), fix \(\varepsilon_{0}\) small and set \(K_{0}=\varepsilon_{0}^{-1/5\kappa_{0}}\) so that the smallness conditions (2.43),(2.44a)–(2.44c) are satisfied. We assume finally that \[{\mathcal{P}}_{0}:=\Pi_{{\mathcal{N}}}F_{0}-\Pi_{{\mathcal{N}}_{0}}F_{0}=\Pi_{{\mathcal{N}}}G_{0}-\Pi_{{\mathcal{N}}_{0}}G_{0}=P^{(0)}(\theta)w\partial_{w}+\frac{{\rm i}}{2}Jw\cdot\sum_{i}\partial_{\theta_{i}}P^{(0)}(\theta)w\partial_{y_{i}}\] is small, i.e. \[\gamma_{0}^{-1}|P^{(0)}|^{\rm dec}_{\vec{v}_{0},{\mathfrak{p}}_{1}}\leq\varepsilon_{0}\,.\]
We are in the context of Subsection 4.2, so we have Proposition 4.10 with \(\mathtt{b}=3,\mathtt{t}=1\), and for convenience let us set
\[\mu=4(\mu_{1}+\nu)+5\,,\quad\mu_{1}=2\tau+2\,.\] (4.38)
**Theorem 4.17**.: _Fix \(\tau>d+1+\frac{2}{\nu-1}\) and \(\varepsilon_{0}(LM)^{\tau+1}\ll 1\). Let \(F_{0}\) be a \(C^{4}\)-tame vector field up to order \(q={\mathfrak{p}}_{2}+2\) satisfying assumptions A. to D. Then for \(\gamma_{0}\) small enough there exists a positive measure Cantor-like set \({\mathcal{O}}_{\infty}(\gamma_{0})\subset{\mathcal{O}}_{0}\), of asymptotically full Lebesgue measure as \(\gamma_{0}\to 0\), and a symplectic close to identity change of variables \({\mathcal{H}}_{\infty}\) such that_
\[\Pi_{\mathcal{X}}({\mathcal{H}}_{\infty})_{*}F_{0}=0\,,\quad(\Pi_{\mathcal{N}}-\Pi_{{\mathcal{N}}_{0}})({\mathcal{H}}_{\infty})_{*}F_{0}=0\,,\quad\forall\xi\in{\mathcal{O}}_{\infty}(\gamma_{0})\]
_so that \(F_{0}\) has a reducible KAM torus._
We apply Theorem 2.25 to \(F_{0}\) and we aim to show that we can produce a non-empty set \({\mathcal{O}}_{\infty}\) by choosing the \({\mathcal{L}}_{n}\) appropriately. By definition, at each step \(n\), set
\[\Pi_{{\mathcal{N}}_{0}}F_{n} ={\mathcal{D}}_{n}=\omega_{n}\cdot\partial_{\theta}+{\rm i}\sum_{ j}\Omega_{j}^{(n)}z^{+}_{j}\partial_{z^{+}_{j}}-{\rm i}\sum_{j}\Omega_{j}^{(n) }z^{-}_{j}\partial_{z^{-}_{j}}\,,\] (4.39)
\[\Pi_{{\mathcal{N}}}F_{n}-\Pi_{{\mathcal{N}}_{0}}F_{n} ={\mathcal{P}}_{n}=P^{(n)}(\theta)w\partial_{w}+\frac{{\rm i}}{2} Jw\cdot\sum_{i}\partial_{\theta_{i}}P^{(n)}(\theta)w\partial_{y_{i}}.\] (4.40)
**Lemma 4.18** (KAM reduction step).: _Fix \(\vec{v}_{n}=(\gamma_{n},{\mathcal{O}}_{n},s_{n},a_{n},r_{n})\), \(\vec{v}^{0}_{n}=(\gamma_{n},{\mathcal{O}}_{0},s_{n},a_{n},r_{n})\) and \(K_{n}\gg 1\) as in (2.52). Given a Hamiltonian vector field \({\mathcal{D}}_{n}+{\mathcal{P}}_{n}\) as in (4.39) with_
\[\rho_{n}(M\gamma_{n})^{-1}K_{n}^{2\tau+1}|P^{(n)}|^{\rm dec}_{\vec{v}_{n},{ \mathfrak{p}}_{1}}\ll 1\]
_there exists a symplectic change of variables \({\mathcal{L}}_{n+1}\), which conjugates_
\[({\mathcal{L}}_{n+1})_{*}({\mathcal{D}}_{n}+{\mathcal{P}}_{n}):=\widehat{ \mathcal{D}}_{n}+\widehat{\mathcal{P}}_{n}\]
_(here \(\widehat{\mathcal{D}}_{n}\) is the projection of the conjugate vector field onto \({\mathcal{N}}_{0}\) ) so that_
1. \({\mathcal{L}}_{n+1}\) _is the time one flow of the Hamiltonian vector field_ \[\mathcal{S}_{n+1}:=S^{(n+1)}(\theta)w\partial_{w}+\frac{{\rm i}}{2}Jw\cdot\sum _{i}\partial_{\theta_{i}}S^{(n+1)}(\theta)w\partial_{y_{i}}\,,\quad{\rm with} \;|S^{(n+1)}|^{\rm dec}_{\vec{v}^{0}_{n},p}\leq M\gamma_{n}^{-1}K_{n}^{2\tau+1 }|P^{(n)}|_{\vec{v}_{n},p}^{\rm dec}.\]
2. _Set_ \[\widehat{{\mathcal{O}}}_{n}:=\{\xi\in{\mathcal{O}}_{n}:\;|\omega^{(n)}\cdot l+ \Omega^{(n)}_{j}\pm\Omega_{k}^{(n)}|\geq\frac{M\gamma_{n}}{K_{n}^{\tau}},\quad |\ell|\leq K_{n}\}.\] (4.41) _For all_ \(\xi\in\widehat{{\mathcal{O}}}_{n}\)_,_ \(\mathcal{S}_{n+1}\) _solves the Homological equation_ \[[\mathcal{S}_{n+1},{\mathcal{D}}_{n}]+\Pi_{K_{n}}{\mathcal{P}}_{n}=0\,,\quad( \Pi_{K}P)_{ij}=\left\{\begin{aligned} P_{ij}&\quad{ \rm if}\;|i-j|\leq K\\ 0&\quad{\rm otherwise}\end{aligned}\right.\]
3. _Setting_ \(\widehat{v}_{n}:=(\gamma_{n},\widehat{\mathcal{O}}_{n},s_{n},a_{n},r_{n})\) _we have the bounds_ \[|\widehat{P}^{(n)}|^{\rm dec}_{\widehat{v}_{n},{\mathfrak{p}}_{1} }\leq|S^{(n+1)}|^{\rm dec}_{\vec{v},{\mathfrak{p}}_{1}}|P^{(n)}|^{\rm dec}_{ \vec{v},{\mathfrak{p}}_{1}}+K_{n}^{-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}|P ^{(n)}|^{\rm dec}_{\vec{v}_{n},{\mathfrak{p}}_{2}}\,,\] \[|\widehat{P}^{(n)}|^{\rm dec}_{\widehat{v}_{n},{\mathfrak{p}}_{2} }\leq|P^{(n)}|^{\rm dec}_{\vec{v}_{n},{\mathfrak{p}}_{2}}+{\rm const}|S^{(n+1) }|^{\rm dec}_{\vec{v},{\mathfrak{p}}_{2}}|P^{(n)}|^{\rm dec}_{\vec{v}_{n},{ \mathfrak{p}}_{2}}\,,\]
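Heuristically, the bound on \(S^{(n+1)}\) in item 1 can be read off directly: up to the signs distinguishing the \(z^{+}\) and \(z^{-}\) components, the homological equation of item 2 is solved coefficient-wise by dividing by the divisors controlled in (4.41), namely
\[\big{(}S^{(n+1)}\big{)}_{j}^{k}(\ell)=\frac{\big{(}P^{(n)}\big{)}_{j}^{k}(\ell)}{{\rm i}\big{(}\omega^{(n)}\cdot\ell+\Omega^{(n)}_{j}\pm\Omega^{(n)}_{k}\big{)}}\,,\qquad|\ell|\leq K_{n}\,;\]
the lower bounds in (4.41) give a loss \(M\gamma_{n}^{-1}K_{n}^{\tau}\) on each coefficient, and the Lipschitz dependence on \(\xi\), which involves differentiating the small divisors, accounts for the remaining power of \(K_{n}\), consistently with the bound \(M\gamma_{n}^{-1}K_{n}^{2\tau+1}\) of item 1.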
Now we may apply this reduction step at each step in our iteration in Theorem 2.25, provided that we show that the \({\mathcal{L}}_{n}\) are compatible changes of variables. This we prove by induction. In fact we recursively obtain that
\[\gamma_{n}^{-1}|P^{(n)}|_{\vec{v},{\mathfrak{p}}_{1}}^{\rm dec}\leq K_{0}^{ \kappa_{1}}\varepsilon_{0}K_{n}^{-\kappa_{1}}\,,\quad\gamma_{n}^{-1}|P^{(n)}|_ {\vec{v},{\mathfrak{p}}_{2}}^{\rm dec}\leq 2\mathtt{G}_{0}K_{n}^{\kappa_{1}}\]
so that the \({\mathcal{L}}_{n}\) are compatible since \(\kappa_{3}>\kappa_{1}+2\tau+1\). Now we can choose a set \({\mathcal{O}}_{n+1}\) which satisfies the Melnikov conditions (4.25) at step \(n\) for \(\widehat{F}_{n}=({\mathcal{L}}_{n})_{*}F_{n}\). Since \(\Pi_{\mathcal{N}}\widehat{F}_{n}=({\mathcal{L}}_{n})_{*}\Pi_{\mathcal{N}}F_{n}\), this amounts to finding an approximate inverse for ad\((\widehat{\mathcal{D}}_{n}+\widehat{\mathcal{P}}_{n})\) which satisfies (3.3) with \(\mu_{1}\) fixed in (4.38). Now, finding a partial inverse of (4.25) is equivalent to finding an inverse to \({\rm ad}\,\widehat{\mathcal{D}}_{n}\), since \({\rm ad}\,\widehat{\mathcal{P}}_{n}\) can be put inside the remainder, see formula (2.47). In turn the invertibility of \({\rm ad}\,\widehat{\mathcal{D}}_{n}\) on \({\mathcal{X}}_{2}\) is ensured by requiring lower bounds on the eigenvalues, i.e. choosing at each step
\[{{\mathcal{O}}}_{n+1}:=\{\xi\in\widehat{\mathcal{O}}_{n}:\;|\widehat{\omega}_{ n}\cdot l+\widehat{\Omega}^{(n)}_{j}|\geq\frac{\gamma_{n}M}{K_{n}^{\tau}}\,, \quad|\ell|\leq K_{n}\}\] (4.42)
in Theorem 2.25. One can easily check that, throughout the algorithm, \(\xi\to\omega^{(n)}(\xi)\) and \(\xi\to\widehat{\omega}^{(n)}(\xi)\) are lipeomorphisms and
\[|\omega^{(n)}-\omega_{0}|+\gamma_{0}|\omega^{(n)}-\omega_{0}|^{\rm Lip},| \Omega^{(n)}_{j}-\Omega^{(0)}_{j}|+\gamma_{0}|\Omega^{(n)}_{j}-\Omega^{(0)}_{j }|^{\rm Lip}<\gamma_{0}\varepsilon_{0}\,,\quad|\xi_{n}(\omega)|^{\rm Lip}\leq 2 L\,,\quad\mbox{in}\;{\mathcal{O}}_{n},\]
and the same holds for the corresponding \(\widehat{\cdot}\) quantities.
**Lemma 4.19**.: _For \(\tau>d+1+\frac{2}{\nu-1}\) and \(\varepsilon_{0}(LM)^{\tau+1}\ll 1\) we have that \(|{\mathcal{O}}_{0}\setminus\cap_{n}{\mathcal{O}}_{n}|\to 0\) as \(\gamma_{0}\to 0\)._
_Proof._: This is proved in [15] Corollary C. ∎
The main point in this approach is that at each step, while constructing our approximate solutions by inverting (4.25), we apply a change of variables which approximately diagonalizes the linearized operator up to a negligible remainder. Condition (4.41) ensures that the sequence of linear changes of variables converges. In fact this last approach could be made slightly more flexible: indeed in our construction we have used the norm (4.35) on the component \({\mathcal{V}}^{(w,w)}\oplus{\mathcal{V}}^{(y,w,w)}\) in order to perform the reduction. This imposes some unnecessary conditions on the changes of variables, since the only unavoidable requirement on the \({\mathcal{L}}_{n}\) is that they preserve \({\mathcal{E}}\) and do not disrupt the bounds. A typical example is a change of variables of the form \({\mathcal{L}}w(x)=w(x+a(\theta,x))\). One does not expect such a change of variables to have finite (4.35) norm but, if \(a\) is chosen appropriately, it can be a compatible change of variables.
### Application to the NLS.
Let us now specialize to a PDE context; typical examples are \(\ell_{a,p}=H^{p}(\mathds{T}^{d}_{a})\) or \(\ell_{p}=\ell_{a=0,p}=H^{p}(\mathbb{G})\) with \(\mathbb{G}\) a compact Lie group or homogeneous manifold. Then we may suppose that \(\Omega^{(0)}\) in (4.8) is diagonal w.r.t. the Fourier basis (and self-adjoint) while \(G\) is a composition operator w.r.t. the \(w\) variable. For simplicity of exposition, let us restrict ourselves to semi-linear PDEs with no derivative in the nonlinearity, so that \(G(w(x))=f(w(x))\) with \(f\) a \(C^{k}\) map \(\mathds{C}^{2}\to\mathds{C}^{2}\). More precisely we choose
\[{\mathcal{E}}={\mathcal{E}}^{(1)}_{\rm Ham}:=\Big{\{}F\in{\mathcal{E}}^{(0)}_{ \rm Ham}:\quad d_{w}F^{(w)}(\theta,y,u)=D+M+R\,,\quad u\in D_{a,p}(r)\Big{\}}\] (4.43)
where \(D\) is a diagonal operator, \(M\) is a multiplication operator and \(R\) is finite rank. Since we apply symplectic changes of variables which are the identity plus a translation plus a finite rank operator, \({\mathcal{E}}^{(1)}_{\rm Ham}\) is preserved throughout our algorithm.
As an example consider the NLS equation on a simple _compact_ Lie group \(\mathbb{G}\):
\[{\rm i}\partial_{t}u+\Delta u+M_{\xi}u=\epsilon g(|u|^{2})u\,,\quad\] (4.44)
here \(g(y)\in C^{q}(\mathds{R},\mathds{R})\) with \(q\) large, while \(M_{\xi}\) is a Fourier multiplier
\[M_{\xi}\phi_{i}(x)=\xi_{i}\phi_{i}(x)\,,\quad i=1,\cdots,d\]
where the \(\phi_{i}(x)\) are distinct eigenfunctions of the Laplace-Beltrami operator. We introduce the variables \(\theta,y,w=(z,\bar{z})\) by writing (for \(I_{j}>0\))
\[u(x)=\sum_{j=1}^{d}\sqrt{I_{j}+y_{j}}e^{{\rm i}\theta_{j}}\phi_{j}(x)+z(x)\,,\] (4.45)
where \(z(x)\) belongs to the orthogonal complement of Span\((\phi_{i}(x))_{i=1}^{d}\), which by definition is \(\ell_{p}=\ell_{0,p}\). As norm we choose the one induced on \(\ell_{p}\) by \(H^{p}(\mathbb{G})\). This change of variables is symplectic and one obtains a Hamiltonian vector field \(F=N_{0}+G\) of the form (1.3), where \(\omega^{(0)}_{i}(\xi)=\lambda_{i}+\xi_{i}\), with \(\lambda_{i}\) being the eigenvalue of the Laplace-Beltrami operator associated to \(\phi_{i}\). Correspondingly \(\Lambda^{(0)}\) is the Laplace-Beltrami operator restricted to the complement of Span\((\phi_{i}(x))_{i=1}^{d}\). Since the non-linearity is a composition operator on \(H^{p}\), classical results ensure the \(C^{k}\) tameness of the vector field for all \(k\). Moreover the restriction of a multiplication operator to \(\ell_{p}\) is a multiplication operator up to a finite rank operator (\(\ell_{p}\) is the complement of a finite dimensional subspace), so that \(F\in{\mathcal{E}}^{(1)}_{\rm Ham}\). Now we fix \(\gamma_{0}>1/2\) and take \({\mathtt{G}}_{0},{\mathtt{R}}_{0},\varepsilon_{0}\sim\epsilon\) in (2.21). In this way the NLS equation satisfies all the hypotheses of Theorem 4.11 and hence we deduce the existence of an invariant torus in the set \({\mathcal{O}}_{\infty}\).
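For concreteness, writing \(w=(z,\bar{z})=(z^{+},z^{-})\) and expanding \(z\) on the eigenfunctions orthogonal to \(\phi_{1},\dots,\phi_{d}\), the unperturbed part produced by (4.45) has, schematically and up to the sign conventions of (4.8), the diagonal form
\[N_{0}=\omega^{(0)}(\xi)\cdot\partial_{\theta}+{\rm i}\sum_{j}\Omega^{(0)}_{j}z^{+}_{j}\partial_{z^{+}_{j}}-{\rm i}\sum_{j}\Omega^{(0)}_{j}z^{-}_{j}\partial_{z^{-}_{j}}\,,\qquad\omega^{(0)}_{i}(\xi)=\lambda_{i}+\xi_{i}\,,\]
where the \(\Omega^{(0)}_{j}\) are the eigenvalues of \(\Lambda^{(0)}\); note that the external parameters \(\xi\) enter only through the tangential frequencies, since \(M_{\xi}\) acts trivially on the normal modes.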
Let us now discuss the measure estimates for the sets \(\mathcal{C}_{n}\) in this setting. It is easily seen that \(\omega^{(n)}_{j}=\lambda_{j}+\xi_{j}+O(\epsilon)\), so imposing the diophantine conditions on such a sequence is simple. As one might expect, the key point is the inversion of the operator in (4.25). In turn, since the operator is triangular, this amounts to inverting the diagonal, i.e. the operator
\[{\mathfrak{L}}_{n}=\big{(}\omega^{(n)}\cdot\partial_{\theta}\big{)}\,\mathds{1}-F^{(w,w)}_{n}(\theta)\,,\] (4.46)
acting on \({\mathcal{V}}^{(w,0)}\). We first remark that \(F^{(w,w)}_{n}(\theta)\) is the linearization of \(F^{(w)}\) at an approximate solution, up to a finite rank term. This follows from the fact that our changes of variables are translations plus finite rank. From this we deduce that \(F^{(w,w)}_{n}(\theta)\) is a multiplication operator plus a finite rank one. Now, in order to prove that the estimates (3.3) and (3.4) hold for (4.46) for a large set of \(\xi\), one can use a multiscale theorem, such as the one in [19],[24] or [28]. Indeed one may verify that \({\mathfrak{L}}_{n}\) in (4.46) fits all the hypotheses of Proposition 5.8 in [28], so that the tame estimates on the inverse follow by conditions on the eigenvalues. The measure estimates follow by eigenvalue variation (again just as in [28]).
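Schematically, the reason why \({\mathfrak{L}}_{n}\) fits a multiscale analysis is transparent in Fourier space: in the basis \(e^{{\rm i}\ell\cdot\theta}\phi_{j}(x)\) its matrix elements are
\[\big{(}{\mathfrak{L}}_{n}\big{)}_{(\ell,j)}^{(\ell^{\prime},j^{\prime})}={\rm i}\,\omega^{(n)}\cdot\ell\;\delta_{\ell}^{\ell^{\prime}}\delta_{j}^{j^{\prime}}-\big{(}F^{(w,w)}_{n}\big{)}_{j}^{j^{\prime}}(\ell-\ell^{\prime})\,,\]
i.e. a diagonal part governed by the small divisors (corrected by the diagonal part of \(F^{(w,w)}_{n}\)) plus, since \(F^{(w,w)}_{n}\) is a multiplication operator up to finite rank, an off-diagonal part decaying like the Fourier coefficients of the potential; this is precisely the structure to which Proposition 5.8 in [28] applies.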
If \(\mathbb{G}=\mathds{T}^{k}\), this general strategy is carried out in full detail in [45], in the more complicated case of a multiplication potential; see also the application to the wave equation [50]. Since the authors follow the Nash-Moser approach they never apply the symplectic change of variables which block-diagonalizes, but only deduce its existence from the Hamiltonian structure.
If \(\mathbb{G}=\mathds{T}\) and we look for odd solutions, then the hypotheses A. to D. of subsection 4.4 are satisfied and, using the same change of variables as in (4.45) and reasoning as above, one can see that \(F\in{\mathcal{E}}^{(2)}_{\rm Ham}\), see (4.36), and therefore one can apply Theorem 4.17 and obtain that the invariant torus is also reducible; note that in this case the procedure is complete in the sense that the set \({\mathcal{O}}_{\infty}\) has positive measure.
**Theorem**.: _The NLS equation (4.44) admits reducible and linearly stable quasi-periodic solutions for all \(\varepsilon\) small enough and for all \(\xi\) in a positive measure set._
Note that one could avoid adding the Fourier multiplier in (4.44) and obtain the parameters by using Birkhoff Normal form (this is done for instance in [51]).
In our application, we have not considered a reversible case: this is due to the fact that the natural structure for the semilinear NLS is the Hamiltonian one. On the other hand, if one considers DNLS (Derivative NLS) there are interesting examples which are not Hamiltonian but instead they are reversible; see for instance [32, 41]. We believe (see [52, 46]) that our result can be fruitfully applied also to the fully nonlinear autonomous case.
### Some comments
Examples 1-4 show that our KAM approach interpolates between the Nash-Moser scheme and the classical KAM one. Each different choice of decomposition depends on the specific application one is studying. Note that we could always choose the simplest decomposition (4.2) (or (4.16) in the Hamiltonian setting) and achieve the block-decoupling of the linearized operators through the compatible changes of variables \({\mathcal{L}}_{n}\).
A final remark is in order. In the PDE setting of subsection 4.5, applying the changes of variables of Lemma 4.18 implies some loss of information. Indeed it is easy to see that such changes of variables do not preserve \({\mathcal{E}}^{(1)}_{\rm Ham}\) unless we can show that the matrices in \(S^{(n)}\) are Töplitz (up to finite rank).
Unfortunately in most applications this is not the case: indeed in the classical KAM scheme the PDE structure is essentially ignored and one works in \({\mathcal{E}}^{(2)}_{\rm Ham}\). This has been an obstacle in extending KAM theory to higher spatial dimensions. In the latter case it is convenient to choose as \({\mathcal{E}}\) a slightly more involved class of vector fields, the so-called _quasi-Töplitz_ or _Töplitz-Lipschitz_ vector fields. The idea is to retain some information on the original PDE structure by showing that the linearized operator in the \(w\)-component can be “approximated” by piecewise multiplication operators.
A very good idea is to follow the approach of [39],[40], where the authors take advantage of the second Mel’nikov conditions (4.41) but do not apply the changes of variables which diagonalize the linearized operator. In this way at each step they preserve the PDE structure. The key observation is that, in a Sobolev regularity setting, the bounds (3.4) and (3.3) follow from corresponding bounds in some special coordinate system (i.e. the one in which the operator is diagonal), provided that the change of variables to the new coordinates is well-defined as an operator from the phase space to itself. Then there is no need to actually apply the change of variables. In the analytic context, however, this approach presents some difficulties, as one can see easily already in the case of the torus diffeomorphism, i.e. in studying the conjugate of a vector field \(F\) under the map
\[\mathds{T}^{d}_{s}\ni\theta\mapsto\theta+g(\theta),\] (4.47)
for some \(s\geq 0\), and \(g(\theta)\) small. Note that this change of variables is necessary in order to pass from (4.12) to (4.15). The map (4.47) induces an operator on the functions \(f\in H^{p}(\mathds{T}^{d}_{s})\) defined as \(({\mathcal{T}}f)(\theta)=f(\theta+g(\theta))\). It is easy to check that in the Sobolev context, i.e. \(s=0\), the map (4.47) is a diffeomorphism of \(\mathds{T}^{d}\) into itself and hence \({\mathcal{T}}\) is well-defined from \(H^{p}(\mathds{T}^{d})\) into itself. On the contrary, if \(s>0\) one has that the map (4.47) maps \(\mathds{T}^{d}_{s}\) into \(\mathds{T}^{d}_{s^{\prime}}\) with \(s^{\prime}<s\); in other words there is a loss of analyticity tied to the size of \(g\) (a one-line example is given at the end of this paragraph). In this case one cannot follow the strategy used in [40]: by following such a strategy directly one loses all the analyticity after a finite number of steps. This is due to the fact that, even if the iterative Nash-Moser scheme is coordinate independent, some of the estimates we perform actually depend on the system of coordinates. The basic idea of our approach is in fact to use at each step the system of coordinates which is most adapted to the problem one is studying. This point also explains the rôle of the compatible transformations \({\mathcal{L}}_{n}\). Indeed such transformations are not fundamental in proving the convergence of the scheme, but are introduced as a degree of freedom in order to study the set of good parameters in 3.4. In particular such transformations are the key point in order to study problems in the analytic setting.
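A heuristic one-line computation showing this loss, say for \(d=1\): testing \({\mathcal{T}}\) on the exponentials \(f_{N}(\theta)=e^{{\rm i}N\theta}\), \(N\in\mathds{Z}\), one finds
\[\big{|}({\mathcal{T}}f_{N})(\theta)\big{|}=e^{-N\,{\rm Im}(\theta+g(\theta))}\,,\]
whose supremum over \(\mathds{T}_{s}\) can be of order \(e^{|N|(s+\sup|{\rm Im}\,g|)}\) while \(\sup_{\mathds{T}_{s}}|f_{N}|=e^{|N|s}\); hence, as soon as \({\rm Im}\,g\not\equiv 0\), the operator \({\mathcal{T}}\) is unbounded on functions analytic on \(\mathds{T}_{s}\) and becomes bounded only after shrinking the strip to width \(s-\sup|{\rm Im}\,g|\).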
Up to now we have only considered degree decompositions; however, one can certainly consider cases in which \({\mathcal{N}},{\mathcal{X}},{\mathcal{R}}\) are not triangular. For instance we may consider \({\mathcal{E}}\) as in Example 4.2 with the same \({\mathcal{X}}\) as in (4.16) (i.e. it contains only terms of degree \(\leq 0\) w.r.t. the decomposition with deg\((y)=2\)) but where \({\mathcal{R}}\) contains only (and all) functions of degree \(\geq 3\). Now this choice respects Definitions 2.17 and 2.19, but clearly the decomposition is not triangular since \(\Pi_{\mathcal{X}}{\rm ad}N\) is not block diagonal. However it is easily seen that \(\Pi_{\mathcal{X}}{\rm ad}(R)=0\) for any \(R\in{\mathcal{R}}\), so that \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{X}}^{\perp}F)=\Pi_{\mathcal{X}}{\rm ad}(\Pi_{\mathcal{N}}F)\). In fact if one divides
\[{\mathcal{N}}=\bigoplus_{j=0}^{2}{\mathcal{N}}_{j}\]
in terms of increasing degree, then \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{{\mathcal{N}}_{0}}F)\) is block diagonal on \({\mathcal{X}}=\oplus{\mathcal{X}}_{j}\) while \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{{\mathcal{N}}_{0}}^{\perp}F)\) is upper triangular, so that solving the homological equation only depends on inverting \(\Pi_{\mathcal{X}}{\rm ad}(\Pi_{{\mathcal{N}}_{0}}F)\), as in the previous example. This means that we can apply Theorem 2.25 with a fairly explicit description of the set \({\mathcal{O}}_{\infty}\).
**On the smallness condition.** Let us comment on the smallness conditions in Constraint 2.21. First of all one can note that conditions (2.42) are trivially non-empty. Indeed, given any \(\mu,\nu,\kappa_{3},\alpha,{\mathfrak{p}}_{1},\chi\) (which are typically fixed by the problem), (2.42a),(2.42b) and (2.42c) give lower bounds on \(\kappa_{1},\kappa_{2},\eta\), while (2.42d) and (2.42e) constrain \({\mathfrak{p}}_{2}\) in a non-empty interval.
The conditions on \(\varepsilon_{0},{\mathtt{G}}_{0},{\mathtt{R}}_{0}\) are more subtle. Indeed (2.43) gives a lower bound on the size of \(K_{0}\), but then it is not trivial to show that (2.44) can be fulfilled. Clearly if \({\mathtt{G}}_{0}\sim{\mathtt{R}}_{0}\sim\varepsilon_{0}\) all the conditions reduce to a smallness condition on \(\varepsilon_{0}\) in terms of \(K_{0}\). If \({\mathtt{G}}_{0}\) or \({\mathtt{R}}_{0}\) are large the problem gets more complicated, see also Remark 2.22. Unfortunately this situation appears in many applications; for this reason we have not made any simplifying assumption on the relative sizes.
As one sees in (2.54), the parameters \({\mathtt{G}}_{0},{\mathtt{R}}_{0},\varepsilon_{0}\) give upper bounds on the size of \(G_{0},\Pi_{{\mathcal{N}}^{\perp}}G_{0},\Pi_{{\mathcal{X}}}G_{0}\) w.r.t. the parameter \(\gamma_{0}\). In applications \(G_{0},\gamma_{0}\) are essentially given, so that the only hope of modulating the bounds comes from modifying the domain \(\mathds{T}^{d}_{s}\times D_{a,p}(r)\times{\mathcal{O}}_{0}\). To this purpose we first make the trivial remark that if one shrinks the domain to a suitably small neighborhood of zero, then polynomials of high degree become very small. In order to formalize this fact we define a scaling degree as follows.
\[\mathtt{s}(\theta)=0\,,\quad\mathtt{s}(y)=\mathtt{s}\,,\quad\mathtt{s}(w)=1.\]
Then given a monomial vector field this fixes the scaling as
\[\mathtt{s}(y^{j}e^{{\rm i}\theta\cdot\ell}w^{\alpha}\partial_{\mathtt{v}})= \mathtt{s}j+|\alpha|-\mathtt{s}(\mathtt{v}).\]
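For instance, with this convention one computes
\[\mathtt{s}(e^{{\rm i}\theta\cdot\ell}\partial_{w})=-1\,,\qquad\mathtt{s}(y_{i}\partial_{y_{i}})=\mathtt{s}(w\partial_{w})=0\,,\qquad\mathtt{s}(w^{2}\partial_{y_{i}})=2-\mathtt{s}\,,\]
so that constant terms (in \((y,w)\)) have negative scaling, diagonal linear terms have scaling zero, and terms at least quadratic in \(w\) acting in the \(y\)-direction have positive scaling as soon as \(\mathtt{s}<2\).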
By construction the scaling is additive w.r.t. commutators and behaves just as the degree in Remark 3.2. We have the following result.
**Lemma 4.20**.: _Consider a tame vector field \(F\) as in Definition 2.13 of minimal scaling \(\bar{\mathtt{s}}\). Consider the rescaling \(r_{0}\rightsquigarrow\delta r_{0}\), \(r\rightsquigarrow\delta r\). Then one has_
\[C_{\vec{v}_{1},p}(F)\leq\delta^{\bar{\mathtt{s}}}C_{\vec{v},p}(F),\] (4.48)
_with \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}_{1}=(\gamma,{\mathcal{O}},s,a,\delta r)\)._
This definition of scaling induces a natural scaling decomposition of a vector field. In particular we remark that by construction \({\mathcal{N}}\) necessarily contains some terms of scaling zero, while \({\mathcal{X}}\) contains terms of negative scaling. However one can fix the scaling so that \({\mathcal{R}}\) contains only terms of positive scaling. Hence by Lemma 4.20 terms of positive scaling can be made small. In conclusion, given the constants \(\bar{\mathtt{G}}_{0},\bar{\varepsilon}_{0},\bar{\mathtt{R}}_{0}\) in a ball of radius \(\bar{r}_{0}\), one can try to fulfill (2.44) by rescaling \(r_{0}=\delta\bar{r}_{0}\). In this way \({\mathtt{R}}_{0}\) becomes smaller, \({\mathtt{G}}_{0}\) at best does not grow, while \(\varepsilon_{0}\) necessarily grows. This procedure produces upper and lower bounds on \(\delta\).
Consider the decomposition in (4.16) where the degree is equal to the scaling. Then \({\mathcal{N}}\) contains only terms of degree zero and hence \({\mathtt{G}}_{0}\) is scaling invariant. \({\mathcal{R}}\) contains only terms of positive degree, so that under rescaling \({\mathtt{R}}_{0}\rightsquigarrow\delta\bar{{\mathtt{R}}}_{0}\) at least. \({\mathcal{X}}\) has negative degree \(\geq-2\), hence \(\varepsilon_{0}\rightsquigarrow\delta^{-2}\bar{\varepsilon}_{0}\). As explained in Remark 2.22, the constants \(\bar{{\mathtt{G}}}_{0},\bar{\varepsilon}_{0},\bar{{\mathtt{R}}}_{0}\) are expected to depend only on \(\gamma\). Using the rescaling we have introduced the extra parameter \(\delta\), which should be chosen in terms of \(\gamma\) in order to fulfill the conditions (2.44). Actually it can be useful to make a finer analysis for \(\bar{\varepsilon}_{0}\). Indeed one can bound separately the terms of degree \(-2,-1,0\) in \(\bar{\varepsilon}_{0}\). Let us denote them by \(\bar{\varepsilon}_{0}^{(i)}\), \(i=-2,-1,0\). Then \({\varepsilon}_{0}\rightsquigarrow\sum_{i=-2}^{0}\delta^{i}\bar{\varepsilon}_{0}^{(i)}\), and hence the smallness conditions can be taken asymmetrically. The same holds for \({\mathtt{R}}_{0}\). This has been discussed with slightly different notation in [48].
## 5 Proof of the result
We divide the proof of Theorem 2.25 into two pieces: we first show one step in full detail (this is the same in both cases) and then we prove that it is indeed possible to perform infinitely many steps and that the procedure converges.
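Before entering the details, let us give a schematic picture of the key estimate. Dropping all the bookkeeping on indices and constants, the outcome (5.13) of the KAM step below for the new error \(\delta_{+}\) has the form
\[\delta_{+}\lesssim K^{c}\,\delta^{2}+K^{c-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\,\Theta_{{\mathfrak{p}}_{2}}\,,\]
with \(c\) a fixed constant: a quadratic term, due to the fact that \(g_{+}\) solves the homological equation up to quadratic errors, plus a smoothing tail which is small provided \({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1}\) is large enough. This is the standard Nash-Moser dichotomy which, with the choice of \(K_{n}\) in (2.52), yields the super-exponential convergence of the iteration.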
### The KAM step
**Proposition 5.1**.: _Let \(\gamma_{0}\), \(a_{0}\), \(r_{0}\), \(s_{0}\), \(\varepsilon_{0}\) and \(K_{0}\) be the constants appearing in Theorem 2.25. Fix \(\gamma,a,r,s\geq 0\) so that_
\[\frac{\gamma_{0}}{2}\leq\gamma\leq\gamma_{0},\,\,\frac{a_{0}}{2}\leq a\leq a_{ 0},\;\frac{s_{0}}{2}\leq s\leq s_{0},\;\frac{r_{0}}{2}\leq r\leq r_{0}\] (5.1)
_and \(0<\rho<1\) such that_
\[r-8\rho r_{0}>\frac{r_{0}}{2},\;\;{\rm if}\;s_{0}\neq 0\;{\rm then}\;s-8\rho s _{0}>\frac{s_{0}}{2},\,\,{\rm if}\;a_{0}\neq 0\,\,{\rm then}\;\;a-8\rho a_{0}> \frac{a_{0}}{2}.\] (5.2)
_Consider a vector field:_
\[F:\mathds{T}^{d}_{s}\times D_{a,p+\nu}(r)\times{\mathcal{O}}_{0}\to V_{a,p},\] (5.3)
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). Let \(N_{0}\) be the diagonal vector field appearing in Theorem 2.25 and write \(F=N_{0}+(F-N_{0})=N_{0}+G\). Set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\), denote by \({\mathcal{O}}\subseteq{\mathcal{O}}_{0}\) some set of parameters \(\xi\) for which \(F(\xi)\in{\mathcal{E}}\) and \(|\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{2}-1}\leq\mathtt{C}C_{\vec{v}, {\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{\perp}G)\) and define also_
\[\Gamma_{p}:=\gamma^{-1}C_{\vec{v},p}(G),\quad\Theta_{p}:=\gamma^{-1}C_{\vec{v} ,p}(\Pi_{{\mathcal{N}}}^{\perp}G),\quad\delta:=\gamma^{-1}|\Pi_{{\mathcal{X}}} G|_{\vec{v},{\mathfrak{p}}_{1}}.\] (5.4)
_Fix \(K>K_{0}\) and assume that_
\[\rho^{-1}K^{\mu+\nu+3}{\Gamma_{{\mathfrak{p}}_{1}}}\delta\leq\epsilon\] (5.5)
_with \(\epsilon=\epsilon({\mathfrak{p}}_{1},d)\) small enough. Consider a map \({\mathcal{L}}\) compatible with \((F,K,\vec{v},\rho)\) (see Definition 2.24) and set_
\[\hat{F}=N_{0}+\hat{G}:=({\mathcal{L}})_{*}F\,.\]
_Consider any \({\mathcal{O}}_{+}\subseteq{\mathcal{O}}\) solving the homological equation for \((\hat{F},K,\vec{v}^{\text{\tiny 0}}_{2},\rho)\) and set_
\[\vec{w}_{i} =(\gamma,{\mathcal{O}}_{+},s-i\rho s_{0},a-i\rho a_{0},r-i\rho r_ {0}),\] (5.6)
\[\vec{v}_{i}^{\text{\tiny 0}} =(\gamma,{\mathcal{O}}_{0},s-i\rho s_{0},a-i\rho a_{0},r-i\rho r_ {0}),\]
_for \(i=1,\ldots,8\). The following holds:_
_(i)_ _there exists an invertible (see Def._ 2.8_) change of variables_
\[\Phi_{+}:=\mathds{1}+{f}_{+}:\mathds{T}^{d}_{s-4\rho s_{0}}\times D_{a-4\rho a _{0},p}(r-4\rho r_{0})\times{\mathcal{O}}_{0}\longrightarrow\mathds{T}^{d}_{s- 2\rho s_{0}}\times D_{a-2\rho a_{0},p}(r-2\rho r_{0}),\] (5.7)
_with_ \(f_{+}\) _a regular vector field (see Def._ 4.5_) such that_ \(\Phi_{+}\) _is_ \({\mathcal{E}}\) _preserving for all_ \(\xi\in{\mathcal{O}}_{+}\) _and is generated by a vector field_ \(g_{+}\) _which satisfies the bound_
\[|g_{+}|_{\vec{w}_{2},{\mathfrak{p}}_{1}} \leq\mathtt{C}\Gamma_{{\mathfrak{p}}_{1}}K^{\mu}\delta\] (5.8)
\[|g_{+}|_{\vec{w}_{2},{\mathfrak{p}}_{2}} \leq{\mathtt{C}}K^{\mu+1}(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon _{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}+K^{\alpha({\mathfrak{p}}_{2}-{ \mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa _{3}}\Gamma_{{\mathfrak{p}}_{1}}))\,;\]
_(ii) fix_ \(s_{+},a_{+},r_{+}\) _as_
\[s_{+}=s-8\rho s_{0},\quad a_{+}=a-8\rho a_{0},\quad r_{+}=r-8\rho r_{0}\] (5.9)
_then_
\[F_{+}:=(\Phi_{+})_{*}\hat{F}=N_{0}+G_{+}:\mathds{T}^{d}_{s_{+}}\times D_{a_{+} ,p+\nu}(r_{+})\times{\mathcal{O}}_{0}\to V_{a,p}\,;\] (5.10)
_(iii) setting_ \(\gamma_{0}/2\leq\gamma_{+}\leq\gamma\)_,_ \(\vec{v}_{+}:=(\gamma_{+},{\mathcal{O}}_{+},a_{+},s_{+})=\vec{w}_{8}\)_,_ \(F_{+}\) _is tame and denoting the tameness constants as_
\[\Gamma_{+,p}:=\gamma_{+}^{-1}C_{\vec{v}_{+},p}(G_{+}),\quad\Theta_{+,p}:= \gamma_{+}^{-1}C_{\vec{v}_{+},p}(\Pi_{{\mathcal{N}}}^{\perp}G_{+}),\quad\delta _{+}:=\gamma_{+}^{-1}|\Pi_{{\mathcal{X}}}G_{+}|_{\vec{v}_{+},{\mathfrak{p}}_{1}}\]
_one can fix_
\[\Gamma_{+,{\mathfrak{p}}_{1}} =\frac{\gamma}{\gamma_{+}}(1+\varepsilon_{0}K^{-1})\Gamma_{{ \mathfrak{p}}_{1}}+\mathtt{C}K^{\mu}{\Gamma_{{\mathfrak{p}}_{1}}}\delta(K^{\nu +1}\Gamma_{{\mathfrak{p}}_{1}}+\varepsilon_{0}K^{-\eta})\,,\] (5.11)
\[\Gamma_{+,{\mathfrak{p}}_{2}} ={\mathtt{C}}\Big{(}\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}{ \Gamma_{{\mathfrak{p}}_{1}}}K^{\kappa_{3}}+{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu +\nu+2}(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}+K^{\alpha(p-{\mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p }}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}}))\Big{)}\,,\]
\[\Theta_{+,{\mathfrak{p}}_{1}} =\frac{\gamma}{\gamma_{+}}(1+\varepsilon_{0}K^{-1})\Theta_{{ \mathfrak{p}}_{1}}+\mathtt{C}K^{\mu}{\Gamma_{{\mathfrak{p}}_{1}}}\delta(K^{\nu +1}\Gamma_{{\mathfrak{p}}_{1}}+\varepsilon_{0}K^{-\eta})\,,\] (5.12)
\[\Theta_{+,{\mathfrak{p}}_{2}} =\mathtt{C}\big{(}\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{ \kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}+K^{\mu+\nu+2}{\Gamma_{{\mathfrak{p}}_{1 }}}\big{(}\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\delta( \Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p} }_{1}})\big{)}\big{)}\,,\]
\[\gamma_{+}^{-1} |\Pi_{{\mathcal{X}}}(G_{+})|_{\vec{v}_{+},{\mathfrak{p}}_{2}-1} \leq\Theta_{+,{\mathfrak{p}}_{2}}\]
_and_
\[\delta_{+} \leq{\mathtt{C}}{\Gamma_{{\mathfrak{p}}_{1}}}\big{(}\delta^{2} \Gamma^{2}_{{\mathfrak{p}}_{1}}K^{2\mu+2\nu+4}+\delta\varepsilon_{0}K^{\mu- \eta}\big{)}+K^{\mu+\nu+2-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\big{(} \Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p} }_{1}}\big{)}\] (5.13)
Proof.: First of all we note that by the definition of \({\mathcal{L}}\) one has
\[\hat{F}:\mathds{T}^{d}_{s-2\rho s_{0}}\times D_{a-2\rho a_{0},p+\nu}(r-2\rho r _{0})\times{\mathcal{O}}_{0}\to V_{a,p}\]
and \(\hat{F}\in{\mathcal{E}}\) for each \(\xi\) in \({\mathcal{O}}\). By (2.51) we have
\[\Pi_{\mathcal{X}}\hat{G}=\Pi_{\mathcal{X}}\hat{F}=\Pi_{\mathcal{X }}({\mathcal{L}})_{*}F=\Pi_{\mathcal{X}}({\mathcal{L}})_{*}\Pi_{\mathcal{X}}F= \Pi_{\mathcal{X}}({\mathcal{L}})_{*}\Pi_{\mathcal{X}}G\,,\]
\[\Pi_{{\mathcal{N}}}^{\perp}\hat{G}=\Pi_{\mathcal{N}}^{\perp}\hat{ F}=\Pi_{\mathcal{N}}^{\perp}({\mathcal{L}})_{*}F=\Pi_{\mathcal{N}}^{\perp}({ \mathcal{L}})_{*}\Pi_{\mathcal{N}}^{\perp}F=\Pi_{\mathcal{N}}^{\perp}({ \mathcal{L}})_{*}\Pi_{\mathcal{N}}^{\perp}G\,.\]
Now since \(G\) is \(C^{\mathtt{n}}\)-tame and \({\mathcal{N}},{\mathcal{X}}\) have maximal degree \(\leq\mathtt{n}\) we have that the tameness constants of \(\Pi_{\mathcal{N}}G,\Pi_{{\mathcal{R}}}G\) as well as \(|\Pi_{\mathcal{X}}G|\) are controlled by the tameness constant of \(G\) .
By (2.50) we have the bounds
\[C_{\vec{w}_{2},{\mathfrak{p}}_{1}}(\hat{G})\leq C_{\vec{v},{ \mathfrak{p}}_{1}}(G)(1+\varepsilon_{0}K^{-1})\leq\gamma{\Gamma_{{\mathfrak{p} }_{1}}}(1+\varepsilon_{0}K^{-1})\,,\] (5.14)
\[C_{\vec{w}_{2},{\mathfrak{p}}_{2}}(\hat{G})\leq C_{\vec{v},{ \mathfrak{p}}_{2}}(G)+\varepsilon_{0}K^{\kappa_{3}}C_{\vec{v},{\mathfrak{p}}_{ 1}}(G)\leq\gamma(\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}} \Gamma_{{\mathfrak{p}}_{1}})\,,\]
\[C_{\vec{w}_{2},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{N}}^{\perp}\hat {G})\leq C_{\vec{v},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{N}}^{\perp}G)(1+ \varepsilon_{0}K^{-1})\leq\gamma\Theta_{{\mathfrak{p}}_{1}}(1+\varepsilon_{0}K ^{-1})\,,\] (5.15)
\[C_{\vec{w}_{2},{\mathfrak{p}}_{2}}(\Pi_{\mathcal{N}}^{\perp}\hat {G})\leq C_{\vec{v},{\mathfrak{p}}_{2}}(\Pi_{\mathcal{N}}^{\perp}G)+ \varepsilon_{0}K^{\kappa_{3}}C_{\vec{v},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{N}}^ {\perp}G)\leq\gamma(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}} \Theta_{{\mathfrak{p}}_{1}})\,,\]
and
\[C_{\vec{w}_{2},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{X}}\hat{G})\leq C _{\vec{v},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{X}}G)(1+\varepsilon_{0}K^{-1})\leq 2 \gamma\delta\,,\] (5.16)
\[C_{\vec{w}_{2},{\mathfrak{p}}_{2}}(\Pi_{\mathcal{X}}\hat{G})\leq C _{\vec{v},{\mathfrak{p}}_{2}}(\Pi_{\mathcal{X}}G)+\varepsilon_{0}K^{\kappa_{3} }C_{\vec{v},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{X}}G)\leq\gamma(\Theta_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}})\,,\]
\[|\Pi_{{\mathcal{X}}}\hat{G}|_{\vec{w}_{2},{\mathfrak{p}}_{2}-1} \leq\gamma(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}).\]
Our aim is to define for \(\xi\in{\mathcal{O}}_{+}\) a vector field \({g}_{+}\) as the “approximate” solution of the equation
\[\Pi_{K}\Pi_{{\mathcal{X}}}[{g}_{+},\Pi_{\mathcal{X}}^{\perp}\hat{F}]=\Pi_{K} \Pi_{{\mathcal{X}}}\hat{F}.\] (5.17)
By Definition, if \(\xi\in{\mathcal{O}}_{+}\) then we can find \({g}_{+}\) satisfying properties (a),(b),(c),(d) of Definition 2.23.
By (2.46) one gets
\[|{g}_{+}|_{\vec{w}_{2},p}\leq \mathtt{C}\gamma^{-1}K^{\mu}(|\Pi_{K}\Pi_{{\mathcal{X}}}\hat{G}|_ {\vec{w}_{2},p}+K^{\alpha(p-{\mathfrak{p}}_{1})}|\Pi_{K}\Pi_{{\mathcal{X}}} \hat{G}|_{\vec{w}_{2},{\mathfrak{p}}_{1}}\gamma^{-1}C_{{\vec{w}_{2},p}}(\hat{G }))\] (5.18)
and hence, using (5.14),(5.16) and (2.32), we have (5.8). Moreover, by condition \((a)\) of Definition 2.23, one has \(g_{+}\in{\mathcal{B}}_{\mathcal{E}}\) for \(\xi\in{\mathcal{O}}_{+}\) and \(|g_{+}|_{\vec{v}_{2}^{{\text{\tiny 0}}},{\mathfrak{p}}_{1}}\leq\mathtt{C}|g_{+}|_{\vec{w}_{2},{\mathfrak{p}}_{1}}\).
Now, if \(\epsilon\) in (5.5) is small enough, by Definition 2.18 item 5, \(g_{+}\) generates a change of variables \({\Phi}_{+}=\mathds{1}+{f}_{+}\) and \(|{f}_{+}|_{\vec{v}_{3}^{\text{\tiny 0}},p}\leq 2|\tilde{g}_{+}|_{\vec{v}_{2}^{\text{\tiny 0}},p}\). Finally, for possibly smaller \(\epsilon\) one has (5.7), by using Definition 2.18 item 4. Note that the smallness conditions on \(\epsilon\) come only from these two conditions.
First we note that, since \(N_{0}\) is diagonal (recall Definition 2.20), one has
\[G_{+} :=(\Phi_{+})_{*}N_{0}+(\Phi_{+})_{*}\hat{G}-N_{0}=\int_{0}^{1}dt \,(\Phi_{+})_{*}^{t}[g_{+},N_{0}]+(\Phi_{+})_{*}\hat{G}\] (5.19)
\[=\int_{0}^{1}dt(\Phi_{+})_{*}^{t}\Pi_{K}\Pi_{{\mathcal{X}}}[g_{+} ,N_{0}+\Pi_{\mathcal{X}}^{\perp}\hat{G}]-\int_{0}^{1}dt(\Phi_{+})_{*}^{t}\Pi_{ K}\Pi_{{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^{\perp}\hat{G}]+(\Phi_{+})_{*} \hat{G}\]
where \(u\) measures the error with which \(g_{+}\) solves (5.17), namely
\[u:=\Pi_{K}\Pi_{{\mathcal{X}}}[g_{+},N_{0}+\Pi_{\mathcal{X}}^{\perp}\hat{G}]- \Pi_{K}\Pi_{\mathcal{X}}\hat{G}\] (5.20)
and \(u\) in (5.20) satisfies (2.47), so by applying (5.14) and (5.16), we get
\[|u|_{\vec{w}_{2},{\mathfrak{p}}_{1}} \leq\mathtt{C}\gamma\varepsilon_{0}\Gamma_{{\mathfrak{p}}_{1}}K^{ -\eta+\mu}\delta\] (5.21)
\[|u|_{\vec{w}_{2},{\mathfrak{p}}_{2}} \leq\mathtt{C}\gamma K^{\mu+1}\left((\Theta_{{\mathfrak{p}}_{2}}+ \varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}})\Gamma_{{\mathfrak{p} }_{1}}+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\delta(\Gamma_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}}) \right);\]
Regarding the first summand of (5.19), using Lemma B.3, we have
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}\left(\int_{0}^{1}dt\Phi_{*}^{ t}(\Pi_{K}\Pi_{\mathcal{X}}\hat{G}+u)\right) \stackrel{{\eqref{dorata},\eqref{kam32}}}{{\leq}} \gamma{\mathtt{C}}\delta(1+\gamma\varepsilon_{0}\Gamma_{{\mathfrak{p}}_{1}}K^{ -\eta+\mu})\] (5.22)
Moreover
\[\!\!\!\!C_{\vec{w}_{8},{\mathfrak{p}}_{2}}\left(\int_{0}^{1}dt \Phi_{*}^{t}(\Pi_{K}\Pi_{\mathcal{X}}\hat{G}+u)\right) \leq(1+\rho)(C_{\vec{w}_{6},{\mathfrak{p}}_{2}}(\Pi_{\mathcal{X}} \hat{G}+u)+C_{\vec{w}_{6},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{X}}\hat{G}+u)|f_{+ }|_{\vec{w}_{7},{\mathfrak{p}}_{2}+\nu+1})\] (5.23)
\[\stackrel{{(\ref{cribbio42}),\eqref{eq106},\eqref{kam 32}}}{{\leq}} {\mathtt{C}}\gamma K^{\mu+1}\left((\Theta_{{\mathfrak{p}}_{2}}+ \varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}})\Gamma_{{\mathfrak{p} }_{1}}+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\delta(\Gamma_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}}) \right)\,.\]
The second term of (5.19) can be estimated as follows. First we note that, since \(\hat{G}\) is \(C^{\mathtt{n}+2}\)-tame up to order \(q+1\), the vector field \(\Pi_{K}\Pi_{{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^{\perp}\hat{G}]\) is \(C^{\mathtt{n}+1}\)-tame up to order \(q\). This implies, due to the presence of the projection onto \({\mathcal{X}}\), that it is indeed \(C^{k}\)-tame for all \(k\). The tameness constant is given by
\[C_{\vec{w}_{8},p}(\Pi_{K}\Pi_{{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^{\perp}\hat{G}])\stackrel{{\eqref{P3}}}{{\leq}}KC_{\vec{w}_{8},p-1}(\Pi_{{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^{\perp}\hat{G}])\] (5.24)
\[\leq\mathtt{C}K^{\nu+1}(|g_{+}|_{\vec{w}_{8},p}C_{\vec{w}_{8},{ \mathfrak{p}}_{0}}(\Pi_{\mathcal{X}}^{\perp}\hat{G})+|g_{+}|_{\vec{w}_{8},{ \mathfrak{p}}_{0}}C_{\vec{w}_{8},p}(\Pi_{\mathcal{X}}^{\perp}\hat{G}))\]
\[\stackrel{{{rmk}\ref{monomi}}}{{\leq}}\mathtt{C}K^{ \nu+1}(|g_{+}|_{\vec{w}_{8},p}C_{\vec{w}_{8},{\mathfrak{p}}_{0}}(\hat{G})+|g_{ +}|_{\vec{w}_{8},{\mathfrak{p}}_{0}}C_{\vec{w}_{8},p}(\hat{G})),\]
hence, by (5.14) and (5.8), we have
\[C_{\vec{v}_{+},{\mathfrak{p}}_{1}}(\Pi_{K}\Pi_{{\mathcal{X}}}[g_ {+},\Pi_{\mathcal{X}}^{\perp}\hat{G}]) \leq\gamma\mathtt{C}\Gamma^{2}_{{\mathfrak{p}}_{1}}K^{\mu+\nu+1} \delta\,,\] (5.25)
\[C_{\vec{v}_{+},{\mathfrak{p}}_{2}}(\Pi_{K}\Pi_{{\mathcal{X}}}[g_ {+},\Pi_{\mathcal{X}}^{\perp}\hat{G}]) {\leq}{\mathtt{C}}\gamma K^{\mu+\nu+2}\Gamma_{{\mathfrak{p}}_{1}} \Big{(}(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\delta( \Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p} }_{1}})\Big{)}\,.\] (5.26)
Therefore using Lemma B.3 we have
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\int_{0}^{1}dt(\Phi_{+})_{*}^{t}\Pi_{K}\Pi_ {{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^{\perp}\hat{G}])\leq\gamma\mathtt{C} \Gamma_{{\mathfrak{p}}_{1}}^{2}K^{\mu+\nu+1}\delta\] (5.27)
and
\[C_{\vec{w}_{8},{\mathfrak{p}}_{2}}(\int_{0}^{1}dt(\Phi_{+})_{*}^ {t}\Pi_{K}\Pi_{{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^{\perp}\hat{G}])\leq \gamma{\mathtt{C}} K^{\mu+\nu+2}\Gamma_{{\mathfrak{p}}_{1}}\Big{(}\Theta_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}\] (5.28)
\[+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\delta(\Gamma_{ {\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}}) \Big{)}\]
by (5.5).
Finally, again by Lemma B.3, we estimate the third summand in (5.19) as
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\Phi_{*}(\hat{G}))\] (5.29)
and
\[C_{\vec{w}_{8},{\mathfrak{p}}_{2}}((\Phi_{+})_{*}\hat{G}) \leq(1+{\mathtt{c}}^{-1}|f_{+}|_{\vec{w}_{7},{\mathfrak{p}}_{1}}) \Big{(}C_{\vec{w}_{6},{\mathfrak{p}}_{2}}(\hat{G})+C_{\vec{w}_{6},{\mathfrak{p }}_{1}}(\hat{G})|f_{+}|_{\vec{w}_{7},{\mathfrak{p}}_{2}+\nu+1}\Big{)}\] (5.30)
\[\stackrel{{\eqref{formosa},\eqref{stimatotale2}}}{{ \leq}}\gamma{\mathtt{C}}\Big{(}\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{ \kappa_{3}}{\Gamma_{{\mathfrak{p}}_{1}}}\]
\[\qquad\qquad+{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu+\nu+2}(\Theta_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}+K^ {\alpha(p-{\mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{ 0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}}))\Big{)}\,.\]
The bounds (5.11) follow by collecting together (5.22), (5.23), (5.27),(5.29) and (5.30).
Let us study \(\Theta_{+,p}\). First of all we see that
\[\Pi_{{\mathcal{N}}}^{\perp}G_{+}=\Pi_{\mathcal{N}}^{\perp}F_{+}=\Pi_{\mathcal{ N}}^{\perp}(\Phi_{+})_{*}\hat{F}=\Pi_{{\mathcal{N}}}^{\perp}\Big{(}(\Phi_{+})_ {*}N_{0}+(\Phi_{+})_{*}(\Pi_{{\mathcal{N}}}\hat{G})+(\Phi_{+})_{*}(\Pi_{{ \mathcal{N}}}^{\perp}\hat{G})\Big{)}\,.\] (5.31)
In order to estimate the first term \(\Pi_{{\mathcal{N}}}^{\perp}(\Phi_{+})_{*}N_{0}\), we first note that
\[\Pi_{{\mathcal{N}}}^{\perp}(\Phi_{+})_{*}N_{0} =\Pi_{{\mathcal{N}}}^{\perp}((\Phi_{+})_{*}N_{0}-N_{0})=\Pi_{ \mathcal{N}}^{\perp}\int_{0}^{1}dt\,(\Phi_{+}^{t})_{*}[g_{+},N_{0}]=\Pi_{ \mathcal{N}}^{\perp}\int_{0}^{1}dt\,(\Phi_{+}^{t})_{*}(\Pi_{K}\Pi_{{\mathcal{X }}}[g_{+},N_{0}])\] (5.32)
\[=\Pi_{\mathcal{N}}^{\perp}\Big{(}\int_{0}^{1}dt(\Phi_{+})_{*}^{t} \Pi_{K}\Pi_{{\mathcal{X}}}[g_{+},N_{0}+\Pi_{\mathcal{X}}^{\perp}\hat{G}]-\int_ {0}^{1}dt(\Phi_{+})_{*}^{t}\Pi_{K}\Pi_{{\mathcal{X}}}[g_{+},\Pi_{\mathcal{X}}^ {\perp}\hat{G}]\Big{)}\,,\]
substituting (5.20) and using Remark B.2 in order to remove the projection we have
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}^{\perp}( \Phi_{+})_{*}N_{0})\stackrel{{\eqref{gallina5},\eqref{gallina6}}}{ {\leq}}\mathtt{C}\gamma K^{\mu}{\Gamma_{{\mathfrak{p}}_{1}}}\delta(K^{\nu+1} \Gamma_{{\mathfrak{p}}_{1}}+\varepsilon_{0}K^{-\eta})\,,\] (5.33)
\[C_{\vec{w}_{8},{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{\perp}( \Phi_{+})_{*}N_{0})\stackrel{{\eqref{kam18},\eqref{gallina6}}}{{ \leq}}{\mathtt{C}}\gamma K^{\mu+\nu+2}\Gamma_{{\mathfrak{p}}_{1}}\Big{(}\Theta _{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}} +K^{\alpha(p-{\mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p}}_{2}}+ \varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}})\Big{)}\,.\]
In order to bound the second summand we use Lemma B.6 with \({\mathcal{U}}={\mathcal{N}}\) and obtain
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}^{\perp}( \Phi_{+})_{*}(\Pi_{{\mathcal{N}}}\hat{G})) \leq{\mathtt{C}}|f_{+}|_{\vec{w}_{6},{\mathfrak{p}}_{1}+\nu+1}C_{ \vec{w}_{6},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}\hat{G})\stackrel{{ rmk\ref{monomi}}}{{\leq}}{\mathtt{C}}|f_{+}|_{\vec{w}_{6},{ \mathfrak{p}}_{1}+\nu+1}C_{\vec{w}_{6},{\mathfrak{p}}_{1}}(\hat{G})\] (5.34)
\[\stackrel{{\eqref{formosa}}}{{\leq}}\gamma{\mathtt{C} }K^{\mu+\nu+1}\delta\Gamma^{2}_{{\mathfrak{p}}_{1}},\]
and
\[C_{\vec{w}_{8},{\mathfrak{p}}_{2}}(\Pi_{{\mathcal{N}}}^{\perp}( \Phi_{+})_{*}(\Pi_{{\mathcal{N}}}\hat{G})) \leq{\mathtt{C}}\Big{(}C_{\vec{w}_{2},{\mathfrak{p}}_{2}}(\hat{G} )|f_{+}|_{\vec{w}_{6},{\mathfrak{p}}_{1}}+C_{\vec{w}_{2},{\mathfrak{p}}_{1}}( \hat{G})|f_{+}|_{\vec{w}_{6},{\mathfrak{p}}_{2}}\Big{)}\] (5.35)
\[\stackrel{{(\ref{formosa})}}{{\leq}}\gamma{\mathtt{C} }{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu+1}(\Theta_{{\mathfrak{p}}_{2}}+ \varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}+K^{\alpha(p-{ \mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa _{3}}\Gamma_{{\mathfrak{p}}_{1}}))\]
Regarding the third summand, using Remark B.2, Lemma B.3 and (5.15) we obtain
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{N}}^{\perp}( \Phi_{+})_{*}(\Pi_{{\mathcal{N}}}^{\perp}\hat{G})) \stackrel{{\eqref{stimatotale2}}}{{\leq}}\gamma(1\!+ \!\mathtt{C}{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu}\delta)(1+\varepsilon_{0}K^{-1 })\Theta_{{\mathfrak{p}}_{1}},\] (5.36)
\[\!\!\!C_{\vec{w}_{8},{\mathfrak{p}}_{2}}(\Pi_{\mathcal{N}}^{\perp }(\Phi_{+})_{*}(\Pi_{{\mathcal{N}}}^{\perp}\hat{G})) \!\!\!\!\stackrel{{\eqref{stimatotale2}}}{{\leq}}\!\! \!\mathtt{C}\gamma(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}} \Theta_{{\mathfrak{p}}_{1}})+\]
\[+{\mathtt{C}}\gamma K^{\mu+\nu+2}{\Theta_{{\mathfrak{p}}_{1}}} \big{(}\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}+K^{\alpha(p-{\mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p }}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Gamma_{{\mathfrak{p}}_{1}})\big{)}.\]
By collecting together (5.33), (5.34), (5.35) and (5.36) we obtain the first two lines of (5.12). In order to prove the last of (5.12) we use item (d) of Definition 2.23. Indeed we substitute (5.8), (5.14) and (5.15) in (2.48) and we obtain the desired bound.
In order to prove the bound for \(\delta_{+}\) we first write
\[\Pi_{{\mathcal{X}}}{G}_{+}=\Pi_{\mathcal{X}}(\hat{F}+[\hat{F},g_{+}]+Q),\qquad Q :=(\Phi_{+})_{*}\hat{F}-(\hat{F}+[\hat{F},g_{+}])\] (5.37)
and hence
\[\Pi_{\mathcal{X}}{G}_{+} =\Pi_{\mathcal{X}}\hat{F}+\Pi_{\mathcal{X}}[\Pi_{\mathcal{X}}^{ \perp}\hat{F},g_{+}]+\Pi_{\mathcal{X}}[\Pi_{\mathcal{X}}\hat{F},g_{+}]+\Pi_{ \mathcal{X}}Q\] (5.38)
\[=u+\Pi_{K}^{\perp}\Big{(}\Pi_{\mathcal{X}}\hat{G}+\Pi_{\mathcal{X }}[\Pi_{\mathcal{X}}^{\perp}\hat{G},g_{+}]+\Pi_{\mathcal{X}}[\Pi_{\mathcal{X}} \hat{G},g_{+}]\Big{)}+\Pi_{K}\Pi_{\mathcal{X}}[\Pi_{\mathcal{X}}\hat{G},g_{+}] +\Pi_{\mathcal{X}}Q\]
\[=u+\Pi_{K}^{\perp}\Big{(}\Pi_{\mathcal{X}}\hat{G}+\Pi_{\mathcal{X }}[\hat{G},g_{+}]\Big{)}+\Pi_{K}\Pi_{\mathcal{X}}[\Pi_{\mathcal{X}}\hat{G},g_{ +}]+\Pi_{\mathcal{X}}Q\]
where \(u\) is defined in (5.20) and bounded in (5.21).
The second summand in (5.38) can be bounded as
\[|\Pi_{K}^{\perp}\Big{(}\Pi_{\mathcal{X}}\hat{G}+\Pi_{\mathcal{X}}[\hat{G},g_{+}]\Big{)}|_{\vec{w}_{8},{\mathfrak{p}}_{1}}\stackrel{{\eqref{P2}}}{{\leq}}K^{-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})+2}|\Pi_{\mathcal{X}}\hat{G}+\Pi_{\mathcal{X}}[\hat{G},g_{+}]|_{\vec{w}_{6},{\mathfrak{p}}_{2}-2}\] (5.39)
\[\quad\quad+{\mathtt{C}}\gamma K^{-({\mathfrak{p}}_{2}-{\mathfrak{ p}}_{1})+2}(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}).\]
We can choose the tameness constant of the third summand in (5.38) as follows
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\Pi_{K}\Pi_{\mathcal{X}}[\Pi_ {\mathcal{X}}\hat{F},g_{+}]) \stackrel{{\eqref{P3}}}{{\leq}}KC_{\vec{w}_{8},{ \mathfrak{p}}_{1}-1}(\Pi_{K}\Pi_{\mathcal{X}}[\Pi_{\mathcal{X}}\hat{F},g_{+}]) \stackrel{{\eqref{commu2}}}{{\leq}}{\mathtt{C}}K^{\nu+1}C_{\vec{w} _{8},{\mathfrak{p}}_{1}}(\Pi_{\mathcal{X}}\hat{F})|g_{+}|_{\vec{w}_{8},{ \mathfrak{p}}_{1}}\] (5.40)
\[\stackrel{{\eqref{kam11},\eqref{stimatotale2}}}{{\leq }}\gamma{\mathtt{C}}K^{\mu+\nu+1}{\Gamma_{{\mathfrak{p}}_{1}}}\delta^{2}\,.\]
Finally we deal with the last summand in (5.38) as follows. Using Remark B.5 and Definition 2.20 one can reason as in (5.19) and write
\[\Pi_{{\mathcal{X}}}Q =\Pi_{{\mathcal{X}}}\left(\int_{0}^{1}dt\int_{0}^{t}ds(\Phi_{+})_ {*}^{s}\left(\big{[}g_{+},[g_{+},\hat{G}]\big{]}+\big{[}g_{+},\Pi_{K}\Pi_{ \mathcal{X}}(\hat{F}+u-[g_{+},\Pi_{{\mathcal{X}}}^{\perp}\hat{G}])\big{]} \right)\right)\,.\] (5.41)
Regarding the first summand in (5.41) one uses (B.12b) and obtains
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}\left(\Pi_{{\mathcal{X}}}\int_{0}^{1}dt\int_{0}^{t}ds(\Phi_{+})_{*}^{s}\left(\big{[}g_{+},[g_{+},\hat{G}]\big{]}\right)\right)\leq C_{\vec{w}_{6},{\mathfrak{p}}_{1}+2}(\hat{G})|g_{+}|_{\vec{w}_{6},{\mathfrak{p}}_{1}}|g_{+}|_{\vec{w}_{6},{\mathfrak{p}}_{1}+\nu+2}\] (5.42)
\[\stackrel{{(\ref{stimatotale2})}}{{\leq}}\mathtt{C}K^ {\nu+2+2\mu}\Gamma_{{\mathfrak{p}}_{1}}^{2}\delta^{2}\left(C_{\vec{w}_{6},{ \mathfrak{p}}_{1}+2}(\Pi_{K}\hat{G})+C_{\vec{w}_{6},{\mathfrak{p}}_{1}+2}(\Pi_ {K}^{\perp}\hat{G})\right)\]
\[\stackrel{{(\ref{formosa})}}{{\leq}}{\mathtt{C}} \gamma\delta^{2}\Gamma_{{\mathfrak{p}}_{1}}^{2}K^{\nu+2\mu+4}\Big{(}\Gamma_{{ \mathfrak{p}}_{1}}+K^{-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}(\Gamma_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}{\Gamma_{{\mathfrak{p}}_{1}}}) \Big{)}\,.\]
Again we are using the fact that \(G\) is \(C^{\mathtt{n}+2}\)-tame to infer that the double commutator is \(C^{\mathtt{n}}\)-tame, and then the projection onto \({\mathcal{X}}\) in order to recover the \(C^{k}\)-tameness for all \(k\). Regarding the second summand in (5.41), since each term is a polynomial in \(E^{(K)}\), using Lemmata B.3, B.1-(iii) and the bounds (5.22), (5.25) we obtain
\[C_{\vec{w}_{8},{\mathfrak{p}}_{1}}(\int_{0}^{1}\int_{0}^{t} (\Phi_{+})_{*}^{s}[g_{+},\Pi_{K}\Pi_{\mathcal{X}}(\hat{F}+u-[g_{+ },\Pi_{\mathcal{X}}^{\perp}\hat{G}])])\leq\] (5.43)
\[\leq\mathtt{C}|g_{+}|_{\vec{w}_{6},{\mathfrak{p}}_{1}+1}C_{\vec{w }_{6},{\mathfrak{p}}_{1}+\nu+1}(\Pi_{K}\Pi_{\mathcal{X}}(\hat{F}+u-[g_{+},\Pi_ {\mathcal{X}}^{\perp}\hat{G}]))\]
\[\leq{\mathtt{C}}\gamma K^{2\mu+2\nu+4}\Gamma_{{\mathfrak{p}}_{1}} ^{3}\delta^{2}\,.\]
Therefore, collecting together (5.43) and (5.42) we obtain
\[C_{\vec{v}_{+},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{X}}}Q)\leq{\mathtt{C}}\gamma \delta^{2}\Gamma^{2}_{{\mathfrak{p}}_{1}}K^{2\mu+2\nu+4}\Big{(}\Gamma_{{ \mathfrak{p}}_{1}}+K^{-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}(\Gamma_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}{\Gamma_{{\mathfrak{p}}_{1}}}) \Big{)}.\] (5.44)
In conclusion one has
\[C_{\vec{v}_{+},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{X}}}F_{+}) \leq{\mathtt{C}}\gamma{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu+\nu+1} \Big{(}\delta^{2}\Gamma_{{\mathfrak{p}}_{1}}K^{\mu+\nu+3}\big{(}\Gamma_{{ \mathfrak{p}}_{1}}+K^{-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}(\Gamma_{{ \mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}{\Gamma_{{\mathfrak{p}}_{1}}}) \big{)}+\] (5.45)
\[K^{-({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}(\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}+K^{\alpha({\mathfrak{p}}_{2}-{\mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}{\Gamma_{{\mathfrak{p}}_{1}}}))\Big{)}+{\Gamma_{{\mathfrak{p}}_{1}}}\delta\varepsilon_{0}K^{-\eta+\mu}.\]
Recalling that the norm \(|\cdot|_{\vec{v}_{+},{\mathfrak{p}}_{1}}\) is the sharp tameness constant (see (2.31)), (5.45) implies the bound (5.13), since \(\delta{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu+\nu+3}\leq 1\) by (5.5). ∎
### Proof of Theorem 2.25: iterative scheme.
We now prove Theorem 2.25 by induction on \(n\). The induction basis is trivial with \(g_{0}=0\). Assuming (2.56) up to \(n\) we prove the inductive step using the “KAM step” of Proposition 5.1. First of all we ensure that
\[\rho_{n}^{-1}K_{n}^{\mu+\nu+3}\Gamma_{n,{\mathfrak{p}}_{1}}\delta_{n}\leq \mathtt{c},\] (5.46)
which, by the inductive hypothesis and (2.52) reads
\[2^{n+9}K_{0}^{(\mu+\nu+3-\kappa_{2})\chi^{n}}\mathtt{G}_{0}\varepsilon_{0}K_{0 }^{\kappa_{2}}\leq\mathtt{c};\] (5.47)
this is true because, by (2.42b) and the fact that \(K_{0}\) is large enough depending on \(\chi,d,{\mathfrak{p}}_{0}\), the left hand side of (5.47) is decreasing in \(n\), so that (5.46) follows from
\[K_{0}^{\mu+\nu+4}\mathtt{G}_{0}\varepsilon_{0}<1\]
which is indeed implied by (2.44a) because \({\mathtt{G}}_{0}\geq{\mathtt{R}}_{0}\).
Hence we can apply the “KAM step” to \(F_{n}:=(\Phi_{n}\circ{\mathcal{L}}_{n})_{*}F_{n-1}\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}}_{n},{\mathfrak{p}}_{2}}\), which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). We fix \((K_{n},\gamma_{n},a_{n},s_{n},r_{n},\rho_{n},{\mathcal{O}}_{n})\rightsquigarrow(K,\gamma,a,s,r,\rho,{\mathcal{O}})\), \(\Gamma_{n,p}\rightsquigarrow\Gamma_{p}\), \(\Theta_{n,p}\rightsquigarrow\Theta_{p}\), \(\delta_{n}\rightsquigarrow\delta\), \((\gamma_{n+1},a_{n+1},s_{n+1},r_{n+1},\rho_{n+1},{\mathcal{O}}_{n+1})\rightsquigarrow(\gamma_{+},a_{+},s_{+},r_{+},\rho_{+},{\mathcal{O}}_{+})\). The KAM step produces a bounded regular vector field \(g_{n+1}\) and a left invertible change of variables \(\Phi_{n+1}=\mathds{1}+f_{n+1}\) such that \(F_{n+1}:=(\Phi_{n+1}\circ{\mathcal{L}}_{n+1})_{*}F_{n}\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}}_{n+1},{\mathfrak{p}}_{2}}\) is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). We now verify that the bounds (2.56) hold with \(\Gamma_{n+1,p}\rightsquigarrow\Gamma_{+,p}\), \(\Theta_{n+1,p}\rightsquigarrow\Theta_{+,p}\), \(\delta_{n+1}\rightsquigarrow\delta_{+}\).
Let us prove **(i)**. By substituting into (5.8) we immediately obtain the bounds for \(g_{n+1}\) of (2.56).
Now we recall that, by definition
\[\frac{\gamma_{n}}{\gamma_{n+1}}=1+\frac{1}{2^{n+3}-1}.\]
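In particular this gives \(\gamma_{n+1}=\gamma_{n}\big(1-2^{-(n+3)}\big)\), so that (a simple observation which we record here only for orientation) the sequence \(\gamma_{n}\) is decreasing and stays uniformly bounded from below:
\[\gamma_{n}\geq\gamma_{0}\prod_{k\geq 0}\big(1-2^{-(k+3)}\big)\geq\gamma_{0}\Big{(}1-\sum_{k\geq 0}2^{-(k+3)}\Big{)}=\frac{3}{4}\gamma_{0}\,.\]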
We use (5.11) together with the inductive hypotheses to obtain
which follow by requiring
\[\max(2^{n}K_{n}^{-1}\varepsilon_{0},2^{n}K_{n}^{\mu+\nu+1-\kappa_{2}}K_{0}^{ \kappa_{2}}{\mathtt{G}}_{0}\varepsilon_{0},2^{n}K_{n}^{-\eta-\kappa_{2}+\mu}K_ {0}^{\kappa_{2}}\varepsilon_{0})\leq\mathtt{c}\,,\]
and as before this follows by (2.42c) and (2.44a).
Regarding \(\Theta_{n+1,{\mathfrak{p}}_{1}}\), using (5.12) we get
which again follows from (2.42c) and (2.44a).
For \(\delta_{n+1}\rightsquigarrow\delta_{+}\), we apply (5.13) and get
\[\delta_{n+1}\leq {\mathtt{C}}{\mathtt{G}}_{0}\Big{(}\varepsilon_{0}^{2}K_{0}^{ \kappa_{2}}({\mathtt{G}}_{0}^{2}K_{0}^{\kappa_{2}}K_{n}^{2\mu+2\nu+4-2\kappa_{ 2}}+K_{n}^{\mu-\eta-\kappa_{2}})+\]
\[(K_{n}^{\kappa_{1}}+\varepsilon_{0}K_{n}^{\kappa_{3}}) ({\mathtt{R}}_{0}K_{n}^{\mu+\nu+1-\Delta{\mathfrak{p}}}+ \varepsilon_{0}K_{0}^{\kappa_{2}}{\mathtt{G}}_{0}K_{n}^{\mu+\nu+1-(1-\alpha) \Delta{\mathfrak{p}}-\kappa_{2}})\Big{)}+(K_{n}^{\kappa_{1}}+\varepsilon_{0}K_ {n}^{\kappa_{3}}){\mathtt{R}}_{0}K_{n}^{-\Delta{\mathfrak{p}}+1}\leq \varepsilon_{0}K_{0}^{\kappa_{2}}K_{n}^{-\chi\kappa_{2}}\]
which follows by (2.42b), (2.42c), (2.42d), (2.44a) and (2.44b).
Regarding \(\Gamma_{n+1,{\mathfrak{p}}_{2}}\), by (5.11) we get
\[\Gamma_{n+1,{\mathfrak{p}}_{2}}\leq{\mathtt{C}}{\mathtt{G}}_{0}(K_{n}^{\kappa_ {1}}+\varepsilon_{0}K_{n}^{\kappa_{3}})\Big{(}1+K_{n}^{\mu+\nu+2}({\mathtt{R}} _{0}+K_{n}^{\alpha\Delta{\mathfrak{p}}-\kappa_{2}}\varepsilon_{0}K_{0}^{\kappa _{2}}{\mathtt{G}}_{0})\Big{)}\leq{\mathtt{G}}_{0}K_{n}^{\chi\kappa_{1}}\]
which follows by (2.42a), (2.42e) and (2.44c).
Finally, by (5.12)
\[\Theta_{n+1,{\mathfrak{p}}_{2}}\leq\mathtt{C}{\mathtt{R}}_{0}(K_{n}^{\kappa_{1 }}+\varepsilon_{0}K_{n}^{\kappa_{3}})+{\mathtt{C}}K_{n}^{\mu+\nu+2}(K_{n}^{ \kappa_{1}}+\varepsilon_{0}K_{n}^{\kappa_{3}}){\mathtt{G}}_{0}\big{(}{\mathtt{ R}}_{0}+K_{n}^{\alpha\Delta{\mathfrak{p}}-\kappa_{2}}\varepsilon_{0}K_{0}^{ \kappa_{2}}{\mathtt{G}}_{0}\big{)}\leq{\mathtt{R}}_{0}K_{n}^{\chi\kappa_{1}}\]
which follows again by (2.42a), (2.42e) and (2.44c).
We now prove **(ii)**. Setting \(\vec{w}_{4,n}=(\gamma_{n},{\mathcal{O}}_{n+1},s_{n}-4\rho_{n}s_{0},a_{n}-4\rho _{n}a_{0},r_{n}-4\rho_{n}r_{0})\), we prove inductively
\[\|({\mathcal{H}}_{n+1}-\mathds{1})(u)\|_{\vec{w}_{4,n},{\mathfrak{p}}_{1}}\leq \varepsilon_{0}\sum_{k=0}^{n+1}\frac{1}{2^{k}}.\] (5.48)
Note that the choice of \(\vec{w}_{4,n}\) in (5.48) is consistent with the fact that \(F_{n}:=({\mathcal{H}}_{n})_{*}F_{0}\) is defined on the domain
\[\mathds{T}_{s_{n}}^{d}\times D_{a_{n},p}(r_{n})\,.\]
For \(n=-1\) this is obvious. Then by induction we have
\[\|({{\mathcal{H}}}_{n+1}-\mathds{1})(u)\|_{\vec{w}_{4,n},{ \mathfrak{p}}_{1}} \leq\|({\mathcal{H}}_{n}-\mathds{1})(u)\|_{\vec{w}_{4,n},{ \mathfrak{p}}_{1}}+\|f_{n+1}({\mathcal{L}}_{n+1}\circ{\mathcal{H}}_{n}(u))\|_{ \vec{w}_{4,n},{\mathfrak{p}}_{1}}+\|({\mathcal{L}}_{n+1}-\mathds{1}){\mathcal{ H}}_{n}(u)\|_{\vec{w}_{4,n},{\mathfrak{p}}_{1}}\] (5.49)
\[\stackrel{{\eqref{accaenne},(\ref{satana})}}{{\leq}} \varepsilon_{0}\sum_{k=0}^{n}\frac{1}{2^{k}}+2|g_{n+1}|_{\vec{w}_{4,n},{ \mathfrak{p}}_{1}}\|{\mathcal{L}}_{n+1}\circ{\mathcal{H}}_{n}(u)\|_{\vec{w}_{4 ,n},{\mathfrak{p}}_{1}}+{\mathtt{C}}\varepsilon_{0}K_{n}^{-1}\|{\mathcal{H}}_{ n}(u)\|_{\vec{w}_{4,n},{\mathfrak{p}}_{1}}\]
\[\stackrel{{(\ref{stimatotale2})}}{{\leq}}\varepsilon_ {0}\sum_{k=0}^{n}\frac{1}{2^{k}}+K_{0}^{\kappa_{2}}\varepsilon_{0}{\mathtt{G}} _{0}K_{n}^{-\kappa_{2}+\mu+1}+2\mathtt{C}\varepsilon_{0}K_{n}^{-1}\]
\[\stackrel{{\eqref{numeretti}}}{{\leq}}\varepsilon_{0} \Big{(}\sum_{k=0}^{n}\frac{1}{2^{k}}+\frac{1}{2^{n+1}}\Big{)}\]
for \(K_{0}\) large enough. Moreover as before
\[\|({\mathcal{H}}_{n+1}-{\mathcal{H}}_{n})u\|_{\vec{w}_{4,n},{\mathfrak{p}}_{1} }\leq\|({\mathcal{L}}_{n+1}-\mathds{1}){\mathcal{H}}_{n}(u)\|_{\vec{w}_{4,n},{ \mathfrak{p}}_{1}}+\|f_{n+1}({\mathcal{L}}_{n+1}\circ{\mathcal{H}}_{n})(u)\|_{ \vec{w}_{4,n},{\mathfrak{p}}_{1}}\leq\varepsilon_{0}2^{-(n+1)}\] (5.50)
which implies that the sequence \({\mathcal{H}}_{n}\) is Cauchy and therefore there exists a limit map \({{\mathcal{H}}}_{\infty}=\lim_{n\to\infty}{\mathcal{H}}_{n}\). Moreover
\[\!\!\!\!\!\!\|({{\mathcal{H}}}_{\infty}-\mathds{1})(u)\|_{\gamma_ {\infty},{\mathcal{O}}_{\infty},\frac{s_{0}}{2},\frac{a_{0}}{2},{\mathfrak{p}} _{1}} \leq\|({\mathcal{H}}_{1}-\mathds{1})(u)\|_{\gamma_{\infty},{ \mathcal{O}}_{\infty},\frac{s_{0}}{2},\frac{a_{0}}{2},{\mathfrak{p}}_{1}}+\sum _{n\geq 2}\|({\mathcal{H}}_{n}-{\mathcal{H}}_{n-1})(u)\|_{\gamma_{\infty},{ \mathcal{O}}_{\infty},\frac{s_{0}}{2},\frac{a_{0}}{2},{\mathfrak{p}}_{1}}\] (5.51)
\[\leq 2\varepsilon_{0}\,,\]
so that for \(\varepsilon_{0}\) small enough also (2.57) holds. We are left with the proof of (2.58). By definition the limit vector field is
\[F_{\infty}:=\lim_{n\to\infty}F_{n},\quad F_{n}:=(\Phi_{n}\circ{\mathcal{L}}_{n })_{*}F_{n-1}.\] (5.52)
On the other hand we have
\[{\mathcal{F}}_{\infty}:=({\mathcal{H}}_{\infty})_{*}F_{0}\]
We want to prove that \({\mathcal{F}}_{\infty}=F_{\infty}\); setting \({\mathcal{F}}_{n}:=({\mathcal{H}}_{n})_{*}F_{0}\) we show inductively that \({\mathcal{F}}_{n}=F_{n}\) for any \(n\geq 1\). For \(n=1\) one has \({\mathcal{H}}_{1}=\Phi_{1}\circ{\mathcal{L}}_{1}\) and hence \(F_{1}={\mathcal{F}}_{1}\). Now assume that \({\mathcal{F}}_{n-1}=F_{n-1}\). By definition one has
\[{\mathcal{H}}_{n}={\mathcal{K}}_{n}\circ{\mathcal{H}}_{n-1}:=(\Phi_{n}\circ{ \mathcal{L}}_{n})\circ{\mathcal{H}}_{n-1},\qquad{\mathcal{H}}^{-1}_{n}={ \mathcal{H}}^{-1}_{n-1}\circ{\mathcal{K}}_{n}^{-1}.\]
Hence we have
\[F_{n}-{\mathcal{F}}_{n} =({\mathcal{K}}_{n})_{*}F_{n-1}-({\mathcal{H}}_{n})_{*}F_{0}=d{ \mathcal{K}}_{n}F_{n-1}\circ{\mathcal{K}}_{n}^{-1}-d{\mathcal{H}}_{n}F_{0} \circ{\mathcal{H}}^{-1}_{n}\] (5.53)
\[=d{\mathcal{K}}_{n}F_{n-1}\circ{\mathcal{K}}_{n}^{-1}-d{\mathcal{ K}}_{n}d{\mathcal{H}}_{n-1}F_{0}\circ{\mathcal{H}}_{n-1}^{-1}\circ{\mathcal{K} }_{n}^{-1}\]
\[=d{\mathcal{K}}_{n}(F_{n-1}-d{\mathcal{H}}_{n-1}F_{0}\circ{ \mathcal{H}}_{n-1}^{-1})\circ{\mathcal{K}}_{n}^{-1}=0.\]
This concludes the proof of Theorem 2.25.
## Appendix A Smooth functions and vector fields on the torus
Here we provide some technical results.
The following is a general result about smooth maps on the torus. First of all, for any \(p\geq 0\) and \(s\geq 0\) we denote as usual
\[H^{p}(\mathds{T}_{s}^{b};\mathds{C}):=\big{\{}u=\sum_{l\in\mathds{Z}^{b}}u_{l} e^{{\rm i}l\cdot\theta}:\|u\|^{2}_{s,p}:=\sum_{l\in\mathds{Z}^{b}}\langle l \rangle^{2p}|u_{l}|^{2}e^{2s|l|}<\infty\big{\}}\,,\] (A.1)
the space of functions which are analytic on the strip \(\mathds{T}_{s}^{b}\), Sobolev on its boundary, and have Fourier coefficients \(u_{l}\). By the Cauchy formula for analytic functions, such a \(u\) is uniquely determined by the values it assumes on the boundary of the domain, i.e. \(z=x\pm i\sigma s\) with \(\sigma\in\{\pm 1\}^{b}\). We can define a natural norm using the Sobolev norm of the function on the boundary
\[|u|^{2}_{s,p}:=\sum_{\sigma\in\{\pm 1\}^{b}}\int_{\mathds{T}^{b}}\langle\nabla \rangle^{2p}|u(x+i\sigma s)|^{2}\] (A.2)
Using the Fourier basis it reads
\[|u|^{2}_{s,p}:=\sum_{\sigma\in\{\pm 1\}^{b}}\sum_{l\in\mathds{Z}^{b}}\langle l \rangle^{2p}|u_{l}|^{2}e^{-2s\sigma\cdot l}\,.\]
**Lemma A.1**.: _The norm \(|\cdot|_{s,p}\) and_
\[\|u\|_{s,p}^{2}:=\sum_{l\in\mathds{Z}^{b}}\langle l\rangle^{2p}|u_{l}|^{2}e^{2 s|l|}\] (A.3)
_are equivalent._
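A minimal sketch of the equivalence, under the natural reading of \(|l|\) as the \(\ell^{1}\)-norm on \(\mathds{Z}^{b}\): for each fixed \(l\) the term \(e^{-2s\sigma\cdot l}\) is largest for \(\sigma_{i}=-{\rm sign}(l_{i})\), whence
\[e^{2s|l|}\leq\sum_{\sigma\in\{\pm 1\}^{b}}e^{-2s\sigma\cdot l}\leq 2^{b}e^{2s|l|}\,,\]
and multiplying by \(\langle l\rangle^{2p}|u_{l}|^{2}\) and summing over \(l\) gives \(\|u\|^{2}_{s,p}\leq|u|^{2}_{s,p}\leq 2^{b}\|u\|^{2}_{s,p}\).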
The following Lemma lists some important properties of Sobolev spaces \(H^{s}:=H^{s}(\mathds{T}^{b};\mathds{C})\) with norm
\[\|u\|^{2}_{s}:=\sum_{l\in\mathds{Z}^{b}}\langle l\rangle^{2s}|u_{l}|^{2}.\]
The same results hold also for the analytic norm in (A.1). The proof of the Lemma is classical.
**Lemma A.2**.: _Let \(s_{0}>d/2\). Then_
1. **Embedding.**__\(\|u\|_{L^{\infty}}\leq C(s_{0})\|u\|_{s_{0}}\)_,_ \(\forall\;u\in H^{s_{0}}\)_._
2. **Algebra.**__\(\|uv\|_{s_{0}}\leq C(s_{0})\|u\|_{s_{0}}\|v\|_{s_{0}}\)_,_ \(\forall\;u,v\in H^{s_{0}}\)_._
3. **Interpolation.** _For_ \(0\leq s_{1}\leq s\leq s_{2}\)_,_ \(s=\lambda s_{1}+(1-\lambda)s_{2}\)_,_ \[\|u\|_{s}\leq\|u\|^{\lambda}_{s_{1}}\|u\|_{s_{2}}^{1-\lambda},\quad\forall\;u \in H^{s_{2}}.\] (A.4)
4. **Asymmetric tame product.** _For_ \(s\geq s_{0}\) _one has_ \[\|uv\|_{s}\leq C(s_{0})\|u\|_{s}\|v\|_{s_{0}}+C(s)\|u\|_{s_{0}}\|v\|_{s},\quad \forall\;u,v\in H^{s}.\] (A.5)
5. **Mixed norms asymmetric tame product.** _For_ \(s\geq 0\)_,_ \(s\in\mathds{N}\) _setting_ \(|u|^{\infty}_{s}:=\sum_{|\alpha|\leq s}||D^{\alpha}u||_{L^{\infty}}\) _the norm in_ \(W^{s,\infty}\) _one has_ \[\|uv\|_{s}\leq\frac{3}{2}\|u\|_{L^{\infty}}\|v\|_{s}+C(s)|u|_{s,\infty}\|v\|_{ 0},\forall\;u\in W^{s,\infty},v\in H^{s}.\] (A.6) _If_ \(u:=u(\lambda)\) _and_ \(v:=v(\lambda)\) _depend in a Lipschitz way on_ \(\lambda\in\Lambda\subset\mathds{R}^{b}\)_, all the previous statements hold if one replace the norms_ \(\|\cdot\|_{s}\)_,_ \(|\cdot|^{\infty}_{s}\) _with_ \(\|\cdot\|_{s,\lambda}\)_,_ \(|\cdot|^{\infty}_{s,\lambda}\) _defined as in (_2.21_)._
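As an elementary illustration of the interpolation inequality (A.4), on a single Fourier mode \(u=e^{{\rm i}l_{0}\cdot x}\) one has \(\|u\|_{s}=\langle l_{0}\rangle^{s}\), and since \(s=\lambda s_{1}+(1-\lambda)s_{2}\),
\[\|u\|_{s}=\langle l_{0}\rangle^{\lambda s_{1}+(1-\lambda)s_{2}}=\|u\|_{s_{1}}^{\lambda}\,\|u\|_{s_{2}}^{1-\lambda}\,,\]
so that (A.4) holds with equality in this case; in particular the exponents in (A.4) cannot be improved.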
We now introduce the space
\[W^{p,\infty}(\mathds{T}^{b}_{\zeta}):=\big{\{}\beta:\mathds{T}^{b}_{\zeta}\to \mathds{T}^{b}_{\zeta}\;:\;|\beta|_{p,\zeta,\infty}:=\sum_{k=0}^{p}\|d^{k} \beta\|_{L^{\infty}(\mathds{T}^{b}_{\zeta})}<\infty\big{\}}\,,\] (A.7)
and note that one has \(H^{\zeta,p+{\mathfrak{p}}_{0}}(\mathds{T}^{b}_{\zeta})\subset W^{p,\infty}( \mathds{T}^{b}_{\zeta})\).
**Lemma A.3** (**Diffeo)**.: _Let \(\beta\in W^{p,\infty}(\mathds{T}^{b}_{\zeta})\) for some \(p,\zeta\geq 0\) such that_
\[\|\beta\|_{\zeta,{\mathfrak{p}}_{0}}\leq\frac{\delta}{2C_{1}},\quad\|\beta\|_{ \zeta,{\mathfrak{p}}_{0}}\leq\frac{1}{2C_{2}},\quad 0<\delta<\frac{\zeta}{2}, \;\;C_{1},C_{2}>0,\] (A.8)
_and let us consider \(\Phi:\mathds{T}^{b}_{\zeta}\to\mathds{T}^{b}_{2\zeta}\) of the form_
\[x\mapsto x+\beta(x)=\Phi(x).\] (A.9)
_Then the following is true._
_(i) There exists_ \(\Psi:\mathds{T}^{b}_{\zeta-\delta}\to\mathds{T}^{b}_{\zeta}\) _of the form_ \(\Psi(y)=y+\tilde{\beta}(y)\) _with_ \(\tilde{\beta}\in W^{p,\infty}(\mathds{T}^{b}_{\zeta-\delta})\) _satisfying_
\[\|\tilde{\beta}\|_{\zeta-\delta,{\mathfrak{p}}_{0}}\leq\frac{\delta}{2},\quad \|\tilde{\beta}\|_{\zeta-\delta,p}\leq 2\|\beta\|_{\zeta,p}\,,\] (A.10)
_such that for all_ \(x\in\mathds{T}^{b}_{\zeta-2\delta}\) _one has_ \(\Psi\circ\Phi(x)=x\)_._
_(ii) For all_ \(u\in H^{\zeta,p}(\mathds{T}^{b}_{\zeta})\)_, the composition_ \((u\circ\Phi)(x)=u(x+\beta(x))\) _satisfies_
\[\|u\circ\Phi\|_{\zeta-\delta,p}\leq C(\|u\|_{\zeta,p}+|d\beta|_{p-1,\zeta, \infty}\|u\|_{\zeta,{\mathfrak{p}}_{0}}).\] (A.11)
_Proof._ For \(\zeta=0\) the result is proved in [38]; thus in the following we assume \(\zeta>0\).
_(i)_ First of all recall that, if \({\mathfrak{p}}_{0}\geq b/2\) then \(\|u\|_{L^{\infty}}\leq\|u\|_{\zeta,{\mathfrak{p}}_{0}}\). We look for \(\tilde{\beta}\) such that
\[\tilde{\beta}(y)=-\beta(y+\tilde{\beta}(y)).\] (A.12)
The idea is to rewrite the problem as a fixed point equation. We define the operator \({\mathcal{G}}:H^{\zeta,p}\to H^{\zeta,p}\) as \({\mathcal{G}}(\tilde{\beta})=-\beta(y+\tilde{\beta})\). First of all we need to show that \({\mathcal{G}}\) maps the ball \(B_{\delta/2}:=\{\|u\|_{\zeta-\delta,p}<\delta/2\}\) into itself.
One has
\[\|{\mathcal{G}}(\tilde{\beta})\|_{\zeta-\delta,{\mathfrak{p}}_{0}} =\left\|\sum_{n\geq 0}\frac{1}{n!}(\partial^{n}\beta)\tilde{\beta }^{n}\right\|_{\zeta-\delta,{\mathfrak{p}}_{0}}\leq\sum_{n\geq 0}\frac{1}{n!} \|\beta\|_{\zeta-\delta,{\mathfrak{p}}_{0}+n}\|\tilde{\beta}\|^{n}_{\zeta- \delta,{\mathfrak{p}}_{0}}\,,\] (A.13)
where \(\partial\beta\) denotes the derivative of \(\beta\) w.r.t. its argument. Note that for any \(u\in H^{\zeta+\delta,s}\) and \(\tau>0\) one has
\[\|u\|_{\zeta,s+\tau}\leq\left(\frac{\tau}{e}\right)^{\tau}\frac{1}{\delta^{ \tau}}\|u\|_{\zeta+\delta,s}\,;\] (A.14)
indeed
\[\|u\|_{\zeta,p+\tau}^{2} =\sum_{l\in\mathds{Z}^{b}}\langle l\rangle^{2(p+\tau)}e^{2\zeta|l|}|u_{l}|^{2}\leq\sum_{l\in\mathds{Z}^{b}}\langle l\rangle^{2p}|l|^{2\tau}e^{-2\delta|l|}e^{2(\zeta+\delta)|l|}|u_{l}|^{2}\,,\]
and the function \(f(x):=x^{2\tau}e^{-2\delta x}\) reaches its maximum at \(x=\tau/\delta\) with \(f(\tau/\delta)=(\tau/\delta e)^{2\tau}\), so that (A.14) follows. Then using (A.14) and the fact that \(n!=\sqrt{2\pi n}\,(n/e)^{n}(1+O(1/n))\) as \(n\to\infty\), we obtain
\[\|{\mathcal{G}}(\tilde{\beta})\|_{\zeta-\delta,{\mathfrak{p}}_{0}} \leq\sum_{n\geq 0}\frac{1}{n!}\left(\frac{n}{e}\right)^{n}\frac{1 }{\delta^{n}}\|\beta\|_{\zeta,{\mathfrak{p}}_{0}}\|\tilde{\beta}\|^{n}_{\zeta- \delta,{\mathfrak{p}}_{0}}\leq\|\beta\|_{\zeta,{\mathfrak{p}}_{0}}\sum_{n\geq 0 }C\left(\frac{\|\tilde{\beta}\|_{\zeta-\delta,{\mathfrak{p}}_{0}}}{\delta} \right)^{n}\] (A.15)
\[\leq 2C\|\beta\|_{\zeta,{\mathfrak{p}}_{0}}\stackrel{{ (\ref{diffeo2})}}{{\leq}}\frac{\delta}{2}\,.\]
Finally we show that \({\mathcal{G}}\) is a contraction. One has
\[\begin{aligned} \|{\mathcal{G}}(\tilde{\beta}_{1})-{\mathcal{G}}(\tilde{\beta}_{2})\|_{\zeta-\delta,p}&=\left\|\sum_{n\geq 1}\frac{1}{n!}(\partial^{n}\beta)\tilde{\beta}_{1}^{n}-\sum_{n\geq 1}\frac{1}{n!}(\partial^{n}\beta)\tilde{\beta}_{2}^{n}\right\|_{\zeta-\delta,p}\\ &=\left\|\sum_{n\geq 1}\frac{1}{n!}(\partial^{n}\beta)(\tilde{\beta}_{1}-\tilde{\beta}_{2})\left(\sum_{k=0}^{n-1}\tilde{\beta}_{1}^{k}\tilde{\beta}_{2}^{n-1-k}\right)\right\|_{\zeta-\delta,p}\\ &\leq\|\tilde{\beta}_{1}-\tilde{\beta}_{2}\|_{\zeta-\delta,p}\sum_{n\geq 1}\frac{1}{n!}\left(\frac{n}{e}\right)^{n}\frac{1}{\delta^{n}}\|\beta\|_{\zeta,p}\sum_{k=0}^{n-1}\|\tilde{\beta}_{1}\|^{k}_{\zeta-\delta,p}\|\tilde{\beta}_{2}\|_{\zeta-\delta,p}^{n-1-k}\\ &\leq\|\tilde{\beta}_{1}-\tilde{\beta}_{2}\|_{\zeta-\delta,p}C_{2}\|\beta\|_{\zeta-\delta,p}\stackrel{{(\ref{diffeo2})}}{{\leq}}\frac{1}{2}\|\tilde{\beta}_{1}-\tilde{\beta}_{2}\|_{\zeta-\delta,p}\,.\end{aligned}\] (A.16)
Then we deduce that there exists a unique fixed point in \(B_{\delta/2}\), hence a solution of the equation (A.12).
_(ii)_ One can follow almost word by word the proof of Lemma 11.4 in [38] using the norm (A.2) instead of (A.3) and the interpolation properties of the \(W^{p,\infty}(\mathds{T}^{b}_{\zeta})\)-norms.
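As a concrete illustration of the fixed point construction in \((i)\) (a toy example, not needed in the sequel), take \(b=1\) and \(\beta(x)=\varepsilon\sin x\) with \(\varepsilon\) small: equation (A.12) becomes \(\tilde{\beta}(y)=-\varepsilon\sin\big(y+\tilde{\beta}(y)\big)\), and the Picard iterates
\[\tilde{\beta}_{0}:=0\,,\qquad\tilde{\beta}_{k+1}(y):=-\varepsilon\sin\big{(}y+\tilde{\beta}_{k}(y)\big{)}\,,\]
converge in the sup norm at the geometric rate \(\varepsilon\), since \(|\sin(y+u)-\sin(y+v)|\leq|u-v|\); the limit \(\tilde{\beta}\) provides the inverse \(y\mapsto y+\tilde{\beta}(y)\) of the map \(x\mapsto x+\varepsilon\sin x\).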
**Remark A.4**.: _Note that by Lemma A.1, one has_
\[\|f^{(\theta)}(\theta,y,w)\|_{s,a,p}\approx\frac{1}{s_{0}}\max_{1 \leq i\leq d}\sum_{\sigma\in\{\pm 1\}^{d}}\|f^{(\theta_{i})}({\rm Re}(\theta)+ {\rm i}\sigma s)\|_{H^{p}}\,,\]
\[\|f^{(y)}(\theta,y,w)\|_{s,a,p}\approx\frac{1}{r_{0}^{\mathtt{s}} }\sum_{i=1}^{d_{1}}\sum_{\sigma\in\{\pm 1\}^{d}}\|f^{(y_{i})}({\rm Re}(\theta) +{\rm i}\sigma s)\|_{H^{p}}\,,\]
\[\|f^{(w)}(\theta,y,w)\|_{s,a,p}\approx\frac{1}{r_{0}}\sum_{\sigma \in\{\pm 1\}^{d}}\big{(}\|{\mathtt{f}}_{{\mathfrak{p}}_{0}}({\rm Re}(\theta)+{ \rm i}\sigma s)\|_{H^{p}(\mathds{T}^{d}_{s})}+\|{\mathtt{f}}_{p}({\rm Re}( \theta)+{\rm i}\sigma s)\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s})}\big{)}\]
_where \({\mathtt{f}}_{p}(\theta)\) is defined as in (2.12). In particular this means that for all \(s\geq 0\), \(a\geq 0\) and \(p\geq\overline{{\mathfrak{p}}}>n/2\) one has the standard algebra, interpolation and tame properties w.r.t. composition with functions in \(H^{p}(\mathds{T}^{d}_{s})\); see for instance [25, 40, 41, 28] just to mention a few._
From Lemma A.3 and Remark A.4 above we deduce the following result.
**Lemma A.5**.: _Given a tame vector field \(f\in{\mathcal{V}}_{\vec{v},p}\) with scale of constants \(C_{p}(f)\) of the form (2.14) and given a map \(\Phi(\theta)=\theta+\beta(\theta):\mathds{T}^{d}_{s^{\prime}}\to\mathds{T}^{d} _{s}\) as in (A.9) with \(b=d\) and \(\zeta=s\), then the composition \(f\circ\Phi\) is a tame vector field with constant_
\[C_{p}(f\circ\Phi)\leq C_{p}(f)+C_{{\mathfrak{p}}_{0}}(f)\|\beta\|_{s,p+\nu+3}.\] (A.17)
_Moreover if \(f\) is a regular vector field, i.e. it satisfies (4.17), then_
\[|f\circ\Phi|_{\vec{v}_{1},p}\leq|f|_{\vec{v},p}+|f|_{\vec{v},{\mathfrak{p}}_{0 }}\|\beta\|_{s,p+\nu+3}.\] (A.18)
_where \(\vec{v}_{1}=(\lambda,{\mathcal{O}},s^{\prime},a)\)._
Proof.: By Lemma A.3 one has that if \(\|\beta\|_{s,{\mathfrak{p}}_{1}}\) is sufficiently small, then the vector field \(f\circ\Phi\) is defined on \(\mathds{T}^{d}_{s-\rho s_{0}}\times D_{a,p}(r-\rho r_{0})\). Lemma A.3 guarantees that for a function \(u(\theta)\in\mathds{C}\) the estimate (A.11) holds. Hence also the components \(f^{(v)}(\theta,y,w)\) for \(v=\theta,y\) satisfy the same bounds (recall that for the norm (2.7) \(y,w\) are parameters). Let us study the composition of \(f^{(w)}(\theta+\beta(\theta),y,w)\). By Remark 2.4 one has
\[\|f^{(w)}\|_{s,a,p}=\frac{1}{r_{0}}(\|{\mathtt{f}}_{{\mathfrak{p}}_{0}}\|_{s,p }+\|{\mathtt{f}}_{p}\|_{s,{\mathfrak{p}}_{0}}).\]
Now \({\mathtt{f}}_{p}:\mathds{T}^{d}_{s}\to\mathds{C}\) and hence we can apply Lemma A.3 to obtain the result. The bounds on the derivatives follow in the same way. ∎
## Appendix B Properties of Tame and regular vector fields
We now discuss the main properties of \(C^{k}\)-tame and regular vector fields; in particular we need to control the changes in the tameness constants when conjugating via changes of variables generated by regular bounded vector fields.
**Lemma B.1**.: _Consider any two \(C^{k}\)-tame vector fields \(F,G\in\mathcal{W}_{\vec{v},p}\); then the following holds._
* _For_ \(l=1,\ldots,d\) _one has that_ \(\partial_{\theta_{l}}F\) _is a_ \(C^{k}\)_-tame vector field up to order_ \(q-1\) _with tameness constants_ \(C_{\vec{v},p+1}(F)\)_._
* _For_ \(l=1,\ldots,d\) _one has that_ \(\partial_{y_{l}}F,d_{w}F[w]\) _are_ \(C^{k-1}\)_-tame vector fields up to order_ \(q\) _with tameness constants_ \(C_{\vec{v},p}(F)\)_._ _for any_ \(h\geq 0\)_._
* _The commutator_ \([G,F]\) _is a_ \(C^{k-1}\)_-tame vector field up to order_ \(q-1\) _with scale of constants_ \[C_{\vec{v},p}([G,F])\leq\mathtt{C}(C_{\vec{v},p+\nu_{G}+1}(F)C_{\vec{v},{ \mathfrak{p}}_{0}+\nu_{F}+1}(G)+C_{\vec{v},{\mathfrak{p}}_{0}+\nu_{G}+1}(F)C_{ \vec{v},p+\nu_{F}+1}(G)),\] (B.1) _where_ \(\nu_{F}\)_,_ \(\nu_{G}\) _are the loss of regularity of_ \(F\)_,_ \(G\) _respectively._
* _If_ \(F\) _is a polynomial of maximal degree_ \(k\) _in_ \(y,w\) _then it is_ \(C^{\infty}\)_-tame up to order_ \(q\)_._
Proof.: Let us check item \((i)\). We consider a map \(\Phi:=\mathds{1}+f\) as in Definition 2.13. Recall that \(\|f\|_{s,a,{\mathfrak{p}}_{1}}<1/2\). One has that
\[\|(\partial_{\theta}F)\circ\Phi\|_{s,a,p} \leq\|\partial_{\theta}(F\circ\Phi)\|_{s,a,p}+\|(\partial_{\theta }F)\circ\Phi\cdot\partial_{\theta}f\|_{s,a,p}\] (B.2)
\[\leq\|\partial_{\theta}(F\circ\Phi)\|_{s,a,p}+\|(\partial_{\theta }F)\circ\Phi\|_{s,a,p}\|\partial_{\theta}f\|_{s,a,{\mathfrak{p}}_{0}}+\|( \partial_{\theta}F)\circ\Phi\|_{s,a,{\mathfrak{p}}_{0}}\|\partial_{\theta}f\|_ {s,a,p}.\]
Now, for \(p={\mathfrak{p}}_{0}\), by (B.2) one gets
\[\big{(}1-2\|\partial_{\theta}f\|_{s,a,{\mathfrak{p}}_{0}}\big{)}\|(\partial_{ \theta}F)\circ\Phi\|_{s,a,{\mathfrak{p}}_{0}}\leq\|\partial_{\theta}(F\circ \Phi)\|_{s,a,{\mathfrak{p}}_{0}}\stackrel{{(T1)}}{{\leq}}C_{s,a,p+ 1}(F)(1+\|\Phi\|_{s,a,{\mathfrak{p}}_{0}}),\] (B.3)
hence, for \(p>{\mathfrak{p}}_{0}\) one has
\[\|(\partial_{\theta}F)\circ\Phi\|_{s,a,p}\leq\mathtt{c}(C_{s,a,p+1}(F)+C_{s,a, {\mathfrak{p}}_{0}}(F)\|\Phi\|_{s,a,p+1}),\] (B.4)
for some \(\mathtt{c}\) independent of \(p\). Equation (B.4) implies property \((T1)\) for the vector field \((\partial_{\theta}F)(\theta,y,w)\). Clearly it holds for \(p\leq q-1\). The other bounds follow similarly. Finally items \((ii),(iii),(iv)\) follow by the definitions.
∎
**Remark B.2**.: _For \(k\geq 0\) and any \(\mathtt{v}\in\mathtt{V},\mathtt{v}_{1},\ldots,\mathtt{v}_{k}\in\mathtt{U}\) consider any monomial subspace \({\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}\) as in (2.23). Then for all \(C^{k}\)-tame vector fields \(F\) one has that \(\Pi_{{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}}F\) is \(C^{\infty}\)-tame (up to the same order as \(F\)) and one can choose the constant as \(C_{\vec{v},p}(\Pi_{{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\dots,\mathtt{v}_{k})}}F)=C_{\vec{v},p}(F)\). The same holds for the direct sum \({\mathcal{U}}\) of a finite number of monomial spaces and their orthogonal, namely one can choose_
\[C_{\vec{v},p}(\Pi_{{\mathcal{U}}}F)=C_{\vec{v},p}(F),\qquad C_{\vec{v},p}(\Pi_ {{\mathcal{U}}}^{\perp}F)=C_{\vec{v},p}(F)\,.\]
**Lemma B.3** (**Conjugation)**.: _Consider a tame left invertible map \(\Phi=\mathds{1}+f\) with tame inverse \(\Psi=\mathds{1}+h\) as in Definition 2.8 such that (2.18) holds. Assume that \({\mathfrak{p}}_{1}\geq{\mathfrak{p}}_{0}+\nu+1\) and the fields \(f,h\) are such that \(C_{\vec{v},{\mathfrak{p}}_{1}}(f)=C_{\vec{v},{\mathfrak{p}}_{1}}(h)\leq{\mathtt{c}}\rho\) for \(\rho>0\) and \(\mathtt{c}\) the same constant appearing in Remark 2.7. For any vector field_
\[F:\mathds{T}^{d}_{s}\times D_{a,p+\nu}(r)\times{\mathcal{O}}\to V_{a,p},\] (B.5)
_which is \(C^{k}\)–tame up to order \(q\), one has that the push–forward_
\[F_{+}:=\Phi_{*}F:\mathds{T}^{d}_{s-2\rho s_{0}}\times D_{a,p+\nu}(r-2\rho r_{0 })\times{\mathcal{O}}\to V_{a-2\rho a_{0},p}\] (B.6)
_is \(C^{k}\)–tame up to order \(q-\nu-1\), with scale of constants_
\[C_{\vec{v}_{2},p}(F_{+})\leq(1+\rho)\Big{(}C_{\vec{v},p}(F)+C_{\vec{v},{ \mathfrak{p}}_{0}}(F)C_{\vec{v}_{1},p+\nu+1}(f)\Big{)},\] (B.7)
_where \(\vec{v}:=(\lambda,{\mathcal{O}},s,a,r)\), \(\vec{v}_{1}:=(\lambda,{\mathcal{O}},s-\rho s_{0},a-\rho a_{0},r-\rho r_{0})\) and \(\vec{v}_{2}:=(\lambda,{\mathcal{O}},s-2\rho s_{0},a-2\rho a_{0},r-2\rho r_{0})\)._
Proof.: By (2.19) the vector field \(F_{+}\) is defined in \(\mathds{T}_{s-2\rho s_{0}}\times D_{a,p}(r-2\rho r_{0})\times{\mathcal{O}}\). Then, given a change of coordinates \(\Gamma:\mathds{T}^{d}_{s_{1}}\times D_{a^{\prime},p^{\prime}}(r_{1})\times{ \mathcal{O}}\rightarrow\mathds{T}_{s-2\rho s_{0}}\times D_{a,p}(r-2\rho r_{0})\) we can consider the composition of \(F_{+}\) with \(\Gamma\); in particular
\[\Psi\circ\Gamma:\mathds{T}^{d}_{s_{1}}\times D_{a^{\prime},p^{\prime}}(r_{1}) \times{\mathcal{O}}\longrightarrow\mathds{T}_{s}\times D_{a,p+\nu}(r),\]
namely the domain of \(F\). Let us check the property \((\)T\({}_{0})\) for the vector field \(F_{+}\). In the following we will keep track only of the index \(p\). One has
\[\|\Psi(\Gamma)\|_{p}\leq C_{p}(f)+(1+C_{{\mathfrak{p}}_{0}}(f))\| \Gamma\|_{p},\] (B.8)
\[\|\Psi(\Gamma)\|_{{\mathfrak{p}}_{0}+\nu}\leq 1+2C_{{\mathfrak{p} }_{0}+\nu}(f).\]
so that we get
\[\|F_{+}(\Gamma)\|_{p} \leq\|F(\Psi(\Gamma))\|_{p}+\|{d}f(\Psi(\Gamma))[F(\Psi(\Gamma))] \|_{p}\] (B.9)
\[\stackrel{{(\text{T}_{1})}}{{\leq}}(1+C_{{\mathfrak{p }}_{0}+1}(f))\|F(\Psi(\Gamma))\|_{p}+\left(C_{p+1}(f)+C_{{\mathfrak{p}}_{0}+1} (f)\|\Psi(\Gamma)\|_{p}\right)\|F(\Psi(\Gamma))\|_{{\mathfrak{p}}_{0}}\]
\[\stackrel{{(\text{T}_{0})}}{{\leq}}(1+C_{{\mathfrak{p }}_{0}+1}(f))\left[C_{p}(F)+C_{{\mathfrak{p}}_{0}}(F)\|\Psi(\Gamma)\|_{p+\nu}\right]\]
\[\qquad\qquad+\left(C_{p+1}(f)+C_{{\mathfrak{p}}_{0}+1}(f)\|\Psi( \Gamma)\|_{p}\right)\left[C_{{\mathfrak{p}}_{0}}(F)+C_{{\mathfrak{p}}_{0}}(F) \|\Psi(\Gamma)\|_{{\mathfrak{p}}_{0}+\nu}\right]\,,\]
and therefore
\[\|F_{+}(\Gamma)\|_{p} \leq C_{p}(F)(1+C_{{\mathfrak{p}}_{0}+\nu+1}(f))+5C_{{\mathfrak{p }}_{0}}(F)(1+C_{{\mathfrak{p}}_{0}+1}(f))C_{p+\nu+1}(f)\] (B.10)
\[+\|\Gamma\|_{p+\nu}\left[C_{{\mathfrak{p}}_{0}}(F)(1+3C_{{ \mathfrak{p}}_{0}+\nu+1}(f))^{2}\right],\]
that is \((\)T\({}_{0}\)). The other properties are obtained with similar calculations using also the fact that the vector field \(f\) is linear in the variables \(y,w\). Hence \(F_{+}\) is tame with scale of constants in (B.7). ∎
**Remark B.4**.: _In Lemma B.3, if \(f,h\in E^{(K)}\), then the smoothing estimate (2.32) applied to \(|f|_{\vec{v},p+\nu+1}\) implies that \(F_{+}\) is \(C^{k}\)–tame up to order \(q\). The same holds if \(\Phi\) is generated by a vector field \(g\in E^{(K)}\)._
**Remark B.5**.: _Consider a vector field \(g\) which generates a well defined flow \(\Phi^{t}\) for \(t\leq 1\) and set \(\Phi:=\Phi^{1}\). Then, for all \(p\geq{\mathfrak{p}}_{0}\) and all vector fields \(F\) such that the push-forward with \(\Phi^{t}\) is well defined, one has_
\[L:=\Phi_{*}F-F =\int_{0}^{1}\Phi_{*}^{t}[g,F]dt,\] (B.11)
\[Q:=\Phi_{*}F-[g,F]-F =\int_{0}^{1}\int_{0}^{t}\Phi_{*}^{s}[g,[g,F]]dsdt.\]
_If moreover \(g\in{\mathcal{B}}\) satisfies Definition 2.18 item 5, and \(F\) is as in formula (B.5) then \(L\) is \(C^{k-1}\) tame and \(Q\) is \(C^{k-2}\) tame up to order \(q-\nu-1\) with constants_
\[C_{\vec{v_{2}},p}(L)\leq\mathtt{C}\Big{(}C_{\vec{v},p+1}(F)|g|_{ \vec{v},{\mathfrak{p}}_{0}+\nu+1}+C_{\vec{v},{\mathfrak{p}}_{0}+1}(F)|g|_{\vec {v},p+\nu+1}\Big{)}\] (B.12a)
\[C_{\vec{v_{2}},p}(Q)\leq\mathtt{C}\Big{(}C_{\vec{v},p+2}(F)|g|^{ 2}_{\vec{v},{\mathfrak{p}}_{0}+\nu+1}+C_{\vec{v},{\mathfrak{p}}_{0}+2}(F)|g|_{ \vec{v},{\mathfrak{p}}_{0}+\nu+2}|g|_{\vec{v},p+\nu+2}\Big{)}\] (B.12b)
_where \(\vec{v}:=(\lambda,{\mathcal{O}},s,a,r)\), \(\vec{v}_{2}:=(\lambda,{\mathcal{O}},s-2\rho s_{0},a,r-2\rho r_{0})\). Finally if \(g\in E^{(K)}\) then \(L,Q\) are tame up to order \(q\)._
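Both identities in (B.11) are instances of the standard Lie series expansion: with the commutator convention used here one has \(\frac{d}{dt}\Phi^{t}_{*}F=\Phi^{t}_{*}[g,F]\), so that \(\Phi_{*}F-F=\int_{0}^{1}\frac{d}{dt}\Phi^{t}_{*}F\,dt\) gives the first line, while applying the same identity to the field \([g,F]\),
\[\Phi_{*}^{t}[g,F]-[g,F]=\int_{0}^{t}\Phi_{*}^{s}\big{[}g,[g,F]\big{]}\,ds\,,\]
and integrating in \(t\in[0,1]\) gives the second.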
**Lemma B.6**.: _Consider any subspace \({\mathcal{U}}\) which is a finite direct sum of monomial subspaces as in formula (2.23), having degree at most \(k\), or their average in \(\theta\). Under the hypotheses of Lemma B.3, assume also that \(F\in{\mathcal{U}}\) and \(f\in{\mathcal{B}}\). Then for all \(p\geq{\mathfrak{p}}_{0}\) one has_
\[C_{\vec{v}_{2},p}(\Pi_{{\mathcal{U}}}^{\perp}\Phi_{*}F)\leq{\mathtt{C}}\Big{(} C_{\vec{v},p}(F)\rho+C_{\vec{v},{\mathfrak{p}}_{1}}(F)|f|_{\vec{v}_{1},p+\nu+1 }\Big{)}\] (B.13)
_where \(\vec{v}:=(\lambda,{\mathcal{O}},s,a)\), \(\vec{v}_{1}:=(\lambda,{\mathcal{O}},s-\rho s_{0},a)\) and \(\vec{v}_{2}:=(\lambda,{\mathcal{O}},s-2\rho s_{0},a)\). Moreover if \(f\in E^{(K)}\) then (B.13) holds up to order \(q\)._
Proof.: One has
\[\Pi_{{\mathcal{U}}}^{\perp}\Big{(}\Phi_{*}F\Big{)}=\Pi_{{\mathcal{U}}}^{\perp} \Big{(}F\circ\Psi-F+d_{u}f(\Psi)[F\circ\Psi]\Big{)}\,.\] (B.14)
The last summand clearly satisfies the estimates (B.13). Regarding the first two terms we write
\[\Pi_{{\mathcal{U}}}^{\perp}(F\circ\Psi-F)=\Pi_{{\mathcal{U}}}^{\perp}\big{(}d_ {\theta}F[f^{(\theta)}]+\sum_{\mathtt{u}=y,w}d_{\mathtt{u}}F[f^{(\mathtt{u})}] \big{)}\,;\]
the second term satisfies (B.13) by property (\(T_{1}\)), while we claim that \(d_{\theta}F[f^{(\theta)}]\in{\mathcal{U}}\). Indeed if \(F\in{\mathcal{V}}^{(\mathtt{v},\mathtt{v}_{1},\ldots,\mathtt{v}_{h})}\), it has the form
\[F=\frac{1}{\alpha(\mathtt{v}_{1},\ldots,\mathtt{v}_{h})!}\Big{(}\prod_{i=1}^{h }d_{\mathtt{v}_{i}}\Big{)}F^{(\mathtt{v})}(\theta,0,0)[\mathtt{v}_{1},\ldots, \mathtt{v}_{h}],\]
so that differentiating w.r.t. \(\theta\) on both sides and computing at \(g^{(\theta)}\) commutes with the derivation in \(\mathtt{v}_{i}\in{\mathtt{U}}\) (this follows from the fact that if \(f\in{\mathcal{B}}\) then \(f^{(\theta)}(\theta,y,w)=f^{(\theta)}(\theta,0,0)\)). If \({\mathcal{U}}\) is only the average of some monomial space, then clearly its \(\theta\)-derivative is zero. ∎
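As a concrete instance of the last argument (with \(A\) a generic \(\theta\)-dependent coefficient, introduced only for this illustration), if \(F\in{\mathcal{V}}^{(y,w)}\) has the form \(F=A(\theta)\cdot w\,\partial_{y}\), then
\[d_{\theta}F[f^{(\theta)}]=\big{(}d_{\theta}A(\theta)[f^{(\theta)}(\theta)]\big{)}\cdot w\,\partial_{y}\,,\]
which is again a monomial of the same type, precisely because \(f^{(\theta)}\) depends only on \(\theta\); this is the mechanism behind the claim \(d_{\theta}F[f^{(\theta)}]\in{\mathcal{U}}\).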
### Proof of Lemma 4.8
**Lemma B.7**.: _All regular vector fields \(f\) as in Definition 4.5 are \(C^{\infty}\)-tame up to order \(q\) with tameness constant_
\[C_{\vec{v},p}(f)=C_{d,q}|f|_{\vec{v},p}.\] (B.15)
Proof.: In view of Lemma B.1–(iv), we only need to prove that a regular vector field is \(C^{1}\)-tame. Consider a regular vector field \(f\) (see Definition 4.5) and a map \(\Phi=\mathds{1}+g\) as in Definition 2.13. For simplicity we drop the indices \(\vec{v},\vec{v}_{1},\vec{v}_{2}\). Without loss of generality we can also assume that \(g^{(\theta)}\) depends only on \(\theta\), since in \((\)T\({}_{m})\) we first perform the \(y,w\)-derivatives and then compute at \(\Phi=\mathds{1}+g\). Let us check \((\)T\({}_{0})\) for \(f\). One has
\[(f\circ\Phi)^{(\theta)} :=h^{(\theta,0)}(\theta),\qquad(f\circ\Phi)^{(w)}:=h^{(w,0)}( \theta),\]
\[(f\circ\Phi)^{(y)} :=h^{(y,0)}(\theta)+h^{(y,y)}(\theta)\Phi^{(y)}(\theta,y,w)+h^{(y ,w)}(\theta)\cdot\Phi^{(w)}(\theta,y,w),\]
where
\[h^{(v,v^{\prime})}(\theta):=f^{(v,v^{\prime})}(\theta+g^{(\theta,0)}(\theta)), \quad v,v^{\prime}=0,\theta,y,w.\]
We first give bounds on the norm of \(f\circ\Phi\) in terms of the norms of \(h\) and \(\Phi\). The terms depending only on \(\theta\) are trivially bounded by the norm of \(h\). In the \(y\)-component one has
\[\|h^{(y,y)}\Phi^{(y)}\|^{2}_{s,a,p} \leq C(d)\frac{1}{r_{0}^{2\mathtt{s}}}\sum_{\ell\in\mathds{Z}^{d} }\sum_{i=1}^{d_{1}}\sum_{k=1}^{d_{1}}|(h^{(y_{i},y_{k})}g^{(y_{k})})(\ell)|^{2 }e^{2s|\ell|}\langle\ell\rangle^{2p}\] (B.16)
\[=C(d)\frac{1}{r_{0}^{2\mathtt{s}}}\sum_{i=1}^{d_{1}}\sum_{k=1}^{d _{1}}\|h^{(y_{i},y_{k})}(\theta)\Phi^{(y_{k})}(\theta)\|^{2}_{s,p}\]
\[\stackrel{{(\ref{A5})}}{{\leq}}C(d)\frac{1}{r_{0}^{2 \mathtt{s}}}\sum_{i=1}^{d_{1}}\sum_{k=1}^{d_{1}}(\|h^{(y_{i},y_{k})}\|_{s,p}\| \Phi^{(y_{k})}\|_{s,{\mathfrak{p}}_{0}}+\|h^{(y_{i},y_{k})}\|_{s,{\mathfrak{p} }_{0}}\|\Phi^{(y_{k})}\|_{s,p})^{2},\]
hence one obtains
\[\|h^{(y,y)}\Phi^{(y)}\|_{s,a,p}\leq K\left(r_{0}^{\mathtt{s}}\|h^{(y,y)}\|_{s, a,p}\|\Phi^{(y)}\|_{s,a,{\mathfrak{p}}_{0}}+r_{0}^{\mathtt{s}}\|h^{(y,y)}\|_{s ,a,{\mathfrak{p}}_{0}}\|\Phi^{(y)}\|_{s,a,p}\right).\] (B.17)
Finally one has
\[\|h^{(y,w)}\Phi^{(w)}\|^{2}_{s,a,p} \leq C\frac{1}{r_{0}^{2\mathtt{s}}}\sum_{i=1}^{d_{1}}\|h^{(y_{i}, w)}\Phi^{(w)}\|^{2}_{s,p}=C\frac{1}{r_{0}^{2\mathtt{s}}}\sum_{i=1}^{d_{1}}\sum _{l\in\mathds{Z}^{d}}\langle l\rangle^{2p}e^{2s|l|}|(h^{(y_{i},w)}\cdot\Phi^{( w)})(l)|^{2}\] (B.18)
\[\leq C\frac{1}{r_{0}^{2\mathtt{s}}}\sum_{i=1}^{d_{1}}\sum_{l\in \mathds{Z}^{d}}\langle l\rangle^{2p}e^{2s|l|}(\sum_{k\in\mathds{Z}^{d}}|h^{(y_ {i},w)}(l-k)\cdot\Phi^{(w)}(k)|)^{2}\]
\[\qquad+\frac{C}{r_{0}^{2\mathtt{s}}}\sum_{i=1}^{d_{1}}\sum_{l,k \in\mathds{Z}^{d}}\langle l-k\rangle^{2{\mathfrak{p}}_{0}}e^{2s|l-k|}\langle k \rangle^{2p}e^{2s|k|}|h^{(y_{i},w)}(l-k)\cdot\Phi^{(w)}(k)|^{2}\]
\[\stackrel{{(\ref{sonasega})}}{{\leq}}\frac{C}{r_{0}^{ 2\mathtt{s}}}\sum_{i=1}^{d_{1}}\sum_{l,k\in\mathds{Z}^{d}}\langle l-k\rangle^{ 2p}e^{2s|l-k|}\langle k\rangle^{2{\mathfrak{p}}_{0}}e^{2s|k|}\|h^{(y_{i},w)}(l -k)\|^{2}_{-a,-{\mathfrak{p}}_{0}-\nu}\|\Phi^{(w)}(k)\|_{a,{\mathfrak{p}}_{0}+ \nu}^{2}\]
\[\qquad+\frac{C}{r_{0}^{2\mathtt{s}}}\sum_{i=1}^{d_{1}}\sum_{l,k \in\mathds{Z}^{d}}\langle l-k\rangle^{2{\mathfrak{p}}_{0}}e^{2s|l-k|}\langle k \rangle^{2p}e^{2s|k|}\|h^{(y_{i},w)}(l-k)\|^{2}_{-a,-{\mathfrak{p}}_{0}-\nu}\| \Phi^{(w)}(k)\|_{a,{\mathfrak{p}}_{0}+\nu}^{2},\]
where in the third line we used the standard interpolation estimates and the fact that \({\mathfrak{p}}_{0}>d/2\). By (B.18), since \(p\leq q\), it follows that
\[\|h^{(y,w)}\Phi^{(w)}\|_{s,a,p}\leq C\left(r_{0}\|h^{(y,w)}\|_{H^{p}(\mathds{T }^{d}_{s};\ell_{-a,-{\mathfrak{p}}_{0}-\nu})}\|\Phi^{(w)}\|_{s,a,{\mathfrak{p} }_{0}+\nu}+r_{0}\|h^{(y,w)}\|_{H^{{\mathfrak{p}}_{0}}(\mathds{T}^{d}_{s};\ell_ {-a,-{\mathfrak{p}}_{0}-\nu})}\|\Phi^{(w)}\|_{s,a,p}\right).\]
Now each component \(h^{(v,v^{\prime})}\) is a function \(\mathds{T}^{d}_{s}\to V_{a,p}\) composed with a diffeomorphism of the torus given by \(\theta\mapsto\theta+g^{(\theta,0)}(\theta)\). Hence we obtain, by using Lemma A.3\((ii)\) and Lemma A.5,
\[\|f\circ\Phi\|_{s,a,p}\leq C(d,q)(|f|_{s,a,p}+|f|_{s,a,{\mathfrak{p}}_{0}}\| \Phi\|_{s,a,p+\nu}),\] (B.19)
The property \((\)T\({}_{1})\) follows in the same way. ∎
**Lemma B.8**.: _Consider a vector field \(f\in\mathcal{B}\) such that_
\[f:\mathds{T}^{d}_{s}\times D_{a,p}(r)\times{\mathcal{O}}\to V_{a,p}\] (B.20)
_and_
\[|f|_{\vec{v},{\mathfrak{p}}_{1}}\leq{\mathtt{c}}{\rho},\] (B.21)
_for some \(\rho>0\). If \(\rho\) is small enough, then for all \(\xi\in{\mathcal{O}}\) the following holds._
_(i) The map_ \(\Phi:=\mathds{1}+f\) _is such that_
\[\Phi:\mathds{T}^{d}_{s}\times D_{a,p}(r)\times{\mathcal{O}}\longrightarrow \mathds{T}^{d}_{s+\rho s_{0}}\times D_{a,p}(r+\rho r_{0}).\] (B.22)
_(ii) There exists a vector field_ \(h\in{\mathcal{B}}\) _such that_
* \(|h|_{\vec{v}_{1},p}\leq 2|f|_{\vec{v},p}\)_,_ _the map_ \(\Psi:=\mathds{1}+h\) _is such that_ \[\Psi:\mathds{T}^{d}_{s-\rho s_{0}}\times D_{a,p}(r-\rho r_{0})\times{\mathcal{ O}}\to\mathds{T}^{d}_{s}\times D_{a,p}(r).\] (B.23)
* _for all_ \((\theta,y,w)\in\mathds{T}^{d}_{s-2\rho s_{0}}\times D_{a,{\mathfrak{p}}_{1}}(r -2\rho r_{0})\) _one has_ \[\Psi\circ\Phi(\theta,y,w)=(\theta,y,w).\] (B.24)
Proof.: _(i)_ We want to bound the components of \(\Phi=\mathds{1}+f\). First of all we see that for \(\theta\in\mathds{T}^{d}_{s}\) one has
\[|\Phi^{(\theta)}|_{\infty}\leq s+|f^{(\theta)}|_{\infty}\leq s+\|f^{(\theta)} \cdot\partial_{\theta}\|_{s,a,{\mathfrak{p}}_{0}}\stackrel{{(\ref{ pippo})}}{{\leq}}s+\rho s_{0}\,,\] (B.25)
where we used the standard Sobolev embedding theorem. The bound \(\|\Phi^{(w)}\|_{a,{\mathfrak{p}}_{0}}\leq r+\rho r_{0}\) follows directly from hypothesis (B.21). In order to obtain the estimates on the \(y\)-components we need to check that
\[|f^{(y,0)}(\theta)|_{1}\leq{\mathtt{c}}^{-1}\|f^{(y,0)} (\theta)\cdot\partial_{y}\|_{s,a,{\mathfrak{p}}_{0}},\qquad|f^{(y ,y)}(\theta)y|_{1}\leq{\mathtt{c}}^{-1}\|f^{(y,y)}(\theta)y\cdot\partial_{y}\| _{s,a,{\mathfrak{p}}_{0}},\] (B.26)
\[|f^{(y,w)}(\theta)w|_{1}\leq{\mathtt{c}}^{-1}\|f^{(y,w)}(\theta)w \cdot\partial_{y}\|_{s,a,{\mathfrak{p}}_{0}}\,.\]
Since for a \(d\)-dimensional vector \(\mathtt{v}\) one has \(|\mathtt{v}|_{1}\leq d|\mathtt{v}|_{\infty}\) we get
\[|f^{(y,w)}(\theta)\cdot w|_{1}\leq d_{1}\max_{v=y_{1},\ldots,y_{d_{1}}}\|f^{(v ,w)}(\theta)\cdot w\|_{\infty}\leq K(n,{\mathfrak{p}}_{0})\|f^{(y,w)}(\theta) \cdot w\|_{{s,{\mathfrak{p}}_{0}}}\,.\] (B.27)
The other bounds in (B.26) follow in the same way. The extension of the bounds for the Lipschitz norm is standard; see for instance [41]. Thus we obtain \(|\Phi^{(y)}|_{1}\leq(r+\rho r_{0})^{2}\) so that (B.22) follows.
_(ii)_ The first \(d\) components of the map \((\theta_{+},y_{+},w_{+})=\Phi(\theta,y,w)\) are \(\theta_{+}=\theta+f^{(\theta)}(\theta)\). If \(\rho\) is small enough we can apply Lemma A.3 in order to define an inverse map \(h^{(\theta)}(\theta_{+})\in W^{p,\infty}(\mathds{T}^{d}_{s-\rho s_{0}})\) with \(\|h^{(\theta)}\|_{s-\rho s_{0},p}\leq 2\|f^{(\theta)}\|_{s,p}\). Hence we set
\[\Psi^{(\theta)}(\theta_{+}):=\theta_{+}+{h^{(\theta)}}(\theta_{+})\,,\qquad \theta_{+}\in\mathds{T}^{d}_{s-\rho s_{0}}.\] (B.28)
Regarding the other components we first solve \(y,w\) as functions of \(y_{+},w_{+},\theta\) and then substitute (B.28). We have
\[w =w_{+}-f^{(w,0)}(\Psi^{(\theta)}(\theta_{+}))\]
\[y =(\mathds{1}-f^{(y,y)}(\Psi^{(\theta)}(\theta_{+})))^{-1}(y_{+}-f ^{(y,0)}(\Psi^{(\theta)}(\theta_{+}))-f^{(y,w)}(\Psi^{(\theta)}(\theta_{+})) \cdot(w_{+}-f^{(w,0)}(\Psi^{(\theta)}(\theta_{+}))))\]
which fixes the remaining components of \(h\). The estimates on the norm of \(h\) follow by Lemma A.3\((ii)\) and by Lemma A.5. ∎
**Lemma B.9**.: _Given any regular bounded vector field \(g\in\mathcal{B}\) and \(p\geq{\mathfrak{p}}_{1}\) with \(|g|_{\vec{v},{\mathfrak{p}}_{1}}\leq{\mathtt{c}}\rho\), for \(0\leq t\leq 1\) there exists \(f_{t}\in\mathcal{B}\) such that the time\(-t\) map of the flow of \(g\) is of the form \(\mathds{1}+f_{t}\); moreover we have \(|f_{t}|_{\vec{v},p}\leq 2|g|_{\vec{v}_{1},p}\) where \(\vec{v}_{1}=(\lambda,{\mathcal{O}},s-\rho s_{0},a,r)\)._
Proof.: The dynamical system associated with \(g\) is
\[\dot{\theta}=g^{(\theta,0)}(\theta),\] (B.29a)
\[\dot{y}=g^{(y,0)}(\theta)+g^{(y,y)}(\theta)y+g^{(y,w)}(\theta) \cdot w,\] (B.29b)
\[\dot{w}=g^{(w,0)}(\theta).\] (B.29c)
We solve first (B.29a), then substitute into (B.29c), and finally substitute both into (B.29b); hence the result follows by proving that the solution of (B.29a), with initial datum \(\varphi\), has the form
\[\theta(t)=\varphi+h(t,\varphi),\]
with \(h\in H^{p}(\mathds{T}^{d}_{s-\rho s_{0}})\) a zero-average function. This latter statement follows by the standard theory of existence, uniqueness and smoothness w.r.t. the initial data. ∎
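For completeness, here is a sketch of the triangular structure exploited above (the fundamental matrix \(M(t)\), solving \(\dot{M}=g^{(y,y)}(\theta(t))M\), \(M(0)=\mathds{1}\), is introduced only for this illustration): once \(\theta(t)\) is known, (B.29c) and (B.29b) are solved explicitly by
\[w(t)=w_{0}+\int_{0}^{t}g^{(w,0)}(\theta(\tau))\,d\tau\,,\qquad y(t)=M(t)\Big{(}y_{0}+\int_{0}^{t}M(\tau)^{-1}\big{(}g^{(y,0)}(\theta(\tau))+g^{(y,w)}(\theta(\tau))\cdot w(\tau)\big{)}\,d\tau\Big{)}\,,\]
so that the time-\(t\) map is indeed of the form \(\mathds{1}+f_{t}\) with \(f_{t}^{(\theta)},f_{t}^{(w)}\) depending only on \(\theta\) and \(f_{t}^{(y)}\) affine in \((y,w)\), i.e. \(f_{t}\in{\mathcal{B}}\).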
## Appendix C Proof of Proposition 3.5
Proof.: Given a tame vector field \(F\in{\mathcal{V}}_{\vec{v},p}\) such that \(F\in{\mathcal{E}}\) for all \(\xi\in{\mathcal{O}}\), let us define
\[{\mathfrak{A}}:={\mathfrak{N}}+{\mathfrak{R}}:=\Pi_{K}\Pi_{\mathcal{X}}([{\Pi_ {\mathcal{N}}F},\cdot])+\Pi_{K}\Pi_{\mathcal{X}}([\Pi_{\mathcal{R}}F,\cdot]).\]
We note that \({\mathfrak{N}},{\mathfrak{R}}:E^{(K)}\cap{\mathcal{B}}_{\mathcal{E}}\to E^{(K) }\cap{\mathcal{X}}\cap{\mathcal{E}}\).
Then the “approximate invertibility” of \({\mathfrak{N}}\) implies the “approximate invertibility” of \({\mathfrak{A}}\). Indeed let \({\mathfrak{W}}:\,E^{(K)}\cap{\mathcal{X}}\cap{\mathcal{E}}\to E^{(K)}\cap{ \mathcal{B}}_{\mathcal{E}}\) be the “approximate right inverse” of \({\mathfrak{N}}\) defined in (3.3) and denote
\[{\mathfrak{U}}:={\mathfrak{R}}{\mathfrak{W}}:E^{(K)}\cap{\mathcal{X}}\cap{ \mathcal{E}}\to E^{(K)}\cap{\mathcal{X}}\cap{\mathcal{E}}.\]
By (3.1) and (3.2) we have that \({\mathfrak{U}}\) is strictly upper triangular so \({\mathfrak{U}}^{\mathtt{b}}=0.\) Now we set \({\mathfrak{B}}={\mathfrak{N}}{\mathfrak{W}}-\mathds{1}\) which is “small” in the sense of (3.4). Then we have
\[({\mathfrak{N}}+{\mathfrak{R}}){\mathfrak{W}}(\mathds{1}+{\mathfrak{U}})^{-1}= (\mathds{1}+{\mathfrak{U}}+{\mathfrak{B}})(\mathds{1}+{\mathfrak{U}})^{-1}= \mathds{1}+{\mathfrak{B}}(\mathds{1}+{\mathfrak{U}})^{-1}\,,\quad(\mathds{1}+{ \mathfrak{U}})^{-1}=\sum_{j=0}^{\mathtt{b}-1}(-1)^{j}{\mathfrak{U}}^{j}\,.\] (C.1)
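Indeed, since \({\mathfrak{U}}^{\mathtt{b}}=0\), the finite Neumann series appearing in (C.1) is an exact inverse of \(\mathds{1}+{\mathfrak{U}}\), as the product telescopes:
\[(\mathds{1}+{\mathfrak{U}})\sum_{j=0}^{\mathtt{b}-1}(-1)^{j}{\mathfrak{U}}^{j}=\sum_{j=0}^{\mathtt{b}-1}(-1)^{j}{\mathfrak{U}}^{j}+\sum_{j=1}^{\mathtt{b}}(-1)^{j-1}{\mathfrak{U}}^{j}=\mathds{1}+(-1)^{\mathtt{b}-1}{\mathfrak{U}}^{\mathtt{b}}=\mathds{1}\,.\]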
Thus \({\mathfrak{W}}(\mathds{1}+{\mathfrak{U}})^{-1}\) is an approximate inverse for \({\mathfrak{A}}\), in the sense that it is an exact inverse when \({\mathfrak{B}}=0\). Then for all \(\xi\in{\mathcal{O}}\) let us set
\[\tilde{g}:={\mathfrak{W}}(\mathds{1}+{\mathfrak{U}})^{-1}\Pi_{K}\Pi_{\mathcal{X}}F.\] (C.2)
As for the bounds we first notice that by (3.3) and (3.6) one has
\[|{\mathfrak{U}}X|_{\vec{v},p}\leq K^{\mu_{1}+\nu+1}\big{[}\Theta_{{\mathfrak{p }}_{1}}|X|_{\vec{v},p}+\big{(}\Theta_{p}(1+\Gamma_{{\mathfrak{p}}_{1}})+\Theta _{{\mathfrak{p}}_{1}}K^{\alpha(p-{\mathfrak{p}}_{1})}\Gamma_{p}\big{)}|X|_{ \vec{v},{\mathfrak{p}}_{1}}\big{]}.\] (C.3)
Now we can prove inductively that
\[|{\mathfrak{U}}^{j}X|_{\vec{v},p}\leq K^{j(\mu+\nu+1)}\Big{[}\Theta_{{ \mathfrak{p}}_{1}}^{j}|X|_{\vec{v},p}+\big{(}\Theta_{p}(1+\Gamma_{{\mathfrak{p }}_{1}})+\Theta_{{\mathfrak{p}}_{1}}K^{\alpha(p-{\mathfrak{p}}_{1})}\Gamma_{p} \big{)}{\mathtt{P}}_{j}(\Theta_{{\mathfrak{p}}_{1}},\Gamma_{{\mathfrak{p}}_{1} })|X|_{\vec{v},{\mathfrak{p}}_{1}}\Big{]},\] (C.4)
where \({\mathtt{P}}_{j}(\Theta_{{\mathfrak{p}}_{1}},\Gamma_{{\mathfrak{p}}_{1}})\) is a polynomial of degree \(2(j-1)\) defined recursively as
\[{\mathtt{P}}_{1}:=1\,,\] (C.5)
\[{\mathtt{P}}_{j}(\Theta_{{\mathfrak{p}}_{1}},\Gamma_{{\mathfrak{p }}_{1}}):=\Theta_{{\mathfrak{p}}_{1}}^{j-1}+2\Theta_{{\mathfrak{p}}_{1}}(1+ \Gamma_{{\mathfrak{p}}_{1}}){\mathtt{P}}_{j-1}(\Theta_{{\mathfrak{p}}_{1}}, \Gamma_{{\mathfrak{p}}_{1}}).\]
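For instance, the first nontrivial polynomial is
\[{\mathtt{P}}_{2}(\Theta_{{\mathfrak{p}}_{1}},\Gamma_{{\mathfrak{p}}_{1}})=\Theta_{{\mathfrak{p}}_{1}}+2\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}})=\Theta_{{\mathfrak{p}}_{1}}\big{(}3+2\Gamma_{{\mathfrak{p}}_{1}}\big{)}\,,\]
which indeed has degree \(2=2(j-1)\) and satisfies the bound (C.7) below.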
Indeed for \(j=1\) this is exactly the bound (C.3); then assuming (C.4) to hold up to \(j\) we have
\[|{\mathfrak{U}}^{j+1}X|_{\vec{v},p} =|{\mathfrak{U}}({\mathfrak{U}}^{j}X)|_{\vec{v},p}\leq K^{\mu+\nu +1}\big{[}\Theta_{{\mathfrak{p}}_{1}}|{\mathfrak{U}}^{j}X|_{\vec{v},p}+\big{(} \Theta_{p}(1+\Gamma_{{\mathfrak{p}}_{1}})+\Theta_{{\mathfrak{p}}_{1}}K^{\alpha (p-{\mathfrak{p}}_{1})}\Gamma_{p}\big{)}|{\mathfrak{U}}^{j}X|_{\vec{v},{ \mathfrak{p}}_{1}}\big{]}\] (C.6)
\[\!\!\!\!\!\leq K^{\mu+\nu+1}\Big{(}\Theta_{{\mathfrak{p}}_{1}}K^{ j(\mu+\nu+1)}\Big{[}\Theta_{{\mathfrak{p}}_{1}}^{j}|X|_{\vec{v},p}+\big{(} \Theta_{p}(1+\Gamma_{{\mathfrak{p}}_{1}})+\Theta_{{\mathfrak{p}}_{1}}K^{\alpha (p-{\mathfrak{p}}_{1})}\Gamma_{p}\big{)}{\mathtt{P}}_{j}(\Theta_{{\mathfrak{p} }_{1}},\Gamma_{{\mathfrak{p}}_{1}})|X|_{\vec{v},{\mathfrak{p}}_{1}}\Big{]}\]
\[+\big{(}\Theta_{p}(1+\Gamma_{{\mathfrak{p}}_{1}})+\Theta_{{ \mathfrak{p}}_{1}}K^{\alpha(p-{\mathfrak{p}}_{1})}\Gamma_{p}\big{)}K^{j(\mu+ \nu+1)}\Big{[}\Theta_{{\mathfrak{p}}_{1}}^{j}+\Theta_{{\mathfrak{p}}_{1}}(1+2 \Gamma_{{\mathfrak{p}}_{1}}){\mathtt{P}}_{j}(\Theta_{{\mathfrak{p}}_{1}}, \Gamma_{{\mathfrak{p}}_{1}})\Big{]}|X|_{\vec{v},{\mathfrak{p}}_{1}}\Big{)}\]
\[\!\!\!\!\!=K^{(j+1)(\mu+\nu+1)}\Big{(}\Theta_{{\mathfrak{p}}_{1}}^{j+1}|X|_{\vec{v},p}\]
\[+\big{(}\Theta_{p}(1+\Gamma_{{\mathfrak{p}}_{1}})+\Theta_{{ \mathfrak{p}}_{1}}K^{\alpha(p-{\mathfrak{p}}_{1})}\Gamma_{p}\big{)}\big{(} \Theta_{{\mathfrak{p}}_{1}}^{j}+2\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{ \mathfrak{p}}_{1}}){\mathtt{P}}_{j}(\Theta_{{\mathfrak{p}}_{1}},\Gamma_{{ \mathfrak{p}}_{1}})\big{)}|X|_{\vec{v},{\mathfrak{p}}_{1}}\Big{)}\]
which is (C.4) for \(j+1\) taking into account (C.5). Moreover, again by induction, the polynomials \({\mathtt{P}}_{j}\) satisfy the bound
\[|{\mathtt{P}}_{j}|\leq 3^{j}\Theta_{{\mathfrak{p}}_{1}}^{j-1}(1+\Gamma_{{ \mathfrak{p}}_{1}})^{j-1}\,,\] (C.7)
uniformly in \(\Theta_{{\mathfrak{p}}_{1}},\Gamma_{{\mathfrak{p}}_{1}}\). Indeed for \(j=1\) this is trivial while assuming (C.7) up to \(j\) we have
\[|{\mathtt{P}}_{j+1}| \stackrel{{\eqref{recu}}}{{\leq}}\Theta_{{\mathfrak{p }}_{1}}^{j}+2\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}})|{ \mathtt{P}}_{j}|\] (C.8)
\[\leq\Theta_{{\mathfrak{p}}_{1}}^{j}+2\Theta_{{\mathfrak{p}}_{1}}( 1+\Gamma_{{\mathfrak{p}}_{1}})3^{j}\Theta_{{\mathfrak{p}}_{1}}^{j-1}(1+\Gamma_ {{\mathfrak{p}}_{1}})^{j-1}=\Theta_{{\mathfrak{p}}_{1}}^{j}(1+2\cdot 3^{j}(1+ \Gamma_{{\mathfrak{p}}_{1}})^{j})\]
\[\leq(1+2\cdot 3^{j})\Theta_{{\mathfrak{p}}_{1}}^{j}(1+\Gamma_{{ \mathfrak{p}}_{1}})^{j}\leq 3^{j+1}\Theta_{{\mathfrak{p}}_{1}}^{j}(1+\Gamma_{{ \mathfrak{p}}_{1}})^{j}\,,\]
where in the last inequality we used the fact that \(1+2C^{j}\leq C^{j+1}\) for \(C\geq 3\). Summarizing we obtained
\[|{\mathfrak{U}}^{j}X|_{\vec{v},p}\leq K^{j(\mu_{1}+\nu+1)}\Theta^{j-1}_{{ \mathfrak{p}}_{1}}\Big{[}\Theta_{{\mathfrak{p}}_{1}}|X|_{\vec{v},p}+\big{(} \Theta_{p}(1+\Gamma_{{\mathfrak{p}}_{1}})+\Theta_{{\mathfrak{p}}_{1}}K^{\alpha (p-{\mathfrak{p}}_{1})}\Gamma_{p}\big{)}3^{j}(1+\Gamma_{{\mathfrak{p}}_{1}})^{ j-1}|X|_{\vec{v},{\mathfrak{p}}_{1}}\Big{]},\] (C.9)
so that (the second summand is zero for \(j=0\))
\[|{\mathfrak{W}}{\mathfrak{U}}^{j}X|_{\vec{v},p}\leq 4^{j}K^{j(\mu_{1}+\nu+1)+ \mu_{1}}\Big{[}\Theta^{j}_{{\mathfrak{p}}_{1}}|X|_{\vec{v},p}+\Theta_{p}\Theta ^{j-1}_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}})^{j}|X|_{\vec{v},{ \mathfrak{p}}_{1}}+\Theta^{j}_{{\mathfrak{p}}_{1}}K^{\alpha(p-{\mathfrak{p}}_{ 1})}(1+\Gamma_{{\mathfrak{p}}_{1}})^{j}\Gamma_{p}|X|_{\vec{v},{\mathfrak{p}}_{ 1}}\Big{]}\] (C.10)
and finally
\[|{\mathfrak{W}}(1+{\mathfrak{U}})^{-1}X|_{\vec{v},p} \leq K^{\mu_{1}}(1+4K^{\mu_{1}+\nu+1}\Theta_{{\mathfrak{p}}_{1}}) ^{b}|X|_{\vec{v},p}\] (C.11)
\[\qquad+K^{\mu_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}})(1+4K^{\mu_{1}+ \nu+1}\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}}))^{b-1}\Theta_ {p}|X|_{\vec{v},{\mathfrak{p}}_{1}}\]
\[\qquad+K^{\mu_{1}+\alpha(p-{\mathfrak{p}}_{1})}(1+4K^{\mu_{1}+\nu +1}\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}}))^{b}\Gamma_{p}|X |_{\vec{v},{\mathfrak{p}}_{1}}\]
where again the second summand is in fact zero if \(b=0\). Therefore, since \(\Theta_{p}\leq\Gamma_{p}\), we obtain
\[|{\mathfrak{W}}(1+{\mathfrak{U}})^{-1}X|_{\vec{v},p}\leq K^{(b+1)(\mu_{1}+\nu+ 1)}(1+\Theta_{{\mathfrak{p}}_{1}}(1+\Gamma_{{\mathfrak{p}}_{1}}))^{b}(1+\Gamma _{{\mathfrak{p}}_{1}})\Big{[}|X|_{\vec{v},p}+K^{\alpha(p-{\mathfrak{p}}_{1})} \Gamma_{p}|X|_{\vec{v},{\mathfrak{p}}_{1}}\Big{]}\] (C.12)
This concludes the proof of (2.46). The proof of (2.47) follows the same lines. Now we have defined a function \(\tilde{g}\) on the set \({\mathcal{O}}\). In order to conclude the proof we need to extend this function to the whole \({\mathcal{O}}_{0}\). We know that regular vector fields in \(E^{(K)}\) have a structure of Hilbert space w.r.t. the norm \(|\cdot|_{s,a,{\mathfrak{p}}_{1}}\), so we may apply the Kirszbraun Theorem in order to extend \(\tilde{g}\) to a regular vector field in \(E^{(K)}\) with the same Lipschitz norm, i.e. \(|g|_{s,a,{\mathfrak{p}}_{1}}^{\rm lip}\leq\gamma^{-1}|\tilde{g}|_{\vec{v},p}\). As for the sup norm one clearly has
\[\sup_{\xi\in{\mathcal{O}}_{0}}|g|_{s,a,{\mathfrak{p}}_{1}}\leq\sup_{\xi\in{ \mathcal{O}}_{0}}|\tilde{g}|_{s,a,{\mathfrak{p}}_{1}}+\rm{diam}({\mathcal{O}}_ {0})|g|_{s,a,{\mathfrak{p}}_{1}}^{\rm lip}.\]
∎
## Appendix D Time analytic case
Theorem 2.25 does not make any assumptions on the analyticity parameters \(a_{0},s_{0}\) and relies on tame estimates in order to control the _high_ Sobolev norm \({\mathfrak{p}}_{2}\geq{\mathfrak{p}}_{1}+\kappa_{0}+\chi\kappa_{2}\). However, if one makes the Ansatz that \(s_{0}>0\), then we may take \({\mathfrak{p}}_{2}={\mathfrak{p}}_{1}\) and consequently obtain a simplified scheme, since we do not have to control the tameness constants in the high norm but only the norm \(|\cdot|_{\vec{v},{\mathfrak{p}}_{1}}\). In order to do so we need to modify Definition 2.18 by substituting item _3._ with the following:
* For \(K>1\) there exists smoothing projection operators \(\Pi_{K}:{\mathcal{A}}_{\vec{v},p}\to{\mathcal{A}}_{\vec{v},p}\) such that \(\Pi_{K}^{2}=\Pi_{K}\) and setting \(\vec{v}_{1}=(\gamma,{\mathcal{O}},s+s_{1},a,r)\), for \(p_{1}\geq 0\), one has \[\qquad|\Pi_{K}F|_{\vec{v}_{1},p+p_{1}}\leq\mathtt{C}K^{{p_{1}}}e^ {s_{1}K}|F|_{\vec{v},p}\] (D.13) \[\qquad|F-\Pi_{K}F|_{\vec{v},p}\leq\mathtt{C}K^{{-p_{1}}}e^{-s_{1} K}|F|_{\vec{v}_{1},p+p_{1}}\] (D.14) finally if \(C_{\vec{v},p}(F)\) is any tameness constant for \(F\) then we may choose a tameness constant such that \[\qquad C_{\vec{v},p+p_{1}}(\Pi_{K}F)\leq\mathtt{C}K^{{p_{1}}}e^{s_{1}K}C_{\vec {v}_{1},p}(F)\qquad\] (D.15)
We denote by \(E^{(K)}\) the subspace where \(\Pi_{K}E^{(K)}=E^{(K)}\).
**Constraint D.1** (**The exponents: analytic case)**.: _We fix parameters \(\varepsilon_{0},{\mathtt{R}}_{0},{\mathtt{G}}_{0},\mu,\nu,\eta,\chi,\kappa_{2}\) such that the following holds._
* \(0<\varepsilon_{0}\leq{\mathtt{R}}_{0}\leq{\mathtt{G}}_{0}\) _with_ \(\varepsilon_{0}{\mathtt{G}}_{0}^{3},\varepsilon_{0}{\mathtt{G}}_{0}^{2}{ \mathtt{R}}_{0}^{-1}<1\)_._
* _We have_ \(\mu,\nu\geq 0\)_,_ \(1<\chi<2\)_, finally setting_ \(\kappa_{0}:=\mu+\nu+4\)__ \[\kappa_{2}>\frac{2\kappa_{0}}{2-\chi}\,,\qquad\eta>\mu+(\chi-1)\kappa_{2}+1\,,\] (D.16)
* _there exists_ \(K_{0}>1\) _such that_ \[\log K_{0}\geq\frac{1}{\log\chi}C,\] (D.17) _with_ \(C\) _a given function of_ \(\mu,\nu,\eta,\kappa_{2},s_{0}\) _(which goes to_ \(\infty\) _as_ \(s_{0}\to 0\)_) and moreover_ \[{\mathtt{G}}_{0}^{2}{\mathtt{R}}_{0}^{-1}\varepsilon_{0}K_{0}^{ \kappa_{0}}\max(1,{\mathtt{R}}_{0}{\mathtt{G}}_{0}K_{0}^{\kappa_{0}+(\chi-1) \kappa_{2}})<1\,,\] (D.18a) \[K_{0}^{\kappa_{0}+(\chi-1)\kappa_{2}}e^{-\frac{s_{0}K_{0}}{32}}{ \mathtt{G}}_{0}\varepsilon_{0}^{-1}\max\Big{(}1,{\mathtt{R}}_{0}\Big{)}\leq 1\,,\] (D.18b)
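For orientation, one admissible choice of the exponents in (D.16) (this particular choice is only an illustration) is \(\mu=\nu=0\), so that \(\kappa_{0}=4\), together with \[\chi=\frac{3}{2}\,,\qquad\kappa_{2}=17>\frac{2\kappa_{0}}{2-\chi}=16\,,\qquad\eta=10>\mu+(\chi-1)\kappa_{2}+1=\frac{19}{2}\,;\] the remaining parameters \(\varepsilon_{0},{\mathtt{R}}_{0},{\mathtt{G}}_{0},K_{0}\) are then constrained by (D.17) and (D.18).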
Now in order to state our result we define the good parameters and the changes of variables as in the general case but with \({\mathfrak{p}}_{2}={\mathfrak{p}}_{1}\), \(\kappa_{3}=\kappa_{1}=\alpha=0\). For clarity we restate our definition in this simpler case.
**Definition D.2** (**Homological equation)**.: _Let \(\gamma>0\), \(K\geq K_{0}\), consider a compact set \({\mathcal{O}}\subset{\mathcal{O}}_{0}\) and set \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\) and \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\). Consider a vector field \(F\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) i.e._
\[F=N_{0}+G:{\mathcal{O}}_{0}\times D_{a,p+\nu}(r)\times\mathds{T}^{d}_{s}\to V_ {a,p}\,,\]
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{1}+2\). We say that \(\mathcal{O}\) satisfies the homological equation for \((F,K,\vec{v}^{\text{\tiny 0}},\rho)\) if the following holds._
_1. For all_ \(\xi\in\mathcal{O}\) _one has_ \(F(\xi)\in{\mathcal{E}}\)_._
_2. There exists a bounded regular vector field_ \(g\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\cap E^{(K)}\) _such that_
* \(g\in{\mathcal{B}}_{\mathcal{E}}\) _for all_ \(\xi\in{\mathcal{O}}\)_,_
* _one has_ \(|g|_{\vec{v}^{\text{\tiny 0}},{\mathfrak{p}}_{1}}\leq\mathtt{C}|g|_{\vec{v},{ \mathfrak{p}}_{1}}\leq\mathtt{c}\rho\) _and_ \[|g|_{\vec{v},{\mathfrak{p}}_{1}}\leq\gamma^{-1}K^{\mu}|\Pi_{K}\Pi_{{\mathcal{X }}}G|_{\vec{v},{\mathfrak{p}}_{1}}(1+\gamma^{-1}C_{{\vec{v},p}}(G))\,,\] (D.19)
* _setting_ \(u:=\Pi_{K}\Pi_{{\mathcal{X}}}({\rm ad}(\Pi_{{\mathcal{X}}}^{\perp}F)[g]-F),\) _one has_ \[|u|_{\vec{v},{\mathfrak{p}}_{1}}\leq\varepsilon_{0}\gamma^{-1}K^{-\eta+\mu}C_{\vec{v},{\mathfrak{p}}_{1}}(G)|\Pi_{K}\Pi_{{\mathcal{X}}}G|_{\vec{v},{\mathfrak{p}}_{1}},\] (D.20)
**Remark D.3**.: _Note that if we take \({\mathfrak{p}}_{2}={\mathfrak{p}}_{1}\) then the second inequality in (2.46) as well as item 2(d) of Definition 2.23 follow from (2.30) and (2.31)._
**Definition D.4** (**Compatible changes of variables: analytic case)**.: _Let the parameters in Constraint D.1 be fixed. Fix also \(\vec{v}=(\gamma,{\mathcal{O}},s,a,r)\), \(\vec{v}^{\text{\tiny 0}}=(\gamma,{\mathcal{O}}_{0},s,a,r)\) with \({\mathcal{O}}\subseteq{\mathcal{O}}_{0}\) a compact set, parameters \(K\geq K_{0},\rho<1\). Consider a vector field \(F=N_{0}+G\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}},p}\) which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{1}+2\) and such that \(F\in{\mathcal{E}}\quad\forall\xi\in{\mathcal{O}}\). We say that a left invertible \({\mathcal{E}}\)-preserving change of variables_
\[{\mathcal{L}},{\mathcal{L}}^{-1}:\mathds{T}^{d}_{s}\times D_{a,{\mathfrak{p}}_ {1}}(r)\times{\mathcal{O}}_{0}\to\mathds{T}^{d}_{s+\rho s_{0}}\times D_{a-\rho a _{0},{\mathfrak{p}}_{1}}(r+\rho r_{0})\]
_is compatible with \((F,K,\vec{v},\rho)\) if the following holds:_
* \({\mathcal{L}}\) _is “close to identity”, i.e. denoting_ \(\vec{v}^{\text{\tiny 0}}_{1}:=(\gamma,{\mathcal{O}}_{0},s-\rho s_{0},a-\rho a_ {0},r-\rho r_{0})\) _one has_ \[\|({\mathcal{L}}-\mathds{1})h\|_{\vec{v}^{\text{\tiny 0}}_{1},{ \mathfrak{p}}_{1}}\leq\mathtt{C}\varepsilon_{0}K^{-1}\|h\|_{\vec{v}^{\text{ \tiny 0}},{\mathfrak{p}}_{1}}\,.\] (D.21)
* \({\mathcal{L}}_{*}\) _conjugates the_ \(C^{\mathtt{n}+2}\)_-tame vector field_ \(F\) _to the vector field_ \(\hat{F}:={\mathcal{L}}_{*}F=N_{0}+\hat{G}\) _which is_ \(C^{\mathtt{n}+2}\)_-tame; moreover denoting_ \(\vec{v}_{2}:=(\gamma,{\mathcal{O}},s-2\rho s_{0},a-2\rho a_{0},r-2\rho r_{0})\) _one may choose the tameness constants of_ \(\hat{G}\) _so that_ \[C_{\vec{v}_{2},{\mathfrak{p}}_{1}}(\hat{G})\leq C_{\vec{v},{\mathfrak{p}}_{1}} (G)(1+\varepsilon_{0}K^{-1})\,,\] (D.22)
* \({\mathcal{L}}_{*}\) _“preserves the_ \(({\mathcal{N}},{\mathcal{X}},{\mathcal{R}})\)_-decomposition”, namely one has_ \[\Pi_{\mathcal{N}}^{\perp}({\mathcal{L}}_{*}\Pi_{\mathcal{N}}F)=0\,,\quad\qquad \Pi_{{\mathcal{X}}}({\mathcal{L}}_{*}\Pi_{{\mathcal{X}}}^{\perp}F)=0\,.\] (D.23)
Then the result is the following.
**Theorem D.5** (**Abstract KAM: analytic case)**.: _Let \(N_{0}\) be a diagonal vector field as in Definition 2.20 and consider a vector field_
\[F_{0}:=N_{0}+G_{0}\in{\mathcal{E}}\cap{\mathcal{W}}_{\vec{v}_{0},p}\] (D.24)
_which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{1}+2\). Fix parameters \(\varepsilon_{0},{\mathtt{R}}_{0},{\mathtt{G}}_{0},\mu,\nu,\eta,\chi,\kappa_{2}\) satisfying Constraint D.1 and assume that_
\[\gamma_{0}^{-1}C_{\vec{v}_{0},{\mathfrak{p}}_{1}}(G_{0})\leq{\mathtt{G}}_{0}\, ,\quad\gamma_{0}^{-1}C_{\vec{v}_{0},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}^{ \perp}G_{0})\leq{\mathtt{R}}_{0}\,,\quad\gamma_{0}^{-1}|\Pi_{{\mathcal{X}}}G_{ 0}|_{\vec{v}_{0},{\mathfrak{p}}_{1}}\leq\varepsilon_{0}\,.\] (D.25)
_For all \(n\geq 0\) we define recursively changes of variables \({\mathcal{L}}_{n},\Phi_{n}\) and compact sets \({\mathcal{O}}_{n}\) as follows._
_Set \({\mathcal{H}}_{-1}={\mathcal{H}}_{0}=\Phi_{0}={\mathcal{L}}_{0}=\mathds{1}\), and for \(0\leq j\leq n-1\) set recursively \({\mathcal{H}}_{j}=\Phi_{j}\circ{\mathcal{L}}_{j}\circ{\mathcal{H}}_{j-1}\) and \(F_{j}:=({\mathcal{H}}_{j})_{*}F_{0}:=N_{0}+G_{j}\). Let \({\mathcal{L}}_{n}\) be any change of variables compatible with \((F_{n-1},K_{n-1},\vec{v}_{n-1},\rho_{n-1})\) following Definition D.4, and \({\mathcal{O}}_{n}\) be any compact set_
\[{\mathcal{O}}_{n}\subseteq{\mathcal{O}}_{n-1}\,,\] (D.26)
_which satisfies the Homological equation for \((({\mathcal{L}}_{n})_{*}F_{n-1},K_{n-1},\vec{v}^{\text{\tiny 0}}_{n-1},\rho_{n -1})\). For \(n>0\) let \(g_{n}\) be the regular vector field defined in item (2) of Definition D.2 and set \(\Phi_{n}\) the time-1 flow map generated by \(g_{n}\)._
_Then \(\Phi_{n}\) is left invertible and \(F_{n}:=(\Phi_{n}\circ{\mathcal{L}}_{n})_{*}F_{n-1}\in{\mathcal{W}}_{\vec{v}^{ \text{\tiny 0}}_{n},p}\) is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{1}+2\). Moreover the following holds._
* _Setting_ \(G_{n}=F_{n}-N_{0}\) _then_ \[\Gamma_{n,{\mathfrak{p}}_{1}} :=\gamma_{n}^{-1}C_{\vec{v}_{n},{\mathfrak{p}}_{1}}(G_{n})\leq{ \mathtt{G}}_{n}, \Theta_{n,{\mathfrak{p}}_{1}}:=\gamma_{n}^{-1}C_{\vec{v}_{n},{ \mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}^{\perp}G_{n})\leq{\mathtt{R}}_{n},\] (D.27) \[\delta_{n} :=\gamma_{n}^{-1}|\Pi_{{\mathcal{X}}}G_{n}|_{\vec{v}_{n},{ \mathfrak{p}}_{1}}\leq K_{0}^{\kappa_{2}}\varepsilon_{0}K_{n}^{-\kappa_{2}}, |g_{n}|_{\vec{u}_{n},{\mathfrak{p}}_{1}}\leq K_{0}^{\kappa_{2}} \varepsilon_{0}{\mathtt{G}}_{0}K_{n-1}^{-\kappa_{2}+\mu+1},\] _where_ \(\vec{u}_{n}=(\gamma_{n},{\mathcal{O}}_{n},s_{n}+12\rho_{n}s_{0},a_{n}+12\rho_{ n}a_{0},r_{n}+12\rho_{n}r_{0})\)_._
* _The sequence_ \({\mathcal{H}}_{n}\) _converges for all_ \(\xi\in{\mathcal{O}}_{0}\) _to some change of variables_ \[{{\mathcal{H}}}_{\infty}={{\mathcal{H}}}_{\infty}(\xi):D_{{a_{0}},p}({s_{0}}/{ 2},{r_{0}}/{2})\longrightarrow D_{\frac{a_{0}}{2},p}({s_{0}},{r_{0}}).\] (D.28)
* _Defining_ \(F_{\infty}:=({\mathcal{H}}_{\infty})_{*}F_{0}\) _one has_ \[\Pi_{\mathcal{X}}F_{\infty}=0\quad\forall\xi\in{\mathcal{O}}_{\infty}:=\bigcap _{n\geq 0}{\mathcal{O}}_{n}\] (D.29) _and_ \[\gamma_{0}^{-1}C_{\vec{v}_{\infty},{\mathfrak{p}}_{1}}(\Pi_{{\mathcal{N}}}F_{ \infty}-N_{0})\leq 2{\mathtt{G}}_{0},\quad\gamma_{0}^{-1}C_{\vec{v}_{\infty},{ \mathfrak{p}}_{1}}(\Pi_{{\mathcal{R}}}F_{\infty})\leq 2{\mathtt{R}}_{0}\] _with_ \(\vec{v}_{\infty}:=(\gamma_{0}/2,{\mathcal{O}}_{\infty},s_{0}/2,a_{0}/2)\)_._
Proof.: The proof of Theorem D.5 is essentially identical to the one of Theorem 2.25. We give a sketch for completeness. The induction basis is trivial with \(g_{0}=0\). Assuming (D.27) up to \(n\) we prove the inductive step using the “KAM step” of Proposition 5.1 with \({\mathfrak{p}}_{1}={\mathfrak{p}}_{2}\) and \(\alpha=\kappa_{3}=0\) and the bound (5.13) substituted by
\[\delta_{+} \leq{\mathtt{C}}{\Gamma_{{\mathfrak{p}}_{1}}}\big{(}\delta^{2} \Gamma^{2}_{{\mathfrak{p}}_{1}}K^{2\mu+2\nu+4}+\delta\varepsilon_{0}K^{\mu- \eta}\big{)}+e^{-2\rho s_{0}K}K^{\mu+\nu+1-({\mathfrak{p}}_{2}-{\mathfrak{p}}_ {1})}\big{(}\Theta_{{\mathfrak{p}}_{2}}+\varepsilon_{0}K^{\kappa_{3}}\Theta_{{ \mathfrak{p}}_{1}}\big{)}\] (D.30)
\[\qquad+{\Gamma_{{\mathfrak{p}}_{1}}}K^{\mu+\nu+1-({\mathfrak{p}}_ {2}-{\mathfrak{p}}_{1})}e^{-2\rho s_{0}K}\big{(}\Theta_{{\mathfrak{p}}_{2}}+ \varepsilon_{0}K^{\kappa_{3}}\Theta_{{\mathfrak{p}}_{1}}+K^{\alpha({\mathfrak{ p}}_{2}-{\mathfrak{p}}_{1})}\delta(\Gamma_{{\mathfrak{p}}_{2}}+\varepsilon_{0} K^{\kappa_{3}}{\Gamma_{{\mathfrak{p}}_{1}}})\big{)}\,.\]
Bound (D.30) follows by using the smoothing properties (D.13), (D.14) of item \((3^{\prime})\) in equation (5.39).
First of all we note that
\[\rho_{n}^{-1}K_{n}^{\mu+\nu+3}\Gamma_{n,{\mathfrak{p}}_{1}}\delta_{n}\leq \mathtt{c},\] (D.31)
which, by the inductive hypothesis and (2.52) reads
\[2^{n+9}K_{0}^{(\mu+\nu+3-\kappa_{2})\chi^{n}}\mathtt{G}_{0}\varepsilon_{0}K_{0 }^{\kappa_{2}}\leq\mathtt{c};\] (D.32)
This is true since, by (D.16) and (D.17), the left-hand side of (D.32) is decreasing in \(n\), so that (D.31) follows from
\[K_{0}^{\mu+\nu+4}\mathtt{G}_{0}\varepsilon_{0}<1\]
which is indeed implied by (D.18a) because \({\mathtt{G}}_{0}\geq{\mathtt{R}}_{0}\).
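Indeed, spelling this step out: since \(\kappa_{0}=\mu+\nu+4\) and \({\mathtt{R}}_{0}\leq{\mathtt{G}}_{0}\), \[K_{0}^{\mu+\nu+4}{\mathtt{G}}_{0}\varepsilon_{0}=K_{0}^{\kappa_{0}}{\mathtt{G}}_{0}\varepsilon_{0}\leq K_{0}^{\kappa_{0}}{\mathtt{G}}_{0}^{2}{\mathtt{R}}_{0}^{-1}\varepsilon_{0}\leq{\mathtt{G}}_{0}^{2}{\mathtt{R}}_{0}^{-1}\varepsilon_{0}K_{0}^{\kappa_{0}}\max\big{(}1,{\mathtt{R}}_{0}{\mathtt{G}}_{0}K_{0}^{\kappa_{0}+(\chi-1)\kappa_{2}}\big{)}<1\,,\] where the last inequality is exactly (D.18a).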
Hence we can apply the “KAM step” to \(F_{n}:=(\Phi_{n}\circ{\mathcal{L}}_{n})_{*}F_{n-1}\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}}_{n},{\mathfrak{p}}_{2}}\), which is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). We fix \((K_{n},\gamma_{n},a_{n},s_{n},r_{n},\rho_{n},{\mathcal{O}}_{n})\rightsquigarrow(K,\gamma,a,s,r,\rho,{\mathcal{O}})\), \(\Gamma_{n,p}\rightsquigarrow\Gamma_{p}\), \(\Theta_{n,p}\rightsquigarrow\Theta_{p}\), \(\delta_{n}\rightsquigarrow\delta\), \((\gamma_{n+1},a_{n+1},s_{n+1},r_{n+1},\rho_{n+1},{\mathcal{O}}_{n+1})\rightsquigarrow(\gamma_{+},a_{+},s_{+},r_{+},\rho_{+},{\mathcal{O}}_{+})\). The KAM step produces a bounded regular vector field \(g_{n+1}\) and a left invertible change of variables \(\Phi_{n+1}=\mathds{1}+f_{n+1}\) such that \(F_{n+1}:=(\Phi_{n+1}\circ{\mathcal{L}}_{n})_{*}F_{n}\in{\mathcal{W}}_{\vec{v}^{\text{\tiny 0}}_{n+1},{\mathfrak{p}}_{2}}\) is \(C^{\mathtt{n}+2}\)-tame up to order \(q={\mathfrak{p}}_{2}+2\). We now verify that the bounds (D.27) hold with \(\Gamma_{n+1,{\mathfrak{p}}_{1}}\rightsquigarrow\Gamma_{+,{\mathfrak{p}}_{1}}\), \(\Theta_{n+1,{\mathfrak{p}}_{1}}\rightsquigarrow\Theta_{+,{\mathfrak{p}}_{1}}\), \(\delta_{n+1}\rightsquigarrow\delta_{+}\).
Let us prove **(i)**; the others follow exactly as in Theorem 2.25.
By substituting into (5.8) we immediately obtain the bounds for \(g_{n+1}\) of (D.27).
Now we recall that, by definition
\[\frac{\gamma_{n}}{\gamma_{n+1}}=1+\frac{1}{2^{n+3}-1}.\]
We use (5.11) together with the inductive hypotheses to obtain
which follow by requiring
\[\max(2^{n}K_{n}^{-1}\varepsilon_{0},2^{n}K_{n}^{\mu+\nu+1-\kappa_{2}}K_{0}^{ \kappa_{2}}{\mathtt{G}}_{0}\varepsilon_{0},2^{n}K_{n}^{-\eta-\kappa_{2}+\mu}K_ {0}^{\kappa_{2}}\varepsilon_{0})\leq\mathtt{c}\,,\]
and as before this follows by (D.16) and (D.18a).
Regarding \(\Theta_{n+1,{\mathfrak{p}}_{1}}\), using (5.12) we get
which again follows from (D.16) and (D.18a).
For \(\delta_{n+1}\rightsquigarrow\delta_{+}\), we apply (D.30) with \({\mathfrak{p}}_{2}={\mathfrak{p}}_{1}\), \(\alpha=\kappa_{3}=0\) and get
\[\delta_{n+1}\leq {\mathtt{C}}{\mathtt{G}}_{0}\Big{(}\varepsilon_{0}^{2}K_{0}^{ \kappa_{2}}({\mathtt{G}}_{0}^{2}K_{0}^{\kappa_{2}}K_{n}^{2\mu+2\nu+4-2\kappa_{ 2}}+K_{n}^{\mu-\eta-\kappa_{2}})+\]
\[e^{-2\rho_{n}s_{0}K_{n}}({\mathtt{R}}_{0}K_{n}^{\mu+\nu+1}+ \varepsilon_{0}K_{0}^{\kappa_{2}}{\mathtt{G}}_{0}K_{n}^{\mu+\nu+1-\kappa_{2}}) \Big{)}+e^{-2\rho_{n}s_{0}K_{n}}{\mathtt{R}}_{0}K_{n}\leq\varepsilon_{0}K_{0}^ {\kappa_{2}}K_{n}^{-\chi\kappa_{2}}\]
Now
\[{\mathtt{C}}{\mathtt{G}}_{0}\varepsilon_{0}^{2}K_{0}^{\kappa_{2}}\Big{(}{\mathtt{G}}_{0}^{2}K_{0}^{\kappa_{2}}K_{n}^{2\mu+2\nu+4-2\kappa_{2}}+K_{n}^{\mu-\eta-\kappa_{2}}\Big{)}\leq\frac{1}{2}\varepsilon_{0}K_{0}^{\kappa_{2}}K_{n}^{-\chi\kappa_{2}}\]
by (D.16) and (D.18a). As for the second term, since \(s_{0}>0\) all the summands are decreasing in \(n\) provided that \(K_{0}\) is large enough.
∎
# Simple Pricing Schemes for the Cloud
Ian A. Kash
Microsoft Research, Cambridge, UK
Peter Key
Microsoft Research, Cambridge, UK
Warut Suksompong
Stanford University, Stanford, USA
###### Abstract.
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known in order for the guarantees to hold. In this paper, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per time step for jobs of any length achieves at least \(50\%\) of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
## 1. Introduction
With cloud computing generating billions of dollars per year and forming a significant portion of the revenue of large software companies (Columbus, 2016), the problem of how to price cloud resources and services is of great importance. On the one hand, for a pricing scheme to be used, it is necessary that the scheme provide strong welfare and revenue guarantees. On the other hand, it is also often desirable that the scheme be simple. We combine the two objectives in this paper and show that simple pricing schemes perform almost as well as more complex ones with respect to welfare and revenue guarantees. In particular, consider the pricing scheme for virtual machines on Microsoft Azure shown in Figure 1. Once the user chooses the basic parameters such as region, type, and instance size, the price is calculated by simply multiplying an hourly base price by the number of virtual machines and number of hours desired. The question that we study can be phrased in this setting as follows: How much more welfare or revenue could be created if instead of this simple multiplication formula, a complex table specifying the price for each number of hours were to be used? Our main result is that the former offers at worst a 2-approximation to the latter, in terms of both welfare and revenue. Similarly, we demonstrate that setting a single price for a group of servers, even though the servers may receive different kinds of jobs, can provide a reasonable welfare and revenue approximation compared to setting different prices for different servers.
<figure><img src="content_image/1705.08563/azure.jpg"><figcaption>Figure 1. Pricing scheme for virtual machines on Microsoft Azure (Azure, 2016).</figcaption></figure>
In much of the prior work in this space, which focuses more explicitly on scheduling, prices depend in a complex way on a number of parameters (typically including job length, arrival time, deadline, and value) as well as the current state of the system (Azar et al., 2015; Dehghani et al., 2016; Jain et al., 2011, 2012; Lucier et al., 2013). A weakness of such schemes is that they require these parameters to be known up front in order for the desirable properties of the mechanisms, such as their approximation ratios, to hold. The availability of such information is not always realistic in practice. Even when it is in principle possible to provide this information, there is a cost to participants in both time and resources to figure it out. In this work, we show that good results are possible with no up front information.
For our initial results we assume that there is a single server, which receives jobs of various lengths whose value per time step is drawn from the same probability distribution regardless of length. We compare the welfare and revenue that can be obtained by setting a price per time step that is independent of the job length against the corresponding objective obtained by setting an individual price for each job length. When we are allowed the freedom of setting different prices for different job lengths, intuitively we want to set a higher price per time step for longer jobs as a premium for reserving the server for a longer period of time.¹ However, as we show, we do not lose more than \(50\%\) of the welfare or revenue if we are only allowed to set one price. We would like to emphasize that this is a worst-case bound over a wide range of parameters, including the number of job lengths, the distribution over job lengths, and the distribution over job values. Indeed, as we show, we can obtain improved bounds if we know the value of some of these parameters. The price that we use in the single-price setting can be chosen from one of the prices used in the multi-price setting, meaning that we do not have to calculate a price from scratch. Moreover, all of our approximation guarantees hold generally for arbitrary prices, meaning that for any prices that we may set in a multi-price setting (i.e., not necessarily optimal ones), we can obtain an approximation of the welfare or revenue by setting one of those prices alone. Finally, we emphasize that these results put no restrictions on the form of the distribution; it can be discrete, continuous, or mixed. The only substantive constraint is that jobs of all lengths share the same distribution of value per time step. However, in an extension we show that a version of our results continues to hold even if this constraint is relaxed.
<figure><img src="content_image/1705.08563/amazon.jpg"><figcaption>Figure 2. Pricing scheme for defined duration spot instances on Amazon(Amazon, 2017).</figcaption></figure>
We then generalize our results to a setting where there are multiple servers, each of which receives jobs of various lengths. The distribution over job lengths can be different for different servers. This is conceivable, for instance, if the servers are in various geographic locations or are utilized by various groups of users. We compare the welfare and revenue obtained by a simple pricing scheme that sets the same price for all servers against the corresponding objective achieved by a scheme that can set a different (single) price for each server. Roughly speaking, we show that as long as the parameters are not too extreme, e.g., the number of servers or the job lengths are not too large, then we do not lose too much of the welfare or revenue by setting a single price. Combining this with our initial results, we obtain an approximation of a very restricted pricing scheme where we must set the same price for all servers and all job lengths against one where we can set an individual price for each job length of each server. These results require an assumption that all servers have the same probability of not receiving a job at a time step. Using similar techniques, we also obtain approximation bounds when this assumption does not hold but there is only one job length across all servers.
### Related work
Much recent work has focused on designing online scheduling mechanisms with good welfare guarantees and incentive properties. Jain et al. (2011) exhibited a truthful mechanism for batch jobs on cloud systems where jobs are allocated non-preemptively, and the same group of authors came up with mechanisms for deadline-sensitive jobs in large computing clusters (Jain et al., 2012). Lucier et al. (2013) also considered the problem of scheduling deadline-sensitive jobs; they circumvented known lower bounds by assuming that jobs could be delayed and still finish by their deadline. Zhang et al. (2013) developed a framework for truthful online cloud auctions where users with heterogeneous demands can come and leave on the fly. More recently, Azar et al. (2015) constructed a truthful mechanism that achieves a constant competitive ratio given that slackness is allowed, while Dehghani et al. (2016) assumed a stochastic model and developed a truthful mechanism that approximates the expected maximum welfare up to a constant factor. Wang et al. (2015) designed mechanisms for selling reserved instances where users are allowed to reserve resources of any length and from any time point in the future. Their mechanisms determine the acceptance and payment immediately when the job arrives, and achieve a competitive ratio that is optimal to within a constant factor with regard to welfare.
Other work in this space has dealt with comparing pricing mechanisms such as the on-demand market and the spot market (Abhishek et al., 2012; Dierks and Seuken, 2016; Hoy et al., 2016), achieving fairness in job allocation (Friedman et al., 2014), and studying models of real-time pricing with budget constraints (Friedman et al., 2015). Kash and Key (2016) gave a survey of the current state of research in economics and computer science with respect to cloud pricing.
From a technical perspective, our work bears a resemblance to the work of Dütting et al. (2016) on discriminatory and anonymous posted pricing and of Disser et al. (2016) on hiring secretaries. In particular, Dütting et al. considered the problem of selling a single item to buyers who arrive sequentially with values drawn independently from identical distributions. They showed that by posting discriminatory prices, one can obtain at most \(2-1/n\) times as much revenue as that obtained by posting the same anonymous price, where \(n\) is the number of buyers. As is also the case in our work, their anonymous price can always be chosen from one of the discriminatory prices, but their bound is obtained via a relaxation of the discriminatory pricing problem, a different technique than what we use. Disser et al. provided a competitive online algorithm for a variant of the stochastic secretary problem, where applicants need to be hired over time. Upon the arrival of each applicant, the cost per time step of the applicant is revealed, and we have to decide on the duration of the employment, starting immediately. Once an applicant is accepted, we cannot terminate the contract until the duration of the job is over.
Our work falls into the broader area of the design and analysis of simple mechanisms, particularly posted price mechanisms. One of the motivations for studying simple mechanisms is that in practice, designers are often willing to partially give up optimality in return for simplicity. Mechanisms that simply post prices on goods have received significant attention since they reflect perhaps the most common way of selling goods in the real world, and moreover they leave no room for strategizing, making them easy for agents to participate in. A long line of work has investigated how well such mechanisms can approximate optimal mechanisms with respect to various objectives including welfare (Feldman et al., 2015; Cohen-Addad et al., 2016; Ezra et al., 2017), revenue (Chawla et al., 2010; Babaioff et al., 2011; Blumrosen and Holenstein, 2008), and social costs (Cohen et al., 2015). In Section 3.4 we show that techniques from this literature can recover some of our results under relaxed assumptions.
## 2. Preliminaries
We consider a system with a number of servers and discrete time steps. Each job takes an integer number of time steps to complete and yields a value upon completion. The value _per time step_ of a job is drawn from a distribution which is independent of the length of the job. Let \(F\) be the cumulative distribution function of this distribution and \(f\) the probability density function with respect to a base measure \(\mu\), and define \(\ell(x)=xf(x)\).² We do not make any assumption on our distribution; in particular, it need not be continuous or discrete, which is why we allow flexibility in terms of the base measure.
When a job request is made for a job to be served by a server, there is a price \(p\) per time step which may depend on the job length and/or the server. If the value per time step of the job is at least \(p\), the server accepts and executes the job to completion. Otherwise, the server rejects the job. The objectives in our model are the steady-state welfare and revenue for each pricing scheme. In particular, we will be interested in the expected welfare and revenue per time step, given that the job values are drawn from a probability distribution. This can also be thought of as the average welfare and revenue per time step that result from a pricing scheme over a long period of time.
In Section 3, we assume that there is a single server. Each time step, either zero or one job appears. A job with length \(a_{i}\) appears with probability \(0<r_{i}\leq 1\), where \(\sum_{i=1}^{n}r_{i}\leq 1\) and \(n\) denotes the number of job lengths. We are allowed to set a price \(p_{i}\) for jobs of length \(a_{i}\). If a server accepts a job of length \(a_{i}\), it is busy and cannot accept other jobs for \(a_{i}\) time steps, including the current one. We compare the setting where we are forced to set the same price \(p\) for all job lengths against the setting where we can set a different price \(p_{i}\) for each job length \(a_{i}\). Note that if we could set different prices for different job lengths, then to optimize welfare or revenue, intuitively we would set a higher price per time step for longer jobs as a premium for reserving the server for a longer period. Put differently, once we accept a longer job, we are stuck with it for a longer period, during which we miss the opportunity to accept other jobs. Consequently, we should set a higher standard for accepting longer jobs. (See also Footnote 1.)
In Section 4, we assume that there are multiple servers. Each time step, either zero or one job appears for each server \(1\leq j\leq n\). For server \(j\), a job with length \(a_{ji}\) appears with probability \(0<r_{ji}\leq 1\) for \(1\leq i\leq n_{j}\), where \(n_{j}\) denotes the number of job lengths for server \(j\). We do not assume that the set of job lengths or the number of job lengths is identical across servers. On the other hand, we assume that the probability of no job appearing at a time step is the same for all servers, i.e., \(\sum_{i=1}^{n_{j}}r_{ji}\) is constant for any \(j\). In Subsection 4.1, we assume that we can set one price per server, and we compare the setting where we are forced to set the same price \(p\) for all servers against that where we can set a different price \(p_{j}\) for each server \(j\). In Subsection 4.2, we assume that we can set a different price \(p_{ji}\) for each server \(j\) and each of its job lengths \(a_{ji}\), and we compare that setting against the one where we are forced to set the same price \(p\) for all servers and all job lengths.
## 3. One Server
In this section, we assume that there is a single server, which receives jobs of various lengths. After presenting an introductory example in Subsection 3.1, we consider the general setting with an arbitrary number of job lengths in Subsection 3.2. In this setting, we show a \(50\%\) approximation for both welfare and revenue of setting one price for all job lengths compared to setting an individual price for each job length, for any realization of the parameters. Moreover, we show in Subsection 3.3 that our techniques provide a template for deriving tighter bounds if we have more specific information on the parameters. In particular, when there are two job lengths, we show for each setting of the parameters a tight approximation bound for welfare and revenue. Our approximation results hold for arbitrary (i.e., not necessarily optimal) pricing schemes, and the price we use in the single-price setting can be drawn from one of the prices in the multi-price setting. Finally, in Subsection 3.4 we consider an extension that does not assume independence between the job length and the value per time step.
### Warm-Up: Uniform Distribution
As a warm-up example, assume that at any time step a job with length 1 or 2 appears with probability \(50\%\) each. The value per time step of a job is drawn from the uniform distribution over \([0,1]\). Suppose that we set a price per time step \(p_{1}\) for jobs of length 1 and \(p_{2}\) for jobs of length 2.
Consider an arbitrary time step when the server is free. If the job drawn at that time step has length 1, then with probability \(p_{1}\) it has value below \(p_{1}\) and is rejected. In this case, the server passes one time step without a job. Otherwise, the job has value at least \(p_{1}\) and is accepted. In this case, the expected welfare from executing the job is \(\frac{1+p_{1}}{2}\). Similarly, if the job has length 2, then with probability \(p_{2}\) it is rejected, and with probability \(1-p_{2}\) it is accepted and yields expected welfare \(2\cdot\frac{1+p_{2}}{2}=1+p_{2}\) over two time steps. Letting \(c_{w}\) denote the expected welfare per time step assuming that the server is free at the current time step, we have
\[0=\frac{1}{2}\left(-p_{1}c_{w}+(1-p_{1})\left(\frac{1+p_{1}}{2}-c_{w}\right) \right)+\frac{1}{2}\left(-p_{2}c_{w}+(1-p_{2})\left(1+p_{2}-2c_{w}\right) \right).\]
The two terms on the right hand side correspond to jobs of length 1 and 2, which are drawn with probability \(1/2\) each. In the case that a job of length 2 is drawn, with probability \(p_{2}\) it is rejected and the server is idle for one time step, during which it would otherwise have produced expected welfare \(c_{w}\). With the remaining probability \(1-p_{2}\) the job is accepted, yielding expected welfare \(1+p_{2}\) over two time steps, during which the server would otherwise have produced expected welfare \(2c_{w}\). The derivation for the term corresponding to jobs of length 1 is similar. By equating the expected welfare with the variable denoting this quantity, we arrive at the equation above.
Solving for \(c_{w}\), we get
\[c_{w}(p_{1},p_{2})=\frac{\frac{(1-p_{1})(1+p_{1})}{2}+(1-p_{2})(1+p_{2})}{3-p_ {2}}.\]
To maximize \(c_{w}(p_{1},p_{2})\) over all values of \(p_{1},p_{2}\), we should set \(p_{1}=0\). (Indeed, to maximize welfare we should always accept jobs of length 1 since they do not interfere with future jobs.) Then the value of \(p_{2}\) that maximizes \(c_{w}(p_{1},p_{2})\) is \(p_{2}=3-\sqrt{\frac{15}{2}}\approx 0.261\), yielding \(c_{w}(p_{1},p_{2})=6-\sqrt{30}\approx 0.522\).
On the other hand, if we set the same price \(p=p_{1}=p_{2}\) for jobs with different lengths, our welfare per time step becomes
\[c_{w}(p)=\frac{\frac{(1-p)(1+p)}{2}+(1-p)(1+p)}{3-p}=\frac{3(1-p)(1+p)}{2(3-p)}.\]
This is maximized at \(p=3-2\sqrt{2}\approx 0.172\), yielding \(c_{w}(p)=9-6\sqrt{2}\approx 0.515\). Moreover, if we use either of the prices in the optimal price combination for the two-price setting as the single price, we get \(c_{w}(0)=0.5\) and \(c_{w}\left(3-\sqrt{\frac{15}{2}}\right)\approx 0.510\).
Next, we repeat the same exercise for revenue. We can derive the equations in the same way, with the only difference being that the revenue from accepting a job at price \(p\) is simply \(p\). Letting \(c_{r}\) denote the revenue per time step, we have
\[0=\frac{1}{2}\left(-p_{1}c_{r}+(1-p_{1})\left(p_{1}-c_{r}\right)\right)+\frac{ 1}{2}\left(-p_{2}c_{r}+(1-p_{2})\left(2p_{2}-2c_{r}\right)\right).\]
Solving for \(c_{r}\), we get
\[c_{r}(p_{1},p_{2})=\frac{(1-p_{1})p_{1}+2(1-p_{2})p_{2}}{3-p_{2}}.\]
To maximize \(c_{r}\) over all values of \(p_{1},p_{2}\), we should set \(p_{1}=0.5\). (Indeed, to maximize revenue we should always set the monopoly price for jobs of length 1 since they do not interfere with future jobs.) Then the value of \(p_{2}\) that maximizes \(c_{r}(p_{1},p_{2})\) is \(p_{2}=3-\sqrt{\frac{47}{8}}\approx 0.576\), yielding \(c_{r}(p_{1},p_{2})=10-\sqrt{94}\approx 0.304\).
On the other hand, if we set the same price \(p=p_{1}=p_{2}\) for jobs with different lengths, our revenue per time step becomes
\[c_{r}(p)=\frac{(1-p)p+2(1-p)p}{3-p}=\frac{3(1-p)p}{3-p}.\]
This is maximized at \(p=3-\sqrt{6}\approx 0.551\), yielding \(c_{r}(p)=15-6\sqrt{6}\approx 0.303\). Moreover, if we use either of the prices in the optimal price combination for the two-price setting as the single price, we get \(c_{r}(0.5)=0.3\) and \(c_{r}\left(3-\sqrt{\frac{47}{8}}\right)\approx 0.302\).
Observe that for both welfare and revenue, the maximum in the one-price setting is not far from that in the two-price setting. In addition, in both cases at least one of the two prices in the optimal price combination for the two-price setting, when used alone as a single price, performs almost as well as the maximum in the two-price setting. In the remainder of this section, we will show that this is not a coincidence, but rather a phenomenon that occurs for any set of job lengths, any probability distribution over job lengths, and any probability distribution over job values.
### General 50% Approximation
In this subsection, we consider a general setting with an arbitrary number of job lengths. We show that even at this level of generality, it is always possible to obtain \(50\%\) of the welfare and revenue of setting an individual price for each job length by setting just one price. Although the optimal price in the one-price setting might be different from any of the prices in the multi-price setting, we show that at least one of the prices in the latter setting can be used alone to achieve the \(50\%\) guarantee.
Assume that there are jobs of lengths \(a_{1}\leq a_{2}\leq\dots\leq a_{n}\) which appear at each time step with probability \(r_{1},r_{2},\dots,r_{n}\), respectively. Suppose that we set a price per time step \(p_{i}\) for jobs of length \(a_{i}\). Recall that the value per time step of a job is drawn from a distribution with cumulative distribution function \(F\) and probability density function \(f\).
The following lemma gives the formulas for the expected welfare and revenue per time step. The derivation is straightforward and can be found in Appendix A.
Lemma 3.1 ().: _Let \(S=a_{1}r_{1}+\dots+a_{n}r_{n}\) and \(R=r_{1}+\dots+r_{n}\), and let \(c_{w}\) and \(c_{r}\) denote the expected welfare and revenue per time step, respectively. We have_
(1) \[c_{w}(p_{1},p_{2},\dots,p_{n})=\frac{a_{1}r_{1}\int_{x\geq p_{1}}\ell d\mu+ \dots+a_{n}r_{n}\int_{x\geq p_{n}}\ell d\mu}{S-((a_{1}-1)r_{1}F(p_{1})+\dots+( a_{n}-1)r_{n}F(p_{n}))+(1-R)}\]
_and_
(2) \[c_{r}(p_{1},p_{2}\dots,p_{n})=\frac{a_{1}r_{1}(1-F(p_{1}))p_{1}+\dots+a_{n}r_{ n}(1-F(p_{n}))p_{n}}{S-((a_{1}-1)r_{1}F(p_{1})+\dots+(a_{n}-1)r_{n}F(p_{n}))+( 1-R)}.\]
_In particular, if \(p_{1}=\dots=p_{n}=p\), then_
\[c_{w}(p)=\frac{S\int_{x\geq p}\ell d\mu}{S-(S-R)F(p)+(1-R)}\]
_and_
\[c_{r}(p)=\frac{S(1-F(p))p}{S-(S-R)F(p)+(1-R)}.\]
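As an illustration of how Lemma 3.1 can be used computationally, here is a minimal Python sketch evaluating formulas (1) and (2) once the CDF \(F\) and the tail integral \(\int_{x\geq p}\ell\,d\mu\) of the value distribution are supplied; the function and argument names are illustrative only, and the example at the end re-checks the warm-up instance.

```python
import numpy as np

def per_step_objectives(a, r, p, F, tail_integral):
    """Expected welfare and revenue per time step from Lemma 3.1 (formulas (1)-(2)).

    a, r, p        -- job lengths, arrival probabilities, and prices (same length)
    F              -- CDF of the per-time-step value distribution
    tail_integral  -- tail_integral(x) = integral of t*f(t) d(mu) over t >= x
    """
    a, r, p = map(np.asarray, (a, r, p))
    S, R = (a * r).sum(), r.sum()
    denom = S - ((a - 1) * r * F(p)).sum() + (1 - R)
    c_w = (a * r * tail_integral(p)).sum() / denom
    c_r = (a * r * (1 - F(p)) * p).sum() / denom
    return c_w, c_r

# Sanity check on the warm-up instance (Uniform[0,1] values, lengths 1 and 2):
F = lambda x: np.clip(x, 0.0, 1.0)
tail = lambda x: (1.0 - np.clip(x, 0.0, 1.0) ** 2) / 2.0
c_w, c_r = per_step_objectives([1, 2], [0.5, 0.5], [0.0, 3 - np.sqrt(7.5)], F, tail)
print(c_w)   # ~0.5228 = 6 - sqrt(30), matching the warm-up welfare optimum
```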
With the formulas for welfare and revenue in hand, we are ready to show the main result of this section, which exhibits that the worst-case approximation ratio for welfare or revenue between the single-price setting and the multi-price setting is at least \(50\%\). As we will see later in Subsection 3.3, this bound is in fact tight, and it remains tight even when there are only two job lengths. Note that the bound holds for any number of job lengths, any distribution over job lengths, and any distribution over job values.
Theorem 3.2 ().: _For any prices \(p_{1},p_{2},\dots,p_{n}\) that we set in the multi-price setting, we can achieve a welfare (resp. revenue, or any convex combination of welfare and revenue) approximation of at least \(50\%\) in the one-price setting by using one of the prices \(p_{i}\) as the single price._
To prove Theorem 3.2, we work with the ratio
\[\frac{\max(c_{w}(p_{1}),\dots,c_{w}(p_{n}))}{c_{w}(p_{1},\dots,p_{n})}\]
and show that it is at least \(1/2\) for any \(p_{1},\dots,p_{n}\) (and similarly for revenue or any convex combination of welfare and revenue). Using the formula (1) for \(c_{w}\) given in Lemma 3.1, we can write the ratio in terms of the variables \(A_{i}=\frac{\int_{x\geq p_{i}}\ell d\mu}{\int_{x\geq p_{1}}\ell d\mu}\) and \(B_{i}=F(p_{i})\) for \(1\leq i\leq n\). For any fixed values of \(B_{i}\), we then deduce the values of \(A_{i}\) that minimize the ratio of interest. Finally, we show that the remaining expression is always at least \(1/2\) no matter the values of \(B_{i}\). The full proof can be found in Appendix B.
### Tighter bounds for specific parameters
Assume in this subsection that there are jobs of two lengths \(a<b\) which appear at each time step with probability \(r_{1}\) and \(r_{2}\), respectively, where \(r_{1}+r_{2}\leq 1\). Suppose that we set a price per time step \(p_{1}\) for jobs of length \(a\) and \(p_{2}\) for jobs of length \(b\). Recall that the value per time step of a job is drawn from a distribution with cumulative distribution function \(F\) and probability density function \(f\).
Our next result exhibits a tight approximation bound for any fixed setting of the job lengths and their distribution.
Theorem 3.3 ().: _For any prices \(p_{1}\) and \(p_{2}\) that we set in the two-price setting, we can achieve a welfare (resp. revenue, or any convex combination of welfare and revenue) approximation of at least_
\[\rho(a,b,r_{1},r_{2}) :=\frac{(ar_{1}+br_{2})(ar_{1}+1-r_{1})}{a(a-1)r_{1}^{2}+a(b-1)r_ {1}r_{2}+ar_{1}+br_{2}}\]
_in the one-price setting by setting either \(p_{1}\) or \(p_{2}\) alone. Moreover, this bound is the best possible even if we are allowed to set a price different from \(p_{1}\) or \(p_{2}\) in the one-price setting._
To prove this theorem, we work with the expression in terms of \(B_{i}=F(p_{i})\) that we have from the proof of Theorem 3.2. We then show that the expression is minimized when we take \(B_{1}=0\) and \(B_{2}=1\), meaning that the distribution on job values is bimodal. The proof method readily yields an example showing that our bound is tight, where the bimodal distribution on job values puts a large probability on a low value and a small probability on a high value. The full proof can be found in Appendix C.
Theorem 3.3 allows us to obtain the worst-case approximation ratio in arbitrary settings of the parameters. Some examples follow.
* Suppose that \(r_{1}+r_{2}=1\), i.e., a job appears at every time step. This assumption can in fact be made without loss of generality, because we can convert jobs not arriving to jobs arriving with a value of 0 as long as all prices are nonzero. This only changes \(F\) and so is irrelevant for the calculation of \(\rho\). In this case, the approximation ratio is \[\rho(a,b,r,1-r)=\frac{(ar+b-br)(ar+1-r)}{a(a-b)r^{2}+b(a-1)r+b}.\]
* Next, we look at how the bound behaves when the job lengths are close together or far apart. Suppose for convenience that \(r_{1}=r_{2}=1/2\), i.e., a job appears at every time step and is of length \(a\) or \(b\) with equal probability. Then the approximation ratio is \[\rho\left(a,b,\frac{1}{2},\frac{1}{2}\right)=\frac{(a+b)(a+1)}{a^{2}+ab+2b}.\] In particular, the approximation ratio is
  * \(\frac{6}{7}\) if \(a=1\) and \(b=2\);
  * \(\frac{4}{5}\) if \(a=1\) and \(b=3\);
  * \(\frac{15}{16}\) if \(a=2\) and \(b=3\);
  * \(\frac{a+1}{a+2}\) if \(b\rightarrow\infty\).

  Note that if \(b\rightarrow\infty\), then the ratio approaches 1 as \(a\) grows. This makes sense because when the two job lengths are large and close to each other, there is little difference between accepting one or the other. Also, if we take \(b=a+1\), then the ratio becomes \[\rho\left(a,a+1,\frac{1}{2},\frac{1}{2}\right)=\frac{2a^{2}+3a+1}{2a^{2}+3a+2}.\] This also converges to 1 as \(a\rightarrow\infty\).
* We now consider the behavior of the bound when the shorter job length is fixed. Suppose for convenience that \(a=1\), i.e., the shorter job length is 1. Then the approximation ratio is \[\rho(1,b,r_{1},r_{2})=\frac{r_{1}+br_{2}}{(b-1)r_{1}r_{2}+r_{1}+br_{2}}.\] As the longer job length grows, this ratio decreases and approaches \(\frac{1}{1+r_{1}}\). This is consistent with the intuition that the approximation gets worse as the job lengths are farther apart.
* We next look at the “opposite” of the previous case and assume that the longer job length is extremely large. Suppose that \(b\rightarrow\infty\). Then the approximation ratio is \[\rho(a,\infty,r_{1},r_{2})=\frac{ar_{1}+1-r_{1}}{ar_{1}+1}.\] Note that this ratio does not depend on \(r_{2}\). The ratio increases as the shorter job length grows. Again, this is consistent with the intuition that the approximation gets worse as the job lengths are farther apart.
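The specific ratios computed in the examples above can be checked directly from the closed form of \(\rho\) in Theorem 3.3; a minimal Python sketch using exact rational arithmetic (the function name is illustrative):

```python
from fractions import Fraction as Fr

def rho(a, b, r1, r2):
    # Worst-case approximation ratio from Theorem 3.3 for two job lengths a <= b.
    num = (a * r1 + b * r2) * (a * r1 + 1 - r1)
    den = a * (a - 1) * r1 ** 2 + a * (b - 1) * r1 * r2 + a * r1 + b * r2
    return num / den

half = Fr(1, 2)
print(rho(1, 2, half, half))               # 6/7
print(rho(1, 3, half, half))               # 4/5
print(rho(2, 3, half, half))               # 15/16
print(float(rho(2, 10 ** 9, half, half)))  # ~0.75 = (a + 1)/(a + 2) for a = 2 and very large b
```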
If we fix the probabilities \(r_{1},r_{2}\), we can derive a tight worst-case bound over all possible job lengths \(a,b\).
Theorem 3.4 ().: _For fixed \(r_{1},r_{2}\), we have_
\[\rho(a,b,r_{1},r_{2})\geq\frac{1}{1+r_{1}}\]
_for arbitrary \(a,b\). Moreover, this bound is the best possible._
Proof.: From Theorem 3.3, we need to show the inequality
\[\frac{(ar_{1}+br_{2})(ar_{1}+1-r_{1})}{a(a-1)r_{1}^{2}+a(b-1)r_{1}r_{2}+ar_{1} +br_{2}}\geq\frac{1}{1+r_{1}}.\]
This simplifies to
\[r_{1}(a(a-1)r_{1}^{2}+(a-1)br_{1}r_{2}+ar_{1}+ar_{2})\geq 0.\]
Since each term on the left-hand side is nonnegative, the inequality holds.
For tightness of the bound, let \(a=1\) and \(b\rightarrow\infty\). The approximation ratio is
\[\rho(1,b,r_{1},r_{2})=\frac{r_{1}+br_{2}}{r_{1}(1-r_{2})+br_{2}(1+r_{1})}\]
\[\xrightarrow[b\to\infty]{}\frac{1}{1+r_{1}},\]
as desired. ∎
Note that the fact that the bound is tight at \(a=1\) and \(b\rightarrow\infty\) is consistent with the intuition that the further apart the job lengths are, the more welfare and revenue there is to be gained by setting different prices for different job lengths, and consequently the worse the approximation ratio.
Finally, we show that we can obtain at least \(50\%\) of the welfare or revenue from setting two prices by using one of those prices.
Theorem 3.5 ().: _For arbitrary \(a,b,r_{1},r_{2}\), we have_
\[\rho(a,b,r_{1},r_{2})\geq\frac{1}{2}.\]
_Moreover, this bound is the best possible._
Proof.: Using Theorem 3.4, we find that
\[\rho(a,b,r_{1},r_{2})\geq\frac{1}{1+r_{1}}\geq\frac{1}{2}\]
since \(r_{1}\leq 1\).
For tightness of the bound, let \(a=1\) and \(r_{2}=1-r_{1}\). The approximation ratio is
\[\rho(1,b,r,1-r)=\frac{r+(1-r)b}{r^{2}+(1-r^{2})b}.\]
Taking \(b=\frac{1}{(1-r)^{2}}\), the ratio becomes
\[\frac{r+\frac{1}{1-r}}{r^{2}+\frac{1+r}{1-r}}=\frac{r-r^{2}+1}{r^{2}-r^{3}+1+r},\]
which approaches \(1/2\) as \(r\to 1^{-}\).³ ∎
While we do not have a general formula for the worst-case approximation ratio for each choice of the parameters \(a_{1},\dots,a_{n},r_{1},\dots,r_{n}\) as we do for the case of two job lengths, the function \(h\) in the proof of Theorem 3.2 still allows us to derive a tighter bound for each specific case. Note that to find the minimum of \(h\), it suffices to check \(B_{i}=0\) or 1 (see Footnote 6), so we only have a finite number of cases to check.
In fact, one can show that the minimum is always attained when \(B_{1}=0\) and \(B_{n}=1\). However, as we show next, the remaining \(B_{i}\)’s may be 0 or 1 at the minimum, depending on the job lengths and the probability distribution over them. We take \(n=3\) and assume for convenience that \(r_{1}=r_{2}=r_{3}=1/3\) and \((a_{1},a_{2})=(2,3)\). The following examples show that \(B_{2}\) can be 0 or 1 at the minimum depending on how far the longest job length \(a_{3}\) is from \(a_{1}\) and \(a_{2}\).
* Suppose that \(a_{3}=6\). Then we have \[h(B_{1},B_{2},B_{3})=\frac{121-11B_{1}-22B_{2}-55B_{3}}{121-16B_{1}-24B_{2}-48 B_{3}}.\] This is minimized at \((B_{1},B_{2},B_{3})=(0,1,1)\), where its value is \(44/49\).
* Suppose that \(a_{3}=7\). Then we have \[h(B_{1},B_{2},B_{3})=\frac{48-4B_{1}-8B_{2}-24B_{3}}{48-6B_{1}-9B_{2}-21B_{3}}.\] This is minimized at \((B_{1},B_{2},B_{3})=(0,B_{2},1)\) for any \(0\leq B_{2}\leq 1\), where its value is \(8/9\).
* Suppose that \(a_{3}=8\). Then we have \[h(B_{1},B_{2},B_{3})=\frac{169-13B_{1}-26B_{2}-91B_{3}}{169-20B_{1}-30B_{2}-80 B_{3}}.\] This is minimized at \((B_{1},B_{2},B_{3})=(0,0,1)\), where its value is \(78/89\).
The above examples show that the transition point where the optimal value of \(B_{2}\) goes from 0 to 1 for \(r_{1}=r_{2}=r_{3}=1/3\) and \((a_{1},a_{2})=(2,3)\) is at \(a_{3}=7\), where \(h(B_{1},B_{2},B_{3})\) takes on the same value for any \(0\leq B_{2}\leq 1\).
### Extension
In this subsection, we show that by using a single price, we can obtain \(50\%\) of the welfare not only compared to using multiple prices, but also compared to the offline optimal welfare.⁴ In fact, we will also not need the assumption that the job length and the value per time step are independent. However, the result only works for particular prices rather than arbitrary ones, and we cannot obtain tighter results for specific parameters using this method.
Theorem 3.6 ().: _Assume that the job length and the value per time step are not necessarily independent. There exists a price \(p\) such that we can achieve a \(50\%\) approximation of the offline optimal welfare by using \(p\) as the single price._
The proof of Theorem 3.6 can be found in Appendix D.
## 4. Multiple Servers
In this section, we assume that there are multiple servers, each of which receives jobs of various lengths. Under the assumption that the servers have the same probability of receiving no job at a time step, we show in Subsection 4.1 an approximation bound of the welfare and revenue of setting one price for all servers compared to setting an individual price for each server. This yields a strong bound when at least one of the dimensions of the parameters is not too extreme, e.g., the number of servers or the job lengths are not too large. In Subsection 4.2, we combine the newly obtained results with those from Section 3. Using a composition technique, we derive a general result that compares the welfare and revenue obtained by a restricted mechanism that sets the same price for all servers and all job lengths against those obtained by a mechanism that can set a different price for each job length of each particular server. We show that even with the heavy restrictions, the former mechanism still provides a reasonable approximation to the latter in a wide range of situations. Using similar techniques, we also obtain approximation bounds when this assumption does not hold but there is only one job length across all servers. The analysis of the latter setting is deferred to Appendix E.
As in Section 3, our approximation results hold for arbitrary (i.e., not necessarily optimal) pricing schemes, and the price we use in the single-price setting can be drawn from one of the prices in the multi-price setting.
### One price per server
Assume that at each time step, either zero or one job appears for each server \(1\leq j\leq n\). Server \(j\) receives jobs of length \(a_{j1}\leq a_{j2}\leq\dots\leq a_{jn_{j}}\) with probability \(r_{j1},r_{j2},\dots,r_{jn_{j}}\), respectively. Suppose that we set a price per time step \(p_{j}\) for all jobs on server \(j\). Recall that the value per time step of a job is drawn from a distribution with cumulative distribution function \(F\) and probability density function \(f\), and that we assume that \(\sum_{i=1}^{n_{j}}r_{ji}\) is constant. Let \(S_{j}=a_{j1}r_{j1}+\dots+a_{jn_{j}}r_{jn_{j}}\) and \(R=r_{j1}+\dots+r_{jn_{j}}\).
Using the formula (1) for \(c_{w}\) given in Lemma 3.1, we find that the welfare per time step is
\[d_{w}(p_{1},p_{2},\dots,p_{n}) =\sum_{j=1}^{n}\frac{S_{j}\int_{x\geq p_{j}}\ell d\mu}{S_{j}-(S_{ j}-R)F(p_{j})+(1-R)}\]
\[=\sum_{j=1}^{n}\frac{\int_{x\geq p_{j}}\ell d\mu}{1-\left(1-\frac {R}{S_{j}}\right)F(p_{j})+\frac{1-R}{S_{j}}}.\]
If we set the same price \(p=p_{1}=\dots=p_{n}\) for different servers, our welfare per time step becomes
\[d_{w}(p)=\sum_{j=1}^{n}\frac{\int_{x\geq p}\ell d\mu}{1-\left(1-\frac{R}{S_{j} }\right)F(p)+\frac{1-R}{S_{j}}}.\]
Similarly, we have the formulas for revenue per time step
\[d_{r}(p_{1},p_{2},\dots,p_{n})=\sum_{j=1}^{n}\frac{(1-F(p_{j}))p_{j}}{1-\left( 1-\frac{R}{S_{j}}\right)F(p_{j})+\frac{1-R}{S_{j}}}\]
and
\[d_{r}(p)=\sum_{j=1}^{n}\frac{(1-F(p))p}{1-\left(1-\frac{R}{S_{j}}\right)F(p)+ \frac{1-R}{S_{j}}}.\]
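As a concrete illustration of these expressions (not part of the original analysis), the sketch below evaluates \(d_{w}\) and \(d_{r}\) for two servers; the uniform value distribution on \([0,1]\) and the specific values of \(R\), \(S_{j}\) and the prices are made-up inputs chosen only so that the example runs.

```python
import numpy as np

# Illustrative inputs (not from the paper): values uniform on [0, 1], two servers.
F = lambda p: np.clip(p, 0.0, 1.0)                        # CDF of the value per time step
tail = lambda p: 0.5 * (1.0 - np.clip(p, 0.0, 1.0) ** 2)  # integral of x dF(x) over x >= p

R = 0.6                         # common total arrival probability per server
S = np.array([1.2, 3.0])        # S_j = a_j1 r_j1 + ... + a_jn_j r_jn_j for each server j

def d_w(prices):
    """Welfare per time step when server j uses price prices[j] for all of its jobs."""
    p = np.asarray(prices, dtype=float)
    return float(np.sum(tail(p) / (1 - (1 - R / S) * F(p) + (1 - R) / S)))

def d_r(prices):
    """Revenue per time step under the same pricing scheme."""
    p = np.asarray(prices, dtype=float)
    return float(np.sum((1 - F(p)) * p / (1 - (1 - R / S) * F(p) + (1 - R) / S)))

multi = d_w([0.3, 0.5])                          # a separate price for each server
single = max(d_w([q, q]) for q in (0.3, 0.5))    # the better of the two used as a single price
print(multi, single, single / multi)
print(d_r([0.3, 0.5]), max(d_r([q, q]) for q in (0.3, 0.5)))
```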
We show that if at least one dimension of the parameters is not too extreme, e.g., the number of servers or the job lengths are bounded, then we can obtain a reasonable approximation of the welfare and revenue in the multi-price setting by setting just one price.
Theorem 4.1.: _For any prices \(p_{1},p_{2},\dots,p_{n}\) that we set in the multi-price setting, we can achieve a welfare (resp. revenue, or any convex combination of welfare and revenue) approximation of at least_
\[\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M}\right)\]
_in the one-price setting, where \(H_{n}=1+\frac{1}{2}+\dots+\frac{1}{n}\approx\ln n\) is the \(n\)th Harmonic number and \(M=\max_{i,j}\frac{S_{i}}{S_{j}}\)._
In particular, if all job lengths are bounded above by \(c\), then \(R\leq S_{j}\leq cR\) for all \(1\leq j\leq n\), and so \(\max_{i,j}\frac{S_{i}}{S_{j}}\leq c\). The theorem then implies that the approximation ratio is at least \(\frac{c-1}{c\ln c}\).
The proof follows a similar outline to that of Theorem 3.2, but the details are more involved. It can be found in Appendix F.
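A small helper (illustrative only) makes the guarantee easy to evaluate; the values of \(n\) and \(M\) in the example call are arbitrary.

```python
import math

def single_price_guarantee(n, M):
    """The bound of Theorem 4.1: max(1/H_n, (M-1)/(M ln M))."""
    H_n = sum(1.0 / k for k in range(1, n + 1))
    second = 1.0 if M == 1 else (M - 1) / (M * math.log(M))
    return max(1.0 / H_n, second)

# With all job lengths bounded above by c we may take M = c, as noted above.
print(single_price_guarantee(n=100, M=4))   # approximately max(0.193, 0.541)
```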
### Multiple prices per server
Assume as in Subsection 4.1 that at each time step, server \(j\) receives jobs of length \(a_{j1}\leq a_{j2}\leq\dots\leq a_{jn_{j}}\) with probability \(r_{j1},r_{j2},\dots,r_{jn_{j}}\), respectively. In this subsection, we consider setting an individual price not only for each server but also for each job length of that server. In particular, suppose that we set a price per time step \(p_{ji}\) for jobs of length \(a_{ji}\) on server \(j\). Recall that the value per time step of a job is drawn from a distribution with cumulative distribution function \(F\) and probability density function \(f\), and that we assume that \(\sum_{i=1}^{n_{j}}r_{ji}\) is constant. Let \(S_{j}=a_{j1}r_{j1}+\dots+a_{jn_{j}}r_{jn_{j}}\).
We will compare a setting where we have considerable freedom with our pricing scheme and can set a different price \(p_{ji}\) for each job length \(a_{ji}\) on each server \(j\) with a setting where we have limited freedom and must set the same price \(p\) for all job lengths and all servers. We show that by “composing” our results on the two dimensions, we can obtain an approximation of the welfare and revenue of setting different prices by setting a single price.
Theorem 4.2.: _For any prices \(p_{ji}\), where \(1\leq j\leq n\) and \(1\leq i\leq n_{j}\) for each \(j\), that we set in the multi-price setting, we can achieve a welfare (resp. revenue, or any convex combination of welfare and revenue) approximation of at least_
\[\frac{1}{2}\cdot\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M}\right)\]
_in the one-price setting, where \(H_{n}=1+\frac{1}{2}+\dots+\frac{1}{n}\approx\ln n\) is the \(n\)th Harmonic number and \(M=\max_{i,j}\frac{S_{i}}{S_{j}}\)._
Proof.: Consider welfare. By Theorem 3.2, for each server \(j\) we can achieve a \(\frac{1}{2}\)-approximation of the welfare in the multi-price setting by setting a single price \(p_{j}\) for all job lengths. On the other hand, using Theorem 4.1, we can approximate the latter welfare by a factor of \(\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M}\right)\) by setting a single price \(p\) for all servers. Therefore, setting a single price \(p\) also yields a \(\frac{1}{2}\cdot\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M}\right)\)-approximation of the original welfare.
The same argument holds for revenue and for any convex combination of welfare and revenue. ∎
If we have tighter approximations for either the “different prices for different job lengths” or the “different prices for different servers” dimension, for instance by knowing the values of some of the parameters, then the same composition argument yields a correspondingly tighter bound.
## 5. Conclusion
In this paper, we study how well simple pricing schemes that are oblivious to certain parameters can approximate optimal schemes with respect to welfare and revenue, and prove several results when the simple schemes are restricted to setting the same price for all servers or all job lengths. Our results provide an explanation of the efficacy of such schemes in practice, including the one shown in Figure 1 for virtual machines on Microsoft Azure. Since simple schemes do not require agents to spend time and resources to determine their specific parameter values, our results also serve as an argument in favor of using these schemes in a range of applications. It is worth noting that as all of our results are of worst case nature, we can expect the guarantees on welfare and revenue to be significantly better than these pessimistic bounds in practical instances where the parameters are not adversarially tailored.
We believe that there is still much interesting work to be done in the study of simple pricing schemes for the cloud. We conclude our paper by listing some intriguing future directions.
* In many scheduling applications, a job can be scheduled online to any server that is not occupied at the time. Does a good welfare or revenue approximation hold in such a model?
* Can our results be extended to models with more fluid job arrivals, for example one where several jobs can arrive at each time step?
* Can we approximate welfare and revenue simultaneously? A trivial randomized approach would be to choose with equal probability whether to approximate welfare or revenue. According to Theorem 3.2, this yields a \(1/4\)-approximation for both expected welfare and expected revenue of the single-price setting in comparison to the multi-price setting for job lengths.
## Acknowledgments
Preliminary versions of this paper appeared in Proceedings of the 13th Conference on Web and Internet Economics, December 2017, and Proceedings of the 12th Workshop on the Economics of Networks, Systems and Computation, June 2017. We thank the anonymous reviewers for helpful comments. Warut Suksompong is partially supported by a Stanford Graduate Fellowship.
## References
* Abhishek et al. (2012) Vineet Abhishek, Ian A. Kash, and Peter Key. 2012. Fixed and Market Pricing for Cloud Services. (2012). Appeared in the 7th Workshop on the Economics of Networks, Systems and Computation.
* Amazon (2017) Amazon 2017. Amazon EC2 Spot Instances Pricing. http://aws.amazon.com/ec2/spot/pricing. (2017). Accessed 2017-08-01.
* Azar et al. (2015) Yossi Azar, Inna Kalp-Shaltiel, Brendan Lucier, Ishai Menache, Joseph (Seffi) Naor, and Jonathan Yaniv. 2015. Truthful Online Scheduling with Commitments. In _Proceedings of the Sixteenth ACM Conference on Economics and Computation_. 715–732.
* Azure (2016) Azure 2016. Microsoft Azure Pricing Calculator. http://azure.microsoft.com/en-us/pricing/calculator. (2016). Accessed 2016-09-19.
* Babaioff et al. (2011) Moshe Babaioff, Liad Blumrosen, Shaddin Dughmi, and Yaron Singer. 2011. Posting Prices with Unknown Distributions. In _Innovations in Computer Science - ICS 2010_. 166–178.
* Blumrosen and Holenstein (2008) Liad Blumrosen and Thomas Holenstein. 2008. Posted prices vs. negotiations: an asymptotic analysis. In _Proceedings of the 9th ACM Conference on Electronic Commerce_. 49.
* Chawla et al. (2010) Shuchi Chawla, Jason D. Hartline, David L. Malec, and Balasubramanian Sivan. 2010. Multi-parameter mechanism design and sequential posted pricing. In _Proceedings of the 42nd ACM Symposium on Theory of Computing_. 311–320.
* Cohen et al. (2015) Ilan Reuven Cohen, Alon Eden, Amos Fiat, and Lukasz Jez. 2015. Pricing Online Decisions: Beyond Auctions. In _Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms_. 73–91.
* Cohen-Addad et al. (2016) Vincent Cohen-Addad, Alon Eden, Michal Feldman, and Amos Fiat. 2016. The Invisible Hand of Dynamic Market Pricing. In _Proceedings of the 2016 ACM Conference on Economics and Computation_. 383–400.
* Columbus (2016) Louis Columbus. 2016. Roundup Of Cloud Computing Forecasts And Market Estimates, 2016. http://www.forbes.com/sites/louiscolumbus/2016/03/13/roundup-of-cloud-computing-forecasts-and-market-estimates-2016. (2016). Accessed 2016-09-19.
* Dehghani et al. (2016) Sina Dehghani, Ian A. Kash, and Peter Key. 2016. Online Stochastic Scheduling and Pricing the Cloud. (2016). Working Paper.
* Dierks and Seuken (2016) Ludwig Dierks and Sven Seuken. 2016. Cloud Pricing: The Spot Market Strikes Back. (2016). Appeared in the Workshop on Economics of Cloud Computing.
* Disser et al. (2016) Yann Disser, John Fearnley, Martin Gairing, Oliver Göbel, Max Klimm, Daniel Schmand, Alexander Skopalik, and Andreas Tönnis. 2016. Hiring Secretaries over Time: The Benefit of Concurrent Employment. _CoRR_ abs/1604.08125 (2016).
* Dütting et al. (2016) Paul Dütting, Felix A. Fischer, and Max Klimm. 2016. Revenue Gaps for Discriminatory and Anonymous Sequential Posted Pricing. _CoRR_ abs/1607.07105 (2016).
* Ezra et al. (2017) Tomer Ezra, Michal Feldman, Tim Roughgarden, and Warut Suksompong. 2017. Pricing Identical Items. _CoRR_ abs/1705.06623 (2017).
* Feldman et al. (2015) Michal Feldman, Nick Gravin, and Brendan Lucier. 2015. Combinatorial Auctions via Posted Prices. In _Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms_. 123–135.
* Friedman et al. (2014) Eric J. Friedman, Ali Ghodsi, and Christos-Alexandros Psomas. 2014. Strategyproof allocation of discrete jobs on multiple machines. In _Proceedings of the Fifteenth ACM Conference on Economics and Computation_. 529–546.
* Friedman et al. (2015) Eric J. Friedman, Miklós Z. Rácz, and Scott Shenker. 2015. Dynamic Budget-Constrained Pricing in the Cloud. In _Proceedings of the 28th Canadian Conference on Artificial Intelligence_. 114–121.
* Hoy et al. (2016) Darrell Hoy, Nicole Immorlica, and Brendan Lucier. 2016. On-Demand or Spot? Selling the Cloud to Risk-Averse Customers. In _Proceedings of the 12th International Conference on Web and Internet Economics_. 73–86.
* Jain et al. (2011) Navendu Jain, Ishai Menache, Joseph (Seffi) Naor, and Jonathan Yaniv. 2011. A Truthful Mechanism for Value-Based Scheduling in Cloud Computing. In _Proceedings of the 4th International Symposium on Algorithmic Game Theory_. 178–189.
* Jain et al. (2012) Navendu Jain, Ishai Menache, Joseph (Seffi) Naor, and Jonathan Yaniv. 2012. Near-Optimal Scheduling Mechanisms for Deadline-Sensitive Jobs in Large Computing Clusters. In _Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures_. 255–266.
* Kash and Key (2016) Ian A. Kash and Peter Key. 2016. Pricing the Cloud. _IEEE Internet Computing_ 20, 1 (2016), 36–43.
* Lucier et al. (2013) Brendan Lucier, Ishai Menache, Joseph (Seffi) Naor, and Jonathan Yaniv. 2013. Efficient Online Scheduling for Deadline-Sensitive Jobs. In _Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures_. 305–314.
* Wang et al. (2015) Changjun Wang, Weidong Ma, Tao Qin, Xujin Chen, Xiaodong Hu, and Tie-Yan Liu. 2015. Selling reserved instances in cloud computing. In _Proceedings of the 24th International Conference on Artificial Intelligence_. 224–230.
* Zhang et al. (2013) Hong Zhang, Bo Li, Hongbo Jiang, Fangming Liu, Athanasios V. Vasilakos, and Jiangchuan Liu. 2013. A framework for truthful online auctions in cloud computing with heterogeneous user demands. In _Proceedings of the IEEE INFOCOM 2013_. 1510–1518.
## Appendix A Proof of Lemma 3.1
We represent the states of the server by a Markov chain. Initially, we have an idle state corresponding to when the server is free. For each job length \(a_{i}\), we create \(a_{i}-1\) states that the server goes through when it accepts a job of length \(a_{i}\). The states represent the number of time steps remaining to complete the service of the job before the server returns to the idle state. The rewards, which can be either welfare or revenue, are collected at each transition. The structure of the Markov chain makes solving for the stationary distribution straightforward. This distribution gives the average proportion of time spent in each transition and the average welfare or revenue can be written in terms of these proportions. If the threshold for accepting a job of a certain length is \(p\), then the revenue collected for each transition with that job length is always \(p\), while the expected welfare gained during the transition is given by \(\frac{\int_{x\geq p}\ell d\mu}{1-F(p)}\). This allows us to write down the formulas for \(c_{w}\) and \(c_{r}\).
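The following sketch carries out this computation explicitly: it assembles the transition matrix of the chain just described, solves for the stationary distribution, and reads off the welfare per time step. The uniform value distribution and the job lengths, probabilities and prices at the bottom are illustrative choices rather than values taken from the paper, and the least-squares solve is simply one convenient way to obtain the stationary distribution.

```python
import numpy as np

def chain_welfare(a, r, F_at_p, cond_mean):
    """Welfare per time step read off from the Markov chain described above.

    a[i], r[i]: length and arrival probability of job class i;
    F_at_p[i] = F(p_i);  cond_mean[i] = E[value | value >= p_i].
    """
    # State 0 is the idle state; class i then gets a[i] - 1 states for its remaining service steps.
    offsets, n_states = [], 1
    for ai in a:
        offsets.append(n_states)
        n_states += ai - 1

    P = np.zeros((n_states, n_states))
    step_welfare = np.zeros(n_states)       # expected welfare collected during one step, by state
    stay = 1.0
    for i, (ai, ri, Fi, mi) in enumerate(zip(a, r, F_at_p, cond_mean)):
        accept = ri * (1 - Fi)
        stay -= accept
        step_welfare[0] += accept * mi      # the arrival step of an accepted class-i job
        P[0, offsets[i] if ai > 1 else 0] += accept
        for k in range(ai - 1):             # the remaining a_i - 1 service steps
            s = offsets[i] + k
            P[s, s + 1 if k < ai - 2 else 0] = 1.0
            step_welfare[s] = mi
    P[0, 0] += stay                         # rejected job or no arrival: the server stays idle

    # Stationary distribution: solve pi P = pi together with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n_states), np.ones(n_states)])
    b = np.append(np.zeros(n_states), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(pi @ step_welfare)

# Illustrative check against formula (1), with values uniform on [0, 1]:
# F(p) = p and E[value | value >= p] = (1 + p) / 2.
a, r, p = [2, 5], [0.3, 0.4], [0.25, 0.7]
S, R = sum(ai * ri for ai, ri in zip(a, r)), sum(r)
closed = sum(ai * ri * 0.5 * (1 - pi ** 2) for ai, ri, pi in zip(a, r, p)) / \
         (S - sum((ai - 1) * ri * pi for ai, ri, pi in zip(a, r, p)) + (1 - R))
print(chain_welfare(a, r, p, [(1 + pi) / 2 for pi in p]), closed)
```

The two printed numbers agree, which matches the observation that the stationary distribution gives exactly the proportions of time needed in the formula.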
Next, we present a simpler approach that is perhaps less formal. Consider an arbitrary time step when the server is free. If the job drawn at that time step has length \(a_{1}\), then with probability \(F(p_{1})\) it has value below \(p_{1}\) and is rejected, and with probability \(1-F(p_{1})\) it has value at least \(p_{1}\) and is accepted. In the latter case, the job yields expected welfare \(\frac{\int_{x\geq p_{1}}\ell d\mu}{1-F(p_{1})}\) per time step over \(a_{1}\) time steps. Analogous statements hold if the job has length \(a_{i}\) for \(2\leq i\leq n\). It follows that
\[\begin{split} 0&=r_{1}\left(-F(p_{1})c_{w}+(1-F(p_{1 }))\left(a_{1}\cdot\frac{\int_{x\geq p_{1}}\ell d\mu}{1-F(p_{1})}-a_{1}c_{w} \right)\right)\\ &\quad+r_{2}\left(-F(p_{2})c_{w}+(1-F(p_{2}))\left(a_{2}\cdot \frac{\int_{x\geq p_{2}}\ell d\mu}{1-F(p_{2})}-a_{2}c_{w}\right)\right)\\ &\quad+\dots\\ &\quad+r_{n}\left(-F(p_{n})c_{w}+(1-F(p_{n}))\left(a_{n}\cdot \frac{\int_{x\geq p_{n}}\ell d\mu}{1-F(p_{n})}-a_{n}c_{w}\right)\right)\\ &\quad+(1-r_{1}-r_{2}-\dots-r_{n})(-c_{w}).\end{split}\]
Solving for \(c_{w}\), we have
\[c_{w}(p_{1},p_{2},\dots,p_{n})=\frac{a_{1}r_{1}\int_{x\geq p_{1}}\ell d\mu+ \dots+a_{n}r_{n}\int_{x\geq p_{n}}\ell d\mu}{S-((a_{1}-1)r_{1}F(p_{1})+\dots+( a_{n}-1)r_{n}F(p_{n}))+(1-R)}.\]
For revenue, we can derive the equations in the same way, with the exception that the revenue from accepting a job at price \(p\) is simply \(p\). We have
\[\begin{split} 0&=r_{1}\left(-F(p_{1})c_{r}+(1-F(p_{1 }))\left(a_{1}p_{1}-a_{1}c_{r}\right)\right)\\ &\quad+r_{2}\left(-F(p_{2})c_{r}+(1-F(p_{2}))\left(a_{2}p_{2}-a_{ 2}c_{r}\right)\right)\\ &\quad+\dots\\ &\quad+r_{n}\left(-F(p_{n})c_{r}+(1-F(p_{n}))\left(a_{n}p_{n}-a_{ n}c_{r}\right)\right)\\ &\quad+(1-r_{1}-r_{2}-\dots-r_{n})(-c_{r}).\end{split}\]
Solving for \(c_{r}\), we get
\[c_{r}(p_{1},p_{2}\dots,p_{n})=\frac{a_{1}r_{1}(1-F(p_{1}))p_{1}+\dots+a_{n}r_{ n}(1-F(p_{n}))p_{n}}{S-((a_{1}-1)r_{1}F(p_{1})+\dots+(a_{n}-1)r_{n}F(p_{n}))+( 1-R)}.\]
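Alternatively, both closed forms can be sanity-checked by direct simulation of the arrival process. The sketch below (again with an illustrative uniform value distribution and made-up parameters) runs the process for many time steps and compares the empirical averages against \(c_{w}\) and \(c_{r}\).

```python
import random

def simulate(a, r, prices, draw_value, T=500_000, seed=0):
    """Empirical welfare and revenue per time step for the single-server arrival process."""
    rng = random.Random(seed)
    busy, welfare, revenue = 0, 0.0, 0.0
    for _ in range(T):
        if busy == 0:                          # the server is free at this time step
            u = rng.random()
            acc = 0.0
            for ai, ri, pi in zip(a, r, prices):
                acc += ri
                if u < acc:                    # a job of this class arrives
                    v = draw_value(rng)
                    if v >= pi:                # accepted: it occupies the next a_i time steps
                        welfare += ai * v
                        revenue += ai * pi
                        busy = ai
                    break
        if busy > 0:
            busy -= 1
    return welfare / T, revenue / T

# Illustrative parameters with values uniform on [0, 1].
a, r, prices = [1, 3], [0.3, 0.5], [0.2, 0.6]
S, R = sum(ai * ri for ai, ri in zip(a, r)), sum(r)
den = S - sum((ai - 1) * ri * pi for ai, ri, pi in zip(a, r, prices)) + (1 - R)
c_w = sum(ai * ri * 0.5 * (1 - pi ** 2) for ai, ri, pi in zip(a, r, prices)) / den
c_r = sum(ai * ri * (1 - pi) * pi for ai, ri, pi in zip(a, r, prices)) / den
print(simulate(a, r, prices, lambda rng: rng.random()))
print(c_w, c_r)
```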
## Appendix B Proof of Theorem 3.2
We first consider welfare. We wish to show that setting one of the prices \(p_{i}\) alone achieves an approximation of \(1/2\) of setting all \(n\) prices. That is,
\[\max(c_{w}(p_{1}),\dots,c_{w}(p_{n}))\geq\frac{1}{2}\cdot c_{w}(p_{1},\dots,p_ {n}).\]
To establish this inequality, we will work with the ratio
\[\frac{\max(c_{w}(p_{1}),\dots,c_{w}(p_{n}))}{c_{w}(p_{1},\dots,p_{n})}\]
and show that its minimum is at least \(1/2\).
Writing \(A_{i}=\frac{\int_{x\geq p_{i}}\ell d\mu}{\int_{x\geq p_{1}}\ell d\mu}\) (in particular, \(A_{1}=1\)) and \(B_{i}=F(p_{i})\) for \(1\leq i\leq n\), the ratio to minimize becomes
\[g(A_{1},\dots,A_{n},B_{1},\dots,B_{n}):=\max_{i=1}^{n}\left( \frac{SA_{i}}{S-(S-R)B_{i}+(1-R)}\right)\\ \cdot\frac{S-((a_{1}-1)r_{1}B_{1}+\dots+(a_{n}-1)r_{n}B_{n})+(1-R )}{a_{1}r_{1}A_{1}+\dots+a_{n}r_{n}A_{n}}.\]
_Case 1_: The max function outputs the first term, \(\frac{SA_{1}}{S-(S-R)B_{1}+(1-R)}\).
Taking into account that \(A_{1}=1\), we want to minimize the ratio
\[\frac{S}{S-(S-R)B_{1}+(1-R)}\cdot\frac{S-((a_{1}-1)r_{1}B_{1}+\dots+(a_{n}-1)r _{n}B_{n})+(1-R)}{a_{1}r_{1}+a_{2}r_{2}A_{2}+\dots+a_{n}r_{n}A_{n}},\]
where \(A_{i}\leq\frac{S-(S-R)B_{i}+(1-R)}{S-(S-R)B_{1}+(1-R)}\) for \(i\geq 2\).
For any \(A_{i}\), if we fix the remaining \(A_{j}\) and all \(B_{j}\), this is a decreasing function in \(A_{i}\). To minimize it, we should set \(A_{i}=\frac{S-(S-R)B_{i}+(1-R)}{S-(S-R)B_{1}+(1-R)}\) for all \(i\geq 2\). The ratio becomes
\[\frac{S^{2}-((a_{1}-1)r_{1}B_{1}+\dots+(a_{n}-1)r_{n}B_{n})S+S(1-R)}{S^{2}-(a_ {1}r_{1}B_{1}+\dots+a_{n}r_{n}B_{n})(S-R)+S(1-R)}.\]
_Case 2_: The max function outputs the \(i\)th term, \(\frac{SA_{i}}{S-(S-R)B_{i}+(1-R)}\), for some \(i\geq 2\). This means that \(A_{j}\leq\frac{S-(S-R)B_{j}+(1-R)}{S-(S-R)B_{i}+(1-R)}\cdot A_{i}\) for all \(j\geq 2\) with \(j\neq i\), and \(A_{i}\geq\frac{S-(S-R)B_{i}+(1-R)}{S-(S-R)B_{1}+(1-R)}\). We want to minimize the ratio
\[\frac{SA_{i}}{S-(S-R)B_{i}+(1-R)}\cdot\frac{S-((a_{1}-1)r_{1}B_{1}+\dots+(a_{n }-1)r_{n}B_{n})+(1-R)}{a_{1}r_{1}+a_{2}r_{2}A_{2}+\dots+a_{n}r_{n}A_{n}},\]
or equivalently,
\[\frac{SA_{i}}{a_{1}r_{1}A_{1}+\dots+a_{n}r_{n}A_{n}}\cdot\frac{S-((a_{1}-1)r_{ 1}B_{1}+\dots+(a_{n}-1)r_{n}B_{n})+(1-R)}{S-(S-R)B_{i}+(1-R)}.\]
For any \(j\geq 2\) with \(j\neq i\), if we fix all terms \(A_{k}\) except \(A_{j}\) and fix all \(B_{k}\), then this function is decreasing in \(A_{j}\). To minimize it, we should set \(A_{j}=\frac{S-(S-R)B_{j}+(1-R)}{S-(S-R)B_{i}+(1-R)}\cdot A_{i}\). The resulting function is increasing in \(A_{i}\) if we fix all \(B_{k}\), so we should set \(A_{i}=\frac{S-(S-R)B_{i}+(1-R)}{S-(S-R)B_{1}+(1-R)}\). We obtain the same ratio as in Case 1.
Hence in either case, we are left with minimizing the function
\[h(B_{1},\dots,B_{n}):=\frac{S^{2}-((a_{1}-1)r_{1}B_{1}+\dots+(a_{n}-1)r_{n}B_{ n})S+S(1-R)}{S^{2}-(a_{1}r_{1}B_{1}+\dots+a_{n}r_{n}B_{n})(S-R)+S(1-R)}.\]
In particular, we want to show that \(h(B_{1},\dots,B_{n})\geq\frac{1}{2}\) for any choice of \(B_{1},\dots,B_{n}\). This is equivalent to
\[S^{2}+S(1-R)\geq(2S(a_{1}-1)-a_{1}(S-R))r_{1}B_{1}+\dots+(2S(a_{n}-1)-a_{n}(S- R))r_{n}B_{n}\]
or
\[S^{2}+S(1-R)\geq((a_{1}-2)S+a_{1}R)r_{1}B_{1}+\dots+((a_{n}-2)S+a_{n}R)r_{n}B_ {n}.\]
We consider two cases.
1. \(a_{1}\geq 2\) (and hence \(a_{2},\dots,a_{n}\geq 2\)). All coefficients of \(r_{i}B_{i}\) on the right-hand side are positive, so we only need to verify the inequality for \(B_{1}=\dots=B_{n}=1\). We have \[h(B_{1}=1,\dots,B_{n}=1) =\frac{S^{2}-((a_{1}-1)r_{1}+\dots+(a_{n}-1)r_{n})S+S(1-R)}{S^{2} -(a_{1}r_{1}+\dots+a_{n}r_{n})(S-R)+S(1-R)}\] \[=\frac{S^{2}-(S-R)S+S(1-R)}{S^{2}-S(S-R)+S(1-R)}=1>\frac{1}{2}.\]
2. \(a_{1}=1\).⁵ All coefficients of \(r_{i}B_{i}\) on the right-hand side except the first one are positive, so we only need to verify the inequality for \(B_{1}=0\) and \(B_{2}=\dots=B_{n}=1\). We have \[h(B_{1}=0,B_{2}=1,\dots,B_{n}=1)=\frac{S}{S+r_{1}(S-R)}>\frac{1}{2},\] where the inequality follows from \(r_{1}(S-R)\leq S-R<S\).
So the inequality holds in both cases, and the approximation ratio is at least \(\frac{1}{2}\), as claimed.
Finally, we can obtain analogous results for revenue by essentially repeating the same argument but instead writing \(A_{i}=\frac{(1-F(p_{i}))p_{i}}{(1-F(p_{1}))p_{1}}\) for \(1\leq i\leq n\), and for any convex combination of welfare and revenue by writing \(A_{i}\) as the appropriate convex combination of the two corresponding terms.
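Although the proof above is what establishes the bound, a quick (and of course non-exhaustive) random search with formula (1) can be used to gain additional confidence in it; the exponential value distribution and the sampling ranges below are arbitrary choices made only for this illustration.

```python
import math
import random

def c_w(prices, a, r, F, tail):
    """Formula (1) for the welfare per time step."""
    S, R = sum(ai * ri for ai, ri in zip(a, r)), sum(r)
    num = sum(ai * ri * tail(p) for ai, ri, p in zip(a, r, prices))
    den = S - sum((ai - 1) * ri * F(p) for ai, ri, p in zip(a, r, prices)) + (1 - R)
    return num / den

# Values are taken to be exponential with rate 1 (an arbitrary illustrative choice).
F = lambda p: 1 - math.exp(-p)
tail = lambda p: (p + 1) * math.exp(-p)       # integral of x e^{-x} over x >= p

rng = random.Random(1)
worst = 1.0
for _ in range(20_000):
    n = rng.randint(2, 5)
    a = sorted(rng.sample(range(1, 50), n))                  # distinct job lengths
    weights = [rng.random() for _ in range(n)]
    scale = rng.random() / sum(weights)                      # so that r_1 + ... + r_n < 1
    r = [w * scale for w in weights]
    prices = [rng.expovariate(1.0) for _ in range(n)]
    multi = c_w(prices, a, r, F, tail)
    single = max(c_w([p] * n, a, r, F, tail) for p in prices)
    worst = min(worst, single / multi)
print(worst)   # stays above 1/2, as Theorem 3.2 guarantees
```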
## Appendix C Proof of Theorem 3.3
Using the proof of Theorem 3.2, we are left with minimizing the function
\[h(B_{1},B_{2}):=\frac{S^{2}-((a-1)r_{1}B_{1}+(b-1)r_{2}B_{2})S+S(1-R)}{S^{2}- ar_{1}(S-R)B_{1}-br_{2}(S-R)B_{2}+S(1-R)}.\]
Note that the numerator and the denominator are positive for all \(0\leq B_{1},B_{2}\leq 1\). When \(B_{1}=B_{2}=B\), both terms are equal to
\[S^{2}-S(S-R)B+S(1-R),\]
and so \(h(B_{1},B_{2})=1\). Moreover, using the assumption \(a<b\), we find that
* \(h(B_{1},B_{2})<1\) when \(B_{1}=0\) and \(B_{2}>0\), and
* \(h(B_{1},B_{2})>1\) when \(B_{1}>0\) and \(B_{2}=0\).
Hence the function \(h(B_{1},B_{2})\) is increasing in \(B_{1}\) for fixed \(B_{2}\), and decreasing in \(B_{2}\) for fixed \(B_{1}\).⁶ This implies that the function is minimized when \(B_{1}=0\) and \(B_{2}=1\), where its value is
\[h(B_{1}=0,B_{2}=1) =\frac{S^{2}-(b-1)r_{2}S+S(1-R)}{S^{2}-br_{2}(S-R)+S(1-R)}\]
\[=\frac{(ar_{1}+br_{2})(ar_{1}+1-r_{1})}{a(a-1)r_{1}^{2}+a(b-1)r_{ 1}r_{2}+ar_{1}+br_{2}}\]
\[=\rho(a,b,r_{1},r_{2}),\]
as claimed.
Next, we show that the approximation ratio is tight even if we are allowed to set an arbitrary price (i.e., not necessarily \(p_{1}\) or \(p_{2}\)) in the one-price setting. To this end, consider a discrete bimodal distribution where a high probability \(q_{1}\approx 1\) is put on a value \(v_{1}\approx 0\) and a small probability \(q_{2}\approx 0\) is put on a value \(v_{2}\approx 1\).⁷ The values \(v_{1},v_{2}\) and the probabilities are chosen arbitrarily close to 0 and 1 and so that the relation
\[\frac{q_{2}v_{2}}{q_{1}v_{1}}=\frac{1}{S-R}\]
is satisfied.
In the two-price setting, we can set prices \(p_{1}=v_{1}\) and \(p_{2}=v_{2}\) and obtain welfare
\[c_{w}(p_{1}=v_{1},p_{2}=v_{2}) =\frac{ar_{1}(q_{1}v_{1}+q_{2}v_{2})+br_{2}(q_{2}v_{2})}{S-((a-1) r_{1}F(v_{1})+(b-1)r_{2}F(v_{2}))+(1-R)}\]
\[\approx\frac{ar_{1}(q_{1}v_{1}+q_{2}v_{2})+br_{2}(q_{2}v_{2})}{S- (b-1)r_{2}+(1-R)}\]
\[=\frac{q_{2}v_{2}(ar_{1}(S+1-R)+br_{2})}{S-(b-1)r_{2}+(1-R)}.\]
On the other hand, in the one-price setting there are three price ranges that we can pick: \([0,v_{1}]\), \((v_{1},v_{2}]\), and \((v_{2},\infty)\). If we set a price in the range \((v_{2},\infty)\), no job is accepted and the welfare is zero. For each of the other two ranges, setting any price in the range yields the same set of accepted jobs and thus the same welfare. Hence it suffices to consider setting prices \(v_{1}\) and \(v_{2}\). We have
\[c_{w}(p=v_{1})=\frac{S(q_{1}v_{1}+q_{2}v_{2})}{S+1-R}=Sq_{2}v_{2}\]
and
\[c_{w}(p=v_{2})\approx\frac{Sq_{2}v_{2}}{S-(S-R)+(1-R)}=Sq_{2}v_{2}.\]
We obtain the same welfare per time step in either case. It follows that the approximation ratio is at most
\[\frac{c_{w}(p=v_{1})}{c_{w}(p_{1}=v_{1},p_{2}=v_{2})} =\frac{Sq_{2}v_{2}(S-(b-1)r_{2}+(1-R))}{q_{2}v_{2}(ar_{1}(S+1-R)+ br_{2})}\]
\[=\frac{S(ar_{1}+1-r_{1})}{a(a-1)r_{1}^{2}+a(b-1)r_{1}r_{2}+ar_{1} +br_{2}}\]
\[=\rho(a,b,r_{1},r_{2}).\]
Finally, we can obtain analogous results for revenue by essentially repeating the same argument but instead writing \(A_{2}=\frac{(1-F(p_{2}))p_{2}}{(1-F(p_{1}))p_{1}}\) in the proof of Theorem 3.2, and for any convex combination of welfare and revenue by writing \(A_{2}\) as the appropriate convex combination of the two corresponding terms.
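The construction above is straightforward to reproduce numerically. In the sketch below, \(\varepsilon\) plays the role of the small quantities \(q_{2}\) and \(v_{1}\); the values of \(a\), \(b\), \(r_{1}\), \(r_{2}\) and of \(\varepsilon\) are illustrative, and the function `rho` is coded directly from the closed form displayed above.

```python
def rho(a, b, r1, r2):
    """The ratio rho(a, b, r1, r2), as displayed in this appendix."""
    S = a * r1 + b * r2
    return S * (a * r1 + 1 - r1) / (a * (a - 1) * r1 ** 2 + a * (b - 1) * r1 * r2 + a * r1 + b * r2)

def c_w(prices, lengths, probs, values, masses):
    """Formula (1) for a discrete value distribution given by (values, masses)."""
    S, R = sum(l * q for l, q in zip(lengths, probs)), sum(probs)
    F = lambda x: sum(m for v, m in zip(values, masses) if v < x)       # P(value < x)
    tail = lambda x: sum(v * m for v, m in zip(values, masses) if v >= x)
    num = sum(l * q * tail(x) for l, q, x in zip(lengths, probs, prices))
    den = S - sum((l - 1) * q * F(x) for l, q, x in zip(lengths, probs, prices)) + (1 - R)
    return num / den

a, b, r1, r2 = 2, 5, 0.3, 0.4            # illustrative parameters with a < b and r1 + r2 <= 1
S, R = a * r1 + b * r2, r1 + r2
eps = 1e-6
q2, v2 = eps, 1.0
q1 = 1 - q2
v1 = q2 * v2 * (S - R) / q1              # enforces q2 v2 / (q1 v1) = 1 / (S - R)

two_prices = c_w([v1, v2], [a, b], [r1, r2], [v1, v2], [q1, q2])
# In the one-price setting only the prices v1 and v2 matter, as argued above.
one_price = max(c_w([x, x], [a, b], [r1, r2], [v1, v2], [q1, q2]) for x in (v1, v2))
print(one_price / two_prices, rho(a, b, r1, r2))   # the two numbers agree up to O(eps)
```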
## Appendix D Proof of Theorem 3.6
We prove the result for a distribution with discrete support and extend it to continuous and mixed support later. Our approach follows that of Feldman et al. (2015). As job length and value are not independent, assume that there are jobs of classes \(1,2,\dots,n\) and that a job of class \(j\) arrives with probability \(r_{j}\), has length \(a_{j}\) and value per timestep \(v_{j}\). Note that \(a_{j}\) and \(v_{j}\) need not be different for different classes of jobs.
We can write an “expected LP” to upper bound the maximum welfare per timestep as follows:
\[Opt =\max\sum_{j}x_{j}v_{j}a_{j}\mbox{ s.t.}\]
\[x_{j} \leq r_{j}\]
\[\sum_{j}x_{j}a_{j} \leq 1\]
Here, \(Opt\) is the optimal value of the LP, which upper bounds the offline optimal welfare per time step, and the ratio \(x_{j}/r_{j}\) can be thought of as the probability that an arriving job of class \(j\) is accepted. The constraints then say that this probability is at most 1 and that, in expectation, the accepted jobs cannot occupy more time than the machine has available.
Take \(p=Opt/2\), and consider some time \(t\). Let \(y_{t}\) be the probability that the server is occupied at time \(t\) (possibly by the job arriving at time \(t\)). Then the expected revenue at time \(t\) is \(py_{t}=(Opt/2)\cdot y_{t}\). The expected consumer surplus of the job arriving at time \(t\) is at least \(\sum_{j}r_{j}(v_{j}-p)a_{j}(1-y_{t})\), because the probability that the server is unoccupied at time \(t\) (either by the job arriving at time \(t\) or by an earlier job) is a lower bound on the probability that the server was not occupied when the job at time \(t\) arrived. Moreover, since only classes with \(v_{j}\geq p\) are accepted and \(x_{j}\leq r_{j}\), the consumer surplus is at least \(\sum_{j}x_{j}(v_{j}-p)a_{j}(1-y_{t})\), and the constraint \(\sum_{j}x_{j}a_{j}\leq 1\) then gives a lower bound of \((Opt-p)(1-y_{t})=(Opt/2)\cdot(1-y_{t})\). Welfare is the sum of revenue and consumer surplus, so summing over all \(t\) shows that the welfare per time step is at least \(Opt/2\).
If the support of the distribution is not discrete, a similar argument still works. To determine \(Opt\), we order the jobs in decreasing order of value per timestep and take the jobs until the machine is saturated. The rest of the proof then follows in the same way as before, with sums replaced by integrals.
As a further extension, this argument can also be adapted to the multiple server case. We can solve the expected LP for each server individually and compute a price \(p_{i}=Opt_{i}/2\). We can then select whichever of these prices maximizes welfare to recover the \(1/H_{n}\) bound from Theorem 4.2. (For some \(i\), the welfare from the \(i\) servers with the largest values of \(Opt_{i}\) at price \(p_{i}\), which we can lower bound as \(iOpt_{i}/2\), must be at least \(\frac{1}{2H_{n}}\sum_{i}Opt_{i}\).)
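Returning to the single-server LP at the heart of the proof: it has a simple fractional-knapsack structure, so it can be solved greedily by filling the machine with the highest value-per-time-step classes first, exactly as described above for the non-discrete case. The sketch below does this for a small made-up instance and sets the single price \(p=Opt/2\).

```python
def expected_lp(classes):
    """Greedy solution of the expected LP:
    maximize sum_j x_j v_j a_j subject to x_j <= r_j and sum_j x_j a_j <= 1."""
    capacity, opt = 1.0, 0.0
    for r_j, a_j, v_j in sorted(classes, key=lambda c: -c[2]):   # highest value per step first
        x_j = min(r_j, capacity / a_j)
        opt += x_j * v_j * a_j
        capacity -= x_j * a_j
        if capacity <= 0:
            break
    return opt

# Illustrative job classes (r_j, a_j, v_j); the arrival probabilities sum to at most 1.
classes = [(0.2, 4, 1.0), (0.3, 2, 0.6), (0.4, 1, 0.1)]
Opt = expected_lp(classes)
p = Opt / 2                     # the single price used in the proof of Theorem 3.6
print(Opt, p)
```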
## Appendix E Multiple Servers, One Job Length
Assume that at each time step, either zero or one job appears for each server. Server \(j\) receives a job of length \(a\) with probability \(r_{j}\). Suppose that we set a price per time step \(p_{j}\) for jobs on server \(j\). Recall that the value per time step of a job is drawn from a distribution with cumulative distribution function \(F\) and probability density function \(f\).
Using the formula (1) for \(c_{w}\) given in Lemma 3.1, we find that the welfare per time step is
\[e_{w}(p_{1},p_{2},\dots,p_{n}) =\sum_{j=1}^{n}\frac{ar_{j}\int_{x\geq p_{j}}\ell d\mu}{ar_{j}-(a -1)r_{j}F(p_{j})+(1-r_{j})}\]
\[=\sum_{j=1}^{n}\frac{a\int_{x\geq p_{j}}\ell d\mu}{(a-1)(1-F(p_{j }))+\frac{1}{r_{j}}}.\]
If we set the same price \(p=p_{1}=\dots=p_{n}\) for different servers, our welfare per time step becomes
\[e_{w}(p)=\sum_{j=1}^{n}\frac{a\int_{x\geq p}\ell d\mu}{\frac{1}{r_{j}}+(a-1)(1 -F(p))}.\]
Similarly, we have the formulas for revenue per time step
\[e_{r}(p_{1},p_{2},\dots,p_{n}) =\sum_{j=1}^{n}\frac{a(1-F(p_{j}))p_{j}}{(a-1)(1-F(p_{j}))+\frac{ 1}{r_{j}}}\]
and
\[e_{r}(p)=\sum_{j=1}^{n}\frac{a(1-F(p))p}{(a-1)(1-F(p))+\frac{1}{r_{j}}}.\]
We will compare the welfare and revenue that can be obtained by setting a single price against setting several prices. Similarly to Section 4, we will show that if at least one dimension of the parameters is not too extreme, e.g., the number of servers, the job length, or the probabilities of jobs occurrence are bounded, then we can obtain a reasonable approximation of the welfare and revenue in the multi-price setting by setting just one price.
Theorem E.1.: _For any prices \(p_{1},p_{2},\dots,p_{n}\) that we set in the multi-price setting, we can achieve a welfare (resp. revenue, or any convex combination of welfare and revenue) approximation of at least_
\[\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M},\frac{1}{a}\right)\]
_in the one-price setting, where \(H_{n}=1+\frac{1}{2}+\dots+\frac{1}{n}\approx\ln n\) is the \(n\)th Harmonic number and \(M=\max_{i,j}\frac{r_{i}}{r_{j}}\)._
In particular, if all probabilities of job occurrence are bounded below by \(c\leq 1\), then \(\max_{i,j}\frac{r_{i}}{r_{j}}\leq\frac{1}{c}\). The theorem then implies that the approximation ratio is also at least \(\frac{\frac{1}{c}-1}{\frac{1}{c}\ln\left(\frac{1}{c}\right)}=\frac{c-1}{\ln c}\).
Proof.: We first consider welfare. We will work with the ratio
\[\frac{\max(e_{w}(p_{1}),\dots,e_{w}(p_{n}))}{e_{w}(p_{1},\dots,p_{n})}\]
and try to minimize it.
Writing \(A_{j}=\frac{\int_{x\geq p_{j}}\ell d\mu}{\int_{x\geq p_{1}}\ell d\mu}\) (in particular, \(A_{1}=1\)), \(B_{j}=F(p_{j})\), and \(R_{j}=1/r_{j}\) for \(1\leq j\leq n\), the ratio to minimize becomes
\[g(A_{1},\dots,A_{n},B_{1},\dots,B_{n}):=\max_{j=1}^{n}\left(\sum_{i=1}^{n}\frac{A_{j}}{R_{i}+(a-1)(1-B_{j})}\right)\cdot\frac{1}{\sum_{i=1}^{n}\frac{A_{i}}{R_{i}+(a-1)(1-B_{i})}}.\]
Let \(U_{j}=\sum_{i=1}^{n}\frac{1}{R_{i}+(a-1)(1-B_{j})}\) for \(1\leq j\leq n\), and \(T=\sum_{i=1}^{n}\frac{A_{i}}{R_{i}+(a-1)(1-B_{i})}\). We have
\[g(A_{1},\dots,A_{n},B_{1},\dots,B_{n})=\frac{\max_{j=1}^{n}(A_{j}U_{j})}{T}.\]
_Case 1_: The max function outputs the first term, \(A_{1}U_{1}\).
Taking into account that \(A_{1}=1\), we want to minimize the ratio
\[\frac{U_{1}}{T}=\left(\sum_{i=1}^{n}\frac{1}{R_{i}+(a-1)(1-B_{1})}\right)\cdot \frac{1}{\sum_{i=1}^{n}\frac{A_{i}}{R_{i}+(a-1)(1-B_{i})}},\]
where \(A_{i}\leq\frac{U_{1}}{U_{i}}\) for all \(i\geq 2\).
For any \(A_{j}\), if we fix the remaining \(A_{i}\) and all \(B_{i}\), this is a decreasing function in \(A_{j}\). To minimize it, we should set \(A_{i}=\frac{U_{1}}{U_{i}}\) for all \(i\geq 2\). The ratio becomes
\[\frac{1}{\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{R_{j}+(a-1)(1-B_{j})}{R_{i }+(a-1)(1-B_{j})}}}.\]
_Case 2_: The max function outputs the \(j\)th term, \(A_{j}U_{j}\), for some \(j\geq 2\).
This means that \(A_{i}\leq\frac{A_{j}U_{j}}{U_{i}}\) for all \(i\geq 2\) with \(i\neq j\), and \(A_{j}\geq\frac{U_{1}}{U_{j}}\). We want to minimize the ratio
\[\frac{A_{j}U_{j}}{T}=\left(\sum_{i=1}^{n}\frac{A_{j}}{R_{i}+(a-1)(1-B_{j})} \right)\cdot\frac{1}{\sum_{i=1}^{n}\frac{A_{i}}{R_{i}+(a-1)(1-B_{i})}}.\]
For any \(i\geq 2\) with \(i\neq j\), if we fix all terms \(A_{k}\) except \(A_{i}\) and fix all \(B_{k}\), then this function is decreasing in \(A_{i}\). To minimize it, we should set \(A_{i}=\frac{A_{j}U_{j}}{U_{i}}\). The resulting function is increasing in \(A_{j}\) if we fix all \(B_{k}\), so we should set \(A_{j}=\frac{U_{1}}{U_{j}}\). We obtain the same ratio as in Case 1.
Hence in either case, we are left with minimizing the function
\[h(B_{1},\dots,B_{n}):=\frac{1}{\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{R_{j }+(a-1)(1-B_{j})}{R_{i}+(a-1)(1-B_{j})}}}.\]
In other words, the reciprocal of this function, which we want to maximize, is
\[h_{0}(B_{1},\dots,B_{n}) :=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{R_{j}+(a-1)(1-B_{j}) }{R_{i}+(a-1)(1-B_{j})}}.\]
Assume without loss of generality that \(R_{1}\geq R_{2}\geq\ldots\geq R_{n}\). For \(1\leq j\leq n\), write \(h_{j}=\frac{1}{\sum_{i=1}^{n}\frac{R_{j}+(a-1)(1-B_{j})}{R_{i}+(a-1)(1-B_{j})}}\), i.e., the \(j\)th term of the sum that constitutes \(h_{0}\). The last \(n+1-j\) terms of the sum in the denominator of \(h_{j}\) are at least 1 since \(R_{j}\geq R_{i}\) for \(i\geq j\). This implies that \(h_{j}\leq\frac{1}{n+1-j}\), and therefore
\[h_{0}(B_{1},\dots,B_{n}) =h_{1}+h_{2}+\dots+h_{n}\]
\[\leq\frac{1}{n}+\frac{1}{n-1}+\dots+1=H_{n}.\]
Equivalently, \(h(B_{1},\dots,B_{n})\geq\frac{1}{H_{n}}\).
Next, since \(M=\max_{i,j}\frac{r_{i}}{r_{j}}\), we have \(\frac{R_{i}}{R_{j}}\leq M\) for all \(i,j\). Note that
\[\frac{R_{i}+(a-1)(1-B_{i})}{R_{j}+(a-1)(1-B_{i})}\geq\frac{1}{M}\]
for all \(i,j\). This implies that
\[h_{j}\leq\frac{1}{n+1-j+\frac{j-1}{M}}\]
for all \(1\leq j\leq n\), and therefore
\[h_{0}(B_{1},\dots,B_{n})\leq\sum_{j=1}^{n}\frac{1}{n-(j-1)\left(1-\frac{1}{M}\right)}\]
\[=\sum_{j=1}^{n}\frac{1}{n}\cdot\frac{1}{1-\frac{j-1}{n}\left(1- \frac{1}{M}\right)}.\]
The last sum is the left Riemann sum of the function \(\frac{1}{1-x\left(1-\frac{1}{M}\right)}\). This function is increasing in \(x\), so its integral between 0 and 1 is an upper bound for our sum. Hence we have
\[h_{0}(B_{1},\dots,B_{n})\leq\int_{0}^{1}\frac{1}{1-x\left(1-\frac{1}{M}\right) }dx=\frac{M\ln M}{M-1}.\]
Equivalently, \(h(B_{1},\dots,B_{n})\geq\frac{M-1}{M\ln M}\).
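The Riemann-sum step is easy to check numerically; the values of \(n\) and \(M\) below are arbitrary.

```python
import math

def left_riemann_sum(n, M):
    """sum_{j=1}^n (1/n) / (1 - ((j-1)/n)(1 - 1/M)), which bounds h_0 from above."""
    return sum(1.0 / n / (1.0 - (j - 1) / n * (1.0 - 1.0 / M)) for j in range(1, n + 1))

n, M = 50, 8.0
print(left_riemann_sum(n, M), M * math.log(M) / (M - 1))   # the sum stays below the integral
```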
Finally, one can check that since \(R_{i}\geq 1\) and \(1-B_{i}\leq 1\) for all \(i\),
\[\frac{R_{i}+(a-1)(1-B_{i})}{R_{j}+(a-1)(1-B_{i})}\geq\frac{1}{a}\cdot\frac{R_{ i}}{R_{j}}\]
for all \(i,j\). It follows that
\[h_{0}(B_{1},\dots,B_{n}) \leq\sum_{i=1}^{n}\frac{1}{\sum_{j=1}^{n}\frac{1}{a}\cdot\frac{R_ {i}}{R_{j}}}\]
\[=a\sum_{i=1}^{n}\frac{1}{\sum_{j=1}^{n}\frac{R_{i}}{R_{j}}}\]
\[=a.\]
Equivalently, \(h(B_{1},\dots,B_{n})\geq\frac{1}{a}\).
Combining the three bounds, we have
\[h(B_{1},\dots,B_{n})\geq\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M},\frac{1}{ a}\right),\]
as desired.
Finally, we can obtain analogous results for revenue by essentially repeating the same argument but instead writing \(A_{j}=\frac{(1-F(p_{j}))p_{j}}{(1-F(p_{1}))p_{1}}\) for \(1\leq j\leq n\), and for any convex combination of welfare and revenue by writing \(A_{j}\) as the appropriate convex combination of the two corresponding terms. ∎
We now address the tightness of the approximation ratio. The upper bound \(H_{n}\) for \(h_{0}\) is the best possible in the sense that there exist values \(B_{1},\dots,B_{n},R_{1},\dots,R_{n},a\) such that \(h_{0}\) gets arbitrarily close to \(H_{n}\). In particular, take \(R_{j}=c^{2(n-j)}\) and \(B_{j}=1-\frac{c^{2(n-j)+1}}{a-1}\) for some large constant \(c\). We have
\[h_{0}(B_{1},\dots,B_{n}) =\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{R_{j}+(a-1)(1-B_{j})} {R_{i}+(a-1)(1-B_{j})}}\]
\[=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{c^{2(n-j)}+c^{2(n-j)+ 1}}{c^{2(n-i)}+c^{2(n-j)+1}}}\]
\[=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{1+c}{c^{2(j-i)}+c}}.\]
Taking \(c\rightarrow\infty\), we find that for \(j\leq i\), the fraction \(\frac{1+c}{c^{2(j-i)}+c}\) converges to 1, while for \(j>i\), it converges to 0. Hence \(h_{j}\rightarrow\frac{1}{n+1-j}\), and consequently \(h_{0}\to H_{n}\).
While this argument does not directly imply the tightness of the approximation ratio, we see it as strong evidence for that claim.
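The construction can also be checked with exact rational arithmetic. In the sketch below, the choice \(a=c^{2n-1}+1\) is only one convenient way to make every \(B_{j}\) lie in \([0,1]\); the values of \(n\) and \(c\) are illustrative.

```python
from fractions import Fraction

def h0(R, B, a):
    """h_0(B_1, ..., B_n) from this appendix, for given R_j = 1/r_j and a single job length a."""
    n = len(R)
    return sum(
        1 / sum((R[j] + (a - 1) * (1 - B[j])) / (R[i] + (a - 1) * (1 - B[j])) for i in range(n))
        for j in range(n))

n = 4
H_n = sum(Fraction(1, k) for k in range(1, n + 1))
for c in (2, 10, 100):
    a = c ** (2 * n - 1) + 1        # large enough that every B_j below lies in [0, 1]
    R = [c ** (2 * (n - j)) for j in range(1, n + 1)]
    B = [Fraction(a - 1 - c ** (2 * (n - j) + 1), a - 1) for j in range(1, n + 1)]
    print(c, float(h0(R, B, a)), float(H_n))   # h_0 approaches H_4 = 25/12 as c grows
```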
## Appendix F Proof of Theorem 4.1
We first consider welfare. We will work with the ratio
\[\frac{\max(d_{w}(p_{1}),\dots,d_{w}(p_{n}))}{d_{w}(p_{1},\dots,p_{n})}\]
and try to minimize it.
Writing \(A_{j}=\frac{\int_{x\geq p_{j}}\ell d\mu}{\int_{x\geq p_{1}}\ell d\mu}\) (in particular, \(A_{1}=1\)), \(B_{j}=F(p_{j})\), and \(s_{j}=\frac{1}{S_{j}}\) for \(1\leq j\leq n\), the ratio to minimize becomes
\[g(A_{1},\dots,A_{n},B_{1},\dots,B_{n}):=\\ \max_{j=1}^{n}\left(\sum_{i=1}^{n}\frac{A_{j}}{1-(1-Rs_{i})B_{j}+(1-R)s_{i}} \right)\cdot\frac{1}{\sum_{i=1}^{n}\frac{A_{i}}{1-(1-Rs_{i})B_{i}+(1-R)s_{i}}}.\]
Let \(U_{j}=\sum_{i=1}^{n}\frac{1}{1-(1-Rs_{i})B_{j}+(1-R)s_{i}}\) for \(1\leq j\leq n\), and \(T=\sum_{i=1}^{n}\frac{A_{i}}{1-(1-Rs_{i})B_{i}+(1-R)s_{i}}\). We have
\[g(A_{1},\dots,A_{n},B_{1},\dots,B_{n})=\frac{\max_{j=1}^{n}(A_{j}U_{j})}{T}.\]
_Case 1_: The max function outputs the first term, \(A_{1}U_{1}\).
Taking into account that \(A_{1}=1\), we want to minimize the ratio
\[\frac{U_{1}}{T}=\left(\sum_{i=1}^{n}\frac{1}{1-(1-Rs_{i})B_{1}+(1-R)s_{i}} \right)\cdot\frac{1}{\sum_{i=1}^{n}\frac{A_{i}}{1-(1-Rs_{i})B_{i}+(1-R)s_{i}}},\]
where \(A_{i}\leq\frac{U_{1}}{U_{i}}\) for all \(i\geq 2\).
For any \(A_{j}\), if we fix the remaining \(A_{i}\) and all \(B_{i}\), this is a decreasing function in \(A_{j}\). To minimize it, we should set \(A_{i}=\frac{U_{1}}{U_{i}}\) for all \(i\geq 2\). The ratio becomes
\[\frac{1}{\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{1-(1-Rs_{j})B_{j}+(1-R)s_{ j}}{1-(1-Rs_{i})B_{j}+(1-R)s_{i}}}}.\]
_Case 2_: The max function outputs the \(j\)th term, \(A_{j}U_{j}\), for some \(j\geq 2\).
This means that \(A_{i}\leq\frac{A_{j}U_{j}}{U_{i}}\) for all \(i\geq 2\) with \(i\neq j\), and \(A_{j}\geq\frac{U_{1}}{U_{j}}\). We want to minimize the ratio
\[\frac{A_{j}U_{j}}{T}=\left(\sum_{i=1}^{n}\frac{A_{j}}{1-(1-Rs_{i})B_{j}+(1-R)s _{i}}\right)\cdot\frac{1}{\sum_{i=1}^{n}\frac{A_{i}}{1-(1-Rs_{i})B_{i}+(1-R)s_ {i}}}.\]
For any \(i\geq 2\) with \(i\neq j\), if we fix all terms \(A_{k}\) except \(A_{i}\) and fix all \(B_{k}\), then this function is decreasing in \(A_{i}\). To minimize it, we should set \(A_{i}=\frac{A_{j}U_{j}}{U_{i}}\). The resulting function is increasing in \(A_{j}\) if we fix all \(B_{k}\), so we should set \(A_{j}=\frac{U_{1}}{U_{j}}\). We obtain the same ratio as in Case 1.
Hence in either case, we are left with minimizing the function
\[h(B_{1},\dots,B_{n}):=\frac{1}{\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{1-(1 -Rs_{j})B_{j}+(1-R)s_{j}}{1-(1-Rs_{i})B_{j}+(1-R)s_{i}}}}.\]
In other words, the reciprocal of this function, which we want to maximize, is
\[h_{0}(B_{1},\dots,B_{n}) :=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{1-(1-Rs_{j})B_{j}+(1 -R)s_{j}}{1-(1-Rs_{i})B_{j}+(1-R)s_{i}}}\]
\[=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{1-B_{j}+s_{j}(1-R+RB_ {j})}{1-B_{j}+s_{i}(1-R+RB_{j})}}.\]
Assume without loss of generality that \(s_{1}\geq s_{2}\geq\ldots\geq s_{n}\). For \(1\leq j\leq n\), write \(h_{j}=\frac{1}{\sum_{i=1}^{n}\frac{1-B_{j}+s_{j}(1-R+RB_{j})}{1-B_{j}+s_{i}(1- R+RB_{j})}}\), i.e., the \(j\)th term of the sum that constitutes \(h_{0}\). The last \(n+1-j\) terms of the sum in the denominator of \(h_{j}\) are at least 1 since \(s_{j}\geq s_{i}\) for \(i\geq j\). This implies that \(h_{j}\leq\frac{1}{n+1-j}\), and therefore
\[h_{0}(B_{1},\dots,B_{n}) =h_{1}+h_{2}+\dots+h_{n}\]
\[\leq\frac{1}{n}+\frac{1}{n-1}+\dots+1=H_{n}.\]
Equivalently, \(h(B_{1},\dots,B_{n})\geq\frac{1}{H_{n}}\).
Next, since \(M=\max_{i,j}\frac{S_{i}}{S_{j}}\), we have \(\frac{s_{i}}{s_{j}}\leq M\) for all \(i,j\). Note that
\[\frac{1-(1-Rs_{j})B_{j}+(1-R)s_{j}}{1-(1-Rs_{i})B_{j}+(1-R)s_{i}}\geq\frac{1}{M}\]
for all \(i,j\), since both the numerator and the denominator are positive, and when seen as a function of \(B_{j}\), the expression is monotonic and takes on values at least \(\frac{1}{M}\) at \(B_{j}=0\) and \(B_{j}=1\). This implies that
\[h_{j}\leq\frac{1}{n+1-j+\frac{j-1}{M}}\]
for all \(1\leq j\leq n\), and therefore
\[h_{0}(B_{1},\dots,B_{n})\leq\sum_{j=1}^{n}\frac{1}{n-(j-1)\left(1-\frac{1}{M}\right)}\]
\[=\sum_{j=1}^{n}\frac{1}{n}\cdot\frac{1}{1-\frac{j-1}{n}\left(1- \frac{1}{M}\right)}.\]
The last sum is the left Riemann sum of the function \(\frac{1}{1-x\left(1-\frac{1}{M}\right)}\). This function is increasing in \(x\), so its integral between 0 and 1 is an upper bound for our sum. Hence we have
\[h_{0}(B_{1},\dots,B_{n})\leq\int_{0}^{1}\frac{1}{1-x\left(1-\frac{1}{M}\right) }dx=\frac{M\ln M}{M-1}.\]
Equivalently, \(h(B_{1},\dots,B_{n})\geq\frac{M-1}{M\ln M}\).
Combining the two bounds, we have
\[h(B_{1},\dots,B_{n})\geq\max\left(\frac{1}{H_{n}},\frac{M-1}{M\ln M}\right),\]
as desired.
Finally, we can obtain analogous results for revenue by essentially repeating the same argument but instead writing \(A_{j}=\frac{(1-F(p_{j}))p_{j}}{(1-F(p_{1}))p_{1}}\) for \(1\leq j\leq n\), and for any convex combination of welfare and revenue by writing \(A_{j}\) as the appropriate convex combination of the two corresponding terms.
With the proof of the theorem complete, we now address the tightness of the approximation ratio. The upper bound \(H_{n}\) for \(h_{0}\) is the best possible in the sense that there exist values \(B_{1},\dots,B_{n},s_{1},\dots,s_{n},R\) such that \(h_{0}\) gets arbitrarily close to \(H_{n}\). In particular, take \(s_{j}=c^{-2j}\) and \(B_{j}=1-c^{-2j+1}\) for some large constant \(c\), and \(R=1\). We have
\[h_{0}(B_{1},\dots,B_{n}) =\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{1-B_{j}+B_{j}s_{j}}{1 -B_{j}+B_{j}s_{i}}}\]
\[=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{c^{-2j+1}+c^{-2j}-c^{ -4j+1}}{c^{-2j+1}+c^{-2i}-c^{-2j-2i+1}}}\]
\[=\sum_{j=1}^{n}\frac{1}{\sum_{i=1}^{n}\frac{c+1-c^{-2j+1}}{c+c^{2 (j-i)}-c^{-2i+1}}}.\]
Taking \(c\rightarrow\infty\), we find that for \(j\leq i\), the fraction \(\frac{c+1-c^{-2j+1}}{c+c^{2(j-i)}-c^{-2i+1}}\) converges to 1, while for \(j>i\), it converges to 0. Hence \(h_{j}\rightarrow\frac{1}{n+1-j}\), and consequently \(h_{0}\to H_{n}\).
While this argument does not directly imply the tightness of the approximation ratio, we see it as strong evidence for that claim.
|
0901.1618 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 43513,
"num_imgs": 5,
"llama3_tokens_count": 10793
} | [
"content_image/0901.1618/x1.png",
"content_image/0901.1618/x2.png",
"content_image/0901.1618/x3.png",
"content_image/0901.1618/x4.png",
"content_image/0901.1618/x6.png"
] | ###### Abstract
The properties of the Extreme Horizontal Branch stars are quite well understood, but much uncertainty surrounds the many paths that bring a star to this peculiar configuration. Asteroseismology of pulsating EHB stars has been performed on a number of objects, bringing us to the stage where comparisons of the inferred properties with evolutionary models becomes feasible. In this review I will outline our current understanding of the formation and evolution of these stars, with emphasis on recent progress. The aim is to show how the physical parameters derived by asteroseismology can enable the discrimination between different evolutionary models.
## Asteroseismology and evolution of EHB stars
R. H. Østensen
Instituut voor Sterrenkunde, K.U.Leuven, Celestijnenlaan 200D, B-3001 Leuven, Belgium
Individual Objects: V361 Hya, V1093 Her, DW Lyn, V391 Peg, Balloon 090100001, KL UMa, NY Vir, V338 Ser, LS IV–14\({}^{\circ}\)116, HD 188112
### Introduction
Let me first clarify the basic terminology with respect to EHB stars, which can sometimes be confusing as the terms EHB and sdB are often used to label the same stars. The terms Extended Horizontal Branch or Extreme Horizontal Branch have been used interchangeably to describe the sequence of stars observed to lie bluewards of the normal Horizontal Branch stars in globular clusters, and also in temperature/gravity plots of hot field stars. The EHB feature was first described and associated with field sdB and sdO stars by Greenstein & Sargent (1974). Now, EHB stars are taken to mean core helium burning stars with an envelope too thin to sustain hydrogen burning. It is also understood that not all sdB stars are EHB stars. In particular, if a star loses its envelope without the core reaching the mass required for the helium flash, its cooling track can take it through the sdB domain on its way to become a helium core WD. The sdB/sdO terms are used to describe the spectroscopic appearance and do not presume any particular evolutionary stage. Several subclassification schemes have been used, but most common nowadays is the one introduced by Moehler et al. (1990). This scheme names as sdB stars those of the hot subdwarfs showing He I absorption lines, as sdO stars those showing He II, and as sdOB stars those showing features of both. Additionally the terms He-sdB and He-sdO are used to describe stars in which the helium lines dominate over the Balmer lines. The EHB forms a sequence of stars from the coolest sdBs to the sdOB domain, and it is clear that most stars given this classification are in fact EHB stars. For the He-rich objects a coherent picture has yet to emerge.
The current canonical picture of the EHB stars was mostly established by Heber (1986), in which the EHB stars are helium core burning stars with masses close to the core helium flash mass of \(\sim\)0.47 M\({}_{\odot}\), and an extremely thin hydrogen envelope, too thin to sustain hydrogen burning (no more than 1% by mass). It is understood that they are post red giant branch (RGB) stars that have started core helium burning in a helium flash before or after the envelope was removed by any of several possible mechanisms. The lifetime of EHB stars from the zero-age EHB (ZAEHB) to the terminal age EHB (TAEHB), when core helium runs out, takes between 100 and 150 Myrs. The post-EHB evolution will take them through the sdO domain directly to the white dwarf (WD) cooling curve without ever passing through a second giant stage. The time they spend shell helium burning before leaving the sdO domain can be up to 20 Myrs.
Although the future evolution of EHB stars after core He-exhaustion has always been presumed quite simple, the paths that lead to the EHB have always been somewhat mysterious. New hope that the evolutionary paths leading to the formation of EHB stars can be resolved has been kindled by the discovery that many of them pulsate, which has opened up the possibility of probing their interiors using asteroseismological methods. These pulsators are known as sdBV stars, and several distinct subclasses are now recognised (see the _Asteroseismology_ section below). But in order to understand what questions asteroseismology can ask and answer, it is essential to understand the different paths that produce EHB stars. Only by understanding the evolutionary history of these stars is it possible to construct realistic models of their interiors which are needed for asteroseismology to be able to distinguish between the different formation scenarios. For this reason we will review the essential points of the _Formation and Evolution_ first, after starting with a look at the observed properties of the hot subdwarf population in _The Observed EHB_ below.
Besides the spectacularly rapid pulsations in the EHB stars, another factor that has contributed to the recent burst in interest in EHB stars is the realisation that these stars are the main contributor to the UV-upturn phenomenon observed in elliptical galaxies. An excellent review of the UV upturn and the binary population synthesis models required to model this phenomenon, can be found in Podsiadlowski et al. (2008). For a more in-depth review of the properties of all hot subdwarf stars, the exhaustive review by Heber (2009) is recommended.
### The Observed EHB
Hot subdwarf stars were found in the galactic caps already by Humason & Zwicky (1947). By the time Greenstein & Sargent (1974) wrote their seminal paper, the number of such faint blue stars had grown to 189, permitting a systemic study of the population. The PG survey (Green et al. 1986), which covered more than 10 000 square degrees at high galactic latitudes, found that of 1874 UV-excess objects detected more than 1000 were hot subdwarfs, so these stars dominate the population of faint blue stars down to the PG survey limit (\(B\) = 16.5). Together with the large sample of subdwarfs detected in the HS survey and analysed by Edelmann et al. (2003), these have provided a rich source of hot subdwarfs for observers to follow up, and new discoveries are still being made. The recent Sloan Digital Sky Survey (SDSS, Stoughton et al. 2002), also contains spectra of more than 1000 hot subdwarfs, but as the SDSS reaches much deeper than the PG survey, WD stars start to dominate around about \(B\) = 18 as the thickness of the galactic disk is reached. The Subdwarf Database (Østensen 2006) catalogs about 2500 hot subdwarfs, with extensive references to the available literature.
<figure><img src="content_image/0901.1618/x1.png"><figcaption>Figure 1: The EHB in the Teff–logg plane as observed by the Bok-Green survey(Green et al. 2008). The symbols mark observed stars with the size indicatingthe helium abundance. The theoretical zero age HeMS is shown for a wide rangeof masses. Models from Paczyński (1971) with masses of 0.5, 0.7, 0.85, 1, 1.5and 2 M⊙ are marked with ∗ symbols (starting from low Teff). More recentmodels from Kawaler & Hostler (2005) are shown for M∗ = 0.41, 0.43, … 0.57 M⊙are marked with + symbols, and the zero age and terminal age EHB for the 0.47M⊙ core models are also drawn. For the latter, four evolutionary tracks withdifferent envelope thicknesses logMe/M∗ = -3.5,-3,-2.5,-2 are drawn (startingfrom high logg).</figcaption></figure>
Several surveys have attempted to tackle the question of the binary frequency on the EHB, but the matter is complicated by the many different types of systems EHB stars are found in. EHB stars with FGK companions are easily detected from their double-lined spectra or from IR excess. But EHB stars with WD or M-dwarf companions show no such features. When the orbital periods are sufficiently short, these systems can easily be revealed from their RV variations. Using the RV method, Maxted et al. (2001) targeted 36 EHB stars and found 21 binaries, all with periods less than 30 days. This gave a fraction of short period binaries of 60 \(\pm\) 8 %. Other surveys have found smaller fractions, but they have not constrained the sample to focus strictly on the EHB. From high-resolution VLT spectra of 76 sdB stars from the SPY survey Lisker et al. (2005) found that 24 showed the signature of an FGK companion, none of which show any RV variability. Napiwotzki et al. (2005) reported that of 46 sdB stars in the same sample, 18 (39%) were RV variable. Clearly, the binary fraction in EHB stars is much higher than for normal stars, but an accurate number has yet to be established.
Most recently, Green et al. (2008) have presented a uniform high signal-to-noise low-resolution survey of a large sample including most known hot subdwarf stars brighter than \(V\) = 14.2, using the university of Arizona 2.3 m Bok telescope (hereafter referred to as the Bok-Green or BG survey). From this large sample the clearest picture of the EHB to date emerges (Fig. 1 and 2). Most stars in the diagram are clearly well bound by EHB models for a narrow mass distribution. Most of the remaining stars are consistent with post-EHB models, but could also fit core helium burning objects with higher than canonical masses. The most helium rich objects, however, appear to form their own sequence, which cannot be explained by canonical EHB models. The tail of the regular horizontal branch reaches into the diagram from the upper right, but is separated from the EHB by a substantial gap, as first noted by Newell (1973). Although the details of this survey are still under analysis, several new features have been noted. The sequence of He-rich objects around 40 kK is not compatible with current evolutionary scenarios, since post-EHB and post-RGB objects pass too rapidly through this region of the \(T_{eff}\)–\(\log g\) plane to produce the observed clustering, but the _late hot flasher_ scenario (discussed in the next section) holds some promise.
The large group of stars below the helium main sequence (HeMS) is more problematic as no type of star should be able to stay in this position in the \(T_{eff}\)–\(\log g\) plane more than briefly, and no clustering should occur. The feature was also noted by Stroeer et al. (2007), but it cannot be ruled out that this is an artifact of the models, as many of these stars appear to have significant amounts of CNO processed material in their atmospheres, and the NLTE models used do not account for this. Another remarkable feature appears when looking at the distribution of short period binaries in the \(T_{eff}\)–\(\log g\) plane (Fig. 2). In particular, the incidence of such binaries appears to be much smaller at the hot, high gravity end of the EHB strip than among the cooler subdwarfs. This can be understood in terms of the relative efficiency of the common envelope ejection channel (CE, see next section) that produces short period binaries versus the other channels producing long period systems or single stars. It would appear that the CE channel is significantly less effective in removing the envelope than the other channels.
<figure><img src="content_image/0901.1618/x2.png"><figcaption>Figure 2: The same as Fig. 1, but with the symbol size indicating thedispersion in radial velocity. The EHB stars with the highest velocityvariations appear to be concentrated at lower gravities on the EHB. sdB+FGKstars are not included here, due to difficulties in reliably disentanglingsuch composite spectra.</figcaption></figure>
### Formation and evolution
As binary interactions are a key for understanding the formation of EHB stars, I will attempt a short introduction here. Mass transfer during close binary evolution is well understood, although there are still some unknown factors. While the full picture is exceedingly complex, and would take far too much time to describe here, I will try to outline some of the most important possibilities.
#### Losing the Envelope
There are two fundamental evolutionary paths, and which one a system enters depends only on the mass ratio of the system. If the expanding mass donor is more massive than the accretor, the orbit will shrink catastrophically and the system enters a common envelope (CE) phase. As the orbit shrinks further due to friction, orbital energy is deposited in the envelope, spinning it up. When sufficient energy is deposited the envelope is ejected. Stars with an initial main sequence (MS) mass below about 1.8 M\({}_{\odot}\) can ignite helium in a core flash before the tip of the RGB, and if the envelope is ejected at the right time the result is an EHB star with a mass close to the flash mass of \(\sim\)0.47 M\({}_{\odot}\), and a very close low mass companion. If the envelope is ejected before the core reaches the required mass, the core never ignites helium and the star will not settle on the EHB, but continues to contract and ends up as a helium core white dwarf. The close sdB+WD binary HD 188112 (Heber et al. 2003) is a particular point case, as the Hipparcos parallax together with the observed spectroscopic surface gravity clearly constrain the mass of the subdwarf component to be below the core helium burning limit. If the MS mass is higher than about 2.0 M\({}_{\odot}\) the star will ignite helium non-degeneratively well before the core reaches the mass required for the helium flash. If the envelope is subsequently ejected on the tip of the RGB the outcome would be an EHB star which could have a mass as low as 0.33 M\({}_{\odot}\).
On the other hand, if the companion is more massive than the red giant donor filling its Roche lobe, the orbit expands and no CE is formed. In this stable Roche lobe overflow (RLOF) scenario the orbital period can end up as long as 2000 days. As with CE ejection, if the red giant starts out with a mass below the mass required for the helium flash to occur, the mass transfer must happen close to the tip of the RGB, and the core of the giant becomes an EHB star with a mass close to 0.47 M\({}_{\odot}\). If it is too massive, non-degenerate helium burning starts earlier, and the result is an sdB star with a mass between 0.33 and 1.1 M\({}_{\odot}\) (Han et al. 2000).
#### The Problematic Singles
A significant number of sdB stars are definitely single stars, and their formation is the most problematic and controversial. While post-CE systems leave behind a close binary that is easily detectable from the radial velocity (RV) variations, post-RLOF binaries have such long periods that it requires very long term efforts with high precision spectroscopy to detect them. Up to now there are no detections of any such orbits. However, long term asteroseismic monitoring can detect orbitally induced variations in the pulsation period with much higher precision than can be done from spectroscopy (Silvotti, these proceedings). The clearest case yet is V391 Peg where the modulation of the pulsations are consistent with a planet with a mass (\(M\sin i\)) of 3.2 \(M_{\mathrm{Jupiter}}\) in an orbit with period of 1 170 days (Silvotti et al. 2007). While the planet might have entered the outer layers of the envelope of the red giant before the envelope was lost, the current orbit is too wide for it to have been responsible for the actual envelope ejection.
Several scenarios have been proposed that may produce single EHB stars. It is well known that RGB stars lose significant mass in the form of a stellar wind as they expand and their surface gravity becomes extremely low. D’Cruz et al. (1996) computed evolutionary models of RGB stars with mass loss parameterised by the Reimers efficiency \(\eta_{R}\). They found that the observed distributions of HB and EHB stars can be explained “_so long as nature provides a broad enough distribution in \(\eta_{R}\)_”. However, the actual physics behind the large variation in the mass loss efficiency remains unexplained. Another possibility is the merger of two He-core WD stars, first proposed by Webbink (1984). Saio & Jeffery (2000) have shown that models for such a merger can successfully predict the behaviour of the pulsating helium star V652 Her, demonstrating the feasibility of the merger scenario. Such extreme helium stars will evolve to become hot helium rich subdwarfs located close to the HeMS. However, a remaining problem with merger models is that they invariably leave behind rapidly rotating objects. So far, no single hot subdwarf has demonstrated more than moderate rotational velocities from high resolution spectroscopy.
Another possibility is that CE ejection can be triggered by a giant planet that evaporates in the process (Soker 1998). Nelemans & Tauris (1998) demonstrated in the context of white dwarf evolution that there are clear domains in initial orbital period versus planetary mass, where the planet ejects the envelope and is disrupted as it fills its own Roche-lobe after the spiral in. The final rotation period of the remaining helium core is not affected by this process, as the planet transfers almost all of its angular momentum to the envelope before its ejection. A final possibility for the formation of single hot subdwarfs, also noted by Nelemans & Tauris (1998) in the context of formation of undermassive white dwarfs, is a variation of the RLOF mechanism. If the envelope is transferred onto an accretor that is already a massive white dwarf, an asymmetric accretion induced collapse may produce a high velocity neutron star which escapes the system. If the companion is sufficiently massive to explode as a supernova, Marietta et al. (2000) have computed that the explosion itself imparts 1000 times more energy to the envelope than its binding energy, easily stripping the giant to the core. If the core is massive enough for helium burning, and the SN explosion sufficiently asymmetric, the remnant of the mass donor would end up as a single EHB star. In both cases the disruption of such a binary system would leave the EHB star with an unusual galactic orbit, which should be observable at least in a sufficiently large sample.
#### To Flash or Not to Flash
If an RGB star loses its envelope before the core has reached the mass required for the helium flash, the core will contract and heat up, before cooling as a helium core WD. The tracks calculated for such flashless post-RGB evolution cover a wide span in temperatures, with models around 0.2 M\({}_{\odot}\) passing through the cool end of the EHB, and the remnants with masses close to the helium flash mass reaching temperatures up to 100 kK (Driebe et al. 1999).
A borderline case exists when the mass is just on the limit for the helium core flash to occur. Then the flash can happen after the RGB stage, while the core is either on its way to or on the actual WD cooling curve. Such models are known as hot flashers, and the eventual outcome depends on the exact stage at which the helium flash occurs. If ignition occurs before the turning point on the WD cooling track (early hot flashers), the remaining H-burning shell produces a sufficient entropy barrier to prevent the convection zone produced by the helium flash from reaching the surface (Iben 1976). But if helium ignites on the actual WD cooling curve, any remaining shell H-burning is too weak to prevent the convection zone from reaching the envelope. The envelope hydrogen is then mixed into the core and quickly burnt (Sweigart 1997), and CNO processed material is transported to the outer layers in a flash mixing process. Such _late hot flashers_ are predicted to end up with an atmosphere almost totally devoid of hydrogen and with observable CNO lines in their spectra.
Recently, Miller Bertolami et al. (2008) have performed extensive simulations of late hot flashers in order to determine how well models can reproduce observations. They predict that the core flash cycles should take place in the region above the HeMS where the strip of helium rich subdwarfs is concentrated (Fig. 1). However, the core flash phase lasts less than 2 Myr, after which the stars settle close to the HeMS, for a regular EHB lifetime of at least 66 Myr. Observations do not support such a concentration of objects at this location in the \(T_{eff}\)–\(\log g\) plane. To resolve this they propose that some remaining hydrogen could have survived the mixing and should slowly diffuse to the surface, effectively pulling the star up toward the cooler region of the EHB.
Another very recent development was presented by Politano et al. (2008). They have extended the classic common envelope ejection mechanism to include the case when a low mass MS star or brown dwarf merges with the helium core, in order to produce single EHB stars. However, as with the helium white dwarf merger scenario, the problem remains that the products end up spinning close to break-up velocity. Since a rapidly rotating subpopulation of sdB stars has yet to be found, this channel is only of marginal interest, unless a way to eject the envelope without spinning up the core can be found.
<figure><img src="content_image/0901.1618/x3.png"><figcaption>Figure 3: Section of the Teff–log g plane where the EHB stars are located. Pulsators with temperatures and gravities in the BG survey are marked with big symbols and error bars. Small symbols without error bars are stars not observed to pulsate. In the online version the colors indicate short period pulsators with (green) and without (red) published asteroseismic solution, long period pulsators (magenta) and hybrid pulsators (blue core).</figcaption></figure>
### Asteroseismology
The first evidence of rapid pulsations in EHB stars was reported by Kilkenny et al. (1997), after their detection of multiperiodic pulsations in V361 Hya. The V361 Hya stars span the hot end of the EHB strip from about 28 to 34 kK, and pulsate in \(p\)-modes of low \(\ell\) orders and with photometric amplitudes up to 6 %. The pulsation periods range between 100 and 400 s, and 40 such stars are known in the literature to date (Oreiro et al. 2009). One of these, V338 Ser, has periods reaching almost 10 minutes, but stands out as it sits well above the EHB (Fig. 3), being most likely in a post-EHB stage of evolution.
It took several years from the announcement of the first sdB pulsator until it was realised that the same group of stars is subject to longer period pulsations as well. Green et al. (2003) published the discovery of pulsations in V1093 Her, with periods between half an hour and two hours, and reported that as many as 75% of sdB stars cooler than 30 kK display some level of pulsations at these periods. The V1093 Her stars span the EHB from the coolest sdBs at around 24 kK up to the domain of the V361 Hya stars (Fig. 3). The pulsations were identified with high radial order \(g\)-modes by Green et al. (2003), and their amplitudes are very low, typically 0.2 % or less. Although such low level pulsations are common in V1093 Her stars, and have been detected in at least 30, only a few have been studied in detail due to the long time-base and high precision required to detect and resolve their modes.
Even more recently, Schuh et al. (2006) realised that a known V361 Hya star, DW Lyn, was actually displaying the \(g\)-modes of a V1093 Her star simultaneously with \(p\)-modes of a V361 Hya star. As noted by the authors, the four V361 Hya stars DW Lyn, V391 Peg and Balloon 090100001 (BA09 in Fig. 3 and hereafter), and KL UMa, form a compact group closer to the domain of the V1093 Her stars than the remaining V361 Hya stars. Hybrid DW Lyn type pulsations have now been detected in both V391 Peg and BA09 as well, but appear to be absent in KL UMa. Intriguingly, KL UMa also stands out as the only binary of the quartet (O’Toole et al. 2004).
A fourth type of pulsations was noted in the He-sdB star LS IV–14\({}^{\circ}\)116 by Ahmad & Jeffery (2005). They detected pulsations with amplitudes at the 1% level and periods around 15 minutes. The atmospheric parameters reported by the authors, \(T_{eff}\) = 32.5 kK and \(\log g\) = 5.4 dex, place the star just above the EHB strip in the \(T_{eff}\)–\(\log g\) diagram, well surrounded by V361 Hya stars. With a supersolar helium abundance, log(He/H) = -0.6, this star represents a different evolutionary state than the regular EHB pulsators. Up to now this star remains unique, but since stars with the atmospheric properties of LS IV–14\({}^{\circ}\)116 are extremely rare it is too early to tell whether pulsations in stars with similar atmospheric parameters are common or rare.
A fifth and final class of pulsations in hot subdwarf stars was discovered by Woudt et al. (2006) in the hot sdO binary J17006+0748, but this will not be discussed here as this star is very far from the EHB region.
#### Driving the Beat
Pulsations in sdB stars were predicted to occur by Charpinet et al. (1996) at about the same time as the first pulsators were discovered by Kilkenny et al. (1997). The driving mechanism is due to an opacity bump caused by iron group elements (Charpinet et al. 1997). This mechanism is inefficient at solar metallicity, but gravitational settling and radiative levitation can work together to locally enhance metals in a driving zone in the envelope. This \(\kappa\) mechanism has been successfully invoked to explain both the \(p\)-mode pulsations in V361 Hya stars and the \(g\)-mode pulsations in V1093 Her stars (Fontaine et al. 2003).
While the first models by Fontaine et al. (2003) could produce unstable modes in the coolest sdB stars, there appeared to be a gap between one island of instability on the cool end of the EHB and one at the hot end. The pulsators at the hot end of the \(g\)-mode instability region remained problematic, and particularly so the hybrid DW Lyn type pulsators. Some relief to this problem was recently provided by Jeffery & Saio (2007), with the application of improved opacity values from the OP project as well as the explicit consideration of nickel in addition to iron. The new models are sufficient to bridge the gap between the hot and cool EHB pulsators, and could also predict an island of instability in the sdO domain, close to the observed location of J17006+0748. However, these most recent calculations do not yet give a perfect description of the observed picture. More unstable modes are still found in models at the hot end of the EHB than on the cool end, while observations indicate that pulsations are more common in cool sdB stars. In fact, the problem is now more to explain why most EHB stars on the hot end of the strip do not pulsate, than why they do. Jeffery & Saio (2007) speculated that the iron group element enhancements, which build up due to a diffusion process over rather long timescales, may be disrupted by the atmospheric motions as pulsations build up to some level. They note that since \(p\)-modes mostly involve vertical motion, while \(g\)-modes are dominated by horizontal motion, it is possible that \(p\)-modes are more effective at redistributing the iron group elements out of the driving zone.
#### Levitation does the Trick
Detailed models of the internal structure of the EHB stars are critical for improving our understanding of the asteroseismic properties. The simplest models with uniform metallicities are not able to drive pulsations in these stars at all. Only with the inclusion of an iron opacity bump was it possible to find unstable modes (Charpinet et al. 1996). However, the periods predicted by these so-called second generation models have usually not been matched with observed periods to better than about one percent, while the observed periods have a precision that is an order of magnitude better. Efforts to improve the atmospheric models to include more of the various effects that can have significant impact on the pulsation spectrum are ongoing.
Important progress was reported by Fontaine et al. (2006), who clearly demonstrated the importance of properly including time-dependent diffusion calculations in order to predict pulsation frequencies and mode stability. Starting from a uniform distribution and solar iron abundance, they demonstrated that it takes several hundred thousand years for radiative acceleration and gravitational settling to produce sufficient iron in the driving region to create unstable modes. After about 1 Myr there are no more changes with respect to which modes are driven and which are not, but the pulsation frequencies may still shift as the iron opacity bump builds up further. After about 10 Myr iron reaches equilibrium in these models, and no further changes are seen. Since the time to onset of pulsations is just 1/1000 of the typical EHB lifetime, this does not help resolve the issue as to why only a fraction of the EHB stars pulsate.
<figure><img src="content_image/0901.1618/x4.png"><figcaption>Figure 4: Published asteroseismic solutions for ten V361 Hya stars. The left-hand panel shows total mass plotted versus envelope mass, and the right-hand panel shows envelope mass versus surface gravity. Note that only the optimal solution is shown even if the papers discuss several possible ones.</figcaption></figure>
Second generation models based on a pure hydrogen atmosphere on top of a simple ’hard ball’ core approximation, to which an explicit iron abundance profile is added, have been used to derive sensible asteroseismic quantities for a number of V361 Hya stars (e.g. Charpinet et al. 2007). The adopted ‘forward’ method basically consists of constructing a large grid of models in the four dimensional parameter space spanned by the fundamental model parameters, effective temperature \(T_{eff}\), surface gravity \(\log g\), total mass \(M\), and envelope mass fraction \(q\)(H). A minimisation procedure is then invoked to find the model that best matches the observed periods.
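A minimal sketch of such a forward search is given below in Python. The `model_periods` callable is a hypothetical placeholder for the full pulsation code, and both the merit function and the grid ranges are only meant to illustrate the procedure, not the actual implementation behind the second generation models.

```python
import itertools
import numpy as np

def chi2(observed, theoretical):
    """Merit function: match each observed period to the closest theoretical
    one and sum the squared relative differences."""
    theoretical = np.asarray(theoretical, dtype=float)
    return sum((np.abs(theoretical - p).min() / p) ** 2 for p in observed)

def forward_search(observed_periods, grid, model_periods):
    """Brute-force minimisation over the 4D grid (Teff, log g, M, log q(H)).

    model_periods(teff, logg, mass, log_qh) is a user-supplied callable that
    runs the pulsation code and returns the theoretical period spectrum (s).
    """
    best_score, best_params = float("inf"), None
    for params in itertools.product(*grid):
        score = chi2(observed_periods, model_periods(*params))
        if score < best_score:
            best_score, best_params = score, params
    return best_score, best_params

# Illustrative grid ranges only (not the published ones)
grid = (np.linspace(28000, 36000, 9),    # Teff [K]
        np.linspace(5.2, 6.1, 10),       # log g [dex]
        np.linspace(0.30, 0.70, 9),      # total mass [Msun]
        np.linspace(-5.5, -2.0, 8))      # log q(H), envelope mass fraction
```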
To date, ten asteroseismic solutions computed with this method have been published. They were summarised in Randall et al. (2007) for the first seven, and in Fig. 4 the new solutions by van Grootel et al. (2008), van Spaandonk et al. (2008) and Charpinet et al. (2008) (discussed below) have been included. A feature of the asteroseismic modelling is that \(T_{eff}\) is rather poorly constrained, and a better value can usually be provided from spectroscopy. The surface gravity, total mass, and envelope mass fraction, however, have very small associated errors in the asteroseismic solutions, so we plot only these in Fig. 4. The distribution of masses is not as concentrated around 0.47 M\({}_{\odot}\) as most canonical evolutionary models have presumed, but all points are well within the permitted ranges for synthetic populations considered by Han et al. (2002, 2003). Except for two outliers, all the stars appear to form a trend with envelope mass, \(M_{\mathrm{e}}\), increasing with total mass, \(M\). Although this feature has not been accounted for by evolutionary calculations, it could occur as a natural consequence of a higher core mass requiring more energy to remove the envelope. More disturbing is the lack of any clear trend in envelope mass versus surface gravity, as is clearly demanded for canonical EHB models. The scatter in the high gravity objects is easily explained by their spread in mass, and KL UMa fits well with the expected \(M_{e}\)/\(\log g\) trend. But the unusually low envelope masses for BA09 and V338 Ser are hard to explain, and may indicate that the adopted models are too simplified to represent the seismic properties for these cases.
#### Ballooney
The exceptional amplitude of the dominant period in BA09 has hinted towards a radial nature, and this was finally confirmed by Baran et al. (2008) by combining evidence from multicolor photometry and the radial velocity amplitude measured by Telting & Østensen (2006).
Van Grootel et al. (2008) have successfully applied the forward method to BA09, demonstrating some peculiarities in the model predictions. Their optimal solution for the main mode, when using no constraints, is \(\ell\) = 2, which is not reconcilable with the spectroscopic data. However, by imposing mode constraints from multicolor photometry they do find asteroseismic solutions that agree with all observational data. Curiously, the physical parameters for the constrained and unconstrained fits are almost identical, even if the mode identification changes for half the modes considered. The authors conclude “_Our primary result is that the asteroseismic solution stands very robust, whether or not external constraints on the values of the degree \(\ell\) are used._” This peculiarity arises from the high mode density and the way the modes are distributed in period space. But the deeper cause of the problem is the low precision with which the second generation models predict pulsation periods. Models with a more detailed internal structure are therefore urgently needed in order to resolve this problem. It is a concern that the envelope mass fraction van Grootel et al. (2008) find (\(\log M_{e}/M_{\ast}\) \(\simeq\) –4.9, regardless of which modes are which) is several hundred times lower than what any EHB model would predict for such a low mass star at this position in the \(T_{eff}\)–\(\log g\) diagram. With such a thin envelope all models put the star close to the HeMS for a core helium burning star. The authors’ suggestion that the star is in a post-EHB stage of evolution is beyond canonical theory, as only models with substantial hydrogen envelopes evolve to lower gravities before moving off to the sdO domain as the core starts to contract (see Fig. 1).
The envelope mass discrepancy is even more severe in the asteroseismic results for V338 Ser (van Spaandonk et al. 2008), whose best fitting model (number 4 of 5 presented) has an envelope mass fraction \(\log M_{e}/M_{\ast}\) = –5.78! While the authors seem to prefer an even more extreme value of –6.22 for a model with a slightly poorer fit, in order to obtain a mass of 0.561 M\({}_{\odot}\) rather than the unusually high mass of 0.707 M\({}_{\odot}\), the high mass solution might be the most interesting. For if the exceptionally high mass is real, evolutionary calculations would place V338 Ser in a core helium burning stage, not in a post-EHB stage as would be the case if its mass was around 0.5 M\({}_{\odot}\). But as for BA09, evolutionary models demand an envelope mass fraction more than 1 000 times higher than found by van Spaandonk et al. (2008), in order to find V338 Ser at the observed \(\log g\).
With a recent update of the forward modelling code, Charpinet et al. (2008) have produced a very convincing model for the eclipsing binary system NY Vir. This star has been particularly challenging since it is rapidly rotating, due to being in a tidally locked orbit with the close M-dwarf companion. The rotational splitting of modes with different \(m\) produces a particularly rich pulsation spectrum. Charpinet et al. (2008) use asteroseismology to discriminate between three solutions from the binary orbit published by Vučković et al. (2007), and find that the intermediate model with a mass of 0.47 M\({}_{\odot}\) is clearly favored. This solution is also the only one consistent with the \(\log g\) from the BG survey (Fig. 3).
Most recently, Telting et al. (2008) have presented the first study of line-profile variations in these stars based on high-resolution spectroscopy. Line-profile variations of metal lines provide a technique to directly determine the spherical harmonic order numbers \(\ell,m\) of a pulsation mode, which is well established for various MS pulsators. To invoke this technique on the faint EHB stars requires substantial investments in terms of telescope time, which has prohibited its use up to now. With the preliminary results on the high amplitude pulsator BA09, Telting et al. (2008) clearly demonstrate that the \(\ell\) of the main mode must be either 0 or 1. Again, the observational accuracy has advanced ahead of the theoretical models, as the standard modelling of such line profile variations is insufficient to properly account for the complex effects on the line profiles in the high temperature and gravity domain of the EHB stars.
A final advancement that demonstrates with particular clarity the direction in which asteroseismology of EHB stars needs to move in order to progress, is the work of Hu et al. (2008). The authors have taken real evolutionary models that have been evolved from the ZAMS, through flashless helium ignition on the RGB, to the EHB by peeling off the envelope, and proceeded to compute the pulsation properties of these stars for a number of different configurations. By comparing these models with the classical post-flash models with similar surface parameters, they clearly show that the differences in internal structure from the two evolutionary paths produce significant differences both in the predicted pulsation periods and in which modes are excited or damped! Two of the flashless models of Hu et al. (2008) are plotted in Fig. 5 together with a classical post-flash EHB model. From an evolutionary population synthesis point of view it is interesting to note that the post-EHB tracks are substantially different. After the core has exhausted its helium and the star moves off the TAEHB, the star briefly burns the remaining envelope hydrogen, before contracting and cooling down during a helium shell burning phase, which lasts much longer than in canonical models (up to 20 Myr). This is long enough to produce a slight clustering of objects below the HeMS at temperatures between 45 and 50 kK, just where a substantial cluster of He-sdO stars are observed.
<figure><img src="content_image/0901.1618/x6.png"><figcaption>Figure 5: Same as Fig. 1, but with the evolutionary tracks for flashless post-RGB evolution from Hu et al. (2008) overplotted. The tracks evolve rapidly from the top of the plot to the ZAEHB as the envelope settles down on the helium burning core, and the stars spend most of their time (200 Myr) in the upward part of the loop at constant Teff. After core helium exhaustion the star rapidly heats up, but then turns and moves back again for a several Myr long shell helium burning phase below the HeMS.</figcaption></figure>
### Conclusions
Asteroseismology of EHB stars is a field in rapid progress, with exceptional challenges due to their complex formation paths. Much progress has been made on evolutionary models, but much remains to be done, particularly with respect to the formation of single EHB stars. Better models are also needed to reproduce the effects of the helium flash stage on the envelope composition, in order to reproduce the internal structure of EHB stars at the accuracy achieved by observational asteroseismology. Only when these advances are in place can asteroseismology reliably test the different formation scenarios.
The low amplitudes and long periods make it very difficult to establish detailed pulsation spectra for V1093 Her stars, and even when it can be done the high mode density makes it difficult to assign modes to the observed frequencies. But \(g\)-modes are particularly interesting because they probe deep into the stellar interior. This is a significant challenge for the future due to the long time-base required to reliably determine the longer pulsation periods in these stars. The upcoming Kepler satellite mission (Christensen-Dalsgaard et al. 2007) provides an excellent opportunity if pulsators can be found within its field of view.
###### Acknowledgements.
Special thanks to E.M. Green for kindly providing the detailed results from her 2008 article, which made it possible to reproduce the figures from that article together with evolutionary tracks. Without these data, Fig. 1, 2, 3, and 5 would have been a lot less informative. The author is supported by the Research Council of the University of Leuven under grant GOA/2008/04 and by the EU FP6 Coordination Action HELAS.

### References
Ahmad, A., & Jeffery, C.S., 2005, A&A, 437, L51
Baran, A., Pigulski, A., & O’Toole, S.J., 2008, MNRAS, 385, 255
Charpinet, S., Fontaine, G., Brassard, P., et al. 1997, ApJ, 483, L123
Charpinet, S., Fontaine, G., Brassard, P., & Dorman, B., 1996, ApJ, 471, L103
Charpinet, S., Fontaine, G., Brassard, P., et al. 2007, CoAst, 150, 241
Charpinet, S., van Grootel, V., Reese, D., et al. 2008, A&A, 489, 377
Christensen-Dalsgaard, J., Arentoft, T., Brown, T.M., et al. 2007, CoAst, 150, 350
D’Cruz, N.L., Dorman, B., Rood, R.T., & O’Connell, R.W., 1996, ApJ, 466, 359
Driebe, T., Blöcker, T., Schönberner, D., & Herwig, F., 1999, A&A, 350, 89
Edelmann, H., Heber, U., Hagen, H.-J., Lemke, M., et al. 2003, A&A, 400, 939
Fontaine, G., Brassard, P., Charpinet, S., et al. 2003, ApJ, 597, 518
Fontaine, G., Brassard, P., Charpinet, S., & Chayer, P. 2006, Mem. S.A.It., 77, 49
Green, E.M., Fontaine, G., Reed, M.D., et al. 2003, ApJ, 583, L31
Green, E.M., Fontaine, G., Hyde, E.A., et al. 2008, ASP Conf. Ser., 392, 75
Green, R.F., Schmidt, M., & Liebert, J., 1986, ApJSS, 61, 305
Greenstein, J.L., & Sargent, A.I., 1974, ApJSS, 28, 157
van Grootel, V., Charpinet, S., Fontaine, G., et al. 2008, A&A, 488, 685
Han, Z., Tout, C.A., & Eggleton, P.P., 2000, MNRAS, 319, 215
Han, Z., Podsiadlowski, Ph., Maxted, P.F.L., et al. 2002, MNRAS, 336, 449
Han, Z., Podsiadlowski, Ph., Maxted, P.F.L., & Marsh, T.R., 2003, MNRAS, 341, 669
Heber, U., 1986, A&A, 155, 33
Heber, U., Edelmann, H., Lisker, T., Napiwotzki, R., 2003, A&A, 411, L477
Heber, U., 2009, Annual Review of Astronomy and Astrophysics, 47, submitted
Humason, M.L., & Zwicky, F., 1947, ApJ, 117, 313
Iben Jr., I., 1976, ApJ, 208, 165
Jeffery, C.S., & Saio, H., 2007, MNRAS, 378, 379
Kawaler, S.D., & Hostler, S.R., 2005, ApJ, 621, 432
Kilkenny, D., Koen, C., O’Donoghue, D., & Stobie, R.S., 1997, MNRAS, 285, 640
Lisker, T., Heber, U., Napiwotzki, R., et al. 2005, A&A, 430, 223
Maxted, P.F.L., Heber, U., Marsh, T.R., & North, R.C., 2001, MNRAS, 326, 1391
Miller Bertolami, M.M., Althaus, L.G., Unglaub, K., & Weiss, A., 2008, A&A, 491, 253
Moehler, S., Richtler, T., de Boer, K.S., et al. 1990, A&ASS, 86, 53
Napiwotzki, R., Karl, C.A., Lisker, T., et al. 2004, AP&SS, 291, 321
Nelemans, G., & Tauris, T.M., 1998, A&A, 335, 85
Newell, E.B., 1973, ApJSS, 26, 37
Oreiro, R., Østensen, R.H., Green, E.M., & Geier, S., 2009, A&A, in press.
Østensen, R.H., 2006, Baltic Astronomy, 15, 85
O’Toole, S.J., Heber, U., Benjamin, R.A., 2004, A&A, 422, 1053
Paczyński, B., 1971, Acta Astronomica, 21, 1
Podsiadlowski, Ph., Han, Z., Lynas-Gray, A.E., & Brown, D., 2008, ASP Conf. Ser., 392, 15
Politano, M., Taam, R.E., van der Sluys, M., Willems, B., 2008, ApJ, 687, L99
Saio, H., & Jeffery, C.S., 2000, MNRAS, 313, 671
Schuh, S., Huber, J., Dreizler, S., Heber, U., et al. 2006, A&A, 445, L31
Silvotti, R., Schuh, S., Janulis, R., Solheim, J.-E., et al. 2007, Nature, 449, 189
Soker, N., 1998, AJ, 116, 1308
van Spaandonk, L., Fontaine, G., Brassard, P., & Aerts, C., 2008, ASP Conf. Ser., 392, 387
Stoughton, C., Lupton, R.H., Bernardi, M., et al. 2002, AJ, 123, 485
Stroeer, A., Heber, U., Lisker, T., et al. 2007, A&A, 462, 269
Sweigart, A.V., 1997, in: _“The Third Conference on Faint Blue Stars”_, Philip, Liebert, Saffer & Hayes (Eds), L. Davis Press, p. 3 (arXiv:astro-ph/9708164)
Telting, J.H., & Østensen, R.H., 2006, A&A, 450, 1149
Telting, J.H., Geier, S., Østensen, R.H., Heber, U., et al. 2008, A&A, in press.
Vučković, M., Aerts, C., Østensen, R., et al. 2007, A&A, 471, 605
Webbink, R.F., 1984, ApJ, 277, 355
Woudt, P.A., Kilkenny, D., Zietsman, E., et al. 2006, MNRAS, 371, 1497
|
1805.03709 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 72719,
"num_imgs": 3,
"llama3_tokens_count": 15850
} | [
"content_image/1805.03709/completeness_old.png",
"content_image/1805.03709/mc_approx_good.png",
"content_image/1805.03709/mc_approx_bad.png"
] | # SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence
Patrick Stotko
Stefan Krumpen
Matthias B. Hullin
Michael Weinmann
and Reinhard Klein
###### Abstract
Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR/AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90% reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.
Index terms: Remote collaboration, live telepresence, real-time reconstruction, voxel hashing, RGB-D, real-time streaming.

Patrick Stotko, Stefan Krumpen, Matthias B. Hullin, Michael Weinmann, and Reinhard Klein are with University of Bonn. E-mail: {stotko, krumpen, hullin, mw, rk}@cs.uni-bonn.de.

Teaser figure: Illustration of our novel multi-client live telepresence framework for remote collaboration: RGB-D data captured with consumer-grade cameras represent the input to our real-time large-scale reconstruction technique that is based on a novel thread-safe GPU hash map data structure. Efficient data streaming is achieved by transmitting a novel compact representation of the reconstructed model in terms of Marching Cubes indices. Multi-client live telepresence is achieved by the server’s independent handling of client requests.
## Introduction
One of the main motivations behind virtual reality research has always been to allow users to immersively and subjectively explore remote places or environments. An experience of telepresence could benefit applications as diverse as remote collaboration, entertainment, advertisement, teaching, hazard site exploration, or rehabilitation. Thanks to advances in display technology and the emergence of high-resolution head-mounted devices, we have seen a recent surge in virtual reality solutions. However, it has long been known that traditional display parameters like resolution, frame rate and contrast are not the only factors contributing to an immersive viewing experience. The presentation of the data, its consistency, low-latency control to avoid motion sickness, the degree of awareness and the suitability of controller devices are just as important [11, 16, 53]. For applications such as remote exploration, remote collaboration or teleconferencing, these conditions are not easily met, as the scene is not pre-built but needs to be reconstructed on-the-fly from 3D input data captured by a person or robotic device. At the same time, the data flow in a well-designed system should give multiple remote users the freedom to individually explore, for instance using head-mounted displays (HMD), the current state of reconstruction in the most responsive way possible.
A particular challenge, therefore, is to find a suitable coupling between the acquisition and viewing stages that respects the practical limitations imposed by available network bandwidth and client-side compute hardware while still guaranteeing an immersive exploration experience. For this purpose, teleconferencing systems for transmitting dynamic 3D models of their _users_ typically rely on massive well-calibrated acquisition setups with several statically mounted cameras around the region of interest [50, 41, 9]. Instead, we direct our attention to the remote exploration of _places_ using portable, consumer-grade acquisition devices, for instance in scenarios of remote inspection or consulting. On the acquisition site, a user digitizes their physical environment using consumer-grade 3D capture hardware. Remote clients can perform immersive and interactive live inspection of that environment using off-the-shelf VR devices even while it is acquired and progressively refined. In this scenario, additional challenges arise as the incoming amount of captured data may be high and may also significantly vary over time depending on the size of the scene that is currently imaged. The latter particularly happens for strongly varying object distances within the captured data, whereas the amount of data over time remains in the same order of magnitude if the objects are within the same distance to the capturing camera (as met for teleconferencing scenarios). A first attempt towards interactive virtual live inspection of real scenes [35] built upon real-time voxel block hashing based 3D reconstruction [22] using implicit truncated signed distance fields (TSDFs) that has become a well-established method for high-quality reconstructions [19, 37, 51, 38, 8, 39, 22]. Voxel blocks that are completely processed, i.e. those that are no longer visible, are immediately sent to the remote client and locally converted into a mesh representation using Marching Cubes [30] to perform the actual rendering. Besides the fact that the system is restricted to one remote user, other limitations are the rather high bandwidth requirement of up to 175MBit/s and the missing handling of network failures where the remote client has to reconnect. In particular for multi-client scenarios, handling both the bandwidth problem and the reconnection problem is of utmost importance to allow a satisfactory interaction between the involved users.
To overcome these problems, we propose a novel efficient low-cost multi-client remote collaboration system for the exploration of quasi-static scenes that is designed as a scalable client-server system which handles an arbitrary number of exploration clients under real-world network conditions (including the recovery from full outages) and using consumer-grade hardware. The system consists of a voxel block hashing based reconstruction client, a server managing the reconstructed model and the streaming states of the connected clients as well as the exploration clients themselves (see SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence). The realization of the system relies on the following two key innovations:
* A novel scene representation and transmission protocol based on Marching Cubes (MC) indices enables the system to operate in low-bandwidth remote connection scenarios. Rather than reconstructing geometry on the server site or even performing server-side rendering, our system encodes the scene as a compressed sequence of voxel block indices and values, leaving the final geometry reconstruction to the exploration client. This results in significantly reduced bandwidth requirements compared to previous voxel based approaches [35].
* For the scalable, reliable and efficient management of the streaming states of the individual exploration clients, we propose a novel thread-level GPU hash set and map data structure that guarantees successful concurrent retrieval, insertion and removal of millions of entries on the fly while preserving key uniqueness without any prior knowledge about the data.
From a system point of view, the extension of the system towards multiple reconstruction clients [15] is also envisioned but beyond the scope of this paper. In order to overcome the inherently limited resolution of voxel-based scene representations, we also include a lightweight projective texture mapping approach that enables the visualization of texture details at the full resolution of the depth camera on demand. Users collaboratively exploring the continuously captured scene experience a strong telepresence effect and are directly able to start conversation about the distant environment. We motivate the need of a client server system, provide a discussion of the respective challenges and design choices, and evaluate the proposed system regarding latency, visual quality and accuracy. Furthermore, we demonstrate its practicality in a multi-client remote servicing and inspection role-play scenario with non-expert users (see supplemental video).
## 1 Related Work
[FIGURE:S1.F1][ENDFIGURE]
In this section, we provide an overview of previous efforts related to our novel large-scale, real-time 3D reconstruction and streaming framework for immersive multi-client telepresence categorized according to the developments regarding telepresence, 3D reconstruction and hashing.
### Telepresence
Real-time 3D reconstruction is a central prerequisite for many immersive telepresence applications. Early multi-camera telepresence systems did not allow the acquisition and transmission of high-quality 3D models in real-time to remote users due to limitations regarding the hardware at the time [12, 24, 36, 48, 47, 27] or the applied techniques such as the lacking reconstruction accuracy of shape-from-silhouette approaches for concave surface regions [42, 29]. Then the spreading access to affordable commodity depth sensors such as the Microsoft Kinect led to the development of several 3D reconstruction approaches at room scale [19, 31, 32, 34, 20, 13]. However, the high sensor noise as well as temporal inconsistency in the reconstruction limited the quality of the reconstructions. Furthermore, Photoportals [26] have been proposed to provide immersive access to pre-captured 3D virtual environments while also supporting remote collaborative exploration. However, including live-captured contents comes at the cost of a significant lag as well as a reduced resolution. In contrast, the Holoportation system [41] is built on top of the accurate real-time 3D reconstruction pipeline Fusion4D [8] and involves real-time data transmission as well as AR and VR technology to achieve an end-to-end immersive teleconferencing experience. However, massive hardware requirements, i.e. several high-end GPUs running on multiple desktop computers, were needed to achieve real-time performance, where most of the expensive hardware components need to be located at the local user’s side. In the context of static scene telepresence, Mossel and Kröter [35] developed an interactive single-exploration-client VR application based on current voxel block hashing techniques [22]. Although the system is restricted to only one exploration client, the bandwidth requirements of this approach have been reported to be up to 175MBit/s in a standard scenario. A further issue resulting from the direct transmission of the captured data to the rendering client occurs in case of network interruptions where the exploration client has to reconnect to the reconstruction client. Since the system does not keep track of the transmitted data, parts of the scene that are reconstructed during network outage will be lost. While previous approaches are only designed for single client telepresence or do not support interactive collaboration, our approach overcomes these limitations and enables a variety of new applications.
### 3D Reconstruction
The key to success of the recently emerging high-quality real-time reconstruction frameworks is the underlying data representation that is used to fuse the incoming sensor measurements. Especially the modeling of surfaces in terms of implicit truncated signed distance fields (TSDFs) has become well-established for high-quality reconstructions. Earlier of these volumetric reconstruction frameworks such as KinectFusion [19, 37] rely on the use of a uniform grid so that the memory requirement linearly scales with the overall grid size and not with the significantly smaller subset of surface areas. As this is impractical for handling large-scale scenes, follow-up work focused on the development of efficient data structures for real-time volumetric data fusion by exploiting sparsity in the TSDF. This has been achieved based on using moving volume techniques [44, 52], representing scenes in terms of blocks of volumes that follow dominant planes [17] or height maps that are parameterized over planes [45], or using dense volumes only in the vicinity of the actual surface areas to store the TSDF [4, 39, 22]. The allocated blocks that need to be indexed may be addressed based on tree structures or hash maps. Tree structures model the spatial hierarchy at the cost of a complex parallelization and a time-consuming tree traversal which can be avoided with the use of hash functions that, however, discard the hierarchy. Nießner et al. [39] proposed real-time 3D reconstruction based on a spatial voxel block hashing framework that has been later optimized [22]. Drift that may lead to the accumulation of errors in the reconstructed model [39] can be counteracted by implementing loop closure [21, 7]. Due to its efficiency, we built our remote collaboration system on top of the voxel block hashing approach and adapt the latter to the requirements discussed before. Very recently, Golodetz et al. [15] presented a system for multi-client collaborative acquisition and reconstruction of static scenes with smartphones. For each connected camera, a submap representing the client-specific scene part is reconstructed and managed by a server. After capturing has finished, all submaps are merged into a final globally consistent 3D model to avoid artifacts arising from non-perfectly matching submap borders [21]. In contrast we focus on the development of a practical collaboration system for on-the-fly scene inspection and interaction by an arbitrary number of exploration clients. In this scenario, issues such as the submap jittering caused by progressive relocalization during the capturing process have to be handled carefully in order to preserve an acceptable VR experience. As the respective adequate adjustment of the submaps has to be evaluated in the scope of comprehensive user studies, we consider this challenge to be beyond the scope of this paper.
### Hashing
Lossless packing of sparse data into a dense map can be achieved via hashing. However, developing such data structures on the GPU that offer the reliability of their CPU-side counterparts is highly challenging. Current voxel block hashing techniques [39, 22, 7] including hierarchical voxel block hashing [23] rely on the high camera frame rate to clean up block allocation failures in subsequent frames and, thus, guarantee consistent but not necessarily successful insertion and removal. Only the guarantee regarding key uniqueness is strictly enforced to avoid that duplicate blocks are allocated and integrated during fusion. Although data integration for some voxel blocks (and re-integration [7]) might, hence, be staggered to a few subsequent frames, model consistency is still ensured by the high frame rate fusion. To achieve a more reliable GPU hashing, perfect hashing approaches [28, 3, 49] have been proposed that aim at collision-free hashing, but are hardly applicable for online reconstruction. In the context of collision handling, minimizing the maximum age of the hash map, i.e. the maximum number of required lookups during retrieval, by reordering key-value pairs similar to Cuckoo Hashing improves the robustness of the hash map construction [14]. Similar to Alcantara et al. [1], who analyzed different collision resolving strategies, the entry size is restricted to 64-bit due to the limited support size of atomic exchange operations. However, these approaches do not support entry removal and insertion is allowed to fail in case the defined upper bound on the maximum age is not achieved. Stadium Hashing [25] supports concurrent insertion and retrieval, but lacks removal, by avoiding entry reordering that would otherwise lead to synchronization issues. Recently, Ashkinani et al. [2] presented a fully dynamic hash map supporting concurrent insertion, retrieval, and also removal based on chaining to resolve collisions. However, their data structure cannot enforce key uniqueness, which is an essential property required by voxel block hashing frameworks to preserve model consistency. In contrast, our hash map data structure overcomes all of the aforementioned limitations and is specifically suited for continuously updated reconstruction and telepresence scenarios.
## 2 Design Choices
Data Representation | Flexibility | Individual Exploration | Re-Connection | Data Management | Compactness
---|---|---|---|---|---
RGB-D Data | - | - | - | easy | good
Voxel Block Model | ✓ | ✓ | ✓ | easy | bad
Mesh | ✓ | ✓ | ✓ | hard | good
MC index based Model | ✓ | ✓ | ✓ | easy | very good
Table 1: Advantages and disadvantages of different scene representations for remote collaboration systems.
In a practical remote communication and collaboration system, users should be able to directly start a conversation about the – possibly very large – environment/scene and experience an immersive live experience without the need for time-consuming prerecording similar to a telephone call. Such systems rely on efficient data representation and processing (see Table 1), immediate transmission as well as fast and compact data structures to allow reconstructing and providing a virtual 3D model in real time to remote users. In order to meet the requirements regarding usability, latency, and stability, several crucial design choices have to be taken into account. In particular, we thus focus on the discussion of a system design that benefits a variety of applications, while allowing the distribution of the computational burden according to the hardware availability respectively, i.e. to the cloud or to the remote expert’s equipment, and scaling to many remote clients.
#### Naïve Input Video Streaming
An obvious strategy for the interactive exploration of a live-captured scene by the user is the transmission of the RGB-D input sequence and the reconstruction of the scene model at the exploration client’s site (see Figure 1 top left). Whereas the current state of the art in image and video compression techniques as well as real-time reconstruction would certainly be sufficient for the development of such systems, this approach has several limitations. First, such a setup imposes an extremely high computational burden on the remote expert’s machine, where both the reconstruction and the rendering have to be performed, such that a smooth VR experience at 90Hz may not be guaranteed. Furthermore, in case of network outages, parts of the scene that are acquired while the exploration client is disconnected cannot be recovered automatically, and the local user performing the capturing of the scene is forced to move back and acquire the missing parts again. In the worst case, where the exploration client completely loses the currently reconstructed model, e.g. when the user accidentally closes the client, the whole capturing session must be restarted. In contrast, this problem can be avoided by instead streaming parts of the fused 3D model, where the streaming order is not limited to the acquisition order and can, thus, be controlled for each exploration client independently according to their particular interests.
#### Full Cloud Video Streaming
Alternatively, the full reconstruction including triangulation could be performed on a central cloud server and only RGB-D video streams are transmitted from/to the users (see Figure 1 top right). While re-connections do not require further handling and data loss is no longer an issue, Internet latency becomes an apparent problem and prohibits an immersive VR experience. Lags in transmitting the video data directly affect the user experience. Standard approaches trying to compensate for this issue rely on the view-adapted transmission of 360 degree video data (e.g. [6, 18, 10]). This allows inspecting the scene based on head rotations; however, translations through the scene are not supported. Furthermore, this not only requires that the users do not perform any fast movements, but also results in drastically increased bandwidth requirements due to the transmission of 360 degree video data, which can easily reach around 100MBit/s for SD video at 30Hz or more than 1GBit/s for 4K resolution at 120Hz [33] and is higher than streaming the 3D model. The additional use of techniques for view specification based on e.g. fixation prediction [10] results in additional delays of around 40ms, which represents a noticeable perceivable lag in remote collaboration scenarios and reduces the interactive experience. In addition, when the reconstruction is finished or paused and the 3D model does not change for a certain time, the video stream of the renderings still requires a constantly high amount of bandwidth, whereas the bandwidth required for streaming the 3D model would immediately drop to zero.
#### Mesh Data Streaming
When deciding for the aforementioned server architecture, the question remains which data should be transferred from the server to the exploration clients. Similar to full cloud-based video streaming, mesh updates could be streamed to the exploration clients and directly rendered at their machines using standard graphics APIs. Whereas the mesh representation is more compact in comparison to the voxel block model that is used for reconstruction, the number of triangles in each updated block largely differs depending on the amount of surface inside, resulting in significantly more complicated and less efficient data management, updating and transmission. Furthermore, the vertex positions, which are given in the global coordinate system, are much harder to compress due to their irregular and arbitrary bit pattern. Instead, we propose a novel bandwidth-optimized representation based on Marching Cubes indices (see subsection 3.2) that is even more compact after compression due to its more regular nature.
#### Centralized Data Processing
We focus on the development of a system that is particularly designed for collaboration tasks where users can explore and interact with the captured scene while at the same time being able to observe the other client’s interactions. For this purpose, a central server is placed between the individual clients to simplify the communication between clients and move shared computational work away from the clients. Using a server avoids complicated and error-prone dense mesh networks between all the exploration clients. Furthermore, it naturally facilitates the integration of multiple reconstruction clients and it allows lower hardware requirements at the exploration clients. This, in turn, makes the system suitable for a much broader variety of users. Powerful hardware, required for the scalability to a large number of clients, can be provided as practical cloud services or similar services (see Figure 1 bottom).
#### Hash Data Structure
Efficient data structures are crucial for efficiently and reliably managing the set of updated blocks for each connected exploration client as well as the scene model and therefore have to be adequately taken into account during the design phase. For data management, fast and efficient retrieval of subsets as well as guaranteed modification through duplicate-free insertion and deletion, which both implicitly perform retrieval to ensure uniqueness, are strictly required to avoid data loss during transmission or redundant streaming of data. In particular, the streaming states of each connected client, i.e. the set of updated data that needs to be transmitted, must be maintained in real-time to avoid delays during live exploration. Since the support for re-connections is a major feature of our telepresence system, these states will contain the list of blocks updated in the time while the connection was down or all blocks in case the client was closed accidentally by the user. Selecting a subset (which involves retrieval and deletion) as well as filling the state (which should be duplicate-free to avoid redundant transmissions) should, hence, be performed as fast as possible in parallel on the GPU, for which hash data structures are highly suitable and have been well-established (e.g. [39, 22]). While recently developed hashing approaches work well with high-frame-rate online 3D reconstruction techniques, their lack of strong guarantees regarding hash operations makes them hardly applicable to use cases with high reliability requirements such as telepresence systems. Dispensing with the uniqueness guarantee would lead to redundantly transmitted data and, hence, wasted bandwidth, whereas artifacts such as holes will occur when insertion, removal, and retrieval cannot be guaranteed and these blocks get lost during streaming from the reconstruction client to the exploration client. With a novel hash map data structure that supports concurrent insertion, removal, and retrieval including key uniqueness preservation while running on a thread level, we directly address these requirements. A detailed evaluation regarding run time and further relevant design choices is provided in the supplemental material.
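To make these requirements concrete, the following single-threaded Python sketch mimics only the semantics a per-client stream set has to provide (duplicate-free insertion and removal of a bounded subset). The hard part, providing the same guarantees under massive GPU concurrency via atomic operations, is deliberately not modelled here.

```python
import random

class StreamSet:
    """Single-threaded sketch of the *semantics* of a per-client stream set;
    the real data structure provides the same guarantees concurrently on the
    GPU, which this sketch does not attempt to model."""

    def __init__(self):
        self._keys = set()          # voxel block coordinates, e.g. (x, y, z)

    def insert(self, keys):
        """Duplicate-free insertion: a block queued twice is streamed once."""
        self._keys.update(keys)

    def extract(self, max_count):
        """Remove and return at most max_count queued blocks (one package)."""
        batch = random.sample(sorted(self._keys), min(max_count, len(self._keys)))
        self._keys.difference_update(batch)
        return batch

    def __len__(self):
        return len(self._keys)

# Usage: the server queues updated blocks and drains them per client request
s = StreamSet()
s.insert([(0, 0, 0), (1, 0, 0), (0, 0, 0)])   # the duplicate is ignored
package = s.extract(max_count=2)
```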
## 3 Proposed Remote Collaboration System
[FIGURE:S3.F2][ENDFIGURE]
The overall server-client architecture of our novel framework for efficient large-scale 3D reconstruction and streaming for immersive remote collaboration based on consumer hardware is illustrated in Figure 2 and the tasks of the involved components are shown in Figure 3. RGB-D data acquired with commodity 3D depth sensors as present in a growing number of smartphones or the Kinect device are sent to the reconstruction client, where the 3D model of the scene is updated in real time and transmitted to the server. The server manages a copy of the reconstructed model, a corresponding, novel, bandwidth-optimized voxel block representation, and the further communication with connected exploration clients. Finally, at the exploration client, the transmitted scene parts are triangulated to update the locally generated mesh which can be immersively explored i.e. with VR devices. Clients can connect at any time before or after the capturing process has started. In the following sections, we provide more detailed descriptions of the individual components of our framework, i.e. the reconstruction client, the server, and the exploration client, which is followed by an in-depth discussion of the novel data structure (see section 4). Additional implementation details for each component are provided in the supplemental material.
[FIGURE:S3.F3][ENDFIGURE]
### Reconstruction Client
The reconstruction client receives a stream of RGB-D images acquired by a user and is responsible for the reconstruction and streaming of the virtual model. We use voxel block hashing [39, 22] to reconstruct a virtual 3D model from the image data. Since the bandwidth is limited, handling the data as efficiently as possible during reconstruction is of great importance. For this purpose, we consider only voxel blocks that have already been fully reconstructed and for which no further immediate updates have to be considered, i.e. blocks that are no longer visible in the current sensor’s view and have been streamed out to CPU memory [35]. In contrast, transmitting blocks that are still being actively reconstructed, and thus will still change over time, results in an undesirable visualization experience for exploration clients. Furthermore, continuously transmitting these individual blocks during the reconstruction process drastically increases the bandwidth requirements, which makes this approach infeasible for real-world scenarios. In contrast to Mossel and Kröter [35], we concurrently insert the streamed-out voxel blocks into a hash set, which allows us to control the number of blocks per streamed package and avoids lags by distributing the work across multiple frames, similar to the transfer buffer approach of the InfiniTAM system [22]. To mitigate the delay caused by transmitting only fully reconstructed parts of the scene, we add the currently visible blocks at the very end of the acquisition process, as well as when the user stops moving during capturing or when the hardware and network connection are powerful enough to stream the complete amount of queued entries. In particular, we check whether the exponential moving average (EMA) of the stream set size over a period of \(\tau\) = 5 seconds [54] is below a given threshold and whether the last such prefetching operation is at least 5 seconds ago. The EMA is updated as
\[EMA_{\tau}^{(t_{n+1})}=u\,EMA_{\tau}^{(t_{n})}+(v-u)\,s_{n}+(1-v)\,s_{n+1}\] (1)
with
\[u=e^{-a},\quad v=\frac{1-u}{a},\quad a=\frac{t_{n+1}-t_{n}}{\tau}.\] (2)
This ensures that the delayed but complete model is available to the server and the exploration clients at all times. After fetching a subset of the stream set (via concurrent removal) and the respective voxel data from the model, we compress them using lossless compression [5] and send them to the server. In addition to the pure voxel data, the reconstruction client and the exploration clients send their camera intrinsics and current camera pose to the server, where they are forwarded to each connected exploration client to enable interactive collaboration. Furthermore, requests for high-resolution textures on the model by the exploration clients, required e.g. for reading text or measurement instruments, are handled by transmitting the sensor’s current RGB image to the reconstruction client, from where it is forwarded via the server to the exploration clients. To make our framework also capable of handling quasi-static scenes, where the scene is allowed to change between two discrete timestamps, e.g. when an instrument cabinet has to be opened before being able to read the instruments, our framework also comprises a reset function that allows the exploration client to request scene updates for selected regions. This can be achieved by deleting the reconstructed parts of the virtual model that are currently visible and propagating the list of these blocks to the server.
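Returning to the prefetching criterion of Eqs. (1) and (2), the following minimal C++ sketch shows one possible way to maintain the irregular-interval EMA of the stream set size and to query it against a threshold; the struct and member names as well as the threshold handling are illustrative assumptions and not taken from our implementation.

```cpp
#include <cmath>

// Minimal sketch (not the original implementation) of the stream-set EMA from
// Eqs. (1)-(2), used to decide when the visible blocks may be prefetched.
struct StreamSetEma {
    double tau;        // averaging horizon in seconds (5 s in our setup)
    double ema = 0.0;  // current EMA of the stream set size

    // dt    : time since the last sample in seconds (must be > 0)
    // sPrev : stream set size at the previous sample
    // sNext : stream set size at the current sample
    void update(double dt, double sPrev, double sNext) {
        const double a = dt / tau;
        const double u = std::exp(-a);
        const double v = (1.0 - u) / a;
        ema = u * ema + (v - u) * sPrev + (1.0 - v) * sNext;
    }

    // Prefetching is triggered when the EMA falls below a threshold; the
    // additional 5-second cool-down between prefetches is checked elsewhere.
    bool belowThreshold(double threshold) const { return ema < threshold; }
};
```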
### Server
The server component is responsible for managing the global voxel block model and the list of queued blocks for each connected exploration client. Furthermore, it converts incoming TSDF voxel blocks into our novel MC voxel block representation. Finally, it forwards messages between clients and distributes camera and client pose data for an improved immersion and client interaction.
In order to reduce the computational burden and infrastructural requirements regarding network bandwidth, the streamed data should be as compact as possible while being efficient to process. Instead of streaming the model in the original TSDF voxel block representation of the voxel block hashing technique [35] to the exploration clients, we compute and transmit a bandwidth-optimized representation based on Marching Cubes [30]. Thus, a TSDF voxel (12 bytes), composed of a truncated signed distance field (TSDF) value (4 bytes), a fusion weight (4 bytes), and a color (3 bytes + 1 byte alignment), is reduced to an MC voxel, i.e. a Marching Cubes index (1 byte) and a color value (3 bytes). Furthermore, we cut off those voxel indices \(i\) and colors \(c\) where no triangles will be created, i.e. for
\[\mathcal{S}^{c}=\left\{(i,c)\mid i=0\lor i=255\right\},\] (3)
by setting the values \(i\) and \(c\) to zero. While omitting the interpolation weights, resulting in lossy compression, might seem drastic in terms of reconstruction quality, we show that the achieved improvements regarding compression ratio and network bandwidth requirements outweigh the slight loss of accuracy in the reconstruction (see section 5). Compared to a binary representation of the geometry, which would lead to the same quality and a similar compression ratio, our MC index structure directly encodes the triangle data and enables independent and parallel processing at the remote site by removing neighborhood dependencies.
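For illustration, a possible 4-byte layout of the MC voxel described above is sketched below; the type and function names are hypothetical, but the byte budget (1-byte Marching Cubes index plus 3-byte color) follows directly from the description.

```cpp
#include <cstdint>

// Bandwidth-optimized MC voxel: a Marching Cubes configuration index plus an
// RGB color, i.e. 4 bytes instead of the 12-byte TSDF voxel (sketch only).
#pragma pack(push, 1)
struct McVoxel {
    std::uint8_t mcIndex;  // 0..255; the values 0 and 255 generate no triangles
    std::uint8_t r, g, b;  // voxel color
};
#pragma pack(pop)
static_assert(sizeof(McVoxel) == 4, "MC voxel should occupy 4 bytes");

// Clear voxels belonging to the set S^c of Eq. (3), i.e. voxels that cannot
// produce any geometry, so that they compress to almost nothing.
inline void stripNonSurfaceVoxel(McVoxel& v) {
    if (v.mcIndex == 0 || v.mcIndex == 255) v = McVoxel{0, 0, 0, 0};
}
```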
Incoming data sent by the reconstruction client are first concurrently integrated into the TSDF voxel block model and then used to update the corresponding blocks and their seven neighbors in the negative directions in the MC voxel block representation. Updating the neighbors is crucial to avoid cuts in the mesh due to outdated and inconsistent MC indices. To avoid branch divergence and inefficient handling of special cases, we recompute the whole blocks instead of solely recomputing the changed parts. The list of updated MC voxel blocks is then concurrently inserted into each exploration client’s stream hash set. Maintaining such a set for each connected client not only enables advanced streaming strategies required for a lag-free viewing experience (see subsection 3.3), but also allows clients to reconnect at any point in time, e.g. after network outages, and still explore the entire model since their stream sets are initially filled with the complete list of voxel blocks via concurrent insertion. After selecting all relevant blocks, a random subset of at most the request size limit is extracted via concurrent removal and the corresponding voxel data are retrieved, compressed [5] and sent to the exploration client.
### Exploration Client
The exploration client’s tasks comprise generating surface geometry from the transmitted compact representation in terms of MC indices, updating the current version of the reconstructed model at the remote site, and the respective rendering of the model in real-time. To this end, exploration clients are allowed to request reconstructed voxel blocks in the order of their generation during reconstruction, based on whether they are visible in the client’s current view, or in a random order, which is particularly useful when the currently visible parts of the model are already complete and, thus, other parts of the scene can be prefetched. Since the exploration client controls the request rate and size, a lag-free viewing experience is achieved by adapting these parameters depending on the client’s hardware resources.
The received MC voxel blocks are decompressed in a dedicated thread, and the block data is passed to a set of reconstruction threads which generate the scene geometry from the MC indices and colors of the voxels. We reduce the number of draw calls to the graphics API by merging \(15^{3}\) voxel blocks into a mesh block instead of rendering each voxel block separately [35]. To reduce the number of primitives rendered each frame, we compute three levels of detail (LoDs) from the triangle mesh, where one, eight, or 64 voxels, respectively, are represented by a single point and the point colors are averaged over the voxels. During the rendering pass, all visible mesh blocks are rendered, while their LoD is chosen according to the distance from the camera, as sketched below. We refer to the supplemental material for more details.
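A possible distance-based LoD selection is sketched here; the distance thresholds are purely illustrative assumptions, since the text above only states that the LoD depends on the distance to the camera.

```cpp
// Illustrative LoD selection for a mesh block; the three point-based LoDs merge
// 1, 8, and 64 voxels into a single point as described above. The distance
// cut-offs are assumptions for illustration and not taken from our system.
enum class Lod { TriangleMesh, PointsPerVoxel, PointsPer8Voxels, PointsPer64Voxels };

inline Lod selectLod(float distanceToCameraMeters) {
    if (distanceToCameraMeters < 3.0f)  return Lod::TriangleMesh;
    if (distanceToCameraMeters < 6.0f)  return Lod::PointsPerVoxel;
    if (distanceToCameraMeters < 12.0f) return Lod::PointsPer8Voxels;
    return Lod::PointsPer64Voxels;
}
```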
To allow a better interaction between the involved clients, each exploration client additionally sends its own pose to the server, which distributes it to other exploration clients, so that each user can observe the poses and movements of other exploration clients within the scene. Analogously, the current pose of the reconstruction client is visualized in terms of the respectively positioned and oriented camera frustum. Furthermore, users can interactively explore the reconstructed environment beyond pure navigation by measuring 3D distances between interactively selected scene points. For the purpose of depicting structures below the resolution of the voxel hashing pipeline as e.g. required for reading measurement instruments or texts, the exploration client can send requests to the server upon which the RGB image currently captured by the sensor is directly projected onto the respective scene part and additionally visualized on a virtual measurement display.
## 4 Hash Map and Set Data Structures
[FIGURE:S4.F4][ENDFIGURE]
For the purpose of large-scale 3D reconstruction and streaming to an arbitrary number of remote exploration clients, we developed a thread-safe GPU hash data structure allowing fast and simple management including dynamic concurrent insertion, removal and retrieval of millions of entries with strong success guarantees. In comparison to pure 3D reconstruction, maintaining consistency in multi-client telepresence is much more challenging since streaming data between clients requires that updates are not lost e.g. due to synchronization failures. Whereas previous approaches either allow failures [39, 22, 14] or do not ensure key uniqueness [25, 2], our robust hash data structure is not limited in this regard and represents the key to realize our real-time remote collaboration system. A detailed evaluation in terms of design choices and runtime performance can be found in the supplemental material.
_General Design._ Our streaming pipeline is built upon two different hash data structures. The server and the individual client components use an internal map structure that stores unique keys and maps a value to each of them, whereas the server-client streaming protocol relies on a set structure, which only considers the keys. Thus, the major difference lies in the kind of stored data, whereas the proposed algorithm for retrieval, insertion and removal is shared among them. We build upon the single-entry data structure by Kähler et al. [22], which stores the values, i.e. key-value pairs for the map structure (voxel block hashing and server model) and keys for the set (streaming states, see Figure 2), in a linear array. Collisions are resolved through linked lists using per-entry offsets to the next elements and a stack structure that maintains the set of available linked list entries. Voxel block hashing based reconstruction approaches rely on the high camera frame rate to clean up block allocation failures in subsequent frames [39, 22, 7, 23] and, therefore, reduce synchronization to a minimum. In contrast, failures in our telepresence system result in data loss during data transmission which cannot be recovered. Thus, we need additional indicators to determine whether an entry is occupied and locks for synchronization to handle cases where several threads attempt to modify the same entry simultaneously. Furthermore, we maintain a strong invariant which is required to achieve correct concurrency on the thread level: _At any time, the entry positions and the links to colliding values are preserved._ Figure 4 demonstrates mixed insertion and removal operations on our thread-safe hash data structure. Detailed descriptions and implementation details of the hash and stack data structures as well as further design remarks are provided in the supplemental material.
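To make the general design concrete, a CPU-side sketch of one entry is given below; the field names are illustrative assumptions, and std::atomic stands in for the CUDA atomics used in the actual GPU implementation.

```cpp
#include <atomic>
#include <cstdint>

// CPU-side sketch of one entry of the proposed hash map/set (not the original
// GPU code). The map variant stores a payload, the set variant only the key.
struct HashEntry {
    std::int16_t x = 0, y = 0, z = 0;   // voxel block coordinates (the key)
    std::int32_t value = -1;            // payload for the map variant; unused for the set
    std::int32_t next = -1;             // index of the next colliding entry; kept on removal
    std::atomic<bool> occupied{false};  // marks the entry as storing a valid key
    std::atomic<bool> locked{false};    // per-entry lock used by insertion/removal
};
// Invariant: once an entry has been written, neither its position nor its link
// to colliding values changes, so retrieval can run entirely without locks.
```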
_Retrieval._ Since our proposed invariant ensures that entry positions are not allowed to change, finding an element in the hash map or set can be safely implemented as a read-only operation. First, the bucket \(b\) of a given key value is computed according to the underlying hashing function. In case of spatial hashing, this function could be defined as
\[b=\left(x\cdot p_{1}\oplus y\cdot p_{2}\oplus z\cdot p_{3}\right)\bmod n\] (4)
where \((x,y,z)\) are the voxel block coordinates, \(p_{1}=73856093,p_{2}=19349669,p_{3}=83492791\) represent prime numbers, and \(n\) denotes the number of buckets [39, 22]. We check whether the entry is occupied and whether its key matches the query. If both conditions are met, we have found the key and return its current position. Otherwise, we traverse the linked list through the offsets and check each entry in a similar manner.
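The hashing function of Eq. (4) translates directly into code; the snippet below is a self-contained transcription with an illustrative function name.

```cpp
#include <cstdint>

// Spatial hashing function of Eq. (4): the block coordinates are multiplied by
// large primes, XOR-ed, and reduced modulo the number of buckets n.
inline std::uint32_t bucketOf(int x, int y, int z, std::uint32_t n) {
    constexpr std::uint32_t p1 = 73856093u, p2 = 19349669u, p3 = 83492791u;
    return ((static_cast<std::uint32_t>(x) * p1) ^
            (static_cast<std::uint32_t>(y) * p2) ^
            (static_cast<std::uint32_t>(z) * p3)) % n;
}
```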
_Insertion._ For successful concurrent insertion, the modification of an entry by several threads needs to be handled while avoiding deadlocks. We handle the latter problem by looping over a non-blocking insertion function, which is allowed to fail, until the value is found in the data structure. In the non-blocking version, we first check if the value is already inserted (by performing retrieval). If the entry is not found, there are two possible scenarios: The value can be inserted at the bucket (if this entry is not occupied) or at the end of the bucket’s linked list. In both cases, other threads might attempt to modify the entry at the same time. This not only requires locking (where the lock acquisition is allowed to fail in order to prevent deadlocks), but also a second occupancy check. If both the lock is successfully acquired and the entry is still free, the value is stored and the entry is marked as occupied and unlocked. In case the bucket was initially occupied (second scenario), we first find the end of the linked list by traversing the offsets and lock that entry. Afterwards, we extract a new linked list position from the stack, store the value there, set the occupancy flag and reset its offset to zero. Note that the offset is intentionally not reset in the removal operation to avoid a race condition (see the section below for details). Finally, the offset to the new linked list entry is stored and the acquired lock is released. A minimal sketch of this pattern is given below.
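The following sketch illustrates the "loop over a non-blocking insertion" pattern for the bucket slot only; the linked-list and free-list (stack) handling described above is omitted, std::atomic stands in for the CUDA atomics of the GPU version, and all names are illustrative assumptions.

```cpp
#include <atomic>
#include <cstdint>

// Simplified bucket-slot insertion with double-checked occupancy under a
// per-entry lock. Collisions are reported instead of being resolved here.
struct Slot {
    std::atomic<bool> occupied{false};
    std::atomic<bool> locked{false};
    std::int64_t key = 0;  // written only while 'locked' is held and 'occupied' is false
};

enum class TryResult { Done, Busy, Collision };

TryResult tryInsert(Slot& slot, std::int64_t key) {
    // 1) Direct retrieval: the key may already be present.
    if (slot.occupied.load(std::memory_order_acquire) && slot.key == key)
        return TryResult::Done;
    // 2) Non-blocking lock acquisition; failing here avoids deadlocks.
    bool expected = false;
    if (!slot.locked.compare_exchange_strong(expected, true, std::memory_order_acquire))
        return TryResult::Busy;
    // 3) Second occupancy check while holding the lock.
    TryResult result;
    if (!slot.occupied.load(std::memory_order_relaxed)) {
        slot.key = key;                                        // store the value first,
        slot.occupied.store(true, std::memory_order_release);  // then publish it
        result = TryResult::Done;
    } else {
        result = (slot.key == key) ? TryResult::Done           // inserted concurrently
                                   : TryResult::Collision;     // full version: extend linked list
    }
    slot.locked.store(false, std::memory_order_release);
    return result;
}

// Blocking wrapper: retry the non-blocking attempt until the key is stored.
bool insert(Slot& slot, std::int64_t key) {
    for (;;) {
        const TryResult r = tryInsert(slot, key);
        if (r == TryResult::Done) return true;
        if (r == TryResult::Collision) return false;  // handled via linked lists in the full version
        // r == Busy: another thread holds the lock; retry.
    }
}
```

Removal follows the same double-checked pattern, with the additional rule discussed next that the offset of an erased linked-list entry is intentionally left untouched.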
_Removal._ Removing elements, as required when selecting voxel blocks for client-server streaming, is similar to insertion and also involves double checking during lock acquisition as well as looping over a non-blocking version. Again, there are two possible scenarios: The entry may be located at the bucket or inside the linked list. In the former case, we try to acquire the lock and then reset the value and mark the entry as unoccupied. In contrast to the approach by Nießner et al. [39], the first linked list entry is not moved to the bucket, in order to preserve our invariant. Threads that try to erase this value might, otherwise, fail to find it. We evaluated the impact of this change and observed that runtime performance was not affected. If the value is inside the linked list (second scenario), we first find the previous entry and lock both entries. Afterwards, the current entry is reset and marked as unoccupied, the offset of the previous entry is updated, and both locks are finally released. As mentioned earlier, the offset is kept to avoid a race condition where other threads concurrently performing direct or indirect retrieval (inside insertion and removal) might otherwise not be able to access the remainder of the linked list, which would lead to failures in all three operations. Thus, we avoid the need for additional synchronization in the retrieval operation by delaying this step to the insertion operation.
## 5 Evaluation
Dataset | Voxel Size [mm] | MC 128 | MC 256 | MC 512 | MC 1024 | TSDF 512 | Model Size [# Voxel Blocks]
---|---|---|---|---|---|---|---
_heating_room_ | 5 | 4.5 (8.0) | 8.8 (12.3) | 17.5 (30.9) | 32.7 (71.3) | 561.5 (938.8) | 897 \(\times 10^{3}\)
_pool_ | 5 | 4.6 (7.1) | 9.0 (14.0) | 17.8 (29.7) | 29.3 (54.5) | 489.3 (937.0) | 637 \(\times 10^{3}\)
_fr1/desk2_ | 5 | 8.1 (11.6) | 16.2 (23.8) | 32.6 (46.8) | 61.0 (95.0) | 764.0 (938.6) | 134 \(\times 10^{3}\)
_fr1/room_ | 5 | 12.3 (23.6) | 16.4 (23.6) | 32.1 (42.2) | 57.6 (87.9) | 739.7 (938.0) | 467 \(\times 10^{3}\)
_heating_room_ | 10 | 5.1 (7.6) | 9.2 (14.4) | 14.6 (27.8) | 20.2 (63.7) | 216.8 (937.1) | 147 \(\times 10^{3}\)
_pool_ | 10 | 5.6 (8.5) | 9.9 (16.0) | 13.6 (27.2) | 16.9 (52.3) | 176.3 (937.0) | 104 \(\times 10^{3}\)
_fr1/desk2_ | 10 | 8.7 (11.2) | 14.3 (21.8) | 19.6 (39.2) | 24.4 (71.3) | 170.1 (436.4) | 23 \(\times 10^{3}\)
_fr1/room_ | 10 | 9.2 (12.5) | 15.7 (23.5) | 22.9 (46.1) | 28.5 (88.8) | 207.8 (936.6) | 86 \(\times 10^{3}\)

Table 2: Bandwidth measurements of our system for various scenes, reported in MBit/s as mean (and maximum) values. We compared our optimized MC voxel structure with 128-1024 blocks/request and a 100Hz request rate to the standard TSDF representation with 512 blocks/request and an unlimited rate. Across all scenes, our optimized representation saved more than 90% of the bandwidth and scales linearly with the package size.
Dataset | Voxel Size [mm] | MC 128 | MC 256 | MC 512 | MC 1024 | TSDF 512 | Model Size [# Voxel Blocks]
---|---|---|---|---|---|---|---
_heating_room_ | 5 | 4:06 | 3:08 | 2:40 | 2:32 | 2:31 | 897 \(\times 10^{3}\)
_pool_ | 5 | 2:14 | 1:32 | 1:12 | 1:09 | 1:08 | 637 \(\times 10^{3}\)
_fr1/desk2_ | 5 | 0:39 | 0:31 | 0:27 | 0:24 | 0:22 | 134 \(\times 10^{3}\)
_fr1/room_ | 5 | 1:46 | 1:14 | 1:01 | 0:57 | 0:56 | 467 \(\times 10^{3}\)
_heating_room_ | 10 | 1:49 | 1:44 | 1:44 | 1:44 | 1:44 | 147 \(\times 10^{3}\)
_pool_ | 10 | 0:54 | 0:50 | 0:50 | 0:50 | 0:50 | 104 \(\times 10^{3}\)
_fr1/desk2_ | 10 | 0:21 | 0:19 | 0:19 | 0:19 | 0:18 | 23 \(\times 10^{3}\)
_fr1/room_ | 10 | 0:46 | 0:42 | 0:41 | 0:41 | 0:41 | 86 \(\times 10^{3}\)

Table 3: Time measurements of our system for various scenes, reported in minutes. We compared the time to stream the whole model represented by our optimized MC voxel structure with 128-1024 blocks/request and a 100Hz request rate to the standard TSDF representation with 512 blocks/request and an unlimited rate. The reconstruction speed is given by TSDF 512 and serves as a lower bound. For a voxel resolution of 5mm, a package size of 512 voxel blocks results in the best trade-off between required bandwidth and total streaming time. Increasing the size leads to slightly better results with less latency, but substantially higher bandwidths. For a resolution of 10mm, the optimal streaming time is reached with even smaller package sizes.
After providing implementation details, we perform an analysis regarding bandwidth requirements and the visual quality of our compact scene representation. This is accompanied by the description of the usage of our framework in a live remote collaboration scenario as well as a discussion of the respective limitations.
### Implementation
We implemented our framework using up to four desktop computers taking the roles of one reconstruction client, one server, and two exploration clients. Each of the computers has been equipped with an Intel Core i7-4930K CPU and 32GB RAM. Furthermore, three of them have been equipped with a NVIDIA GTX 1080 GPU with 8GB VRAM, whereas the fourth computer made use of a NVIDIA GTX TITAN X GPU with 12GB VRAM. For acquisition, we tested two different RGB-D sensors by using the Microsoft Kinect v2, which delivered data with a resolution of 512 \(\times\) 424 pixels at 30Hz, and by using an ASUS Zenfone AR, which captured RGB-D data with a resolution of 224 \(\times\) 172 pixels at 10Hz. Although the ASUS device is, in principle, capable of performing measurements at frame rates of 5-15Hz, we used 10Hz as a compromise between data completeness and speed. Each of the exploration client users was equipped with an HTC Vive HMD with a native resolution of 1080 \(\times\) 1200 pixels per eye whereas the recommended rendering resolution (reported by the VR driver) is 1512 \(\times\) 1680 pixels per eye, leading to a total resolution of 3024 \(\times\) 1680 pixels. Please note that the higher recommended resolution (in comparison to the display resolution) originates from the lens distortion applied by the VR system. All computers were connected via a local network.
### Bandwidth and Latency Analysis
[FIGURE:S5.F5][ENDFIGURE]
[FIGURE:S5.F6][ENDFIGURE]
In the following, we provide a detailed quantitative evaluation of the bandwidth requirements of our novel collaboration system. For the purpose of comparison, we recorded two datasets _heating_room_ and _pool_ (see supplemental material) with the Kinect v2, and also used two further publicly available standard datasets that were captured with the Kinect v1 [46]. Throughout the experiment, we loaded a dataset and performed the reconstruction on the computer equipped with the NVIDIA GTX TITAN X. The model is then streamed to the server (second computer) and further to a benchmark client (third computer). Compared to the exploration client, the benchmark client is started simultaneously with the reconstruction client, requests voxel blocks at a fixed predefined rate of 100Hz, and directly discards the received data to avoid overheads. Using this setup, we measured the mean and maximum bandwidth required for streaming the TSDF voxel block model from the reconstruction client to the server and the MC voxel block model from the server to the benchmark client. Furthermore, we also measured the time until the model has been completely streamed to the benchmark client. For the voxel block hashing pipeline, we used 5mm and 10mm for the voxel size, 60mm for the truncation region and hash maps with \(2^{20}\) and \(2^{22}\) buckets as well as excess list sizes matching the respective active GPU and passive CPU voxel block pool sizes of \(2^{19}\) and \(2^{20}\) blocks. The server and reconstruction client used the passive parameter set for their hash maps and sets. The results of our experiment are shown in Table 2 and Table 3. A further evaluation regarding the server scalability is provided in the supplemental material.
Across all scenes and voxel sizes, the measured mean and maximum bandwidths for our novel MC voxel structure scale linearly with the package size and are over one order of magnitude smaller compared to the standard TSDF voxel representation. We measured higher bandwidths at 10mm voxel size than at 5mm for package sizes of 128 and 256 blocks. Our stream hash set automatically avoids duplicates, which saves bandwidth in case the system works at its limits and can be considered a form of adaptive streaming. At 10mm, this mechanism triggers substantially less often and, thus, more updates are sent to the server and exploration clients. We also observed bandwidths larger by a factor of two for the datasets captured with the Kinect v1 in comparison to the ones we recorded with the Kinect v2. This is mainly caused by the lower reliability of the RGB-D data, which contains more sensor noise as well as holes, which, in turn, results in a larger number of allocated voxel blocks that need to be streamed. Furthermore, the faster motion induces an increased motion blur within the images, and thus leads to larger misalignments in the reconstructed model as well as even more block allocations. However, this problem is solely related to the reconstruction pipeline and does not affect the scalability of our collaboration system.
The overall system latency is determined by the duration until newly seen parts of the scene are queued for transmission, i.e. until they are streamed out to CPU memory, the latency of the network, and the package size of the exploration client’s requests. Since the whole system runs in real-time, i.e. data are processed in the order of tens of milliseconds, the runtime latency within the individual components has a negligible impact on the total latency of the system. In order to evaluate the bandwidth requirements and the overall latency, we performed further measurements as depicted in Figure 5 and Figure 6. Whereas the bandwidth for transmitting the TSDF voxel block representation has a high variance and ranges up to our network’s limit of 1Gbit/s, our bandwidth-optimized representation has not only lower requirements, i.e. a reduction by more than 90%, but also a significantly lower variance. For a package size of 256 blocks, the model is only slowly streamed to the exploration client, which results in a significant delay until the complete model has been transmitted. Larger sizes such as 512 blocks affect both the mean bandwidth and the variance, while further increases primarily affect the variance since fewer blocks than the package size need to be streamed (see Figure 6). This effect also becomes apparent in Figure 5, where lower package sizes lead to a smooth streaming and larger delays whereas higher values reduce the latency. Furthermore, the delay between the reconstruction client and the server in the order of seconds is directly related to our choice of only transmitting blocks that have been streamed out to save bandwidth. Note that directly streaming the actively reconstructed voxel blocks is infeasible due to extremely increasing bandwidth requirements (see Section 3.1). Once our automatic streaming of the visible parts triggers, which can be seen in the rapid increases of the RC’s stream set (RC SS), the gap between the current model at the reconstruction client and the streamed copy at the server becomes smaller. Since the visible blocks are streamed in an arbitrary order, this results in many updates to already existing neighboring MC voxel blocks at the server site that need to be streamed to the exploration client. Therefore, the exploration client’s model grows more slowly than the server’s model, but this gap is closed shortly after the server has received all visible blocks. Note that the effects of this prefetching approach can also be seen in the reconstruction client’s bandwidth requirements, where high values are typically observed when this mechanism is triggered.
In comparison to per-frame streaming [35], we transmit data per block, which allows the recovery from network outages as well as advanced streaming strategies controlled by the remote user. Therefore, depending on the possibly very high number of blocks eligible for streaming, e.g. all visible blocks after re-connection, scene updates may appear unordered and patch-by-patch, which can affect the subjective latency (see the supplemental video). However, due to the controllable strategies, the objective latency until these visible data are fully transmitted is much smaller than for inflexible frame-based approaches.
### Scene Model Completeness and Visual Quality
In addition to the bandwidth analysis, we have also evaluated the model completeness during transmission for our novel hash map data structure in comparison to previous techniques that allow failures [39]. To this end, we measured the model size in terms of voxel blocks at the reconstruction client, where the streaming starts, and at the exploration client, where the data finally arrives. To reduce side effects caused by distributing the computational load, we have chosen a package size of 1024 blocks (see Table 3). Whereas previous GPU hashing techniques work well for 3D reconstruction, where failures can be cleaned up in subsequent frames, they are not suitable for large-scale collaboration scenarios where blocks are often sent only once to save bandwidth. Insertion and removal failures will, hence, lead to holes in the reconstruction that cannot be repaired in the future (see Figure 7).
We also provide a qualitative visual comparison of our bandwidth-optimized scene representation based on Marching Cubes indices. In order to reduce the bandwidth requirements by over 90%, we omitted the interpolation of vertex positions and colors. Figure 8 shows a comparison between our approximation and the interpolated mesh, where both representations have been reconstructed using a voxel resolution of 5mm. While the interpolated model has a smooth appearance, the quality of our approximation is slightly lower at edges but, otherwise, resembles the overall visual quality quite well. However, for small highly textured objects, staircase artifacts become visible and lead to worse reconstruction results (see Figure 9). Note that our system allows compensating this issue by using our projective texture mapping approach to enable higher resolution information on demand.
### Live Remote Collaboration
<figure><img src="content_image/1805.03709/completeness_old.png"><figcaption>(a) Hash Map by Nießner et al. [39].</figcaption></figure>
<figure><img src="content_image/1805.03709/mc_approx_good.png"><figcaption>(a) With Color Interpolation.</figcaption></figure>
To verify the usability of our framework, we conducted a live remote collaboration experiment where a local user and two remotely connected users collaboratively inspect the local user’s environment supported by audio communication (i.e. via Voice over IP (VoIP)). For this experiment, we selected people who were unfamiliar with our framework and who received a briefing regarding the controls. Furthermore, these users had never been in the respective room before.
While one person took the role of a local user operating the acquisition device, two remotely connected exploration clients provided support regarding maintenance and safety. The exploration clients can interactively inspect the acquired scene, i.e. the maintenance expert guides the person operating the acquisition device to allow the observation of measurement instruments. By allowing scene resets, where parts of the scene can be updated on demand, our system allows certain scene manipulations such as opening the door to a switch board that has to be checked by the maintenance expert. Furthermore, the scene model can be visualized at higher texture resolution based on the transmission of the live-captured RGB image upon request and its usage in a separate virtual 2D display or directly on the scene geometry. This allows checking instruments or even reading text (see supplemental material for further details and evaluation). Measurements performed with the controllers belonging to the HMD devices are of sufficient accuracy to detect safety issues or to select the respective components for replacement. The interaction flow of this experiment is also showcased in the supplemental video. In addition to the Kinect v2, we also used an ASUS Zenfone AR (\(224\times 172\) pixels, up to 15Hz) for RGB-D acquisition. However, the limited resolution and frame rate affect the reconstruction quality obtained with the smartphone.
Furthermore, the users testing our framework particularly liked the options to reset certain scene parts to get an updated scene model as well as the possibility of interacting with the scene by performing measurements and inspecting details like instrument values. After network outages or intentional disconnections from the collaboration process, the capability of re-connecting and re-exploring the parts of the scene reconstructed in the meantime was also highly appreciated and improved the overall experience significantly. In fact, they reported a good spatial understanding of the environment.
### Limitations
<figure><img src="content_image/1805.03709/mc_approx_bad.png"><figcaption>(a) With Color Interpolation.</figcaption></figure>
Despite allowing an immersive live collaboration between an arbitrary number of clients, our system still faces some limitations. In particular, the acquisition and reconstruction of a scene with an RGB-D camera may be challenging for inexperienced users, who tend to move and turn relatively fast, resulting in high angular and linear velocities as well as potential motion blur. As a consequence, the reconstruction is more susceptible to misalignments. Whereas loop-closure techniques [7] compensate for this issue, their uncontrollable update scheme during loop closing would cause nearly the entire model to be queued for streaming. This would impose much higher bandwidth requirements on the client connections and prohibit remote collaboration over the Internet. Submap approaches [21] avoid this problem, but issues such as the submap jittering caused by progressive relocalization during the capturing process have to be handled carefully in order to preserve an acceptable VR experience and require a respective evaluation in the scope of a comprehensive user study. Furthermore, we stream the virtual model in the TSDF voxel representation between the reconstruction client and the server, which requires both to be in a local network. However, the increasing trust in cloud services could fill this gap. While we believe that the usability of our novel system significantly benefits from mobile devices with built-in depth cameras, the current quality and especially the frame rate of the provided RGB-D data are inferior to those of the Kinect family, resulting in low-quality reconstructions.
## 6 Conclusion
We presented a novel large-scale 3D reconstruction and streaming framework for immersive multi-client live telepresence that is especially suited for remote collaboration and consulting scenarios. Our framework takes RGB-D inputs acquired by a local user with commodity hardware such as smartphones or the Kinect device, from which a 3D model is updated in real-time. This model is streamed to the server, which further manages and controls the streaming process to the, theoretically, arbitrary number of connected remote exploration clients. As such a system needs to access and process the data in a highly asynchronous manner, we have built our framework upon – to the best of our knowledge – the first thread-safe GPU hash map data structure that guarantees successful concurrent insertion, retrieval and removal on a thread level while preserving the key uniqueness required by current voxel block hashing techniques. Efficient streaming is achieved by transmitting a novel, compact representation in terms of Marching Cubes indices. In addition, the inherently limited resolution of voxel-based scene representations can be overcome with a lightweight projective texture mapping approach, which enables the visualization of textures at the resolution of the depth sensor of the input device. As demonstrated by a variety of qualitative and quantitative experiments, our framework is efficient regarding bandwidth requirements and allows a high degree of immersion into the live captured environments.
###### Acknowledgements.
This work was supported by the DFG projects KL 1142/11-1 (DFG Research Unit FOR 2535 Anticipating Human Behavior) and KL 1142/9-2 (DFG Research Unit FOR 1505 Mapping on Demand).
## References
* [1] D. A. F. Alcantara. _Efficient Hash Tables on the GPU_. PhD thesis, University of California at Davis, 2011.
* [2] S. Ashkiani, M. Farach-Colton, and J. D. Owens. A Dynamic Hash Table for the GPU. In _IEEE Int. Parallel and Distributed Processing Symposium_, pp. 419–429, 2018.
* [3] F. C. Botelho, R. Pagh, and N. Ziviani. Practical Perfect Hashing in Nearly Optimal Space. _Inf. Syst._, 38(1):108–131, 2013.
* [4] J. Chen, D. Bautembach, and S. Izadi. Scalable Real-time Volumetric Surface Reconstruction. _ACM Trans. Graph._, 32:113:1–113:16, 2013.
* [5] Y. Collet and C. Turner. Smaller and faster data compression with Zstandard. https://code.fb.com/core-data/smaller-and-faster-data-compression-with-zstandard/, 2016. Accessed: 2019-01-29.
* [6] X. Corbillon, G. Simon, A. Devlic, and J. Chakareski. Viewport-adaptive navigable 360-degree video delivery. In _2017 IEEE Int. Conf. on Communications_, pp. 1–7, 2017.
* [7] A. Dai, M. Nießner, M. Zollhöfer, S. Izadi, and C. Theobalt. BundleFusion: Real-time Globally Consistent 3D Reconstruction using On-the-fly Surface Reintegration. _ACM Trans. Graph._, 36(3):24, 2017.
* [8] M. Dou et al. Fusion4D: Real-time Performance Capture of Challenging Scenes. _ACM Trans. Graph._, 35(4):114:1–114:13, 2016.
* [9] A. J. Fairchild, S. P. Campion, A. S. García, R. Wolff, T. Fernando, and D. J. Roberts. A Mixed Reality Telepresence System for Collaborative Space Operation. _IEEE Trans. on Circuits and Systems for Video Technology_, 27(4):814–827, 2016.
* [10] C.-L. Fan, J. Lee, W.-C. Lo, C.-Y. Huang, K.-T. Chen, and C.-H. Hsu. Fixation Prediction for 360\({}^{\circ}\) Video Streaming in Head-Mounted Virtual Reality. In _Proc. of the 27th Workshop on Network and Operating Systems Support for Digital Audio and Video_, pp. 67–72, 2017.
* [11] G. Fontaine. The Experience of a Sense of Presence in Intercultural and Int. Encounters. _Presence: Teleoper. Virtual Environ._, 1(4):482–490, 1992.
* [12] H. Fuchs, G. Bishop, K. Arthur, L. McMillan, R. Bajcsy, S. Lee, H. Farid, and T. Kanade. Virtual Space Teleconferencing Using a Sea of Cameras. In _Proc. of the Int. Conf. on Medical Robotics and Computer Assisted Surgery_, pp. 161 – 167, 1994.
* [13] H. Fuchs, A. State, and J. Bazin. Immersive 3D Telepresence. _Computer_, 47(7):46–52, 2014.
* [14] I. García, S. Lefebvre, S. Hornus, and A. Lasram. Coherent Parallel Hashing. _ACM Trans. Graph._, 30(6):161:1–161:8, 2011.
* [15] S. Golodetz, T. Cavallari, N. A. Lord, V. A. Prisacariu, D. W. Murray, and P. H. S. Torr. Collaborative Large-Scale Dense 3D Reconstruction with Online Inter-Agent Pose Optimisation. _IEEE Trans. on Visualization and Computer Graphics_, 24(11):2895–2905, Nov 2018.
* [16] R. M. Held and N. I. Durlach. Telepresence. _Presence: Teleoper. Virtual Environ._, 1(1):109–112, 1992.
* [17] P. Henry, D. Fox, A. Bhowmik, and R. Mongia. Patch Volumes: Segmentation-Based Consistent Mapping with RGB-D Cameras. In _Int. Conf. on 3D Vision_, 2013.
* [18] M. Hosseini and V. Swaminathan. Adaptive 360 VR Video Streaming: Divide and Conquer. _IEEE Int. Symp. on Multimedia_, pp. 107–110, 2016.
* [19] S. Izadi et al. KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera. In _Proc. of the ACM Symp. on User Interface Software and Technology_, pp. 559–568, 2011.
* [20] B. Jones et al. RoomAlive: Magical Experiences Enabled by Scalable, Adaptive Projector-camera Units. In _Proc. of the Annual Symp. on User Interface Software and Technology_, pp. 637–644, 2014.
* [21] O. Kähler, V. A. Prisacariu, and D. W. Murray. Real-Time Large-Scale Dense 3D Reconstruction with Loop Closure. In _European Conference on Computer Vision_, pp. 500–516, 2016.
* [22] O. Kähler, V. A. Prisacariu, C. Y. Ren, X. Sun, P. Torr, and D. Murray. Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices. _IEEE Trans. on Visualization and Computer Graphics_, 21(11):1241–1250, 2015.
* [23] O. Kähler, V. A. Prisacariu, J. P. C. Valentin, and D. W. Murray. Hierarchical Voxel Block Hashing for Efficient Integration of Depth Images. In _IEEE Robotics and Automation Letters_, pp. 1(1):192–197, 2016.
* [24] T. Kanade, P. Rander, and P. J. Narayanan. Virtualized reality: constructing virtual worlds from real scenes. _IEEE MultiMedia_, 4(1):34–47, 1997.
* [25] F. Khorasani, M. E. Belviranli, R. Gupta, and L. N. Bhuyan. Stadium Hashing: Scalable and Flexible Hashing on GPUs. In _Proc. of the Int. Conf. on Parallel Architecture and Compilation_, pp. 63–74, 2015.
* [26] A. Kunert, A. Kulik, S. Beck, and B. Froehlich. Photoportals: Shared References in Space and Time. In _Proc. of the 17th ACM Conf. on Computer Supported Cooperative Work & Social Computing_, pp. 1388–1399, 2014.
* [27] G. Kurillo, R. Bajcsy, K. Nahrsted, and O. Kreylos. Immersive 3D Environment for Remote Collaboration and Training of Physical Activities. In _IEEE Virtual Reality Conference_, pp. 269–270, 2008.
* [28] S. Lefebvre and H. Hoppe. Perfect Spatial Hashing. _ACM Trans. Graph._, 25(3):579–588, 2006.
* [29] C. Loop, C. Zhang, and Z. Zhang. Real-time High-resolution Sparse Voxelization with Application to Image-based Modeling. In _Proc. of the High-Performance Graphics Conference_, pp. 73–79, 2013.
* [30] W. E. Lorensen and H. E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. In _Proc. of the 14th Annual Conf. on Computer Graphics and Interactive Techniques_, pp. 163–169, 1987.
* [31] A. Maimone, J. Bidwell, K. Peng, and H. Fuchs. Enhanced personal autostereoscopic telepresence system using commodity depth cameras. _Computers & Graphics_, 36(7):791 – 807, 2012.
* [32] A. Maimone and H. Fuchs. Real-time volumetric 3D capture of room-sized scenes for telepresence. In _Proc. of the 3DTV-Conference_, 2012.
* [33] S. Mangiante, G. Klas, A. Navon, Z. GuanHua, J. Ran, and M. D. Silva. VR is on the Edge: How to Deliver 360\({}^{\circ}\) Videos in Mobile Networks. In _Proc. of the Workshop on Virtual Reality and Augmented Reality Network_, pp. 30–35, 2017.
* [34] D. Molyneaux, S. Izadi, D. Kim, O. Hilliges, S. Hodges, X. Cao, A. Butler, and H. Gellersen. Interactive Environment-Aware Handheld Projectors for Pervasive Computing Spaces. In _Proc. of the Int. Conf. on Pervasive Computing_, pp. 197–215, 2012.
* [35] A. Mossel and M. Kröter. Streaming and exploration of dynamically changing dense 3d reconstructions in immersive virtual reality. In _Proc. of IEEE Int. Symp. on Mixed and Augmented Reality_, pp. 43–48, 2016.
* [36] J. Mulligan and K. Daniilidis. View-independent scene acquisition for tele-presence. In _Proc. IEEE and ACM Int. Symp. on Augmented Reality_, pp. 105–108, 2000.
* [37] R. A. Newcombe et al. KinectFusion: Real-Time Dense Surface Mapping and Tracking. In _Proc. of IEEE Int. Symp. on Mixed and Augmented Reality_. IEEE, 2011.
* [38] R. A. Newcombe, D. Fox, and S. M. Seitz. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. In _IEEE Conf. on Computer Vision and Pattern Recognition_, pp. 343–352, 2015.
* [39] M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger. Real-time 3D Reconstruction at Scale Using Voxel Hashing. _ACM Trans. Graph._, 32(6):169:1–169:11, 2013.
* [40] NVIDIA Corporation. CUDA Toolkit Documentation. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html, 2016. Accessed: 2019-01-29.
* [41] S. Orts-Escolano et al. Holoportation: Virtual 3D Teleportation in Real-time. In _Proc. of the Annual Symp. on User Interface Software and Technology_, pp. 741–754, 2016.
* [42] B. Petit, J.-D. Lesage, C. Menier, J. Allard, J.-S. Franco, B. Raffin, E. Boyer, and F. Faure. Multicamera Real-Time 3D Modeling for Telepresence and Remote Collaboration. _Int. Journal of Digital Multimedia Broadcasting_, 2010.
* [43] PresenterMedia. PowerPoint Templates, 3D Animations, and Clipart. https://presentermedia.com/, 2009. Accessed: 2019-01-29.
* [44] H. Roth and M. Vona. Moving volume kinectfusion. In _Proc. of the British Machine Vision Conference_, pp. 112.1–112.11, 2012.
* [45] T. Schöps, J. Engel, and D. Cremers. Semi-dense visual odometry for ar on a smartphone. In _Int. Symp. on Mixed and Augmented Reality_, 2014.
* [46] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A Benchmark for the Evaluation of RGB-D SLAM Systems. In _Proc. of the Int. Conf. on Intelligent Robot Systems_, 2012.
* [47] T. Tanikawa, Y. Suzuki, K. Hirota, and M. Hirose. Real world video avatar: Real-time and real-size transmission and presentation of human figure. In _Proc. of the Int. Conf. on Augmented Tele-existence_, pp. 112–118, 2005.
* [48] H. Towles, W. Chen, R. Yang, S. Kum, H. Fuchs, N. Kelshikar, J. Mulligan, K. Daniilidis, C. C. Hill, L. Holden, B. Zeleznik, A. Sadagic, and J. Lanier. 3D Tele-Collaboration Over Internet2. In _Proc. of the Int. Workshop on Immersive Telepresence_, 2002.
* [49] T. T. Tran, M. Giraud, and J.-S. Varré. Perfect Hashing Structures for Parallel Similarity Searches. _IEEE Int. Parallel and Distributed Processing Symposium Workshop_, pp. 332–341, 2015.
* [50] R. Vasudevan, G. Kurillo, E. Lobaton, T. Bernardin, O. Kreylos, R. Bajcsy, and K. Nahrstedt. High-Quality Visualization for Geographically Distributed 3-D Teleimmersive Applications. _IEEE Trans. on Multimedia_, 13(3):573–584, 2011.
* [51] T. Whelan, M. Kaess, M. Fallon, H. Johannsson, J. Leonard, and J. McDonald. Kintinuous: Spatially Extended KinectFusion. In _RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras_, 2012.
* [52] T. Whelan, M. Kaess, H. Johannsson, M. Fallon, J. J. Leonard, and J. McDonald. Real-time large-scale dense RGB-D SLAM with volumetric fusion. _The Int. Journal of Robotics Research_, 34(4-5):598–626, 2015.
* [53] B. G. Witmer and M. J. Singer. Measuring Presence in Virtual Environments: A Presence Questionnaire. _Presence: Teleoper. Virtual Environ._, 7(3):225–240, 1998.
* [54] G. Zumbach and U. Müller. Operators on inhomogeneous time series. _Int. Journal of Theoretical and Applied Finance_, 4(01):147–177, 2001.
|
1512.01770 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 62692,
"num_imgs": 3,
"llama3_tokens_count": 21536
} | [
"content_image/1512.01770/figure-2-tangle.png",
"content_image/1512.01770/figure-1-ggm.png",
"content_image/1512.01770/figure-3-dms.png"
] | # Complementarity between tripartite quantum correlation and bipartite Bell inequality violation in three qubit states
Palash Pandya\({}^{\ddagger}\), Avijit Misra\({}^{\dagger}\), Indranil Chakrabarty\({}^{\ddagger}\)
\({}^{\ddagger}\) Center for Security Theory and Algorithmic Research,
International Institute of Information Technology, Gachibowli, Hyderabad, India
\({}^{\dagger}\) Harish Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad, UP, India.
###### Abstract
We find a single parameter family of genuinely entangled three qubit pure states, called the maximally Bell inequality violating states (MBV), which exhibit maximum Bell inequality violation by the reduced bipartite system for a fixed amount of genuine tripartite entanglement quantified by the so called tangle measure. This in turn implies that there holds a complementary relation between the Bell inequality violation by the reduced bipartite systems and the tangle present in the three qubit states, not necessarily pure. The MBV states also exhibit maximum Bell inequality violation by the reduced bipartite systems of the three qubit pure states with a fixed amount of genuine tripartite correlation quantified by the generalized geometric measure, a genuine entanglement measure of multiparty pure states, and the discord monogamy score, a multipartite quantum correlation measure from information theoretic paradigm. The aforementioned complementary relation has also been established for three qubit pure states for the generalized geometric measure and the discord monogamy score respectively. The complementarity between the Bell inequality violation by the reduced bipartite systems and the genuine tripartite correlation suggests that the Bell inequality violation in the reduced two qubit system comes at the cost of the total tripartite correlation present in the entire system.
## I Introduction
Quantum entanglement [1; 2] allows correlations between distant parties that are completely forbidden in the classical regime [3]. In 1964, John Bell formally showed that the predictions of quantum mechanics are incompatible with the notion of local realism [4]. After Bell’s celebrated work there have been numerous conceptual and technical developments in the study of nonlocality in quantum systems [5]. The Bell inequality and Bell-type inequalities [4; 6; 7; 8] set an upper bound on the correlations between measurement statistics of many-particle systems that can be explained by local hidden variable models. If the measurement statistics violate some Bell-type inequality then the particles are said to possess nonlocal correlation [9]. Nonlocality is not only of fundamental interest but has also been employed as a key resource in several quantum information protocols such as key distribution [10] and quantum randomness generation [11]. On the other hand, quantum entanglement has been a key resource for various information theoretic protocols such as dense coding [12; 13], teleportation [14] and many others [15]. As quantum entanglement and nonlocality are both essential resources in information theory, it is interesting to establish the link between them. It is always exciting to know which states are both entangled and nonlocal and thus can be useful for several information theoretic protocols simultaneously. For example, in Ref. [16] it has been established that all quantum states useful for teleportation are nonlocal resources. All bipartite pure entangled states violate the Bell inequality, and the magnitude of the violation grows monotonically with the amount of entanglement of the states [17]. In Ref. [18], the necessary and sufficient condition for a two qubit mixed state to violate the Bell-CHSH inequality has been derived. Nonlocality and entanglement have been extensively studied in qubit systems and in the anisotropic one-dimensional _XY_ model in the presence of a transverse magnetic field in [19] and [20], respectively. Nonlocality for more than two parties is much more complex than in the two-party scenario. Therefore, the link between multiparty nonlocality and multiparty correlation is far more complex. Despite remarkable progress, the precise link is still not clear in this regard [5].
In Ref. [21], the relation between Svetlichny’s inequality [8] and a specific genuine entanglement measure, tangle has been studied for families of three qubit pure states. One of the main problems in detecting genuine multiparty nonlocality in three qubit systems is to distinguish between the violations arising from reduced density matrices and from the tripartite state [22]. However, in Ref. [23] it has been shown very recently, that in specific cases one or two body correlation functions are sufficient to detect nonlocality in many body systems. Moreover, several entanglement criteria have been proposed based on the two body expectation values [24]. Considering the significance of the Bell inequality violations by reduced density matrices in composite systems, in this work we study how the Bell inequality violation by the reduced bipartite systems depends on the genuine tripartite correlation in three qubit pure states. We find that a single parameter class of genuinely entangled three qubit pure states, abbreviated as the maximally Bell inequality violating states (MBV), exhibits maximum Bell inequality violation by one of its reduced two qubit systems for a fixed amount of genuine tripartite correlation. This implies a complementary relation between the bipartite Bell inequality violation and the genuine tripartite correlation measures in three qubit pure states, with the MBV states lying at the boundary of the complementary relation. In this work, we consider the well known tangle measure [27] and the generalized geometric measure (GGM), a genuine entanglement measure of multipartite pure states [25; 26] from the entanglement separability paradigm and the discord monogamy score (DMS) [28; 29] from the information theoretic paradigm. The complementary relation holds for all the three aforementioned correlation measures. As the \(GHZ\) and the \(W\) classes are two disjoint but complete subsets of genuinely entangled three qubit pure states, we consider the \(GHZ\) and the \(W\) classes separately to establish the aforementioned complementary relation. The \(GHZ\) and \(W\) classes of states are characterized by five and four independent parameters respectively, as we will discuss in Sec. IV. We have established the complementarity analytically for the entire \(W\) class of states, while for the \(GHZ\) class of states we have kept one parameter fixed, i.e., we consider four parameters to establish the complementary relation analytically. However, from numerical study we have claimed that the complementary relation holds in general for the set of three qubit pure states. Given the complementary relation between the maximum bipartite Bell inequality violation and the tangle holds for all three qubit pure states, it can be proved that it holds for all three qubit mixed states, by using the convexity properties of the maximum bipartite Bell inequality violation and the tangle. Thus, we claim that the complementary relation for the tangle holds for all three qubit states, not necessarily pure. Our result can be used in a scenario where three parties share genuinely entangled systems to perform information theoretic protocols among them and at the same time they might need nonlocal resources between their subparts. In this regard it is very useful to know which state is more nonlocal in the bipartite scenario for a fixed amount of genuine tripartite correlation.
The organization of the paper is as follows. In Sec. II, we provide a brief review of the measures of genuine quantum correlations that we have used in this work. We discuss the Bell-CHSH inequality and a No-go theorem for the same in Sec. III. In Sec. IV, we study the relation of the Bell inequality violation in the reduced subsystems with the genuine correlation present in the tripartite system and establish the complementarity between them. We summarize our result and conclude in Sec. V.
## II Various Measures of Tripartite Quantum Correlation
In this section we briefly discuss the different types of measures that we use in this work for quantifying quantum correlation of the genuinely entangled three qubit pure states. By genuinely entangled it is meant that the quantum state under consideration is not separable in any bipartite cut. These measures are _A._ Tangle, _B._ Generalized Geometric Measure (GGM), and _C._ Discord Monogamy Score (DMS). Among them the first two belong to the entanglement-separability paradigm, while the third one belongs to the information-theoretic paradigm. Moreover, the tangle and the discord monogamy score are based on the concept of monogamy of quantum correlations, and the GGM is conceptualized using the geometric distance between two quantum states.
### Tangle
The tangle is a genuine entanglement measure of three qubit systems that is conceptualized based on monogamy of quantum correlations [27; 30]. The quantum monogamy score [13] corresponding to the square of the bipartite entanglement measure, called the Concurrence [31], is the tangle [27]. Concurrence of a two qubit system is given as \(C(\rho_{AB})=\max\{0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\},\) where \(\lambda_{1},\dots,\lambda_{4}\) are the square roots of the eigenvalues of \(\rho_{AB}\tilde{\rho}_{AB}\) in decreasing order, \(\tilde{\rho}_{AB}=(\sigma_{y}\otimes\sigma_{y})\rho_{AB}^{*}(\sigma_{y}\otimes \sigma_{y})\). Here the complex conjugation \(\rho_{AB}^{*}\) is taken in the computational basis, and \(\sigma_{y}\) is the Pauli spin matrix.
Therefore, the tangle of a three-qubit state \(\rho_{ABC}\) is given by [27]
\[\tau(\rho_{ABC})=C^{2}_{A:BC}-C^{2}_{AB}-C^{2}_{AC}.\] (1)
The tangle is always non-negative [27]. In particular, it is zero only for the \(W\)-class states and positive for the \(GHZ\)-class states [32].
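As a quick illustration, for the \(GHZ\) state \(\ket{GHZ}=(\ket{000}+\ket{111})/\sqrt{2}\) one has \(C^{2}_{A:BC}=1\) while both two-qubit marginals are separable, so \(\tau=1\); for the \(W\) state \(\ket{W}=(\ket{001}+\ket{010}+\ket{100})/\sqrt{3}\), \(C^{2}_{A:BC}=8/9\) and \(C^{2}_{AB}=C^{2}_{AC}=4/9\), so that
\[\tau(W)=\frac{8}{9}-\frac{4}{9}-\frac{4}{9}=0,\]
consistent with the statement above that the tangle vanishes on the \(W\) class.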
### Generalized Geometric Measure
A multipartite pure quantum state \(|\psi_{A_{1},A_{2},\ldots,A_{N}}\rangle\) is genuinely multipartite entangled if it is not separable across any bipartition. The Generalized Geometric Measure (GGM) [26] quantifies the genuine multipartite entanglement for these \(N\)-party states based on the distance from the set of all multiparty states that are not genuinely entangled. The GGM is given as
\[{\cal G}(\psi_{A_{1},A_{2},\ldots,A_{N}})=1-\max_{\ket{x}}|\bra{x}\ket{\psi_{A _{1},A_{2},\ldots,A_{N}}}|^{2}.\] (2)
This maximization is done over all states \(|x\rangle\) which are not genuinely entangled. An equivalent mathematical expression of Eq. (2) is the following
\[\mathcal{G}(\psi_{n})=1-\max\{\lambda^{2}_{I:L}|I\cup L=\{A_{1},\ldots,A_{N}\} ,I\cap L=\emptyset\},\] (3)
where \(\lambda_{I:L}\) is the maximal Schmidt coefficient in the bipartite split \(I:L\) of \(|\psi_{N}\rangle\).
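For instance, for the \(GHZ\) state the maximal Schmidt coefficient in every bipartition is \(1/\sqrt{2}\), so \({\cal G}(GHZ)=1-1/2=1/2\), whereas for the \(W\) state every \(1:2\) bipartition has Schmidt coefficients \(\sqrt{2/3}\) and \(\sqrt{1/3}\), so that
\[{\cal G}(W)=1-\frac{2}{3}=\frac{1}{3}.\]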
### Discord Monogamy Score
The discord monogamy score is a genuine quantum correlation measure from the information theoretic paradigm, which is based on monogamy of quantum correlations [30], as the name suggests. Quantum discord [28] is considered to be the difference between the total correlation and the classical correlation of a two-party quantum state \(\rho_{AB}\) and is given as
\[D(\rho_{AB})=S(\rho_{B})+S(\rho_{A|B})-S(\rho_{AB}),\] (4)
where \(S(\rho)=-\mbox{tr}(\rho\log_{2}\rho)\) is the von Neumann entropy of the quantum state \(\rho\). Here \(\rho_{A}\) and \(\rho_{B}\) are the reduced density matrices of the quantum state \(\rho_{AB}\). \(S(\rho_{A|B})\) is the measured conditional entropy when a projection-valued measurement is performed on \(B\) and is given by
\[S(\rho_{A|B})=\min_{\{P_{i}\}}\sum_{i}p_{i}S(\rho_{A|i}).\] (5)
The minimization is carried out over the complete set of rank-one projectors \(\{P_{i}\}\) and \(p_{i}=\mbox{tr}_{AB}[(\mathbb{I}_{A}\otimes P_{i})\rho_{AB}(\mathbb{I}_{A} \otimes P_{i})]\), is the probability of obtaining the outcome \(P_{i}\). \(\mathbb{I}_{A}\) is the identity operator on the Hilbert space of \(A\). The output state is \(\rho_{A|i}=\frac{1}{p_{i}}\mbox{tr}_{B}[(\mathbb{I}_{A}\otimes P_{i})\rho_{AB} (\mathbb{I}_{A}\otimes P_{i})]\), corresponding to the outcome \(P_{i}\).
The quantum monogamy score corresponding to the quantum discord is the discord monogamy score [29] which is given as
\[\delta_{D}=D_{A:BC}-D_{AB}-D_{AC}.\] (6)
Note that the quantum discord for three-qubit states can be both monogamous and non-monogamous unlike the square of the concurrence [29].
## III Bell Inequality Violation
In 1964 John Bell established that violation of the Bell inequality by a two-party state excludes any local realistic description of the state. All bipartite pure entangled states violate the Bell inequality and thus forbid any local realistic description for them. The necessary and sufficient condition for a two-qubit mixed state to violate the Bell-CHSH inequality [6] was first given in Ref. [18]. For an arbitrary two-qubit state \(\rho\), violation of the Bell-CHSH inequality implies that the Bell-CHSH value \(\mathcal{S}\) is greater than \(2\). It can be shown that the maximum Bell-CHSH value \(\mathcal{S}(\rho)\) for a two-qubit state \(\rho\) is given by [18]
\[\mathcal{S}(\rho)=2\sqrt{M(\rho)},\] (7)
where \(M(\rho)=m_{1}+m_{2}\), with \(m_{1}\) and \(m_{2}\) being the two largest eigenvalues of \(T_{\rho}^{T}T_{\rho}\). \(T_{\rho}\) is the correlation matrix associated with the two-qubit state \(\rho\) with entries \((T_{\rho})_{ij}=Tr(\sigma_{i}\otimes\sigma_{j}\rho)\), where \(\sigma_{i}\)’s are the Pauli matrices. \(T_{\rho}^{T}\) denotes the transpose of \(T_{\rho}\). Therefore, violation of the Bell-CHSH inequality implies
\[M(\rho)>1.\] (8)
Henceforth, by violation of the Bell inequality we mean violation of the Bell-CHSH inequality, and throughout this article we consider Bell inequality violation by two-qubit quantum states only. The quantity \(\mathcal{B}(\rho_{AB})\), defined as
\[\mathcal{B}(\rho_{AB})=\max\{0,M(\rho_{AB})-1\}\,,\] (9)
quantifies the amount of Bell inequality violation and hence the nonlocal correlation of a two-qubit quantum state \(\rho_{AB}\) [33]. To pick, among the three reduced two-qubit states of a three-qubit pure state \(\ket{\psi}\), the one with the maximum Bell inequality violation, we define
\[\mathcal{B}_{\max}(\psi)=\max\{\mathcal{B}(\rho_{AB}),\mathcal{B}(\rho_{BC}), \mathcal{B}(\rho_{AC})\}\,.\] (10)
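For concreteness, the quantities in Eqs. (7)-(10) can be evaluated numerically. The following sketch is an illustrative Python/NumPy implementation written for this presentation (it is not code from the original work, and the function names are ours); it builds the correlation matrix of each two-qubit reduction of a three-qubit pure state and returns the maximal Bell-CHSH violation.

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y, sigma_z
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def reduced_two_qubit(psi, keep):
    """Two-qubit reduced density matrix of a three-qubit pure state |psi>
    (8 amplitudes, qubit order A, B, C); `keep` lists the two retained qubits."""
    t = np.asarray(psi, dtype=complex).reshape(2, 2, 2)
    traced = [q for q in range(3) if q not in keep][0]
    m = np.transpose(t, list(keep) + [traced]).reshape(4, 2)
    return m @ m.conj().T

def M_value(rho):
    """Horodecki quantity M(rho): sum of the two largest eigenvalues of T^T T,
    with T_ij = Tr[(sigma_i x sigma_j) rho], cf. Eq. (7)."""
    T = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
                   for sj in PAULI] for si in PAULI])
    eigs = np.sort(np.linalg.eigvalsh(T.T @ T))
    return eigs[-1] + eigs[-2]

def bell_violation(rho):
    """Amount of Bell-CHSH violation, Eq. (9)."""
    return max(0.0, M_value(rho) - 1.0)

def bell_violation_max(psi):
    """Maximum bipartite Bell-CHSH violation over the three reduced pairs, Eq. (10)."""
    return max(bell_violation(reduced_two_qubit(psi, keep))
               for keep in [(0, 1), (1, 2), (0, 2)])

# Example: the MBV state of Eq. (11) with m = 0.5
m = 0.5
psi = np.zeros(8, dtype=complex)
psi[0b000] = psi[0b111] = 1.0
psi[0b010] = psi[0b101] = m
psi /= np.linalg.norm(psi)
print(bell_violation_max(psi))   # expected 4*m**2/(1 + m**2)**2 = 0.64, cf. Eq. (47) below
```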
In this context it is interesting to note that there are two qubit mixed entangled states which do not violate the Bell inequality [18]. In other words, violation of the Bell inequality by a quantum state is not synonymous with the idea of the state being entangled. It was shown that such a local character of quantum correlations traces back to the monogamy trade-off obeyed by bipartite Bell correlation [34]. Monogamy for the Bell inequality violation [33] implies that, _if a quantum state shared by three qubits leads to the violation of the Bell inequality among any two of its sub-parts, then it prohibits its violation for the other two states which the sub-parts share with the third party._ Monogamy of the Bell inequality violation, thus imposes general constraints on the nature of entanglement and Bell correlation [34]. In this paper we deal with the Bell inequality violations by the reduced two qubit systems of the three qubit pure states \(\ket{\psi}_{ABC}\). Since only one among the three reduced density matrices of \(\ket{\psi}_{ABC}\) can violate the Bell inequality, the bipartite Bell inequality violation of \(\ket{\psi}_{ABC}\) implies that it either comes from \(\rho_{AB}\), \(\rho_{AC}\) or \(\rho_{BC}\). As a direct consequence, only one of \(\mathcal{B}(\rho_{AB})\), \(\mathcal{B}(\rho_{BC})\), or \(\mathcal{B}(\rho_{AC})\) is non-zero, which is then picked up by the quantity \(\mathcal{B}_{\max}(\psi)\).
## IV Bell inequality violation versus Quantum Correlation
In this section we establish a relation between the bipartite Bell inequality violation and the genuine tripartite correlation for three qubit pure states. In particular we show that there exists a complementary relation between the genuine tripartite quantum correlation measures and the bipartite Bell inequality violation of three qubit pure states. We identify the single parameter family of genuinely tripartite entangled three qubit pure states that give the maximum bipartite Bell inequality violation for a fixed amount of tripartite correlation, thus lying at the boundary of the aforesaid complementary relation for each of the three correlation measures that are considered in this work. This single parameter family of the genuinely entangled three qubit pure states is given by
\[\ket{\psi}_{\mathit{m}}=\frac{\ket{000}+\mathit{m}(\ket{010}+\ket{101})+\ket{1 11}}{\sqrt{2+2\mathit{m}^{2}}},\] (11)
where \(m\in[0,1]\). These states belong to the \(GHZ\) class when \(m\in[0,1)\). For \(\mathit{m}=1\), the state belongs to the \(W\) class as the tangle is zero at this point. We denote this class of states as the maximally Bell inequality violating (MBV) class. This class has previously been recognized as the maximally dense-coding capable (MDCC) class [13], since its members have the maximal dense-coding capability for a fixed amount of genuine tripartite quantum correlation.
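As a quick consistency check, the two end points of this family are instructive: for \(m=0\) the state is the \(GHZ\) state, for which the tangle is maximal while no reduced two-qubit state violates the Bell-CHSH inequality, and for \(m=1\) the tangle vanishes while the violation is maximal,
\[\tau(\psi_{0})=1,\quad\mathcal{B}_{\max}(\psi_{0})=0;\qquad\tau(\psi_{1})=0,\quad\mathcal{B}_{\max}(\psi_{1})=1,\]
in agreement with Eqs. (15) and (47) derived below.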
### Tangle versus the Maximum Bell Inequality Violation
We first derive the complementarity between the tangle and the bipartite Bell inequality violation for genuinely entangled three qubit pure states. As the \(GHZ\) and the \(W\) classes are disjoint and together exhaust the set of genuinely entangled three qubit pure states [32], it is sufficient to establish the complementarity for the \(GHZ\) and \(W\) classes individually.
The \(GHZ\) class is defined as a set of states which can be converted into the \(GHZ\) state, \(\frac{1}{\sqrt{2}}\left(\ket{000}+\ket{111}\right)\), using SLOCC with non-vanishing probability [32]. This class of states is characterized by five parameters, \(\alpha,\beta,\gamma,\delta\) and \(\phi\), with a general state in the class being defined as follows
\[\ket{\psi}_{GHZ}=\sqrt{K}\left(c_{\delta}\ket{000}+s_{\delta}e^{i\phi}\ket{ \varphi_{\alpha}}\ket{\varphi_{\beta}}\ket{\varphi_{\gamma}}\right),\] (12)
where \(c_{\delta}\) and \(s_{\delta}\) stand for \(\cos{\delta}\) and \(\sin{\delta}\) respectively, \(K=\left(1+c_{\alpha}c_{\beta}c_{\gamma}c_{\phi}s_{2\delta}\right)^{-1}\) is the normalizing constant, and
\[\ket{\varphi_{\alpha}} =c_{\alpha}\ket{0}+s_{\alpha}\ket{1},\]
\[\ket{\varphi_{\beta}} =c_{\beta}\ket{0}+s_{\beta}\ket{1},\]
\[\ket{\varphi_{\gamma}} =c_{\gamma}\ket{0}+s_{\gamma}\ket{1},\] (13)
with parameter ranges \(\alpha,\beta,\gamma\in(0,\pi/2]\), \(\delta\in(0,\pi/4]\) and \(\phi\in[0,2\pi)\). When \(\phi=0\), we denote the corresponding \(GHZ\) subclass by \(GHZ^{R}\) and the respective states by \(\ket{\psi}_{GHZ^{R}}\). It is worth mentioning that the complementary relation is shown analytically to hold for the \(GHZ^{R}\) class of states for all three correlation measures considered in this paper, whereas numerical studies suggest that it holds for the entire \(GHZ\) class.
To derive the complementary relation for the \(GHZ^{R}\) class state, we begin with the following theorem.
**Theorem 1**.: _If two three qubit pure states \(\ket{\psi}_{GHZ^{R}}\) and \(\ket{\psi}_{m}\) are such that the corresponding tangles, \(\tau(\psi_{GHZ^{R}})\) and \(\tau(\psi_{m})\) are equal, then the maximal bipartite Bell inequality violations necessarily follow_
\[\mathcal{B}_{\max}({\psi}_{m})\geq\mathcal{B}_{\max}({\psi_{GHZ^{R}}}).\] (14)
Proof.: The tangle for \(\ket{\psi}_{m}\) is given by (see Appendix A),
\[\tau(\psi_{m})=1-\frac{4m^{2}}{(1+m^{2})^{2}}.\] (15)
From Eq.(47) and Eq.(15), it follows that
\[\mathcal{B}_{\max}({\psi_{\mathit{m}}})+\tau(\psi_{\mathit{m}})=1.\] (16)
The tangle for \(\ket{\psi}_{GHZ^{R}}\) is given by
\[\tau(\psi_{GHZ^{R}})=\frac{s^{2}_{\alpha}s^{2}_{\beta}s^{2}_{\gamma}s^{2}_{2 \delta}}{\left(1+c_{\alpha}c_{\beta}c_{\gamma}s_{2\delta}\right)^{2}}.\] (17)
As \(\tau(\psi_{\mathit{m}})=\tau(\psi_{GHZ^{R}})\), the Bell inequality violation for \(\ket{\psi_{m}}\) can be written as
\[\mathcal{B}_{\max}({\psi_{\mathit{m}}})=1-\frac{s^{2}_{\alpha}s^{2}_{\beta}s^{ 2}_{\gamma}s^{2}_{2\delta}}{\left(1+c_{\alpha}c_{\beta}c_{\gamma}s_{2\delta} \right)^{2}}.\] (18)
Let us consider the case when the reduced bipartition \(\rho_{BC}\) violates the Bell-CHSH inequality, which implies \(\mathcal{B}_{\max}(\psi_{GHZ^{R}})=\mathcal{B}(\rho_{BC})>0\). The Bell inequality violation for the density matrix \(\rho_{BC}\) of \(\ket{\psi_{GHZ^{R}}}\) is given by
\[\mathcal{B}(\rho_{BC})=\frac{\left(c_{\alpha}^{2}-c_{\beta}^{2}-c_{\gamma}^{2} +2c_{\beta}^{2}c_{\gamma}^{2}\right)s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+ \mathcal{X})^{2}},\] (19)
where \(\mathcal{X}=c_{\alpha}c_{\beta}c_{\gamma}s_{2\delta}\). Now,
\[\begin{split}&\mathcal{B}_{\max}\left({\psi_{\mathit{m}}}\right)-\mathcal{B}\left(\rho_{BC}\right)\\ =&\ 1-\frac{s^{2}_{\alpha}s^{2}_{\beta}s^{2}_{\gamma}s^{2}_{2\delta}+\left(c_{\alpha}^{2}+2c_{\beta}^{2}c_{\gamma}^{2}-c_{\beta}^{2}-c_{\gamma}^{2}\right)s_{2\delta}^{2}-\mathcal{X}^{2}}{\left(1+\mathcal{X}\right)^{2}}\\ =&\ 1+\frac{\mathcal{X}^{2}-s_{2\delta}^{2}+\left(s_{\alpha}^{2}c_{\beta}^{2}s_{\gamma}^{2}+c_{\gamma}^{2}s_{\alpha}^{2}+c_{\beta}^{2}s_{\gamma}^{2}+c_{\gamma}^{2}s_{\beta}^{2}\right)s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}.\end{split}\]
Note that, \(\mathcal{B}_{\max}({\psi_{\mathit{m}}})-\mathcal{B}(\rho_{BC})\geq 1-s_{2 \delta}^{2}\geq 0\).
Similarly, when \(\mathcal{B}_{\max}(\psi_{GHZ^{R}})=\mathcal{B}(\rho_{AB})\) or \(\mathcal{B}_{\max}(\psi_{GHZ^{R}})=\mathcal{B}(\rho_{AC})\) (see Appendix C for the Bell inequality violation of \(\rho_{AC}\) and \(\rho_{AB}\)), one can also show that if \(\tau(\psi_{\mathit{m}})=\tau(\psi_{GHZ^{R}})\), then \(\mathcal{B}_{\max}(\psi_{\mathit{m}})\geq\mathcal{B}(\rho_{AB})\) and \(\mathcal{B}_{\max}(\psi_{\mathit{m}})\geq\mathcal{B}(\rho_{AC})\), respectively.
Hence the relation, \(\mathcal{B}_{\max}(\psi_{m})\geq\mathcal{B}_{\max}(\psi_{GHZ^{R}})\) if \(\tau(\psi_{\mathit{m}})=\tau(\psi_{GHZ^{R}}).\) ∎
Let us now derive the relation between the bipartite Bell inequality violation of an arbitrary \(W\) class state \(\ket{\psi}_{W}\) and that of \(\ket{\psi}_{m}\). Note that the \(W\) class states have zero tangle, whereas for \(\ket{\psi}_{m}\) the tangle vanishes, \(\tau(\psi_{m})=0\), if and only if \(m=1\), in which case we denote \(\ket{\psi}_{m}\) as \(\ket{\psi}_{1}\). Next, we prove the following theorem for the \(W\) class states.
**Theorem 2**.: _Among all three-qubit pure states belonging to the \(W\) class, \(\ket{\psi}_{1}\) exhibits the maximum bipartite Bell inequality violation, i.e.,_
\[\mathcal{B}_{\max}(\psi_{1})\geq\mathcal{B}_{\max}(\psi_{W}),\] (20)
_where \(\ket{\psi}_{1}\), is the MBV state \(\ket{\psi}_{m}\) for \(m=1\)._
Proof.: For any two qubit state \(\rho_{XY}\), \(\mathcal{B}(\rho_{XY})\leq 1\)[18]. Putting \(m=1\) in Eq.(47), we get \(\mathcal{B}_{\max}(\psi_{1})=1\), which implies that \(\ket{\psi}_{1}\) shows the maximum bipartite Bell inequality violation among all \(W\) class states. Hence the result,
\[\mathcal{B}_{\max}(\psi_{1})\geq\mathcal{B}_{\max}(\psi_{W}).\qed\]
<figure><img src="content_image/1512.01770/figure-2-tangle.png"><figcaption>Figure 1: Complementarity between the tangle, τ(ψ) and the maximum Bellinequality violation, Bmax(ψ) for 106 number of Haar uniformly generatedrandom three qubit pure states. The MBV states lies at the boundary. Both axesare dimensionless.</figcaption></figure>
From Theorems 1. and 2., for a three qubit pure state \(\ket{\psi}\) either from the \(GHZ^{R}\) or the \(W\) class, we have \(\mathcal{B}_{\max}(\psi)\leq\mathcal{B}_{\max}(\psi_{m})\) when \(\tau(\psi)=\tau(\psi_{m})\). Therefore, for the states in the \(GHZ^{R}\) and \(W\) classes, one has the following complementary relation
\[\tau(\psi)+\mathcal{B}_{\max}(\psi)\leq 1.\] (21)
In Fig. 1, the complementary relation between the tangle and the bipartite Bell inequality violation has been studied for Haar uniformly generated random three qubit pure states. The numerical evidence suggests that the complementary relation holds in general for all three qubit pure states. Based on this numerical evidence, we claim that:
_For **an arbitrary three qubit pure state**\(\ket{\psi}\), the tangle \(\tau(\psi)\), and the maximal bipartite Bell inequality violation \(\mathcal{B}_{\max}(\psi)\), follow the complementary relation given in Eq. 21._
**Extension to mixed states:** In what follows, we prove that if the complementary relation between the tangle and the maximum bipartite Bell inequality violation holds for all three qubit pure states, then it also holds for three qubit mixed states. To this end, we use the convexity, over three qubit mixed states, of the tangle and of the maximum bipartite Bell inequality violation by the reduced two qubit states.
Proof.: The tangle of mixed three qubit states has been defined by convex roof construction [35], i.e., the tangle of a mixed three qubit quantum state \(\rho\) is given as
\[\tau(\rho)=\min_{\{p_{i},|\psi_{i}\rangle\}}\sum_{i}p_{i}\tau(\psi_{i}),\] (22)
where the minimization is over all pure state decompositions of \(\rho\), such that \(\rho=\sum_{i}p_{i}\op{\psi_{i}}\), with \(p_{i}\geq 0\) and \(\sum_{i}p_{i}=1.\) The convexity of the tangle of mixed states follows from the definition itself. Next, we prove the convexity of the maximum bipartite Bell inequality violation \(\mathcal{B}_{\max}(\rho)\) for three qubit mixed states. The maximum Bell-CHSH value [18] for the reduced two qubit system \(\rho_{AB}=\Tr_{C}[\rho_{ABC}]\) of a three qubit mixed state \(\rho_{ABC}\) is given by
\[\mathcal{S}(\rho_{AB} )=\left|\Tr[(O_{AB}^{m}\otimes\mathbb{I}_{C})\rho_{ABC}]\right|\]
\[= \left|\sum_{i}p_{i}\left(\Tr[\left(O_{AB}^{m}\otimes\mathbb{I}_{C }\right)|\psi_{ABC}^{i}\rangle\langle\psi_{ABC}^{i}|]\right)\right|\,.\] (23)
Here, \(O_{AB}^{m}\) is the optimal Bell-CHSH operator [18] for \(\rho_{AB}\), \(\mathbb{I}_{C}\) is the identity operator on the Hilbert space of \(C\), and \(\rho_{ABC}=\sum_{i}p_{i}|\psi_{ABC}^{i}\rangle\langle\psi_{ABC}^{i}|\) is an arbitrary pure state decomposition of \(\rho_{ABC}\). Now using the subadditivity property of absolute value we have
\[\mathcal{S}(\rho_{AB})\leq\sum_{i}p_{i} \left|\Tr[\left(O_{AB}^{m}\otimes\mathbb{I}_{C}\right)|\psi_{ABC} ^{i}\rangle\langle\psi_{ABC}^{i}|]\right|\]
\[= \sum_{i}p_{i}\left|\Tr[O_{AB}^{m}\rho_{AB}^{i}]\right|,\] (24)
where \(\rho_{AB}^{i}=\Tr_{C}[|\psi_{ABC}^{i}\rangle\langle\psi_{ABC}^{i}|]\). On the RHS of the above inequality, \(O_{AB}^{m}\) is, in general, a non-optimal Bell-CHSH operator for the states \(\rho_{AB}^{i}\). If it is replaced by the operator \(O_{AB}^{i,m}\) that is optimal for \(\rho_{AB}^{i}\), i.e., gives the maximum Bell-CHSH value for \(\rho_{AB}^{i}\), for every \(i\), then we have
\[\mathcal{S}(\rho_{AB})\leq\sum_{i}p_{i}\left|\Tr[O_{AB}^{i,m}\rho _{AB}^{i}]\right|\,,\]
\[\mathcal{S}\left(\sum_{i}p_{i}\rho_{AB}^{i}\right)\leq\sum_{i}p_{ i}\mathcal{S}(\rho_{AB}^{i}).\] (25)
Similar relations can be shown for the other two subsystems \(\rho_{AC}\) and \(\rho_{BC}\) as well. Using Eq. (7), the inequality (25) can be written as
\[\sqrt{M\left(\sum_{i}p_{i}\rho_{AB}^{i}\right)}\leq\sum_{i}p_{i} \left(\sqrt{M(\rho_{AB}^{i})}\right)\,.\] (26)
Note that \(M(\rho)\) for a two qubit system \(\rho\) is defined as in Eq. (7). Equation (26) is derived for a two qubit system \(\rho_{AB}\) that is a convex combination of the two qubit states \(\rho_{AB}^{i}\), where \(\rho_{AB}^{i}=\Tr_{C}[|\psi_{ABC}^{i}\rangle\langle\psi_{ABC}^{i}|]\); these \(\rho_{AB}^{i}\) are at most rank-two states, as the \(|\psi_{ABC}^{i}\rangle\) are three qubit pure states. However, Eq. (26) holds for any convex combination \(\rho_{AB}=\sum_{j}q_{j}\rho_{AB}^{j}\) (\(q_{j}\geq 0\), \(\sum_{j}q_{j}=1\)) of arbitrary two qubit states, since only the linearity of the trace and the subadditivity of the absolute value were used in its derivation. Therefore, Eq. (26) implies convexity of the function \(\sqrt{M(\rho)}\). As the square of a non-negative convex function is still convex [36], it follows that
\[M\left(\sum_{i}p_{i}\rho_{AB}^{i}\right)\leq\sum_{i}p_{i}\left[M(\rho_{AB}^{i} )\right]\,.\] (27)
Subtracting 1 from both sides we get
\[M\left(\sum_{i}p_{i}\rho_{AB}^{i}\right)-1\leq\sum_{i}p_{i}[M(\rho_{AB}^{i})-1 ]\,.\] (28)
Hence, it follows that
\[\max\{0,M(\rho_{AB})-1\}\leq \max\{0,\sum_{i}p_{i}[M(\rho_{AB}^{i})-1]\}\]
\[\leq \sum_{i}p_{i}\max\{0,[M(\rho_{AB}^{i})-1]\}\]
\[\mathcal{B}(\rho_{AB})\leq \sum_{i}p_{i}\mathcal{B}(\rho_{AB}^{i})\,.\] (29)
Similarly,
\[\mathcal{B}(\rho_{BC})\leq \sum_{i}p_{i}\mathcal{B}(\rho_{BC}^{i}),\] (30)
\[\mathcal{B}(\rho_{AC})\leq \sum_{i}p_{i}\mathcal{B}(\rho_{AC}^{i}).\] (31)
Now, the maximum bipartite Bell inequality violation of \(\rho_{ABC}\) is \(\mathcal{B}_{\max}(\rho_{ABC})=\max\{\mathcal{B}(\rho_{AB}),\mathcal{B}(\rho_{AC}),\mathcal{B}(\rho_{BC})\}\). Without loss of generality, say \(\mathcal{B}_{\max}(\rho_{ABC})=\mathcal{B}(\rho_{AB})\). Again,
\[\mathcal{B}(\rho_{AB})\leq \sum_{i}p_{i}\mathcal{B}(\rho_{AB}^{i})\]
\[\leq \sum_{i}p_{i}[\max\{\mathcal{B}(\rho_{AB}^{i}),\mathcal{B}(\rho_{ BC}^{i}),\mathcal{B}(\rho_{AC}^{i})\}]\]
\[\leq \sum_{i}p_{i}[\mathcal{B}_{\max}(|\psi_{ABC}^{i}\rangle)].\] (32)
Therefore,
\[\mathcal{B}_{\max}(\rho_{ABC})\leq\sum_{i}p_{i}[\mathcal{B}_{\max}(|\psi_{ABC} ^{i}\rangle)].\] (33)
Hence, the maximum bipartite Bell inequality violation of a three qubit mixed state is convex.
As the tangle and the maximum bipartite Bell inequality violation of a three qubit mixed state are both convex, the complementary relation in Eq. (21) holds for all three qubit mixed states if it holds for all three qubit pure states. ∎
Therefore, supported by the numerical study for the entire \(GHZ\) class, we claim in the following that the complementary relation holds for all three qubit states, not necessarily pure.
**Claim 1**.: _For an arbitrary three-qubit state \(\rho\), the tangle \(\tau(\rho)\) and the maximum bipartite Bell inequality violation \(\mathcal{B}_{\max}(\rho)\) obey the following complementary relation_
\[\tau(\rho)+\mathcal{B}_{\max}(\rho)\leq 1.\] (34)
### GGM versus Maximum Bell Inequality Violation
Let us now derive the complementarity between the GGM and the bipartite Bell inequality violation for the \(GHZ^{R}\) and \(W\) class states.
**Lemma 1**.: _If for a three qubit pure state \(\ket{\psi}_{GHZ^{R}}\), the GGM is obtained from, say the \(A:BC\) bipartite split, then the only reduced bipartite system of \(\ket{\psi}_{GHZ^{R}}\) that can violate the Bell inequality is \(\rho_{BC}\)._
Proof.: The GGM of the state \(\ket{\psi}_{GHZ^{R}}\), is given as \(\mathcal{G}(\psi_{GHZ^{R}})=1-\max\{g_{A},g_{B},g_{C}\}\), where \(g_{i}\) is the maximum eigenvalue of the reduced state \(\rho_{i}\) of \(\ket{\psi}_{GHZ^{R}}\), with \(i\in\{A,B,C\}\). These maximum eigenvalues, \(g_{i}\)’s, are given by (see Appendix B),
\[g_{A}=\frac{1}{2}\left(1+\sqrt{1+\frac{s_{\alpha}^{2}(c_{\beta}^ {2}c_{\gamma}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}}\right),\] (35)
\[g_{B}=\frac{1}{2}\left(1+\sqrt{1+\frac{s_{\beta}^{2}(c_{\alpha}^ {2}c_{\gamma}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}}\right),\] (36)
\[g_{C}=\frac{1}{2}\left(1+\sqrt{1+\frac{s_{\gamma}^{2}(c_{\alpha} ^{2}c_{\beta}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}}\right),\] (37)
where, \(\mathcal{X}=c_{\alpha}c_{\beta}c_{\gamma}s_{2\delta}\). Let us suppose, without loss of generality, that the GGM is obtained from the bipartite split \(A:BC\). Consequently, \(\max\{g_{A},g_{B},g_{C}\}=g_{A}\) and \(\mathcal{G}(\psi_{GHZ^{R}})=1-g_{A}\).
Now, \(g_{A}\geq g_{B}\) yields the condition
\[s_{\alpha}^{2}\left(c_{\beta}^{2}c_{\gamma}^{2}-1\right) \geq s_{\beta}^{2}\left(c_{\alpha}^{2}c_{\gamma}^{2}-1\right)\]
\[\Rightarrow\left(c^{2}_{\alpha}-c^{2}_{\beta}\right)s^{2}_{\gamma} \geq 0,\] (38)
and similarly \(g_{A}\geq g_{C}\) implies
\[\left(c^{2}_{\alpha}-c^{2}_{\gamma}\right)s^{2}_{\beta}\geq 0.\] (39)
The Bell-CHSH values \(M(\rho_{BC})\), \(M(\rho_{AC})\) and \(M(\rho_{AB})\) for the reduced states \(\rho_{BC}\), \(\rho_{AC}\) and \(\rho_{AB}\) of \(\ket{\psi}_{GHZ^{R}}\) (see Appendix C), are given as
\[M(\rho_{BC})=1+\frac{(c_{\alpha}^{2}-c_{\beta}^{2}-c_{\gamma}^{2 }+2c_{\beta}^{2}c_{\gamma}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+\mathcal{X} )^{2}},\] (40)
\[M(\rho_{AC})=1+\frac{(c_{\beta}^{2}-c_{\alpha}^{2}-c_{\gamma}^{2 }+2c_{\alpha}^{2}c_{\gamma}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+\mathcal{X })^{2}},\] (41)
\[M(\rho_{AB})=1+\frac{(c_{\gamma}^{2}-c_{\alpha}^{2}-c_{\beta}^{2 }+2c_{\alpha}^{2}c_{\beta}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+\mathcal{X} )^{2}}.\] (42)
Now using Eq.(38), we obtain
\[M(\rho_{BC})-M(\rho_{AC})=\frac{2\left(c^{2}_{\alpha}-c^{2}_{\beta}\right)s_{ \gamma}^{2}s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}\geq 0.\] (43)
Similarly, Eq.(39) implies
\[M(\rho_{BC})-M(\rho_{AB})=\frac{2\left(c^{2}_{\alpha}-c^{2}_{\gamma}\right)s_{\beta}^{2}s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}\geq 0.\] (44)
Equations (43) and (44) show that \(M(\rho_{BC})\) is the largest of the three Bell-CHSH values. Since, by the monogamy (no-go) constraint on the Bell inequality violation, at most one of the three reduced states can have \(M>1\), only \(\rho_{BC}\) can violate the Bell-CHSH inequality, i.e., \(\mathcal{B}(\rho_{AC})=\mathcal{B}(\rho_{AB})=0\) and only \(\mathcal{B}(\rho_{BC})\) can be nonzero.
Note that cyclic permutations of the variables \(\alpha\), \(\beta\) and \(\gamma\) enables one to obtain, say \(g_{B}\) and \(g_{C}\) from \(g_{A}\). Therefore, a similar proof holds for the cases when the GGM is obtained from the other two bipartite splits. This completes the proof of the lemma. ∎
While we have proved Lemma 1. for the \(GHZ^{R}\) class, a numerical check with \(10^{6}\) Haar uniformly generated random \(GHZ\) class states indicates that the lemma holds for the entire \(GHZ\) class.
**Theorem 3**.: _If \(\ket{\psi}_{GHZ^{R}}\) and \(\ket{\psi}_{m}\) be two three qubit pure states such that the corresponding GGM, \(\mathcal{G}(\psi_{GHZ^{R}})\) and \(\mathcal{G}(\psi_{m})\) are equal then the maximal bipartite Bell inequality violations for the two states necessarily follow_
\[\mathcal{B}_{\max}(\psi_{m})\geq\mathcal{B}_{\max}(\psi_{GHZ^{R}}).\] (45)
Proof.: The GGM (Appendix B) and the maximum Bell inequality violation of the MBV state \(\ket{\psi}_{m}\) (Appendix C), are given as
\[\mathcal{G}(\psi_{m})=\frac{1}{2}-\frac{m}{1+m^{2}},\] (46)
\[\mathcal{B}_{\max}(\psi_{m})=\frac{4m^{2}}{(1+m^{2})^{2}},\] (47)
\[\mathcal{B}_{\max}(\psi_{\mathit{m}})=4\left(\frac{1}{2}-\mathcal{G}(\psi_{ \mathit{m}})\right)^{2}.\] (48)
Let us assume that for \(\ket{\psi}_{GHZ^{R}}\), \(\max\{g_{A},g_{B},g_{C}\}=g_{A}\), then the GGM is obtained from \(A:BC\) bipartite split and
\[\mathcal{G}(\psi_{m})=\mathcal{G}(\psi_{GHZ^{R}})=1-g_{A}\\ =\frac{1}{2}\left(1-\sqrt{1+\frac{s_{\alpha}^{2}(c_{\beta}^{2}c_{ \gamma}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}}\right).\] (49)
Then from Eq.(48), we have
\[\mathcal{B}_{\max}(\psi_{\mathit{m}})_{A:BC}=1+\frac{(c_{\alpha}^{2}+c_{\beta} ^{2}c_{\gamma}^{2}-1)s_{2\delta}^{2}-\mathcal{X}^{2}}{(1+\mathcal{X})^{2}},\] (50)
where the subscript \(A:BC\) indicates that the GGM of \(\ket{\psi}_{GHZ^{R}}\) is obtained from the bipartite split \(A:BC\). From Lemma 1., it follows that if the GGM is obtained from the \(A:BC\) split, then only \(\mathcal{B}(\rho_{BC})\) can be nonzero and thus \(\mathcal{B}_{\max}(\psi_{GHZ^{R}})=\mathcal{B}(\rho_{BC})\). Therefore, to prove Theorem 3., we only have to show \(\mathcal{B}_{\max}(\psi_{m})_{A:BC}\geq\mathcal{B}(\rho_{BC})\).
Now,
\[\begin{split}&\mathcal{B}_{\max}(\psi_{\mathit{m}})_{A:BC}- \mathcal{B}(\rho_{BC})\\ =& 1+\frac{(c_{\beta}^{2}s_{\gamma}^{2}-s_{\gamma}^{ 2})s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}=\ 1-\frac{s_{\beta}^{2}s_{\gamma}^{2} s_{2\delta}^{2}}{(1+\mathcal{X})^{2}}\geq 0,\end{split}\]
since \(\mathcal{X}\geq 0\) and \(s_{\beta}^{2}s_{\gamma}^{2}s_{2\delta}^{2}\leq 1\).
Similarly, it can be proved that when \(\mathcal{G}(\psi_{GHZ^{R}})=1-{g}_{B}\), or \(\mathcal{G}(\psi_{GHZ^{R}})=1-{g}_{C}\), we have \(\mathcal{B}_{\max}(\psi_{\mathit{m}})_{B:AC}\geq\mathcal{B}(\rho_{AC})\) or \(\mathcal{B}_{\max}(\psi_{\mathit{m}})_{C:AB}\geq\mathcal{B}(\rho_{AB})\) respectively.
Hence, \(\mathcal{B}_{\max}(\psi_{m})\geq\mathcal{B}_{\max}(\psi_{GHZ^{R}})\), when \(\mathcal{G}(\psi_{m})=\mathcal{G}(\psi_{GHZ^{R}})\). ∎
Now we derive the complementary relation between the Bell inequality violation and the GGM for the \(W\) class states. The \(W\) state, \(\frac{1}{\sqrt{3}}(\ket{001}+\ket{010}+\ket{100})\), is the representative of the \(W\) class, which is the set of all states that can be converted to the \(W\) state using SLOCC with non-zero probability. The \(W\) class states are given by
\[\ket{\psi}_{W}=\left(\sqrt{d}\ket{000}+\sqrt{a}\ket{001}+\sqrt{b}\ket{010}+ \sqrt{c}\ket{100}\right),\] (51)
where \(a+b+c+d=1\) is the normalizing condition, and \(a,b,c>0\), \(d=1-(a+b+c)\geq 0\)[32]. To establish the complementary relation let us first prove the following lemma for \(W\) class states.
**Lemma 2**.: _If for a three qubit pure state \(\ket{\psi}_{W}\), the GGM is obtained from, say \(A:BC\) split, then the only reduced bipartite system of \(\ket{\psi}_{W}\) that can violate the Bell inequality is \(\rho_{BC}\)._
Proof.: The maximum eigenvalues corresponding to the reduced systems \(\rho_{A}\), \(\rho_{B}\) and \(\rho_{C}\) for \(\ket{\psi}_{W}\) are respectively given by (see Appendix B),
\[w_{A}=\frac{1}{2}\left(1+\sqrt{1-4(a+b)c}\right),\] (52)
\[w_{B}=\frac{1}{2}\left(1+\sqrt{1-4(a+c)b}\right),\] (53)
\[w_{C}=\frac{1}{2}\left(1+\sqrt{1-4(b+c)a}\right),\] (54)
where \(w_{X}\) is the maximum eigenvalue of the subsystem \(\rho_{X}\) of state \(\ket{\psi_{W}}\), with \(X\in\{A,B,C\}\). If we assume that the GGM is obtained from the bipartite split \(A:BC\), then it implies that \(w_{A}>w_{B}\) and \(w_{A}>w_{C}\).
From \(w_{A}\geq w_{B}\) and \(w_{A}\geq w_{C}\) we respectively get
\[a\left(b-c\right) \geq 0,\] (55)
\[b\left(a-c\right) \geq 0.\] (56)
For the reduced states \(\rho_{BC}\), \(\rho_{AC}\) and \(\rho_{AB}\) of \(\ket{\psi}_{W}\), the Bell-CHSH values \(M(\rho_{BC})\), \(M(\rho_{AC})\) and \(M(\rho_{AB})\) are (see appendix C) respectively given as
\[M(\rho_{BC})=\frac{1}{2}+\frac{12ab-4ac-4bc+\sqrt{V}}{2},\] (57)
\[M(\rho_{AC})=\frac{1}{2}+\frac{12ac-4ab-4bc+\sqrt{V}}{2},\] (58)
\[M(\rho_{AB})=\frac{1}{2}+\frac{12bc-4ab-4ac+\sqrt{V}}{2},\] (59)
where, \(V=[(\sqrt{a}+\sqrt{b}-\sqrt{c})^{2}+d][(\sqrt{a}-\sqrt{b}+\sqrt{c})^{2}+d][(- \sqrt{a}+\sqrt{b}+\sqrt{c})^{2}+d][(\sqrt{a}+\sqrt{b}+\sqrt{c})^{2}+d]\).
Then from Eq.(55) and Eq.(56), one can show
\[M(\rho_{BC})-M(\rho_{AC})=8a\left(b-c\right) \geq 0,\] (60)
\[M(\rho_{BC})-M(\rho_{AB})=8b\left(a-c\right) \geq 0.\] (61)
The two equations above show that \(M(\rho_{BC})\) is the largest of the three Bell-CHSH values. By the no-go (monogamy) constraint on the Bell inequality violation, at most one of the reduced states can have \(M>1\); hence \(M(\rho_{AC})\leq 1\) and \(M(\rho_{AB})\leq 1\). Therefore, it follows from Eq. (9) that \(\mathcal{B}(\rho_{AC})=\mathcal{B}(\rho_{AB})=0\) and only \(\mathcal{B}(\rho_{BC})\) can be nonzero when \(\mathcal{G}(\psi_{W})=1-w_{A}\). The cases in which the GGM is obtained from the other two bipartite splits follow from permutations of the parameters \(a\), \(b\) and \(c\), with a similar proof. This completes the proof of the lemma. ∎
From the Lemma 1., Lemma 2. and the numerical studies for \(GHZ\) class states we conjecture that for all entangled three qubit pure states, if the bipartite split \(X:YZ\) gives the GGM, then among the three reduced bipartite states, only \(\rho_{YZ}\) can exhibit the Bell inequality violation.
**Theorem 4**.: _If \(\ket{\psi}_{W}\) and \(\ket{\psi}_{m}\) are two three qubit pure states such that the respective GGM, \(\mathcal{G}(\psi_{W})\) and \(\mathcal{G}(\psi_{m})\) are equal, then the maximal bipartite Bell inequality violations necessarily follow_
\[\mathcal{B}_{\max}(\psi_{m})\geq\mathcal{B}_{\max}(\psi_{W}).\] (62)
Proof.: Let us first assume that \(\max\{w_{A},w_{B},w_{C}\}=w_{A}\) and hence \(\mathcal{G}(\psi_{W})=1-w_{A}\). Therefore,
\[\mathcal{G}(\psi_{m})=\mathcal{G}(\psi_{W})=1-w_{A}\\ =\frac{1}{2}\left(1-\sqrt{1-4(a+b)c}\right).\] (63)
Employing Eq.(48), the corresponding maximum Bell inequality violation for \(\ket{\psi}_{m}\) is given as
\[\mathcal{B}_{\max}(\psi_{\mathit{m}} )_{A:BC}=1-4(a+b)c,\] (64)
with the subscript indicating that the bipartite split \(A:BC\) gives the GGM for \(\ket{\psi}_{W}\).
From Lemma 2., it follows that only \(\mathcal{B}(\rho_{BC})\) can be nonzero, and hence \(\mathcal{B}_{\max}(\psi_{W})=\mathcal{B}(\rho_{BC})\). Thus, we only need to show that \(\mathcal{B}_{\max}(\psi_{m})_{A:BC}\geq\mathcal{B}(\rho_{BC})\):
Now,
\[\begin{split}\mathcal{B}_{\max}(\psi_{\mathit{m}})_{A:BC}& -\mathcal{B}(\rho_{BC})\\ =&\frac{3}{2}-\frac{4ac+4bc+12ab+\sqrt{V}}{2},\\ \end{split}\]
where \(V=[(\sqrt{a}+\sqrt{b}-\sqrt{c})^{2}+d][(\sqrt{a}-\sqrt{b}+\sqrt{c})^{2}+d][(- \sqrt{a}+\sqrt{b}+\sqrt{c})^{2}+d][(\sqrt{a}+\sqrt{b}+\sqrt{c})^{2}+d]\).
It can be shown that the minimum of \(\mathcal{B}_{\max}(\psi_{\mathit{m}})_{A:BC}-\mathcal{B}(\rho_{BC})\) is zero, and therefore \(\mathcal{B}_{\max}(\psi_{\mathit{m}})_{A:BC}\geq\mathcal{B}(\rho_{BC})\). The same can be proved for the cases when the other two bipartite splits give the GGM, following the cyclic order of the parameters \(a\), \(b\) and \(c\).
<figure><img src="content_image/1512.01770/figure-1-ggm.png"><figcaption>Figure 2: Complementarity between the GGM, G(ψ) and the maximum Bellinequality violation, Bmax(ψ) for 106 number of Haar uniformly generatedrandom three qubit pure states. The MBV states lies at the boundary. Both axesare dimensionless.</figcaption></figure>
∎
From Theorem 3. and 4., for a state \(\ket{\psi}\) belonging to either of the \(GHZ^{R}\) or \(W\) classes, we have that \(\mathcal{B}_{\max}(\psi)\leq\mathcal{B}_{\max}(\psi_{m})\), when \(\mathcal{G}(\psi)=\mathcal{G}(\psi_{m})\). Hence, for the states in \(GHZ^{R}\) and \(W\) classes, one can show the following complementary relation
\[4\mathcal{G}(\psi)\big{(}1-\mathcal{G}(\psi)\big{)}+\mathcal{B}_ {\max}(\psi)\leq 1.\] (65)
We study the complementary relation between the GGM and the maximum Bell inequality violation for Haar uniformly generated random genuinely entangled three qubit pure states in Fig. 2. The numerical study (Appendix D) suggests that it holds for the entire \(GHZ\) class. Therefore, we propose that the following complementary relation holds for all three qubit pure states.
**Claim 2**.: _For any three qubit pure state \(\ket{\psi}\) the GGM, \(\mathcal{G}(\psi)\) and the maximal bipartite Bell inequality violation, \(\mathcal{B}_{\max}(\psi)\) obey the following complementary relation_
\[4\mathcal{G}(\psi)\big{(}1-\mathcal{G}(\psi)\big{)}+\mathcal{B}_{\max}(\psi) \leq 1.\] (66)
### Discord Monogamy Score versus Maximum Bell Inequality Violation
<figure><img src="content_image/1512.01770/figure-3-dms.png"><figcaption>Figure 3: Complementarity between the discord monogamy score, δD(ψ) and themaximum Bell inequality violation, Bmax(ψ) for 106 number of Haar uniformlygenerated random three qubit pure states. The MBV states lie at the boundary.Both axes are dimensionless.</figcaption></figure>
Moreover, we study numerically the complementary relation between the discord monogamy score (DMS) and the bipartite Bell inequality violation (see Fig. 3). The DMS is not always non-negative, as the discord is not monogamous. The MBV states lie at the boundary of the complementary relation in this case too.
## V Conclusions
Quantifying nonlocality and finding a relation between nonlocality and quantum correlations is an important yet challenging task in multipartite and higher dimensional quantum systems. In the recent past it has been observed that the correlation statistics of two-body subsystems can be very fruitful in inferring the multipartite properties of a composite quantum system. In this article we find a single parameter class of states, called the MBV class, that exhibits the maximum Bell inequality violation for the reduced two qubit systems for a fixed amount of genuine tripartite correlation. The measures chosen to quantify genuine tripartite correlation belong both to the entanglement-separability paradigm (the tangle and the GGM) and to the information theoretic paradigm (the DMS). This in turn establishes a complementary relation between the Bell-CHSH violation by the reduced bipartite systems and the genuine quantum correlation quantified by the tangle in all three qubit states, and by the GGM and the DMS in three qubit pure states, with the MBV class lying at the boundary in each case. The complementary relation suggests that, for all three measures, the Bell inequality violation of the reduced two party systems comes at the cost of the genuine tripartite correlations present in the whole system.
###### Acknowledgements.
AM thanks Dagmar Bruss, Michael Horne, Marcus Huber, Hermann Kampermann, Miguel Navascues, Ujjwal Sen and Jochen Szangolies for fruitful discussions and suggestions. PP acknowledges IIIT-Hyderabad for support during the MS project. AM acknowledges IIIT-Hyderabad for hospitality and support during academic visit and research fellowship of DAE, Govt. of India.
## Appendix A Expressions for tangle
For a three qubit pure state \(\ket{\psi}_{ABC}=\sum_{i,j,k=0}^{1}c_{ijk}\ket{ijk}\), the 3-tangle or the residual tangle [27] is given as
\[\tau_{ABC} =\tau_{A:BC}-\tau_{AB}-\tau_{AC},\]
\[=4\left|s_{1}-2s_{2}+4s_{3}\right|,\] (67)
where,
\[s_{1} =c_{000}^{2}c_{111}^{2}+c_{001}^{2}c_{110}^{2}+c_{010}^{2}c_{101} ^{2}+c_{100}^{2}c_{011}^{2},\] (68)
\[s_{2} =c_{000}c_{111}\left(c_{011}c_{100}+c_{101}c_{010}+c_{110}c_{001}\right)\]
\[+c_{011}c_{100}\left(c_{101}c_{010}+c_{110}c_{001}\right)\]
\[+c_{101}c_{010}c_{110}c_{001},\] (69)
\[s_{3} =c_{000}c_{110}c_{101}c_{011}+c_{111}c_{001}c_{010}c_{100}.\] (70)
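For instance, for the \(GHZ\) state \(\frac{1}{\sqrt{2}}(\ket{000}+\ket{111})\) only \(s_{1}=1/4\) is nonzero, giving \(\tau=1\), whereas for the \(W\) state \(\frac{1}{\sqrt{3}}(\ket{001}+\ket{010}+\ket{100})\) all of \(s_{1}\), \(s_{2}\) and \(s_{3}\) vanish, giving \(\tau=0\).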
### GHZ Class
For \(\ket{\psi}_{GHZ}=\sqrt{K}\left(c_{\delta}\ket{000}+s_{\delta}e^{i\phi}\ket{\varphi_{\alpha}}\ket{\varphi_{\beta}}\ket{\varphi_{\gamma}}\right)\) (Eq. (12)), \(s_{1}\), \(s_{2}\) and \(s_{3}\) are respectively given as
\[s_{1} =e^{2i\phi}t_{2}\frac{\left(c_{\delta}+e^{i\phi}t_{1}\right)^{2}+3e^{2i\phi}t_{1}^{2}}{(1+\mathcal{X}_{\phi})^{2}},\] (71)
\[s_{2} =3e^{3i\phi}t_{1}t_{2}\frac{\left(c_{\delta}+2e^{i\phi}t_{1} \right)}{(1+\mathcal{X}_{\phi})^{2}},\] (72)
\[s_{3} =e^{3i\phi}t_{1}t_{2}\frac{c_{\delta}+2e^{i\phi}t_{1}}{(1+\mathcal{X}_{\phi})^{2}},\] (73)
where, \(t_{1}=c_{\alpha}c_{\beta}c_{\gamma}s_{\delta}\), \(t_{2}=s_{\alpha}^{2}s_{\beta}^{2}s_{\gamma}^{2}s_{\delta}^{2}\) and \(\mathcal{X}_{\phi}=c_{\alpha}c_{\beta}c_{\gamma}c_{\phi}s_{2\delta}\). Hence, the tangle of \(\ket{\psi}_{GHZ}\) is given as
\[\tau_{GHZ}=\frac{s^{2}_{\alpha}s^{2}_{\beta}s^{2}_{\gamma}s^{2}_{2\delta}}{(1+ c_{\alpha}c_{\beta}c_{\gamma}c_{\phi}s_{2\delta})^{2}}\] (74)
### Maximally Bell inequality violating (MBV) class
For the MBV states \(\ket{\psi_{\mathit{m}}}=\frac{\ket{000}+\mathit{m}(\ket{010}+\ket{101})+\ket{1 11}}{\sqrt{2+2\mathit{m}^{2}}}\) (Eq.(11)), \(s_{1}\), \(s_{2}\) and \(s_{3}\) are respectively given by
\[s_{1} =\frac{1+m^{4}}{4(1+m^{2})^{2}},\] (75)
\[s_{2} =\frac{m^{2}}{4(1+m^{2})^{2}},\] (76)
\[s_{3} =0,\] (77)
and the tangle is,
\[\tau_{m}=1-\frac{4m^{2}}{(1+m^{2})^{2}}.\] (78)
## Appendix B Expressions for GGM
The GGM for a three qubit pure state \(\ket{\psi}_{ABC}\), is calculated as
\[\mathcal{G}(\psi_{ABC})=1-\max\{\lambda_{A},\lambda_{B},\lambda_{C}\},\]
where \(\lambda_{A}\), \(\lambda_{B}\) and \(\lambda_{C}\) are the largest eigenvalues of the reduced systems \(\rho_{A}\), \(\rho_{B}\) and \(\rho_{C}\) respectively.
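As a cross-check of this prescription, the following illustrative Python/NumPy sketch (written for this presentation, not part of the original work) evaluates the GGM of a three-qubit pure state directly from the largest eigenvalues of its single-qubit reductions.

```python
import numpy as np

def ggm(psi):
    """GGM of a three-qubit pure state (8 amplitudes, qubit order A, B, C):
    1 minus the largest eigenvalue among the single-qubit reduced states."""
    t = np.asarray(psi, dtype=complex).reshape(2, 2, 2)
    largest = []
    for q in range(3):
        m = np.moveaxis(t, q, 0).reshape(2, 4)   # qubit q versus the other two
        rho = m @ m.conj().T                     # single-qubit reduced state
        largest.append(np.max(np.linalg.eigvalsh(rho)))
    return 1.0 - max(largest)

# GHZ state -> GGM = 1/2, W state -> GGM = 1/3
ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8, dtype=complex); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print(ggm(ghz), ggm(w))
```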
### GHZ Class
For \(\ket{\psi}_{GHZ}\), the eigenvalues of the reduced systems \(\rho_{A}\), \(\rho_{B}\) and \(\rho_{C}\) are respectively given as follows.
For subsystem \(\rho_{A}\),
\[g_{A}^{\pm}=\frac{1}{2}\left(1\pm\sqrt{1+\frac{s_{\alpha}^{2}(c_ {\beta}^{2}c_{\gamma}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X_{\phi}})^{2}}} \right),\]
For subsystem \(\rho_{B}\),
\[g_{B}^{\pm}=\frac{1}{2}\left(1\pm\sqrt{1+\frac{s_{\beta}^{2}(c_{ \alpha}^{2}c_{\gamma}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X_{\phi}})^{2}}} \right),\]
For subsystem \(\rho_{C}\),
\[g_{C}^{\pm}=\frac{1}{2}\left(1\pm\sqrt{1+\frac{s_{\gamma}^{2}(c_ {\alpha}^{2}c_{\beta}^{2}-1)s_{2\delta}^{2}}{(1+\mathcal{X_{\phi}})^{2}}} \right).\]
Here \(\mathcal{X_{\phi}}=c_{\alpha}c_{\beta}c_{\gamma}c_{\phi}s_{2\delta}\). In each subsystem, the eigenvalue \(g^{+}_{X}\) is clearly the larger of the two and is relabeled \(g_{X}\), where \(X\in\{A,B,C\}\).
### W Class
For \(\ket{\psi}_{W}=\sqrt{d}\ket{000}+\sqrt{a}\ket{001}+\sqrt{b}\ket{010}+\sqrt{c} \ket{100}\), the eigenvalues of the reduced subsystems are as follows. For subsystem \(\rho_{A}\),
\[w_{A}^{\pm}=\frac{1}{2}\left(1\pm\sqrt{1-4(a+b)c}\right),\]
For subsystem \(\rho_{B}\),
\[w_{B}^{\pm}=\frac{1}{2}\left(1\pm\sqrt{1-4(a+c)b}\right),\]
For subsystem \(\rho_{C}\),
\[w_{C}^{\pm}=\frac{1}{2}\left(1\pm\sqrt{1-4(b+c)a}\right),\]
Again, the eigenvalue \(w^{+}_{X}\) in each pair is the larger of the two, where \(X\in\{A,B,C\}\).
### MBV class
For \(\ket{\psi}_{m}\), the eigenvalues of \(\rho_{A}\) and \(\rho_{C}\) are \(\{1/2,1/2\}\), and the eigenvalues of \(\rho_{B}\) are given by
\[\lambda_{B}^{\pm}=\frac{1}{2}\pm\frac{m}{1+m^{2}}.\]
Clearly, \(\lambda_{B}^{+}\) is the greatest among all the eigenvalues. Hence, the GGM is given by
\[\mathcal{G}(\psi_{m})=1-\lambda_{B}^{+}=\frac{1}{2}-\frac{m}{1+m^{2}}.\] (79)
## Appendix C Expressions for the Bell inequality violation
Here we give the analytical expressions for the Bell inequality violations for the \(GHZ^{R}\) and \(W\) classes of states, calculated as per the method prescribed by Horodecki _et al._ [18]. The Bell inequality violation is quantified as \(\mathcal{B}(\rho_{AB})=\max\{0,M(\rho_{AB})-1\}\), where \(M(\rho_{AB})\) is the sum of the two largest eigenvalues of the matrix \(T_{AB}^{T}T_{AB}\). Here, \(T_{AB}\) is the correlation matrix such that \((T_{AB})_{ij}=\mbox{Tr}[(\sigma_{i}\otimes\sigma_{j})\rho_{AB}]\). As we are dealing with three qubit pure states, the Bell inequality violation is calculated for each of the three reduced two qubit systems, \(\rho_{AB}\), \(\rho_{BC}\) and \(\rho_{AC}\).
### \(GHZ^{R}\) Class
For the state \(\ket{\psi}_{GHZ^{R}}\), the eigenvalues of the matrix \(T^{T}_{BC}T_{BC}\) are as follows,
\[\lambda_{1} =\frac{c_{\alpha}^{2}s_{\beta}^{2}s_{\gamma}^{2}s_{\delta}^{2}}{( 1+\mathcal{X})^{2}},\] (80)
\[\lambda_{2} =\mathcal{A}_{BC}-\frac{\sqrt{\mathcal{D}}}{2(1+\mathcal{X})^{2}},\] (81)
\[\lambda_{3} =\mathcal{A}_{BC}+\frac{\sqrt{\mathcal{D}}}{2(1+\mathcal{X})^{2}},\] (82)
where \(\mathcal{X}=c_{\alpha}c_{\beta}c_{\gamma}s_{2\delta}\), and
\[\mathcal{A}_{BC}=\frac{1}{2}+\frac{(c_{\alpha}^{2}-c_{\beta}^{2}-c_{\gamma}^{2 }+2c_{\beta}^{2}c_{\gamma}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{2(1+\mathcal{X })^{2}},\] (83)
\[\begin{split}\mathcal{D}=1+& 4\mathcal{X}+s_{2\delta }^{2}\big{[}-s_{2\delta}^{2}\sum c_{\alpha}+(4\mathcal{X}-2c_{2\delta}^{2}) \sum c_{\alpha}^{2}\\ &+2(1+c_{2\delta}^{2})\sum c_{\alpha}^{2}c_{\beta}^{2}-8\mathcal{ X}+4\mathcal{X}^{2}\big{]},\end{split}\] (84)
with \(\sum c_{\alpha}=(c_{\alpha}+c_{\beta}+c_{\gamma})\), \(\sum c_{\alpha}^{2}=(c_{\alpha}^{2}+c_{\beta}^{2}+c_{\gamma}^{2})\) and \(\sum c_{\alpha}^{2}c_{\beta}^{2}=(c_{\alpha}^{2}c_{\beta}^{2}+c_{\alpha}^{2}c_ {\gamma}^{2}+c_{\beta}^{2}c_{\gamma}^{2})\).
It can be shown that the eigenvalues are in the following order, \(\lambda_{1}<\lambda_{2}<\lambda_{3}\). Thus, the Bell-CHSH value, \(M(\rho_{BC})=\lambda_{3}+\lambda_{2}=2\mathcal{A}_{BC}\),
\[M(\rho_{BC})=1+\frac{(c_{\alpha}^{2}-c_{\beta}^{2}-c_{\gamma}^{2}+2c_{\beta}^{ 2}c_{\gamma}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+\mathcal{X})^{2}}.\] (85)
Similarly, for both the subsystems \(\rho_{AC}\) and \(\rho_{AB}\), the eigenvalues of \(T^{T}_{AC}T_{AC}\) and \(T^{T}_{AB}T_{AB}\) respectively, follow the ordering \(\lambda_{1}<\lambda_{2}<\lambda_{3}\). The corresponding Bell-CHSH values are given as
\[M(\rho_{AC})=1+\frac{(c_{\beta}^{2}-c_{\alpha}^{2}-c_{\gamma}^{2}+2c_{\alpha}^ {2}c_{\gamma}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+\mathcal{X})^{2}}.\] (86)
\[M(\rho_{AB})=1+\frac{(c_{\gamma}^{2}-c_{\alpha}^{2}-c_{\beta}^{2}+2c_{\alpha}^ {2}c_{\beta}^{2})s^{2}_{2\delta}-\mathcal{X}^{2}}{(1+\mathcal{X})^{2}}.\] (87)
There is an inherent exchange symmetry in these expressions: swapping the parameters \(\alpha\) and \(\beta\) in \(M(\rho_{BC})\) yields \(M(\rho_{AC})\), and similarly swapping \(\beta\) and \(\gamma\) in \(M(\rho_{AC})\) gives \(M(\rho_{AB})\). For each subsystem, the Bell inequality violation is defined as \(\mathcal{B}(\rho_{XY})=\max\{0,M(\rho_{XY})-1\}\).
### W Class
For the state \(\ket{\psi}_{W}\), the eigenvalues for the matrix \(T_{BC}^{T}T_{BC}\) are given as,
\[\lambda_{1} =4ab,\] (88)
\[\lambda_{2} =\frac{1}{2}+2(ab-ac-bc)-\frac{\sqrt{V}}{2},\] (89)
\[\lambda_{3} =\frac{1}{2}+2(ab-ac-bc)+\frac{\sqrt{V}}{2},\] (90)
where \(a+b+c+d=1\), and \(V=[(\sqrt{a}+\sqrt{b}-\sqrt{c})^{2}+d][(\sqrt{a}-\sqrt{b}+\sqrt{c})^{2}+d][(- \sqrt{a}+\sqrt{b}+\sqrt{c})^{2}+d][(\sqrt{a}+\sqrt{b}+\sqrt{c})^{2}+d]\).
The eigenvalues follow the ordering, \(\lambda_{2}<\lambda_{1}<\lambda_{3}\), implying \(M(\rho_{BC})=\lambda_{1}+\lambda_{3}\),
\[M(\rho_{BC})=\frac{1}{2}+\frac{12ab-4ac-4bc+\sqrt{V}}{2}.\] (91)
Similarly, for the subsystems \(\rho_{AC}\) and \(\rho_{AB}\), the eigenvalues for \(T_{AC}^{T}T_{AC}\) and \(T_{AB}^{T}T_{AB}\) follow the same ordering as mentioned earlier and hence, the Bell-CHSH values for both the bipartitions are given by
\[M(\rho_{AC}) =\frac{1}{2}+\frac{-4ab+12ac-4bc+\sqrt{V}}{2},\] (92)
\[M(\rho_{AB}) =\frac{1}{2}+\frac{-4ab-4ac+12bc+\sqrt{V}}{2}.\] (93)
Note that an exchange symmetry similar to that of the \(GHZ^{R}\) class holds here too: swapping \(a\) and \(b\) in \(M(\rho_{AB})\) gives \(M(\rho_{AC})\), and then swapping \(b\) and \(c\) in \(M(\rho_{AC})\) gives \(M(\rho_{BC})\). The Bell inequality violation for each subsystem is then obtained as \(\mathcal{B}(\rho_{XY})=\max\{0,M(\rho_{XY})-1\}\).
### MBV class
In the case of the MBV class state \(\ket{\psi}_{m}\), the correlation matrices for \(\rho_{BC}\) and \(\rho_{AB}\) are equal and hence their Bell-CHSH values are equal. Since, by monogamy, two reduced states cannot simultaneously violate the Bell-CHSH inequality, neither \(\rho_{BC}\) nor \(\rho_{AB}\) can violate it, and only \(\rho_{AC}\) can exhibit a violation of the Bell inequality. The eigenvalues of \(T_{AC}^{T}T_{AC}\) for \(\rho_{AC}\) are then
\[\lambda_{1} =1,\] (94)
\[\lambda_{2} =\lambda_{3}=\frac{4m^{2}}{(1+m^{2})^{2}}.\] (95)
This implies that
\[M(\rho_{AC})=1+\frac{4m^{2}}{(1+m^{2})^{2}}.\] (96)
Therefore,
\[\mathcal{B}_{\max}(\psi_{m})=M(\rho_{AC})-1=\frac{4m^{2}}{(1+m^{2})^{2}}.\] (97)
## Appendix D Numerical Method
To perform the numerical study we have generated Haar-uniformly distributed random three qubit pure states. As the sets of fully separable, bi-separable and \({\it W}\) class states have measure zero with respect to the set of \({\it GHZ}\) class states, almost all Haar-uniformly generated random pure states belong to the \({\it GHZ}\) class. This has been cross-checked by verifying that the tangle of the states generated in this way is non-vanishing. For each such randomly generated state we have evaluated the maximum bipartite Bell inequality violation and the three correlation measures, namely the tangle, the GGM and the DMS. We have performed our study with \(10^{7}\) randomly generated states for each measure, while the plots display the results for \(10^{6}\) states.
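The sampling procedure described above can be reproduced with a short script. The following sketch is an illustrative Python/NumPy snippet (not the authors' code): it draws Haar-uniform three-qubit pure states by normalizing vectors of independent complex Gaussian amplitudes and evaluates the three-tangle of Eq. (67) to confirm that it is generically non-vanishing.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def haar_random_pure_state(dim=8):
    """Haar-uniform pure state: a normalized vector of i.i.d. complex Gaussian amplitudes."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def three_tangle(c):
    """Three-tangle of Eqs. (67)-(70); c holds the amplitudes c_{ijk} in the order
    (000, 001, 010, 011, 100, 101, 110, 111)."""
    c000, c001, c010, c011, c100, c101, c110, c111 = np.asarray(c, dtype=complex)
    s1 = (c000**2 * c111**2 + c001**2 * c110**2
          + c010**2 * c101**2 + c100**2 * c011**2)
    s2 = (c000 * c111 * (c011 * c100 + c101 * c010 + c110 * c001)
          + c011 * c100 * (c101 * c010 + c110 * c001)
          + c101 * c010 * c110 * c001)
    s3 = c000 * c110 * c101 * c011 + c111 * c001 * c010 * c100
    return 4.0 * abs(s1 - 2.0 * s2 + 4.0 * s3)

# Almost all Haar-random three-qubit pure states have a non-vanishing tangle,
# i.e. they belong to the GHZ class; W-class and separable states have measure zero.
tangles = [three_tangle(haar_random_pure_state()) for _ in range(10_000)]
print(min(tangles) > 0.0, float(np.mean(tangles)))
```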
## References
* (1) E. Schrödinger, M. Born, Mathematical Proceedings of the Cambridge Philosophical Society **31** 555 (1935); E. Schrödinger, P. Dirac, A. M. Mathematical Proceedings of the Cambridge Philosophical Society **32** 446 (1936).
* (2) R. Horodecki, P. Horodecki, M. Horodecki, _et al_., Rev. Mod. Phys. **81**, 865 (2009).
* (3) A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. **47** (10): 777–780 (1935).
* (4) J. Bell, Physics **1** 3, 195–200, (1964).
* (5) N. Brunner, D. Cavalcanti, S. Pironio _et al_., Rev. Mod. Phys. **86** 419 (2014).
* (6) J.Clauser, M. Horne, A. Shimony, _et al_., Phys. Rev. Lett. **23** 880 (1969);
* (7) D.M. Greenberger. M Horne, A. Shimony _et al_., Am. J. Phys. **58**, 1131 (1990); N.D. Mermin, Phys. Rev. Lett. **65**, 1838 (1990); M. Ardehali, Phys. Rev. A **46**, 5375 (1992); A.V. Belinskii and D.N. Klyshko, Phys. Usp. **36** 653 (1993); A. Peres, Found. Phys. **29**, 589, (1999); I. Pitowsky and K. Svozil, Phys. Rev. A **64**, 014102 (2001); I. Pitowsky, Quantum Probabilty–Quantum Logic (Springer, Berlin, 1989); R. F. Werner and M. M. Wolf, Phys. Rev. A **64**, 032112 (2001); W. Laskowski, T. Paterek, M. Zukowski _et al_., Phys. Rev. Lett. **93**, 200401 (2004); L. Aolita, R. Gallego, A. Cabello _et al_., Phys. Rev. Lett. **108**, 100401 (2012). M. Żukowski and C. Brukner, Phys. Rev. Lett. **88**, 210401 (2001);
* (8) G. Svetlichny, Phys. Rev. D **35**, 3066 (1987);
* (9) Michael Redhead, Incompleteness, Nonlocality and Realism, ISBN: 9780198242383.
* (10) A.K. Ekert, Phys. Rev. Lett. **67**, 661 (1991); J. Barrett, L. Hardy and A. Kent, Phys. Rev. Lett. **95**, 010503 (2005); A. Acin, N. Brunner, N. Gisin _et al_., Phys. Rev. Lett. **98**, 230501 (2007).
* (11) S. Pironio, A. Acin, S. Massar _et al_., Nature (London) **464**, 1021 (2010); R. Colbeck, Ph.D. Thesis, University of Cambridge, 2007; R. Colbeck and A. Kent, J. Phys. A **44**, 095305 (2011).
* (12) C. H. Bennett, S. J. Wisener, Phys. Rev. Lett. **69**, 2881 (1992).
* (13) R. Nepal, R. Prabhu, A. Sen(De) _et al_., Phys. Rev. A **87**, 032336 (2013).
* (14) C. H. Bennett, G. Brassard, C Crépeau _et al_., Phys. Rev. Lett. **70**, 1895 (1993); R. Horodecki, M. Horodecki, P. Horodecki, Phys. Lett. A 222 (1996) 21.
* (15) S.K. Sazim, I. Chakrabarty Eur. Phys. J. D **67**, 174 (2013); D.Deutsch, A.Ekert, R Jozsa _et al_, Phys. Rev. Lett. **77**, 2818 (1996); L. Goldenberg and L. Vaidman, Phys. Rev. Lett. **75** 1239 (1995); A Cabello Phys. Rev. A **61** 052312 (2000); C. Li, H-S Song and L. Zhou, Journal of Opptics B: Quantum semiclass. opt. **5** 155(2003); A. K. Pati, Phys. Rev. A **63**, 014320 (2001); S. Adhikari, B. S. Choudhury, Phys. Rev. A **74** 032323 (2006); I. Chakrabarty, B.S. Choudhary arXiv:0804:2568; I. Chakrabarty Intl J. Quant. Inf. **7**, 559 (2009); S. Adhikari, A.S. Majumdar, N. Nayak, Phys. Rev. A. **77** 042301 (2008); I. Ghiu, Phys. Rev. A **67** 012323 (2003); S. Adhikari, I Chakrabarty, B.S. Choudhary, J. Phys. A **39** 8439 (2006); V. Buzek, V. Vedral, M.B. Plenio _et al_., Phys. Rev. A **55** 3327 (1997); A Orieux, A. D’Arrigo, G Ferranti _et al_., arXiv:1410.3678; A. D’Arrigo, R. Lo Franco, G. Benenti _et al_., Ann. Phys. **350** (2014).
* (16)D. Cavalcanti, A Acin, N. Brunner _et al_., Phys. Rev. A **87**, 042104 (2013).
* (17) N. Gisin, Phys. Lett. A 154, 201 (1991);
* (18) R. Horodecki, P. Horodecki, M. Horodecki, Phys. Lett. A, **200**, 340 (1995).
* (19) J. Batle, M. Casas, J. Phys. A: Math. Theor. **44** 445304 (2011).
* (20) J. Batle, M. Casas, Phys. Rev. A **82**, 062101 (2010).
* (21) S. Ghose, N. Sinclair, S. Debnath _et al_., Phys. Rev. Lett. **102**, 250404 (2009); A. Ajoy and P. Rungta, Phys. Rev. A **81**, 052334 (2010).
* (22) D. Collins, N. Gisin, S. Popescu _et al_., Phys. Rev. Lett. **88**, 170405 (2002); J.L. Cereceda, Phys. Rev. A **66**, 024102 (2002).
* (23) N. Brunner, J. Sharam, T. Vertesi, Phys.A:Math.Theor. **43**, 385303 (2010); M. Wiesniak, M. Nawareg and M. Zukowski, Phys.Rev. A **86**, 042339 (2012); J. Tura, R. Augusiak, A. B. Sainz _et al_., Science **344** 1256 (2014).
* (24) G.Tóth, Phys. Rev. A **71**, 010301 (2005); J. Korbickz, J.I Cirac, J. Wehr _et al_., Phys. Rev. Lett. **94** 153601 (2005); G. Tóth, C. Knapp, O Günhe _et al_., Phys. Rev. Lett. **99**, 250405 (2007); O.Gittsovich, P.Hyllus, and O. Günhe, Phys. Rev. A **82**, 032306 (2010); M. Markiewicz, W. Laskowski, T. Paterek _et al_., Phys. Rev. A **87**, 034301 (2013).
* (25) A. Shimony, Ann. N.Y. Acad. Sci. **755**, 675 (1995); H. Barnum and N. Linden, J. Phys. A **34**, 6787 (2001); T.-C. Wei and P.M. Goldbart, Phys. Rev. A **68**, 042307 (2003).
* (26) Aditi Sen (De) and Ujjwal Sen, Phys. Rev. A **81**, 012308 (2010); T. Das, S. Singha Roy, S. Bagchi _et al_., arXiv: 1509.02085; D. Sadhukhan, S. Singha Roy, A.K. Pal _et al_., arXiv:1511.03998.
* (27) V. Coffman, J. Kundu and W. K. Wootters, Phys. Rev. A, **61**, 052306 (2000).
* (28) L. Henderson and V. Vedral, J. Phys. A **34**, 6899 (2001); H. Ollivier and W. H. Zurek, Phys. Rev. Lett. **88**, 017901 (2001); A. Misra, A. Biswas, A. K. Pati _et al_., Phys. Rev. E **91**, 052125 (2015).
* (29) R. Prabhu, A.K. Pati, A Sen(De) _et al_., Phys. Rev. A **85**,040102(R) (2012); G. L. Giorgi, _ibid_. **84**, 054301 (2011).
* (30) M. Koashi and A. Winter, Phys. Rev. A **69**, 022309 (2004); T. J. Osborne and F. Verstraete, Phys. Rev. Lett. **96**, 220503 (2006); G. Adesso, A. Serafini, and F. Illuminati, Phys. Rev. A **73**, 032345 (2006); T. Hiroshima, G. Adesso, and F. Illuminati, Phys. Rev. Lett. **98**, 050503 (2007); M. Seevinck, Phys. Rev. A **76**, 012106 (2007); S. Lee and J. Park, Phys. Rev. A **79**, 054309 (2009); A. Kay, D. Kaszlikowski, and R. Ramanathan, Phys. Rev. Lett. **103**, 050501 (2009); & references therein; A. Streltsov, G. Adesso, M. Piani _et al._, Phys. Rev. Lett. **109**, 050503 (2012).
* (31) S. Hill and W.K. Wootters, Phys. Rev. Lett. **78**, 5022 (1997).
* (32) W. Dür, G. Vidal, J. I. Cirac, Phys. Rev. A **62**, 062314 (2000).
* (33) D. Sadhukhan, S. Singha Roy, D. Rakshit _et al_., New J. Phys. **17** 043013 (2015).
* (34) T.R. de Oliveira, A. Saguia and M.S. Sarandy, EPL **100**, 60004 (2012).
* (35) A. Osterloh, J. Siewert, and A. Uhlmann, Phys. Rev. A **77**, 032310 (2008)
* (36) Convex Analysis by R. T. Rockafellar, Princeton Uni. Press (1997).
] | # Solar activity in the past and the chaotic behaviour of the dynamo
Rainer Arlt, Leibniz Institute for Astrophysics Potsdam, Germany (Tel.: +49-331-7499-354, Fax: +49-331-7499-526)
Nigel Weiss, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK
###### Abstract
The record of solar activity is reviewed here with emphasis on peculiarities. Since sunspot positions tell us a lot more about the solar dynamo than the various global sunspot numbers, we first focus on the records of telescopic observations of sunspots leading to positional information. Then we turn to the proxy record from cosmogenic isotope abundances, which shows recurrent grand minima over the last 9500 years. The apparent distinction between episodes of strong modulation, and intervening episodes with milder modulation and weaker overall activity, hints at the solar dynamo following a variety of solutions, with different symmetries, over the course of millennia.
## 1 Introduction
Telescopic observations of sunspots have revealed both the 11-year Schwabe cycle and the interruption of activity associated with the Maunder Minimum in the 17th century. New analyses of early records (including those of Schwabe) confirm that the pattern associated with the butterfly diagram has been present for the past 300 years. There is also evidence of differential rotation, with suggestions of anomalous behaviour during the Maunder Minimum.
The record of solar activity has been extended back for almost 10 000 years by measuring the abundances of the cosmogenic isotopes \({}^{10}\)Be and \({}^{14}\)C in ice cores and tree rings. This record reveals many grand minima, with a characteristic spacing of around 200 years (the de Vries cycle) but the appearance of these grand minima itself varies with a characteristic timescale of over 2000 years. We interpret the grand minima and maxima as resulting from deterministic modulation of the nonlinear solar dynamo, oscillating chaotically with the mean Hale period of around 22 years. The present peculiar evolution of the solar cycle may turn out to be an enlightening period in this respect as well, since it will be observed by various methods including helioseismology.
Although the Sun’s magnetic field is now predominantly dipolar, simple nonlinear models show that symmetry can flip to give quadrupolar or even mixed-mode behaviour. We suggest that such flipping explains the long-term, multimillennial variability of the activity record.
## 2 The sunspot record
The sunspot number goes back to Wolf (1859) who defined an index of solar activity based on the number of sunspot groups and the total number of individual spots on the observable hemisphere of the Sun. The time series starts in 1749 with the observations by Johann Staudacher (Nuremberg) and has been continued in terms of the International Sunspot Number until the present day. An alternative index was defined by Hoyt and Schatten (1998) who only counted the group numbers – an index that is more robust against variable capabilities of seeing small spots and allowed the time series to go back to the first days of telescopic observations of the Sun in 1610.
The time-series are very often used as a proxy for some sort of magnetic field strength in the interior of the Sun and are compared with dynamo models. These global indices, however, cannot give any insight into the topology of the magnetic fields that presumably generate the spots on the surface. One may imagine that the knowledge of the heliographic positions of the spots can be used to infer the equatorial symmetry, the latitudinal distribution, and the rotational symmetry of the underlying magnetic fields as well as the lifetime of magnetic structures on the solar surface.
Sunspot positions are now being collected in a database using results from a USAF network of observing stations, following on from the photoheliographic programme conducted by the Royal Greenwich Observatory (RGO), which effectively was also a network of stations around the world. The RGO/USAF set started in 1874 and stores only the average position of sunspot groups together with their total area. In parallel, several other programmes have collected sunspot positions in varying quantities, or are still doing so.
<figure><img src="content_image/1406.7628/x1.png"><figcaption>Figure 1: Butterfly diagram obtained from the sunspot positions derived fromthe drawings of Samuel Heinrich Schwabe in 1825–1867. (After Arlt et al.2013.)</figcaption></figure>
Before that, a large set of sunspot data is available from Friedrich Wilhelm Gustav Spörer who observed from 1861 to 1894 from the towns of Anklam and Potsdam, Germany. He confirmed that the lull of reports on sunspots in the second half of the 17th century represented a real low in solar activity for decades. His work was recognized later by Maunder who was eventually credited with that discovery, whence the name “Maunder minimum”. Spörer’s observations and measurements were published in a series of papers (Spörer 1874, 1878, 1880, 1886, 1894), but his original sunspot drawings – if they existed – are lost.
Before Spörer, Richard Carrington had already made the discovery that the Sun is not rotating uniformly, but faster at the equator than at higher latitudes (Carrington 1859). However, his observational data only cover a rather short period, from November 1853 to March 1861 (Zolotova et al. 2010, Lepshokov et al. 2012).
A great extension of the butterfly diagram into the past comes from the observations by Samuel Heinrich Schwabe, who drew sunspots in a solar disk each day he saw at least a glimpse of the Sun from Nov 5, 1825, to Dec 31, 1867. He actually observed until Dec 15, 1868, but his last observing book was lost. Schwabe was also the first to publish a paper suggesting that the sunspot number varies periodically (Schwabe 1844). The positions and sizes of all sunspots seen by Schwabe were measured by Arlt et al. (2013). The resulting butterfly diagram is shown in Fig. 1.
The time around 1800 is poorly covered by observations, the longest record being that preserved by Honoré Flaugergues, who reported useful sunspot observations in 1788–1830. These, however, are yet to be analysed. They consist mostly of timings at a transit instrument. Flaugergues gave transit times of the solar limb and spots at a vertical and an oblique wire, which will allow us to determine the latitudes of spots.
<figure><img src="content_image/1406.7628/x2.png"><figcaption>Figure 2: Butterfly diagram of the period around and after the Maunder minimumwith sunspot positions from various sources. The period until 1719 shows thepositions derived by Ribes and Nesme-Ribes (1993) which were digitized byVaquero et al. (<http://haso.unex.es/>); the year 1727 shows two additionalobservations found at Paris observatory by the authors. The period of1749-1799 contains observations by Staudacher (Arlt 2009a), Zucconi (Cristo etal. 2011), and Hamilton (Arlt 2009b). Higher contrast is used than in Fig. 1because of the fewer spots available.</figcaption></figure>
A very interesting record of observations is stored in the library of the Leibniz Institute for Astrophysics Potsdam. About 1000 drawings of the Sun made by Johann Caspar Staudacher cover the period of 1749–1799. The drawings are not accompanied by much verbal information about the telescope or the observing method. There are no indications of the orientations of the solar disks. Fortunately, detailed drawings of partial solar eclipses showing the path and direction of the Moon clearly show that the Sun was projected on a screen behind the telescope, i.e. all images are mirrored. The resulting butterfly diagram is shown in Fig. 2 (Arlt 2009a). It is remarkable that the first two cycles observed by Staudacher do not show a clear butterfly shape. Since the observer recorded the sunspots with the projection method, they are certainly not plotted ‘at random’ into the disks, though the uncertainty of the orientations holds true for the entire data set.
The observations were complemented by the very accurate drawings of Ludovico Zucconi in 1754–1760 (Zucconi 1760) which were analyzed by Cristo et al. (2011). They fill in the gaps of the observations by Staudacher around the minimum near 1755 and may help understand the unusual time–latitude distribution of Staudacher’s spots. At the other end of Staudacher’s observing period, additional positions of a small number of sunspots were derived from the records of Hamilton and Gimingham at Armagh Observatory in 1795–1797 (Arlt 2009b).
Further back in time, we find the analysis by Ribes and Nesme-Ribes (1993) who measured the positions of sunspots seen by a variety of astronomers at the Observatoire de Paris, during and after the Maunder minimum, resulting in data for 1672–1719 (Fig. 2). Sunspots are strikingly scarce until about 1714, with essentially only the southern hemisphere populated, by roughly 80 spots (plus 4 spots in the northern hemisphere and 6 spots right on the equator). This result again shows that the solar cycle does not necessarily produce a butterfly shape in the time–latitude distribution of spots. The actual activity cycle may have persisted during the Maunder minimum as seen in the cosmic ray record (see Sect. 4) at high time resolution (Beer et al. 1998; Berggren et al. 2009).
A fair number of observers recorded sunspots in the period before the Maunder minimum, starting with the first telescopic observations by Galileo Galilei and Thomas Harriot in 1610, followed by Christoph Scheiner and Johannes Fabricius; the latter was the first to publish telescopic sunspot observations, in a printed pamphlet (Fabricius 1611). There is no compilation of sunspot positions for all the available sources yet, but visual inspection of the images indicates normal spot distributions before the Maunder minimum.
## 3 Results from the sunspot record
Sunspots trace the latitudinal differential rotation of the Sun, which was first derived quantitatively by Carrington (1859) and Peters (1859). Incidentally, the rotation of newly emerged sunspot groups differs from the rotation of the bulk gas at the surface at the same latitude (Tuominen 1962, Pulkkinen and Tuominen 1998).
Historical sunspot observations may actually allow measurements of the differential rotation. A recent attempt by Arlt and Fröhlich (2012) employed Bayesian inference on the observations by Staudacher to obtain positions, orientation angles, and the differential rotation parameters simultaneously, and delivered a latitudinal shear compatible with that of today. There is a slight but insignificant hint that the differential rotation was stronger in the first third of Staudacher’s observations than during the remaining period. This coincides with the period of non-butterfly-shaped distribution of spot latitudes over time, which is an automatic by-product of the analysis.
The spot positions derived by Ribes and Nesme-Ribes (1993) also indicate a stronger differential rotation at the end of the Maunder minimum, along with an unusual spot distribution. There is actually no fundamental reason why the solar magnetic field should always adopt a chiefly dipolar structure (antisymmetric with respect to the equator). Quadrupolar modes or mixed symmetries are certainly possible and may lead to butterfly diagrams deviating from the one we know today.
<figure><img src="content_image/1406.7628/x3.png"><figcaption>Figure 3: The 9400-year record of solar magnetic activity derived fromcosmogenic isotopes for the period from 9400 BP to the present, where‘present’ means 1950. Upper panel: the principal components record of theproduction rate, based on the INTCAL09 record for 14C and GRIP and EDMLabundances for 10Be. Lower panel: the modulation function Φ, in MeV, aftercorrection in an attempt to eliminate the effects of variations in the dipolemoment of the geomagnetic field. The shaded strips denote the intervals withvigorous modulation of solar magnetic activity, associated with the presenceof prominent grand maxima and grand minima. (After McCracken et al. 2013.)</figcaption></figure>
Another aspect of the solar cycle and the solar dynamo is the coupling between the hemispheres. Zolotova et al. (2010) studied the temporal variation of the phase lag between the cycles in the two hemispheres. The phase difference varies and changes sign approximately every 35–40 years, giving a full period for the phase lag change of 70–80 years.
## 4 The cosmic-ray record
Galactic cosmic rays are deflected by magnetic fields in the heliosphere and so the solar cycle modulates the flux of galactic cosmic rays into the Earth’s atmosphere. The detection of the near-Earth cosmic rays by decay products in the atmosphere (mostly neutrons) delivered a 60-year record which shows a very good anti-correlation with the sunspot record over the last five cycles – see Usoskin (2013) for a review. Cosmic rays also lead to the production of isotopes such as \({}^{14}\)C and \({}^{10}\)Be, which are absorbed into tree-rings or into polar icecaps, where their abundances can be measured with great precision. A 1000-year time-series of the isotope productions of \({}^{10}\)Be and \({}^{14}\)C (Muscheler et al. 2007) shows good agreement with the sunspot record, although some significant differences remain. More recently, the \({}^{14}\)C data have been combined with \({}^{10}\)Be measurements from Greenland and Antarctic ice cores to produce a much longer, composite time-series of cosmic radiation with a duration of 9400 years. A Principal Components Analysis has then been used to filter out most of the climatic effects and to reveal the presence of recurrent grand maxima and grand minima (Steinhilber et al. 2012; Abreu et al. 2013; McCracken et al. 2013).
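The principal-components step can be illustrated with a toy computation. The sketch below is purely schematic: it uses synthetic series (not the INTCAL09 or ice-core data) in which two noisy “proxies” share a common signal, and extracts that shared component with a singular value decomposition, which is the essential idea behind separating the common production signal from system-specific effects.

```python
# Illustrative sketch only: extract the component shared by two synthetic
# "proxy" series, mimicking the idea of filtering system-specific (e.g.
# climatic) effects from 14C and 10Be records.  All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 9400, 10.0)                              # years, 10-yr sampling
solar = np.sin(2 * np.pi * t / 208) + 0.5 * np.sin(2 * np.pi * t / 87)
proxy_c14 = solar + 0.8 * rng.standard_normal(t.size)     # + independent "climate" noise
proxy_be10 = solar + 0.8 * rng.standard_normal(t.size)

X = np.column_stack([proxy_c14, proxy_be10])
X -= X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * S[0]                                      # first principal component

# The first PC should correlate strongly with the shared "solar" signal.
corr = np.corrcoef(pc1, solar - solar.mean())[0, 1]
print(f"correlation of PC1 with the shared signal: {abs(corr):.2f}")
```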
The resulting time series can then be converted into a record of the modulation function \(\Phi\), which can be very roughly interpreted as the mean loss of momentum-to-charge ratio of a cosmic ray particle in the heliosphere (Usoskin 2013). Fig. 3 displays the principal components rate of production of cosmogenic isotopes from 9400 BP to 1950 CE, together with the derived variation of the modulation function \(\Phi\), which has been corrected to take account of changes in the Earth’s magnetic field (Knudsen et al. 2008).
<figure><img src="content_image/1406.7628/x4.png"><figcaption>Figure 4: Comparison of Fourier amplitude spectra for intervals with andwithout grand minima. Shown in black is the spectrum for Φ over the intervalfrom 6300 to 4300 BP (containing four prominent grand minima). The spectrum inred is for the interval from 4700 to 3500, which contained no grand minima (ascan be seen in Fig. 3). Apart from the Gleissberg peak at 87 yr and a possiblecoincidence around 135 yr, the two spectra have little in common. (AfterMcCracken et al. 2013.)</figcaption></figure>
The variations in the shaded regions of Fig. 3 are all similar to those in the most recent millennium, which includes the Maunder, Spörer, Wolf and Oort Grand Minima. Between these regions there are intervals, from 7100 to 6300 BP and from 4700 to 3500 BP, and again from 2200 to 1700 BP, during which there are no grand minima and variations in isotope production are relatively low. This distinction becomes even more apparent if we compare the Fourier amplitude spectra for intervals with and without grand extrema, as displayed in Fig. 4. The only clear coincidence is for the Gleissberg cycle, with a period of 87 yr. The interval with strong modulation shows the familiar de Vries period of 208 yr and other peaks at 150 and 350 yr, while the 2300 yr Hallstatt period shows up in the full record. The intervals on either side, with only weak modulation, have a broad peak around 300 yr and a sharper peak at 140 yr, but the spectra are generally flatter. All this confirms the immediate impression that the behaviours in the shaded and unshaded regions of Fig. 3 are qualitatively different. The Hallstatt period then represents the characteristic timescale for transitions from one regime to the other.
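The spectral comparison itself is straightforward to reproduce in outline. The following sketch computes Fourier amplitude spectra for two synthetic sub-intervals, one carrying a strong 208-yr modulation and one without it; the period values are placeholders taken from the text, and the sketch is not a re-analysis of the Φ record.

```python
# Schematic comparison of Fourier amplitude spectra over two sub-intervals of
# a time series, in the spirit of Fig. 4.  Synthetic data only: one interval
# carries a 208-yr ("de Vries") modulation, the other does not.
import numpy as np

dt = 10.0                                    # years per sample
t = np.arange(0, 2000, dt)
rng = np.random.default_rng(1)

def amplitude_spectrum(series):
    series = series - series.mean()
    amp = np.abs(np.fft.rfft(series)) / series.size
    periods = 1.0 / np.fft.rfftfreq(series.size, d=dt)[1:]   # skip the zero frequency
    return periods, amp[1:]

with_minima = (np.sin(2 * np.pi * t / 208) + np.sin(2 * np.pi * t / 87)
               + 0.3 * rng.standard_normal(t.size))
without_minima = 0.3 * np.sin(2 * np.pi * t / 87) + 0.3 * rng.standard_normal(t.size)

for name, series in [("strong modulation", with_minima),
                     ("weak modulation", without_minima)]:
    periods, amp = amplitude_spectrum(series)
    k = np.argmax(amp)
    print(f"{name}: dominant period ~ {periods[k]:.0f} yr, amplitude {amp[k]:.2f}")
```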
## 5 Chaotic modulation and symmetry-breaking
The Sun’s cyclic activity is governed by macroscopic physics and is therefore deterministic. It is not practicable, however, to follow the emergence of every flux tube to form the sunspots that are shown in Fig. 1 and so we have to focus on averaged behaviour, which is deterministic but subject to stochastic disturbances. Then it is apparent that activity cycles must be regarded as manifestations of a chaotic oscillator, with sensitive dependence on initial conditions (e.g. Zeldovich et al. 1983; Spiegel 2009). The evolution of such a system is represented by a trajectory in phase space; provided the stochastic perturbations are not too large, the disturbed trajectories are always _shadowed_ by nearby trajectories of the undisturbed chaotic system (Ott 1993). Turning to the observational records described above, we should therefore expect the chaotic system to generate modulation corresponding to grand minima and grand maxima, whose origin can be understood by reference to the mathematical structure of the problem (Tobias and Weiss 2007). Simple models do indeed reproduce similar behaviour, which is associated with the appearance of multiply periodic (“quasiperiodic” to mathematicians) solutions and global bifurcations that lead to chaos.
<figure><img src="content_image/1406.7628/x5.png"><figcaption>Figure 5: Modulation and symmetry changes in a Cartesian mean-field dynamomodel governed by partial differential equations, showing the toroidal fieldas a function of latitude and time. (a) Active fields with dipole symmetry,exhibiting grand minima associated with loss of symmetry and hemisphericpatterns. (b) A transition from dipole symmetry to quadrupole symmetry duringa grand minimum (for the same parameter values). (After Beer et al. 1998.)</figcaption></figure>
<figure><img src="content_image/1406.7628/x6.png"><figcaption>Figure 6: Phase portraits illustrating flipping between dipole and quadrupolepolarities for simple model systems. Upper panel: a trajectory for the PDEs,corresponding to Fig. 5 above, projected onto the 3-dimensional space spannedby the dipole energy, the quadrupole energy and the perturbed kinetic energy.Lower panel: the same but for the ODEs, with the perturbed velocity as theordinate. In each case the symmetry flips occasionally at deep grand minima.(After Knobloch et al. 1998.)</figcaption></figure>
In the simplest illustrative model, cyclic dynamo action sets in at an oscillatory (Hopf) bifurcation that leads to periodic behaviour, with trajectories that are attracted to a limit cycle in the phase space; this is followed by a second Hopf bifurcation that leads to doubly periodic solutions that lie on a 2-torus in the 3-dimensional phase space; after a series of period-doubling bifurcations (associated with a heteroclinic bifurcation) behaviour becomes chaotic (Tobias, Weiss and Kirk 1995), though the chaotic modulation still retains a memory of its original periodicity (Ott 1993). Since this bifurcation sequence was originally demonstrated for normal form equations (governing a saddle-node/Hopf bifurcation) it is generic and therefore expected to be robust. Indeed, the same pattern appears in simple dynamo models governed by partial differential equations (Tobias 1996) and in mean-field dynamos (Küker et al. 1999; Pipin 1999; Bushby 2006). It follows that grand minima and grand maxima should be interpreted as _deterministic_ effects, associated with chaotic modulation, and not as products of large-scale stochastic disturbances.
All dynamo models allow two families of solutions that bifurcate from the trivial, field-free state: these families have either dipole symmetry (with toroidal fields that are antisymmetric about the equator) or quadrupole symmetry (with symmetric toroidal fields). Mixed modes can only appear as a result of symmetry-breaking bifurcations in the nonlinear domain, which may lead to a complicated web of stable and unstable solution branches (Jennings and Weiss 1992). After the Maunder minimum, the solar magnetic field appears to have gained dipolar symmetry by the second half of the 18th century, through a period of mixed symmetry (with nearly all spots in one hemisphere only) and a period poorly covered by observations.
Similar properties are exhibited by an idealized mean-field dynamo model, governed by partial differential equations (Beer, Tobias and Weiss 1998), which also allows transitions between dipolar and quadrupolar symmetries during grand minima. This behaviour is shown in Fig. 5. These properties are also demonstrated by even simpler models, governed by low-order systems of ordinary differential equations (Knobloch et al. 1998). Phase portraits for both PDEs and ODEs are displayed in Fig. 6. The former show both cyclic variations (predominantly horizontal) and large-amplitude modulation, as well as occasional changes of symmetry. With the ODEs it is possible to filter out the cyclic variability, leaving only the modulation with flips of symmetry near the origin, at very deep grand minima. Note the reduced amplitude of modulation in the quadrupole regime as compared with dipole fields. By changing the parameters in the model systems it is possible to find mixed-mode cycles too; they are likewise modulated, and different symmetries may coexist without the possibility of flipping.
These results for highly simplified models reveal generic properties that would be shared by solutions of the much more complicated equations that describe a real stellar dynamo. What they show is not only that grand maxima and grand minima are associated with deterministic modulation of cyclic activity but also that symmetry changes may provide a natural explanation for the changes in behaviour that were reported by McCracken et al. (2013) and summarized in Sections 2–4 above. A more detailed discussion will appear elsewhere (Tobias and Weiss, in preparation).
###### Acknowledgements.
RA is grateful to the libraries of Leibniz Institute for Astrophysics Potsdam, the Royal Astronomical Society, and the Observatoire de Paris for permitting the digitization of observing records. NW gratefully acknowledges many discussions with Jürg Beer and Steven Tobias over the past two decades.
## References
* (1) J.A. Abreu, J. Beer, F. Steinhilber, M. Christl, P.W. Kubik, Space Sci. Rev. **176**, 343 (2013).
* (2) R. Arlt, Solar Phys. **255**, 143 (2009a).
* (3) R. Arlt, Astron. Nachr. **330**, 311 (2009b).
* (4) R. Arlt, H.-E. Fröhlich, Astron. Astrophys. **543**, A7 (2012).
* (5) R. Arlt, R. Leussu, N. Giese, K. Mursula, I.G. Usoskin, Mon. Not. R. Astron. Soc. **433**, 3165 (2013).
* (6) A.-M. Berggren, J. Beer, G. Possnert, A. Aldahan, P. Kubik, M. Christl, Geophys. Res. Lett. **36**, L11801 (2009).
* (7) J. Beer, S.M. Tobias, N.O. Weiss, Solar Phys. **181**, 237 (1998).
* (8) P.J. Bushby, Mon. Not. R. Astron. Soc. **371**, 772 (2006).
* (9) R.C. Carrington, Mon. Not. R. Astron. Soc. **19**, 81 (1859).
* (10) A. Cristo, J.M. Vaquero, F. Sánchez-Bajo, J. Atm. Solar-Terr. Phys. **73**, 187 (2011).
* (11) J. Fabricius, _De maculis in sole observatis_ (Borner Senior & Rehefeld, Wittenberg, 1611).
* (12) D.V. Hoyt, K.H. Schatten, Solar Phys. **181**, 491 (1998).
* (13) R.L. Jennings, N.O. Weiss, Mon. Not. R. Astron. Soc. **252**, 249 (1992).
* (14) E. Knobloch, S.M. Tobias, N.O. Weiss, Mon. Not. R. Astron. Soc. **297**, 1123 (1998).
* (15) M.F. Knudsen, P. Riisager, D. Donadini, I. Snowball, R. Muscheler, B. Korhonen, L.J. Personen, Earth Planet. Sci. Lett. **272**, 319 (2008).
* (16) M. Küker, R. Arlt, G. Rüdiger, Astron. Astrophys. **343**, 977 (1999).
* (17) D.Kh. Lepshokov, A.G. Tlatov, V.V. Vasil’eva, Geomagn. Aeronomy **52**, 843 (2012).
* (18) K.G. McCracken, J. Beer, F. Steinhilber, J. Abreu, Solar Phys. **286**, 609 (2013).
* (19) R. Muscheler, F. Joos, J. Beer, S.A. Müller, M. Vonmoos, I. Snowball, Quatern. Sci. Rev. **26**, 82 (2007).
* (20) E. Ott, _Chaos in Dynamical Systems_ (Cambridge University Press, Cambridge, 1993).
* (21) C.H.F. Peters, Mon. Not. R. Astron. Soc. **19**, 173 (1859).
* (22) V.V. Pipin, Astron. Astrophys. **346**, 295 (1999).
* (23) P. Pulkkinen, I. Tuominen, Astron. Astrophys. **332**, 748 (1998).
* (24) J.C. Ribes, E. Nesme-Ribes, Astron. Astrophys. **276**, 549 (1993).
* (25) H. Schwabe, Astron. Nachr. **21**, 233 (1844).
* (26) E.A. Spiegel, Space Sci. Rev. **144**, 25 (2009).
* (27) G. Spörer, Publ. Astron. Ges. **13** (1874).
* (28) G. Spörer, Publ. Astrophys. Obs. Potsdam **1** (1878).
* (29) G. Spörer, Publ. Astrophys. Obs. Potsdam **5** (1880).
* (30) G. Spörer, Publ. Astrophys. Obs. Potsdam **17**, 220 (1886).
* (31) G. Spörer, Publ. Astrophys. Obs. Potsdam **32** (1894).
* (32) F. Steinhilber, J.A. Abreu, J. Beer, I. Brunner, M. Christl, H. Fischer, U. Heikkilä, P.W. Kubik, M. Mann, K.G. McCracken, H. Miller, H. Miyahara, H. Oerter, F. Wilhelms, Proc. Natl. Acad. Sci. USA **109**, 5967 (2012).
* (33) S.M. Tobias, Astron. Astrophys. **307**, L21 (1996).
* (34) S.M. Tobias, N.O. Weiss, in _Mathematical Aspects of Natural Dynamos_, ed. E. Dormy, A.M. Soward, pp. 281-311 (CRC Press, Boca Raton, 2007).
* (35) S.M. Tobias, N.O. Weiss, V. Kirk, Mon. Not. R. Astron. Soc. **273**, 1150 (1995).
* (36) J. Tuominen, Z. Astrophys. **55**, 110 (1962).
* (37) I.G. Usoskin, Living Rev. Solar Phys. **10**, 1, http://www.livingreviews.org/lrsp-2013-1 (2013).
* (38) R. Wolf, Mittheil. Zürich **2**, 3 (1859).
* (39) Y.B. Zeldovich, A.A. Ruzmaikin, D.D. Sokoloff, _Magnetic Fields in Astrophysics_ (Gordon & Breach, New York, 1983).
* (40) N.V. Zolotova, D.I. Ponyavin, R. Arlt, I. Tuominen, Astron. Nachr. **331**, 765 (2010).
* (41) L. Zucconi, _De heliometri structura et usu_ (Auctoris æræ, Venice, 1760).
|
1708.09113 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 70729,
"num_imgs": 27,
"llama3_tokens_count": 22528
} | [
"content_image/1708.09113/CSF.png",
"content_image/1708.09113/angenent.png",
"content_image/1708.09113/drugan.png",
"content_image/1708.09113/drugankleene.png",
"content_image/1708.09113/withcircle.png",
"content_image/1708.09113/sphere3.png",
"content_image/1708.09113/sphere3.png",
"content_image/1708.09113/sphere1x.png",
"content_image/1708.09113/sphere2x.png",
"content_image/1708.09113/sphere4.png",
"content_image/1708.09113/sphere3-2.png",
"content_image/1708.09113/shoot1ax.png",
"content_image/1708.09113/shoot1bx.png",
"content_image/1708.09113/shoot1cx.png",
"content_image/1708.09113/shoot2ax.png",
"content_image/1708.09113/shoot2bx.png",
"content_image/1708.09113/shoot3ax.png",
"content_image/1708.09113/shoot3bx.png",
"content_image/1708.09113/shoot3cx.png",
"content_image/1708.09113/half_low.png",
"content_image/1708.09113/half_high.png",
"content_image/1708.09113/four.png",
"content_image/1708.09113/bishoot2.png",
"content_image/1708.09113/ray_lower.png",
"content_image/1708.09113/embeddedT3.png",
"content_image/1708.09113/immersedT3.png",
"content_image/1708.09113/S3together.png"
] | # A survey of closed self-shrinkers with symmetry
Gregory Drugan
Hojoo Lee
Xuan Hien Nguyen
###### Abstract.
In this paper, we survey known results on closed self-shrinkers for mean curvature flow and discuss techniques used in recent constructions of closed self-shrinkers with classical rotational symmetry. We also propose new existence and uniqueness problems for closed self-shrinkers with bi-rotational symmetry and provide numerical evidence for the existence of new examples.
## 1. Introduction
The self-shrinking solitons for mean curvature flow are ancient solutions to the flow that evolve by “shrinking” self-similarly about a point. A time-slice for a self-shrinking flow is a hypersurface, called a _self-shrinker_, that satisfies a non-linear second order elliptic equation involving the mean curvature. When the flow shrinks about the origin, the self-shrinker equation for a time-slice \(\Sigma\) is
\[{\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\mathbf{X}=\alpha{\mathbf{X}}^{\bot},\]
where \(\alpha<0\) is a constant and \({\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\mathbf{X}\) equals the mean curvature vector of \(\Sigma\).
The first variation formula shows that self-shrinkers are minimal hypersurfaces in the Riemannian manifold \(\mathbb{R}^{n+1}\) with conformal metric
\[e^{\frac{\alpha|\mathbf{X}|^{2}}{n}}\left({dx_{1}}^{2}+\cdots+{dx_{n+1}}^{2} \right).\]
This geometric variational characterization leads to the reduction that self-shrinkers with rotational or bi-rotational symmetry correspond to geodesics in the plane equipped with induced conformal metrics. So, the existence and uniqueness of closed self-shrinkers with one of these types of symmetry can be reduced to the study of closed geodesics in two dimensional manifolds.
Self-shrinkers play a vital role in the theory of mean curvature flow and admit a number of interesting applications. Huisken’s monotonicity formula [38, Section 3] shows that self-shrinkers model the asymptotic behavior of mean curvature flow at type I singularities. Self-shrinkers can also be used as barriers to explore different phenomena for solutions to mean curvature flow. For instance, the existence of a self-shrinking torus was recently used in [18] to construct initial entire graphs whose mean curvature flow evolves away from the heat flow.
The focus of this survey is on closed self-shrinkers with rotational or bi-rotational symmetry. The paper is organized as follows:
* In Section 2, we introduce the self-shrinker equation and highlight various elliptic and parabolic characterizations of self-shrinkers.
* In Section 3, we discuss existence and uniqueness results for closed self-shrinkers. We begin with the classification of self-shrinking curves in \({\mathbb{R}}^{2}\), including proofs of two geometric conservation laws for self-shrinking curves [1, 22, 28]. Next, we review rigidity results for closed self-shrinkers due to Huisken [37, 38] and Brendle [12]. Finally, we mention several examples of self-shrinkers with symmetry [9, 16, 20, 49].
* In Section 4, we give a detailed sketch of the recent variational proof for the existence of an embedded, torus self-shrinker with rotational symmetry [19]. The proof uses a modified curve shortening flow to find a closed geodesic in the upper half-plane. A new feature of this variational proof is that it comes with an upper bound for the weighted length of the constructed geodesic.
* In Section 5, we expand on how the shooting method can be used to construct closed self-shrinkers with rotational symmetry. In the first part of this section, we outline the analysis used in [16] to construct an immersed sphere self-shrinker with rotational symmetry. Then, we illustrate how the behavior of geodesics for three shooting problems can be used to generate more examples of closed self-shrinkers with rotational symmetry [20]. We end this section with a discussion on the role of continuity in the shooting method.
* In Section 6, we consider the problem of constructing closed self-shrinkers with bi-rotational symmetry. Here we propose the existence of various bi-rotational \({\mathbb{T}}^{3}\) and \({\mathbb{S}}^{3}\) self-shrinkers in \({\mathbb{R}}^{4}\) and present numerical approximations of their symmetric profile curves.
* Finally, in Section 7, we present a list of old and new open problems on the existence and uniqueness of closed self-shrinkers.
## 2. Elliptic and parabolic characterizations of self-shrinkers
### Self-shrinker equation
A hypersurface \(\Sigma\) in Euclidean space \({\mathbb{R}}^{n+1}\) is a _self-shrinker_ with the constant coefficient \(\alpha<0\) when it solves the quasi-linear elliptic partial differential system of second order
(2.1) \[{\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\mathbf{X}=\alpha{\mathbf{X}}^{\bot},\]
where \(\mathbf{X}\) is the position vector for \(\Sigma\), \({\mathbf{g}}_{{}_{\Sigma}}\) is the induced metric on \(\Sigma\), \({\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\) is the Laplace-Beltrami operator for \({\mathbf{g}}_{{}_{\Sigma}}\), and \(\mathbf{X}^{\bot}\) is the orthogonal projection of \(\mathbf{X}\) into the normal bundle of \(\Sigma\). We note that \({\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\mathbf{X}\) equals the mean curvature vector \(\mathbf{H}\) for \(\Sigma\). When we orient \(\Sigma\) by a smooth unit normal vector field \(\mathbf{N}\) and introduce the mean curvature \(H=\mathbf{H}\cdot\mathbf{N}\), we obtain the scalar partial differential equation
(2.2) \[H=\alpha\,{\mathbf{X}}\cdot\mathbf{N}.\]
Though (2.2) resembles the classical constant mean curvature equation, in general the standard techniques (such as the method of moving planes) do not directly apply to self-shrinkers.
Locally, a self-shrinker may be written as the graph of a function \(u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R}\), where \(u\) is a solution to
(2.3) \[\textrm{div}_{\mathbb{R}^{n}}\left(\frac{Du}{\sqrt{1+|Du|^{2}}}\right)=\alpha \frac{u-x\cdot Du}{\sqrt{1+|Du|^{2}}}.\]
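As a quick sanity check of (2.3), the sketch below verifies, for \(n=2\) and the normalization \(\alpha=-\frac{1}{2}\) that the survey adopts below, that the hemisphere graph \(u=\sqrt{2n-|x|^{2}}\) of the round sphere self-shrinker of radius \(\sqrt{2n}\) satisfies the graphical equation; the symbolic residual is evaluated at a few sample points.

```python
# Check the graphical self-shrinker equation (2.3): for n = 2 and alpha = -1/2,
# the hemisphere graph u = sqrt(2n - x1^2 - x2^2) of the round sphere
# self-shrinker of radius sqrt(2n) = 2 satisfies (2.3).
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
alpha = sp.Rational(-1, 2)
n = 2
u = sp.sqrt(2 * n - x1**2 - x2**2)

Du = [sp.diff(u, v) for v in (x1, x2)]
W = sp.sqrt(1 + Du[0]**2 + Du[1]**2)
lhs = sum(sp.diff(Du[i] / W, v) for i, v in enumerate((x1, x2)))   # div(Du / W)
rhs = alpha * (u - (x1 * Du[0] + x2 * Du[1])) / W

residual = sp.lambdify((x1, x2), lhs - rhs, "math")
print([abs(residual(a, b)) < 1e-12 for a, b in [(0.3, -0.5), (1.0, 0.2), (-0.7, 0.9)]])
```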
### Ancient solutions for mean curvature flow
A self-shrinker \(\Sigma\) corresponds to the \(t=\left(t_{0}+\frac{1}{2\alpha}\right)\) time slice of a mean curvature flow that shrinks to the origin at the extinction time \(t_{0}\). More explicitly, the one parameter family of hypersurfaces
\[{\Sigma}_{t}:=\sqrt{2\alpha\left(t-t_{0}\right)\,}\,{\Sigma}\]
is a solution to mean curvature flow (MCF)
(2.4) \[\left(\frac{\partial}{\partial t}\mathbf{X}(\cdot,t)\right)^{\bot}=\mathbf{H}( \cdot,t),\]
for all ancient time \(t\in\left(-\infty,t_{0}\right)\). Here \(\mathbf{H}(\cdot,t)\) denotes the mean curvature vector for the time slice \({\Sigma}_{t}\). It follows that self-shrinkers correspond to ancient solutions to MCF that evolve over time by homotheties.
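A standard worked example makes this explicit. If \(\Sigma\) is the round sphere of radius \(R_{0}=\sqrt{-n/\alpha}\) centered at the origin (the radius for which \(H=\alpha\,\mathbf{X}\cdot\mathbf{N}\) holds; it equals \(\sqrt{2n}\) for the normalization \(\alpha=-\frac{1}{2}\) adopted below), then \({\Sigma}_{t}=\sqrt{2\alpha\left(t-t_{0}\right)}\,\Sigma\) is the sphere of radius
\[R(t)=\sqrt{2\alpha\left(t-t_{0}\right)}\,R_{0}=\sqrt{2n\left(t_{0}-t\right)},\qquad\frac{dR}{dt}=-\frac{n}{R(t)},\]
which is exactly the evolution of concentric round spheres under (2.4), with extinction at \(t=t_{0}\).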
### Monotonicity of the Gaussian area
Consider the backward heat kernel \(\rho\left(\mathbf{X},\,t\right)=\frac{1}{{\left(4\pi\left(t_{0}-t\right)\, \right)}^{\frac{n}{2}}}e^{-\,\frac{{|\mathbf{X}|}^{2}}{4\left(t_{0}-t\right)}}\) for the backward heat equation \(f_{t}=-{\Delta}_{{\mathbb{R}}^{n+1}}f\). Huisken [38, Section 3] showed that closed hypersurfaces evolving under MCF, for \(t<t_{0}\), satisfy
(2.5) \[\frac{\partial}{\partial t}\int_{\Sigma_{t}}\rho=-\int_{\Sigma_{t}}\rho\,{ \left|\,\mathbf{H}+\frac{1}{2\left(t_{0}-t\,\right)}{\mathbf{X}}^{\bot}\right| }^{2}\leq 0.\]
It follows from this monotonicity formula that mean curvature flow behaves asymptotically like a self-shrinker at a singularity where the curvature does not blow up too fast [38, Theorem 3.5]. Also, see Hamilton’s monotonicity formula [30] and the local monotonicity formula due to Ecker [21].
### Minimality of self-shrinkers
The first variation formula for weighted area (Ilmanen’s lecture notes [40, Section 2] and Morgan’s book [48, Chapter 18]) shows that self-shrinkers are variational objects. A submanifold \(\Sigma\) immersed in a Riemannian manifold \(\left(\mathcal{M},g\right)\) with density \(e^{\Psi}\) is minimal if and only if its weighted mean curvature vector \({\mathbf{H}}_{\Psi}={\mathbf{H}}-{\left({\nabla}_{\mathcal{M}}\Psi\right)}^{\bot}\) vanishes, where \(\mathbf{H}\) denotes the mean curvature vector field on \(\Sigma\) in \(\left(\mathcal{M},g\right)\) and \({\left({\nabla}_{\mathcal{M}}\Psi\right)}^{\bot}\) is the orthogonal projection of the vector field \({\nabla}_{\mathcal{M}}\Psi\) into the normal bundle of \(\Sigma\). Given a hypersurface \(\Sigma\) in \({\mathbb{R}}^{n+1}\) and constant \(\alpha<0\), the following three statements are equivalent to each other:
* \(\Sigma\) is critical for the Gaussian area functional \({\int}_{\Sigma}\;e^{\frac{\alpha|\mathbf{X}|^{2}}{2}}{dvol}_{\Sigma}\).
* \(\Sigma\) satisfies the Euler-Lagrange equation \({\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\mathbf{X}={\left[\,{\nabla}_{{\mathbb{R} }^{n+1}}\left(\frac{\alpha\,{|\mathbf{X}|}^{2}}{2}\;\right)\,\right]}^{\bot}= \alpha\mathbf{X}^{\bot}\) for the Gaussian area.
* \(\Sigma\) is a minimal submanifold in the Riemannian manifold \({\mathbb{R}}^{n+1}\) with metric \(e^{\frac{\alpha|\mathbf{X}|^{2}}{n}}\left({dx_{1}}^{2}+\cdots+{dx_{n+1}}^{2}\right)\).
### Minimal cones as self-shrinkers
The Clifford cone \({x_{1}}^{2}+{x_{2}}^{2}={x_{3}}^{2}+{x_{4}}^{2}\) over the Clifford torus in \({\mathbb{S}}^{3}\) is a self-shrinker in \({\mathbb{R}}^{4}\). More generally, whenever \({\mathcal{S}}^{n-1}\) is a minimal hypersurface in the round hypersphere \({\mathbb{S}}^{n}\), its cone \(C\left(\mathcal{S}\right):=\left\{r\mathbf{p}\,:\,r\in\mathbb{R},\mathbf{p}\in \mathcal{S}\right\}\) becomes a minimal hypersurface in \({\mathbb{R}}^{n+1}\). Since \({\mathbf{X}}\cdot\mathbf{N}=0\) on \(C\left(\mathcal{S}\right)\), it follows that a minimal cone is a self-shrinker satisfying \(H=0=\alpha\,{\mathbf{X}}\cdot\mathbf{N}\) for any coefficient \(\alpha\).
### Normalization of the coefficient
Without loss of generality, we can use dilations to normalize the coefficient \(\alpha<0\). In the literature, two normalizations \(\alpha=-1\) (for instance, [38]) and \(\alpha=-\frac{1}{2}\) (for instance, [14]) are common. Unless otherwise noted, we take the normalization \(\alpha=-\frac{1}{2}\) for the remainder of the survey.
## 3. Results on existence and uniqueness of closed self-shrinkers
### Shrinking curves in the plane
In 1956, Mullins [52] introduced the one-dimensional mean curvature flow, _the curve shortening flow_, in \(\mathbb{R}^{2}\) and constructed examples of solitons for the flow. In 1986, Gage and Hamilton [25] solved the shrinking conjecture by showing that a convex curve collapses to a _round point_ under the curve shortening flow. The curve remains convex and becomes _circular_ as it shrinks, in the sense that the ratio of the inscribed radius to the circumscribed radius approaches \(1\), the ratio of the maximum curvature to the minimum curvature approaches \(1\), and the higher order derivatives of the curvature converge to \(0\) uniformly.
In 1987, Grayson [27] proved the striking result that an embedded, not necessarily convex, closed curve eventually becomes convex, and therefore it eventually contracts to a round point, under the curve shortening flow. In 1998, Huisken gave a concise proof [39] of Grayson’s Theorem, using a distance comparison argument and the classification and characterization of solitons as asymptotic models for singularities. We refer the interested reader to recent proofs by Andrews-Bryan [5, 6] and Magni-Mantegazza [46].
In the mid-1980s, Abresch-Langer [1] and Epstein-Weinstein [22] independently investigated the self-shrinking solitons for the curve shortening flow. Here are numerical approximations of some of these self-shrinkers:
<figure><img src="content_image/1708.09113/CSF.png"><figcaption>Figure 1. Examples of self-shrinkers for the curve shortening flow.</figcaption></figure>
Unlike higher dimensional cases, the one-dimensional self-shrinker equation admits explicit first integrals.
**Theorem 1** (**Geometric conservation laws for self-shrinking curves**, [1, 22, 28]).: Let the function \(\kappa\) denote the curvature of an immersed non-flat self-shrinker with the coefficient \(\alpha<0\) in the \(xy\)-plane.
1. The quantity \(\kappa\,e^{\alpha\left(\frac{{x}^{2}+{y}^{2}}{2}\right)}\) is constant. Hence, the curvature \(\kappa\) is an increasing function of the radius \(\sqrt{x^{2}+y^{2}}\).
2. The entropy \({\kappa}^{2}+\alpha\ln\left({\kappa}^{2}\right)+{\left(\frac{d\kappa}{d\theta} \right)}^{2}\) is constant, where \(\theta\) is the angle between the tangent vector and the \(x\)-axis.
Proof.: Let \({\mathbf{X}}(s)=\left(x(s),y(s)\right)\) denote an immersed non-flat self-shrinker parameterized by an arc length \(s\). We introduce the angle function \(\theta(s)\) between the unit tangent vector \(T(s)=\frac{d{\mathbf{X}}}{ds}\) and the \(x\)-axis, the unit normal vector \(\mathbf{N}(s)=\mathrm{J}\,\mathbf{T}(s)\) (with the \(\frac{\pi}{2}\)-rotation \(\mathrm{J}\)), the signed curvature \(\kappa(s)=\frac{d\theta}{ds}\), tangential support function \(\tau(s)={\mathbf{X}}(s)\cdot\mathbf{T}(s)\), and normal support function \(\nu(s)={\mathbf{X}}(s)\cdot\mathbf{N}(s)\). Combining the self-shrinker equation \(\kappa(s)=\alpha\,\nu(s)\) with the coefficient \(\alpha<0\) and the structure equations for the curve
(3.1) \[\frac{d\tau}{ds}=1+\kappa\nu,\quad\frac{d\nu}{ds}=-\kappa\tau,\quad\frac{d}{ds }\left(\frac{{\tau}^{2}+{\nu}^{2}}{2}\right)=\tau,\quad x^{2}+y^{2}={\tau}^{2} +{\nu}^{2}\]
implies the conservation law
\[\frac{d}{ds}\left[\kappa\,e^{\alpha\left(\frac{{x}^{2}+{y}^{2}}{2}\right)} \right]=\frac{1}{\alpha}\,\frac{d}{ds}\left(\nu\,e^{\alpha\left(\frac{{\tau}^{ 2}+{\nu}^{2}}{2}\right)}\right)=\frac{1}{\alpha}\left(-\kappa+\alpha\nu\right) \,\tau\,e^{\alpha\left(\frac{{\tau}^{2}+{\nu}^{2}}{2}\right)}=0,\]
which guarantees that the geometric quantity \(\kappa\,e^{\alpha\left(\frac{{x}^{2}+{y}^{2}}{2}\right)}\) is constant on the self-shrinker ([28, Lemma 5.5] for \(\alpha=-1\) and [1, Theorem A]). Reading the first two structure equations (3.1) with respect to the angle \(\theta\) yields
(3.2) \[\frac{d\tau}{d\theta}=\frac{1}{\kappa}+\nu,\quad\frac{d\nu}{d\theta}=-\tau, \quad\frac{1}{\kappa}=-{\nu}_{\theta\theta}-\nu.\]
Combining these and the self-shrinker equation \(\kappa=\alpha\,\nu\) implies the conservation law ([22, Section 1] for \(\alpha=-1\)):
\[\frac{d}{d\theta}\left[{{\kappa}_{\theta}}^{2}+{\kappa}^{2}+\alpha\ln\left({ \kappa}^{2}\right)\right]=2{\kappa}_{\theta}\left[{\kappa}_{\theta\theta}+ \kappa+\frac{\alpha}{\kappa}\right]=2{\kappa}_{\theta}\,\alpha\left[\left({\nu }_{\theta\theta}+\nu\right)+\frac{1}{\kappa}\right]=0.\]
∎
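Along any arc where \(\kappa_{\theta}\neq 0\), the entropy conservation law gives the second order ODE \(\kappa_{\theta\theta}=-\kappa-\frac{\alpha}{\kappa}\), which makes profile curves such as those in Figure 1 easy to approximate numerically. The sketch below integrates this ODE for \(\alpha=-1\) with an arbitrary illustrative initial curvature and reconstructs the curve from its tangent angle; genuinely closed Abresch-Langer curves require tuning the initial curvature so that the rotation numbers match, which we do not attempt here.

```python
# Sketch: trace a self-shrinking curve for the curve shortening flow by
# integrating kappa''(theta) = -kappa - alpha/kappa (from the entropy
# conservation law) and reconstructing the position via ds = dtheta/kappa.
# alpha = -1 normalization; kappa0 is an illustrative value, not tuned so
# that the resulting curve closes up.
import numpy as np
from scipy.integrate import solve_ivp

alpha = -1.0
kappa0 = 1.5

def rhs(theta, u):
    kappa, dkappa, x, y = u
    return [dkappa,
            -kappa - alpha / kappa,      # entropy conservation law
            np.cos(theta) / kappa,       # dx/dtheta = cos(theta) * ds/dtheta
            np.sin(theta) / kappa]       # dy/dtheta = sin(theta) * ds/dtheta

# Initial position consistent with kappa = alpha * (X . N): at theta = 0 we have
# T = (1, 0), N = (0, 1), so X(0) = (tau, nu) = (-kappa'(0)/alpha, kappa0/alpha).
u0 = [kappa0, 0.0, 0.0, kappa0 / alpha]
sol = solve_ivp(rhs, [0.0, 6.0 * np.pi], u0, max_step=1e-2, rtol=1e-10)

x, y = sol.y[2], sol.y[3]
print("curvature range:", sol.y[0].min(), sol.y[0].max())
print("radius range:", np.hypot(x, y).min(), np.hypot(x, y).max())
```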
Recently, classification results and conservation laws for self-shrinkers were developed in other geometric contexts by Halldorsson [28, 29] and Chang [13].
### Rigidity results for self-shrinkers
Round spheres admit geometric characterizations both as constant mean curvature (CMC) surfaces and as self-shrinkers for the mean curvature flow. The classical theorems of Jellett, Alexandrov, and Hopf show that round spheres possess some rigidity as CMC hypersurfaces. Jellett’s Theorem in \({\mathbb{R}}^{3}\) and its generalization [26, 35, 36, 44, 50], which uses Hsiung-Minkowski integral formulas [35, 36], confirm that a closed, star-shaped, CMC hypersurface is round. Alexandrov used his method of moving planes to prove that an embedded, closed CMC hypersurface in \({\mathbb{R}}^{n+1}\) must be a round sphere. The embeddedness assumption is essential due to the existence of immersed tori in \({\mathbb{R}}^{3}\) with positive constant mean curvature; see Abresch [2] and Wente [53]. Hopf showed that if a closed immersed CMC surface in \({\mathbb{R}}^{3}\) is a topological sphere, then it must be round. One of Hopf’s two proofs [31] exploits the beautiful fact that the Codazzi equation implies the existence of a globally well-defined holomorphic quadratic differential on CMC surfaces. The proof can be generalized to a wider class of surfaces in more general ambient spaces, for instance, as in [3, 11, 23].
Despite the similarity between the CMC and self-shrinker equations, the classical rigidity results of Alexandrov and Hopf for CMC hypersurfaces do not hold for self-shrinkers. (There are examples of an embedded \(\mathbb{S}^{1}\times\mathbb{S}^{n-1}\) self-shrinker and an immersed, non-round \(\mathbb{S}^{n}\) self-shrinker in \(\mathbb{R}^{n+1\geq 3}\).) In addition, analogues of the classical Weierstrass-Enneper representation (holomorphic resolution of minimal surfaces) or Kenmotsu representation (which prescribes the harmonic Gauss map of CMC surfaces [42]) are not known for self-shrinkers. However, just as in the CMC setting, round spheres do possess some rigidity as closed self-shrinkers.
In 1984, Huisken [37] established that a convex hypersurface in \({\mathbb{R}}^{n+1\geq 3}\) shrinks to a round point by showing that the hypersurface, under a rescaled flow, converges to a totally umbilical hypersurface. Hence, Huisken’s result in \({\mathbb{R}}^{n+1\geq 3}\) is a higher dimensional analogue of the Gage-Hamilton Theorem for the curve shortening flow in \({\mathbb{R}}^{2}\). Since self-shrinkers keep their shape under the flow, these _parabolic asymptotic convergence results_ implicitly imply the _elliptic rigidity result_ that a closed, convex self-shrinker in any dimension is round. We highlight two additional rigidity results for closed self-shrinkers:
*
* _Rigidity of spheres as mean-convex self-shrinkers in \({\mathbb{R}}^{n+1\geq 3}\)_: In 1990, Huisken [38, Theorem 4.1] showed that a closed, mean-convex (\(H>0\)) self-shrinker must be a round sphere. The key starting point in his argument is to combine the self-shrinker equation and Simons’ identity for the squared length \({|A|}^{2}\) of the second fundamental form to obtain an explicit expression for the Laplacian of the well-defined quotient function \(\frac{{|A|}^{2}}{H^{2}}\). Since the self-shrinker is compact, the maximum principle guarantees that \(\frac{{|A|}^{2}}{H^{2}}\) is constant. Subsequent analysis of the pde for \(\frac{{|A|}^{2}}{H^{2}}\) and Hsiung-Minkowski integral formulas ultimately lead to the conclusion that the mean curvature \(H\) is a positive constant. Now, a mean-convex self-shrinker has positive support function. Therefore, the self-shrinker is round. See also Montiel’s Theorem [50].
*
* _Rigidity of spheres as embedded \({\mathbb{S}}^{2}\) self-shrinkers in \({\mathbb{R}}^{3}\)_: In 2016, Brendle [12] proved the long-standing Alexandrov-Hopf type conjecture showing that an embedded, topological \(\mathbb{S}^{2}\) self-shrinker in \({\mathbb{R}}^{3}\) must be a round sphere. Unlike in the CMC case, combining the self-shrinker equation with the Codazzi equations does not produce a holomorphic quadratic differential on self-shrinkers, so the Hopf type approach does not directly work for self-shrinkers. The key result in Brendle’s proof is that the sign of the normal support function does not change on an embedded \({\mathbb{S}}^{2}\) self-shrinker in \({\mathbb{R}}^{3}\), i.e., the closed, embedded self-shrinker is star-shaped. Thus, the self-shrinker is mean-convex (by the definition of the self-shrinker equation), and by Huisken’s Rigidity Theorem it is a round sphere.
### Examples of self-shrinkers
Though round spheres are rigid as CMC hypersurfaces and self-shrinkers under certain additional assumptions, there are numerous examples that contrast the rigidity results from the previous section. Hsiang-Teng-Yu [33] proved that there exist infinitely many distinct CMC immersions of \({\mathbb{S}}^{2k-1}\) in \({\mathbb{R}}^{2k\geq 4}\); see also [34]. These examples come from studying closed hypersurfaces with bi-rotational symmetry. (We consider the problem of constructing closed self-shrinkers with bi-rotational symmetry in Section 6.) A rich source of examples of immersed self-shrinkers comes from hypersurfaces with rotational symmetry. Using the shooting method for geodesics (see Section 5), an infinite number of complete self-shrinkers for each of the rotational topological types \(\mathbb{S}^{n}\), \(\mathbb{S}^{1}\times\mathbb{S}^{n-1}\), \(\mathbb{R}^{n}\), and \(\mathbb{S}^{1}\times\mathbb{R}^{n-1}\) were constructed in [20].
In the following, we introduce the geodesic equation for the profile curve of a self-shrinker with rotational symmetry and highlight a few modern examples of closed self-shrinkers. We note that even though rotational self-shrinkers have a variational characterization, it is unknown if the geodesic equation is integrable.
*
* _Profile curves of rotational shrinkers as geodesics in the half-plane_: Self-shrinkers are minimal submanifolds in the Riemannian manifold \(\mathbb{R}^{n+1\geq 3}\) equipped with the conformal metric \(e^{-\frac{|\mathbf{X}|^{2}}{2n}}\left({dx_{1}}^{2}+\cdots+{dx_{n+1}}^{2}\right).\) Applying the first variation formula to a rotational hypersurface \(\mathbf{X}(s,\omega)=\left(x(s),r(s)\omega\right)\), \(s\in\mathbb{R}\), \(\omega\in\mathbb{S}^{n-1}\), with the profile curve \((x(s),r(s))\) in the half-plane \(\mathbb{H}=\{(x,r)\in\mathbb{R}^{2}\,|\,r>0\}\) shows that it is a self-shrinker if and only if its profile curve is a geodesic for the conformal metric \[g_{Ang}=r^{2(n-1)}e^{-\frac{x^{2}+r^{2}}{2}}(dx^{2}+dr^{2}).\] The geodesic equation for the profile curve \((x(s),r(s))\) of a self-shrinker with rotational symmetry is \[\frac{x^{\prime}r^{\prime\prime}-x^{\prime\prime}r^{\prime}}{x^{\prime 2}+r^{\prime 2}}=\left(\frac{n-1}{r}-\frac{r}{2}\right)x^{\prime}+\frac{1}{2}xr^{\prime},\] and the Gauss curvature of the Riemannian manifold \((\mathbb{H},g_{Ang})\) is given by \[K=\frac{r^{2}+(n-1)}{r^{2n}}e^{\frac{x^{2}+r^{2}}{2}}>0.\]
*
* _The fundamental examples of rotational self-shrinkers_: The sphere of radius \(\sqrt{2n}\) centered at the origin, a flat plane through the origin, and a round cylinder of radius \(\sqrt{2(n-1)}\) with axis through the origin are examples of self-shrinkers with rotational symmetry. We have the following profile curves for these fundamental examples: * _Round sphere_: \(x^{2}+r^{2}=2n\) with the profile curve \((x(s),r(s))=\left(\sqrt{2n}\cos\left(\frac{s}{\sqrt{2n}}\right),\sqrt{2n}\sin\left(\frac{s}{\sqrt{2n}}\right)\right)\). * _Flat plane_: \(x\equiv 0\) with the profile curve \((x(s),r(s))=(0,s)\). * _Round cylinder_: \(r\equiv\sqrt{2(n-1)}\) with the profile curve \((x(s),r(s))=(s,\sqrt{2(n-1)})\).
*
* _Angenent’s torus_: Using the _shooting method_ for geodesics, Angenent [9] gave the first proof of the existence of an embedded torus (\(\mathbb{S}^{1}\times\mathbb{S}^{n-1}\)) self-shrinker in \(\mathbb{R}^{n+1}\) with a rotational symmetry. See also [16, 19, 49]. <figure><img src="content_image/1708.09113/angenent.png"><figcaption>Figure 2. The profile curve whose rotation about the horizontal axis is an embedded torus self-shrinker.</figcaption></figure>
*
* _Immersed sphere self-shrinker_: Motivated by Angenent’s construction and using the shooting method from the axis of rotation, it was shown in [16] that there exists an immersed and non-embedded \(\mathbb{S}^{n}\) self-shrinker in \(\mathbb{R}^{n+1}\) with a rotational symmetry (see Section 5.1 for a detailed sketch of the proof). The existence of this immersed self-shrinker explains why the embeddedness assumption is essential in Brendle’s rigidity result for embedded \(\mathbb{S}^{2}\) self-shrinkers in \(\mathbb{R}^{3}\) [12]. <figure><img src="content_image/1708.09113/drugan.png"><figcaption>Figure 3. The profile curve whose rotation about the horizontal axis is an immersed sphere self-shrinker.</figcaption></figure>
*
* _Immersed self-shrinkers_: Building on the work in [9, 16, 43], infinitely many immersed and non-embedded self-shrinkers for each of the rotational topological types: \(\mathbb{S}^{n}\), \(\mathbb{S}^{1}\times\mathbb{S}^{n-1}\), \(\mathbb{R}^{n}\), and \(\mathbb{S}^{1}\times\mathbb{R}^{n-1}\) were constructed in [20]. The main idea for the construction is to study the behavior of solutions to the geodesic equation near two known self-shrinkers and use continuity arguments to find complete self-shrinkers between them. See Section 5.2 for illustrations on how to carry out this heuristic. <figure><img src="content_image/1708.09113/drugankleene.png"><figcaption>Figure 4. The profile curve whose rotation about the horizontal axis is an immersed torus self-shrinker.</figcaption></figure>
* _Møller’s embedded shrinkers with higher genus in \(\mathbb{R}^{3}\)_: In 2011, Møller [49] performed a smooth desingularization of two rotational self-shrinkers: Angenent’s torus and the round sphere. Møller’s shrinkers are generalizations of Costa’s embedded three-end minimal surface [32], which can be viewed as a smooth desingularization of two rotational minimal surfaces: a catenoid and a plane passing the neck of the catenoid. More concretely, Møller proved the existence of a large lower bound \(N_{0}\) such that for each even \(g=2k\geq N_{0}\) there exists a closed self-shrinker \({\Sigma}_{g}\) in \(\mathbb{R}^{3}\) with genus \(g\) that is invariant under the dihedral symmetry group with \(2g\) elements. Furthermore, the sequence of self-shrinkers \({\Sigma}_{g}\) converges in the Hausdorff sense (and smoothly away from the two initial intersection circles) to the union of Angenent’s torus and the round sphere. <figure><img src="content_image/1708.09113/withcircle.png"><figcaption>Figure 5. Two intersecting geodesics in Møller’s desingularization of self-shrinking tori and round sphere</figcaption></figure>
## 4. Variational method for an embedded \(\mathbb{S}^{1}\times\mathbb{S}^{n-1}\) self-shrinker in \(\mathbb{R}^{n+1}\)
In this section, we outline the parabolic proof from [19] of the existence of a rotational self-shrinking torus. The proof uses variational techniques, applied to the geodesic problem from Section 3.3, to find a simple, closed geodesic and gives an estimate for the length of this geodesic in \((\mathbb{H},g_{Ang})\). It is unknown if this proof recovers the closed geodesics constructed in [9]. (See Section 7.)
**Theorem 2** ([19]).: For \(n\geq 2\), there exists a simple, closed geodesic \(\gamma_{\infty}\), for the conformal metric
\[g_{Ang}=r^{2(n-1)}e^{-(x^{2}+r^{2})/2}(dx^{2}+dr^{2})\]
on the half-plane \(\mathbb{H}=\{(x,r)\in\mathbb{R}^{2}\,|\,r>0\}\). Moreover, its length \(L_{n}(\gamma_{\infty})\) in the metric \(g_{Ang}\) is less than the length of the double cover of the half-line \(x=0\):
\[L_{n}(\gamma_{\infty}):=\int_{\mathbb{S}^{1}}r^{n-1}e^{-(x^{2}+r^{2})/4}\sqrt{ x^{\prime 2}+r^{\prime 2}}du<2\int_{0}^{\infty}s^{n-1}e^{-s^{2}/4}ds.\]
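For reference, the right-hand side can be evaluated in closed form: the substitution \(u=s^{2}/4\) gives \(2\int_{0}^{\infty}s^{n-1}e^{-s^{2}/4}ds=2^{n}\,\Gamma\!\left(\tfrac{n}{2}\right)\), which equals \(4\) for \(n=2\) and \(4\sqrt{\pi}\) for \(n=3\).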
The idea for finding this closed geodesic is to study a modified curve shortening flow:
(4.1) \[\frac{\partial}{\partial t}\gamma_{t}=\frac{k_{g}}{K}\bf{n},\]
where \(k_{g}\) is the geodesic curvature and \(K>0\) is the Gauss curvature in \((\mathbb{H},g_{Ang})\). The goal is to create an initial curve \(\gamma_{0}\) whose evolution under the flow converges to the geodesic \(\gamma_{\infty}\). To do this we consider a special family of initial curves and study their evolutions under the modified curve shortening flow. The advantage of this approach is that the evolution is well understood, and the crux of the proof lies in selecting an appropriate family of initial curves, all of which have length less than twice the length of the half-line \(x=0\) and enclose a Gauss area of exactly \(2\pi\).
The modified curve shortening flow has two important properties.
* _The flow decreases length_: Since the arc length \(ds\) evolves according to \(\frac{\partial}{\partial t}ds=-\frac{k_{g}^{2}}{K}\,ds\), the length \(L_{n}(\gamma_{t})\) is non-increasing: \[\frac{d}{dt}L_{n}(\gamma_{t})=-\int_{\gamma_{t}}\frac{k_{g}^{2}}{K}\,ds\leq 0.\]
* _The flow preserves total Gauss area of \(2\pi\)_: When the evolving curves \(\gamma_{t}\) are simple, closed curves bounding domains \(\Omega_{t}\), the Gauss-Bonnet formula gives \[\frac{d}{dt}\iint_{\Omega_{t}}KdA=-\oint_{\gamma_{t}}k_{g}\,ds=-2 \pi+\iint_{\Omega_{t}}KdA.\] In particular, if the total Gauss area enclosed by the initial curve equals \(2\pi\), then the total Gauss area enclosed by \(\gamma_{t}\) is also \(2\pi\) as long as the flow exists.
Working in regions where the Gaussian curvature is uniformly bounded from above and below away from 0, the short-time existence for the flow follows from [24, 7]. This gives the following long-time existence result.
**Proposition 1** (Long-time existence).: Let \(\gamma_{0}\) be a simple closed curve. If the domain \(\Omega_{0}\) enclosed by \(\gamma_{0}\) satisfies \(\iint_{\Omega_{0}}KdA=2\pi\), then the evolution of \(\gamma_{0}\) with normal velocity \(k_{g}/K\) exists for all time.
For a family of initial curves, all of which have length less than twice the length of the half-line \(x=0\) and enclose total Gauss area of exactly \(2\pi\), Proposition 1 guarantees that the evolutions of these curves exist for all time; because the length decreases, the flow starting from each curve cannot converge to a double cover of the half-line \(\{x=0\}\). To select the family of initial curves, we consider rectangles \(R[a,b,c]\) with vertices \((a,-c),(a,c),(b,c),(b,-c)\), \(a<b\) and \(c>0\).
<figure><img src="content_image/1708.09113/sphere3.png"><figcaption>Figure 7. The shape of the geodesic S[t] when t>0 is sufficiently small.</figcaption></figure>
Numerics show that all the rectangles with \(c=1/2\) will satisfy the requirement on length. To simplify computations, we take \(c=c_{0}\approx 0.481\) to be the positive real number such that \(\frac{e^{-c_{0}^{2}/4}}{\int_{0}^{c_{0}}e^{-x^{2}/4}dx}=2\).
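The constant \(c_{0}\) is easy to reproduce numerically; a minimal sketch with a standard root finder (the bracket is chosen by inspection):

```python
# Compute the constant c0 defined by exp(-c0^2/4) / int_0^{c0} exp(-x^2/4) dx = 2.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def condition(c):
    integral, _ = quad(lambda x: np.exp(-x**2 / 4), 0.0, c)
    return np.exp(-c**2 / 4) - 2.0 * integral

c0 = brentq(condition, 0.1, 1.0)   # bracket chosen by inspection
print(f"c0 = {c0:.3f}")            # approximately 0.481
```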
**Proposition 2**.: For every \(a,b\in(0,\infty)\) and \(n\geq 2\), we have
(4.2) \[L_{n}(a,b,c_{0})<2\int_{0}^{\infty}s^{n-1}e^{-s^{2}/4}ds.\]
The proof of this result is by induction on the dimension, with numerical verification in low dimensions. The induction is quite involved and makes heavy use of Taylor expansions.
**Proposition 3**.: There is a smooth function \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) with \(\varphi(a)>a\), with \(\lim_{a\to 0}\varphi(a)=0\), so that the family of rectangles \(R[a,\varphi(a),c_{0}]\) satisfies
\[\iint_{R[a,\varphi(a),c_{0}]}KdA=2\pi,\quad L_{n}(a,\varphi(a),c_{0})<2\int_{0 }^{\infty}s^{n-1}e^{-s^{2}/4}ds.\]
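Although the proof of Proposition 3 is in [19], the enclosed Gauss area is elementary to write down: since \(K\) and the area element of \(g_{Ang}\) are both explicit (Section 3.3), their product simplifies to \(K\,dA=\left(1+\frac{n-1}{r^{2}}\right)dx\,dr\). The sketch below assumes that the rectangles span \(x\in[-c_{0},c_{0}]\) and \(r\in[a,b]\) (so that they lie in \(\mathbb{H}\) and are symmetric about the \(r\)-axis); under this assumption the condition \(\iint K\,dA=2\pi\) reads \(2c_{0}\left[(b-a)+(n-1)\left(\frac{1}{a}-\frac{1}{b}\right)\right]=2\pi\), and \(\varphi(a)\) can be computed by a one-dimensional root search.

```python
# Sketch of the map phi from Proposition 3: given a > 0, find b = phi(a) > a so
# that the rectangle encloses total Gauss area 2*pi.  Assumes the rectangles
# span x in [-c0, c0] and r in [a, b]; with K dA = (1 + (n-1)/r^2) dx dr the
# enclosed Gauss area has a closed form.
import numpy as np
from scipy.optimize import brentq

n = 2
c0 = 0.481            # the constant from the text (see the sketch above)

def gauss_area(a, b):
    return 2.0 * c0 * ((b - a) + (n - 1) * (1.0 / a - 1.0 / b))

def phi(a):
    # gauss_area(a, b) vanishes at b = a and increases without bound in b
    return brentq(lambda b: gauss_area(a, b) - 2.0 * np.pi, a + 1e-12, a + 1e6)

for a in (0.05, 0.5, 2.0):
    print(f"a = {a:4.2f}  phi(a) = {phi(a):.3f}  (phi(a) > a: {phi(a) > a})")
```

Under the stated assumption, this reproduces the qualitative behavior in Proposition 3: \(\varphi(a)>a\) and \(\varphi(a)\to 0\) as \(a\to 0\).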
Next we consider the modified curve shortening flow for the initial curves \(R[a,\varphi(a),c_{0}]\).
**Proposition 4**.: Let \(\Phi:\mathbb{R}_{+}\times\mathbb{R}_{+}\to C^{0}(\mathbb{S}^{1},\mathbb{H})\) be the map with the following properties:
1. \(\Phi(a,0)=R[a,\varphi(a),c_{0}]\),
2. For fixed \(a\), \(\Phi(a,t)\) satisfies the evolution equation (4.1).
There exists an \(a_{0}\in\mathbb{R}_{+}\) so that \(\Phi(a_{0},t)\) intersects the line \(r=\sqrt{2(n-1)}\) for all time \(t\in\mathbb{R}_{+}\).
Proof.: The set of curves that do not intersect the line \(r=\sqrt{2(n-1)}\) is split into two disjoint sets
\[A_{1} =\{\text{continuous closed curves }\gamma:\mathbb{S}^{1}\to \mathbb{H}\mid\gamma(s)<r_{n},s\in S^{1}\},\]
\[A_{2} =\{\text{continuous closed curves }\gamma:\mathbb{S}^{1}\to \mathbb{H}\mid\gamma(s)>r_{n},s\in S^{1}\}.\]
The line \(r=\sqrt{2(n-1)}\) is a geodesic (it is the profile curve for the round cylinder self-shrinker), so it is stationary under the modified curve shortening flow. By the maximum principle, if \(\Phi(a,t_{0})\in A_{i}\) for some \(i=1,2\) and some time \(t_{0}\), then \(\Phi(a,t)\in A_{i}\) for \(t\geq t_{0}\). Consider the following subsets of \(\mathbb{R}_{+}\):
\[U_{i} =\{a\in\mathbb{R}_{+}\mid\exists t>0,\Phi(a,t)\in A_{i}\}\]
Both \(U_{1}\) and \(U_{2}\) are open, disjoint, and non-empty (for small \(a\) the initial rectangle lies entirely below the line \(r=\sqrt{2(n-1)}\), while for large \(a\) it lies entirely above it). Since \(\mathbb{R}_{+}\) is connected, it follows that \(U_{1}\cup U_{2}\neq\mathbb{R}_{+}\), which proves the existence of \(a_{0}\). ∎
The remainder of the proof of Theorem 2 is dedicated to showing that the curves \(\gamma_{t}:=\Phi(a_{0},t)\) converge to a simple, closed geodesic that is symmetric about the \(r\)-axis and convex in the Euclidean sense. First, we show that on compact sets, the curves \(\gamma_{t}\) approach a geodesic along a subsequence.
**Proposition 5**.: There is a sequence \(t_{i}\) so that
(4.3) \[\int_{\gamma_{t_{i}}\cap E}|k|\,ds\to 0\]
for any compact subset \(E\) of the open half-plane \(\mathbb{H}\).
**Proposition 6**.: Let \(t_{i}\) be the sequence from Proposition 5 (or possibly one of its subsequences) and let \(E\) be a compact set in \(\mathbb{H}\). If for some \(p_{i}\in\mathbb{S}^{1}\), \(i\in\mathbb{N}\), the sequence \(\{\gamma_{i}(p_{i})\}\) converges to a point \(P\in E\), then there exists a subsequence \({i_{j}}\) so that the connected component of \(\gamma_{i_{j}}\cap E\) containing \(\gamma_{i_{j}}(p_{i_{j}})\) converges in \(C^{1}\) in \(E\). The limit curve contains \(P\) and satisfies the geodesic equation in \(E\).
For \(t>0\), each curve \(\gamma_{t}\) restricted to the positive quadrant is a graph over the \(r\)-axis; since the flow preserves symmetry, this follows from the initial evolution together with a result from [8] on intersection points. The proof is completed by showing that a convergent subsequence of \(\gamma_{t}\) stays within a compact domain. This is done using properties of geodesics written as graphs over the \(r\)-axis and length estimates for \(\gamma_{t}\). Once it is known that there is a subsequence of \(\gamma_{t}\) that stays in a compact subset of \(\mathbb{H}\), it follows that there is a subsequence of \(\gamma_{t}\) that converges to a geodesic \(\gamma_{\infty}\) with the desired properties.
## 5. Shooting method for closed self-shrinkers with rotational symmetry
Recall from Section 3.3 that a hypersurface with rotational symmetry \(\mathbf{X}(s,\omega)=\left(x(s),r(s)\omega\right)\), \(s\in\mathbb{R}\), \(\omega\in\mathbb{S}^{n-1}\) is a self-shrinker if and only if the profile curve \((x(s),r(s))\) is a geodesic in \((\mathbb{H},g_{Ang})\). Consequently, the construction of closed self-shrinkers with rotational symmetry can be reduced to finding closed geodesics and geodesic arcs that intersect the \(x\)-axis orthogonally in \((\mathbb{H},g_{Ang})\).
The geodesic equation for the profile curve \((x(s),r(s))\) of a self-shrinker with rotational symmetry is
(5.1) \[\frac{x^{\prime}r^{\prime\prime}-x^{\prime\prime}r^{\prime}}{x^{\prime 2}+r^{ \prime 2}}=\left(\frac{n-1}{r}-\frac{r}{2}\right)x^{\prime}+\frac{1}{2}xr^{ \prime}.\]
Reparametrizing the profile curve so that \(x^{\prime}(s)^{2}+r^{\prime}(s)^{2}=1\) shows that the tangent angle \(\alpha(s)\) solves the system
(5.2) \[\left\{\begin{array}[]{lll}x^{\prime}(s)&=&\cos\alpha(s),\\ r^{\prime}(s)&=&\sin\alpha(s),\\ \alpha^{\prime}(s)&=&\left(\frac{n-1}{r(s)}-\frac{r(s)}{2}\right)\cos\alpha(s) +\frac{x(s)}{2}\sin\alpha(s).\end{array}\right.\]
For \((x_{0},r_{0})\in\mathbb{H}\) and \(\alpha_{0}\in\mathbb{R}\), we let \(\Gamma[x_{0},r_{0},\alpha_{0}](s)\) denote the unique solution to (5.2) satisfying
\[\Gamma[x_{0},r_{0},\alpha_{0}](0)=(x_{0},r_{0}),\quad\Gamma[x_{0},r_{0},\alpha _{0}]^{\prime}(0)=(\cos(\alpha_{0}),\sin(\alpha_{0})).\]
The geodesics \(\Gamma[x_{0},r_{0},\alpha_{0}]\) depend smoothly on the parameters \([x_{0},r_{0},\alpha_{0}]\in\mathbb{H}\times\mathbb{R}\), and as was shown in [16] this dependence extends smoothly to geodesics that intersect the \(x\)-axis orthogonally.
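A minimal numerical sketch of the shooting map \(\Gamma\) is given below: the system (5.2) is integrated with a standard ODE solver, the initial data are purely illustrative, and a geodesic shot orthogonally from the \(x\)-axis (as in the next subsection) is approximated by starting at a small height \(r_{0}=\varepsilon\), relying on the smooth extension to the axis noted above.

```python
# Sketch: integrate the geodesic system (5.2) numerically to realize the
# shooting map Gamma[x0, r0, alpha0].  All initial data below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n = 2   # profile curves of rotational self-shrinkers in R^{n+1}

def geodesic_rhs(s, u):
    x, r, alpha = u
    return [np.cos(alpha),
            np.sin(alpha),
            ((n - 1) / r - r / 2) * np.cos(alpha) + (x / 2) * np.sin(alpha)]

def hit_axis(s, u):          # stop when the geodesic reaches the x-axis,
    return u[1]              # where the system (5.2) is singular
hit_axis.terminal = True
hit_axis.direction = -1

def Gamma(x0, r0, alpha0, s_max=10.0):
    return solve_ivp(geodesic_rhs, [0.0, s_max], [x0, r0, alpha0],
                     events=hit_axis, max_step=1e-3, rtol=1e-9, atol=1e-12)

# Check: shooting horizontally from the top of the sphere profile x^2 + r^2 = 2n
# should trace the circle down to the x-axis (up to integration error).
sol = Gamma(0.0, np.sqrt(2 * n), np.pi)
print("max |x^2 + r^2 - 2n| along the sphere shot:",
      np.abs(sol.y[0]**2 + sol.y[1]**2 - 2 * n).max())

# A crude approximation of the shooting problem S[t] from the next subsection
# (t = 1): start just above the axis with a vertical tangent.
sol_S = Gamma(1.0, 1e-6, np.pi / 2)
print("approximate S[1]: maximum height reached =", sol_S.y[1].max())
```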
### An immersed sphere self-shrinker
In this section, we give a detailed illustration of how the shooting method for the geodesic equation (5.2) can be used to prove the following result.
**Theorem 3** ([16]).: There exists an immersed and non-embedded \(\mathbb{S}^{n}\) self-shrinker in \(\mathbb{R}^{n+1}\).
We begin by considering the one-parameter shooting problem:
\[S[t]\textrm{ is the solution to (5.2) with }S[t](0)=(t,0)\textrm{ and }S[t]^{\prime}(0)=(0,1).\]
Notice that the solutions \(S[0]\) and \(S[\sqrt{2n}]\) are the profile curves for a flat \(\mathbb{R}^{n}\) self-shrinker and the round \(\mathbb{S}^{n}\) self-shrinker, respectively. After describing the shape of \(S[t]\) for small \(t>0\), we will show that there is \(x_{*}\in(0,\sqrt{2n})\) so that \(S[x_{*}]\) intersects the \(r\)-axis orthogonally (see Figure 10). Since the geodesic equation is symmetric with respect to reflections about the \(r\)-axis, the geodesic \(S[x_{*}]\) intersects the \(x\)-axis orthogonally at two points (see Figure 3) and is the profile curve for an immersed \(\mathbb{S}^{n}\) self-shrinker.
**Step 1**: The first step in the proof is to show that \(S[t]\) has the following shape for small \(t>0\):
<figure><img src="content_image/1708.09113/sphere3.png"><figcaption>Figure 7. The shape of the geodesic S[t] when t>0 is sufficiently small.</figcaption></figure>
In order to do this, we need to understand the behavior of the geodesic \(S[t]\) as it travels away from the \(x\)-axis, turns around, and travels back towards the \(x\)-axis. Following the proof in [16], we analyze \(S[t]\) by writing it locally as graphs over the \(r\)-axis. The local graphical component \(x=f(r)\) satisfies
(5.3) \[\frac{f^{\prime\prime}}{1+f^{\prime 2}}=\left(\frac{r}{2}-\frac{n-1}{r}\right) f^{\prime}-\frac{1}{2}f.\]
Taking a derivative of (5.3), we have
(5.4) \[\frac{f^{\prime\prime\prime}}{1+f^{\prime 2}}=\frac{2f^{\prime}(f^{\prime \prime})^{2}}{(1+f^{\prime 2})^{2}}+\left(\frac{r}{2}-\frac{n-1}{r}\right)f^{ \prime\prime}+\frac{n-1}{r^{2}}f^{\prime}.\]
Notice that (5.4) is a second order differential equation for \(f^{\prime}\) with positive coefficient on \(f^{\prime}\). Much of the behavior of the geodesics \(S[t]\) can be understood by analyzing these equations. We have the following results from [16]:
**Proposition 7**.: For \(t>0\), let \(f_{t}\) denote the solution to (5.3) with \(f_{t}(0)=t\) and \(f_{t}^{\prime}(0)=0\). Then \(f_{t}^{\prime\prime}<0\), and there is a point \(b_{t}>\sqrt{2(n-1)}\) so that \(\lim_{r\to b_{t}}f^{\prime}_{t}(r)=-\infty\) and \(f_{t}(b_{t})>-\infty\). Moreover, there exists \(\tilde{t}>0\) so that if \(t\in(0,\tilde{t}\,]\), then
\[b_{t}\geq\sqrt{\log{\frac{2}{\pi t^{2}}}},\]
\[\frac{-4(n+1)}{\sqrt{\log{\frac{2}{\pi t^{2}}}}}\leq f_{t}(b_{t})<0,\]
and \(f_{t}(r)<0\), for \(r\in(2\sqrt{n},b_{t}]\).
This proposition tells us that when \(t>0\) is sufficiently small, the first component of \(S[t]\) written as a graph over the \(r\)-axis is concave down, decreasing, and it crosses the \(r\)-axis, before it blows-up at the point \(B_{t}=(f_{t}(b_{t}),b_{t})\). (Here, the graphical component blows-up in the sense that the tangent line at \(B_{t}\) is orthogonal to the \(r\)-axis.)
<figure><img src="content_image/1708.09113/sphere1x.png"><figcaption>Figure 8. The initial shape of the first graphical component of S[t] writtenas a graph over the r-axis when 0<t<~t.</figcaption></figure>
In addition, Proposition 7 tells us \(b_{t}\to\infty\) and \(f_{t}(b_{t})\to 0\) as \(t\to 0\). This behavior allows us to study the second component of \(S[t]\) written as a graph over the \(r\)-axis when \(t>0\) is sufficiently small.
**Proposition 8**.: Let \(\tilde{t}>0\) be given as in the conclusion of Proposition 7. For \(t\in(0,\tilde{t}\,]\), let \(B_{t}=(f_{t}(b_{t}),b_{t})\), where \(f_{t}\) is the solution to (5.3) with \(f_{t}(0)=t\) and \(f_{t}^{\prime}(0)=0\), and define \(g_{t}\) to be the unique solution to (5.3) with \(g_{t}(b_{t})=f_{t}(b_{t})\) and \(\lim_{r\to b_{t}}g_{t}^{\prime}(r)=\infty\). Then there exists \(0<\bar{t}<\tilde{t}\) so that for \(t\in(0,\bar{t}\,]\), the solution \(g_{t}\) has the following properties: there is \(a_{t}\in(0,\sqrt{2(n-1)})\) so that \(g_{t}\) is a maximally extended solution to (5.3) on the interval \((a_{t},b_{t})\); there is a point \(c_{t}\in(a_{t},b_{t})\) so that \(g_{t}^{\prime}(c_{t})=0\); \(g_{t}^{\prime\prime}>0\); and \(0<g_{t}(a_{t})<\infty\).
This proposition tells us that when \(t>0\) is sufficiently small, the second component of \(S[t]\) written as a graph over the \(r\)-axis is concave up, it crosses the \(r\)-axis, and it blows-up at the point \(Q_{t}=(g_{t}(a_{t}),a_{t})\), where \(0<a_{t}<\sqrt{2(n-1)}\) and \(0<g_{t}(a_{t})<\infty\).
<figure><img src="content_image/1708.09113/sphere2x.png"><figcaption>Figure 9. The initial shape of the second graphical component of S[t] writtenas a graph over the r-axis when 0<t<¯t.</figcaption></figure>
Combining these two propositions shows that \(S[t]\) has the initial shape illustrated in Figure 7 when \(0<t\leq\bar{t}\).
**Step 2**: The second step is to increase the shooting parameter \(t>0\) and show that there is \(x_{*}\in(0,\sqrt{2n})\) so that \(S[x_{*}]\) has the following shape:
<figure><img src="content_image/1708.09113/sphere4.png"><figcaption>Figure 10. A sketch of the geodesic S[x∗].</figcaption></figure>
Using the notation from Step 1, we know that for \(0<t\leq\bar{t}\), the geodesic \(S[t]\) is strictly convex as it travels from \(P_{t}=(t,0)\) to \(B_{t}\) to \(C_{t}\) to \(Q_{t}\), and \(Q_{t}\) lies in the same quadrant as \(P_{t}\). We note that the geodesic arc from \(P_{t}\) to \(Q_{t}\) is strictly convex in the sense that \((x^{\prime}r^{\prime\prime}-r^{\prime}x^{\prime\prime})\) is non-vanishing along the curve.
<figure><img src="content_image/1708.09113/sphere3-2.png"><figcaption>Figure 11. A geodesic arc S[t] with property P(t).</figcaption></figure>
To define the initial shooting coordinate \(x_{*}\), we introduce the property \(\mathscr{P}(t)\) for the geodesic \(S[t]\). As in Figure 11, we say that \(\mathscr{P}(t)\) holds if
* The geodesic \(S[t]\) contains points \(B_{t}\), \(C_{t}\), and \(Q_{t}\) as described in Step 1.
* The geodesic \(S[t]\) is strictly convex as it travels from \(P_{t}\) to \(Q_{t}\).
* The geodesic \(S[t]\) crosses the \(r\)-axis between \(P_{t}\) and \(B_{t}\) and then crosses back over between \(C_{t}\) and \(Q_{t}\).
We define the initial shooting coordinate \(x_{*}\) by
(5.5) \[x_{*}=\sup\{x>0\,:\,\mathscr{P}(t)\textrm{ holds for all }t\in(0,x]\}.\]
It follows from Step 1 that \(x_{*}\) is well-defined and \(x_{*}\geq\bar{t}\).
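Numerically, the definition (5.5) can be probed by following the point \(Q_{t}\) as the shooting parameter varies: for the convex arcs described in Step 1, the tangent angle increases from \(\pi/2\) at \(P_{t}\) and is again horizontal (angle \(2\pi\)) at \(Q_{t}\). The sketch below (an illustration only, not a substitute for the continuity argument; the function names, the choice \(n=2\) and the scanned range of \(t\) are ours) records the \(x\)-coordinate of \(Q_{t}\); as \(t\) approaches the critical shooting value from below, this coordinate should approach \(0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def q_point_x(t, n=2, s_max=40.0):
    """x-coordinate of the point where the tangent angle of S[t] first reaches
    2*pi (the point Q_t), or nan if the geodesic returns to the x-axis first."""
    def rhs(s, y, n):
        x, r, a = y
        return [np.cos(a), np.sin(a),
                ((n - 1)/r - r/2.0)*np.cos(a) + (x/2.0)*np.sin(a)]
    def hit_2pi(s, y, n):
        return y[2] - 2.0*np.pi
    hit_2pi.terminal = True
    def hit_axis(s, y, n):
        return y[1] - 1e-6
    hit_axis.terminal = True
    hit_axis.direction = -1
    sol = solve_ivp(rhs, (0.0, s_max), [t, 1e-8, np.pi/2.0], args=(n,),
                    events=(hit_2pi, hit_axis), max_step=1e-2, rtol=1e-9)
    if sol.t_events[0].size:
        return sol.y_events[0][0][0]
    return np.nan

# Exploratory scan of the shooting parameter (n = 2, so 0 < x_* < sqrt(2n) = 2).
for t in np.linspace(0.1, 1.9, 10):
    print(f"t = {t:4.2f}   x(Q_t) = {q_point_x(t):+.4f}")
```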
By construction, for \(0<t<x_{*}\), the geodesic \(S[t]\) satisfies property \(\mathscr{P}(t)\), and by continuous dependence on initial conditions, \(S[t]\) converges to \(S[x_{*}]\) as \(t\uparrow x_{*}\). The following result summarizes the main properties of \(S[x_{*}]\) established in [16].
**Proposition 9**.: Let \(x_{*}\) be the initial shooting value defined in (5.5). Then
* \(x_{*}<\sqrt{2n}\).
* As \(t\uparrow x_{*}\), the points \(B_{t}\), \(C_{t}\), \(Q_{t}\) on the geodesics \(S[t]\) converge to distinct points \(B_{x_{*}}\), \(C_{x_{*}}\), \(Q_{x_{*}}\) on \(S[x_{*}]\) in a compact subset of \(\mathbb{H}\).
* The geodesic \(S[x_{*}]\) is strictly convex as it travels from \(P_{x_{*}}\) to \(Q_{x_{*}}\).
* The geodesic \(S[x_{*}]\) crosses the \(r\)-axis between \(P_{x_{*}}\) and \(B_{x_{*}}\).
* The point \(Q_{x_{*}}\) lies on the \(r\)-axis.
This proposition, along with the symmetry of solutions to (5.1) with respect to reflections about the \(r\)-axis, shows that \(S[x_{*}]\) is the profile curve for an immersed \(\mathbb{S}^{n}\) self-shrinker with the shape illustrated in Figure 3.
**Remark 5.1**.: In Step 2, the geodesic \(S[x_{*}]\) is analyzed as the limit of the geodesics \(S[t]\) as \(t\uparrow x_{*}\). Now, each geodesic \(S[t]\) is strictly convex as a curve from \(P_{t}\) to \(Q_{t}\). However, this does not imply that \(S[x_{*}]\) is strictly convex. In fact, without further argument, we can only conclude that \(S[x_{*}]\) is convex in the sense that \((x^{\prime}r^{\prime\prime}-r^{\prime}x^{\prime\prime})\) may vanish but does not change sign. There are indeed examples where the limiting geodesic may not be strictly convex. For instance, the geodesics \(S[t]\) converge to the \(r\)-axis as \(t\downarrow 0\). In the case of \(S[x_{*}]\), proving the existence of the point \(C_{x_{*}}\) ensures that \(S[x_{*}]\) is strictly convex as it travels from \(P_{x_{*}}\) to \(Q_{x_{*}}\).
### A collection of shooting problems for closed self-shrinkers
In this section, we sketch the behavior of geodesics for three shooting problems and illustrate how this behavior can be used to construct closed self-shrinkers. The analysis for the results stated in this section can be found in [20].
#### 5.2.1. A second immersed sphere self-shrinker
Consider the shooting problem:
\[S[t]\textrm{ is the solution to~{}(\ref{SSEq}) with }S[t](0)=(t,0)\textrm{ and }S[t]^{\prime}(0)=(0,1).\]
For this shooting problem we consider the geodesics \(S[t]\) when \(t>0\) is close to 0. Given \(N>0\), there exists \(\varepsilon>0\) so that when \(0<t<\varepsilon\), the geodesic \(S[t]\) is strictly convex as it crosses back and forth over the \(r\)-axis with \(N\) local maximums to the left of the \(r\)-axis and \(N\) local minimums to the right of the \(r\)-axis.
<figure><img src="content_image/1708.09113/shoot1ax.png"><figcaption>Figure 12. S[t] for t>0 close to 0.</figcaption></figure>
As was shown in the previous section, this shooting problem leads to the existence of the profile curve, \(S[x_{*}]\), for an immersed \(\mathbb{S}^{n}\) self-shrinker. We will sketch a proof that there is \(x_{**}\in(0,x_{*})\) so that \(S[x_{**}]\) is the profile curve for a second immersed \(\mathbb{S}^{n}\) self-shrinker.
For \(t\) close to \(x_{*}\), the geodesic \(S[t]\) has the following shape:
<figure><img src="content_image/1708.09113/shoot1bx.png"><figcaption>Figure 13. S[t] for t close to x∗.</figcaption></figure>
Notice that the second local maximum points lie on different sides of the \(r\)-axis in the previous two figures. As \(t\) varies from \(0\) to \(x_{*}\), a continuity argument can be used to show that there is \(x_{**}\in(0,x_{*})\) so that the second local maximum lies on the \(r\)-axis. By the symmetry of geodesics with respect to reflections about the \(r\)-axis, \(S[x_{**}]\) intersects the \(x\)-axis orthogonally at two points and is the profile curve for an immersed \(\mathbb{S}^{n}\) self-shrinker.
<figure><img src="content_image/1708.09113/shoot1cx.png"><figcaption>Figure 14. S[x∗∗] is the profile curve for a second immersed Sn self-shrinker.</figcaption></figure>
#### 5.2.2. An embedded torus self-shrinker
Consider the shooting problem:
\[T[t]\textrm{ is the solution to~{}(\ref{SSEq}) with }T[t](0)=(0,t)\textrm{ and }T[t]^{\prime}(0)=(1,0).\]
For this shooting problem we consider the geodesics \(T[t]\) when \(t>0\) is close to 0. Solutions to this shooting problem behave similarly to solutions from the previous shooting problem, maintaining their strict convexity as they cross back and forth over the \(r\)-axis with local maximums to the left of the \(r\)-axis and local minimums (for \(s>0\)) to the right of the \(r\)-axis.
<figure><img src="content_image/1708.09113/shoot2ax.png"><figcaption>Figure 15. T[t] for t>0 close to 0.</figcaption></figure>
Increasing \(t\) away from \(0\) leads to the profile curve for an immersed \(\mathbb{S}^{n}\) self-shrinker with the shape illustrated in Figure 3. In particular, we have two convex geodesic arcs with local maximums on opposite sides of the \(r\)-axis. A continuity argument can be used to show that there is a simple closed strictly convex geodesic that intersects the \(r\)-axis orthogonally at two points. Such a geodesic is the profile curve of an \(\mathbb{S}^{1}\times\mathbb{S}^{n-1}\) self-shrinker.
<figure><img src="content_image/1708.09113/shoot2bx.png"><figcaption>Figure 16. The profile curve of an embedded S1×Sn−1 self-shrinker.</figcaption></figure>
#### 5.2.3. An immersed torus self-shrinker
Consider the shooting problem:
\[T[t]\textrm{ is the solution to~{}(\ref{SSEq}) with }T[t](0)=(0,t)\textrm{ and }T[t]^{\prime}(0)=(1,0).\]
For this shooting problem we consider the geodesics \(T[t]\) when \(t<\sqrt{2(n-1)}\) is close to \(\sqrt{2(n-1)}\). Solutions to this shooting problem cross back and forth over the geodesic \(r\equiv\sqrt{2(n-1)}\) oscillating in a shape that resembles \(\infty\):
<figure><img src="content_image/1708.09113/shoot3ax.png"><figcaption>Figure 17.</figcaption></figure>
Decreasing \(t\) away from \(\sqrt{2(n-1)}\) again leads to the profile curve for an immersed \(\mathbb{S}^{n}\) self-shrinker with the shape illustrated in Figure 3. In particular, right before \(t\) reaches this profile curve, \(T[t]\) has the following shape:
<figure><img src="content_image/1708.09113/shoot3bx.png"><figcaption>Figure 18.</figcaption></figure>
Comparing the geodesic arcs in the previous two figures, we see that the second local maximum points lie on opposite sides of the \(r\)-axis. It follows that there is \(r_{*}\in(0,\sqrt{2(n-1)})\) so that \(T[r_{*}]\) intersects the \(r\)-axis orthogonally at two points:
<figure><img src="content_image/1708.09113/shoot3cx.png"><figcaption>Figure 19. T[r∗] is the profile curve for an immersed S1×Sn−1 self-shrinker.</figcaption></figure>
### The role of continuity in the shooting method
In this section we discuss the role of continuity in the shooting method for constructing closed geodesics in \((\mathbb{H},g_{Ang})\). First of all, since the shooting method is implemented by varying the initial position and velocity for solutions to (5.2), the continuous dependence of solutions on initial conditions is the main force at work. Next, we observe that the geodesic equation (5.1), re-written as
\[\frac{x^{\prime}r^{\prime\prime}-x^{\prime\prime}r^{\prime}}{(x^{\prime 2}+r^{ \prime 2})^{3/2}}=\left(\frac{n-1}{r}-\frac{r}{2}\right)\frac{x^{\prime}}{ \sqrt{x^{\prime 2}+r^{\prime 2}}}+\frac{1}{2}x\frac{r^{\prime}}{\sqrt{x^{ \prime 2}+r^{\prime 2}}},\]
gives uniform bounds for the Euclidean curvature of geodesic arcs in a fixed compact subset of \(\mathbb{H}\). This tells us that as we vary the initial conditions in the shooting problem, the limit of geodesic arcs in a fixed compact subset of \(\mathbb{H}\) will not develop corners or collapse onto a curve with multiplicity. (We note that this type of behavior does occur at the boundary of \(\mathbb{H}\) when geodesics converge to half-entire graphs with multiplicity [20]).
Now, the geodesic equation (5.1) is symmetric with respect to reflections across the \(r\)-axis, so the existence of a closed geodesic can be established by finding a geodesic arc that intersects the \(r\)-axis orthogonally at two points. One way to find such a geodesic arc is to build a framework where the following three properties hold:
1. There is a point \(P_{1}\) so that the geodesic arc obtained from shooting orthogonally to the \(r\)-axis at \(P_{1}\) with tangent angle 0 contains a point \(Q_{1}\), located to the left of the \(r\)-axis with tangent angle \(\pi\). <figure><img src="content_image/1708.09113/half_low.png"><figcaption>Figure 20.</figcaption></figure>
2. There is a point \(P_{2}\) so that the geodesic arc obtained from shooting orthogonally to the \(r\)-axis at \(P_{2}\) with tangent angle 0 contains a point \(Q_{2}\), located to the right of the \(r\)-axis with tangent angle \(\pi\). <figure><img src="content_image/1708.09113/half_high.png"><figcaption>Figure 21.</figcaption></figure>
3. For each point \(P\) on the \(r\)-axis, between \(P_{1}\) and \(P_{2}\), the geodesic arc obtained from shooting orthogonally to the \(r\)-axis at \(P\) with tangent angle 0 travels from \(P\) to a point \(Q\) with tangent angle \(\pi\). In addition, as \(P\) varies from \(P_{1}\) to \(P_{2}\), the geodesic arcs connecting \(P\) to \(Q\) vary smoothly and remain inside a compact subset of \(\mathbb{H}\). (We note that under these assumptions, (5.1) implies that the geodesics are strictly convex at \(Q\)).
**Remark 5.2**.: In this example, continuity, combined with the behavior of the geodesic arcs through \(P_{1}\) and \(P_{2}\), guarantees that there is a point \(P_{*}\) on the \(r\)-axis (between \(P_{1}\) and \(P_{2}\)) so that the geodesic arc from \(P_{*}\) to \(Q_{*}\) intersects the \(r\)-axis orthogonally at \(Q_{*}\). Notice how the third property is stated to ensure that continuity can be applied to the problem. For a shooting problem like this, the existence of the points \(Q\) needs to be established as continuous dependence on initial conditions does not guarantee that the orthogonal tangent lines (tangent angle \(\pi\)) will be preserved as \(P\) varies.
**Remark 5.3**.: As an alternative to tracking the points \(Q\) as \(P\) varies from \(P_{1}\) to \(P_{2}\), the crossing angles \(\alpha\) at which the geodesic arcs meet the \(r\)-axis could be studied instead. In the above illustrations, \(\alpha_{1}\) is less than \(\pi\) and \(\alpha_{2}\) is greater than \(\pi\). In this case, if the crossing points exist, the crossing angles vary continuously, and the geodesic arcs remain in a compact subset of the domain, then there is a geodesic arc that intersects the \(r\)-axis with a crossing angle of \(\pi\).
## 6. Self-shrinkers with bi-rotational symmetry
In this section, we consider the problem of constructing closed self-shrinkers with bi-rotational symmetry. A hypersurface in \(\mathbb{R}^{M_{1}+M_{2}+2}\) has _bi-rotational symmetry_ if it can be written in the form
\[\mathbf{X}(s,\mathbf{p}_{1},\mathbf{p}_{2})=\left(x(s){\mathbf{p}_{1}},y(s){\mathbf{p}_{2}}\right),\]
where \((x(s),y(s))\) is a curve in the positive quadrant \(\mathcal{Q}=\{(x,y)\in\mathbb{R}^{2}\,|\,x>0,y>0\}\), \({\mathbf{p}_{1}}\in{\mathbb{S}}^{M_{1}}\), \({\mathbf{p}_{2}}\in{\mathbb{S}}^{M_{2}}\), and \(M_{1}\) and \(M_{2}\) are positive integers.
Historically, bi-rotational symmetry has produced rich examples of solutions to classical problems. Hsiang [34] proved the existence of infinitely many distinct bi-rotational CMC immersions of \({\mathbb{S}}^{N-1}\) in \({\mathbb{R}}^{N\geq 4}\). These examples show that Hopf’s Theorem does not extend to higher dimensions. Alencar-Barros-Palmas-Reyes-Santo [4, Theorem 1.2] proved the existence of various bi-rotational minimal hypersurfaces in \({\mathbb{R}}^{M_{1}+M_{2}+2\geq 8}\) with \(M_{1},M_{2}\geq 2\), that are asymptotic or doubly asymptotic to the minimal Clifford cone. Bombieri-De Giorgi-Giusti [10, Section IV] investigated _bi-radial_ graphs of the form \(x_{2m+1}=F(\sqrt{{x_{1}}^{2}+\cdots+{x_{m}}^{2}\,},\sqrt{{x_{m+1}}^{2}+\cdots+ {x_{2m}}^{2}})\). They proved the existence of an entire, non-flat minimal graph in \(\mathbb{R}^{9}\), which showed that Bernstein’s Theorem does not hold in \(\mathbb{R}^{N\geq 9}\). Recently, Del Pino-Kowalczyk-Wei [15] studied the asymptotic behavior of the Bombieri-De Giorgi-Giusti graph and proved the existence of a counterexample to De Giorgi’s conjecture for the Allen-Cahn equation in \(\mathbb{R}^{N\geq 9}\).
The following reduction tells us that the profile curve of a bi-rotational self-shrinker is a weighted geodesic in \(\mathcal{Q}\).
**Proposition 10** (**Bi-rotational self-shrinkers from weighted geodesics in \(\mathcal{Q}\)**).: Let \(\alpha<0\) be a constant, let \(M_{1}\) and \(M_{2}\) be positive integers, and let \(\gamma(s)=(x(s),y(s))\), \(s\in I\), be an immersed curve in the positive quadrant \(\mathcal{Q}\). The following conditions are equivalent.
* The bi-rotational hypersurface \[{\Sigma}^{M_{1}+M_{2}+1}=\left\{\;\mathbf{X}=(x(s){\mathbf{p}_{1}},y(s){ \mathbf{p}_{2}})\in{\mathbb{R}}^{M_{1}+M_{2}+2}\;\;|\;\;s\in I,\;{\mathbf{p}_{ 1}}\in{\mathbb{S}}^{M_{1}},\;{\mathbf{p}_{2}}\in{\mathbb{S}}^{M_{2}}\;\right\}\] is a self-shrinker, satisfying \({\Delta}_{{\mathbf{g}}_{{}_{\Sigma}}}\mathbf{X}=\alpha\mathbf{X}^{\bot}\).
* The profile curve \(\gamma(s)=(x(s),y(s))\) is a weighted geodesic in the positive quadrant \(\mathcal{Q}\) equipped with the density \[x^{M_{1}}y^{M_{2}}e^{\,\alpha\frac{x^{2}+y^{2}}{2}}.\]
* The profile curve \(\gamma(s)=(x(s),y(s))\) satisfies the geodesic equation (6.1) \[\frac{x^{\prime}y^{\prime\prime}-x^{\prime\prime}y^{\prime}}{x^{\prime 2}+y^{ \prime 2}}=-\left(\frac{M_{1}}{x}+\alpha x\right)y^{\prime}+\left(\frac{M_{2}} {y}+\alpha y\right)x^{\prime}.\]
As in the rotational symmetry case, the shooting method can be used to explore profile curves of bi-rotational self-shrinkers, and the existence of a closed self-shrinker with bi-rotational symmetry is equivalent to finding a solution to (6.1) that is closed or intersects the boundary of \(\mathcal{Q}\) orthogonally at two points.
### The shooting method for bi-rotational self-shrinkers
In this section we adopt the normalization \(\alpha=-\frac{1}{2}\), and we make the additional assumption that \(M_{1}=M_{2}=M\geq 1\). Then, the geodesic equation (6.1) becomes
(6.2) \[\frac{x^{\prime}y^{\prime\prime}-x^{\prime\prime}y^{\prime}}{x^{\prime 2}+y^{ \prime 2}}=-\left(\frac{M}{x}-\frac{1}{2}x\right)y^{\prime}+\left(\frac{M}{y}- \frac{1}{2}y\right)x^{\prime}.\]
Notice that for a geodesic \(y=y(x)\) written as a graph over the \(x\)-axis, we have the following equation which resembles equation (5.3) for self-shrinkers with rotational symmetry:
(6.3) \[\frac{y^{\prime\prime}}{1+y^{\prime 2}}=-\left(\frac{M}{x}-\frac{1}{2}x\right) y^{\prime}+\left(\frac{M}{y}-\frac{1}{2}y\right).\]
Examples of bi-rotational self-shrinkers in \({\mathbb{R}}^{2M+2}\) include the minimal Clifford cone, round cylinders \(\mathbb{S}^{M}\times\mathbb{R}^{M+1}\) of radius \(\sqrt{2M\,}\) with ‘axis’ through the origin, and the round sphere of radius \(\sqrt{2(2M+1)}\) centered at the origin. We have the following profile curves for these examples:
* _Minimal Clifford cone_: \(y=x\),
* _Round cylinders_: \(y=\sqrt{2M\,}\) and \(x=\sqrt{2M\,}\),
* _Round sphere_: \(x^{2}+y^{2}=2(2M+1)\).
<figure><img src="content_image/1708.09113/four.png"><figcaption>Figure 22. Geodesics corresponding to the Clifford cone, two round cylinders,and the round sphere.</figcaption></figure>
The assumption \(M_{1}=M_{2}=M\) introduces an additional symmetry as the geodesic equation (6.2) is now symmetric with respect to reflections about the line \(y=x\). Consequently, a closed geodesic can be constructed by finding a geodesic arc that intersects the line \(y=x\) orthogonally at two points.
Reparametrizing a solution to (6.2) so that the curve satisfies \(x^{\prime}(s)^{2}+y^{\prime}(s)^{2}=1\), shows that the tangent angle \(\alpha(s)\) is a solution to the system
(6.4) \[\left\{\begin{array}[]{lll}x^{\prime}(s)&=&\cos\alpha(s),\\ y^{\prime}(s)&=&\sin\alpha(s),\\ \alpha^{\prime}(s)&=&-\left(\frac{M_{1}}{x(s)}-\frac{x(s)}{2}\right)\sin\alpha (s)+\left(\frac{M_{2}}{y(s)}-\frac{y(s)}{2}\right)\cos\alpha(s).\end{array}\right.\]
This leads to the shooting problem:
(6.5) \[T[t]\textrm{ is the solution to~{}(\ref{bi:SSEq}) with }T[t](0)=(t,t)\textrm{ and }T[t]^{\prime}(0)=\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right).\]
As described in Section 5.3, the existence of a closed geodesic can be established by showing that the above shooting problem exhibits the two types of behavior illustrated in Figure 23.
<figure><img src="content_image/1708.09113/bishoot2.png"><figcaption>Figure 23.</figcaption></figure>
McGrath has posted a preprint [47] which shows that \(T[t]\) exhibits the behavior illustrated on the left of Figure 23, for large values of the parameter \(t\). In this preprint, McGrath presents a shooting method argument similar to the ones used to construct closed self-shrinkers with rotational symmetry in [9] and [16]. The approach is to analyze the geodesics \(T[t]\) as \(t\) decreases from \(\infty\) and deduce through continuity that \(T[t_{*}]\) is a closed geodesic for some \(t_{*}>0\). We note that the presence of the additional term in (6.2), when compared to (5.1), makes the analysis of bi-rotational geodesics more complicated than their rotational counterparts.
### Numerical approximations for profile curves of bi-rotational self-shrinkers in \({\mathbb{R}}^{4}\)
In this section we present numerical approximations of _symmetric_ profile curves for closed self-shrinkers with bi-rotational symmetry in the case where \(\alpha=-\frac{1}{2}\) and \(M_{1}=M_{2}=1\). We used Wolfram Mathematica to plot numerical solutions to the system (6.4) for various initial values in the shooting problem (6.5).
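An analogous computation can be sketched in Python. The code below is ours (it is not taken from [47] or from the Mathematica notebooks; the function names, tolerances and the sampled values of \(t\) are arbitrary choices): it integrates (6.4) for the shooting problem (6.5) and reports the tangent angle at the next crossing of the diagonal \(y=x\). A crossing angle congruent to \(3\pi/4\) modulo \(\pi\) would mean an orthogonal return and hence, by the reflection symmetry about \(y=x\), a closed geodesic.

```python
import numpy as np
from scipy.integrate import solve_ivp

def birot_rhs(s, state, M1, M2):
    """Arclength-parametrized form (6.4) of the bi-rotational geodesic equation."""
    x, y, a = state
    da = -(M1/x - x/2.0)*np.sin(a) + (M2/y - y/2.0)*np.cos(a)
    return [np.cos(a), np.sin(a), da]

def diagonal_crossing_angle(t, M=1, s_max=20.0):
    """Shoot from (t, t) in the direction (-1, 1)/sqrt(2), as in (6.5), and return
    the tangent angle at the next crossing of y = x (nan if no crossing is found)."""
    def hit_diag(s, state, M1, M2):
        return state[0] - state[1]
    hit_diag.terminal = True
    hit_diag.direction = 1        # the start lies on the diagonal; catch the return only
    def leave_quadrant(s, state, M1, M2):
        return min(state[0], state[1]) - 1e-6
    leave_quadrant.terminal = True
    leave_quadrant.direction = -1
    sol = solve_ivp(birot_rhs, (0.0, s_max), [t, t, 3.0*np.pi/4.0], args=(M, M),
                    events=(hit_diag, leave_quadrant), max_step=1e-2, rtol=1e-9)
    if sol.t_events[0].size:
        return sol.y_events[0][0][2]
    return np.nan

# Exploratory scan: values of the second column near 0 (or near pi) indicate an
# approximately orthogonal return to y = x, i.e. a candidate closed geodesic.
for t in (2.0, 3.0, 4.0, 5.0):
    a = diagonal_crossing_angle(t)
    print(t, np.mod(a - 3.0*np.pi/4.0, np.pi))
```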
1. _Embedded \({\mathbb{T}}^{3}\) self-shrinker in \({\mathbb{R}}^{4}\)_: A detailed analysis of (6.3), adapted from the crossing arguments in [16] and [20], confirms there is a \(t_{2}>0\) so that the geodesic \(T[t]\) has the initial shape illustrated on the left of Figure 23 for \(t\geq t_{2}\). Numerics show that there is a \(t_{1}>\sqrt{6}\) so that \(T[t_{1}]\) has the following initial shape: <figure><img src="content_image/1708.09113/ray_lower.png"><figcaption>Figure 24.</figcaption></figure> The framework of Remark 5.3 can now be implemented to prove the existence of a simple, closed geodesic. However, as discussed in Section 5.3, additional details on the behavior of the geodesics \(T[t]\), \(t\in[t_{1},t_{2}]\) are needed to run the continuity argument, and the existence of a geodesic with the shape of \(T[t_{1}]\) in Figure 24 still needs to be established. Here is a numerical approximation of a simple, closed geodesic: <figure><img src="content_image/1708.09113/embeddedT3.png"><figcaption>Figure 25.</figcaption></figure>
2. _Three immersed \({\mathbb{T}}^{3}\) self-shrinkers in \({\mathbb{R}}^{4}\)_: <figure><img src="content_image/1708.09113/immersedT3.png"><figcaption>Figure 26.</figcaption></figure>
3. _Two immersed \({\mathbb{S}}^{3}\) self-shrinkers in \({\mathbb{R}}^{4}\)_: <figure><img src="content_image/1708.09113/S3together.png"><figcaption>Figure 27.</figcaption></figure>
## 7. Open Problems
We end the survey with a list of open problems for closed self-shrinkers. One may also raise similar questions for the existence and uniqueness of closed self-shrinkers in higher dimensions.
**Open Problem 1**.: Is Angenent’s rotational torus the only closed, embedded, genus 1 self-shrinker in \(\mathbb{R}^{3}\)?
The uniqueness of Angenent’s torus is even unknown in the class of self-shrinkers with rotational symmetry. The embeddedness assumption is necessary due to the immersed examples constructed in [20]. Very recently, motivated by Lawson’s Theorem [45] for embedded minimal surfaces in the three-dimensional sphere, Mramor and Wang [51, Corollary 1.2] exploited the variational characterization of self-shrinkers (Section 2.4) to show that an embedded self-shrinking torus in \(\mathbb{R}^{3}\) is unknotted.
**Open Problem 2**.: Is the round sphere centered at the origin the only embedded \(\mathbb{S}^{3}\) self-shrinker in \(\mathbb{R}^{4}\)?
The embeddedness assumption is necessary due to the existence of immersed and non-embedded \(\mathbb{S}^{3}\) self-shrinkers constructed in [16, 20]. Uniqueness is known in the rotational case [17, 43].
**Open Problem 3**.: Existence and uniqueness of closed self-shrinkers with bi-rotational symmetry.
Are there bi-rotational self-shrinkers for the numeric profile curves presented in Section 6.2? Are there any uniqueness results for these or other bi-rotational examples?
## References
* [1] U. Abresch, J. Langer, _The normalized curved shortening flow and homothetic solutions_, **J. Differential Geom.** 23 (1986), 175–196.
* [2] U. Abresch, _Constant mean curvature tori in terms of elliptic functions_, **J. Reine Angew. Math.** 374 (1987), 169–192.
* [3] U. Abresch. H. Rosenberg, _A Hopf differential for constant mean curvature surfaces in \({\mathbb{S}}^{2}\times\mathbb{R}\) and \({\mathbb{H}}^{2}\times\mathbb{R}\)_, **Acta Math.** 193 (2004), no. 2, 141–174.
* [4] H. Alencar, A. Barros, O. Palmas, J. G. Reyes, W. Santos, _\(O(m)\times O(n)\)-invariant minimal hypersurfaces in \({\mathbb{R}}^{m+n}\)_, **Ann. Global Anal. Geom.** 27 (2005), 179–199.
* [5] B. Andrews, P. Bryan, _A comparison theorem for the isoperimetric profile under curve-shortening flow_, **Comm. Anal. Geom.** 19 (2011), no. 3, 503–539.
* [6] B. Andrews, P. Bryan, _Curvature bound for curve shortening flow via distance comparison and a direct proof of Grayson’s theorem_, **J. Reine Angew. Math.** 653 (2011), 179–187.
* [7] S. Angenent, _Parabolic equations for curves on surfaces. I. Curves with \(p\)-integrable curvature_, **Ann. of Math.** (2), 132 (1990), 451–483.
* [8] S. Angenent _Parabolic equations for curves on surfaces. II. Intersections, blow-up and generalized solutions_, **Ann. of Math.** (2), 133 (1991), 171–215.
* [9] S. Angenent, _Shrinking doughnuts_, Nonlinear diffusion equations and their equilibrium states, 3 (Gregynog, 1989), 21–38, Progr. Nonlinear Differential Equations Appl. 7, Birkhäuser, Boston, 1992.
* [10] E. Bombieri, E. De Giorgi, E. Giusti, _Minimal cones and the Bernstein problem_, **Invent. Math.** 7 (1969), 243–268.
* [11] R. L. Bryant, _Complex analysis and a class of Weingarten surfaces_, arXiv preprint arXiv:1105.5589 (2011).
* [12] S. Brendle, _Embedded self-similar shrinkers of genus \(0\)_, **Annals of Math.** 183 (2016), 715–728.
* [13] J.-E. Chang, _One dimensional solutions of the \(\lambda\)-self shrinkers_, **Geom. Dedicata.** 189 (2017), 97–112.
* [14] T. H. Colding, W. P. Minicozzi II, E. K. Pedersen, _Mean curvature flow_, **Bull. Amer. Math. Soc. (N.S.)** 52 (2015), no. 2, 297–333.
* [15] M. del Pino, M. Kowalczyk, J. Wei, _On De Giorgi’s conjecture in dimension \(N\geq 9\)_, **Ann. of Math.** (2) 174 (2011), no. 3, 1485–1569.
* [16] G. Drugan, _An immersed \(S^{2}\) self-shrinker_, **Trans. Amer. Math. Soc.** 367 (2015), no. 5, 3139–3159.
* [17] G. Drugan, _Self-shrinking solutions to mean curvature flow_. Ph.D. thesis, University of Washington, 2014.
* [18] G. Drugan, X. H. Nguyen, _Mean curvature flow of entire graphs evolving away from the heat flow_, **Proc. Amer. Math. Soc.** 145 (2017), no. 2, 861–869.
* [19] G. Drugan, X. H. Nguyen, _Shrinking doughnuts via variational methods_, arXiv preprint arXiv:1708.08808 (2017).
* [20] G. Drugan, S. J. Kleene, _Immersed self-shrinkers_, **Trans. Amer. Math. Soc.** 369 (2017), no. 10, 7213–7250.
* [21] K. Ecker, _Regularity theory for mean curvature flow_, Progress in Nonlinear Differential Equations and their Applications, 57. Birkhäuser, Boston, Inc., Boston, MA, 2004.
* [22] C. L. Epstein, M. I. Weinstein, _A stable manifold theorem for curve shortening equations_, **Comm. Pure Appl. Math.** 40 (1987), no. 1, 119–139.
* [23] A. Fraser, R. Schoen, _Uniqueness theorems for free boundary minimal disks in space forms_, **Int. Math. Res. Not.** 2015, no. 17, 8268–8274.
* [24] M. Gage, _Deforming curves on convex surfaces to simple closed geodesics_, **Indiana Univ. Math. J.** 39 (1990), 1037–1059.
* [25] M. Gage, R. S. Hamilton, _The heat equation shrinking convex plane curves_, **J. Differential Geom.** 23 (1986), 69–96.
* [26] V. Gimeno, _Isoperimetric inequalities for submanifolds. Jellett-Minkowski’s formula revisited_, **Proc. Lond. Math. Soc.** (3) 110 (2015), no. 3, 593–614.
* [27] M. A. Grayson, _The heat equation shrinks embedded plane curves to round points_, **J. Differential Geom.** 26 (1987), 285–314.
* [28] H. P. Halldorsson, _Self-similar solutions to the curve shortening flow_, **Trans. Amer. Math. Soc.** 364 (2012), no. 10, 5285–5309.
* [29] H. P. Halldorsson, _Self-similar solutions to the mean curvature flow in the Minkowski plane \({\mathbb{R}}^{1,1}\)_, **J. Reine Angew. Math.** 704 (2015), 209–243.
* [30] R. S. Hamilton, _Monotonicity formulas for parabolic flows on manifolds_, **Comm. Anal. Geom.** 1 (1993), no 1. 127–137.
* [31] H. Hopf, _Differential geometry in the large: seminar lectures New York University 1946 and Stanford University 1956_. Vol. 1000. Springer, 2003.
* [32] D. Hoffman, _The computer-aided discovery of new embedded minimal surfaces_, **Math. Intelligencer**, 9 (1987), no. 3, 8–21.
* [33] W. Y. Hsiang, Z. H. Teng, W.C. Yu, _New examples of constant mean curvature immersions of spheres into Euclidean space_, **Ann. of Math.** 117 (1983), no. 3, 609–625.
* [34] W. Y. Hsiang, _Generalized rotational hypersurfaces of constant mean curvature in the Euclidean spaces. I_, **J. Differential Geom.** 17 (1982), no. 2, 337–356.
* [35] C. C. Hsiung, _Some integral formulas for closed hypersurfaces in Riemannian space_, **Pacific J. Math.** 6 (1956), 291–299.
* [36] C. C. Hsiung, _Some integral formulas for closed hypersurfaces_, **Math. Scand.** 2 (1959), 286–294.
* [37] G. Huisken, _Flow by mean curvature of convex surfaces into spheres_, **J. Differential Geom.** 20 (1984), no. 1, 237–266.
* [38] G. Huisken, _Asymptotic behavior for singularities of the mean curvature flow_, **J. Differential Geom.** 31 (1990), no. 1, 285–299.
* [39] G. Huisken, _A distance comparison principle for evolving curves_, **Asian J. Math.** 2 (1998), no. 1, 127–133.
* [40] T. Ilmanen, _Lectures on Mean Curvature Flow and Related Equations_, 1998.
* [41] N. Kapouleas, S. J. Kleene, N. M. Møller, _Mean curvature self-shrinkers of high genus: non-compact examples_, arXiv preprint arXiv:1106.5454 (2011), to appear in **J. Reine Angew. Math.**
* [42] K. Kenmotsu, _Weierstrass formula for surfaces of prescribed mean curvature_, **Math. Ann.** 245 (1979), no. 2, 89–99.
* [43] S. J. Kleene, N. M. Møller, _Self-shrinkers with a rotational symmetry_, **Trans. Amer. Math. Soc.** 366 (2014), no. 8, 3943–3963.
* [44] K.-K. Kwong, _An extension of Hsiung-Minkowski formulas and some applications_, **J. Geom. Anal.** 26 (2016), no. 1, 1–23.
* [45] H. B. Lawson, _The unknottedness of minimal embeddings_, **Invent. Math.** 11 (1970), 183–187.
* [46] A. Magni, C. Mantegazza, _A note on Grayson’s theorem_, **Rend. Semin. Mat. Univ. Padova**, 131 (2014), 263–279.
* [47] P. McGrath, _Closed mean curvature self-shrinking surfaces of generalized rotational type_, arXiv preprint arXiv:1507.00681 (2015).
* [48] F. Morgan, _Geometric Measure Theory. A Beginner’s Guide_, fourth ed. Elsevier/Academic Press, Amsterdam, 2009.
* [49] N. M. Møller, _Closed self-shrinking surfaces in \(\mathbb{R}^{3}\) via the torus_, arXiv preprint arXiv:1111.7318 (2011).
* [50] S. Montiel, _Unicity of constant mean curvature hypersurfaces in some Riemannian manifolds_, **Indiana Univ. Math. J.** 48 (1999), no. 2, 711–748.
* [51] A. Mramor, S. Wang, _On the topological rigidity of compact self shrinkers in \({\mathbb{R}}^{3}\)_, arXiv preprint arXiv:1708.06581 (2017).
* [52] W. W. Mullins, _Two Dimensional Motion of Idealized Grain Boundaries_, **Journal of Applied Physics**, 27 (1956), no. 8, 900–904.
* [53] H. Wente, _Counterexample to a conjecture of H. Hopf_, **Pacific J. Math.** 121 (1986), no. 1, 193–243.
|
1802.05142 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 2926,
"num_imgs": 0,
"llama3_tokens_count": 541
} | [] | Final remarks and perspectives
We have given the fundamental concepts and techniques in mathematical morphology, and have shown how to interpret these techniques in terms of mathematical logic, namely in propositional logic. This connection has given rise to a new domain called morphologic. We have used dilation operators in order to define belief revision operators and belief merging operators.
We have shown that we recover some operators defined in the literature when the dilation operators are derived from a distance. Moreover, we have extended the class of belief revision operators and the class of belief merging operators by using a larger class of operators, in particular operators having the extensivity and exhaustivity properties.
Similar work has been done using contraction operators. These operators are used in two ways in order to define explanatory relations. It is interesting to note that the choice of structuring element is decisive for the way the information is structured. The examples in Section abdu:examples illustrate this phenomenon clearly.
Under the assumption that the geometry comes from the Hamming distance between interpretations, we have shown how to compute dilation, erosion, last erosion, ultimate erosion, opening and skeleton operators over formulas. These calculations constitute the basis of our applications to different tasks in knowledge representation.
We have proven that our general operators of revision and fusion are well behaved; in particular, they satisfy the AGM postulates and the postulates of belief merging with integrity constraints. We have also proven that the explanatory relations defined using morphologic satisfy suitable structural properties.
Potential extensions would be to analyze how minimality criteria could be expressed in the proposed framework, such as the ones proposed for abduction [bienvenu2008, eiter1995, halland2012], for revision of Horn clauses [DP11, DP15, ZPZ13] or for description logics [ARXIV, QLB06, QY08, RW09, RW10, WWT10], or more generally for institutions [AABH15] and satisfaction systems [MA:AIJ-17].
One interesting feature worth remarking on is that morphologic allows us to give an ordered structure to the pieces of information. That is, it allows having preferences over the formulas. This is exploited by the morphological total pre-order defined by Equation order3. Note that these preferences depend on the structuring element used for defining dilations and erosions.
Finally, our approach provides a reusable framework for performing numerous operations on formulas, and includes computational and axiomatic building blocks, to be applied in different reasoning problems.
Future work will aim to apply the tools of morphologic to the explanation of multiple observations and to the introduction of dynamics into the explanatory process. We also expect to treat mediation processes using the tools developed in this work.
|
1309.7899 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 8310,
"num_imgs": 3,
"llama3_tokens_count": 2737
} | [
"content_image/1309.7899/Fig1.png",
"content_image/1309.7899/Fig2.png",
"content_image/1309.7899/Fig3.png"
] | # Comment on “Semiclassical and Quantum Analysis of a Focussing Free Particle Hermite Wavefunction”, by Paul Strange (arXiv:1309.6753 [quant-ph])
Andrea Aiello\({}^{1,2}\)
andrea.aiello@mpl.mpg.de
\({}^{1}\) Max Planck Institute for the Science of Light, G\(\ddot{u}\)nther-Scharowsky-Strasse 1/Bau24, 91058 Erlangen, Germany
\({}^{2}\)Institute for Optics, Information and Photonics, University of Erlangen-Nuernberg, Staudtstrasse 7/B2, 91058 Erlangen, Germany
February 24, 2024
###### Abstract
In the recent and very enjoyable paper (Paul Strange, _Semiclassical and Quantum Analysis of a Focussing Free Particle Hermite Wavefunction_, arXiv:1309.6753[quant-ph]), Professor Strange has studied a particular solution of the free particle Schrödinger equation in which the time and space dependence are not separable. After recognizing the fact that “The Schrödinger equation has an identical mathematical form to the paraxial wave equation […]”, he claims to “describe and try to gain insight into an exotic, apparently accelerating solution of the free particle Schrödinger equation that is square integrable and which also displays some unusual characteristics.” It is the main aim of this short note to show that the wavefunction described by Prof. Strange is simply one particular Hermite-Gauss solution of the paraxial wave equation and even more “exotic” examples can be found by considering, e.g., Laguerre-Gauss solutions.
Professor Strange [1] considered a solution of the Schrödinger time-dependent equation for one-dimensional motion of a free particle of mass \(m\),
\[i\hbar\frac{\partial\psi(x,t)}{\partial t}=-\frac{\hbar^{2}}{2m} \frac{\partial^{2}\psi(x,t)}{\partial x^{2}},\] (1)
that at \(t=0\) takes the form
\[\psi(x,0)=\frac{1}{2^{n}n!}\left(\frac{m\omega}{\hbar\pi}\right)^{1/4}e^{-m\omega x^{2}/(2\hbar)}H_{n}\left(\sqrt{\frac{m\omega}{\hbar}}x\right)\] (2)
where \(\omega=1/t_{c}\). This equation is a 1-dimensional case of the more general function
\[\psi_{\mu\nu}(x,y,t)= \;\frac{1/w_{0}}{\sqrt{2^{\mu+\nu-1}\pi\mu!\nu!}}\exp\left(-\frac {x^{2}+y^{2}}{w_{0}^{2}}\frac{1}{1+it/t_{0}}\right)\exp\left[-i\left(\mu+\nu+1 \right)\arctan\left(\frac{t}{t_{0}}\right)\right]\]
\[\;\times H_{\mu}\left(\frac{\sqrt{2}\,x}{w_{0}\sqrt{1+t^{2}/t_{0} ^{2}}}\right)\,H_{\nu}\left(\frac{\sqrt{2}\,y}{w_{0}\sqrt{1+t^{2}/t_{0}^{2}}} \right),\] (3)
which is a solution of the 2-dimensional Schrödinger equation for a free particle of mass \(m\)
\[i\hbar\frac{\partial\psi_{\mu\nu}(x,y,t)}{\partial t}=-\frac{ \hbar^{2}}{2m}\left[\frac{\partial^{2}\psi_{\mu\nu}(x,y,t)}{\partial x^{2}}+ \frac{\partial^{2}\psi_{\mu\nu}(x,y,t)}{\partial y^{2}}\right],\] (4)
provided that \(t_{0}=mw_{0}^{2}/(2\hbar)\), \(\mu,\nu\in\{0,1,2,\dots\}\) and \(w_{0}>0\) is a free parameter with the dimensions of a length that fixes the size of the wavefunction. The solution (3) can be easily found by taking a Hermite-Gauss solution \(u_{\mu\nu}(x,y,z)\) of the paraxial wave equation [2]
\[2ik\frac{\partial u_{\mu\nu}(x,y,z)}{\partial z}=-\frac{\partial^{2}u_{\mu\nu}(x,y,z)}{\partial x^{2}}-\frac{\partial^{2}u_{\mu\nu}(x,y,z)}{\partial y^{2}},\] (5)
and making the formal replacements \(z\to t\) and \(k\to m/\hbar\), where \(k\) is the wavenumber of the Hermite-Gauss beam. When \(\mu=0=\nu\) the wavefunction \(\psi_{\mu\nu}(x,y,t)\) reduces to the well-known free-space Gaussian wavepacket
\[\psi_{00}(x,y,t)= \;\sqrt{\frac{2}{\pi}}\exp\left(-\frac{x^{2}+y^{2}}{w_{0}^{2}} \frac{1}{1+it/t_{0}}\right)\exp\left[-i\arctan\left(\frac{t}{t_{0}}\right) \right],\] (6)
widely studied in the literature [3].
More wavefunctions with counter-intuitive behavior may be found by considering the Laguerre-Gauss solutions of the paraxial wave equation. Consider, for example, the following function:
\[\psi_{\ell}(x,y,t)= \;\frac{2^{\frac{1+\ell}{2}}}{\sqrt{\pi\ell!}}\exp\left(-\frac{x^ {2}+y^{2}}{w_{0}^{2}}\frac{1}{1+it/t_{0}}\right)\frac{\left(x+iy\right)^{\ell} }{\left[w_{0}\left(1+it/t_{0}\right)\right]^{1+\ell}},\] (7)
with \(\ell\in\{0,\pm 1,\pm 2,\ldots\}\). Equation (7) represents an exact, square-integrable solution of the free particle Schrödinger equation in 2+1 dimensions, which displays some “exotic” features, such as an orbital angular momentum equal to \(\ell\hbar\) without “rotation”. What does this mean? Well, when thinking about a free particle one usually imagines a little ball propagating along a rectilinear trajectory with constant velocity. However, in quantum mechanics the particle may initially, at \(t=0\), be prepared in a state that does not resemble a small ball at all, namely a spatially localized object. Conversely, one can prepare a state where the particle is perfectly delocalized along a ring:
\[\lvert\psi_{\ell}(x,y,t=0)\rvert^{2}=\frac{2^{1+\ell}}{\pi\ell!}\exp\left(-2\frac{x^{2}+y^{2}}{w_{0}^{2}}\right)\frac{\left(x^{2}+y^{2}\right)^{\ell}}{w_{0}^{2(1+\ell)}}.\] (8)
In fact, there is nothing strange about Eq. (7) from the viewpoint of quantum mechanics. If one calculates the probability density \(\rho_{\ell}=\psi_{\ell}^{*}\psi_{\ell}\) and the probability current density
\[\bm{j}_{\ell}=\frac{\hbar}{2mi}\left[\psi_{\ell}^{*}\left(\bm{e}_ {x}\frac{\partial\psi_{\ell}}{\partial x}+\bm{e}_{y}\frac{\partial\psi_{\ell}} {\partial y}\right)-\left(\bm{e}_{x}\frac{\partial\psi_{\ell}^{*}}{\partial x} +\bm{e}_{y}\frac{\partial\psi_{\ell}^{*}}{\partial y}\right)\psi_{\ell}\right],\] (9)
it is not difficult to see that the continuity equation is identically satisfied:
\[\frac{\partial\rho_{\ell}}{\partial t}+\frac{\partial j_{\ell x}} {\partial x}+\frac{\partial j_{\ell y}}{\partial y}=0.\] (10)
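The consistency of (7) with Eqs. (4), (9) and (10) is straightforward to check symbolically. The following sketch (ours; the overall normalization constant is dropped because it does not affect the linear equation) verifies with SymPy that the \(\ell=1\) case of Eq. (7) satisfies the free-particle Schrödinger equation (4), from which the continuity equation (10) follows.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
w0, hbar, m = sp.symbols('w_0 hbar m', positive=True)
l = 1                                 # the l = 1 case of Eq. (7); other l work the same way

t0 = m*w0**2/(2*hbar)                 # as required below Eq. (4)
tau = 1 + sp.I*t/t0
# Normalization constant omitted: it drops out of the linear equation (4).
psi = sp.exp(-(x**2 + y**2)/(w0**2*tau))*(x + sp.I*y)**l/(w0*tau)**(1 + l)

residual = (sp.I*hbar*sp.diff(psi, t)
            + hbar**2/(2*m)*(sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))
print(sp.simplify(residual/psi))      # expected output: 0
```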
As time goes by, the particle described by \(\psi_{\ell}(x,y,t)\) does not move, in the sense that \(\langle x\rangle=0=\langle y\rangle\) and \(\langle p_{x}\rangle=0=\langle p_{y}\rangle\), where \(p_{x}=-i\hbar\partial_{x}\), etc. The other relevant expectation values are:
\[\langle x^{2}+y^{2}\rangle=\frac{1+\ell}{2}w_{0}^{2}\left(1+\frac {t^{2}}{t_{0}^{2}}\right),\qquad\langle p_{x}^{2}+p_{y}^{2}\rangle=\left(1+ \ell\right)\frac{2\hbar^{2}}{w_{0}^{2}},\] (11)
and
\[\langle H\rangle=\frac{\langle p_{x}^{2}+p_{y}^{2}\rangle}{2m}=\left(1+\ell\right)\frac{\hbar^{2}}{mw_{0}^{2}},\qquad\langle xp_{y}-yp_{x}\rangle=\ell\hbar.\] (12)
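The expectation values quoted above can also be checked by direct numerical integration; the sketch below (ours; the truncation radius is an arbitrary numerical choice) verifies the normalization and the value of \(\langle x^{2}+y^{2}\rangle\) from Eq. (11) for \(\ell=1\) at \(t=0\), in units where \(w_{0}=1\).

```python
import math
import numpy as np
from scipy.integrate import dblquad

l, w0 = 1, 1.0

def prob(x, y):
    """|psi_l(x, y, 0)|^2 of Eq. (8)."""
    r2 = x**2 + y**2
    return (2.0**(1 + l)/(math.pi*math.factorial(l))
            * np.exp(-2.0*r2/w0**2)*r2**l/w0**(2*(1 + l)))

R = 6.0*w0      # the Gaussian factor makes the tails beyond |x|, |y| = R negligible
norm = dblquad(lambda y, x: prob(x, y), -R, R, -R, R)[0]
r2_avg = dblquad(lambda y, x: (x**2 + y**2)*prob(x, y), -R, R, -R, R)[0]
print(norm)     # expected: 1
print(r2_avg)   # expected: (1 + l)/2 * w0**2 = 1 at t = 0, cf. Eq. (11)
```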
Equations (11) and (12) describe the motion of a free quantum particle moving in a ring whose radius changes with time, as shown in Fig. 1, which illustrates the time evolution of the probability density \(\rho_{1}(x,y,t)\) evaluated for \(\ell=1\):
<figure><img src="content_image/1309.7899/Fig1.png"><figcaption>Figure 1: Time evolution of the probability density ρ1(x,y,t). At t=−2t0 theparticle is in the ring at the left column. At t=0 it reaches its minimumradius as permitted by the uncertainty principle, and then it begins to expandagain.</figcaption></figure>
The same “free-focussing” phenomenon may be better observed by taking a cross section at \(y=0\) of the probability density \(\rho_{1}(x,y,t)\), as shown in Fig. 2.
<figure><img src="content_image/1309.7899/Fig2.png"><figcaption>Figure 2: Time evolution of the section of the probability densityρ1(x,y=0,t) evaluated at y=0.</figcaption></figure>
Finally, in Fig. 3 we can follow the “motion” of the particle by plotting the streamlines of the vector field \(\bm{j}_{1}(x,y,t)\):
<figure><img src="content_image/1309.7899/Fig3.png"><figcaption>Figure 3: Snapshots of the streamlines plot of the probability currentdensity j1(x,y,t) evaluated for ℓ=1. Note that the handiness of the rotationdoes not change at t=0.</figcaption></figure>
From the figures above it is clear why these phenomena seem counterintuitive if one thinks of a particle as a bullet. However, from a quantum mechanical point of view the description furnished above is perfectly consistent.
## References
* (1) P. Strange, “Semiclassical and Quantum Analysis of a Focussing Free Particle Hermite Wavefunction”, arXiv:1309.6753[quant-ph].
* (2) A. E. Siegman, _Lasers_, (University Science Books, USA, 1986).
* (3) C. G. Darwin, “Free motion in quantum mechanics”, Proc. R. Soc. A **117** 258–93 (1928).
|
0812.3492 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 80607,
"num_imgs": 8,
"llama3_tokens_count": 25108
} | [
"content_image/0812.3492/x1.png",
"content_image/0812.3492/x2.png",
"content_image/0812.3492/x4.png",
"content_image/0812.3492/x5.png",
"content_image/0812.3492/x7.png",
"content_image/0812.3492/x8.png",
"content_image/0812.3492/x10.png",
"content_image/0812.3492/x12.png"
] | # Effective shape and phase behaviour of short charged rods
Eelco Eggen\({}^{1}\), Marjolein Dijkstra\({}^{2}\), and René van Roij\({}^{1}\)
\({}^{1}\)Institute for Theoretical Physics, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht, The Netherlands
\({}^{2}\)Soft Condensed Matter, Debye Institute for Nanomaterials Science, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
8 July 2008
###### Abstract
We explicitly calculate the orientation-dependent second virial coefficient of short charged rods in an electrolytic solvent, assuming the rod-rod interactions to be a pairwise sum of hard-core and segmental screened-Coulomb repulsions. From the parallel and isotropically averaged second virial coefficient, we calculate the effective length and diameter of the rods, for charges and screening lengths that vary over several orders of magnitude. Using these effective dimensions, we determine the phase diagram, where we distinguish a low-charge and strong-screening regime with a liquid crystalline nematic and smectic phase, and a high-charge and weak-screening regime with a plastic crystal phase in the phase diagram.
## I Introduction
The study of suspensions of non-spherical colloidal particles started with the experimental works of Zocher (1925) and Bawden et al. (1936), and with Onsager’s theoretical work (Onsager, 1949). It has since developed into a very versatile field of research. A lot of attention has been focussed on needle-shaped rods, either naturally occurring ones such as viruses like Tobacco Mosaic Virus or fd virus Bawden et al. (1936); Fraden et al. (1989); Dogic and Fraden (2001), or laboratory-synthesized ones such as Boehmite rods (van Bruggen et al., 1996). In recent years, however, a plethora of non-spherical particles have been synthesized that are not extremely elongated, for example: ellipsoidal colloids with aspect ratio \(\sim 3\)(Snoeks et al., 2000); colloidal dumbbells (Johnson et al., 2005); or nano-particles with the shape of a rod, disk, snowman, cube, cap, or raspberry, (Yin and Alivisatos, 2005; Sheu et al., 1990; Jun et al., 2005; Murphy et al., 2005; Zoldesi and Imhof, 2005; Cho et al., 2005; Kraft et al., 2008). These particles are often charged when dissolved in a polar solvent like water, and hence their pair-interactions involve not only the anisotropic steric short-range repulsions but also electrostatic long-range repulsions. The strength of the latter is determined by the charge on the particle and the range is determined by the Debye screening length of the solvent (Derjaguin, 1940; Verwey and Overbeek, 1948). For small charges and strong screening (i.e. high salt concentrations), one expects the steric interactions to be dominant (if we assume that dispersion forces can be neglected). Hence, one can use computer simulations or theoretical studies of _hard_ anisotropic bodies (Bolhuis and Frenkel, 1997; Pfleiderer and Schilling, 2007; Cuesta and Martínez-Ratón, 1997; Martínez-Ratón and Cuesta, 1999; Eppenga and Frenkel, 1984; Esztermann et al., 2006) to get an idea of the phase diagram of the system as a function of concentration. In the case of a high charge or weak screening (i.e. low salt concentration), however, the situation is less clear-cut. There, the degree of anisotropy of the electrostatic interactions is not obvious from the outset: on the one hand one expects the soft screened-Coulomb interactions to wash out the hard-core anisotropy such that the interactions become effectively more spherically symmetric, while on the other hand there are the intriguing findings reported for example in Refs. (Ramirez and Kjellander, 2006; Chapot et al., 2004). The studies in these papers apply to systems of charged anisotropic particles in a screening medium. It was found that the electrostatic anisotropy persists to infinitely large distances as the asymptotic decay of each multipole contribution to the electrostatic potential due to a nonspherical charge distribution is equal (Ramirez and Kjellander, 2006; Chapot et al., 2004). This conclusion is in sharp contrast to the case of a charge distribution in vacuum, where the monopole potential decays more slowly than that of the dipole, as each order multipole contribution decays slower than the next one does. In our paper we investigate the interplay between hard-core and electrostatic interactions for non-spherical particles, for the relatively simple particle shape of spherocylinders.
It is well-established by now that non-spherical colloidal particles can form a wealth of phases in thermodynamic equilibrium. Needle-like colloidal rods, for instance, form a phase sequence I–N–Sm–X upon increasing the concentration from very dilute up to close packing, where I is the completely disordered isotropic fluid phase, N the liquid crystalline nematic phase with orientational ordering, Sm the smectic-A phase built from orientationally ordered liquid-like layers, and X a fully ordered crystal phase (Zocher, 1925; Onsager, 1949; Veerman and Frenkel, 1990; Vroege and Lekkerkerker, 1992; Bolhuis and Frenkel, 1997; Fraden et al., 1989; Poniewierski and Hołyst, 1990; Somoza and Tarazona, 1990; Graf and Löwen, 1997). This phase sequence for colloidal needles is well-established for hard-core interactions (Bolhuis and Frenkel, 1997; Poniewierski and Hołyst, 1990; Somoza and Tarazona, 1990). Also, for softer electrostatic screened-Coulomb repulsions in the case of charged needles, at least in the regime where the length of the rods is much larger than the diameter and the screening length of the electrolytic solvent. This ensures that the effective diameter of the rods is much smaller than the length (Stroobants et al., 1986; Fraden et al., 1989). By contrast, particles with shapes that are sufficiently close to spherical are _not_ expected to exhibit the liquid crystalline phases N and Sm due to their small anisotropy. Instead, for such near-spherical particles one would expect a plastic crystalline phase (P) to appear in the phase diagram, residing in between the isotropic fluid and the fully ordered crystal. The P phase is characterized by positional ordering on a lattice, but without long-ranged orientational ordering of the particles. For instance, a phase sequence I–P–X upon increasing the concentration has indeed been established in simulations of short hard spherocylinders and of hard dumbbells with a length-to-diameter ratio smaller than about 0.35 (Bolhuis and Frenkel, 1997; Vega and Monson, 1997; Marechal and Dijkstra, 2008). The question we address in this paper concerns the effect of colloidal charge and ionic screening on the effective shape of relatively short rods, and on their expected phase sequence upon increasing the concentration. On the basis of the well-established increase of the effective diameter of charged needles compared to their hard-core diameter (Stroobants et al., 1986), it is to be expected that high colloidal charges and weak-screening conditions (i.e. low salt concentrations) lead to a decreased anisotropy of short charged rods. Hence, this will lead to a larger tendency of the system to exhibit a plastic crystal phase instead of liquid crystalline phases in the phase diagram, even if the hard-core shape would allow for liquid crystalline equilibrium phases.
Of course, suspensions of charged rods have been extensively studied theoretically before. Many of these studies are based on Onsager’s second virial theory for hard rods (Onsager, 1949), which is modified and extended to take into account the effects of charge and screening on the isotropic-to-nematic transition (Stroobants et al., 1986; Sato and Teramoto, 1991; Nyrkova and Khokhlov, 1986; Nyrkova et al., 1997; Chen and Koch, 1996; Potemkin et al., 2002). Some of these studies, for example those of Refs. (Onsager, 1949; Stroobants et al., 1986; Sato and Teramoto, 1991), focus on the needle limit in which the rod length is very large compared to the screening length. In this limit, only the diameter is affected by the electrostatic effects, but in such a way that the effective geometry of the rod remains needle-like. In Refs. (Nyrkova and Khokhlov, 1986; Nyrkova et al., 1997) rod lengths of the order of (or larger than) the Debye length are considered, at the expense, however, of ignoring many of the prefactors such that the theory is essentially a scaling theory. Interestingly, this scaling theory predicts nematic–nematic coexistence in some parameter regime, which was later confirmed in Ref. (Chen and Koch, 1996). This coexistence regime is characterized by a small rod charge density, such that the effective geometry of the rod is no longer needle-like. Another limit that was studied in detail is the limit of weak electrostatic interactions, which naturally leads to a perturbative description (Weyerich et al., 1990; Chen and Koch, 1996; Graf and Löwen, 1999). These schemes are very successful at describing the effective (non-needle-like) geometry that shows up in the angular dependence of the second virial coefficient. Another very interesting effect was identified in Ref. (Potemkin et al., 2002), where the correlation free energy of the many-body system of charged rods and counterions was calculated, resulting in an enhanced tendency to orientational ordering and also the possibility of nematic–nematic coexistence. With the notable exception of Ref. (Graf and Löwen, 1999), however, most of these works on charged rods focus on the isotropic and nematic phases and hence, implicitly, on rods which are sufficiently elongated to give liquid crystalline phases at all.
In this paper we take a slightly different perspective. We explicitly calculate the orientation-dependent second virial coefficient of rather short charged rods numerically, for colloidal charges and screening lengths that vary over many decades. Such calculations, in which we use expansions in spherical harmonics, require not only the asymptotic far-field expressions of the multipoles (such as those considered in Refs. (Ramirez and Kjellander, 2006; Chapot et al., 2004)), but in fact their full distance dependence. From the resulting second virial coefficient, we determine an effective hard-core length and diameter. Subsequently, we use these—in combination with the published hard-core phase diagram (Bolhuis and Frenkel, 1997)—to determine the expected phase sequence upon increasing the concentration. This scheme is too crude to distinguish subtleties such as whether or not there is a nematic-nematic coexistence regime or to what extent the isotropic-nematic phase gap is affected. However, it is supposed to indicate reliably whether liquid crystalline (N and Sm) or plastic crystal (P) phases are to be expected in between the isotropic (I) and crystalline (X) phase. We focus on the case where the rod length is of the order of the screening length or smaller, in contrast to most of the previous theoretical work. This is the regime where the crossover from N and Sm to P is expected to occur. In the limit where the rod length is small and the hard-core interactions are important, we give a simplified theoretical description that turns out to be in remarkable agreement with the numerical results. As our numerical approach relies on an expansion in spherical harmonics of the effective pair interaction between two rods, it leads to explicit but involved expressions. We present some of the mathematical technicalities of the derivation of these expressions in the appendix.
## II Model
We consider a system of identical charged colloidal rods suspended in an electrolyte solvent of dielectric constant \(\epsilon\), Debye screening length \(\kappa^{-1}\), and Bjerrum length \(l_{\rm B}=e^{2}/(4\pi\epsilon k_{\rm B}T)\), at temperature \(T\). Here \(e\) is the elementary charge, and \(k_{\rm B}\) is the Boltzmann constant. The rods are assumed to have the shape of a spherocylinder consisting of a cylinder of length \(L\) and diameter \(D\) capped by two hemispheres also of diameter \(D\). The rods have a fixed charge, which we treat here as an (effective) line-charge density \(e\lambda\) distributed homogeneously on the axis of the cylinder. We are interested in the effective pair potential \(V(\mathbf{r};\hat{\omega},\hat{\omega}^{\prime})\) between two rods with orientations \(\hat{\omega}\) and \(\hat{\omega}^{\prime}\) at a center-to-center vector \(\mathbf{r}\), thermally averaged over the degrees of freedom of the electrolyte solvent (characterized by \(\kappa^{-1}\) and \(l_{\rm B}\)). In the spirit of Derjaguin, Landau, Verwey and Overbeek (DLVO), we assume that the effective pair potential consists of steric hard-core repulsions and electrostatic screened-Coulomb interactions between segments of the line charge of the two rods. We ignore short-ranged Van der Waals attractions (i.e. we assume the particle and the solvent to be index-matched or that the dispersion forces are cancelled by steric or charge stabilization). Within these approximations the effective pair potential can be written as
\[\beta V(\mathbf{r};\hat{\omega},\hat{\omega}^{\prime})=\left\{\begin{array}[]{ l}\infty\qquad\mbox{for overlapping rods,}\\ \\ \beta V_{\rm e}(\mathbf{r};\hat{\omega},\hat{\omega}^{\prime})\hfill\mbox{ otherwise,}\end{array}\right.\] (1)
where \(\beta^{-1}=k_{\rm B}T\), the overlap refers to the hard-core repulsions, and the electrostatic interaction potential is given by
\[\beta V_{\rm e} (\mathbf{r};\hat{\omega},\hat{\omega}^{\prime})\]
\[{}=l_{\rm B}\lambda^{2}\int_{-\frac{L}{2}}^{+\frac{L}{2}}{\rm d}l \int_{-\frac{L}{2}}^{+\frac{L}{2}}{\rm d}l^{\prime}\,\frac{\exp[-\kappa| \mathbf{r}+l^{\prime}\hat{\omega}^{\prime}-l\hat{\omega}|]}{|\mathbf{r}+l^{ \prime}\hat{\omega}^{\prime}-l\hat{\omega}|}.\] (2)
The integration variables \(l\) and \(l^{\prime}\) play the role of coordinates running along the cylinder axis of each of the two rods, from one end of the cylinder to the other end. In the long-rod limit, \(L/D\gg 1\) and \(\kappa L\gg 1\), one can replace the integration domains in equation (2) by the full real axis, together with the constraint that the cylinder axes are in “cross configuration” (i.e. the axes intersect when projected onto the plane parallel to both axes). Otherwise, the potential vanishes. One then easily shows that \(V_{\rm e}\) only depends on the shortest distance and the relative angle between the two rods (Onsager, 1949; Stroobants et al., 1986; Chen and Koch, 1996). Here we focus on shorter rods, for which this simplification does not apply. In the appendices we derive systematic series expansions in spherical harmonics to describe the angular and position dependence of \(V_{\rm e}\) explicitly, focussing on rods that are rather short compared to the Debye screening length (which sets the range of the electrostatic repulsions). More specifically, the expansion of the angular dependence is truncated and we consider each term as an expansion in \(\kappa L\) up to fourth order (see appendix). We compare the result with the large-\(\kappa L\) limit.
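For concreteness, the double contour integral in Eq. (2) can be evaluated directly by numerical quadrature. The following is a minimal sketch (in Python, not part of the original analysis; the function name, the quadrature order, and the choice of units are our own), using fixed-order Gauss-Legendre quadrature over the two contour variables \(l\) and \(l^{\prime}\). The hard-core overlap of Eq. (1) has to be tested separately.

```python
import numpy as np

def beta_V_e(r_vec, omega, omega_p, L, kappa, lB, lam, n_quad=64):
    """Screened-Coulomb pair energy of two uniformly charged line segments, Eq. (2).

    r_vec      : center-to-center vector (3,)
    omega(_p)  : unit orientation vectors of the two rods (3,)
    L          : cylinder length
    kappa      : inverse Debye screening length
    lB, lam    : Bjerrum length and line-charge density (consistent length units)
    """
    # Gauss-Legendre nodes/weights mapped onto [-L/2, +L/2] for both contours
    x, w = np.polynomial.legendre.leggauss(n_quad)
    nodes = 0.5 * L * x
    weights = 0.5 * L * w

    # pairwise separations between line elements: |r + l' omega' - l omega|
    sep = (r_vec[None, None, :]
           + nodes[None, :, None] * omega_p[None, None, :]
           - nodes[:, None, None] * omega[None, None, :])
    d = np.linalg.norm(sep, axis=-1)

    kernel = np.exp(-kappa * d) / d                 # screened Coulomb kernel
    integral = np.einsum('i,j,ij->', weights, weights, kernel)
    return lB * lam**2 * integral                   # beta * V_e
```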
The present model can be characterized by a few dimensionless combinations. In the limit of uncharged rods (\(\lambda=0\)), the aspect ratio \(L/D\) of the hard-core dimensions is of primary importance. However, for the charged rods of present interest, the ratio \(\kappa L\) (of the hard-core length to the Debye screening length of the solvent) gives more information on the interaction anisotropy. The ratio \(\kappa D\) is relevant as a measure of ionic strength. Dimensional inspection of the expression in equation (2) shows that the strength of the electrostatic interactions is determined by the dimensionless (square of the) line charge density
\[q\equiv\frac{l_{\rm B}\lambda^{2}}{\kappa}.\] (3)
These dimensionless combinations can span quite a range of numerical values in experimental systems. For instance, for fd virus suspended in water one finds (Dogic and Fraden, 2001) \(L/D>100\), \(\kappa D\simeq 0.1\)–\(1\), and \(q=70\)–\(700\), and recently synthesized silica dumbbells in oily solvents (Ref. 41) are best characterized by \(L/D\simeq 1\), \(\kappa D\simeq 1\), and \(q\simeq 10^{2}\). Short (double-stranded) DNA chains have \(\kappa D\simeq 0.1\)–\(1\) and \(q\simeq 0.1\)–\(10\), while their length can be varied by the number of base pairs included in the sequence. These chains can be characterized as rigid rods up to the persistence length corresponding to \(L/D\simeq 50\). Moreover, present-day synthesis techniques allow for the tuning of surface charge, in principle at least, from essentially vanishing to extremely high. This is achieved for example by using different coatings with varying degrees of ion-dissociation of the surface groups. It is therefore of interest to investigate the thermodynamics of the present model over a wide range of parameters.
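As a rough illustration of how these combinations arise from physical inputs, here is a minimal sketch; the numerical values below are placeholders for a generic aqueous system and are not quoted from the text.

```python
# Illustrative inputs (assumed, not taken from the text): an aqueous suspension
# with Bjerrum length ~0.7 nm, Debye length ~30 nm, and a rod carrying ~100
# elementary charges spread homogeneously over its length.
l_B   = 0.7e-9            # Bjerrum length [m]
kappa = 1.0 / 30e-9       # inverse Debye screening length [1/m]
L, D  = 60e-9, 20e-9      # rod length and diameter [m]
lam   = 100.0 / L         # line-charge density in units of e [1/m]

q = l_B * lam**2 / kappa  # dimensionless charge parameter, Eq. (3)
print(f"L/D = {L/D:.1f}, kappa*L = {kappa*L:.2f}, kappa*D = {kappa*D:.2f}, q = {q:.1f}")
```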
## III Thermodynamics and Effective dimensions
With the pair potential specified by equations (1) and (2), and with an explicit scheme to evaluate it as explained in the appendix, we can study the macroscopic properties of suspensions of these charged rods. In principle, we do this as a function of concentration, for various \(q\), \(\kappa D\), and \(L/D\). Here we circumvent the complexity of the full statistical-mechanical calculation of free energies and phase diagrams of the system at hand. We do this by mapping the second virial coefficient of the _charged_ spherocylinders of interest onto that of _hard_ spherocylinders with an effective cylinder length \(L_{\rm eff}\) and an effective diameter \({D_{\rm eff}}\) that we will calculate below. We then _presume_ that the phase diagram of the system of charged rods follows from that of the effective hard-rod system, which we take from published computer simulation data (Bolhuis and Frenkel, 1997). It is well-known from these and follow-up simulations of hard rods, as well as density functional theory (Poniewierski and Hołyst, 1990; Somoza and Tarazona, 1990), that this system exhibits a sequence of phase transitions upon increasing the concentration that strongly depends on the aspect ratio \(L/D\): sufficiently elongated hard rods with \(L/D>3.7\) have a phase sequence isotropic–nematic–smectic–crystal (I–N–Sm–X), sufficiently short hard rods \(0<L/D<0.35\) show a sequence I–P–X with P a plastic crystal, and in between there are two more regimes in which the N and P phase, respectively, no longer appear in the phase sequence. Below we determine how the analogous crossovers between these regimes of the effective system, as determined by \(L_{\rm eff}/D_{\rm eff}\), depend on the parameters \(q\), \(\kappa D\), and \(L/D\).
<figure><img src="content_image/0812.3492/x1.png"><figcaption>Figure 1: The effective excluded volume κ3E, as a function of the angle γbetween the two rod orientations, for different values of the charge parameterq. We used the parameter values κL=1 and κD=0.01 (such that L/D=100).</figcaption></figure>
A key ingredient of our calculation is the effective excluded volume \(E(\hat{\omega},\hat{\omega}^{\prime})\) of two charged rods with orientations \(\hat{\omega}\) and \(\hat{\omega}^{\prime}\), defined as
\[E(\hat{\omega},\hat{\omega}^{\prime})=\int{\rm d}\mathbf{r}\,\Bigl{(}1-\exp \bigl{[}-\beta V(\mathbf{r};\hat{\omega},\hat{\omega}^{\prime})\bigr{]}\Bigr{)},\] (4)
where the pair potential between the rods is given in Eqs. (1) and (2). Note that \(E(\hat{\omega},\hat{\omega}^{\prime})\) is in fact twice the corresponding second virial coefficient, and that the nomenclature “effective excluded volume” stems from the fact that it reduces to the actual excluded volume of the pair in the case of purely hard-core interactions. On the basis of symmetry arguments one easily checks that the angular dependence of \(E(\hat{\omega},\hat{\omega}^{\prime})\) is in fact only through the angle \(\gamma=\arccos(\hat{\omega}\cdot\hat{\omega}^{\prime})\) between the cylinder axes of the two rods. In Fig. 1 we show this \(\gamma\)-dependence of \(E\) for rods characterized by \(\kappa L=1\) and \(\kappa D=0.01\) (so \(L/D=100\) and weak screening), for several charge parameters \(q\) ranging from \(q=0\) (uncharged) to \(q=0.01\) (fairly charged). The results of Fig. 1 stem from a combination of numerical and analytic procedures explained in detail in the appendix. These involve a five-fold integration: over the contour of the rods \(l\) and \(l^{\prime}\) in Eq. (2), and the center-to-center separation vector \(\mathbf{r}\) in Eq. (4).
The key observations of Fig. 1, which is typical for many system parameters, are that for increasing \(q\) the effective excluded volume becomes (i) less anisotropic, and (ii) larger in magnitude. Moreover, for all \(q\) the effective excluded volume is larger for perpendicular orientations than for parallel ones. Qualitatively, and in fact quantitatively for many parameters, this behaviour is identical to that of _hard_ spherocylinders of effective length \(L_{\rm eff}\) and diameter \(D_{\rm eff}\), for which the excluded volume is given by (Onsager, 1949)
\[\mathcal{V}_{\rm eff}(\hat{\omega},\hat{\omega}^{\prime})=\frac{4\pi}{3}D_{\rm eff }^{3}+2\pi L_{\rm eff}D_{\rm eff}^{2}+2L_{\rm eff}^{2}D_{\rm eff}\sin\gamma.\] (5)
In principle one can fit the functional form of Eq. (5) to the numerical results such as those of Fig. 1 to determine the effective hard-core dimensions \(L_{\rm eff}\) and \(D_{\rm eff}\) for given charged-rod parameters. However, instead of fitting the full angular dependence numerically, it is more convenient to match the isotropically-averaged effective excluded volume and the parallel one, given by
\[E_{\rm iso}= \frac{1}{(4\pi)^{2}}\int{\rm d}\hat{\omega}\int{\rm d}\hat{\omega }^{\prime}\,E(\hat{\omega},\hat{\omega}^{\prime})\]
\[= \frac{1}{2}\int_{0}^{\pi}{\rm d}\gamma\,\sin\gamma\,E(\gamma),\] (6)
\[E_{\parallel}= E(\hat{\omega},\hat{\omega})=E(\gamma=0),\] (7)
to the values for spherocylinders with effective hard-core dimensions \(L_{\rm eff}\) and \(D_{\rm eff}\)
\[\mathcal{V}_{\rm iso}= \frac{4\pi}{3}D_{\rm eff}^{3}+2\pi L_{\rm eff}D_{\rm eff}^{2}+ \frac{\pi}{2}L_{\rm eff}^{2}D_{\rm eff},\] (8)
\[\mathcal{V}_{\parallel}= \frac{4\pi}{3}D_{\rm eff}^{3}+2\pi L_{\rm eff}D_{\rm eff}^{2},\] (9)
respectively. This procedure yields the effective hard-core dimensions
\[D_{\rm eff}= \left[\frac{3E_{\parallel}}{4\pi}\left(1+3\Delta-\sqrt{3\Delta(2+ 3\Delta)}\right)\right]^{\frac{1}{3}},\] (10)
\[\frac{L_{\rm eff}}{D_{\rm eff}}= 2\Delta+\frac{2}{3}\sqrt{3\Delta(2+3\Delta)},\] (11)
where we used, for notational convenience, the dimensionless anisotropy parameter \(\Delta\) defined as
\[\Delta\equiv\frac{E_{\rm iso}-E_{\parallel}}{E_{\parallel}}.\] (12)
It turns out that inserting \(L_{\rm eff}\) and \(D_{\rm eff}\) as obtained from Eqs. (10), (11), and (12) into Eq. (5) gives an angular dependence that is in very good agreement with the numerically obtained effective excluded volume of charged rods.
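The inversion expressed by Eqs. (10)-(12) is simple enough to state as a short sketch (Python, with function and variable names of our own choosing):

```python
import numpy as np

def effective_dimensions(E_iso, E_par):
    """Effective hard-core diameter and cylinder length from the isotropically
    averaged and parallel effective excluded volumes, Eqs. (10)-(12)."""
    Delta = (E_iso - E_par) / E_par                           # Eq. (12)
    root = np.sqrt(3.0 * Delta * (2.0 + 3.0 * Delta))
    D_eff = (3.0 * E_par / (4.0 * np.pi)
             * (1.0 + 3.0 * Delta - root)) ** (1.0 / 3.0)     # Eq. (10)
    L_over_D = 2.0 * Delta + (2.0 / 3.0) * root               # Eq. (11)
    return D_eff, L_over_D * D_eff                            # (D_eff, L_eff)
```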
It is also interesting to compare our numerical results with analytic expressions that are valid in the limit where \(L/D\gg 1\) and \(\kappa L\gg 1\), as obtained by Stroobants et al. (1986). In this needle-limit the effective excluded volume is given by
\[E_{\infty}(\gamma)=2L^{2}\kappa^{-1}\sin\gamma \biggl{[}\gamma_{\rm E}+\ln 2\pi q-\ln\sin\gamma\]
\[{}+\Gamma\left(0,\frac{2\pi q\exp[-\kappa D]}{\sin\gamma}\right) \biggr{]},\] (13)
where \(\gamma_{\rm E}\approx 0.577\) is the Euler-Mascheroni constant and where the incomplete gamma function (or exponential integral) is defined by
\[\Gamma(\alpha,x)=\int_{x}^{\infty}{\rm d}y\,y^{\alpha-1}\exp[-y].\] (14)
From this expression—using the Onsager limit \(\mathcal{V}_{{\rm iso},\infty}=(\pi/2)L^{2}D_{{\rm eff},\infty}\) for the isotropically averaged excluded volume—the effective diameter can be calculated
\[\kappa D_{{\rm eff},\infty} {}=\gamma_{\rm E}+\ln 2\pi q+\ln 2-\frac{1}{2}\]
\[{}+\frac{2}{\pi}\int_{0}^{\pi}{\rm d}\gamma\,\sin^{2}\gamma\, \Gamma\left(0,\frac{2\pi q\exp[-\kappa D]}{\sin\gamma}\right).\] (15)
The effective length is taken equal to the rod length \(L_{{\rm eff},\infty}=L\).
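For comparison with the numerical data below, Eq. (15) can be evaluated directly. A minimal sketch, assuming SciPy's `exp1` for the incomplete gamma function \(\Gamma(0,x)\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1   # exp1(x) = Gamma(0, x) for real x > 0

def kappa_D_eff_needle(q, kappa_D):
    """Needle-limit effective diameter kappa*D_eff of Eq. (15)."""
    def integrand(g):
        return np.sin(g)**2 * exp1(2.0 * np.pi * q * np.exp(-kappa_D) / np.sin(g))
    tail, _ = quad(integrand, 0.0, np.pi)
    return (np.euler_gamma + np.log(2.0 * np.pi * q) + np.log(2.0) - 0.5
            + (2.0 / np.pi) * tail)
```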
## IV Numerical Results
Calculations such as those of Fig. 1 are reasonably accurate for values of \(\kappa L\) roughly up to 2. For higher values the applied approximations become poor, such that for \(\kappa L\gtrsim 3\) the calculations become even qualitatively unreliable for many of our parameters. For this reason we restrict most of our attention to the regime where \(\kappa L\leq 2\).
<figure><img src="content_image/0812.3492/x2.png"><figcaption>Figure 2: The effective diameter κDeff as a function of the rod diameter κDfor (a) κL=1 and different values for the charge parameter q; (b) q=10 anddifferent values for the rod length κL. The rod dimensions are scaled by thescreening length κ−1. The thin solid line is a guide to the eye, representingthe hard-core limit Deff=D. The small solid circles give the values for κD=0from the numerical calculations. The larger open circles are obtained by theapproximation given in Eq. (17).</figcaption></figure>
<figure><img src="content_image/0812.3492/x4.png"><figcaption>Figure 3: The effective excluded volume κ3E, as a function of the angle γbetween the rod orientations, calculated using different numerical schemes(see text), involving M discrete charges, Monte-Carlo (MC) integration, andthe present analytic approach. We used the parameter values κL=1, κD=0.25(such that L/D=4) and q=1.</figcaption></figure>
In order to assess the accuracy of our calculations, we compare some of the results of our calculations with those obtained from more extensive numerical integration schemes. One is given by the same spatial integration scheme as before, but with the (effective) line-charge density replaced by a discrete charge distribution. The rod charge is represented by an odd number of charge units (\(M=2N+1\)) distributed evenly on the cylinder axis, where one unit is always located on the center of the axis, and two units are always located on the two end points of the axis. The latter are of magnitude \(e\lambda L/(4N)\), while all others are of magnitude \(e\lambda L/(2N)\). This ensures that the total charge is \(e\lambda L\) and the continuum limit \(N\rightarrow\infty\) yields the correct homogeneous line charge. The other scheme uses the same discrete charge density as described above, but uses a Monte-Carlo (MC) scheme to perform the integration. This scheme is denoted by the plusses in Fig. 3. The agreement between the results obtained from the different schemes, as shown in Fig. 3, is excellent for \(M\geq 13\), particularly when considering that the shape of the effective excluded volume differs significantly from the hard-core case for these parameters. Therefore, we conclude that our calculation correctly predicts the angular dependence of the effective excluded volume of short charged rods.
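A minimal sketch of the discrete-charge representation used in this consistency check (endpoint charges carry half the weight of the interior ones); the configuration and parameter values in the example are arbitrary, with lengths measured in units of \(\kappa^{-1}\):

```python
import numpy as np

def discrete_charges(L, N):
    """M = 2N+1 point charges on the rod axis: the two endpoint charges carry
    half the weight of the interior charges, so the total charge is lambda*L."""
    pos = np.linspace(-0.5 * L, 0.5 * L, 2 * N + 1)   # positions along the axis
    w = np.full(2 * N + 1, L / (2.0 * N))             # interior weights  L/(2N)
    w[0] = w[-1] = L / (4.0 * N)                      # endpoint weights  L/(4N)
    return pos, w

def beta_V_discrete(r_vec, omega, omega_p, L, kappa, lB, lam, N=6):
    """Discrete-charge approximation of the screened-Coulomb rod-rod energy, Eq. (2)."""
    pos, w = discrete_charges(L, N)
    sep = (r_vec[None, None, :]
           + pos[None, :, None] * omega_p[None, None, :]
           - pos[:, None, None] * omega[None, None, :])
    d = np.linalg.norm(sep, axis=-1)
    return lB * lam**2 * np.einsum('i,j,ij->', w, w, np.exp(-kappa * d) / d)

# convergence with the number of charges M = 2N+1 (M >= 13 suffices in the text)
r = np.array([0.0, 0.0, 1.2])
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
for N in (2, 6, 12):
    print(2 * N + 1, beta_V_discrete(r, u1, u2, L=1.0, kappa=1.0, lB=1.0, lam=1.0, N=N))
```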
<figure><img src="content_image/0812.3492/x5.png"><figcaption>Figure 4: The effective length Leff/L as a function of the rod diameter κD for(a) κL=1 and different values for the charge parameter q; (b) q=10 anddifferent values for the rod length κL. The rod diameter is scaled by thescreening length κ−1 and the effective length is scaled by the rod length L.</figcaption></figure>
In the previous section, we have shown that the angular dependence of the effective excluded volume can be used to calculate the effective rod dimensions \(L_{\rm eff}\) and \(D_{\rm eff}\)—from the values of \(E_{\parallel}\) and \(E_{\rm iso}\)—by applying Eqs. (10), (11), and (12). Fig. 2(a) shows the numerically calculated effective diameter as a function of the real diameter for \(\kappa L=1\) and a range of charge parameters \(q\). Fig. 2(b) shows the same function, but now for \(q=10\) and a range of rod lengths \(\kappa L\). Note that all (effective) rod dimensions are expressed in units of the screening length. In Fig. 2(b) the needle limit \(\kappa L\gg 1\), given by Eq. (15), is plotted for comparison. Both graphs clearly reveal two regimes
\[D_{\rm eff}\simeq\left\{\begin{array}[]{cl}D_{\rm e}&\mbox{for }D\ll D_{\rm e} ,\\ \\ D&\mbox{for }D\gg D_{\rm e}.\end{array}\right.\] (16)
These can be identified as an electrostatic regime at small \(\kappa D\) (weak screening) and a hard-core regime at high enough \(\kappa D\) (strong screening). In the hard-core regime, the effective diameter equals the hard-core diameter, while in the (weakly screened) electrostatic regime the effective diameter saturates to a plateau value \(D_{\rm e}\). This electrostatic effective diameter depends on \(q\) and \(\kappa L\), and increases with increasing \(q\) and \(\kappa L\). Also, it is (much) larger than the hard-core diameter due to the (strong) rod-rod repulsions. Values of the electrostatic effective diameter are included in Fig. 2, where the small solid circles represent values obtained from numerical calculations for \(\kappa D=0\). The larger open circles represent the following simple approximation for \(D_{\rm e}\).
In the short-rod limit, we can treat the double layer around the rod as spherically symmetric, with an effective point charge \(e\lambda L\) in the center, such that also the pair potential is spherically symmetric. This gives \(\Delta=0\), and hence from Eq. (10), for large enough \(q\) (or small enough \(\kappa D\)), we obtain the electrostatic effective diameter from the simple expression
\[D_{\rm e}=\left[3\int_{0}^{\infty}{\rm d}r\,r^{2}\left(1-\exp\left[-q\kappa^{2}L^{2}\frac{\exp[-\kappa r]}{\kappa r}\right]\right)\right]^{\frac{1}{3}}.\] (17)
This approximation is given in Fig. 2 by the larger open circles. Both graphs show good agreement for \(\kappa L\leq 1\) and all values for \(q\). Fig. 2(b) also shows that the regime \(\kappa L\leq 2\)—which is reliably accessible with our truncated numerical scheme—evolves smoothly to the needle-limit \(\kappa L\gg 1\) of Stroobants et al. (1986). The curve for \(\kappa L=3\) shows some signatures of the numerical instabilities we encounter for larger \(\kappa L\).
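A sketch of this spherical short-rod approximation (our own implementation; the point-charge pair potential \(\beta V(r)=q\kappa^{2}L^{2}\exp[-\kappa r]/(\kappa r)\) is the same one that enters Eqs. (20)-(24) below, and the hard core is neglected, as appropriate for small \(\kappa D\)):

```python
import numpy as np
from scipy.integrate import quad

def D_e(q, kappa, L):
    """Electrostatic effective diameter: Delta = 0 in Eq. (10), with E_parallel
    evaluated for the spherically symmetric point-charge pair potential."""
    def integrand(r):
        beta_V = q * (kappa * L)**2 * np.exp(-kappa * r) / (kappa * r)
        return r**2 * (1.0 - np.exp(-beta_V))
    E_par_over_4pi, _ = quad(integrand, 1e-9 / kappa, 60.0 / kappa)
    return (3.0 * E_par_over_4pi)**(1.0 / 3.0)   # Eq. (10) with Delta = 0

# plateau values kappa*D_e for kappa*L = 1 and a few charge parameters
for q in (0.1, 1.0, 10.0, 100.0):
    print(q, D_e(q, kappa=1.0, L=1.0))
```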
In a similar fashion we can also study the effective length \(L_{\rm eff}\) of the rods. Fig. 4(a) shows results of numerical calculations of the effective rod length for \(\kappa L=1\) and a range of charge parameters \(q\). Fig. 4(b) is the result for \(q=10\) and a range of rod lengths \(\kappa L\). The rod dimensions are expressed in units of the Debye length, whereas \(L_{\rm eff}\) is expressed in units of the hard-core length. We distinguish again two asymptotic regimes, the strong screening (hard core) regime \(\kappa D\gg 1\) where \(L_{\rm eff}=L\), and the weak-screening (electrostatic) regime \(\kappa D\ll 1\) where \(L_{\rm eff}\) reaches a plateau value that depends on \(q\) and \(\kappa L\). Note also that \(L_{\rm eff}<L\), which is perhaps unexpected at first sight. Naively, one could expect the effective length to increase with increasing effective excluded volume. However, as Sato and Teramoto (1991) pointed out, the effective length decreases with increasing rod charge density because of end effects. Thus, the increase of the effective excluded volume—due to the increase of the rod charge density—is purely caused by the increase of the effective diameter. Moreover, this increase more than compensates the decrease in effective length, such that the total effective particle length \(L_{\rm eff}+D_{\rm eff}\) does increase with increasing rod charge density. Inspection of Fig. 4(a) also reveals numerical (convergence) problems for \(q\geq 100\) at \(\kappa D\gtrsim 1\), where \(\kappa L_{\rm eff}\) sharply drops and rises before reaching the hard-core limit \(L_{\rm eff}=L\). This is in fact only a minor problem in practice, as it only occurs in the regime where \(L_{\rm eff}/D_{\rm eff}\lesssim 0.1\). There, the anisotropic contribution to the effective excluded volume is much smaller than the isotropic part. Upon approach of the needle-limit \(\kappa L\gg 1\), see Fig. 4(b), we find that \(L_{\rm eff}\) approaches \(L\) for all values of \(\kappa D\), as expected.
## V Phase behaviour
We have determined the effective length and diameter of charged rods, by mapping their orientation-dependent second virial coefficient onto that of effective hard rods. Subsequently, we also study the effective length-to-diameter ratio \(L_{\rm eff}/D_{\rm eff}\). In Fig. 5 we show this effective aspect ratio as a function of the rod charge for \(\kappa L=1\) and a range of rod diameters \(\kappa D\). All curves with \(\kappa D>0\) essentially decrease from their maximum value—the hard-core aspect ratio \(L/D\)—towards the curve given by \(\kappa D=0\). This indicates that the effective dimensions of charged rods become independent of the hard-core diameter for large charge parameters, where we enter the electrostatic regime. Also, since the effective aspect ratio for \(\kappa D=0\) is a decreasing function for large \(q\), we see that the charged rods essentially behave like charged spheres upon increasing the charge above a certain value.
<figure><img src="content_image/0812.3492/x7.png"><figcaption>Figure 5: The effective aspect ratio Leff/Deff as a function of the chargeparameter q for κL=1 and different values for the rod diameter κD.Different—possibly coexisting—phases are associated with a certain range of(effective) aspect ratios. See the text for an explanation of theabbreviations and the boundary values.</figcaption></figure>
Moreover, Fig. 5 reveals a local maximum for very small \(\kappa D\), in the regime where \(q\simeq 1\). This effect can be understood by considering the electrostatic regime for small charge parameters \(q\). Eq. (11) shows that the effective aspect ratio is governed by the dimensionless anisotropy parameter \(\Delta\), which is defined in Eq. (12). In the electrostatic regime, this anisotropy can be shown—up to first order—to be proportional to \(q\). The reason for this is that the linear approximation of the effective excluded volume is orientation independent (Chen and Koch, 1996). Therefore, the difference between the isotropically-averaged and parallel values is of second order in \(q\), whereas the parallel value itself is of first order. The effective aspect ratio is of order \(\sqrt{\Delta}\), and thus increases as \(\sqrt{q}\). Conversely, for \(q\gtrsim 1\) the effective length is more or less constant, and the effective aspect ratio decreases again due to the increase of the effective diameter.
The horizontal dotted lines in Fig. 5 indicate the crossover values (0.35, 3.5, and 3.7) for regimes with different phase sequences. The values for these aspect ratios are taken from simulation results of hard-spherocylinder systems by Bolhuis and Frenkel (1997). These simulations consist of explicit free-energy calculations of coexisting phases, where the most dilute phase is always given by an isotropic fluid (I), and the most dense phase by a fully ordered crystal (X). Depending on the aspect ratio, different phases were found in between these two phases. For aspect ratios exceeding \(\sim 3.7\) the phase sequence I–N–Sm–X was found upon increasing the density. Here, the N and Sm denote the nematic and smectic-A liquid crystalline phases, respectively. Somewhat shorter rods, with an aspect ratio in the narrow regime between \(\sim 3.5\) and \(\sim 3.7\), can still form a smectic-A but no longer a nematic phase, and hence have a phase sequence I–Sm–X. Even shorter hard rods, with an aspect ratio in between \(\sim 0.35\) and \(\sim 3.5\) cannot form a thermodynamically stable smectic-A phase, and thus crystallize directly into a fully ordered crystal from the isotropic fluid, yielding a phase sequence I–X. Very short hard rods, with an aspect ratio smaller than \(\sim 0.35\), exhibit a plastic (P) crystal phase, such that the phase sequence is I–P–X. The plastic crystal phase is characterized by orientational disorder, but has translational order as in a crystal phase (Bolhuis and Frenkel, 1997; Vega and Monson, 1997). This regime arises naturally in the case that \(\kappa L\) is small. Then, such a crystal forms because of the essentially isotropic long-range repulsive interactions, but the competition with entropic effects prevents the rods from aligning.
We use the mapping of the charged-rod system onto the effective hard-rod system to give an indication of the phase sequence of systems of charged rods as a function of the parameters \(\kappa L\), \(\kappa D\) (or \(L/D\)), and \(q\). For instance, from the curve for \(\kappa D=1\) in Fig. 5, we see that the effective aspect ratio never exceeds unity for any \(q\). This excludes the possibility of a nematic or smectic-A liquid crystal phase. The curve starts off at its maximum (in the limit where \(q\to 0\)), where the effective aspect ratio equals the hard-core aspect ratio \(L/D=\kappa L=1\). It crosses the value \(L_{\rm eff}/D_{\rm eff}=0.35\) at \(q\approx 2.35\), such that a sufficiently large rod charge density allows for a plastic crystal phase. Similarly, for \(\kappa D=0.1\) (which corresponds to \(L/D=10\)), we find all four phase sequences upon increasing \(q\).
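The mapping itself is a simple lookup once the effective aspect ratio is known; a minimal sketch using the crossover values of Bolhuis and Frenkel (1997) quoted above:

```python
def phase_sequence(aspect_ratio):
    """Expected phase sequence of the effective hard-spherocylinder system,
    based on the crossover aspect ratios 0.35, 3.5 and 3.7."""
    if aspect_ratio > 3.7:
        return "I-N-Sm-X"
    if aspect_ratio > 3.5:
        return "I-Sm-X"
    if aspect_ratio > 0.35:
        return "I-X"
    return "I-P-X"

print(phase_sequence(0.8))   # example: an effective aspect ratio of 0.8 gives I-X
```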
<figure><img src="content_image/0812.3492/x8.png"><figcaption>Figure 6: Boundary lines for given values for the effective aspect ratioLeff/Deff. See the text for an explanation of the abbreviations of thedifferent regime labels. The points are results of the numerical calculations,and the lines are given by a simplified theory. We fix (a) κL=1, and (b)L/D=20, respectively.</figcaption></figure>
By determining the intersections of the effective aspect ratio with the crossover values of the hard-rod system, we construct “phase diagrams” indicating the different regimes. In Fig. 6 we present two examples of such diagrams in the plane spanned by \(q\) and \(\kappa D\). In Fig. 6(a), we fix \(\kappa L=1\), such that the horizontal axis could read \(D/L\) as well. In Fig. 6(b) we fix \(L/D=20\), such that the change in \(\kappa D\) physically corresponds to a change in salt concentration (while keeping the particle dimensions fixed). The symbols denote the crossover values for the effective aspect ratio as determined from our numerical data (such as presented in Fig. 5). The lines are based on an approximate theoretical model to be discussed in section VI.
Both diagrams in Fig. 6 show that rods with sufficiently high surface charge density always show the I–P–X sequence. This is due to the essentially spherical nature of the effective shape of highly charged rods. The limit of uncharged rods is determined by the hard-core sequence that corresponds to \(L/D\). The I–N–Sm–X regime at fixed \(\kappa L\) in Fig. 6(a) is completely bounded. First, by a hard-core regime when \(\kappa D\gtrsim 0.27\), where the liquid crystal phases cannot exist even for \(q=0\) because \(L/D\lesssim 3.7\). Second, by an electrostatic regime in the weak-screening limit of small \(\kappa D\), where the rods effectively behave as spheres since \(D_{\rm eff}\gg L_{\rm eff}\). Conversely, the trends displayed for fixed \(L/D\) in Fig. 6(b) are monotonic, with an I–N–Sm–X regime that extends to higher \(q\) with increasing \(\kappa D\).
<figure><img src="content_image/0812.3492/x10.png"><figcaption>Figure 7: (a) The effective length κLeff as a function of the rod length κLfor κD=0.1 and different values for the charge parameter q. The thin solidline represents the needle or hard-core limit, where Leff≃L. (b) The effectivediameter κDeff as a function of the rod charge parameter q for κD=0.1 anddifferent values for the rod length κL. The (effective) rod dimensions arescaled by the screening length κ−1.</figcaption></figure>
## VI A Simpler Model
For small values of the effective surface-charge density, we found that the electrostatic contribution to the effective excluded volume is essentially isotropic in nature. This means that the anisotropic effects are primarily due to the hard-core anisotropy (as apparent from Fig. 1), such that
\[L_{\rm eff}^{2}D_{\rm eff}\simeq L^{2}D.\] (18)
On this basis, we propose here a simple model, which turns out to describe our numerical findings with remarkable accuracy. This model introduces a “spherical approximation” of the electrostatic contribution to the effective excluded volume, which involves the orientation-dependent diameter \(\bar{D}(\gamma)\). The volume of a sphere of this diameter is equal to the hard-core excluded volume of a pair of rods
\[\frac{4\pi}{3}\bar{D}(\gamma)^{3}=\frac{4\pi}{3}D^{3}+2\pi LD^{2}+2L^{2}D\sin\gamma.\] (19)
We approximate the effective excluded volume by the value for a charged sphere of diameter \(\bar{D}(\gamma)\) and an effective surface charge that equals the total amount of effective charge on the rods
\[\bar{E}(\gamma)=\frac{4\pi}{3}\bar{D}(\gamma)^{3}\]
\[{}+4\pi\int_{\bar{D}(\gamma)}^{\infty}{\rm d}r\,r^{2}\left(1-\exp \left[-q\kappa^{2}L^{2}\frac{\exp[-\kappa r]}{\kappa r}\right]\right).\] (20)
Note that the only orientation dependence of the electrostatic contribution to this effective excluded volume (i.e. the second term) comes from the integral boundary \(\bar{D}(\gamma)\). To calculate the effective dimensions, we only need the parallel and isotropically averaged values of the effective excluded volume. In the parallel case (\(\gamma=0\)) this value is readily calculated
\[\bar{E}_{\parallel}=\frac{4\pi}{3}\bar{D}_{\parallel}^{3}\]
\[{}+4\pi\int_{\bar{D}_{\parallel}}^{\infty}{\rm d}r\,r^{2}\left(1- \exp\left[-q\kappa^{2}L^{2}\frac{\exp[-\kappa r]}{\kappa r}\right]\right),\] (21)
where
\[\frac{4\pi}{3}\bar{D}_{\parallel}^{3}=\frac{4\pi}{3}D^{3}+2\pi LD^{2}.\] (22)
The isotropically-averaged value can be calculated numerically by using expression (20). However, we approximate it by the value for a charged sphere of diameter \(\bar{D}_{\rm iso}\) (using the same total effective charge), which is taken from the isotropic average of the hard-core excluded volume
\[\frac{4\pi}{3}\bar{D}_{\rm iso}^{3}=\frac{4\pi}{3}D^{3}+2\pi LD^{2}+\frac{\pi} {2}L^{2}D.\] (23)
This approximation yields the simple expression
\[\bar{E}_{\rm iso}=\frac{4\pi}{3}\bar{D}_{\rm iso}^{3}\]
\[{}+4\pi\int_{\bar{D}_{\rm iso}}^{\infty}{\rm d}r\,r^{2}\left(1- \exp\left[-q\kappa^{2}L^{2}\frac{\exp[-\kappa r]}{\kappa r}\right]\right).\] (24)
With our explicit expressions (21) and (24), we evaluate the effective dimensions from Eqs. (10) and (11) as before. The resulting crossover values of the hard-rod system are shown by the curves in Fig. 6, and are in very good agreement with the numerical calculations (denoted by the symbols). The key to this remarkable accuracy lies in the fact that the anisotropic electrostatic contributions are relatively unimportant, because the rod length is small with respect to the screening length (i.e. \(\kappa L\leq 2\)). Thus, our simple model accounts for the hard-core anisotropy correctly, as well as for the isotropic electrostatic contribution.
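A compact sketch of this simplified model (our own implementation, assuming \(D>0\)): it combines the overlap diameters of Eqs. (22) and (23) with the spherical-approximation excluded volumes of Eqs. (21) and (24), and then applies Eqs. (10)-(12).

```python
import numpy as np
from scipy.integrate import quad

def E_bar(D_bound, q, kappa, L):
    """Spherical approximation of the effective excluded volume, Eqs. (20)/(21)/(24):
    a hard sphere of diameter D_bound plus the screened interaction of two
    point charges of magnitude e*lambda*L."""
    def integrand(r):
        beta_V = q * (kappa * L)**2 * np.exp(-kappa * r) / (kappa * r)
        return r**2 * (1.0 - np.exp(-beta_V))
    tail, _ = quad(integrand, D_bound, D_bound + 50.0 / kappa)
    return (4.0 * np.pi / 3.0) * D_bound**3 + 4.0 * np.pi * tail

def simple_model_aspect_ratio(q, kappa, L, D):
    # overlap diameters that preserve the parallel and isotropically averaged
    # hard-core excluded volumes, Eqs. (22) and (23)
    D_par = ((4*np.pi/3*D**3 + 2*np.pi*L*D**2) / (4*np.pi/3))**(1/3)
    D_iso = ((4*np.pi/3*D**3 + 2*np.pi*L*D**2 + np.pi/2*L**2*D) / (4*np.pi/3))**(1/3)
    E_par, E_iso = E_bar(D_par, q, kappa, L), E_bar(D_iso, q, kappa, L)
    Delta = (E_iso - E_par) / E_par                         # Eq. (12)
    root = np.sqrt(3*Delta*(2 + 3*Delta))
    return 2*Delta + (2/3)*root                             # L_eff/D_eff, Eq. (11)

print(simple_model_aspect_ratio(q=1.0, kappa=1.0, L=1.0, D=0.05))   # kappa*L=1, L/D=20
```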
In a sense, this theoretical description can be viewed as a kind of perturbation theory, where we expand the pair potential as a function of \(\kappa L\). The hard-core repulsion represents the zero-th order. The lowest-order contribution to \(V_{\rm e}\) is quadratic in \(\kappa L\) and independent of rod orientations. Also, it happens to correspond to the interaction potential of two point charges \(e\lambda L\). If we plug this approximation of \(V(\mathbf{r};\hat{\omega},\hat{\omega}^{\prime})\) into the expression of the effective excluded volume (given by Eq. (4)), we obtain an expression where the integral boundary \(\bar{D}\) is still a function of both the angle between the rod orientations and the direction of the center-to-center separation vector \(\mathbf{r}\). In fact, it is given by the distance where the rods touch, given a certain orientational configuration. By setting this overlap diameter to a value that is independent of the orientation of \(\mathbf{r}\), but still respects the total hard-core excluded volume, we effectively neglect its dependence on \(\kappa L\). This choice is justified by the fact that (for small \(\kappa L\)) the size of the double layer around the particles is larger than the variations in the overlap diameter \(\bar{D}\). That is why our simple theoretical description can be interpreted as a perturbation theory of a hard-rod reference system with an (almost) isotropic electrostatic contribution. Unfortunately, it completely fails to describe the anisotropic effects in the electrostatic regime. In this regime the anisotropic details of the electrostatic contributions do become important compared to the hard-core contributions.
## VII Discussion and Conclusion
The numerical results presented in this paper give access to a part of the parameter space where there is a large difference between the effective length and the real length. In this regime, one cannot hope that the theory of Stroobants et al. (1986) gives any accurate results, as this is based on the needle limit where \(L_{\rm eff}\simeq L\). The perturbation theory of Chen and Koch (1996) breaks down for most of our parameter values. This is because it is based on small charges, and thus fails to describe the effect of large rod surface-charge densities. Also, this theory is not accurate for large differences between the effective and hard-core diameter.
In Fig. 7(a), we show results of numerical calculations of the effective rod length as a function of the hard-core length, for \(\kappa D=0.1\). Note that again the effective length is always smaller than (or equal to) the hard-core length. Also, in accordance with the results from Fig. 4, there is a hard-core regime for small values of the charge parameter \(q\), as well as for small values of the rod length \(\kappa L\), for which the total amount of effective rod charge is small. On the other hand, there is an electrostatic regime. In Fig. 4, this was shown to be the case for decreasing values of \(\kappa D\), where the plateau value (i.e. the electrostatic length) depends on \(q\) and \(\kappa L\). However, from Fig. 7(a), it can be seen that this electrostatic length depends mostly on the rod length \(\kappa L\), and hardly on the charge parameter \(q\), as long as either \(q\) or \(\kappa L\) is large enough. Furthermore, the effective length is “wedged” in between the electrostatic length and the hard-core length, where the electrostatic length approaches the hard-core length in the needle limit (\(\kappa L\gg 1\)). Unfortunately, there is no analytic theory yet that describes our numerical results for this electrostatic length as a function of \(\kappa L\). Therefore, it would be worthwhile to gain new insight into the effect of electrostatics on the effective rod length for intermediate \(\kappa L\)—neglecting hard-core interactions—in the case of large rod charges. Additionally, Fig. 7(b) shows results of numerical calculations of the effective diameter as a function of the charge parameter \(q\). For \(q\gtrsim 1\), there is a smooth transition to the theoretical needle limit of Ref. (Stroobants et al., 1986), where \(\kappa L\gg 1\). Conversely, this is not the case for \(q\lesssim 1\), due to the fact that the approximations leading to Eq. (13) do not give the correct effective excluded volume for small values of \(q\) and (nearly) parallel rods. This regime merits further investigation.
In conclusion, we have numerically studied the second virial coefficient of short charged rods dispersed in an electrolyte, presuming pairwise screened-Coulomb interactions between the line-charge segments of the rods. The control parameters of interest are the hard-core length \(L\) and diameter \(D\), the Debye screening length of the medium \(\kappa^{-1}\), and the charge parameter \(q\). The main resulting quantities are the effective diameter \(D_{\rm eff}\) and length \(L_{\rm eff}\) of the rods. By a mapping onto an effective hard-core system—for which the sequence of phases between the dilute isotropic phase and the dense crystalline phase is known for all aspect ratios—we predict the relations between control parameters and the expected phase sequence explicitly. We have also constructed a simplified model, based on the diameter \(\bar{D}(\gamma)\) of Eq. (19), which reproduces the numerical results accurately at the expense of much less computational effort. This model is particularly successful in the regime of large effective aspect ratios (\(L_{\rm eff}/D_{\rm eff}>1\)) and small ratios of the rod length to the screening length (\(\kappa L<1\)).
An important result of this work is that highly charged short rods at low salt concentrations (i.e. at strong Coulomb couplings) have a strong tendency to form plastic crystals upon compression. The plasticity stems from the large effective diameter, which makes the rods behave essentially as inflated repulsive spheres with only small nonspherical interactions that are too weak to cause orientational ordering in the crystalline phase. This finding could be important in the study of silica or gold nanorods, which have a reasonably large hard-core aspect ratio (e.g. \(L/D\simeq 5\)). Here, liquid crystalline phases could be expected, but only if the charge on the rods is small enough.
###### Acknowledgements.
It is a pleasure to thank Ahmet Demirörs for explaining his preliminary experimental results on charged dumbbells.
## Appendix A The pair interaction of two charged rods
The pair interaction of two charged rods is given by Eq. (2), where we assume that the electrostatic interaction is determined by integrating over pairs of effective line-charge elements interacting with the screened Coulomb potential. The distance between these pairs is given by a superposition of the relative position of the rods and the combination of the position of the line elements along both rods. Since the integral in Eq. (2) cannot be calculated analytically, we try to simplify the calculation. By expanding the integrand in spherical harmonics, we obtain terms that factorize into two functions of the respective positions
\[\frac{\exp[-|\mathbf{r}-\mathbf{s}|]}{|\mathbf{r}-\mathbf{s}|}= \sum_{l=0}^{\infty}(2l+1)k_{l}(r)P_{l}(\hat{\mathbf{r}}\cdot\hat{ \mathbf{s}})i_{l}(s)\quad\mbox{for }r>s,\]
\[= 4\pi\sum_{l=0}^{\infty}\sum_{m=-l}^{+l}k_{l}(r)Y_{l,m}(\hat{ \mathbf{r}})i_{l}(s)Y_{l,m}^{*}(\hat{\mathbf{s}}),\] (25)
where \(i_{l}\) and \(k_{l}\) are the modified spherical Bessel functions of the first and second kind, respectively. These functions are given by
\[i_{l}(x)= \sqrt{\frac{\pi}{2x}}I_{l+\frac{1}{2}}(x),\] (26)
\[k_{l}(x)= \sqrt{\frac{2}{\pi x}}K_{l+\frac{1}{2}}(x),\] (27)
where \(I_{\nu}\) and \(K_{\nu}\) are the modified (cylindrical) Bessel functions of the first and second kind, respectively. The Legendre polynomials \(P_{l}\) are expanded into spherical harmonics \(Y_{l,m}\) using the famous addition theorem. We use the notation where \(r=|\mathbf{r}|\), and \(\hat{\mathbf{r}}=\mathbf{r}/r\). Finally, the asterisk “\(*\)” denotes complex conjugation. The unit vector as given in the arguments of each of the spherical harmonic functions should be interpreted as the two angles in spherical coordinates with respect to an arbitrarily chosen reference frame. Since the Legendre polynomial of the dot product of the two orientations is independent of this choice, so is the sum over \(m\) of the product of the two spherical harmonics.
We note that one could consider rewriting the expression of the pair potential in rotational invariants (as used in Ref. (Graf and Löwen, 1999)). These are functions of three orientations, including a sum over \(m\) of a product of three spherical harmonic functions multiplied by Clebsch-Gordan coefficients. They form a complete set of orthogonal functions dependent only on the relative orientations of \(\hat{\mathbf{r}}\), \(\hat{\mathbf{\omega}}\), and \(\hat{\mathbf{\omega}}^{\prime}\) with respect to each other. However, it turns out that in our case these are not really helpful. Alternatively, one could consider a resummation of the expansion in spherical harmonics, such that each term has a faster asymptotic decay than the previous term. This is not the case here, since each Bessel function \(k_{l}\) has the same asymptotic decay as \(k_{0}\) (Ramirez and Kjellander, 2006).
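The expansion (25) is easy to verify numerically. A small sketch follows; note that SciPy's `spherical_kn` uses the convention \(k_{l}(x)=\sqrt{\pi/(2x)}\,K_{l+1/2}(x)\), which differs from Eq. (27) by a factor \(\pi/2\).

```python
import numpy as np
from scipy.special import spherical_in, spherical_kn, eval_legendre

def yukawa_expansion(r_vec, s_vec, l_max=20):
    """Truncated expansion (25) of exp(-|r-s|)/|r-s| in modified spherical
    Bessel functions and Legendre polynomials (valid for r > s)."""
    r, s = np.linalg.norm(r_vec), np.linalg.norm(s_vec)
    cos_t = np.dot(r_vec, s_vec) / (r * s)
    ls = np.arange(l_max + 1)
    # the text's k_l equals (2/pi) times SciPy's spherical_kn
    terms = ((2 * ls + 1) * (2.0 / np.pi) * spherical_kn(ls, r)
             * eval_legendre(ls, cos_t) * spherical_in(ls, s))
    return terms.sum()

r_vec, s_vec = np.array([0.0, 0.0, 2.0]), np.array([0.4, 0.3, 0.0])
d = np.linalg.norm(r_vec - s_vec)
print(yukawa_expansion(r_vec, s_vec), np.exp(-d) / d)   # the two values should agree
```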
## Appendix B Domains of integration
<figure><img src="content_image/0812.3492/x12.png"><figcaption>Figure 8: Illustration of the domain of integration of the superposition ofthe positions of two line elements. The dashed circle of radius r divides theparallelogram into two domains.</figcaption></figure>
The integration over line elements of both rods in Eq. (2) is in fact an integration of the vector \(l\hat{\omega}-l^{\prime}\hat{\omega}^{\prime}\) over a parallelogram-shaped area in the plane tangent to both rod orientations. This area is illustrated in Fig. 8. There is a straightforward choice for the reference frame and a substitution of variables
\[\hat{\omega}= \left(\cos\frac{\gamma}{2},\sin\frac{\gamma}{2},0\right),\] (28)
\[\hat{\omega}^{\prime}= \left(\cos\frac{\gamma}{2},-\sin\frac{\gamma}{2},0\right),\] (29)
\[l\hat{\omega}-l^{\prime}\hat{\omega}^{\prime}= (\rho\cos\varphi,\rho\sin\varphi,0),\] (30)
where \(\gamma\) is the angle between the two rod orientations. The polar coordinates \(\rho\) and \(\varphi\) describe the same plane as \(l\) and \(l^{\prime}\). The parallelogram can be cut up into four equivalent pieces, keeping only the terms in the expansion (25) where \(l\) and \(m\) are both even. The integral boundaries of the first quadrant (\(0\leq\varphi\leq\pi/2\)) satisfy
\[0\leq\rho\leq\frac{L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}.\] (31)
It is important to note that the functional form of the integrand can vary as a function of \(\rho\), because \(k_{l}\) and \(i_{l}\) switch roles when \(r<\rho\). We shall split the result of our expansion into each order in \(l\) and \(m\), to be examined separately. We write
\[\beta V_{\rm e}(r,\theta,\phi;\hat{\omega},\hat{\omega}^{\prime})\]
\[{}=\kappa l_{\rm B}\lambda^{2}\underbrace{\sum_{l=0}^{\infty}\sum _{m=-l}^{+l}}_{l,m\;{\rm even}} \frac{(-1)^{\frac{l+m}{2}}(2l+1)(l-m)!}{2^{l}\left(\frac{l+m}{2} \right)!\left(\frac{l-m}{2}\right)!}\]
\[{}\times\mathcal{A}_{l,m}(r;\gamma)P_{l,m}(\cos\theta)\cos(m\phi),\] (32)
where \(P_{l,m}\) are the associated Legendre functions. We have used that for \(l\) and \(m\) both even
\[\frac{Y_{l,m}(\theta,\phi)+Y_{l,-m}(\theta,\phi)}{2}\]
\[{}=\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}\,P_{l,m}(\cos \theta)\cos(m\phi),\] (33)
and
\[\frac{1}{2}\left(Y_{l,m}^{*}\left(\vartheta=\frac{\pi}{2},\varphi \right)+Y_{l,-m}^{*}\left(\vartheta=\frac{\pi}{2},\varphi\right)\right)\]
\[{}=(-1)^{\frac{l+m}{2}}\sqrt{\frac{2l+1}{4\pi}}\frac{\sqrt{(l+m)! (l-m)!}}{2^{l}\left(\frac{l+m}{2}\right)!\left(\frac{l-m}{2}\right)!}\,\cos(m \varphi).\] (34)
The integral \(\mathcal{A}_{l,m}(r;\gamma)\) in Eq. (32) is given by
\[\mathcal{A}_{l,m}(r;\gamma)=\frac{4}{\sin\gamma}\int_{0}^{\frac{\pi}{2}}{\rm d }\varphi\,\cos(m\varphi)\mathcal{B}_{l}(r;\varphi,\gamma),\] (35)
where
\[\mathcal{B}_{l}(r;\varphi,\gamma)=k_{l}(\kappa r)\int_{0}^{\frac{L\sin\gamma}{ 2\sin\left(\varphi+\frac{\gamma}{2}\right)}}{\rm d}\rho\,\rho\,i_{l}(\kappa\rho)\] (36)
for \(r>\frac{L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\), and
\[\mathcal{B}_{l}(r;\varphi,\gamma)= k_{l}(\kappa r)\int_{0}^{r}{\rm d}\rho\,\rho\,i_{l}(\kappa\rho)\]
\[{}+i_{l}(\kappa r)\int_{r}^{\frac{L\sin\gamma}{2\sin\left(\varphi +\frac{\gamma}{2}\right)}}{\rm d}\rho\,\rho\,k_{l}(\kappa\rho)\] (37)
for \(r<\frac{L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\).
Let us have another look at Fig. 8. The dashed circle indicates the value for which the variables \(r\) and \(s\) in Eq. (25) switch (in this case \(s\) is replaced by \(\rho\)). Consider the first quadrant (i.e. the upper right-hand corner). Let us also assume \(\gamma<\pi/2\). In the end, we will calculate the effective excluded volume for \(0<\gamma<\pi\), but this expression is symmetric in \(\gamma\leftrightarrow\pi-\gamma\) (due to up-down symmetry) so we need only the first half of this interval. We describe the integral boundary for \(\rho\) as a function of \(\varphi\) just as we describe the boundary of the parallelogram by \(\rho\) as a function of \(\varphi\). However, the integrand in \(\mathcal{B}\) changes when the boundary of the parallelogram intersects with the circle of radius \(r\). Therefore—depending on the value of \(r\)—we have one to three domains for \(\mathcal{B}\) as a function of \(\varphi\)
\[\begin{array}[]{l}\varphi\in\left[0,\frac{\pi}{2}\right]\hfill\mbox{ for }r<\frac{L\sin\gamma}{2},\\ \\ \varphi\in\left[0,\alpha(r)\right],\left[\alpha(r),\beta(r)\right],\left[\beta (r),\frac{\pi}{2}\right]\hfill\\ \\ \mbox{ for }\frac{L\sin\gamma}{2}<r<L\sin\frac{\gamma}{2},\\ \\ \varphi\in\left[0,\alpha(r)\right],\left[\alpha(r),\frac{\pi}{2}\right]\qquad \mbox{ for }L\sin\frac{\gamma}{2}<r<L\cos\frac{\gamma}{2},\\ \\ \varphi\in\left[0,\frac{\pi}{2}\right]\hfill\mbox{ for }r>L\cos\frac{\gamma}{2},\end{array}\]
where
\[\alpha(r)= {\rm arcsin}\left(\frac{L\sin\gamma}{2r}\right)-\frac{\gamma}{2},\] (38)
\[\beta(r)= \pi-{\rm arcsin}\left(\frac{L\sin\gamma}{2r}\right)-\frac{\gamma} {2},\] (39)
are the angles for which the circle intersects the boundary of the parallelogram. In each domain, we calculate the integral \(\mathcal{A}_{l,m}\) using the corresponding expression for the integrand \(\mathcal{B}_{l}\): (37) if the circle segment lies in the interior of the parallelogram; (36) if it lies outside of the parallelogram.
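For bookkeeping purposes, this case distinction can be written down compactly; a sketch (our own naming, assuming \(0<\gamma<\pi/2\) as in the text):

```python
import numpy as np

def phi_domains(r, L, gamma):
    """phi-integration domains of Appendix B. Each entry is ((phi_min, phi_max), form),
    where the form indicates whether Eq. (36) (circle segment outside the
    parallelogram) or Eq. (37) (inside) is to be used for B_l on that sub-interval."""
    if r <= 0.5 * L * np.sin(gamma):
        return [((0.0, 0.5 * np.pi), "(37)")]
    if r >= L * np.cos(0.5 * gamma):
        return [((0.0, 0.5 * np.pi), "(36)")]
    alpha = np.arcsin(0.5 * L * np.sin(gamma) / r) - 0.5 * gamma          # Eq. (38)
    if r < L * np.sin(0.5 * gamma):
        beta = np.pi - np.arcsin(0.5 * L * np.sin(gamma) / r) - 0.5 * gamma  # Eq. (39)
        return [((0.0, alpha), "(37)"), ((alpha, beta), "(36)"),
                ((beta, 0.5 * np.pi), "(37)")]
    return [((0.0, alpha), "(37)"), ((alpha, 0.5 * np.pi), "(36)")]
```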
## Appendix C The limit for parallel rods
In principle, calculations of the effective excluded volume for parallel rods involve the limit \(\gamma\to 0\) of Eqs. (35)–(39). To obtain the correct result, one has to take care to perform the limit correctly in each expression, which is not straightforward. It is much easier to re-evaluate the expressions in this limit analytically, starting with Eqs. (28)–(30). We use the same reference frame, but a different substitution of variables
\[\hat{\omega}= (1,0,0),\] (40)
\[\hat{\omega}^{\prime}= (1,0,0),\] (41)
\[l\hat{\omega}-l^{\prime}\hat{\omega}^{\prime}= (\pm x,0,0),\] (42)
where \(x=|l-l^{\prime}|\). Now the integration is performed over relative positions of two points on a single line. Half of the combinations are positive (\(l>l^{\prime}\)), the other half are negative (\(l<l^{\prime}\)). The integration boundaries of either set are given by
\[0\leq x\leq L.\] (43)
The length over which each combination \(l,l^{\prime}\) is realized, for a certain value of \(x\), is given by \(L-x\). In accordance with the previous expressions, we define the integral \(\mathcal{A}\) for parallel rods as
\[\mathcal{A}_{l,m}(r;\gamma=0)=2k_{l}(\kappa r)\int_{0}^{L}{\rm d}x\,(L-x)i_{l} (\kappa x)\] (44)
for \(r>L\), and
\[\mathcal{A}_{l,m}(r;\gamma=0)= 2k_{l}(\kappa r)\int_{0}^{r}{\rm d}x\,(L-x)i_{l}(\kappa x)\]
\[{}+2i_{l}(\kappa r)\int_{r}^{L}{\rm d}x\,(L-x)k_{l}(\kappa x)\] (45)
for \(r<L\). Note that the expressions are independent of \(m\).
## Appendix D Notations, integrals and Taylor series expansions
In order to calculate the integral \(\mathcal{A}\), we first need to calculate the integral \(\mathcal{B}\) by performing the integration—over the radial coordinate \(\rho\)—in Eqs. (36) and (37). Introducing the notation
\[\mathcal{I}_{l}(z)= \int_{0}^{z}{\rm d}x\,x\,i_{l}(x),\] (46)
\[\mathcal{K}_{l}(z)= \int_{z}^{\infty}{\rm d}x\,x\,k_{l}(x),\] (47)
we can rewrite \(\mathcal{B}\) as
\[\kappa^{2}\mathcal{B}_{l}(r;\varphi,\gamma)\]
\[{}=\left\{\begin{array}[]{l}k_{l}(\kappa r)\mathcal{I}_{l}\left( \frac{\kappa L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\right) \hfill\mbox{for }r>\frac{L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2} \right)},\\ \\ \\ k_{l}(\kappa r)\mathcal{I}_{l}(\kappa r)+i_{l}(\kappa r)\mathcal{K}_{l}(\kappa r )\\ \\ {}-i_{l}(\kappa r)\mathcal{K}_{l}\left(\frac{\kappa L\sin\gamma}{2\sin\left( \varphi+\frac{\gamma}{2}\right)}\right)\quad\mbox{for }r<\frac{L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}.\end{array}\right.\] (48)
Unfortunately, there is no (easy) way to write the expressions in Eqs. (46) and (47) explicitly for arbitrary \(l\). However, one can give explicit expressions (necessary for our calculations) for \(l=0,2,4\). First, the Bessel functions
\[i_{0}(z)=\frac{\sinh(z)}{z},\] (49)
\[i_{2}(z)=\frac{(z^{2}+3)\sinh(z)-3z\cosh(z)}{z^{3}},\] (50)
\[i_{4}(z)=\frac{(z^{4}+45z^{2}+105)\sinh(z)-(10z^{3}+105z)\cosh(z)}{z^{5}},\] (51)
\[k_{0}(z)=\frac{\exp(-z)}{z},\] (52)
\[k_{2}(z)=\frac{(z^{2}+3z+3)\exp(-z)}{z^{3}},\] (53)
\[k_{4}(z)=\frac{(z^{4}+10z^{3}+45z^{2}+105z+105)\exp(-z)}{z^{5}}.\] (54)
Next, their integrals
\[\mathcal{I}_{0}(z)= \cosh(z)-1,\] (55)
\[\mathcal{I}_{2}(z)= \frac{z\cosh(z)-3\sinh(z)}{z}+2,\] (56)
\[\mathcal{I}_{4}(z)= \frac{(z^{3}+35z)\cosh(z)-(10z^{2}+35)\sinh(z)}{z^{3}}-\frac{8}{3},\] (57)
\[\mathcal{K}_{0}(z)= \exp(-z),\] (58)
\[\mathcal{K}_{2}(z)= \frac{(z+3)\exp(-z)}{z},\] (59)
\[\mathcal{K}_{4}(z)= \frac{(z^{3}+10z^{2}+35z+35)\exp(-z)}{z^{3}}.\] (60)
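These closed forms are easily checked against direct numerical quadrature of the definitions (46) and (47). A small sketch for \(l=2\), again accounting for the factor \(2/\pi\) between Eq. (27) and SciPy's `spherical_kn`:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_in, spherical_kn

# closed forms (56) and (59) versus quadrature of the definitions (46)-(47)
I2 = lambda z: (z * np.cosh(z) - 3.0 * np.sinh(z)) / z + 2.0          # Eq. (56)
K2 = lambda z: (z + 3.0) * np.exp(-z) / z                             # Eq. (59)

z = 1.7
I2_num, _ = quad(lambda x: x * spherical_in(2, x), 0.0, z)
K2_num, _ = quad(lambda x: x * (2.0 / np.pi) * spherical_kn(2, x), z, 60.0)
print(I2(z), I2_num)   # should agree
print(K2(z), K2_num)   # should agree
```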
Unfortunately, we cannot perform the subsequent integration—of the angular coordinate \(\varphi\)—in Eq. (35) analytically, when we try to calculate \(\mathcal{A}\). Therefore, we use the series expansions (for even \(l\))
\[\mathcal{I}_{2n}(z)= 2^{2n}\sum_{k=0}^{\infty}\frac{(2n+k)!z^{2n+2k+2}}{(2n+2k+2)(4n+ 2k+1)!k!},\] (61)
\[\mathcal{K}_{2n}(z)= (-1)^{n}\frac{(2^{n}n!)^{2}}{(2n)!}\]
\[{}-\frac{1}{2^{2n}}\sum_{k=1}^{2n}\frac{(-1)^{k}(2k)!z^{2n-2k+1}} {(2n-2k+1)(2n-k)!k!}\]
\[{}-\frac{1}{2^{2n}}\sum_{k=0}^{\infty}\frac{k!z^{2n+2k+1}}{(2n+2k +1)(2n+k)!(2k)!}\]
\[{}+2^{2n}\sum_{k=0}^{\infty}\frac{(2n+k)!z^{2n+2k+2}}{(2n+2k+2)(4 n+2k+1)!k!}.\] (62)
Finally, we define the specific combination
\[\mathcal{C}_{l}(\kappa r)=k_{l}(\kappa r)\mathcal{I}_{l}(\kappa r)+i_{l}( \kappa r)\mathcal{K}_{l}(\kappa r),\] (63)
which turns out to be given by a relatively simple expression (for even \(l\))
\[\mathcal{C}_{2n}(\kappa r)= \frac{(n!)^{2}}{(2n)!}\sum_{k=0}^{n}\frac{(-1)^{k}(2n+2k)!}{(n+k) !(n-k)!(\kappa r)^{2k+1}}\]
\[{}-(-1)^{n}\frac{(2^{n}n!)^{2}}{(2n)!}k_{2n}(\kappa r),\] (64)
such that
\[\mathcal{C}_{0}(z)= \frac{1}{z}-\frac{\exp(-z)}{z},\] (65)
\[\mathcal{C}_{2}(z)= \frac{z^{2}-6}{z^{3}}+2\frac{(z^{2}+3z+3)\exp(-z)}{z^{3}},\] (66)
\[\mathcal{C}_{4}(z)= \frac{z^{4}-20z^{2}+280}{z^{5}}\]
\[{}-\frac{8}{3}\frac{(z^{4}+10z^{3}+45z^{2}+105z+105)\exp(-z)}{z^{ 5}}.\] (67)
Note that in each expression the first term cancels the divergence of the second term in the limit where \(z\to 0\). Hence, this limit is given by
\[\mathcal{C}_{2n}(0)=\delta_{n,0}.\] (68)
This property is also reflected in the series expansion—useful for calculations for small \(\kappa r\)—given by
\[\mathcal{C}_{2n}(\kappa r)= (-1)^{n}\frac{(2^{n}n!)^{2}}{(2n)!}\]
\[{}\times\frac{\sqrt{\pi}}{2}\sum_{k=0}^{\infty}\frac{1}{\Gamma \left(\frac{k+2n+3}{2}\right)\Gamma\left(\frac{k-2n+2}{2}\right)}\left(\frac{- \kappa r}{2}\right)^{k}.\] (69)
Note that the terms for even \(k<2n\) have vanishing coefficients.
The limit of parallel rods has a different set of expressions. Therefore, we define an additional notation
\[\mathcal{J}_{l}(z)= \int_{0}^{z}{\rm d}x\,\frac{z-x}{z}\,i_{l}(x),\] (70)
\[\mathcal{L}_{l}(z)= \int_{z}^{\infty}{\rm d}x\,\frac{z-x}{z}\,k_{l}(x).\] (71)
In this way, we split each integral in Eq. (45) in two parts
\[\kappa^{2}\mathcal{A}_{l,m}(r;\gamma=0)\]
\[{}=\left\{\begin{array}[]{l}2\kappa L\,k_{l}(\kappa r)\mathcal{J} _{l}(\kappa L)\hfill\mbox{for }r>L,\\ \\ \\ 2\frac{L-r}{r}\,\mathcal{C}_{l}(\kappa r)+2\kappa L\,k_{l}(\kappa r)\mathcal{J }_{l}(\kappa r)\\ \\ {}+2\kappa L\,i_{l}(\kappa r)(\mathcal{L}_{l}(\kappa r)-\mathcal{L}_{l}(\kappa L ))\qquad\mbox{for }r<L.\end{array}\right.\] (72)
Evaluation of these integrals results in slightly more complicated expressions, when compared to the expressions for \(\mathcal{I}\) and \(\mathcal{K}\) in Eqs. (55)–(60)
\[\mathcal{J}_{0}(z)= {\rm shi}(z)+\frac{1}{z}-\frac{\cosh(z)}{z},\] (73)
\[\mathcal{J}_{2}(z)= -\frac{1}{2}{\rm shi}(z)-\frac{2}{z}+\frac{z\cosh(z)+3\sinh(z)}{2 z^{2}},\] (74)
\[\mathcal{J}_{4}(z)= \frac{3}{8}{\rm shi}(z)+\frac{8}{3z}\]
\[{}-\frac{(3z^{3}+70z)\cosh(z)-(5z^{2}+70)\sinh(z)}{8z^{4}},\] (75)
where
\[{\rm shi}(z)=\int_{0}^{z}{\rm d}x\,\frac{\sinh(x)}{x},\] (76)
is the hyperbolic sine integral. The integrals \(\mathcal{L}_{l}\) of Eq. (71) evaluate to
\[\mathcal{L}_{0}(z)= \Gamma(0,z)-\frac{\exp(-z)}{z},\] (77)
\[\mathcal{L}_{2}(z)= -\frac{1}{2}\Gamma(0,z)+\frac{(z-3)\exp(-z)}{2z^{2}},\] (78)
\[\mathcal{L}_{4}(z)= \frac{3}{8}\Gamma(0,z)-\frac{(3z^{3}+5z^{2}+70z+70)\exp(-z)}{8z^{4}}.\] (79)
In principle, one now has the exact solutions for \(\mathcal{A}\) up to \(l=4\). However, we need the expressions in Eqs. (73)–(79) to provide a well defined limit for the parallel rods, to use in combination with the expressions for arbitrary orientations (i.e. the series expansions in Eqs. (61), (62), and (64)). Therefore, it will be convenient to also have these expressions in the form of a series expansion
\[\mathcal{J}_{2n}(z)\]
\[{}=2^{2n}\sum_{k=0}^{\infty}\frac{(2n+k)!z^{2n+2k+1}}{(2n+2k+1)(2 n+2k+2)(4n+2k+1)!k!},\] (80)
\[\mathcal{L}_{2n}(z)=(-1)^{n}\frac{(2n)!}{(2^{n}n!)^{2}}\left(1+ \sum_{k=1}^{2n}\frac{1}{k}-\gamma_{\rm E}-\ln(z)\right)\]
\[{}-(-1)^{n}\frac{(2^{n}n!)^{2}}{(2n)!}\frac{1}{z}\]
\[{}-\frac{1}{2^{2n}}\sum_{k=0,k\neq n}^{2n}\frac{(-1)^{k}(2k)!z^{2 n-2k}}{(2n-2k)(2n-2k+1)(2n-k)!k!}\]
\[{}-\frac{1}{2^{2n}}\sum_{k=1}^{\infty}\frac{k!z^{2n+2k}}{(2n+2k)( 2n+2k+1)(2n+k)!(2k)!}\]
\[{}+2^{2n}\sum_{k=0}^{\infty}\frac{(2n+k)!z^{2n+2k+1}}{(2n+2k+1)(2 n+2k+2)(4n+2k+1)!k!}.\] (81)
## Appendix E Truncation and some examples of expressions
In principle, the calculation of each of the terms in Eq. (32) (i.e. each order of \(l\) and \(m\)) involves an infinite series expansion in \(\kappa L\). We will restrict our calculations to \(l=0,2\), and \(4\), and truncate each series expansion. Since the integration domain of \(\mathcal{A}\) is shaped like a parallelogram with sides of length \(L\), we divide out a factor \(L^{2}\) to make both \(\mathcal{A}\) and \(\mathcal{B}\) dimensionless (i.e. we calculate \(\kappa^{2}\mathcal{A}/(\kappa L)^{2}\) and \(\kappa^{2}\mathcal{B}/(\kappa L)^{2}\)). This factor \(L^{2}\) is combined with the prefactor \(\kappa l_{\rm B}\lambda^{2}\) in Eq. (32). From the definition of the charge parameter \(q\), we can write the result as an overall prefactor \(q\kappa^{2}L^{2}\). The truncated expansion is defined as the expansion up to fourth order in \(\kappa L\) of the expression where this prefactor is taken out. This means that we determine the series expansions of the expressions in Eqs. (48) and (72), after we divide by a factor \((\kappa L)^{2}\). We give some examples of the calculated expressions for \(l=0\) and \(m=0\). We make the distinction between four domains in \(r\). For \(r<\frac{L\sin\gamma}{2}\)
\[\kappa^{2}\mathcal{A}_{0,0}(r;\gamma)\]
\[{}=\frac{4}{\sin\gamma}\int_{0}^{\frac{\pi}{2}}{\rm d}\varphi \left(\mathcal{C}_{0}(\kappa r)-i_{0}(\kappa r)\mathcal{K}_{0}\left(\frac{ \kappa L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\right)\right)\]
\[{}\simeq\frac{2\pi}{\sin\gamma}\left(\frac{1}{\kappa r}-\frac{ \exp(-\kappa r)}{\kappa r}\right)-\frac{2\pi}{\sin\gamma}\frac{\sinh(\kappa r) }{\kappa r}\]
\[{}+\frac{\sinh(\kappa r)}{\kappa r}\,\kappa^{2}L^{2}\left[-\ln \left(\tan\frac{\gamma}{4}\tan\frac{\pi-\gamma}{4}\right)\right.\]
\[\qquad\qquad\qquad\qquad{}\times\left(\frac{2}{\kappa L}+\frac{ \kappa L\sin^{2}\gamma}{24}+\frac{\kappa^{3}L^{3}\sin^{4}\gamma}{2560}\right)\]
\[{}+\sqrt{1+\sin\gamma}(2-\sin\gamma)\frac{\kappa L}{12}\]
\[{}+\sqrt{1+\sin\gamma}\left(16-8\sin\gamma+2\sin^{2}\gamma-3\sin^ {3}\gamma\right)\frac{\kappa^{3}L^{3}}{3840}\]
\[\left.{}-1-\frac{\kappa^{2}L^{2}}{36}-\left(7+5\cos^{2}\gamma \right)\frac{\kappa^{4}L^{4}}{21600}\right].\] (82)
The next domain is \(\frac{L\sin\gamma}{2}<r<L\sin\frac{\gamma}{2}\), where the expression becomes considerably more involved
\[\kappa^{2}\mathcal{A}_{0,0}(r;\gamma)=\frac{4}{\sin\gamma}\]
\[{}\times\left[\int_{0}^{\alpha(\kappa r)}{\rm d}\varphi\left( \mathcal{C}_{0}(\kappa r)-i_{0}(\kappa r)\mathcal{K}_{0}\left(\frac{\kappa L \sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\right)\right)\right.\]
\[\quad{}+\int_{\alpha(\kappa r)}^{\beta(\kappa r)}{\rm d}\varphi\, k_{0}(\kappa r)\mathcal{I}_{0}\left(\frac{\kappa L\sin\gamma}{2\sin\left( \varphi+\frac{\gamma}{2}\right)}\right)\]
\[\quad\left.{}+\int_{\beta(\kappa r)}^{\frac{\pi}{2}}{\rm d} \varphi\left(\mathcal{C}_{0}(\kappa r)-i_{0}(\kappa r)\mathcal{K}_{0}\left( \frac{\kappa L\sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\right) \right)\right]\]
\[{}\simeq\frac{4}{\sin\gamma}\left(2\,{\rm arcsin}(\xi)-\frac{\pi} {2}\right)\left(\frac{1}{\kappa r}-\frac{\exp(-\kappa r)}{\kappa r}\right)\]
\[\qquad\qquad\qquad{}-\frac{4}{\sin\gamma}\left(2\,{\rm arcsin}( \xi)-\frac{\pi}{2}\right)\frac{\sinh(\kappa r)}{\kappa r}\]
\[{}+\frac{\sinh(\kappa r)}{\kappa r}\,\kappa^{2}L^{2}\left[-\ln \left(\tan\frac{\gamma}{4}\tan\frac{\pi-\gamma}{4}\right)\right.\]
\[\qquad\qquad\qquad\qquad{}\times\left(\frac{2}{\kappa L}+\frac{ \kappa L\sin^{2}\gamma}{24}+\frac{\kappa^{3}L^{3}\sin^{4}\gamma}{2560}\right)\]
\[{}+\sqrt{1+\sin\gamma}(2-\sin\gamma)\frac{\kappa L}{12}\]
\[{}+\sqrt{1+\sin\gamma}\left(16-8\sin\gamma+2\sin^{2}\gamma-3\sin^ {3}\gamma\right)\frac{\kappa^{3}L^{3}}{3840}\]
\[{}-1-\frac{\kappa^{2}L^{2}}{36}-\left(7+5\cos^{2}\gamma\right) \frac{\kappa^{4}L^{4}}{21600}\]
\[{}-\frac{2r}{L}\,{\rm arctanh}\left(\sqrt{1-\xi^{2}}\right)\left( \frac{2}{\kappa r}+\xi^{2}\frac{\kappa r}{6}+\xi^{4}\frac{\kappa^{3}r^{3}}{160 }\right)\]
\[{}-\frac{2r}{L}\sqrt{1-\xi^{2}}\left(\frac{\kappa r}{6}+(3\xi^{2} +2)\frac{\kappa^{3}r^{3}}{480}\right.\]
\[\qquad\left.\left.{}-1-(2\xi^{2}+1)\frac{\kappa^{2}r^{2}}{36}-(8 \xi^{4}+4\xi^{2}+3)\frac{\kappa^{4}r^{4}}{5400}\right)\right]\]
\[{}+\frac{\exp(-\kappa r)}{\kappa r}\kappa^{2}L^{2}\left[\frac{2r} {L}\sqrt{1-\xi^{2}}\right.\]
\[{}\left.\times\left(1+(2\xi^{2}+1)\frac{\kappa^{2}r^{2}}{36}+(8 \xi^{4}+4\xi^{2}+3)\frac{\kappa^{4}r^{4}}{5400}\right)\right].\] (83)
We have abbreviated
\[\xi=\frac{L\sin\gamma}{2r}.\] (84)
This domain corresponds to the case where the circle of radius \(r\) intersects the edge of the parallelogram twice in each quadrant. The following domain corresponds to the case where there is just one intersection per quadrant. Recall that we assume \(0<\gamma<\pi/2\), such that this domain is given by \(L\sin\frac{\gamma}{2}<r<L\cos\frac{\gamma}{2}\)
\[\kappa^{2}\mathcal{A}_{0,0}(r;\gamma)=\frac{4}{\sin\gamma}\]
\[{}\times\left[\int_{0}^{\alpha(\kappa r)}{\rm d}\varphi\left( \mathcal{C}_{0}(\kappa r)-i_{0}(\kappa r)\mathcal{K}_{0}\left(\frac{\kappa L \sin\gamma}{2\sin\left(\varphi+\frac{\gamma}{2}\right)}\right)\right)\right.\]
\[\qquad\left.{}+\int_{\alpha(\kappa r)}^{\frac{\pi}{2}}{\rm d} \varphi\,k_{0}(\kappa r)\mathcal{I}_{0}\left(\frac{\kappa L\sin\gamma}{2\sin \left(\varphi+\frac{\gamma}{2}\right)}\right)\right]\]
\[{}\simeq\frac{4}{\sin\gamma}\left({\rm arcsin}(\xi)-\frac{\gamma} {2}\right)\left(\frac{1}{\kappa r}-\frac{\exp(-\kappa r)}{\kappa r}-\frac{ \sinh(\kappa r)}{\kappa r}\right)\]
\[{}+\frac{\sinh(\kappa r)}{\kappa r}\,\kappa^{2}L^{2}\]
\[{}\times\left[-\ln\left(\tan\frac{\gamma}{4}\right)\left(\frac{2} {\kappa L}+\frac{\kappa L\sin^{2}\gamma}{24}+\frac{\kappa^{3}L^{3}\sin^{4} \gamma}{2560}\right)\right.\]
\[{}+\left(\frac{1+\cos\gamma}{2}\right)^{\frac{3}{2}}\frac{\kappa L }{6}+\left(\frac{1+\cos\gamma}{2}\right)^{\frac{5}{2}}(7-3\cos\gamma)\frac{ \kappa^{3}L^{3}}{960}\]
\[{}-\frac{1+\cos\gamma}{2}-\left(\frac{1+\cos\gamma}{2}\right)^{2} (2-\cos\gamma)\frac{\kappa^{2}L^{2}}{36}\]
\[{}-\left(\frac{1+\cos\gamma}{2}\right)^{3}\left(7-6\cos\gamma+2 \cos^{2}\gamma\right)\frac{\kappa^{4}L^{4}}{5400}\]
\[{}-\frac{r}{L}\,{\rm arctanh}\left(\sqrt{1-\xi^{2}}\right)\left( \frac{2}{\kappa r}+\xi^{2}\frac{\kappa r}{6}+\xi^{4}\frac{\kappa^{3}r^{3}}{160 }\right)\]
\[{}-\frac{r}{L}\sqrt{1-\xi^{2}}\left(\frac{\kappa r}{6}+(3\xi^{2}+ 2)\frac{\kappa^{3}r^{3}}{480}\right.\]
\[\qquad\left.\left.{}-1-(2\xi^{2}+1)\frac{\kappa^{2}r^{2}}{36}-(8 \xi^{4}+4\xi^{2}+3)\frac{\kappa^{4}r^{4}}{5400}\right)\right]\]
\[{}+\frac{\exp(-\kappa r)}{\kappa r}\,\kappa^{2}L^{2}\]
\[{}\times\left[\frac{1-\cos\gamma}{2}+\left(\frac{1-\cos\gamma}{2} \right)^{2}(2+\cos\gamma)\frac{\kappa^{2}L^{2}}{36}\right.\]
\[{}+\left(\frac{1-\cos\gamma}{2}\right)^{3}\left(7+6\cos\gamma+2 \cos^{2}\gamma\right)\frac{\kappa^{4}L^{4}}{5400}\]
\[{}+\frac{r}{L}\sqrt{1-\xi^{2}}\left(1+(2\xi^{2}+1)\frac{\kappa^{2 }r^{2}}{36}\right.\] (85)
\[\qquad\qquad\qquad\qquad\qquad\qquad\left.\left.{}+(8\xi^{4}+4\xi ^{2}+3)\frac{\kappa^{4}r^{4}}{5400}\right)\right].\]
Finally, the domain where \(r>L\cos\frac{\gamma}{2}\) yields a considerably simpler expression
\[\kappa^{2}\mathcal{A}_{0,0}(r;\gamma)\]
\[{}=\frac{4}{\sin\gamma}\int_{0}^{\frac{\pi}{2}}{\rm d}\varphi\,k_ {0}(\kappa r)\mathcal{I}_{0}\left(\frac{\kappa L\sin\gamma}{2\sin\left(\varphi +\frac{\gamma}{2}\right)}\right)\]
\[{}\simeq\frac{\exp(-\kappa r)}{\kappa r}\,\kappa^{2}L^{2}\left[1+ \frac{\kappa^{2}L^{2}}{36}+\left(7+5\cos^{2}\gamma\right)\frac{\kappa^{4}L^{4} }{21600}\right].\] (86)
In the case of parallel rods, we can apply the alternative series expansions, or take the limit \(\gamma\to 0\) of the last two expressions above. Both yield the following approximations, where for \(r<L\)
\[\kappa^{2}\mathcal{A}_{0,0}(r;\gamma=0)\]
\[{}=2\,\frac{L-r}{r}\left(\frac{1}{\kappa r}-\frac{\exp(-\kappa r) }{\kappa r}\right)\]
\[{}+2\kappa L\,\frac{\exp(-\kappa r)}{\kappa r}\left({\rm shi}( \kappa r)+\frac{1}{\kappa r}-\frac{\cosh(\kappa r)}{\kappa r}\right)\]
\[{}+2\kappa L\,\frac{\sinh(\kappa r)}{\kappa r}\left(\Gamma(0, \kappa r)-\frac{\exp(-\kappa r)}{\kappa r}\right.\]
\[\qquad\qquad\qquad\qquad\left.{}-\Gamma(0,\kappa L)+\frac{\exp(- \kappa L)}{\kappa L}\right)\]
\[{}\simeq 2\,\frac{L-r}{r}\left(\frac{1}{\kappa r}-\frac{\exp(- \kappa r)}{\kappa r}\right)\]
\[{}+2\kappa L\,\frac{\exp(-\kappa r)}{\kappa r}\frac{\kappa r}{2} \left(1+\frac{\kappa^{2}r^{2}}{36}+\frac{\kappa^{4}r^{4}}{1800}\right)\]
\[{}+2\kappa L\,\frac{\sinh(\kappa r)}{\kappa r}\left[\ln\left( \frac{L}{r}\right)-\frac{1}{\kappa L}\frac{L-r}{r}\right.\]
\[\qquad{}+\frac{\kappa r}{2}\left(1-\frac{\kappa r}{6}+\frac{ \kappa^{2}r^{2}}{36}-\frac{\kappa^{3}r^{3}}{240}+\frac{\kappa^{4}r^{4}}{1800}\right)\]
\[\qquad\left.{}-\frac{\kappa L}{2}\left(1-\frac{\kappa L}{6}+\frac {\kappa^{2}L^{2}}{36}-\frac{\kappa^{3}L^{3}}{240}+\frac{\kappa^{4}L^{4}}{1800} \right)\right].\] (87)
For \(r>L\) we obtain
\[\kappa^{2}\mathcal{A}_{0,0}(r;\gamma=0)\]
\[{}=2\kappa L\,\frac{\exp(-\kappa r)}{\kappa r}\left({\rm shi}( \kappa L)+\frac{1}{\kappa L}-\frac{\cosh(\kappa L)}{\kappa L}\right)\]
\[{}\simeq 2\kappa L\,\frac{\exp(-\kappa r)}{\kappa r}\frac{\kappa L }{2}\left(1+\frac{\kappa^{2}L^{2}}{36}+\frac{\kappa^{4}L^{4}}{1800}\right).\] (88)
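As a quick consistency check of the truncations entering Eqs. (87) and (88), note that both contain the combination \({\rm shi}(x)+1/x-\cosh(x)/x\) (with \(x=\kappa r\) or \(x=\kappa L\)), which is replaced by its fourth-order form \((x/2)(1+x^{2}/36+x^{4}/1800)\); the difference only starts at order \(x^{7}\). A minimal Python sketch using SciPy (the sample points are arbitrary) compares the two numerically:

```python
import numpy as np
from scipy.special import shichi

def exact_bracket(x):
    """shi(x) + 1/x - cosh(x)/x, as it appears in Eqs. (87) and (88)."""
    shi, _ = shichi(x)
    return shi + 1.0 / x - np.cosh(x) / x

def truncated_bracket(x):
    """Fourth-order truncation used in the text: (x/2)(1 + x^2/36 + x^4/1800)."""
    return 0.5 * x * (1.0 + x**2 / 36.0 + x**4 / 1800.0)

for x in (0.1, 0.5, 1.0, 2.0):
    e, t = exact_bracket(x), truncated_bracket(x)
    print(f"kappa*L = {x:4.1f}   exact = {e:.8f}   truncated = {t:.8f}   rel. err. = {abs(e - t)/e:.1e}")
```

The truncation is therefore accurate for \(\kappa L\lesssim 1\) and degrades only slowly beyond that.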
Likewise, there are expressions for \(l=2,4\). These are all used together to create an (approximate) expression for the pair interaction outside of the hard-core exclusion region. We use this pair interaction to numerically calculate the effective excluded volume, which is accomplished by a numerical integration scheme over all the different domains of \(r\), for given rod orientations. Our approach is fundamentally different from other theoretical work (Ramirez and Kjellander, 2006; Chapot et al., 2004), in the sense that we apply the interchange of the two positional vectors \(\mathbf{r}\) and \(l\hat{\omega}-l^{\prime}\hat{\omega}^{\prime}\). We have to do this in order to calculate the full integral over \(r\), in contrast to the studies in Refs. (Ramirez and Kjellander, 2006; Chapot et al., 2004), which only describe the pair interaction for rods at large distances. Conversely, if one considers non-spherical charge distributions on spherical particles, this switch is not needed when introducing rotational invariants.
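A schematic illustration of such a domain-wise integration is sketched below in Python. The pair potential used here is only a runnable placeholder (a simple screened form), and the Mayer-function-type weight is an illustrative choice; in the actual calculation the truncated expressions for \(\kappa^{2}\mathcal{A}\) and \(\kappa^{2}\mathcal{B}\) above would be assembled into the full pair interaction. The only point of the sketch is the splitting of the \(r\)-integration at the domain boundaries \(\frac{L\sin\gamma}{2}\), \(L\sin\frac{\gamma}{2}\) and \(L\cos\frac{\gamma}{2}\).

```python
import numpy as np
from scipy.integrate import quad

def pair_potential(r, gamma, kappa=2.0, u0=1.0):
    """Placeholder for the truncated pair interaction assembled from the expressions
    above; a simple screened form is used here only to make the sketch runnable."""
    return u0 * np.exp(-kappa * r) / (kappa * r)

def weight(r, gamma, beta_energy=1.0):
    """Illustrative Mayer-function-type integrand for an excluded-volume estimate."""
    return (1.0 - np.exp(-beta_energy * pair_potential(r, gamma))) * r**2

def integrate_over_domains(gamma, L=1.0, r_core=0.05, r_max=50.0):
    """Integrate the weight over r, splitting at the boundaries of the piecewise
    expansion: L*sin(gamma)/2, L*sin(gamma/2) and L*cos(gamma/2)."""
    breaks = sorted({L * np.sin(gamma) / 2, L * np.sin(gamma / 2), L * np.cos(gamma / 2)})
    edges = [r_core] + [b for b in breaks if r_core < b < r_max] + [r_max]
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        part, _ = quad(weight, a, b, args=(gamma,))
        total += part
    return total

for gamma in (np.pi / 6, np.pi / 3, 0.49 * np.pi):
    print(f"gamma = {gamma:.3f} rad   integral over all r-domains = {integrate_over_domains(gamma):.5f}")
```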
## References
* Zocher (1925) H. Zocher, Z. Anorg. Allg. Chem. **147**, 91 (1925).
* Bawden et al. (1936) F. C. Bawden, N. W. Pirie, J. D. Bernal, and I. Fankuchen, Nature **138**, 1051 (1936).
* Onsager (1949) L. Onsager, Ann. N.Y. Acad. Sci. **51**, 627 (1949).
* Fraden et al. (1989) S. Fraden, G. Maret, D. L. D. Caspar, and R. B. Meyer, Phys. Rev. Lett. **63**, 2068 (1989).
* Dogic and Fraden (2001) Z. Dogic and S. Fraden, Phil. Trans. R. Soc. Lond. A **359**, 997 (2001).
* van Bruggen et al. (1996) M. P. B. van Bruggen, F. M. van der Kooij, and H. N. W. Lekkerkerker, J. Phys.: Condens. Mat. **8**, 9451 (1996).
* Snoeks et al. (2000) E. Snoeks, A. van Blaaderen, T. van Dillen, C. M. van Kats, M. L. Brongersma, and A. Polman, Advanced Materials **12**, 1511 (2000).
* Johnson et al. (2005) P. M. Johnson, C. M. van Kats, and A. van Blaaderen, Langmuir **21**, 11510 (2005).
* Yin and Alivisatos (2005) Y. Yin and A. P. Alivisatos, Nature **437**, 664 (2005).
* Sheu et al. (1990) H. R. Sheu, M. S. El-Aasser, and J. W. Vanderhoff, J. Polym. Sci. A **28**, 629 (1990).
* Jun et al. (2005) Y.-W. Jun, J.-W. Seo, S. J. Oh, and J. Cheon, Coord. Chem. Rev. **249**, 1766-1775 (2005).
* Murphy et al. (2005) C. J. Murphy, T. K. Sau, A. M. Gole, C. J. Orendorff, J. Gao, L. Gou, S. E. Hunyadi, and T. Li, J. Phys. Chem. B **109**, 13857 (2005).
* Zoldesi and Imhof (2005) C. I. Zoldesi and A. Imhof, Advanced Materials **17**, 924 (2005).
* Cho et al. (2005) Y.-S. Cho, G.-R. Yi, J.-M. Lim, S.-H. Kim, V. N. Manoharan, D. J. Pine, and S.-M. Yang, J. Am. Chem. Soc. **127**, 15968 (2005).
* Kraft et al. (2008) D. J. Kraft, W. S. Vlug, C. M. van Kats, A. van Blaaderen, A. Imhof, and W. K. Kegel (2008), accepted for publication in the Journal of the American Chemical Society.
* Derjaguin (1940) B. Derjaguin, Trans. Faraday Soc. **35**, 203 (1940).
* Verwey and Overbeek (1948) E. J. W. Verwey and J. T. G. Overbeek, _Theory of the stability of lyophobic colloids_ (Elsevier, Amsterdam, 1948).
* Bolhuis and Frenkel (1997) P. Bolhuis and D. Frenkel, J. Chem. Phys. **106**, 666 (1997).
* Cuesta and Martínez-Ratón (1997) J. A. Cuesta and Y. Martínez-Ratón, J. Chem. Phys. **107**, 6379 (1997).
* Martínez-Ratón and Cuesta (1999) Y. Martínez-Ratón and J. A. Cuesta, J. Chem. Phys. **111**, 317 (1999).
* Eppenga and Frenkel (1984) R. Eppenga and D. Frenkel, Mol. Phys. **52**, 1303 (1984).
* Esztermann et al. (2006) A. Esztermann, H. Reich, and M. Schmidt, Phys. Rev. E **73**, 011409 (2006).
* Pfleiderer and Schilling (2007) P. Pfleiderer and T. Schilling, Phys. Rev. E **75**, 020402(R) (2007).
* Ramirez and Kjellander (2006) R. Ramirez and R. Kjellander, J. Chem. Phys. **125**, 144110 (2006).
* Chapot et al. (2004) D. Chapot, L. Bocquet, and E. Trizac, J. Chem. Phys. **120**, 3969 (2004).
* Poniewierski and Hołyst (1990) A. Poniewierski and R. Hołyst, Phys. Rev. A **41**, 6871 (1990).
* Somoza and Tarazona (1990) A. M. Somoza and P. Tarazona, Phys. Rev. A **41**, 965 (1990).
* Graf and Löwen (1997) H. Graf and H. Löwen, J. Phys.: Condens. Mat. **104**, 177 (1997).
* Veerman and Frenkel (1990) J. A. C. Veerman and D. Frenkel, Phys. Rev. A **41**, 3237 (1990).
* Vroege and Lekkerkerker (1992) G. J. Vroege and H. N. W. Lekkerkerker, Rep. Prog. Phys. **55**, 1241 (1992).
* Stroobants et al. (1986) A. Stroobants, H. N. W. Lekkerkerker, and T. Odijk, Macromolecules **19**, 2232 (1986).
* Vega and Monson (1997) C. Vega and P. A. Monson, J. Chem. Phys. **107**, 2696 (1997).
* Marechal and Dijkstra (2008) M. Marechal and M. Dijkstra, Phys. Rev. E **77**, 061405 (2008).
* Sato and Teramoto (1991) T. Sato and A. Teramoto, Physica A **176**, 72 (1991).
* Nyrkova and Khokhlov (1986) I. A. Nyrkova and A. R. Khokhlov, Biophysics **31**, 839 (1986).
* Nyrkova et al. (1997) I. A. Nyrkova, N. P. Shusharina, and A. R. Khokhlov, Macromol. Theory Simul. **6**, 965 (1997).
* Chen and Koch (1996) S.-B. Chen and D. L. Koch, J. Chem. Phys. **104**, 359 (1996).
* Potemkin et al. (2002) I. I. Potemkin, R. E. Limberger, A. N. Kudlay, and A. R. Khokhlov, Phys. Rev. E **66**, 011802 (2002).
* Weyerich et al. (1990) B. Weyerich, B. D’Aguanno, E. Canessa, and R. Klein, Faraday Discuss. Chem. Soc. **90**, 245 (1990).
* Graf and Löwen (1999) H. Graf and H. Löwen, Phys. Rev. E **59**, 1932 (1999).
* (41) A. F. Demirors, A. Imhof, and A. van Blaaderen, private communication.
|
1411.1888 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 33106,
"num_imgs": 6,
"llama3_tokens_count": 9831
} | [
"content_image/1411.1888/x1.png",
"content_image/1411.1888/x2.png",
"content_image/1411.1888/x3.png",
"content_image/1411.1888/x4.png",
"content_image/1411.1888/x5.png",
"content_image/1411.1888/x6.png"
] | # Magnetic exchange in \(\alpha\)–iron from the ab initio calculations in the paramagnetic phase
P. A. Igoshev
Institute of Metal Physics, Russian Academy of Sciences, 620990 Ekaterinburg, Russia
Ural Federal University, 620002 Ekaterinburg, Russia
A. V. Efremov
Institute of Metal Physics, Russian Academy of Sciences, 620990 Ekaterinburg, Russia
A. A. Katanin
Institute of Metal Physics, Russian Academy of Sciences, 620990 Ekaterinburg, Russia
Ural Federal University, 620002 Ekaterinburg, Russia
February 18, 2024
###### Abstract
Applying the local density approximation (LDA) and dynamical mean field theory (DMFT) to paramagnetic \(\alpha\)–iron, we revisit a problem of theoretical description of its magnetic properties. The analysis of local magnetic susceptibility shows that at sufficiently low temperatures \(T<1500\) K, both, \(e_{g}\) and \(t_{2g}\) states equally contribute to the formation of the effective magnetic moment with spin \(S=1\). The self-energy of \(t_{2g}\) states shows sizable deviations from Fermi-liquid form, which accompanies earlier found non-quasiparticle form of \(e_{g}\) states. By considering the non-uniform magnetic susceptibility we find that the non-quasiparticle form of \(e_{g}\) states is crucial for obtaining ferromagnetic instability in \(\alpha\)-iron. The main contribution to the exchange interaction, renormalized by the effects of electron interaction, comes from the hybridization between \(t_{2g}\) and \(e_{g}\) states. We furthermore suggest the effective spin-fermion model for \(\alpha\)-iron, which allows us to estimate the exchange interaction from paramagnetic phase, which is in agreement with previous calculations in the ordered state within the LDA approaches.
iron, magnetic susceptibility, LDA, LDA+DMFT

Elemental iron in its low-temperature body-centered cubic (bcc) phase, which is stable below approximately 1200 K, provides a unique example of an itinerant magnetic \(d\)-electron system in which the formation of well-defined local magnetic moments can be expected. Indeed, the Rhodes-Wohlfarth ratio \(p_{\mathrm{C}}/p_{\mathrm{S}}\) for this substance is very close to one, which is a characteristic feature of systems containing (almost) localized \(d\)–electrons (\(p_{\mathrm{C}}\) corresponds to the magnetic moment extracted from the Curie–Weiss law for the magnetic susceptibility in the paramagnetic phase, \(\chi=(g\mu_{\mathrm{B}})^{2}p_{\mathrm{C}}(p_{\mathrm{C}}+1)/T\), \(p_{\mathrm{S}}\) is the saturation moment, \(g\) is the Landé factor, and \(T\) denotes temperature). At the same time, the moment \(p_{\mathrm{C}}=1.1\) has a small fractional part, which is natural for an itinerant material.
This poses natural questions: which electrons mainly contribute to the local-moment spin degrees of freedom of \(\alpha\)-iron? What is the appropriate physical model that describes the spin degrees of freedom of this substance? Attempting to answer the former question, Goodenough suggested [1] that the \(e_{g}\) electrons are localized, while the \(t_{2g}\) electrons are itinerant. This suggestion was later refined in Ref. [2], which pointed to the possibility that only some fraction of the \(e_{g}\) electrons, contributing to the formation of the peak of the density of states near the Fermi level (named by the authors a giant van Hove singularity), is localized. (The intimate relation between peaks of the density of states and electron localization was also pointed out earlier in Ref. [3].) On the contrary, it was also claimed that \(95\%\) of the electrons are localized in iron [4]. On the model side, the thermodynamic properties of \(\alpha\)-iron were described within an effective spin \(S=1\) Heisenberg model [5], assuming therefore that the main part of the magnetic moment is localized, in agreement with the above-mentioned Rhodes-Wohlfarth argument. The use of the effective Heisenberg model was justified by an ab–initio analysis of spin spiral energies, which yields reasonable values of the exchange integrals [6].
These considerations, however, did not take into account the strong electronic correlations in \(\alpha\)–iron, whose important role was first emphasized in Ref. [7]. Previous calculations [8; 10] within the local density approximation (LDA), combined with the dynamical mean-field theory (DMFT), revealed the presence of non-quasiparticle states formed by \(e_{g}\) electrons, which were considered as the main source of local moment formation in iron, while the \(t_{2g}\) states were assumed to be itinerant [8]. At the same time, the magnetic properties of the same \(t_{2g}\) states also show some features of local-moment behavior. In particular, the temperature dependence of the inverse local spin susceptibility, which was calculated previously [8] only at \(T>1000\) K because of the limitations of the Hirsch-Fye method, is approximately linear, including the contribution of the \(t_{2g}\) states; the real part of the \(t_{2g}\) contribution to the dynamic local magnetic susceptibility has a peak at low frequencies, reflecting a possibility of partial local moment formation by the \(t_{2g}\) states.
Studying this possibility requires an investigation of the electronic and magnetic properties at low temperatures, since the energy scale for partially formed local \(t_{2g}\) moments can be smaller than for the \(e_{g}\) states. Although the real substance orders ferromagnetically at low temperatures, in the present paper (as in Ref. [8]) we analyze the local properties of iron in the paramagnetic phase to reveal the mechanism of local moment formation. Furthermore, we study the non-local magnetic susceptibility in the low-temperature range \(T>250\)\(\mathrm{K}\), which allows us to analyze the mechanism of magnetic exchange. To this end we use state-of-the-art dynamical mean–field theory (DMFT) calculations with a continuous-time quantum Monte Carlo (CT–QMC) solver [9], combined with the ab-initio local density approximation (LDA). From our low-temperature analysis we argue that the \(t_{2g}\) electrons contribute to the effective local magnetic moment almost equally with the \(e_{g}\) electrons, and play a crucial role in the mechanism of magnetic exchange in iron. In particular, the most important contribution to the exchange integrals comes from the hybridization of \(t_{2g}\) and \(e_{g}\) states, which yields a _nearest-neighbour_ magnetic exchange interaction that agrees well with the experimental data.
We perform the ab initio band–structure calculations in the LDA within the tight–binding linear muffin–tin orbital atomic-spheres approximation framework; the von Barth-Hedin local exchange-correlation potential [11] was used. The primitive reciprocal translation vectors were discretized into 12 points along each direction, which leads to 72 **k**–points in the irreducible part of the Brillouin zone. For the DMFT (CT-QMC) calculations, we use a Hamiltonian of Hubbard type with the kinetic term, containing all \(s\)–\(p\)–\(d\) states, extracted from the LDA solution, and the interaction part with density-density contributions for the \(d\) electrons only. The Coulomb interaction parameter \(U=2.3\) eV and the Hund's parameter \(I=0.9\) eV used in our work are the same as in earlier LDA+DMFT calculations [7; 8; 13]. To treat the problem of local moment formation we consider the paramagnetic phase, which is enforced by assuming a spin-independent density of states, local self-energy and bath Green function. For the purpose of extracting the corresponding exchange parameters, we take in the LDA part the physical value of the lattice parameter, \(a=2.8664\) Å, corresponding to the ferromagnetic state at room temperature.
We consider first the results for the orbital–resolved, temperature–dependent local static spin susceptibility \(\chi_{\text{loc,}mn}=4\mu_{B}^{2}\int\nolimits_{0}^{\beta}\langle s_{i,m}^{z}( \tau)s_{i,n}^{z}(0)\rangle\,d\tau\), where \(s_{i,m}^{z}\) is the \(z\)–projection of the spin of the \(d\)–electrons belonging to the orbitals \(m=t_{2g},e_{g}\) at a given lattice site \(i\), see Fig. 1 (for completeness, we also show the total susceptibility \(\chi_{\text{loc}}=\sum\nolimits_{mn}\chi_{\text{loc,}mn}\), which also includes the off-diagonal \(t_{2g}\)-\(e_{g}\) contribution). The temperature dependence of the static inverse local susceptibility is linear (as was also observed in previous studies [7; 8; 10; 13]); however, when resolved into orbital contributions (see Fig. 1), it reveals the very different nature of the \(e_{g}\) and \(t_{2g}\) moments. The inverse \(e_{g}\) orbital contribution behaves approximately linearly with \(T\) in a broad temperature range [8; 10]. At the same time, analyzing the low-temperature behaviour, we find that \(\chi_{\text{loc,}t_{2g}\text{-}t_{2g}}^{-1}\) demonstrates a crossover at \(T^{\ast}\sim 1500\) K between two linear dependences, with the low–temperature part having a higher slope (i.e., a smaller effective moment). Note that this feature was not obtained in the previous study [8], which considered only the temperature range \(T>1000\) K. The scale \(T^{\ast}\) corresponds to the crossover to non-Fermi-liquid behavior of the \(t_{2g}\) states, see below.
<figure><img src="content_image/1411.1888/x1.png"><figcaption>Figure 1: (Color online) Temperature dependence of inverse local magneticsusceptibility, and the corresponding eg and t2g orbital contributions. Dashedlines show linear behavior in different temperature intervals.</figcaption></figure>
<figure><img src="content_image/1411.1888/x2.png"><figcaption>Figure 2: (Color online) The temperature dependence of the effective magneticmoment and instantaneous average ⟨(sz)2⟩ and μ2eff in α–iron, extracted fromthe temperature dependence of local susceptibility, together with thecontribution of the eg and t2g orbitals</figcaption></figure>
To get further insight into the local magnetic properties of \(\alpha\)-iron, we consider the temperature dependence of the effective magnetic moment \(\mu_{m,\mathrm{eff}}^{2}=3/(d\chi_{\text{loc,}mm}^{-1}/dT)\) and the instantaneous average \(\langle(s_{i,m}^{z})^{2}\rangle\), corresponding to the different orbital states, see Fig. 2. We find that for the \(e_{g}\) electrons both moments saturate at temperatures \(T<1500\) K and remain approximately constant down to sufficiently low temperatures. Comparing the value of the square of the moment \(\mu_{e_{g},\mathrm{eff}}^{2}/(3\mu_{B}^{2})=1.2\), extracted from the Curie-Weiss law for the local susceptibility, and the instantaneous average \(4\langle(s_{i,e_{g}}^{z})^{2}\rangle=1.8\) with the corresponding filling \(n_{e_{g}}\simeq 2.6\), we find that the major part of the \(e_{g}\) electrons determines the instantaneous average \(\langle(s_{i,e_{g}}^{z})^{2}\rangle\), and at least half of them contribute to sufficiently long-lived (on the scale of \(1/T\)) local moments. At the same time, for the \(t_{2g}\) electronic states the above-mentioned crossover between the high-temperature value \(\mu_{t_{2g},\mathrm{eff}}^{2}/(3\mu_{B}^{2})\approx 1.95\) and the low-temperature value \(\mu_{t_{2g},\mathrm{eff}}^{2}/(3\mu_{B}^{2})\simeq 0.82\) is present, which, compared to \(n_{t_{2g}}\simeq 4.4\), shows that at least \(20\%\) of the \(t_{2g}\) electrons participate in the effective local moment formation at low temperatures. Yet, the corresponding low-temperature effective moments \(\mu_{e_{g},\mathrm{eff}}^{2}\) and \(\mu_{t_{2g},\mathrm{eff}}^{2}\) are comparable (each of them is approximately \(3\mu_{B}^{2}\), corresponding to the effective spin \(s\simeq 1/2\)), showing the important role of the \(t_{2g}\) electrons in the formation of the total spin \(S=1\) state.
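As a simple illustration of how the effective moment is extracted from data such as those of Fig. 1, the slope of a linear fit to \(\chi_{\text{loc}}^{-1}(T)\) directly gives \(\mu_{\mathrm{eff}}^{2}=3/(d\chi_{\text{loc}}^{-1}/dT)\). The sketch below uses synthetic Curie-Weiss data as a stand-in for the DMFT output; the numerical values of \(\mu_{\mathrm{eff}}^{2}\) and \(\theta\) are illustrative, not those of the paper.

```python
import numpy as np

# Synthetic stand-in for the DMFT output chi_loc^{-1}(T): a Curie-Weiss law
# chi_loc^{-1} = 3 (T + theta) / mu_eff^2, with chi in mu_B^2/eV and T in eV.
mu_eff2_true = 3.0        # assumed effective moment squared, in mu_B^2 (illustrative)
theta_true = 0.02         # assumed Weiss temperature, in eV (illustrative)
T = np.linspace(0.025, 0.13, 10)                        # roughly 290 K ... 1500 K
chi_inv = 3.0 * (T + theta_true) / mu_eff2_true
chi_inv += np.random.default_rng(0).normal(0.0, 1e-3, T.size)   # mimic QMC noise

# Linear fit chi^{-1}(T) = slope*T + intercept, then mu_eff^2 = 3/slope
slope, intercept = np.polyfit(T, chi_inv, 1)
print(f"mu_eff^2 = {3.0/slope:.2f} mu_B^2   (input {mu_eff2_true})")
print(f"theta    = {intercept/slope*1.1605e4:.0f} K   (input {theta_true*1.1605e4:.0f} K)")
```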
<figure><img src="content_image/1411.1888/x3.png"><figcaption>Figure 3: (Color online) Frequency dependence of ImΣt2g(iν). Black solidlines illustrate the results of calculations at temperatures (from top tobottom) T=1/5,1/10,1/20,1/30,1/40 eV. The green dot–dashed line present fitsto the the Fermi-liquid dependence in the range νn<1 eV, while blue dashedlines present fits to the non-Fermi-liquid dependence (see text). Dots denoteMatsubara frequencies νn=πT(2n+1).</figcaption></figure>
Although the self-energy calculations [8; 10] yield quasiparticle-like form of \(t_{2g}\) electron self-energy, the low-frequency and low-temperature dependence of self-energy shows pronounced deviations from the Fermi-liquid behavior, see Fig. 3. To analyse the frequency dependence of the self-energy on imaginary frequency axis, we fit the obtained results by the Fermi-liquid dependence \(-\mathrm{Im}\Sigma(i\nu)=\Gamma(T)+[Z^{-1}(T)-1]\nu+\sigma(T)\nu^{2},\) where \(\Gamma(T)\) is the damping of electrons at the Fermi level, \(Z(T)\) is the temperature-dependent quasiparticle residue. Alternatively, we consider the fit \(-\mathrm{Im}\Sigma(i\nu)=\Gamma_{1}(T)+\beta_{1}(T)\nu^{\alpha}+\sigma_{1}(T) \nu^{2}\) with some exponent \(\alpha<1\). The latter dependence corresponds to the non-Fermi-liquid behavior of \(t_{2g}\) electrons. The obtained results are presented in the Table.
Table: Parameters of the Fermi-liquid fit (\(\Gamma\), \(Z^{-1}-1\), \(\sigma\)) and of the power-law fit (\(\Gamma_{1}\), \(\beta_{1}\), \(\sigma_{1}\), \(\alpha\)) to \(-\mathrm{Im}\Sigma_{t_{2g}}(\mathrm{i}\nu)\) at different inverse temperatures \(\beta=1/T\) (in eV\({}^{-1}\)).

| \(\beta=1/T\) | \(\Gamma\) | \(Z^{-1}-1\) | \(\sigma\) | \(\Gamma_{1}\) | \(\beta_{1}\) | \(\sigma_{1}\) | \(\alpha\) |
|---|---|---|---|---|---|---|---|
| 20 | 0.20 | 0.22 | -0.09 | 0.17 | 0.18 | -0.006 | 0.51 |
| 30 | 0.18 | 0.29 | -0.19 | 0.15 | 0.19 | -0.005 | 0.48 |
| 40 | 0.17 | 0.37 | -0.32 | 0.13 | 0.20 | -0.005 | 0.44 |
The linear-quadratic fits are applicable only at \(\nu<1\) eV; at sufficiently small \(\nu\) they also do not fit the obtained results well. We find that the spectral weight \(Z(T)\) decreases markedly with decreasing temperature, and the coefficient \(\Gamma(T)\) obviously does not obey the Fermi-liquid dependence \(\Gamma(T)\propto T^{2}\). These observations show that sizable deviations from the Fermi-liquid picture can be expected.
The power-law fits yield much better agreement in a broad range of frequencies \(\nu<5\) eV, while at the same time describing the low-frequency behavior correctly. The coefficients \(\beta_{1}\), \(\sigma_{1}\) of these fits show a very weak temperature dependence (the contribution \(\sigma_{1}\) is almost negligible), while the damping \(\Gamma_{1}(T)\) and the exponent \(\alpha\) slightly decrease with temperature, being related by \(\Gamma_{1}(T)\sim T^{\alpha}\). These observations imply that the \(t_{2g}\) electronic subsystem is better described by non-Fermi-liquid behavior at low temperatures, which reflects its participation in the formation of local moments in \(\alpha\)-iron. Remarkably, consideration of the three-band model in Ref. [12] showed a similar dependence of the self-energy, \(\Sigma\sim\nu^{1/2}\), due to the Hund exchange interaction, which allows one to place the \(t_{2g}\) subsystem in iron close to the "spin freezing" transition, in the terminology of Ref. [12].
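In practice, the two fitting forms can be compared with a least-squares fit on the Matsubara grid \(\nu_{n}=\pi T(2n+1)\). The sketch below generates synthetic \(-\mathrm{Im}\Sigma(\mathrm{i}\nu)\) data from the power-law form, using the \(\beta=40\) parameters of the Table purely for illustration, and fits both expressions to it:

```python
import numpy as np
from scipy.optimize import curve_fit

T = 1.0 / 40.0                                     # temperature in eV (beta = 40 eV^-1)
nu = np.pi * T * (2 * np.arange(32) + 1)           # positive Matsubara frequencies, up to ~5 eV

def fl(nu, g, dzinv, s):
    """Fermi-liquid form: Gamma + (1/Z - 1)*nu + sigma*nu^2."""
    return g + dzinv * nu + s * nu**2

def nfl(nu, g1, b1, s1, alpha):
    """Power-law (non-Fermi-liquid) form: Gamma1 + beta1*nu^alpha + sigma1*nu^2."""
    return g1 + b1 * nu**alpha + s1 * nu**2

# Synthetic stand-in for -Im Sigma_{t2g}(i nu), generated from the power-law form
# with the beta = 40 parameters of the Table (illustrative only).
data = nfl(nu, 0.13, 0.20, -0.005, 0.44)

mask = nu < 1.0                                    # Fermi-liquid fit restricted to nu < 1 eV
p_fl, _ = curve_fit(fl, nu[mask], data[mask])
p_nfl, _ = curve_fit(nfl, nu, data, p0=[0.1, 0.2, 0.0, 0.5])

print("FL  fit (Gamma, 1/Z-1, sigma):        ", np.round(p_fl, 3))
print("NFL fit (Gamma1, beta1, sigma1, alpha):", np.round(p_nfl, 3))
```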
To get further insight into the formation of effective local moments and extract corresponding exchange integrals, we calculate the momentum dependence of particle-hole bubble \(\chi_{\mathbf{q}}^{\text{{0}}\mathrm{,}mn}=-(2\mu_{B}^{2}/\beta)\sum_{l,{ \mathbf{k,}}\widetilde{m}\in m,\widetilde{n}\in n}\mathcal{G}_{\mathbf{k}, \widetilde{m}\widetilde{n}}(\mathrm{i}\nu_{l})\mathcal{G}_{\mathbf{k}+\mathbf{ q},\widetilde{n}\widetilde{m}}(\mathrm{i}\nu_{l})\), which is obtained using paramagnetic LDA and LDA+DMFT electronic spectrum [\(\mathcal{G}_{\mathbf{k},\widetilde{m}\widetilde{n}}(\mathrm{i}\nu_{l})\) is the corresponding electronic Green function for the transition from the orbital state \(\widetilde{m}\) to \(\widetilde{n}\), \(\nu_{l}\) is a fermionic Matsubara frequency; for more details on the calculation procedure see Ref. [13]].
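For orientation, a stripped-down version of such a bubble calculation is sketched below for a single tight-binding band; the multi-orbital structure, the LDA Hamiltonian and the DMFT self-energy of the actual calculation are not included, and the hopping, grid size and Matsubara cutoff are arbitrary illustrative choices.

```python
import numpy as np

# Single-band illustration of the particle-hole bubble
#   chi0(q) = -(2 mu_B^2 / beta) * sum_{k, l} G(k, i nu_l) G(k+q, i nu_l)
# on a simple-cubic tight-binding band, standing in for the multi-orbital
# LDA(+DMFT) Green functions of the text.
beta = 40.0          # inverse temperature, eV^-1
t_hop = 0.25         # hopping, eV
mu = 0.0             # chemical potential (half filling)
Nk = 16              # k-points per direction
Nw = 256             # Matsubara cutoff (the plain frequency sum converges only ~ 1/nu_max)

k = 2 * np.pi * np.arange(Nk) / Nk
kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
eps_k = -2 * t_hop * (np.cos(kx) + np.cos(ky) + np.cos(kz)) - mu
nu = np.pi * (2 * np.arange(-Nw, Nw) + 1) / beta   # fermionic Matsubara frequencies

def chi0(q):
    eps_kq = -2 * t_hop * (np.cos(kx + q[0]) + np.cos(ky + q[1]) + np.cos(kz + q[2])) - mu
    acc = 0.0
    for w in nu:
        G = 1.0 / (1j * w - eps_k)
        Gq = 1.0 / (1j * w - eps_kq)
        acc += np.sum(G * Gq).real
    return -2.0 * acc / (beta * Nk**3)             # per site, in units of mu_B^2/eV

for label, q in (("Gamma point", (0.0, 0.0, 0.0)), ("zone corner", (np.pi, np.pi, np.pi))):
    print(f"chi0 at {label}: {chi0(q):.4f} mu_B^2/eV")
```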
<figure><img src="content_image/1411.1888/x4.png"><figcaption>Figure 4: (Color online) Orbital-resolved momentum dependence of χ0,mnq atT=290 K calculated in high symmetry directions of the Brillouin zone. Thecontributions χ0,dq, χ0,t2g−t2gq, χ0,eg−egq , and the hybridization partχ0,eg−t2gq are shown by black, red, green and blue lines, respectively. Solid(dashed) lines correspond to LDA+DMFT (LDA) results. The LDA+DMFT estimate forJ(1)q(μB/I)2 is shown by magenta short–dashed line.</figcaption></figure>
The results for the LDA and LDA+DMFT approaches at \(T=290\) K are presented in Fig. 4 (we find that the LDA+DMFT results are almost temperature-independent at low \(T\)). For the bubble calculated using the purely LDA spectrum (i.e. with the assumption that all electrons are itinerant), the maximum of \(\chi_{\mathbf{q}}^{\mathrm{0}}\) is located at the point \(\mathbf{q}=\mathbf{q}_{\mathrm{P}}=(\pi,\pi,\pi)/a\), while the ferromagnetic instability in \(\alpha\)-iron requires a maximum of \(\chi_{\mathbf{q}}^{\mathrm{0}}\) at \(\mathbf{q}=0\) and low \(T\), if one neglects the non-local vertex corrections. One can observe that the main contribution to this “incorrect” behavior of the bubble originates from the \(e_{g}\) electron part, \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}-e_{g}}\). Both the \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}-e_{g}}\) and \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}-t_{2g}}\) contributions are, however, strongly influenced by the local self–energy corrections to the Green’s function in the DMFT approach, which physically account for the partial localization of the \(d\)-electrons. These corrections mainly change \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}\text{-}e_{g}}\) and shift the maximum of \(\chi_{\mathbf{q}}^{\mathrm{0}}\) to the \(\Gamma\) point (\(\mathbf{q}=0\)). Note that within LDA+DMFT, the intra–orbital contributions \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}-e_{g}}\) and \(\chi_{\mathbf{q}}^{\mathrm{0},t_{2g}-t_{2g}}\) are only weakly momentum-dependent; they also behave similarly, varying in “counter–phase”. According to the general ideas of spin-fluctuation theory [14], this weak momentum dependence can be ascribed to the formation of the effective moments from the \(e_{g}\) and \(t_{2g}\) states. In agreement with the above-discussed considerations, the \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}\text{-}e_{g}}\) contribution has an even weaker dispersion than the \(\chi_{\mathbf{q}}^{\mathrm{0},t_{2g}\text{-}t_{2g}}\) part. At the same time, the strongly dispersive \(\chi_{\mathbf{q}}^{\mathrm{0},e_{g}-t_{2g}}\) contribution, which is assumed to correspond to the (remaining) itinerant degrees of freedom, provides the maximum of the resulting \(\chi_{\mathbf{q}}^{\mathrm{0}}\) at \(\mathbf{q}=0\) and appears to be the main source of the stability of the ferromagnetic ordering in iron within the LDA+DMFT approximation.
The obtained results do not change qualitatively for other choices of the Hubbard interaction parameters (as we have verified for \(U=4.0\) and \(I=1.0\) eV); see the Supplementary Material [15].
To see the quantitative implications of the described physical picture, we consider the effective spin-fermion model
\[\mathcal{S} =\frac{1}{2}\sum_{i,\omega_{n}}\chi_{S}^{-1}(\mathbf{q},\mathrm{i }\omega_{n})\mathbf{S}_{i}(\mathrm{i}\omega_{n})\mathbf{S}_{j}(-\mathrm{i} \omega_{n})e^{i\mathbf{q(R}_{i}-\mathbf{R}_{j}\mathbf{)}}\] (1)
\[+\sum_{\nu_{n}\sigma ll^{\prime}}c_{l\sigma}^{{\dagger}}(\mathrm{ i}\nu_{n})\left[\mathrm{i}\nu_{n}\delta_{ll^{\prime}}+H_{ll^{\prime}}+\Sigma_{ ll^{\prime}}(\mathrm{i}\nu_{n})\right]c_{l^{\prime}\sigma}(\mathrm{i}\nu_{n})\]
(\(\omega_{n}\) is a bosonic Matsubara frequency, \(l,l^{\prime}\) combines site and orbital indices), describing interaction of itinerant electrons with (almost) _local_ spin fluctuations (in contrast to critical spin fluctuation in cuprates [16]), see also Ref. [17]. We assume here that the Coulomb and Hund’s interaction acting within \(e_{g}\) and \(t_{2g}\) orbitals results in a formation of some common local moment (field \(\mathbf{S}\)), while the remaining itinerant degrees of freedom are described by the field \(\mathbf{s}_{i}=\mathbf{s}_{i}^{e_{g}}+\mathbf{s}_{i}^{t_{2g}},\) formed from the Grassmann variables \(c_{l^{\prime}\sigma};\)\(H_{ll^{\prime}}\) and \(\Sigma_{ll^{\prime}}\) are the Hamiltonian and local self-energy corrections to the LDA spectrum (the latter is assumed to be local and therefore diagonal with respect to orbital indices). The interaction between the two subsystems (localized and itinerant), which are formed from the \(d\)–electronic states, is driven by Hund’s constant coupling \(I\).
Considering the renormalization of the propagator \(\chi_{S}\) by the corresponding boson self–energy corrections, we obtain for the non-uniform susceptibility (see Supplementary Material [15])
\[\chi^{-1}(\mathbf{q},\mathrm{i}\omega_{n})=\chi_{\mathrm{loc}}^{-1}(\mathrm{i} \omega_{n})-J_{\mathbf{q}}/(4\mu_{B}^{2}),\] (2)
where \(\chi_{\mathrm{loc}}(\mathrm{i}\omega_{n})\) is the local spin susceptibility and \(J_{\mathbf{q}}\) is the exchange interaction, which fulfills \(\sum\nolimits_{\mathbf{q}}J_{\mathbf{q}}=0\) (no spin self-interaction). We find \(J_{\mathbf{q}}=J_{\mathbf{q}}^{(1)}+J_{\mathbf{q}}^{(2)}\), \(J_{\mathbf{q}}^{(1)}=(I/\mu_{B})^{2}\sum_{m}\left[\chi_{\mathbf{q}}^{\mathrm{0 ,}mm}-\sum\nolimits_{\mathbf{p}}\chi_{\mathbf{p}}^{\mathrm{0,}mm}\right]\) is the intra-orbital part, while \(J_{\mathbf{q}}^{(2)}=2(I/\mu_{B})^{2}\chi_{\mathbf{q}}^{\mathrm{0,}t_{2g}\text {-}e_{g}}\) results from the hybridization of states of different symmetry. The contribution \(J_{\mathbf{q}}^{(1)}\) is approximately twice smaller than \(J_{\mathbf{q}}^{(2)},\) and therefore the main contribution to the magnetic exchange comes from the hybridization of \(t_{2g}\) and \(e_{g}\) states. The whole momentum dependence of \(J_{\mathbf{q}}^{(2)}\) can be well captured by the nearest–neighbor approximation for effective exchange integrals only, \(J_{\mathbf{q}}^{(2)}=J_{0}\cos(aq_{x}/2)\cos(aq_{y}/2)\cos(aq_{z}/2),\) while \(J_{\mathbf{q}}^{(1)}\) has more complicated momentum dependence.
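For concreteness, the nearest-neighbour form of \(J_{\mathbf{q}}^{(2)}\) can be evaluated at the high-symmetry points of the bcc Brillouin zone. The short sketch below uses the lattice parameter given above and the value \(J_{0}=0.18\) eV quoted in the next paragraph for \(J_{\mathbf{q}=0}\):

```python
import numpy as np

a = 2.8664          # bcc lattice parameter, Angstrom (value used in the LDA part above)
J0 = 0.18           # eV, the value of J_{q=0} quoted in the following paragraph

def Jq2(q):
    """Nearest-neighbour form J_q^(2) = J0 cos(a qx/2) cos(a qy/2) cos(a qz/2)."""
    return J0 * np.cos(a * q[0] / 2) * np.cos(a * q[1] / 2) * np.cos(a * q[2] / 2)

points = {
    "Gamma": np.zeros(3),
    "H": (2 * np.pi / a) * np.array([1.0, 0.0, 0.0]),
    "P": (np.pi / a) * np.array([1.0, 1.0, 1.0]),
    "N": (np.pi / a) * np.array([1.0, 1.0, 0.0]),
}
for name, q in points.items():
    print(f"J_q^(2) at {name:5s}: {Jq2(q):+.3f} eV")
```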
Restricting ourselves to the contribution \(J_{\mathbf{q}}=J_{\mathbf{q}}^{(2)}\) (we assume that the contribution \(J_{\mathbf{q}}^{(1)}\) is further suppressed by the non-local and vertex corrections), from Fig. 1 we find at \(T=290\) K the value \(J_{\mathbf{q}=0}=0.18\) eV. This value, as well as the momentum dependence of \(J_{\mathbf{q}}^{(2)}\), agrees well with the result of S.V. Okatov et al. [6]. The obtained results, together with \(\mu_{\mathrm{eff}}^{2}=11.4\mu_{\mathrm{B}}^{2}\) (see Fig. 2), provide an estimate for the Curie temperature (we assume \(T_{\mathrm{C}}\gg\theta\)), which can be obtained from the divergence of \(\chi^{-1}(\mathbf{q},\mathrm{0})\):
\[T_{\mathrm{C}}=\frac{\mu_{\mathrm{eff}}^{2}}{4\mu_{\mathrm{B}}^{2}}\frac{J_{0} }{3}=0.17\text{ eV}\] (3)
and appears comparable with the result of the full DMFT calculation, which therefore shows that the above model adequately describes the magnetic properties of the full \(5\)-band Hubbard model. (Note that the overestimation of \(T_{\mathrm{C}}\) in the DMFT approach in comparison with the experimental data is due to the density-density approximation for the Coulomb interaction [18] and, to a minor extent, to the presence of non-local fluctuations not accounted for by DMFT.)
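The arithmetic of the estimate (3) can be checked in a couple of lines (the eV-to-kelvin conversion factor is the standard one):

```python
# Check of the Curie-temperature estimate of Eq. (3),
# T_C = (mu_eff^2 / (4 mu_B^2)) * J_0 / 3, with the values quoted in the text.
mu_eff2_over_muB2 = 11.4          # from Fig. 2
J0 = 0.18                         # eV, J_{q=0} at T = 290 K
eV_to_K = 11604.5                 # 1 eV in kelvin

T_C = (mu_eff2_over_muB2 / 4.0) * J0 / 3.0
print(f"T_C = {T_C:.3f} eV = {T_C * eV_to_K:.0f} K")
```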
Neglecting longitudinal fluctuations of field \(\mathbf{S}\) we can map the model (1) to an effective \(S=1\) Heisenberg model \(\mathcal{H}_{\mathrm{H}}=(1/2)\sum_{ij}J_{ij}\mathbf{S}_{i}\mathbf{S_{j}}\) to estimate the spin–wave spectrum:
(4)
We obtain the corresponding spin stiffness \(D=\lim_{q\to 0}(\omega_{\mathbf{q}}/q^{2})=290\,\text{meV}\cdot\)Å\({}^{2}\), in good agreement with the experimental value \(D=280\,\text{meV}\cdot\)Å\({}^{2}\) (Ref. [20]).
In conclusion, we have considered the problem of the description of effective local moments in \(\alpha\)-iron based on the electronic spectrum in the paramagnetic phase within the LDA+DMFT approximation. We find that local moments are formed by both \(e_{g}\) and \(t_{2g}\) orbital states, each contributing half of the total moment \(S=1\). For the \(t_{2g}\) electronic states we find pronounced features of non-Fermi-liquid behavior, which accompany the earlier observed non-quasiparticle form of the \(e_{g}\) states. The local moments interact with the itinerant states via the Hund interaction, yielding a magnetic exchange between the local-moment states via an effective RKKY–type mechanism. The obtained exchange integrals are well captured by the LDA+DMFT approach. The main origin of the intersite interaction of these moments is attributed to the \(e_{g}\)-\(t_{2g}\) hybridization, which yields a magnetic exchange dominated by the nearest-neighbour sites. Contrary to previous studies [19; 6], we do not, however, assume any magnetic ordering of the electronic system.
We also emphasize that non–local self–energy corrections, as well as vertex corrections, which are not included in our investigation, could make the described physical picture more precise. In particular, non-local effects allow for non–zero off-diagonal \(e_{g}\)–\(t_{2g}\) self-energy matrix elements and therefore possibly renormalize the strength of the exchange interaction, as well as the self-energy of the \(t_{2g}\) electronic states. The role of the vertex corrections, only roughly accounted for in the considered approach, also requires additional study. Therefore further investigation using powerful theoretical techniques such as the dynamical vertex approximation [21], dual fermions [22], or other non-local approaches is of certain interest.
The authors are grateful to Yu. N. Gornostyrev, A. V. Korolev, and K. Held for useful discussions. The work of P. A. Igoshev was supported by the Russian Foundation for Basic Research (Project No. 14-02-31603) and Act 211 Government of the Russian Federation 02.A03.21.0006; A. A. Katanin acknowledges support of the Program of ”Dynasty” foundation. The calculations were performed using ”Uran” supercomputer of IMM UB RAS.
## References
* (1) J. B. Goodenough, Phys. Rev. **120**, 67 (1960).
* (2) V. Yu. Irkhin, M. I. Katsnelson, and A. V. Trefilov, J. Phys.: Condens. Matter **5**, 8763 (1993).
* (3) S. V. Vonsovskii, M. I. Katsnelson, and A. V. Trefilov, Fiz. Met. Metalloved. 76 (3) 3 (1993); 76 (4), 3 (1993).
* (4) M. B. Stearns, Phys. Rev. B **8**, 4383 (1973); R. Mota and M. D. Coutinho-Filho, Phys. Rev. B **33**, 7724 (1986).
* (5) F. Körmann, A. Dick, B. Grabowski, B. Hallstedt, T. Hickel, and J. Neugebauer, Phys. Rev. B **78**, 033102 (2008)
* (6) S.V. Okatov, Yu.N. Gornostyrev, A.I. Lichtenstein, and M.I. Katsnelson, Phys. Rev. B **84**, 214422 (2011).
* (7) A. I. Lichtenstein, M. I. Katsnelson, and G. Kotliar, Phys. Rev. Lett. **87**, 067205 (2001).
* (8) A. A. Katanin, A. I. Poteryaev, A. V. Efremov, A. O. Shorikov, S. L. Skornyakov, M. A. Korotin, V. I. Anisimov, Phys. Rev. B **81**, 045117 (2010).
* (9) P. Werner et al., Phys. Rev. Lett. **97**, 076405 (2006).
* (10) L. V. Pourovskii, T. Miyake, S. I. Simak, A. V. Ruban, L. Dubrovinsky, and I. A. Abrikosov, Phys. Rev. B **87**, 115130 (2013)
* (11) U. von Barth and L. Hedin, J. Phys. C 5, 1629 (1972).
* (12) P. Werner, E. Gull, M. Troyer, and A.J. Millis, Phys. Rev. Lett. **101**, 166405 (2008).
* (13) P.A. Igoshev, A.V. Efremov, A.I. Poteryaev, A.A. Katanin, V.I. Anisimov, Phys. Rev. B **88**, 155120 (2013).
* (14) T. Moriya, Spin fluctuations in itinerant magnets. Springer-Verlag, Berlin, Heidelberg, 1985.
* (15) See Supplementary Material at http://
* (16) Jörg Schmalian, David Pines, and Branko Stojković, Phys. Rev. B **60**, 667 (1999); A. Abanov, A. V. Chubukov, and J. Schmalian, Adv. Phys. **52**, 119 (2003).
* (17) A. A. Katanin, A. Toschi, and K. Held, Phys. Rev. B **80**, 075104 (2009).
* (18) V. I. Anisimov, A. S. Belozerov, A. I. Poteryaev, and I. Leonov, Phys. Rev. B **86**, 035152 (2012).
* (19) A. I. Liechtenstein, M. I. Katsnelson, V. P. Antropov, V. A. Gubanov, JMMM **67**, 65 (1987).
* (20) H.A. Mook and R.M. Nicklow, Phys. Rev. B **7**, 336 (1973).
* (21) See, e.g. A. Toschi, A. A. Katanin, and K. Held, Phys. Rev. B **75**, 045118 (2007); A. Toschi, G. Rohringer, A. A. Katanin, and K. Held, Ann. Phys. (Berlin) **523**, 698 (2011).
* (22) See, e.g., A. N. Rubtsov, M. I. Katsnelson, A. I. Lichtenstein, and A. Georges, Phys. Rev. B **79**, 045133 (2009).
## Supplementary Material for the paper ”Magnetic exchange in \(\alpha\)–iron from the ab initio calculations in the paramagnetic phase” by P. A. Igoshev et al.
### Local and non-uniform susceptibilities for \(U=4\) eV
We test below the stability of our results against a change of the model parameter values: we present the results of calculations performed with the same method as in the main text, but with another choice of parameters (\(U=4.0\) and \(I=1.0\) eV), which are close to those of Ref. [1]. The results for the temperature dependence of the inverse local magnetic susceptibility are shown in Fig. 1. We find the crossover discussed in the main text at a lower \(T^{\ast}\sim 1050\) K. The calculation of the momentum-dependent irreducible susceptibility yields only a uniform (with respect to \(\mathbf{q}\)) renormalization, without a change of the qualitative tendencies (see Fig. 2, cf. Fig. 4 of the main text). We have recalculated the exchange interactions from these results and obtain \(J_{\mathbf{q}=0}^{(2)}=0.13\) eV vs 0.18 eV in the main text. This implies a lowering of the Curie temperature, which agrees with the renormalization of \(T^{\ast}\) by approximately a factor of 1.5 (cf. Fig. 1 of the main text). The qualitative conclusions of the paper remain unchanged for these parameter values.
<figure><img src="content_image/1411.1888/x5.png"><figcaption>Figure 1: (Color online) The same as in Fig. 1 of the main text for U=4.0 andI=1.0 eV.</figcaption></figure>
<figure><img src="content_image/1411.1888/x6.png"><figcaption>Figure 2: (Color online) The same as in Fig. 4 of the main text for U=4.0 andI=1.0 eV.</figcaption></figure>
### Calculation of exchange interaction from the spin-fermion model
To obtain exchange interaction, we first determine the bare propagator of magnetic degrees of freedom \(\chi_{S}(\mathbf{q},\mathrm{i}\omega_{n})\) by requiring that the dressed propagator of \(\mathbf{S}\) field is equal to the susceptibility of itinerant subsystem. Using the random-phase-type approximation, which reduces the orbital- and frequency dependence of the bubble and vertex to the respective single-frequency orbital ”averaged” quantities, \(\chi_{\mathbf{q}}^{\mathrm{0}}=\sum_{mn}\chi_{\mathbf{q}}^{\mathrm{0,}mn}\) and \(\Gamma\), we obtain
(1)
where the last term is added to cancel the corresponding bosonic self-energy correction from itinerant degrees of freedom to avoid double-counting, cf. Ref. [2]. We represent \(\chi_{\mathbf{q}}^{\mathrm{0}}=\overline{\chi}_{0}+\delta\chi_{\mathbf{q}}^{ \mathrm{0}}\) with momentum-independent \(\overline{\chi}_{0}\); without loss of generality, we can assume \(\sum_{\mathbf{q}}\delta\chi_{\mathbf{q}}^{\mathrm{0}}=0,\) such that \(\overline{\chi}_{0}=\sum_{\mathbf{q}}\chi_{\mathbf{q}}^{\mathrm{0}}.\) From the results of Fig. 4 of the main text it follows that \(\delta\chi_{\mathbf{q}}^{\mathrm{0}}\ll\overline{\chi}_{0}\). Expanding Eq. (1) to first order in \(\delta\chi_{\mathbf{q}}^{\mathrm{0}}\), we obtain
\[\chi_{S}^{-1}(\mathbf{q},\mathrm{i}\omega_{n}) = 4\mu_{B}^{2}\chi_{\mathrm{loc}}^{-1}(\mathrm{i}\omega_{n})+(I/ \mu_{\mathrm{B}})^{2}\overline{\chi}_{0}\] (2)
\[+[(I/\mu_{\mathrm{B}})^{2}-4\mu_{B}^{2}\overline{\chi}_{0}^{-2}] \delta\chi_{\mathbf{q}}^{\mathrm{0}},\]
where \(\chi_{\mathrm{loc}}^{-1}(\mathrm{i}\omega_{n})=\overline{\chi}_{0}^{-1}-2 \Gamma/(4\mu_{B}^{2})\) is the inverse local susceptibility. In practice, the frequency dependence \(\chi_{\mathrm{loc}}(\mathrm{i}\omega_{n})=\mu_{\mathrm{eff}}^{2}/(3(T+\theta)( 1+|\omega_{n}|/\delta))\) can be obtained from the dynamic local spin correlation functions, which is characterized by the temperature-independent moment \(\mu_{\mathrm{eff}},\) its damping \(\delta\propto T\), and the corresponding Weiss temperature \(\theta\) (see Refs. [8; 13] of the main text). Since \(\overline{\chi}_{0}\simeq 2\mu_{B}^{2}/\)eV and \(I\simeq 1\)eV the momentum dependence is almost cancelled, and we obtain the local bare propagator of spin degrees of freedom,
\[\chi_{S}^{-1}(\mathbf{q},\mathrm{i}\omega_{n})\simeq\chi_{S}^{-1}(\mathrm{i} \omega_{n})=4\mu_{B}^{2}\chi_{\mathrm{loc}}^{-1}(\mathrm{i}\omega_{n})+(I/\mu_ {\mathrm{B}})^{2}\overline{\chi}_{0}.\] (3)
Considering the renormalization of the propagator \(\chi_{S}\) by the corresponding boson self–energy corrections (cf. Ref. [2]), we obtain for the non-uniform susceptibility
\[\chi^{-1}(\mathbf{q},\mathrm{i}\omega_{n})=\frac{1}{4\mu_{B}^{2}}\left[\chi_{S }^{-1}(\mathbf{q},\mathrm{i}\omega_{n})-\frac{I^{2}}{\mu_{B}^{2}}\sum_{mn}\chi _{\mathbf{q}}^{\mathrm{0,}mn}\right],\] (4)
which yields Eq. (2) of the main text (we use also here that by symmetry \(\sum\nolimits_{\mathbf{p}}\chi_{\mathbf{p}}^{\mathrm{0,}t_{2g}\text{-}e_{g}}=0\)).
## References
* (1) L. V. Pourovskii, J. Mravlje, M. Ferrero, O. Parcollet, and I.A. Abrikosov, Phys. Rev. B **90**, 155120 (2014).
* (2) P. A. Igoshev, A. A. Katanin, V. Yu. Irkhin, JETP **105**, 1043 (2007).
|
0709.3302 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 23867,
"num_imgs": 2,
"llama3_tokens_count": 7723
} | [
"content_image/0709.3302/x1.png",
"content_image/0709.3302/x2.png"
] | # Coherent states in quantum cosmology
S. Robles-Pérez\({}^{1,2}\), Y. Hassouni\({}^{3}\) and P. F. González-Díaz\({}^{1,2}\)
\({}^{1}\)Colina de los Chopos, Centro de Física “Miguel Catalán”, Instituto de Matemáticas y Física Fundamental,
Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (SPAIN).
\({}^{2}\)Estación Ecológica de Biocosmología, Medellín (SPAIN).
\({}^{3}\)Laboratoire de Physique Theorique, Faculte des sciences-Universite Mohamed V-Agdal,
Avenue Ibn Batouta B.P: 1014, Agdal Rabat (MOROCCO).
February 21, 2024
###### Abstract
In the realm of a quantum cosmological model for dark energy in which we have been able to construct a well-defined Hilbert space, a consistent coherent state representation has been formulated that may describe the quantum state of the universe and has a well-behaved semiclassical limit.
pacs: 98.80.Qc, 03.65.Fd.
## I Introduction
In a previous paper [1] a general, simple quantum description was constructed for a model of a homogeneous and isotropic universe filled with a fluid described by the equation of state \(p=w\rho\), where \(p\) and \(\rho\) are the pressure and the energy density of the fluid, respectively, and \(w\) is a constant parameter. That model can be regarded as an approximate idealization in which only one particular kind of energy dominates the universe along its entire time evolution, from the beginning to the end. Among these particular dominating energies, most emphasis was placed on the case of dark energy. The great advantage of this model is that it is analytically solvable and, therefore, able to neatly show the analogy between quantum mechanical open systems and quantum cosmology, and to take it quite far, at least formally, even though quantum cosmology adds some exceptional features to the formalism. We thus have an analytically solvable model for which the complete quantum development can be tracked.
On the other hand, coherent states have always been considered as mathematical objects with applications in quantum physics. In fact, the large number of their applications has led to the introduction of new definitions for particular quantum systems in which coherent states are involved. Coherent states can be constructed from the algebras which underlie their definition. More precisely, in the literature one usually deals with Heisenberg algebras to obtain them. Nevertheless, in some works [2; 3] coherent states for certain quantum systems are constructed from the so-called Generalized Heisenberg Algebras (GHA).
In this paper we shall take advantage of the above property of the model to investigate the role that coherent states, obtained from a GHA, may play in a cosmological model. We in fact obtain general expressions for the cosmic wavefunctions that describe coherent states, which can be taken as a basis for further developments of this subject.
We outline the paper as follows. In Sec. II we give a brief summary of the cosmological model, reviewing the basic aspects of its Hilbert space. Coherent states are formulated and described in Sec. III and, in Sec. IV, we give some conclusions and further comments.
## II A cosmological model
The model considered in Ref. [1] consists of a Friedmann-Lemaître-Robertson-Walker (FLRW) universe filled with a fluid described by the equation of state \(p=w\rho\), where \(w\) is a constant parameter. For a gauge \(\mathcal{N}=a^{3}\), where \(\mathcal{N}\) is the lapse function and \(a\equiv a(t)\) is the cosmic scale factor, the Hamiltonian of the system is given by,
\[H=-\frac{2\pi G}{3}a^{2}p^{2}_{a}+\rho_{0}a^{3(1-w)},\] (1)
where \(p_{a}\) is the conjugate momenta to the scale factor, \(G\) is the gravitational constant, and \(\rho_{0}\) is the energy density of the fluid at the coincidence time [1]. Then, a set of Hamiltonian eigenfunctionals can be obtained. In the configuration space, they are given as
\[\phi_{n}(a)=N_{n}a^{\alpha}\mathcal{J}_{n}(\lambda a^{q}),\] (2)
in which \(N_{n}\) is a normalization constant, \(\alpha\) is a parameter depicting the factor ordering ambiguity, \(\mathcal{J}_{n}\) is a Bessel function of the first kind and order \(n\), and,
\[q=\frac{3}{2}(1-w)\,\,,\,\,\lambda=\frac{1}{\hbar q}\sqrt{\frac{3}{2\pi G}\rho _{0}}.\] (3)
The functions given by Eq. (2) correspond to the following eigenvalue problem,
\[\hat{H}\phi_{n}(a)=\mu_{n}\phi_{n}(a),\] (4)
with,
\[\mu_{n}=q^{2}n^{2}-\epsilon_{0}^{2},\] (5)
where \(\epsilon_{0}^{2}\) is a factor which depends on the factor ordering choice. Now, we have to impose some boundary conditions in order to construct wavefunctionals which can describe the state of the universe. In particular, when the fluid is dark energy (\(w<-\frac{1}{3}\))[1], the boundary conditions are: i) the wavefunctionals have to be regular everywhere, even when the metric degenerates, \(a\to 0\), and ii) they should vanish at the big rip singularity when \(a\rightarrow\infty\). The boundary conditions are satisfied by the Hamiltonian eigenfunctionals when we impose the following restrictions on the parameter \(\alpha\),
\[-qn\leq\alpha<\frac{q}{2}\sim\frac{3}{2}.\] (6)
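A minimal numerical illustration of these boundary conditions is sketched below, using SciPy's Bessel functions and setting \(N_{n}=1\), \(\lambda=1\) and \(w=-1\) (so that \(q=3\)) purely for concreteness: values of \(\alpha\) inside the range (6) give wavefunctionals that stay finite as \(a\to 0\) and decay (with envelope \(a^{\alpha-q/2}\)) for large \(a\), while values outside the range fail at one of the two ends.

```python
import numpy as np
from scipy.special import jv

# Eigenfunctionals phi_n(a) = a**alpha * J_n(lambda * a**q) of Eq. (2), with N_n = 1.
# For a dark-energy fluid with w = -1 one has q = 3(1 - w)/2 = 3; lambda is set to 1.
q, lam, n = 3.0, 1.0, 2

def phi(a, alpha):
    return a**alpha * jv(n, lam * a**q)

a_small = 1e-3
a_large = np.linspace(50.0, 60.0, 2001)
for alpha in (-q * n - 1.0, -q * n, 1.0, q / 2 + 1.0):
    allowed = (-q * n <= alpha < q / 2)
    print(f"alpha = {alpha:5.1f}  (within Eq. (6) range: {allowed})   "
          f"phi(a=1e-3) = {phi(a_small, alpha):.3e}   "
          f"max|phi| for a in [50, 60] = {np.max(np.abs(phi(a_large, alpha))):.3e}")
# alpha below -q*n blows up as a -> 0; alpha above q/2 grows as a -> infinity.
```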
Now, in order to develop the usual machinery of quantum mechanics, we need to construct a well-defined Hilbert space. This is, in general, an impossible task when gravitational fields are taken into account, since non-renormalizable infinities appear in the formalism. Only in the case of very simplified minisuperspaces can a regularization procedure be carried out and, then, a Hilbert space be defined.
We can start by defining the Hamiltonian eigenstates, \(|n>\), to be those states represented in the configuration space by the wavefunctionals given in Eq. (2), i.e.,
\[<n|a>=<a|n>=\phi_{n}(a),\] (7)
as the wave functionals considered so far are real functions. Then, let us define the scalar product to be,
\[<f|g>=\int_{0}^{\infty}da\,W(a)f(a)g(a),\] (8)
where \(W(a)=a^{k}\) is a weight function. Thus, the orthogonality relations between the Hamiltonian eigenstates turn out to be,
\[<n|m>=\frac{N_{n}N_{m}}{q}\int_{0}^{\infty}du\,u^{\frac{k+2\alpha+1}{q}-1} \mathcal{J}_{n}(\lambda u)\mathcal{J}_{m}(\lambda u),\] (9)
with \(u=a^{q}\); using standard tables [4], this integral can be evaluated for the following values,
\[n+m+1>1-\frac{k+2\alpha+1}{q}>0.\] (10)
For instance, for a weight function, \(W(a)=a^{\frac{q}{2}-(2\alpha+1)}\), the orthogonality relations are,
\[<n|m>=\sqrt{\frac{\pi}{2q\tilde{\lambda}}}\frac{N_{n}N_{m}\,\Gamma(\frac{1}{2} )\Gamma(\frac{2(n+m)+1}{4})}{\Gamma(\frac{2(m-n)+3}{4})\Gamma(\frac{2(n+m)+3}{ 4})\Gamma(\frac{2(n-m)+3}{4})},\] (11)
where, \(\tilde{\lambda}=q\lambda=\frac{1}{\hbar}\sqrt{\frac{3}{2\pi G}\rho_{0}}\). In particular, they do not show any problem with the normalization of the zero mode because,
\[\langle 0|0\rangle=N_{0}^{2}\sqrt{\frac{\pi}{2q\tilde{\lambda}}}\frac{\Gamma( \frac{1}{4})}{\Gamma(\frac{3}{4})^{3}},\] (12)
which can be normalized by choosing an appropriate value of the normalization constant, \(N_{0}\). However, by Eq. (11), these eigenstates do not form an orthogonal basis for the representation of the quantum state of the universe. Nevertheless, using the scalar product (8), we can define an orthonormal basis in terms of Laguerre polynomials. For the value \(k=\frac{q}{2}-(2\alpha+1)\) in the integration measure, we can use the following set of functionals,
\[\psi_{n}(a)=N_{n}a^{\frac{4\alpha+q}{4}}e^{-\frac{\lambda a^{q}}{2}}L_{n}( \lambda a^{q}),\] (13)
where,
\[L_{n}(x)=\sum_{m=0}^{n}\left(\begin{array}[]{c}n\\ m\end{array}\right)\frac{(-x)^{m}}{m!},\] (14)
is the Laguerre polynomial of order \(n\). They form an orthonormal basis with appropriate values of the normalization constants, \(N_{n}\). Then, the Hamiltonian eigenstates can be decomposed into the basis defined by the set \(\{\psi_{n}\}\) as,
\[\phi_{n}(a)=\sum_{m=0}^{\infty}C_{mn}\psi_{m}(a),\] (15)
where the coefficients, \(C_{nm}\), are given by,
\[C_{nm}=\langle\psi_{m}|\phi_{n}\rangle=\int_{0}^{\infty}da\,a^{\frac{q}{2}-(2 \alpha+1)}\psi_{m}(a)\phi_{n}(a).\] (16)
We can then develop the usual formalism of quantum mechanics in this orthonormal basis.
The standard procedure of constructing coherent states is clearer when we work with an orthonormal basis of Hamiltonian eigenstates. Let us therefore consider the scalar product (8), for \(k=-(2\alpha+1)\). In that case, the orthogonality relations for the Hamiltonian eigenstates can be written as [1],
\[\begin{array}[]{ll}\langle n|n\rangle=1&,\forall n,\\ &\\ \langle n|m\rangle=0&,|n-m|\;\;\;\textmd{even,}\\ &\\ \langle n|m\rangle=\frac{4}{\pi}\frac{(-1)^{\frac{1}{2}(n-m-1)}\sqrt{n\,m}}{n^ {2}-m^{2}}&,|n-m|\;\;\;\textmd{odd}.\end{array}\] (17)
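These relations are easy to verify numerically. A minimal sketch is given below; it assumes normalization constants \(N_{n}=\sqrt{2n}\) (which make \(\langle n|n\rangle=1\) for \(n\geq 1\); the zero mode requires the regularization discussed below and is skipped) and uses a finite integration cutoff, so agreement with Eq. (17) is only expected at the \(10^{-3}\) level.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

u = np.linspace(1e-6, 1000.0, 2_000_001)     # finite cutoff; tail error ~ 1e-3

def overlap(n, m):
    """<n|m> of Eq. (24), assuming N_n = sqrt(2n) so that <n|n> = 1 (n, m >= 1)."""
    integrand = jv(n, u) * jv(m, u) / u
    return np.sqrt(2 * n) * np.sqrt(2 * m) * trapezoid(integrand, u)

def overlap_eq17(n, m):
    """Closed form of Eq. (17)."""
    if n == m:
        return 1.0
    if (n - m) % 2 == 0:
        return 0.0
    return 4.0 / np.pi * (-1.0)**((n - m - 1) // 2) * np.sqrt(n * m) / (n**2 - m**2)

for n, m in [(1, 1), (2, 2), (1, 3), (2, 4), (1, 2), (2, 3), (1, 4)]:
    print(f"<{n}|{m}>  numeric = {overlap(n, m):+.3f}   Eq. (17) = {overlap_eq17(n, m):+.3f}")
```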
The price that we have to pay when using this scalar product is that we then need a regularization procedure for the zero mode [1]. Its advantage is that the set of Hamiltonian eigenstates can be split into two sets, of even and odd modes, which are separately orthogonal to each other. We can then use a formalism analogous to that developed in [5]. Let us split the Hilbert space as
\[\mathcal{H}=\mathcal{H}_{+}\oplus\mathcal{H}_{-},\] (18)
where the subspaces, \(\mathcal{H}_{+}\) and \(\mathcal{H}_{-}\), are chosen to be,
\[f_{+}(u)\in\mathcal{H}_{+} \Rightarrow f_{+}(u) =\sum_{n=0}^{\infty}C_{2n}\phi_{2n}(u)\otimes\chi_{+}\] (19)
\[f_{-}(u)\in\mathcal{H}_{-} \Rightarrow f_{-}(u) =\sum_{n=0}^{\infty}C_{2n+1}\phi_{2n+1}(u)\otimes\chi_{-},\] (20)
with some constants \(C_{k}\) , and,
\[\chi_{+}=\left(\begin{array}[]{c}1\\ 0\end{array}\right)\;\;\;,\;\;\;\chi_{-}=\left(\begin{array}[]{c}0\\ 1\end{array}\right).\] (21)
The subspaces, \(\mathcal{H}_{+}\) and \(\mathcal{H}_{-}\), turn out to be the subspaces of even and odd functionals, and any function belonging to the space \(\mathcal{H}\) can be decomposed as,
\[f(u)=f_{+}(u)\oplus f_{-}(u).\] (22)
Let us define now, in this space, the following scalar product for any two functions, \(f(u),g(u)\in\mathcal{H}\),
\[\langle f|g\rangle=\lim_{l_{p}\to 0}\int_{l_{p}}^{\infty}du\,W_{1}(u)f ^{\dagger}(u)g(u)=\lim_{l_{p}\to 0}\int_{l_{p}}^{\infty}du\,W_{1}(u) \left(f_{+}(u)g_{+}(u)+f_{-}(u)g_{-}(u)\right),\] (23)
weighted by the function, \(W_{1}(u)=u^{-(2\alpha+1)}\), with the limit being introduced to regularize the zero mode. With this scalar product the basis \(\left\{\phi_{n}\right\}_{n\in N}\) becomes orthonormal,
\[\langle n|m\rangle=\langle\phi_{n}|\phi_{m}\rangle=\xi\lim_{l_{p}\to 0 }\int_{l_{p}}^{\infty}du\,\frac{1}{u}\mathcal{J}_{n}(u)\mathcal{J}_{m}(u),\] (24)
where \(\xi\) is given by the usual scalar product between the vectors, \(\chi_{\pm}\), i.e.,
\[\begin{array}[]{ccc}n,m\;\;\textrm{even}&\Rightarrow&\xi=\chi_{+} ^{\dagger}\chi_{+}=1\\ n,m\;\;\textrm{odd}&\Rightarrow&\xi=\chi_{-}^{\dagger}\chi_{-}=1\\ n\;\;\textrm{even},m\;\;\textrm{odd}&\Rightarrow&\xi=\chi_{+}^{\dagger}\chi_{- }=0\\ n\;\;\textrm{odd},m\;\;\textrm{even}&\Rightarrow&\xi=\chi_{-}^{\dagger}\chi_{+ }=0.\end{array}\] (25)
Therefore, we have obtained an orthogonal basis of Hamiltonian eigenfunctionals for a universe filled with dark energy. Now, we can apply the formalism described in ref. [6] to construct coherent states for the model being considered.
## III Coherent states
The interest of formulating coherent states in cosmology is twofold. On the one hand, it would prepare the quantum mechanics of the universe for further, potentially generalizable, new developments, and, on the other hand, it would enhance the analogy between usual quantum mechanics and cosmology.
In what follows we shall use the formalism to construct coherent states for a system described by a generalized algebra [6], given by
\[H_{0}A^{{\dagger}} = A^{{\dagger}}f(H_{0})\] (26)
\[AH_{0} = f(H_{0})A\] (27)
\[\left[A^{{\dagger}},A\right] = H_{0}-f(H_{0}),\] (28)
where \(A\), \(A^{{\dagger}}\) and \(H_{0}\) are the generators of the algebra, and \(f(x)\) is called the characteristic function of the system. \(H_{0}\) is the Hamiltonian of the physical system under consideration, with eigenstates given by
\[H_{0}|m\rangle=\epsilon_{m}|m\rangle,\] (29)
and \(A^{{\dagger}}\) and \(A\) are the generalized creation and annihilation operators,
\[A^{{\dagger}}|m\rangle = N_{m}|m+1\rangle\] (30)
\[A|m\rangle = N_{m-1}|m-1\rangle,\] (31)
where \(N_{m}^{2}=\epsilon_{m+1}-\epsilon_{0}\). The use of a generalized algebra [6] adds a parametrization through the characteristic function, \(f(H_{0})\), that allows us to have a systematic covering of distinct potentials for the given system. The customary Heisenberg algebra is recovered for the particular choice \(f(x)=1+x\) [6].
Then, the coherent states are defined to be the eigenstates of the generalized annihilation operator,
\[A|z\rangle=z|z\rangle,\] (32)
where \(z\) is a generally complex number.
Since we have a Hamiltonian spectrum for the model of a dark energy dominated universe, \(H|n\rangle=\epsilon_{n}|n\rangle\), we can now find the characteristic function, \(f(x)\), and the quantum excitation levels can be written as \(\epsilon_{n+1}=f(\epsilon_{n})\)[7]. Choosing a factor ordering \(\alpha=\beta\) so that \(\epsilon_{0}^{2}=0\), we have,
\[\epsilon_{n+1}=q^{2}(n+1)^{2}=\epsilon_{n}+2q\sqrt{\epsilon_{n}}+q^{2}=(\sqrt{ \epsilon_{n}}+q)^{2}=f(\epsilon_{n}).\] (33)
The spectrum of the case being considered is formally similar to the spectrum for a free particle in a square well potential [6], and the computation to follow can be done in a parallel way.
Thus, the coherent states are given by,
\[|z\rangle=N(z)\sum_{n=0}^{\infty}\frac{z^{n}}{N_{n-1}!}|n\rangle,\] (34)
where,
\[N_{n}!\equiv N_{0}N_{1}\cdots N_{n},\] (35)
with, for consistency, \(N_{-1}!\equiv 1\). Therefore, since \(N_{n-1}^{2}=\epsilon_{n}-\epsilon_{0}\), it can be checked that in our case,
\[N_{n-1}!=q^{n}n!,\] (36)
and the coherent states can be written then as,
\[|z\rangle=N(z)\sum_{n=0}^{\infty}\frac{z^{n}}{q^{n}n!}|n\rangle.\] (37)
This expression can be formally simplified, in terms of the creation operator, since the state \(|n\rangle\) can be written as,
\[|n\rangle=\frac{1}{N_{n-1}!}(A^{{\dagger}})^{n}|0\rangle=\frac{1}{q^{n}n!}(A^{ {\dagger}})^{n}|0\rangle.\] (38)
The coherent states can be expressed then with a formal expression,
\[|z\rangle=N(z)\sum_{n=0}^{\infty}\left(\frac{zA^{{\dagger}}}{q^{2}}\right)^{n} \frac{1}{(n!)^{2}}|0\rangle=N(z)I_{0}\left(2\sqrt{\frac{zA^{{\dagger}}}{q^{2}} }\right)|0\rangle,\] (39)
where, \(I_{0}(x)\), is the modified Bessel function of the first kind of order zero.
Now, we have to impose the following conditions in order to get the so-called Klauder’s coherent states [6] (KCS):
i/ Normalization:
\[\langle z|z\rangle=1,\] (40)
ii/ Continuity in the label, \(z\):
\[|z-z^{\prime}|\to 0\;\;\Rightarrow\;\;||\,|z\rangle-|z^{\prime}\rangle \,||\to 0,\] (41)
iii/ Completeness:
\[\int d^{2}z\;W(z)\,|z\rangle\langle z|=1.\] (42)
The normalization condition can be found by choosing an appropriate normalization function, \(N(z)\). In terms of the Hamiltonian eigenvectors, the norm of the coherent states can be expressed as,
\[1=\langle z|z\rangle=N^{2}(z)\sum_{n,m=0}^{\infty}\frac{z^{n}(z^{*})^{m}}{q^{n +m}n!m!}\langle n|m\rangle\] (43)
Therefore, the normalization function, \(N(z)\), ought to be chosen as,
\[N^{-2}(z)=\sum_{n=0}^{\infty}\left(\frac{|z|}{q}\right)^{2n}\frac{1}{(n!)^{2}} =I_{0}\left(\frac{2|z|}{q}\right).\] (44)
Then, the normalized coherent states can be written as,
\[|z\rangle=\left(I_{0}\left(\frac{2|z|}{q}\right)\right)^{-\frac{1}{2}}\sum_{n= 0}^{\infty}\frac{|z|^{n}}{q^{n}n!}|n\rangle,\] (45)
or rescaling the variable \(|z|\) as \(|z|\to q|z|\), it can be re-expressed as,
\[|z\rangle=\left(I_{0}\left(2|z|\right)\right)^{-\frac{1}{2}}\sum_{n=0}^{\infty }\frac{|z|^{n}}{n!}|n\rangle.\] (46)
In terms of the creation operator, using Eq. (39), the coherent states can then be also written as,
\[|z\rangle=\left[I_{0}\left(2|z|\right)\right]^{-\frac{1}{2}}I_{0}\left(2\sqrt{ \frac{|z|A^{{\dagger}}}{q}}\right)|0\rangle.\] (47)
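As a quick numerical cross-check, not part of the original derivation, the following Python/SciPy sketch verifies Eqs. (44)–(46) after the rescaling \(|z|\to q|z|\) (equivalently, \(q=1\)): the squared coefficients \(|z|^{n}/n!\) sum to \(I_{0}(2|z|)\), so the normalized state has unit norm. The value of \(|z|\) and the truncation order are arbitrary choices.

```python
import numpy as np
from math import factorial
from scipy.special import i0

# Check of Eqs. (44)-(46) after the rescaling |z| -> q|z| (equivalently, q = 1).
z = 1.7                                      # arbitrary value of |z|
c = np.array([z**k / factorial(k) for k in range(60)])   # coefficients |z|^n / n!
print(np.sum(c**2), i0(2 * z))               # both equal N^{-2}(z), cf. Eq. (44)
print(np.sum(c**2) / i0(2 * z))              # <z|z> = 1, the normalization (40)
```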
In the configuration space, the wave functionals corresponding to the coherent states (46) can be expressed in terms of the scale factor, \(a\), and the variable \(z\), in the form,
\[\langle a|z\rangle=\varphi_{z}(a)\equiv\varphi(a,z)=\left[I_{0}(2|z|)\right]^{ -\frac{1}{2}}\sum_{n=0}^{\infty}\frac{|z|^{n}}{n!}\phi_{n}(a),\] (48)
where the function \(\varphi(a,z)\) has to be interpreted as a functional of paths for the scale factor, \(a(t)\), and the variable \(z\). These coherent wave functionals satisfy the boundary conditions imposed in the previous section as they are satisfied by the Hamiltonian eigenfunctionals. When the scale factor degenerates, in the limit \(a\to 0\), using the asymptotic expansions for Bessel’s functions, the coherent wavefunctionals can be written as,
\[\varphi(z,a)\approx\frac{a^{\alpha}}{\sqrt{I_{0}(2|z|)}}\sum_{n=0}^{\infty} \frac{|z|^{n}}{n!}\frac{\left(\lambda a^{q}\right)^{n}}{2^{n}n!}=\frac{a^{ \alpha}\,I_{0}\left(\sqrt{2\lambda|z|a^{q}}\right)}{\sqrt{I_{0}(2|z|)}},\] (49)
which may express the known boundary conditions of the universe. If \(\alpha\) vanishes, Eq. (49) takes on a constant value and expresses Vilenkin's tunneling condition [8]. If \(\alpha>0\), Eq. (49) vanishes, in accordance with the Hartle-Hawking no-boundary proposal [9].
In the opposite limit, for large values of the scale factor, the introduced boundary condition is also obeyed. The limit of large values of the scale factor is equivalent to the semiclassical limit, where \(\hbar\to 0\). In both cases, the asymptotic expansions of Bessel’s functions are the same, and the Hamiltonian eigenfunctionals go as,
\[\phi_{n}(a)\approx\sqrt{\frac{2}{\pi\lambda a^{q}}}\cos\left(\lambda a^{q}- \frac{\pi}{2}n-\frac{\pi}{4}\right).\] (50)
Then, the coherent states can be written as,
\[\varphi(z,a)\approx\frac{1}{\sqrt{I_{0}(2|z|)}}\sqrt{\frac{2}{\pi\lambda a^{q} }}\sum_{n=0}^{\infty}\frac{|z|^{n}}{n!}\cos\left(\lambda a^{q}-\frac{\pi}{2}n- \frac{\pi}{4}\right)=\frac{\cos(|z|-\lambda a^{q})-\sin(|z|-\lambda a^{q})}{ \sqrt{\pi\lambda a^{q}I_{0}(2|z|)}}\to 0,\] (51)
for large values of the scale factor. Since in this model the classical action is \(S_{c}=\lambda a^{q}\), it turns out that the functional \(\varphi(z,a)\) can be also expressed as,
\[\varphi(z,a)\approx\frac{\cos(|z|-S_{c}(a))-\sin(|z|-S_{c}(a))}{\sqrt{\pi S_{c }(a)I_{0}(2|z|)}}\to 0\;(a\rightarrow\infty).\] (52)
The coherent states, in the semiclassical limit are those represented in Fig. 1 and Fig. 2. There is a set of maxima for the coherent wave functionals, for the values \(z_{k}\) given by,
\[|z_{k}|=S_{c}(a)-\arctan\left(\frac{2S_{c}(a)-1}{2S_{c}(a)+1}\right)+2k\pi.\] (53)
<figure><img src="content_image/0709.3302/x1.png"><figcaption>Figure 1: Numerator in Eq. (52) for different real values of the parameter z(0,1,…).</figcaption></figure>
Therefore, we have obtained expressions for normalized coherent states in the configuration space. They satisfy the imposed boundary conditions both in the limit of large values of the scale factor and when it degenerates. The limit of large values of the scale factor coincides with the semiclassical limit, in which the coherent states should represent, by the Hartle criterion, valid semiclassical approximations. That is indeed the case, because Eq. (51) is, for any value of the parameter \(|z|\), an oscillatory function of the classical action with a prefactor that goes to zero as the scale factor grows.
<figure><img src="content_image/0709.3302/x2.png"><figcaption>Figure 2: Coherent wave functionals, φ(z,a), Eq. (51). It appears a set ofmaxima given by |zk|=Sc(a)−arctan(2Sc(a)−12Sc(a)+1)+2kπ.</figcaption></figure>
The second condition for coherent states to be a set of KCS amounts to the continuity in the label \(z\). It is easy to check that this condition is satisfied. For a given pair of complex numbers, \(z=re^{i\theta}\) and \(z^{\prime}=r^{\prime}e^{i\theta^{\prime}}\), which are very close to one another, \(r\approx r^{\prime}\) and \(\theta-\theta^{\prime}\approx 0\), the scalar product between them is given by,
\[\langle z|z^{\prime}\rangle \approx \frac{1}{\sqrt{I_{0}\left(\frac{2r}{q}\right)I_{0}\left(\frac{2r^ {\prime}}{q}\right)}}\sum_{n=0}^{\infty}\frac{(r^{\prime}r)^{n}}{q^{2n}(n!)^{2}}\] (54)
\[= \frac{1}{\sqrt{I_{0}\left(\frac{2r}{q}\right)I_{0}\left(\frac{2r^ {\prime}}{q}\right)}}I_{0}\left(2\frac{\sqrt{r^{\prime}r}}{q}\right)\approx 1,\]
when \(r^{\prime}\to r\), so the norm of the difference between two coherent states goes to zero as they approach,
\[||\,|z\rangle-|z^{\prime}\rangle\,||^{2}=2(1-\langle z|z^{\prime}\rangle) \approx 0.\] (55)
The third condition for the coherent states to be a set of KCS, namely completeness, can be fulfilled by including an appropriate weight function in the integration over the variable \(z\); i.e., it should be satisfied that
\[\int d^{2}z\,W_{2}(z)|z\rangle\langle z|=1,\] (56)
which in our case implies,
\[2\pi\sum_{n=0}^{\infty}\frac{|n\rangle\langle n|}{(n!)^{2}}\int_{0}^{\infty}d| z|\,W_{2}(z)\frac{|z|^{2n}}{I_{0}(2|z|)}=1.\] (57)
This corresponds to choosing a weight function
\[W_{2}(|z|)=\frac{2|z|}{\pi}I_{0}(2|z|)K_{0}(2|z|),\] (58)
in the formalism of ref. [6], and also amounts to the fulfillment of the completeness condition. The latter condition comes from the equalities,
\[\int d^{2}z\,W_{2}(z)|z\rangle\langle z| = 4\sum_{n=0}^{\infty}\frac{|n\rangle\langle n|}{(n!)^{2}}\int_{0} ^{\infty}d|z|\,K_{0}(2|z|)|z|^{2n+1}\] (59)
\[= \sum_{n=0}^{\infty}|n\rangle\langle n|=1,\]
where we have used [4],
\[\int_{0}^{\infty}dx\,K_{0}(2x)x^{2n+1}=\frac{(n!)^{2}}{4}.\] (60)
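The integral identity (60), quoted from [4], is also easy to confirm numerically; a minimal SciPy sketch (the range of \(n\) is an arbitrary choice) is:

```python
import numpy as np
from math import factorial
from scipy.special import k0
from scipy.integrate import quad

# Check of the integral (60) used in the completeness proof:
# int_0^inf K_0(2x) x^(2n+1) dx = (n!)^2 / 4.
for n in range(4):
    val, _ = quad(lambda x: k0(2 * x) * x**(2 * n + 1), 0.0, np.inf)
    print(n, val, factorial(n)**2 / 4)
```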
## IV Conclusions and further comments
We have obtained a set of Klauder coherent states for a dark energy dominated universe. They satisfy the boundary conditions and may lead to valid semiclassical approximations. Coherent states might represent a continuous set of states ascribable to classical universes, which are in this way interpretable as a multiverse. The different universes differ from one another in a smooth way by the value taken on by the parameter \(z\).
The distinction between on-shell (\(\hat{H}\phi_{n}=0\)) and off-shell (\(\hat{H}\phi_{n}\neq 0\)) contributions depends on the choice of the factor ordering. This ultimately implies that, if the factor ordering choice becomes eventually related to the particular choice of a time variable, the different Hamiltonian eigenstates may represent the so-called ground state wave functional for particular choices of the time variable, in the configuration space. In that case, coherent states can be interpreted as the ground state for a given time variable, i.e., for particular reference system.
###### Acknowledgements.
The authors thank J. P. Gazeau for useful comments and explanations. This paper was supported by CAICYT under Research Project No. FIS2005-01181, and by the Research Cooperation Project CSIC-CNRST.
## References
* (1) P. F. González-Díaz and S. Robles-Pérez, Submitted.
* (2) J. R. Klauder, J. Math. Phys. 4 (1963) 1058.
* (3) C. Quesne, J. Phys. A 35 (2002) 9213.
* (4) I. S. Gradshteyn and I. M. Ryzhik, _Tables of Integrals, Series and Products_, A. Jeffrey (ed.), Academic Press (1994).
* (5) S. Twareque Ali, M. Englis and J. P. Gazeau, [arXiv:math-ph/0311042].
* (6) Y. Hassouni, E. M. F. Curado and M. A. Rego-Monteiro, Physical Review A 71 (2005) 022104, [arXiv:math-ph/0408030].
* (7) E. M. F. Curado and M. A. Rego-Monteiro, J. Phys. A 34 (2001) 3253.
* (8) A. Vilenkin, _Boundary conditions in quantum cosmology_, Phys. Rev. D 33 (1986) 3560.
* (9) J. B. Hartle and S. W. Hawking, _Wave function of the Universe_, Phys. Rev. D 28 (1983) 2960.
|
1610.09079 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 54994,
"num_imgs": 5,
"llama3_tokens_count": 16945
} | [
"content_image/1610.09079/x1.png",
"content_image/1610.09079/x4.png",
"content_image/1610.09079/x7.png",
"content_image/1610.09079/x9.png",
"content_image/1610.09079/x12.png"
] | # Stability analysis of the numerical Method of characteristics applied to a class of energy-preserving hyperbolic systems. Part I: Periodic boundary conditions
T.I. Lakoba¹, Z. Deng
Department of Mathematics and Statistics, 16 Colchester Ave.,
University of Vermont, Burlington, VT 05401, USA
**Abstract**
We study numerical (in)stability of the Method of characteristics (MoC) applied to a system of non-dissipative hyperbolic partial differential equations (PDEs) with periodic boundary conditions. We consider three different solvers along the characteristics: simple Euler (SE), modified Euler (ME), and Leap-frog (LF). The two former solvers are well known to exhibit a mild, but unconditional, numerical instability for non-dissipative ordinary differential equations (ODEs). They are found to have a similar (or stronger, for the MoC-ME) instability when applied to non-dissipative PDEs. On the other hand, the LF solver is known to be stable when applied to non-dissipative ODEs. However, when applied to non-dissipative PDEs within the MoC framework, it was found to have by far the strongest instability among all three solvers. We also comment on the use of the fourth-order Runge–Kutta solver within the MoC framework.
**Keywords**: Method of characteristics, Coupled-wave equations, Numerical instability.
## 1 Introduction
In this series of two papers we address numerical stability of the Method of characteristics (MoC) applied to a class of non-dissipative hyperbolic partial differential equations (PDEs) in 1 time + 1 space dimension. For reference purposes, we will now present the idea of the MoC using the following system as an example:
\[\begin{array}[]{rcl}u_{1t}+u_{1x}&=&f_{1}(u_{1},u_{2})\\ u_{2t}-u_{2x}&=&f_{2}(u_{1},u_{2}),\end{array}\] (1)
where \(f_{1,2}\) are some functions. In the MoC, each of the equations in (1) is transformed to an ordinary differential equation (ODE) in its respective variable, \(\eta_{\pm}=x\mp t\), and then is solved by an ODE numerical solver.
The MoC is widely used and is described in most textbooks on numerical solution of PDEs. It is, therefore, quite surprising that its stability has not been investigated in as much detail as that of most other numerical methods. For example, in [1], it was considered by the von Neumann analysis (which implies periodic boundary conditions (BC)) for a simple scalar model
\[u_{t}+u_{x}=0,\] (2a)
with the ODE solver being the simple Euler method. Clearly, that analysis could be straightforwardly generalized for a vector model
\[{\bf u}_{t}+{\bf Au}_{x}={\bf 0},\] (2b)
where \({\bf A}\) is any constant diagonalizable matrix with real eigenvalues; an analysis for a similar model in three spatial dimensions was presented in [2]. However, we have been unable to find a systematic stability analysis — even for periodic BC — for a system of the form
\[{\bf u}_{t}+{\bf Au}_{x}={\bf Bu},\] (3)
where \({\bf A,\,B}\) are constant matrices. A partial exception is a conference paper [3], where a model with a somewhat more complicated right-hand side (rhs), describing a specific engineering application, was studied by the von Neumann analysis. The focus of that paper was on the examination of the effect of various parameters of that particular model rather than on the impact on stability of the MoC by the ODE solver used. (In fact, the ODE solver used in [3] was the simple Euler method.)
In this paper we will examine the effect of the ODE solver on the stability of the MoC applied to the model problem (3) with _periodic_ BC. We will further limit the scope of the problem as follows. First, and most importantly, we will consider only non-dissipative and stable systems. This implies that matrix \({\bf B}\) possesses only imaginary eigenvalues. Second, we will consider only three ODE solvers: simple Euler (SE), modified Euler (ME), and the Leap-frog (LF). Let us note that for a non-dissipative stable ODE, the numerical solutions obtained by the SE and ME are known to be mildly, but unconditionally, unstable, with the growth rates of the numerical instability being \(O(\Delta t)\) and \(O(\Delta t^{3})\), respectively; see, e.g., Secs. 3 and 4 below. On the contrary, the LF solver is known to quasi-preserve (i.e., not shift systematically) at least some of the conserved quantities of non-dissipative ODEs.
Our third assumption is mostly cosmetic and does not affect the methodology of our analysis. Namely, we will assume that the constant matrix \({\bf A}\) in (3) can be diagonalized into the form
\[{\bf A}=c_{1}{\bf I}_{N}+c_{2}{\bf\Sigma},\qquad{\bf\Sigma}={\rm diag}({\bf I} _{N_{1}},-{\bf I}_{N_{2}}),\] (4a)
where \(c_{1,2}\) are scalars, \({\bf I}_{N}\) is the \(N\times N\) identity matrix, and \(N\) is the dimension of vector \({\bf u}\). Without loss of generality (i.e., by a simple change of variables) one can set
\[c_{1}=0,\quad c_{2}=1.\] (4b)
We will also let \(N_{1}=N_{2}=N/2\). Particular conclusions of the forthcoming analysis may change if the diagonalization of matrix \({\bf A}\) is different from (4a). However, the methodology of the analysis will not be affected.
Our analysis reveals two facts which, to our knowledge, have not been previously pointed out. First, in contrast to the situation with most numerical methods, the most numerically unstable Fourier harmonics may occur not at the edges of the spectrum but in the “middle” (see the footnote in Section 4) of it. In fact, the (in)stability of the highest Fourier harmonics is the same as that of the lowest ones. This fact easily follows from the foregoing analysis and holds for the MoC employing any ODE solver as long as matrix \({\bf A}\) has the form (4b).
Second, and quite unexpectedly, the LF, which outperforms the SE and ME when applied to non-dissipative ODEs, performs much worse than those methods when applied to non-dissipative stable PDEs within the MoC framework.
The main part of this work is organized as follows. In Section 2 we specify the physical model to be considered below. Our analysis will be developed for the linearized version of that model. In Section 3 we present the von Neumann analysis and its verification by direct numerical simulations of the PDE for the case when the MoC employs the SE solver. In Sections 4 and 5 we repeat steps of Section 3 for the cases where the ODE solvers are the ME and LF, respectively.
It should be noted that in all three cases, the von Neumann analysis leads one to \(4\times 4\) generalized eigenvalue problems which involve both matrices \({\bf B}\) and \({\bf A}\) (or, as per (4a), \({\bf\Sigma}\)). The largest-magnitude eigenvalue, which determines the stability of the numerical method, of those problems cannot be analytically found or even related to the _individual_ eigenvalues of the “physical” matrices \({\bf B}\) and \({\bf A}\). However, for given \({\bf B}\) and \({\bf A}\), that eigenvalue can be easily and quickly found numerically by standard built-in commands in software like Matlab, and thus we use this semi-analytical approach.
In Section 6 we comment on the case when the ODE solver is the fourth-order classical Runge–Kutta (cRK) method. We will not, however, analyze it in any detail because, as we will demonstrate, there is an uncertainty about using the cRK solver within the MoC framework. A detailed investigation of this issue is outside the scope of this paper. In Section 7 we present our conclusions and demonstrate the validity of our analysis for a broader class of systems than the particular system considered in Sections 2–5. First, in Appendix A we apply our analysis of the MoC-LF scheme to a class of models of which the model considered in the main text is a special case. These models still result in linearized equations with constant coefficients. Then, in Appendix B, we present a different model with a _spatially localized_ solution and demonstrate by direct numerical simulations that essential conclusions of our von Neumann analysis remain valid for (in)stability of the MoC applied to that non-constant-coefficient system.
## 2 Physical model
While our study will focus on a linear problem of a rather general form (see Eq. (10a) below), we will begin by stating a specific nonlinear problem which had originally motivated our study and whose linearization leads to (10a). We consider the system
\[{\bf\underline{S}}^{\pm}_{\;t}\pm{\bf\underline{S}}^{\pm}_{\;x}={\bf\underline {S}}^{\pm}\times{\bf\hat{J}}{\bf\underline{S}}^{\mp},\] (5)
where \({\bf\underline{S}}^{\pm}\equiv[S^{\pm}_{1},S^{\pm}_{2},S^{\pm}_{3}]^{T}\), \({\bf\hat{J}}={\rm diag}(1,-1,-2)\), and superscript ‘T’ denotes the transposition. This system is a representative of a class of models that arise in studying propagation of light in birefringent optical fibers with Kerr nonlinearity [4]–[7]; see Appendix A for more detail. The nonlinear system (5) has a rather special form and arises in a specific application. However, its _linearized_ form (see Eq. (10a) below), for which we will analyze the stability of the MoC, is quite general. A wide and diverse range of physical problems lead to the same equation, as we will explain after Eqs. (11b).
In the component form, system (5) is:
\[(\partial_{t}+\partial_{x})S^{+}_{1}=S^{+}_{3}S^{-}_{2}-2S^{+}_{2}S^{-}_{3},\] (6a)
\[(\partial_{t}+\partial_{x})S^{+}_{2}=2S^{+}_{1}S^{-}_{3}+S^{+}_{3}S^{-}_{1},\] (6b)
\[(\partial_{t}+\partial_{x})S^{+}_{3}=-(S^{+}_{1}S^{-}_{2}+S^{+}_{2}S^{-}_{1}),\] (6c)
\[(\partial_{t}-\partial_{x})S^{-}_{1}=S^{-}_{3}S^{+}_{2}-2S^{-}_{2}S^{+}_{3},\] (6d)
\[(\partial_{t}-\partial_{x})S^{-}_{2}=2S^{-}_{1}S^{+}_{3}+S^{-}_{3}S^{+}_{1},\] (6e)
\[(\partial_{t}-\partial_{x})S^{-}_{3}=(\partial_{t}+\partial_{x})S^{+}_{3}.\] (6f)
It has four families of soliton/kink solutions, a special case of one of which is (see, e.g., [6]):
\[S^{\pm}_{1}=\pm\frac{1}{\sqrt{3}}\,\mbox{sech}(\sqrt{2}x),\qquad S^{\pm}_{2}= \mp\tanh(\sqrt{2}x),\qquad S^{\pm}_{3}=\sqrt{2}\,S^{\pm}_{1}.\] (7)
(The other three families differ from (7) by combinations of signs of the solution’s components.) However, we will focus on the analysis of the (in)stability of the MoC applied to a simpler, _constant_ solution:
\[S^{\pm}_{1,3}=0,\qquad S^{\pm}_{2}=\pm 1.\] (8)
This solution corresponds to the asymptotic value of (7) as \(x\rightarrow-\infty\). It can be shown (see, e.g., [8]) that this solution is stable in the sense to be defined below. However, we have found that when simulated by certain “flavors” of the MoC, it can be numerically unstable. If that asymptotic, constant solution is numerically unstable, then so will be any “physically interesting” non-constant solutions, like (7), possessing the asymptotics (8). Thus, numerical stability of the constant solution (8) is necessary for the successful performance of the MoC on the soliton/kink solution (7) and related ones. Moreover, considering the numerical stability of the MoC simulating the simpler solution (8) will allow us to focus on the behavior of the numerical method without being distracted by the complexity of the physical model. The methodology of our analysis, as well as at least some of our general conclusions, will hold for the MoC applied to other non-dissipative hyperbolic PDEs; we will discuss this in Section 7 and provide details in the Appendices.
For future reference, we linearize Eqs. (6f) on the background of solution (8):
\[S^{\pm}_{j}=S^{\pm}_{j\,0}+s^{\pm}_{j},\qquad j=1,2,3;\] (9)
here \(S^{\pm}_{j\,0}\) are the components of the exact solution (8) and \(s^{\pm}_{j}\) are small perturbations. The linearized system (6f) reduces to:
\[{\bf s}_{t}+{\bf\Sigma}\,{\bf s}_{x}={\bf P}\,{\bf s},\] (10a)
\[(s^{\pm}_{2})_{t}\pm(s^{\pm}_{2})_{x}=0,\] (10b)
where \({\bf s}=[s^{+}_{1},s^{+}_{3},s^{-}_{1},s^{-}_{3}]^{T}\), the \(4\times 4\) matrices in (10a) are:
\[{\bf\Sigma}={\rm diag}(I,-I),\qquad{\bf P}=\left(\begin{array}[]{rr}-A&B\\ -B&A\end{array}\right),\] (11a)
and the \(2\times 2\) matrices in (11a) are:
\[I=\left(\begin{array}[]{rr}1&0\\ 0&1\end{array}\right),\qquad A=\left(\begin{array}[]{rr}0&1\\ -1&0\end{array}\right),\qquad B=-\left(\begin{array}[]{rr}0&2\\ 1&0\end{array}\right).\] (11b)
Non-dissipative linearized equations of the form (10a), with the same or similar \({\bf\Sigma}\) but a more general form of \({\bf P}\), arise in a wide range of physical applications involving the interaction of waves propagating with distinctly different velocities. Examples include: acousto-optical interactions, known as Stimulated Brillouin Scattering ([9], Sec. 8.3; [10, 11]); Stimulated Raman Scattering ([9], Sec. 9.4; [12]) and, more generally, parametric interaction among two or more electromagnetic waves [13] in semi-classical optics [14, 15], quantum optics [16, 17], photonics [18], and plasma physics applications [19, 20]; interaction of crossed beams in photorefractive materials ([9], Sec. 10.6); interaction of counter- and co-propagating light waves in a medium with a periodic refractive index ([21, 22, 23]); and the relativistic field theory [24, 25] (also see [26] for a general form of coupled-wave-like field models with a cubic nonlinearity and [27] for more recent references on these models’ solutions). Interestingly, models similar to those in the relativistic field theory also arise in nonlinear fiber optics [28, 29]. In Section 7 we will demonstrate that our results for system (10b), (11b) are also applicable to the one-dimensional Gross–Neveu model [25] of the relativistic field theory.
In Eqs. (10b) and (11b) and in what follows we will adopt the following notations. Boldfaced quantities with an underline, \({\bf{\bf\underline{S}}^{\pm}}\), and with a hat, \({\bf\hat{J}}\), will continue to denote \(3\times 1\) vectors and \(3\times 3\) matrices, respectively, as in (5). Boldfaced quantities _without_ an underline or a hat will denote \(4\times 4\) matrices or \(4\times 1\) vectors, as in (10a); the ambiguity of the same notations for matrices and vectors here will not cause any confusion. Finally, underlined letters in regular (not boldfaced) font will denote \(2\times 1\) vectors; e.g.:
\[\underline{s}^{\pm}\equiv[s^{\pm}_{1},s^{\pm}_{3}]^{T}.\]
Clearly then, \({\bf s}\equiv\left[(\underline{s}^{+})^{T},\,(\underline{s}^{-})^{T}\right]^{T}\).
Seeking the solution of (10a) to be proportional to \(e^{ikx-i\omega t}\), one can show that \(\omega\in\mathbb{R}\) for all \(k\in\mathbb{R}\). This means that solution (8) is stable on the infinite line, as mentioned two paragraphs above. In particular, \(\omega(k=0)\in\mathbb{R}\), which is equivalent to the statement that all nonzero eigenvalues of \({\bf P}\) are purely imaginary. Indeed, from (11b) one finds:
\[\lambda_{\,\bf P}=0,0,\pm i\sqrt{6}\,;\qquad{\rm so}\quad\lambda_{\,\bf P}\in i \,\mathbb{R}.\] (12)
Returning to the full set of equations (5) and scalar-multiplying each of them by its respective \({\bf\underline{S}}^{+}\) or \({\bf\underline{S}}^{-}\), we notice that they admit the following “conservation” relations:
\[(\partial_{t}\pm\partial_{x})\,|{\bf\underline{S}}^{\pm}|^{2}=0,\] (13a)
where \(|\ldots|\) stands for the length of the vector. Two other conservation relations can be obtained in a similar fashion. Note that for periodic BC, considered in this paper, these relations become conservation laws. E.g., (13a) yield:
\[\partial_{t}\int_{0}^{L}|{\bf\underline{S}}^{\pm}|^{2}dx=0,\] (13b)
where \(L\) is the length of the spatial domain. Moreover, a Hamiltonian of system (5) can be constructed in certain action-angle variables [30].
Therefore, it is desirable that the numerical method also conserve, or quasi-conserve (i.e., not lead to a systematic shift of), at least some of these quantities. A simple explicit method with this property for ODEs is the LF, whereas explicit Euler methods are known to lead to mild numerical instability for conservative ODEs. For that reason we had expected that using the LF as the ODE solver in the MoC would produce better results than the explicit Euler solvers. However, we found that, on the contrary, it produces the worst results. We will now turn to the description and analysis of the SE, ME, and LF solvers for the MoC with periodic BC. We will refer to the corresponding “flavors” of the MoC as MoC-SE, MoC-ME, and MoC-LF, respectively.
## 3 MoC with the SE solver
The SE is a first-order method and, moreover, is well-known to lead to a mild yet conspicuous numerical instability when applied to conservative ODEs (see, e.g., [31]). The reason that we consider this method is that it is simple enough to illustrate the approach. Thus, describing sufficient details in this Section will allow us to skip them in subsequent sections, devoted to second-order methods, where such details are more involved.
### Analysis
The form of the MoC-SE equations for system (6f) is:
\[(S^{\pm}_{j})^{n+1}_{m}=(S^{\pm}_{j})^{n}_{m\mp 1}+h\,f^{\pm}_{j}\big{(}\,({ \bf\underline{S}}^{+})^{n}_{m\mp 1},\,({\bf\underline{S}}^{-})^{n}_{m\mp 1}\, \big{)},\qquad j=1,2,3;\] (14)
where \(f_{j}^{\pm}\) are the nonlinear functions on the rhs of (6f), and \(m=0,\ldots,M\), with \((M+1)\) being the number of grid points. Note that in this paper we have set the temporal and spatial steps equal, \(\Delta t=\Delta x=h\), to ensure having a regular grid. To impose periodic BC, we make the following identifications in (14):
\[({\bf\underline{S}}^{\pm})^{n}_{-1}\equiv({\bf\underline{S}}^{\pm})^{n}_{M}\,, \qquad({\bf\underline{S}}^{\pm})^{n}_{M+1}\equiv({\bf\underline{S}}^{\pm})^{n} _{0}\,.\] (15)
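For concreteness, a minimal Python/NumPy sketch of one step of scheme (14) with the periodic identification (15) is given below. It is not the authors' code: the grid size, noise amplitude, random seed, and integration time are arbitrary illustrative choices, and the wrap-around (15) is handled with np.roll.

```python
import numpy as np

J = np.array([1.0, -1.0, -2.0])                # diagonal of J-hat in Eq. (5)

def f(Sa, Sb):
    """Right-hand side of (5): S_a x (J-hat S_b), applied row-wise on the grid."""
    return np.cross(Sa, J * Sb)

def moc_se_step(Sp, Sm, h):
    """One step of the MoC-SE scheme (14); np.roll implements the periodic BC (15)."""
    Sp_l, Sm_l = np.roll(Sp, 1, axis=0), np.roll(Sm, 1, axis=0)    # values at node m-1
    Sp_r, Sm_r = np.roll(Sp, -1, axis=0), np.roll(Sm, -1, axis=0)  # values at node m+1
    return Sp_l + h * f(Sp_l, Sm_l), Sm_r + h * f(Sm_r, Sp_r)

# Illustrative setup: solution (8) plus white noise of magnitude 1e-12, L = 100.
L, h = 100.0, 0.04
M = int(round(L / h))
rng = np.random.default_rng(1)
Sp = np.tile([0.0, 1.0, 0.0], (M, 1)) + 1e-12 * rng.standard_normal((M, 3))
Sm = np.tile([0.0, -1.0, 0.0], (M, 1)) + 1e-12 * rng.standard_normal((M, 3))
for _ in range(int(40.0 / h)):                 # integrate up to t = 40
    Sp, Sm = moc_se_step(Sp, Sm, h)
```

Recording \(S^{\pm}_{1,3}\) after each step provides the error history analyzed in Section 3.2.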
Linearizing Eqs. (14), one arrives at:
\[\left(\begin{array}[]{c}\underline{s}^{+}\\ \underline{s}^{-}\end{array}\right)_{m}^{n+1} = \left(\begin{array}[]{c}\underline{s}^{+}\\ \underline{0}\end{array}\right)_{m-1}^{n}+\left(\begin{array}[]{c}\underline{0 }\\ \underline{s}^{-}\end{array}\right)_{m+1}^{n}+\] (16)
\[\left(\begin{array}[]{ll}P^{++}&P^{+-}\\ \mathcal{O}&\mathcal{O}\end{array}\right)\left(\begin{array}[]{c}\underline{s} ^{+}\\ \underline{s}^{-}\end{array}\right)_{m-1}^{n}+\left(\begin{array}[]{ll} \mathcal{O}&\mathcal{O}\\ P^{-+}&P^{--}\end{array}\right)\left(\begin{array}[]{c}\underline{s}^{+}\\ \underline{s}^{-}\end{array}\right)_{m+1}^{n},\]
where \(\mathcal{O}\) is the \(2\times 2\) zero matrix and
\[\left(\begin{array}[]{ll}P^{++}&P^{+-}\\ P^{-+}&P^{--}\end{array}\right)\equiv\left(\begin{array}[]{ll}-A&B\\ -B&A\end{array}\right)={\bf P}\,.\] (17)
Seeking the solution of (16) with periodic BC in the form \(({\bf s})_{m}^{n}=({\bf s})^{n}\,e^{ikmh}\) reduces (16) to:
\[({\bf s})^{n+1}={\bf N}(z)\,({\bf s})^{n},\] (18a)
\[{\bf N}(z)={\bf Q}\,({\bf I}+h{\bf P}),\qquad{\bf Q}=\exp[\,-iz{\bf\Sigma}\,],\] (18b)
where \(z=kh\), and \({\bf\Sigma}\) is defined in (11b).
Below we will show that the eigenvalues of \({\bf N}\) satisfy \(|\lambda_{\,{\bf N}}|>1\), which implies that the MoC-SE with periodic BC is unstable. One can easily see this for the longest- and shortest-period Fourier harmonics (\(z=0\) and \(z=\pi\), respectively), for which \({\bf Q}(z=0)={\bf I}\) and \({\bf Q}(z=\pi)=-{\bf I}\), so that relation (12) can be invoked. Then one has:
\[|\lambda_{\,{\bf N}}(z=0,\pi)|\,=\,\big{|}1+ih|\lambda_{\,{\bf P}}|\,\big{|}\, >1.\] (19a)
However, for other Fourier harmonics of the numerical error, i.e., for \(z\neq 0\) or \(\pi\), an analytical relation between the eigenvalues \(\lambda_{\,{\bf N}}\) of the numerical method and the eigenvalues \(\lambda_{\,{\bf P}}\) of the “physical” matrix \({\bf P}\) cannot be established. In particular,
\[|\lambda_{\,{\bf N}}(z\neq 0,\pi)|\;\neq\;|1+h\lambda_{\,{\bf P}}|,\] (19b)
other than by accident. Therefore, to determine \(|\lambda_{\,{\bf N}}|\), one has to find these eigenvalues numerically. Let us stress that this limits only the result, but _not the methodology_, of our von Neumann analysis to a specific matrix \({\bf P}\). Indeed, for any \({\bf P}\), the relation between the amplification matrix \({\bf N}\) of the numerical scheme and the “physical” matrix \({\bf P}\) is given by (18b). The eigenvalues of \({\bf N}\) are found within a second by modern software, whereas direct numerical simulations of scheme (14) (and other MoC schemes considered in Sections 4 and 5) take much longer.
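As an illustration of this semi-analytical procedure, the following Python/NumPy sketch (a stand-in for the Matlab computation used in the paper) sweeps the normalized wavenumber \(z\) and evaluates \(\max|\lambda_{\,{\bf N}}(z)|\) from Eqs. (11b) and (18b); the value of \(h\) and the \(z\)-grid are our own choices.

```python
import numpy as np

# Semi-analytical von Neumann analysis of the MoC-SE, Eq. (18b):
# max |eigenvalue| of N(z) = Q(z)(I + hP) over the normalized wavenumber z = kh.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = -np.array([[0.0, 2.0], [1.0, 0.0]])
P = np.block([[-A, B], [-B, A]])                      # Eqs. (11a)-(11b)
sigma = np.array([1.0, 1.0, -1.0, -1.0])              # diagonal of Sigma

def max_amplification(h, z):
    Q = np.diag(np.exp(-1j * z * sigma))              # Q = exp(-i z Sigma)
    N = Q @ (np.eye(4) + h * P)
    return np.abs(np.linalg.eigvals(N)).max()

h = 0.04
zs = np.linspace(0.0, np.pi, 401)
lam = np.array([max_amplification(h, z) for z in zs])
print(lam[0], lam[-1])   # z = 0 and z = pi give the same value, cf. Eq. (19a)
```

The same sweep covers the MoC-ME of Section 4 once the definition of \({\bf N}\) is replaced by Eq. (26).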
The largest (in magnitude) eigenvalue of \({\bf N}\), found by Matlab, is shown as a function of the normalized wavenumber \(z\) in Fig. 1(a). The two curves illustrate the general trend as \(h\) is varied. They also show that the strongest instability of the MoC-SE applied to (5) occurs in the ODE- and “anti-ODE” limits, i.e. for \(z=0\) and \(z=\pi\). In particular, since \({\bf Q}(z=\pi)=-{\bf Q}(z=0)\), one concludes that \(|\lambda_{\,{\bf N}}(z=\pi)|\,=\,|\lambda_{\,{\bf N}}(z=0)|\). In other words, the instability of the MoC-SE for the highest and lowest Fourier harmonics is the same, in contrast to that of other numerical methods.
<figure><img src="content_image/1610.09079/x1.png"><figcaption>Figure 1: (a) Eigenvalue of matrix N in (18b) for h=0.01 and h=0.04. (b)Spectrum of the numerical error obtained by simulating scheme (14), (15) forL=100 and t=520 (for h=0.01) and t=130 (for h=0.04). (c) Time evolution ofthe error (21) for h=0.04.</figcaption></figure>
### Numerical verification
First, in Fig. 1(b) we show the logarithm of the Fourier spectrum of the numerical error obtained by applying the MoC-SE to (6f). Namely, we simulated scheme (14) with the initial condition being (8) plus white (in space) noise of magnitude \(10^{-12}\) and subtracted from the numerical solution the exact solution (8). The curves in Fig. 1(b) have the same qualitative shapes as the corresponding curves in Fig. 1(a), which confirms the validity of the analysis in Section 3.1. Indeed, it follows from (18a) that (individual) Fourier harmonics of the numerical error satisfy
\[\|({\bf s})^{n}\|\propto|\lambda_{\,{\bf N}}|^{n}\approx\exp\left[\,\left((| \lambda_{\,{\bf N}}|-1)/h\right)\,t\,\right],\qquad\Rightarrow\qquad\ln\|({\bf s })^{n}\|\propto\left((|\lambda_{\,{\bf N}}|-1)/h\right)\,t\,,\] (20)
where \(\|\ldots\|\) denotes the Euclidean norm of the vector.
Second, to verify our analytical results from yet another perspective, we will compare the growth rate measured for a “total” numerical error (see below) with the growth rate predicted by the analysis of Section 3.1. The “total” numerical error is computed as
\[|{\bf s}_{\rm tot}|=\left(\sum_{m=0}^{M}\|({\bf s})_{m}\|^{2}\right)^{1/2}\,.\] (21)
Note that the summation over \(m\) smooths out the noisy spatial profile of the error. The perturbations \(s^{\pm}_{2}\) were indeed found to be much smaller than \(s^{\pm}_{1,3}\) (as long as the latter are themselves sufficiently small), as predicted by (10b), and hence they are not included in the “total” error (21). Now, according to Section 3.1, the highest growth rate occurs for harmonics with \(z=0,\;\pi\). It follows from (19a) that
\[|\lambda_{\,{\bf N}}(z=0)|^{n}=\left|1+ih\max|\lambda_{\,{\bf P}}|\,\right|^{t/h}\approx\exp\left[\frac{h}{2}\max|\lambda_{\,{\bf P}}|^{2}t\right].\] (22)
Thus, the theoretical growth rate is
\[\gamma_{\rm theor}=3h,\] (23)
where we have used (12).
On the other hand, the growth rate can be measured from the numerical solution:
\[\gamma_{\rm meas}=\frac{\ln|{\bf s}_{\rm tot}(t_{2})|-\ln|{\bf s}_{\rm tot}(t_ {1})|}{t_{2}-t_{1}}\,,\] (24)
where \(t_{1,2}\) are some times when the dependence of \(\ln|{\bf s}_{\rm tot}|\) on \(t\) appears to be linear (as, e.g., in Fig. 1(c) for \(t>40\)). Using (24), we have found for the parameters listed in the caption to Fig. 1 that \(\gamma_{\rm meas}\) agrees with its theoretical value (23) to two significant figures.
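A minimal sketch of the measurement defined by Eqs. (21) and (24) is given below; it assumes arrays t and err recorded during a simulation such as the MoC-SE sketch above, and the fitting window and variable names are our own choices.

```python
import numpy as np

def total_error(Sp, Sm):
    """Total error (21): for the background (8), s^{pm}_{1,3} coincide with S^{pm}_{1,3}."""
    return np.sqrt(np.sum(Sp[:, [0, 2]]**2) + np.sum(Sm[:, [0, 2]]**2))

def measured_growth_rate(t, err, t1, t2):
    """Two-point estimate (24) of the growth rate from the recorded error history."""
    i1, i2 = np.searchsorted(t, [t1, t2])
    return (np.log(err[i2]) - np.log(err[i1])) / (t[i2] - t[i1])

# hypothetical usage, with t and err recorded once per step during the simulation:
# gamma = measured_growth_rate(t, err, 40.0, 120.0)
# print(gamma / (3 * h))   # should be close to 1, cf. Eq. (23)
```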
## 4 MoC with the ME solver
The MoC-ME applied to system (6f) yields the following scheme:
\[\overline{S^{\pm}_{j}}_{m}=(S^{\pm}_{j})^{n}_{m\mp 1}+h\,f^{\pm}_{j}\big{(}\,( {\bf\underline{S}}^{+})^{n}_{m\mp 1},\,({\bf\underline{S}}^{-})^{n}_{m\mp 1}\, \big{)},\qquad j=1,2,3;\] (25a)
\[(S^{\pm}_{j})^{n+1}_{m}=\frac{1}{2}\left[\,(S^{\pm}_{j})^{n}_{m\mp 1}+ \overline{S^{\pm}_{j}}_{m}+h\,f^{\pm}_{j}\Big{(}\,\overline{{\bf\underline{S}} ^{+}}_{m},\,\overline{{\bf\underline{S}}^{-}}_{m}\,\Big{)}\,\right]\,,\] (25b)
where the notations are the same as in Section 3. Note that the rhs of (25a) is the same as that of (14). Following the analysis of Section 3.1, one obtains a counterpart of (18b) where now
\[{\bf N}=\frac{1}{2}\left[{\bf Q}+({\bf I}+h{\bf P})\,{\bf Q}\,({\bf I}+h{\bf P })\,\right].\] (26)
As we discussed in Section 3.1, the eigenvalues of \({\bf N}\) cannot be analytically related to the eigenvalues of \({\bf P}\) and hence need to be found numerically. (As for the MoC-SE before, this takes fractions of a second for modern software, unlike the much longer direct simulations of scheme (25b).) The magnitude of the largest-in-magnitude such eigenvalue is shown in Fig. 2(a). The logarithm of the Fourier spectrum of \(\|{\bf s}\|\) is plotted in Fig. 2(b) and is seen to agree qualitatively with the result in Fig. 2(a); see also (20).
<figure><img src="content_image/1610.09079/x4.png"><figcaption>Figure 2: (a) Eigenvalue of matrix N in (26) for h=0.01. The curves forh≤0.05 all look qualitatively similar to the one shown here; in particular,their maximum value is 1. (b) Spectrum of the numerical error obtained bysimulating scheme (25b), (15) for L=100 and t=800 with h=0.01. (c)Comparison of the growth rate of MoC-ME measured from scheme (25b), (15) withthat established by formula (27b). Note that the value 1 of the ratio wouldcorrespond to the perfect agreement between the numerics and analysis.</figcaption></figure>
Let us note that the numerical instability of the MoC-ME scheme in the ODE and anti-ODE limits is much weaker than that of the MoC-SE, because from (26),
\[|\lambda_{\,{\bf N}}(z=0,\,\pi)|=\left(1+\frac{h^{4}}{4}\max|\lambda_{\,{\bf P }}|^{4}\right)^{1/2},\]
and hence similarly to (22), one has
\[\gamma_{\rm theor}(z=0,\,\pi)=\frac{h^{3}}{8}\max|\lambda_{\,{\bf P}}|^{4}\,.\] (27a)
However, Fig. 2(a) shows that, _unlike_ the MoC-SE, the strongest instability of the MoC-ME occurs in the “_middle_”² of the Fourier spectrum. Moreover, this growth rate has the same order of magnitude as the growth rate of the MoC-SE’s instability. Therefore, it is the growth rate of the harmonics with \(k\sim\pi/(2h)\) that one would measure in the numerical experiment. Using the information from Fig. 2(a) and its caption, one has
\[|\lambda_{\,{\bf N}}(z=\pi/2)|\approx 1+h^{2},\]
and hence
\[\gamma_{\rm theor}(z=\pi/2)\approx h.\] (27b)
Figure 2(c) shows that the growth rates computed from the (semi-)analytical formula (27b) and measured, as explained in Section 3.2, in direct numerical simulations of scheme (25b), agree reasonably well.
[FOOTNOTE:2][ENDFOOTNOTE]
In Section 7 we will demonstrate that these conclusions about the numerical instability of the MoC-ME remain qualitatively valid for a different system of PDEs, which has spatially localized (as opposed to constant) coefficients; see Appendix B.
## 5 MoC with the LF solver
Unlike the Euler ODE solvers considered in the previous two sections, the LF involves the solution at three time levels:
\[(S^{\pm}_{j})^{n+1}_{m}=(S^{\pm}_{j})^{n}_{m\mp 2}+2h\,f^{\pm}_{j}\big{(}\,({ \bf\underline{S}}^{+})^{n}_{m\mp 1},\,({\bf\underline{S}}^{-})^{n}_{m\mp 1}\, \big{)},\qquad j=1,2,3.\] (28)
Given the initial condition, one can find the solution at time level \(n=1\) with, e.g., the MoC-SE. To enforce periodic BC, convention (15) needs to be supplemented by
\[({\bf\underline{S}}^{\pm})^{n}_{-2}\equiv({\bf\underline{S}}^{\pm})^{n}_{M-1} \,,\qquad({\bf\underline{S}}^{\pm})^{n}_{M+2}\equiv({\bf\underline{S}}^{\pm})^ {n}_{1}\,.\] (29)
The counterpart of (18b) is now
\[({\bf s})^{n+1}-2h{\bf Q}{\bf P}\,({\bf s})^{n}+{\bf Q}^{2}\,({\bf s})^{n-1}={ \bf 0}.\] (30)
The (in)stability of the MoC-LF is determined by whether the magnitude of the largest eigenvalue of the quadratic eigenvalue problem obtained from (30) and leading to
\[\det\left(\lambda^{2}{\bf I}-2\lambda h{\bf Q\,P}+{\bf Q}^{2}\right)=0\] (31)
exceeds unity. As previously for the MoC-SE and MoC-ME, this \(4\times 4\) eigenvalue problem has to be solved numerically for a given \({\bf P}\). The largest magnitude of its eigenvalue, computed with Matlab’s command polyeig for \({\bf P}\) given by (11b), is plotted in Fig. 3(a).
<figure><img src="content_image/1610.09079/x7.png"><figcaption>Figure 3: (a) Eigenvalue of problem (31) for h=0.01 and h=0.04. (b) Spectrumof the numerical error obtained by simulating scheme (28), (15), (29) forL=100 and t=10. </figcaption></figure>
It should be noted that the quantity \((|\lambda|-1)\) for the MoC-LF scales as \(h\), in contrast to the cases of MoC-SE and MoC-ME, where it scales as \(h^{2}\). Therefore, the instability growth rate of the MoC-LF is: \(\gamma=O(1)\), which is much greater than those rates of the MoC-SE and MoC-ME. The reason for this will come out as a by-product when we qualitatively explain, in the next paragraph, why the strongest instability of the MoC-LF for the PDE system in question occurs near \(z=\pi/2\).
First note that in the ODE limit, \(z=0\), one has \({\bf Q}(0)={\bf I}\), and hence from (31) one recovers the well-known relation between the amplification factor \(\lambda\) of the ODE-LF and eigenvalues of \({\bf P}\):
\[\lambda(z=0)=h\lambda_{\,{\bf P}}\pm\sqrt{1+(h\lambda_{\,{\bf P}})^{2}}.\] (32)
Given \(\lambda_{\,{\bf P}}\in i\mathbb{R}\), as would be the case for any stable non-dissipative system, this yields \(|\lambda(z=0)|=1\) (here and below we assume that \(h\max|\lambda_{\,{\bf P}}|<1\)). A similar situation holds in the anti-ODE limit, where \({\bf Q}=-{\bf I}\). However, for \(z=\pi/2\), \({\bf Q}(\pi/2)=-i\,{\rm diag}(I,-I)\) can be said to be the most dissimilar from \({\bf I}\equiv{\rm diag}(I,I)\), and the counterpart of (32) is:
\[\lambda(z=\pi/2)=-h\lambda_{\,{\bf P}_{\rm mod}}\pm\sqrt{-1+(h\lambda_{\,{\bf P }_{\rm mod}})^{2}},\] (33)
where \({\bf P}_{\rm mod}={\bf Q}(\pi/2)\,{\bf P}\). For \(\lambda_{\,{\bf P}_{\rm mod}}\in i\mathbb{R}\), this yields \(\max|\lambda(z=\pi/2)|>1\), i.e., a numerical instability. In the case of matrix \({\bf P}\) defined in (11b), \(\lambda_{\,{\bf P}_{\rm mod}}=0,0,\pm i\sqrt{2}\), and thus the above statements explain our result shown in Fig. 3(a). In Section 7 we will discuss how this generalizes to some other energy-preserving hyperbolic PDEs.
Moreover, it also follows from (33) that the growth rate of this instability is about \(\max|\lambda_{\,{\bf P}_{\rm mod}}|=O(1)\). (For the specific \({\bf P}\) in (11b), we will show in the second paper of this work that the growth rate is \(3/2\) instead of \(\sqrt{2}=\max|\lambda_{\,{\bf P}_{\rm mod}}|\); this minor difference is responsible for the subtly double-peaked profile of the curves in Fig. 3(a).) We confirmed this by direct numerical simulations: representative spectra of the numerical error are shown in Fig. 3(b) and are seen to agree with the shapes of the curves in Fig. 3(a); see (20). The solution obtained by the MoC-LF becomes completely destroyed by the numerical instability around \(t=25\) regardless of \(h\). In contrast, for the MoC-ME, the solution remains essentially unaffected by the growing numerical error over times \(O(1/h)\gg 10\).
## 6 Comments about the MoC with the cRK solver
The fourth-order cRK method may appear as a natural choice of a higher-order ODE solver to be combined with the MoC. Indeed, not only is its accuracy for ODEs \(O(\Delta t^{4})\), but also the systematic error that it introduces into the numerically computed conserved quantities of energy-preserving ODEs is usually negligible for sufficiently small time steps.
However, a straightforward use of the cRK solver in the MoC framework produces an unsatisfactory numerical method. The corresponding scheme is:
\[({\bf\underline{S}}^{\pm})_{m}^{n+1}=({\bf\underline{S}}^{\pm})_{m\mp 1}^{n}+ \frac{1}{6}\left((\underline{\bm{\kappa}}_{\,1}^{\pm})_{m}+2(\underline{\bm{ \kappa}}_{\,2}^{\pm})_{m}+2(\underline{\bm{\kappa}}_{\,3}^{\pm})_{m}+( \underline{\bm{\kappa}}_{\,4}^{\pm})_{m}\right);\] (34a)
\[(\underline{\bm{\kappa}}_{\,1}^{\pm})_{m}=h\,\underline{\bf f}^{\pm}\left(\,{ \bf\underline{S}}^{+}_{m\mp 1},\;{\bf\underline{S}}^{-}_{m\mp 1}\,\right);\] (34b)
\[(\underline{\bm{\kappa}}_{\,2}^{\pm})_{m}=h\,\underline{\bf f}^{\pm}\left(\,{\bf\underline{S}}^{+}_{m\mp 1}+\tfrac{1}{2}(\underline{\bm{\kappa}}_{\,1}^{+})_{m},\;{\bf\underline{S}}^{-}_{m\mp 1}+\tfrac{1}{2}(\underline{\bm{\kappa}}_{\,1}^{-})_{m}\,\right),\quad(\underline{\bm{\kappa}}_{\,3}^{\pm})_{m}=h\,\underline{\bf f}^{\pm}\left(\,{\bf\underline{S}}^{+}_{m\mp 1}+\tfrac{1}{2}(\underline{\bm{\kappa}}_{\,2}^{+})_{m},\;{\bf\underline{S}}^{-}_{m\mp 1}+\tfrac{1}{2}(\underline{\bm{\kappa}}_{\,2}^{-})_{m}\,\right);\] (34c)
\[(\underline{\bm{\kappa}}_{\,4}^{\pm})_{m}=h\,\underline{\bf f}^{\pm}\left(\,{ \bf\underline{S}}^{+}_{m\mp 1}+(\underline{\bm{\kappa}}_{\,3}^{+})_{m},\;{\bf \underline{S}}^{-}_{m\mp 1}+(\underline{\bm{\kappa}}_{\,3}^{-})_{m}\,\right).\] (34d)
A von Neumann analysis analogous to that presented in Section 4 yields a dependence \(\max|\lambda_{\,{\bf N}}(z)|\) similar to the one shown in Fig. 2(a), and direct numerical simulations confirm that result for system (6f). Thus, scheme (34d) has the instability growth rate \(\gamma=O(h)\), just as the MoC-ME (25b). In fact, for the specific PDE system (6f), this \(\gamma\) is twice that of the MoC-ME’s, and hence scheme (34d) offers no advantage over the MoC-ME.
Perhaps, a proper MoC-cRK needs to ensure that the arguments of \(\underline{\bf f}^{\pm}\) in (34c) and (34d) be taken on the same characteristics. E.g., (34d) would then need to be replaced with
\[(\underline{\bm{\kappa}}_{\,4}^{\pm})_{m}=h\,\underline{\bf f}^{\pm}\left(\,{ \bf\underline{S}}^{+}_{m-1}+(\underline{\bm{\kappa}}_{\,3}^{+})_{m},\;{\bf \underline{S}}^{-}_{m+1}+(\underline{\bm{\kappa}}_{\,3}^{-})_{m}\,\right),\]
which mimics the last term in (25b). The node numbers of some of the arguments of \(\underline{\bf f}^{\pm}\) in (34c) would then also need to be modified in some way. This may result in a more stable MoC-cRK. However, a detailed exploration of this topic will have to begin with an analysis of the accuracy of the resulting method away from the ODE limit. This is outside the scope of this paper, and therefore we will not pursue it here. Let us mention that we have been unable to find any published research which would study the problem of setting up the proper MoC-cRK for the case of crossing characteristics.
## 7 Summary and discussion
We have presented results of the von Neumann analysis of the MoC combined with three ODE solvers: SE, ME, and LF, as applied to an energy-preserving hyperbolic PDE system, (5) or (6f). Our main findings are as follows.
First, we have found that the numerical instability of the highest and lowest Fourier harmonics in the MoC schemes has the same growth rate. This is in contrast with the situation of other numerical methods, where it is the highest Fourier harmonics that become numerically unstable before the lower ones. Moreover, we have found that the harmonics in the “middle” of the spectrum, i.e., with \(|k|\approx\pi/(2h)\), can become the most numerically unstable. This occurs, for the PDE system in question, for the MoC-ME and MoC-LF schemes. While such harmonics for the MoC-LF occupy a narrow spectral band and hence can, in principle, be filtered out, for the MoC-ME they occupy a substantial portion of the spectrum and hence cannot be filtered out without destroying the solution. It should also be pointed out that at least for some energy-preserving _PDEs_, as the one considered here, the growth rate of the most unstable harmonics has the same order of magnitude, \(O(h)\), for the MoC-SE and MoC-ME. This should be contrasted with the situation for energy-preserving _ODEs_, where the instability growth rate of the ME method, \(O(h^{3})\), is much lower than that, \(O(h)\), of the SE method.
Second, we have found that the MoC based on the LF solver, which (the solver) is known to considerably outperform the Euler schemes for energy-preserving ODEs, performs the worst for the energy-preserving PDE that we considered. Namely, the Fourier harmonics with \(|k|\approx\pi/(2h)\) grow at the rate \(\gamma=O(1)\), which by far exceeds the growth rate \(\gamma=O(h)\) of the numerically most unstable harmonics in both the MoC-SE and MoC-ME.
In Section 6 we have pointed out that, for hyperbolic PDE systems with crossing characteristics, the formulation of a MoC based on the cRK ODE solver and having numerical stability (or, at worst, a milder instability than the MoC-ME) for \(|kh|\sim\pi/2\), remains an open problem.
There remains a question as to whether the above conclusions of our analysis, which necessarily had to be obtained for a _specific_ system (as we explained in the main text), apply to other energy-preserving hyperbolic PDEs. Again, for the same reasons, this cannot be answered in general, as no analytical relation between the eigenvalues of the “physical” matrix \({\bf P}\) and the numerical amplification factor \(|\lambda_{\bf N}|\) can be obtained for an arbitrary wavenumber \(k\). Therefore, in Appendices A and B we consider several specific PDEs with the same linearized form (10a), where matrix \({\bf P}\) has a different form than (11b). In Appendix A we limit ourselves to systems with spatially constant coefficients and focus on how qualitatively different eigenvalues (see below) of \({\bf P}\) affect the amplification factor for the MoC-LF. In Appendix B we will demonstrate that our conclusions about numerical instability of both MoC-LF and MoC-ME remain qualitatively valid for a different system whose linearization has spatially-_localized_ entries in matrix \({\bf P}\).
Thus, we will now explore what “patterns” of numerical instability can be expected in simulations of energy-preserving PDEs by the MoC-LF. To that end, we consider a number of systems³ representing a broader class of such PDEs, which include our system (6f), (8) as a special case. In total, 18 systems, including this one, were considered, and a summary of results is presented in Figs. 3(a) and 4 and explained below. The description of the systems is found in Appendix A.
[FOOTNOTE:3][ENDFOOTNOTE]
<figure><img src="content_image/1610.09079/x9.png"><figcaption>Figure 4: Eigenvalue of problem (31) for h=0.04 and for the matrices Pcorresponding to the physically unstable systems considered in Appendix A. The“other half” of the spectrum, z∈[π/2,π], is reflectionally symmetric about thepoint z=π/2 to the one shown here. (a) λP=±iα,±iβ; (b) λP=±iα (a pair ofdouble roots); (c) λP=±α,±β. See Section 7 for more details.</figcaption></figure>
Before we proceed, let us remind the reader that the stability of the _physical_ problem (10a), as opposed to the stability of the numerical method, was defined before Eq. (12). For example, system (6f), (8) is stable, which can be seen from Fig. 3(a). Indeed, the MoC-LF accurately approximates the solution for \(k=O(1)\) (i.e., for \(z\equiv kh=O(h)\) in Fig. 3(a)), and the fact that \(|\lambda|=1\) there corresponds to \(\omega\in\mathbb{R}\) in the text before Eq. (12). Let us stress that our focus is on the presence of numerical instability of the MoC-LF for physically _stable_ systems. Indeed, even if the numerical method is found to be free of numerical instability for a physically unstable system, then the corresponding solution will eventually (typically within \(t=O(1)\)) evolve towards the physically stable solution.
We now continue with the summary of stability results for the MoC-LF. These were obtained by solving the quadratic eigenvalue problem (31) with Matlab’s command polyeig for 18 specific matrices \({\bf P}\) corresponding to each of the systems listed in Appendix A. For all the seven of those systems which are physically stable, the dependence \(\max|\lambda(z)|\) was found to be similar to that shown in Fig. 3(a). Thus, for all of the stable systems considered, the MoC-LF has numerically unstable modes with \(|kh|\approx\pi/2\) and growth rate \(\gamma=O(1)\). Thus, this method should be deemed unsuitable for simulation of these, and possibly other, stable energy-preserving PDEs with crossing characteristics.
Let us mention, for completeness, that for the physically unstable systems considered in Appendix A, we found two possibilities with respect to the numerical (in)stability of the MoC-LF. In the first group, there are unstable systems such that: \(\lambda_{\bf P}=\pm i\alpha,\;\pm i\beta\) for \(\alpha,\beta\in\mathbb{R}\) (i.e., the mode with \(k=0\) is stable), with \(\alpha\neq\beta\) and at most one of \(\alpha\) or \(\beta\) could be zero. We have found that such systems have \(\omega\,\cancel{\in}\,\mathbb{R}\) in the immediate vicinity of \(k=0\). For systems in this group, the MoC-LF has also numerically unstable modes with \(|k|h\) near (but not exactly at) \(\pi/2\). This is illustrated in Fig. 4(a). In the second group there are unstable systems with all other possibilities of \(\lambda_{\bf P}\) compatible with the energy-preserving nature of the problem: \(\lambda_{\bf P}=\pm i\alpha\) (two pairs of repeated imaginary eigenvalues), including the case \(\alpha=0\); and \(\lambda_{\bf P}=\pm\alpha,\;\pm\beta\) (in this case, even the \(k=0\) mode is unstable). This is illustrated in Figs. 4(b,c), which show _no numerical_ instability in the MoC-LF. However, as we have mentioned above, once the initial, physically unstable, solution evolves sufficiently near a stable one, the MoC-LF will be invalidated by the numerically unstable harmonics, which are seen in Figs. 3(a) and 4(a).
Let us now demonstrate the validity of qualitative conclusions of our von Neumann analysis for a system whose linearization (10a) has a spatially dependent matrix \({\bf P}\). This system is the soliton (see Fig. 5(a)) of the Gross–Neveu model [25] in the relativistic field theory; its details are presented in Appendix B. It has received considerable attention in the past decade both from the analytical and numerical perspectives: see [27], [32]–[38], and references therein. The plots of the spectra of the numerical errors, obtained for this soliton with the MoC-LF (Fig. 5(b)) and MoC-ME (Fig. 5(c)), are qualitatively similar to such plots in Sections 5 and 4, respectively. In particular, the result in panel (b) agrees with our conclusion that the MoC-LF is unsuitable for simulations of a physically stable hyperbolic PDE system with crossing characteristics: the numerical instability at \(k\approx\pi/(2h)\) will destroy the solution soon after \(t=20\). In contrast, panel (c) shows that the MoC-ME would be a suitable method for such systems, even for relatively long times. Moreover, we also verified our conclusion (see (27b) and Fig. 2(c)) that the growth rate of the most unstable Fourier harmonics, with \(k\approx\pi/(2h)\), of the MoC-ME scales as \(O(h)\) (which is considerably greater than the growth rate of that method’s error for energy-preserving ODEs). To that end, we repeated the simulations for several values of \(h=L/M\), where \(M=2^{11},\,2^{12},\,2^{13},\,2^{14}\) and computed \(\gamma_{\rm meas}\) using (24) and (41). The results fit the dependence \(\gamma_{\rm meas}=6.4\cdot 10^{-3}\cdot(2^{11}/M)\cdot\ln(10)\), which is equivalent to \(\gamma_{\rm meas}=O(h)\), to two significant figures.
<figure><img src="content_image/1610.09079/x12.png"><figcaption>Figure 5: (a): Soliton solution (40b) for Ω=0.7. Note that Re(V)=Re(U),Im(V)=−Im(U). (b) and (c): Spectra of the total error (41) obtained by theMoC-LF at t=20 (b) and by the MoC-ME (c) at t=1000. See Section 7 andAppendix B for more details; in particular, the origin of the peaks near z=0in panels (b) and (c) is explained at the end of Appendix B.</figcaption></figure>
## Acknowledgement
This work was supported in part by the NSF grant DMS-1217006.
## Appendix A: A broader class of PDE systems with constant coefficients
The class of models that we referred to in Section 7 generalizes Eq. (5):
\[{\bf\underline{S}}^{\pm}_{\;t}\pm{\bf\underline{S}}^{\pm}_{\;x}={\bf\underline {S}}^{\pm}\times{\bf\hat{J}_{c}}{\bf\underline{S}}^{\mp}+{\bf\underline{S}}^{ \pm}\times{\bf\hat{J}_{s}}{\bf\underline{S}}^{\pm}\,,\] (35)
where \({\bf\hat{J}_{c}}\) and \({\bf\hat{J}_{s}}\) are matrices accounting, respectively, for cross- and self-interaction among components of the Stokes vectors \({\bf\underline{S}}^{\pm}\). This class of models describes propagation of electromagnetic waves in optical fibers with various types of birefringence. The model considered in the main text is a special case of model (35) with
\[{\bf\hat{J}_{c}}=\alpha\;{\rm diag}(1,-1,-2),\qquad{\bf\hat{J}_{s}}=(2-3\alpha )\,{\rm diag}(0,0,1),\] (36a)
which corresponds to a highly spun birefringent fiber [6]. The parameter \(\alpha\), which in model (5) is set to \(2/3\), accounts for the ellipticity of the fiber’s core. Two other models are: that of a randomly birefringent fiber [7], with
\[{\bf\hat{J}_{c}}={\rm diag}(-1,1,-1),\qquad{\bf\hat{J}_{s}}=\mathcal{O},\] (36b)
and that of an isotropic (i.e., non-birefringent) fiber [5]:
\[{\bf\hat{J}_{c}}={\rm diag}(-2,-2,0),\qquad{\bf\hat{J}_{s}}={\rm diag}(-1,-1,0).\] (36c)
Let us stress that only the above forms of matrices \({\bf\hat{J}_{c,\,s}}\) correspond to physically meaningful situations in the context of birefringent fibers, and thus it would not make sense to consider model (35) with arbitrary \({\bf\hat{J}_{c,\,s}}\).
Models (35), (36c) each have many stationary constant solutions. We will consider only six of them for each of those three models:
\[S_{j}^{+}=1,\quad S_{j}^{-}=\pm S_{j}^{+}\qquad\begin{array}[]{l}\mbox{for one of $j=1,2$, or $3$, \ with}\\ \mbox{the other two components of ${\bf\underline{S}}^{\pm}$ being $0$.}\end{array}\] (37)
For brevity, we will refer to these solutions as \((j\pm)\), where \(j\) and \(\pm\) correspond to the particular choice of the component and the sign in (37). For example, solution (8) of the main text corresponds to \((2-)\) in (37).
Of these 18 systems, the following 7 are stable on the infinite line (in the sense specified before Eq. (12)): model (36a) with solutions \((1+)\), \((2-)\), \((3-)\); model (36b) with solutions \((1-)\), \((2+)\), \((3-)\); model (36c) with solution \((3+)\). The following 3 systems exhibit instability for \(k=0\): model (36a) with solutions \((1-)\), \((2+)\) and model (36c) with solution \((3-)\). (In other words, \(\lambda_{\bf P}\in\mathbb{R}\setminus\{0\}\) for these systems.) The remaining 8 systems exhibit instability for perturbations with \(k\neq 0\).
## Appendix B: The soliton solution of the Gross–Neveu model
In physical variables, this model has the form [25, 27]:
\[\begin{array}[]{l}i(\psi_{t}+\chi_{x})+(|\psi|^{2}-|\chi|^{2})\psi-\psi=0,\\ i(\chi_{t}+\psi_{x})+(|\chi|^{2}-|\psi|^{2})\chi+\chi=0.\end{array}\] (38)
A change of variables \(u=(\psi+\chi)/\sqrt{2}\), \(v=(\psi-\chi)/\sqrt{2}\) transforms (38) to the form (1):
\[\begin{array}[]{l}u_{t}+u_{x}=i(\,|v|^{2}u+v^{2}u^{*}\,)-iv,\\ v_{t}-v_{x}=i(\,|u|^{2}v+u^{2}v^{*}\,)-iu.\end{array}\] (39)
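The nonlinear right-hand sides of (39) are simple enough to state as code; the following minimal Python sketch (ours, not the authors' code) is the form one would feed to an MoC step:

```
# Minimal sketch (ours): nonlinear right-hand sides of system (39),
#   u_t + u_x = i(|v|^2 u + v^2 conj(u)) - i v,
#   v_t - v_x = i(|u|^2 v + u^2 conj(v)) - i u,
# written as plain functions of complex arrays u, v.
import numpy as np

def rhs_u(u, v):
    return 1j * (np.abs(v)**2 * u + v**2 * np.conj(u)) - 1j * v

def rhs_v(u, v):
    return 1j * (np.abs(u)**2 * v + u**2 * np.conj(v)) - 1j * u
```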
The standing soliton solution of this system is (see, e.g., [27] and references therein):
\[\{u,v\}\,=\,\{U(x),V(x)\}\,\exp[-i\Omega t],\qquad\Omega\in(0,1);\] (40a)
\[\{U(x),V(x)\}\,=\,\sqrt{1-\Omega}\,\frac{\cosh(\beta x)\pm i\mu\,\sinh(\beta x )}{\cosh^{2}(\beta x)-\mu^{2}\,\sinh^{2}(\beta x)};\] (40b)
with \(\beta=\sqrt{1-\Omega^{2}}\) and \(\mu=\sqrt{(1-\Omega)/(1+\Omega)}\). The physical stability of this soliton for sufficiently small \(\Omega\) has been an issue of recent controversy [27, 37] between analytical and numerical results. However, for \(\Omega\) sufficiently away from \(0\), the soliton has been found to be stable by both analysis and numerics (see references in [37]). In Fig. 5(a) we show this solution for \(\Omega=0.7\).
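For reference, a minimal Python sketch (ours, not the authors' code) of the soliton profile (40b) used as the initial condition below:

```
# Standing soliton (40a,b) of the Gross-Neveu model for Omega = 0.7, cf. Fig. 5(a).
import numpy as np

def gross_neveu_soliton(x, Omega=0.7):
    beta = np.sqrt(1.0 - Omega**2)
    mu = np.sqrt((1.0 - Omega) / (1.0 + Omega))
    denom = np.cosh(beta * x)**2 - mu**2 * np.sinh(beta * x)**2
    U = np.sqrt(1.0 - Omega) * (np.cosh(beta * x) + 1j * mu * np.sinh(beta * x)) / denom
    V = np.sqrt(1.0 - Omega) * (np.cosh(beta * x) - 1j * mu * np.sinh(beta * x)) / denom
    return U, V   # note Re(V) = Re(U), Im(V) = -Im(U), as in Fig. 5(a)

x = np.linspace(-32.0, 32.0, 2**12 + 1)   # domain of length L = 64
U, V = gross_neveu_soliton(x)
# time dependence per (40a): u(x, t) = U(x) * exp(-1j * Omega * t), similarly for v
```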
The linearized system (39) on the background of soliton (40b) has the form (10a), where the entries of matrix \({\bf P}\) are localized functions of \(x\). Therefore, a von Neumann analysis cannot be rigorously applied to it. However, one can expect that its predictions could be valid qualitatively, based on the principle of frozen coefficients. To demonstrate that this is indeed the case, we simulated system (39) with the initial condition shown in Fig. 5(a), to which a small white noise was added. The length of the computational domain was \(L=64\). The spectra of the numerical errors obtained by schemes (28), (29) (MoC-LF) and (25b), (15) (MoC-ME) for \(h=L/2^{12}\approx 0.016\) are shown in Figs. 5(b,c). The numerical error was defined similarly to (21):
\[{\rm err}_{\rm tot}=\left(\sum_{m=0}^{M}\left|u_{m}^{n}-U(x_{m})e^{-i\Omega t_ {n}}\right|^{2}+\left|v_{m}^{n}-V(x_{m})e^{-i\Omega t_{n}}\right|^{2}\,\right) ^{1/2}.\] (41)
Note that the peaks near \(z=0\), seen in Figs. 5(b,c), do _not_ correspond to any numerical instability. In panel (b) that peak is caused directly by the discretization error of the scheme. In panel (c), a much higher such peak is also caused by the discretization error, but indirectly. Indeed, a slight error in the propagation constant \(\Omega\), which inevitably occurs due to a limited accuracy of the numerical scheme, causes \(\{u_{m}^{n},\,v_{m}^{n}\}\) in (41) to differ from their respective \(\{U(x_{m}),\,V(x_{m})\}\,\exp[-i\Omega t_{n}]\) by \(O(1)\) for \(t_{n}\gg 1\).
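A minimal sketch (ours) of the error measure (41), assuming numerical arrays `u_num`, `v_num` at time \(t_{n}\) and the exact profiles `U`, `V` from the soliton sketch above:

```
# Total error (41): l2 distance between the computed solution and the exact
# soliton U(x) e^{-i Omega t_n}, V(x) e^{-i Omega t_n} at time t_n.
import numpy as np

def total_error(u_num, v_num, U, V, Omega, t_n):
    phase = np.exp(-1j * Omega * t_n)
    return np.sqrt(np.sum(np.abs(u_num - U * phase)**2 +
                          np.abs(v_num - V * phase)**2))
```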
## References
* [1] B. Gustafsson, H.-O. Kreiss, J. Oliger, Time-dependent problems and difference methods, 2nd Ed., John Wiley & Sons, Hoboken, NJ, 2013; Sec. 7.4.
* [2] Yu.Ya. Mikhailov, Stability of some numerical schemes of the three-dimensional method of characteristics, USSR Comp. Math. Math. Phys. 8 (1968) 312–315.
* [3] A.C. Zecchin, A.R. Simpson, M.F. Lambert, von Neumann stability analysis of a method of characteristics visco-elastic pipeline model, 10th Int. Conf. on Pressure Surges (Edinburgh, Scotland), 2008.
* [4] V.E. Zakharov, A.V. Mikhailov, Polarization domains in nonlinear media, JETP Lett. 45 (1987) 349–352.
* [5] S. Pitois, G. Millot, S. Wabnitz, Nonlinear polarization dynamics of counterpropagating waves in an isotropic optical fiber: theory and experiments, J. Opt. Soc. B 18 (2001) 432–443.
* [6] S. Wabnitz, Chiral polarization solitons in elliptically birefringent spun optical fibers, Opt. Lett. 34 (2009) 908–910.
* [7] V.V. Kozlov, J. Nuno, S. Wabnitz, Theory of lossless polarization attraction in telecommunication fibers, J. Opt. Soc. B 28 (2011) 100–108.
* [8] Z. Deng, Uncommon numerical instability in the method of characteristics applied to hyperbolic equations, M.S. Thesis, University of Vermont, 2016.
* [9] R.W. Boyd, Nonlinear optics, Academic, San Diego, 1992.
* [10] D.J. Kaup, The first-order perturbed SBS equations, J. Nonlin. Sci. 3 (1993) 427–443.
* [11] C.E. Mungan, S.D. Rogers, N. Satyan, J.O. White, Time-dependent modeling of Brillouin scattering in optical fibers excited by a chirped diode laser, IEEE J. Quant. Electron. 48 (2012) 1542–1546.
* [12] F.Y.F. Chu, A.C. Scott, Inverse scattering transform for wave-wave scattering, Phys. Rev. A 12 (1975) 2060–2064.
* [13] D.J. Kaup, A. Rieman, A. Bers, Space-time evolution of nonlinear three-wave interactions. I. Interactions in an homogeneous medium, Rev. Mod. Phys. 51 (1979) 275–310.
* [14] D.J. Kaup, Simple harmonic generation: an exact method of solution, Stud. Appl. Math. 59 (1978) 25–35.
* [15] E. Ibragimov, A. Struthers, D.J. Kaup, Parametric amplification of chirped pulses in the presence of a large phase mismatch, J. Opt. Soc. Am. B 18 (2001) 1872–1876.
* [16] C.J. McKinstrie, L. Mejling, M.G. Raymer, K. Rottwitt, Quantum-state-preserving optical frequency conversion and pulse reshaping by four-wave mixing, Phys. Rev. A 85 (2012) 053829.
* [17] D.V. Reddy, M.G. Raymer, C.J. McKinstrie, Efficient sorting of quantum-optical wave packets by temporal-mode interferometry, Opt. Lett. 39 (2014) 2924–2927.
* [18] C.J. McKinstrie, D.S. Cargill, Simultaneous frequency conversion, regeneration and reshaping of optical signals, Opt. Expr. 20 (2012) 6881–6886.
* [19] C.J. McKinstrie, E.J. Turano, Spatiotemporal evolution of parametric instabilities driven by short laser pulses: One-dimensional analysis, Phys. Plasmas 3 (1996) 4683–4696.
* [20] C.J. McKinstrie, V.A. Smalyuk, R.E. Giacone, H.X. Vu, Power exchange between crossed laser beams and the associated frequency cascade, Phys. Rev. E 55 (1997) 2044–2047.
* [21] R. Kashyap, Fiber Bragg Gratings, Academic, Burlington, MA, 2010; Chap. 4.
* [22] J.E. Sipe, C.M. de Sterke, B.J. Eggleton, Rigorous derivation of coupled mode equations for short, high-intensity grating-coupled, co-propagating pulses, J. Mod. Opt. 49 (2002) 1437–1452.
* [23] S.A.M.S. Chowdhury, J. Atai, Stability of Bragg grating solitons in a semilinear dual core system with dispersive reflectivity, IEEE J. Quant. Electron. 50 (2014) 458–465.
* [24] W.E. Thirring, A soluble relativistic field model, Ann. Phys. 3 (1958) 91–112.
* [25] D.J. Gross, A. Neveu, Dynamical symmetry breaking in asymptotically free field theories, Phys. Rev. D 10 (1974) 3235–3253.
* [26] M. Chugunova, D. Pelinovsky, Block-diagonalization of the symmetric first-order coupled-mode system, SIAM J. Appl. Dyn. Syst. 5 (2006) 55–83.
* [27] S. Shao, N.R. Quintero, F.G. Mertens, F. Cooper, A. Khare, A. Saxena, Stability of solitary waves in the nonlinear Dirac equation with arbitrary nonlinearity, Phys. Rev. E 90 (2014) 032915.
* [28] A.B. Aceves, S. Wabnitz, Self-induced transparency solitons in nonlinear refractive periodic media, Phys. Lett. A 141 (1989) 37–42.
* [29] M. Romagnoli, S. Trillo, S. Wabnitz, Soliton switching in nonlinear couplers, Opt. Quantum Electron. 24 (1992) S1237–1267.
* [30] E. Assemat, A. Picozzi, H.-R. Jauslin, D. Sugny, Hamiltonian tools for the analysis of optical polarization control, J. Opt. Soc. B 29 (2012) 559–571.
* [31] D.F. Griffiths, D.J. Higham, Numerical methods for ordinary differential equations, Springer-Verlag, London, 2010; Chaps. 13 and 15.
* [32] G. Berkolaiko, A. Comech, On spectral stability of solitary waves of nonlinear Dirac equation in 1D, Math. Model. Nat. Phenom. 7 (2012) 13–31.
* [33] D.E. Pelinovsky, A. Stefanov, Asymptotic stability of small gap solitons in the nonlinear Dirac equations, J. Math. Phys. 53 (2012) 073705.
* [34] N. Boussaïd, A. Comech, On spectral stability of the nonlinear Dirac equation, J. Funct. Anal. 271 (2016) 1462–1524.
* [35] F. de la Hoz, F. Vadillo, An integrating factor for nonlinear Dirac equations, Comp. Phys. Commun. 181 (2010) 1195–1203.
* [36] J. Xu, S. Shao, H. Tang, Numerical methods for nonlinear Dirac equation, J. Comp. Phys. 245 (2013) 131–149.
* [37] J. Cuevas-Maraver, P.G. Kevrekidis, A. Saxena, F. Cooper, F.G. Mertens, Solitary waves in the nonlinear Dirac equation at the continuum limit: Stability and dynamics, In: Ord. Part. Diff. Eqs., Nova Science, Boca Raton, 2015; Chap. 4.
* [38] W.Z. Bao, Y.Y. Cai, X.W. Jia, J. Yin, Error estimates of numerical methods for the nonlinear Dirac equation in the nonrelativistic limit regime, Science China Math. 59 (2016) 1461–1494.
|
1807.03708 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 62029,
"num_imgs": 4,
"llama3_tokens_count": 20245
} | [
"content_image/1807.03708/point_env.png",
"content_image/1807.03708/x4.png",
"content_image/1807.03708/x6.png",
"content_image/1807.03708/x8.png"
] | # Deterministic Policy Gradients With General State Transitions
Qingpeng Cai, Ling Pan, Pingzhong Tang
Institute for Interdisciplinary Information Sciences
Tsinghua University
{cqp14,pl17}@mails.tsinghua.edu.cn, kenshinping@gmail.com
###### Abstract
We study a reinforcement learning setting, where the state transition function is a convex combination of a stochastic continuous function and a deterministic function. Such a setting generalizes the widely-studied stochastic state transition setting, namely the setting of deterministic policy gradient (DPG).
We first give a simple example to illustrate that the deterministic policy gradient may be infinite under deterministic state transitions, and introduce a theoretical technique to prove the existence of the policy gradient in this generalized setting. Using this technique, we prove that the deterministic policy gradient indeed exists for a certain set of discount factors, and further establish two conditions that guarantee the existence for all discount factors. We then derive a closed form of the policy gradient whenever it exists. Furthermore, to overcome the challenge of the high sample complexity of DPG in this setting, we propose the _Generalized Deterministic Policy Gradient_ (GDPG) algorithm. The main innovation of the algorithm is a new method of applying model-based techniques to the model-free deep deterministic policy gradient algorithm (DDPG). GDPG optimizes the long-term rewards of the model-based augmented MDP subject to the constraint that its long-term rewards do not exceed those of the original MDP.
We finally conduct extensive experiments comparing GDPG with state-of-the-art methods and the direct model-based extension method of DDPG on several standard continuous control benchmarks. Results demonstrate that GDPG substantially outperforms DDPG, the model-based extension of DDPG and other baselines in terms of both convergence and long-term rewards in most environments.
## 1 Introduction
Reinforcement learning has been one of the most successful computational tools for solving complex decision making problems [29], with extensive applications in both discrete tasks such as general game playing [17, 18] and continuous control tasks such as robotics [10]. In contrast to the traditional value-based methods [31, 35, 17, 18] that are meant for solving problems with discrete and low-dimensional action space, policy gradient methods [23, 30] aim to tackle these limitations, by optimizing a parameterized policy via estimating the gradient of the expected long-term reward, using gradient ascent.
[26] propose the deterministic policy gradient (DPG) algorithm that aims to find an optimal deterministic policy, which lowers the variance when estimating the policy gradient [39], compared to stochastic policies [30]. It is shown that the algorithm can be applied to domains with continuous and high-dimensional action spaces. [15] further propose the deep deterministic policy gradient (DDPG) algorithm, by combining deep neural networks to improve convergence. It is recognized that DDPG has been successful in robotic control tasks such as locomotion [27] and manipulation [7].
Despite the effectiveness of DDPG in these tasks, it is limited to problems with stochastic continuous state transitions. Here, continuity means that the probability density of the next state is continuous in the action taken at the current state. In fact, many important control problems, such as MountainCar, Pendulum [2], and autonomous driving, include both stochastic and deterministic state transitions. For example, in most autonomous driving tasks, state transitions are deterministic under normal driving conditions, yet are still stochastic due to sudden disturbances. As a result, DDPG, which assumes stochastic state transitions, does not generalize well in practice.
Tasks with deterministic state transitions pose serious technical challenges due to the discontinuity of the transition function, where the gradient of the transition probability density function over actions does not always exist. [37, 5, 9] consider the gradient of the value function over states and the deterministic policy gradient in the setting of deterministic state transitions, but the existence of these gradients is not studied there. The lack of theoretical guarantees for the existence of the gradient limits the applicability of deterministic policy gradient algorithms. As a result, an important question for policy gradient based methods is,
_Does the gradient exist in settings with deterministic state transitions? If yes, can one solve the problem efficiently by its gradient?_
In this paper, we study a generalized setting, where the state transition is a convex combination of a stochastic continuous transition function and a deterministic discontinuous transition function. As a result, it includes both the stochastic case and the deterministic case as special cases. Our setting is arguably closer to the mixed control problems mentioned above than those stochastic settings. We first give a simple example to illustrate that the deterministic policy gradient may be infinite under deterministic state transitions. Then we introduce a new theoretical technique to prove the existence of the policy gradient in this generalized setting. Using this technique, we prove that the deterministic policy gradient indeed exists for a certain set of discount factors. We further present two conditions that guarantee the existence for all discount factors. We then derive a closed form of the policy gradient.
However, estimating the deterministic policy gradient is much more challenging here, due to the sample complexity of model-free algorithms [25] and the complex state transitions. Regarding the state transitions, the difficulty of computing the gradient mainly comes from the dependence of the policy gradient on the gradient of the value function with respect to the state. Computing this term may require sampling the whole state space an unbounded number of times. Thus, applying DDPG directly in the general setting, even with a low-dimensional state space, may incur high sample complexity.
To overcome these challenges, we approximate the original Markov decision process (MDP) by a model-based augmented MDP with the same reward function and a transition function given by the expectation of the next state under the original MDP. By the form of the deterministic policy gradient with deterministic state transitions, the model-based augmented MDP has a simple structure, which allows for more efficient computation and faster convergence than model-free methods [14, 13, 36]. Unfortunately, applying this model-based technique directly does not help in environments with a large continuous state space, as it is hard to represent the transition dynamics [34]. This leads to an essential question:
_How to apply the model-based technique to deterministic policy gradient algorithms effectively?_
We then consider a program that maximizes the long-term rewards of the augmented MDP with the constraint that its long-term rewards are no larger than those of the original MDP. The intuition is that we optimize an objective with lower sample complexity, which serves as a lower bound on the original objective; improving the new objective then guarantees improving the original one. As the constrained problem is hard to optimize directly, we choose to optimize the Lagrangian dual function of the program, which can be interpreted as a weighted objective combining the long-term rewards of the original MDP and the augmented MDP. Based on this dual function, we propose the _Generalized Deterministic Policy Gradient (GDPG) algorithm_. The algorithm updates the policy by stochastic gradient ascent using the gradient of the weighted objective with respect to the policy parameters, and the weight maintains a trade-off between fast convergence and performance.
To sum up, the main contribution of the paper is as follows:
* First of all, we provide a theoretical guarantee for the existence of the gradient in settings with deterministic state transitions.
* Secondly, we propose a novel policy gradient algorithm, called Generalized Deterministic Policy Gradient (GDPG), which combines model-free and model-based methods. GDPG reduces sample complexity and enables faster convergence and better performance.
* Finally, we conduct extensive experiments on standard benchmarks comparing with state-of-the-art stochastic policy gradient methods including TRPO [25], ACKTR [38] and the direct model-based extension of DDPG, called MDPG. Results confirm that GDPG significantly outperforms other algorithms in terms of both convergence and performance.
## 2 Preliminaries
A Markov decision process (MDP) is a tuple \((\mathcal{S},\mathcal{A},p,r,\gamma,p_{0})\), where \(\mathcal{S}\) and \(\mathcal{A}\) denote the sets of states and actions, respectively. Let \(p(s_{t+1}|s_{t},a_{t})\) represent the conditional density from state \(s_{t}\) to state \(s_{t+1}\) under action \(a_{t}\), which satisfies the Markov property, i.e., \(p(s_{t+1}|s_{0},a_{0},...,s_{t},a_{t})=p(s_{t+1}|s_{t},a_{t}).\) The density of the initial state distribution is denoted by \(p_{0}(s)\). At each time step \(t\), the agent interacts with the environment with a deterministic policy \(\mu_{\theta}\), which is parameterized by \(\theta\). We use \(r(s_{t},a_{t})\) to represent the corresponding immediate reward, contributing to the discounted overall rewards from state \(s_{0}\) following \(\mu_{\theta}\), denoted by \(J(\mu_{\theta})=\mathbb{E}[\sum_{k=0}^{\infty}{\gamma}^{k}r(s_{k},a_{k})|\mu_{\theta},s_{0}]\). Here, \(\gamma\in[0,1]\) is the discount factor. The Q-function of state \(s_{t}\) and action \(a_{t}\) under policy \(\mu_{\theta}\) is denoted by \(Q^{\mu_{\theta}}(s_{t},a_{t})=\mathbb{E}[\sum_{k=t}^{\infty}{\gamma}^{k-t}r(s_{k},a_{k})|\mu_{\theta},s_{t},a_{t}]\). The corresponding value function of state \(s_{t}\) under policy \(\mu_{\theta}\) is denoted by \(V^{\mu_{\theta}}(s_{t})=Q^{\mu_{\theta}}(s_{t},\mu_{\theta}(s_{t}))\). We denote the density at state \(s^{{}^{\prime}}\) after \(t\) time steps from state \(s\) following the policy \(\mu_{\theta}\) by \(p(s,s^{{}^{\prime}},t,\mu_{\theta})\). We denote the (improper) discounted state distribution by \(\rho^{\mu_{\theta}}(s^{{}^{\prime}})=\int_{\mathcal{S}}\sum_{t=1}^{\infty}{\gamma}^{t-1}p_{0}(s)p(s,s^{{}^{\prime}},t,\mu_{\theta})ds\). The agent aims to find an optimal policy that maximizes \(J(\mu_{\theta})\).
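To make these definitions concrete, here is a minimal Monte-Carlo estimate of \(J(\mu_{\theta})\) from sampled rollouts (our own sketch; it assumes a gym-style environment returning `(state, reward, done, info)` and a deterministic policy `mu`):

```
# Minimal sketch (ours): Monte-Carlo estimate of J(mu_theta) = E[sum_k gamma^k r_k].
import numpy as np

def estimate_J(env, mu, gamma=0.99, episodes=100, horizon=1000):
    returns = []
    for _ in range(episodes):
        s = env.reset()
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            s, r, done, _ = env.step(mu(s))   # deterministic action a = mu(s)
            total += discount * r
            discount *= gamma
            if done:
                break
        returns.append(total)
    return float(np.mean(returns))
```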
### Why is the DPG theorem not applicable for deterministic state transitions?
An important property of the DPG algorithms is the Deterministic Policy Gradient Theorem [26], \(\bigtriangledown_{\theta}J(\mu_{\theta})=\int_{\mathcal{S}}\rho^{\mu_{\theta}} (s)(\bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}Q^{\mu_{\theta }}(s,a)|_{a=\mu_{\theta}(s)})ds,\) which proves the existence of the deterministic policy gradient. The DPG theorem holds under the regular condition presented by [26], i.e., \(p(s^{{}^{\prime}}|s,a)\) is continuous in \(a\). The arguments in the proof of the DPG theorem do not work without this condition¹.
Now we give a simple example to show that the policy gradient may be infinite for some discount factors.
**Example 2.1**.: _Given an MDP with a two-dimensional state space and action space, whose transition and reward functions are defined by_
\(T(s,a)={(2s_{1}+2s_{2}+a_{1},2s_{1}+2s_{2}+a_{2})}^{T}\)_, \(r(s,a)=-s^{T}a\). Consider a deterministic policy \(\mu_{\theta}(s)=\theta\), then \(\bigtriangledown_{s}T(s,\mu_{\theta}(s))=\begin{bmatrix}2&2\\ 2&2\end{bmatrix}\), and \(\bigtriangledown_{s}V^{\mu_{\theta}}(s)=-(I+\sum_{n=1}^{\infty}{\gamma}^{n} \begin{bmatrix}2^{2n-1}&2^{2n-1}\\ 2^{2n-1}&2^{2n-1}\end{bmatrix})\theta.\) Then \(\bigtriangledown_{s}V^{\mu_{\theta}}(s)\) converges if and only if \(\gamma<1/4\)._
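A quick numerical illustration of Example 2.1 (our own check, not part of the original text): the partial sums of the series above stay bounded for \(\gamma<1/4\) and blow up otherwise.

```
# Partial sums of sum_n (gamma * M)^n with M = [[2, 2], [2, 2]]; the spectral
# radius of gamma * M is 4*gamma, so the series converges iff gamma < 1/4.
import numpy as np

M = np.array([[2.0, 2.0], [2.0, 2.0]])
for gamma in (0.2, 0.3):                       # below / above the threshold 1/4
    S, term = np.eye(2), np.eye(2)
    for _ in range(200):
        term = gamma * M @ term                # (gamma * M)^n
        S += term
    print(f"gamma = {gamma}:  ||partial sum|| after 200 terms = {np.linalg.norm(S):.3e}")
```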
One therefore needs a new technique to determine the existence of the gradient of \(J(\mu_{\theta})\) over \(\theta\) in such irregular cases.
## 3 Deterministic State Transitions
In this section we study a simple setting where the state transition is a deterministic function. As discussed before, the DPG theorem does not apply to this setting. To analyze the gradient of a deterministic policy, we let \(T(s,a)\) denote the next state given the current state \(s\) and the action \(a\). Without loss of generality, we assume that \(T(s,a),\bigtriangledown_{a}T(s,a),\bigtriangledown_{s}T(s,a),r(s,a), \bigtriangledown_{s}r(s,a),\bigtriangledown_{a}r(s,a)\) are all continuous in \(s\) and \(a\) and bounded. By definition, \(\bigtriangledown_{\theta}V^{\mu_{\theta}}(s)=\bigtriangledown_{\theta}(r(s,\mu _{\theta}(s))+\gamma V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s, \mu_{\theta}(s))}).\) Thus the key of the existence of the gradient of \(V^{\mu_{\theta}}(s)\) over \(\theta\) is the existence of \(\bigtriangledown_{s}V^{\mu_{\theta}}(s)\). Now we give a sufficient condition of the existence of \(\bigtriangledown_{s}V^{\mu_{\theta}}(s)\).
**Lemma 1**.: _For any policy \(\mu_{\theta}\), let \(n\) denote the dimension of the state and let \(c=\max_{s}||\bigtriangledown_{s}T(s,\mu_{\theta}(s))||_{max}\) be the maximum of the max norm over all Jacobian matrices. Then for any discount factor \(\gamma\) in \([0,\frac{1}{nc})\), \(\bigtriangledown_{s}V^{\mu_{\theta}}(s)\) exists._
Proof.: By definition, \(V^{\mu_{\theta}}(s)=Q^{\mu_{\theta}}(s,\mu_{\theta}(s))=r(s,\mu_{\theta}(s))+\gamma V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}.\) Then
\[\begin{split}\bigtriangledown_{s}V^{\mu_{\theta}}(s)& =\bigtriangledown_{s}r(s,\mu_{\theta}(s))+\gamma\bigtriangledown_ {s}T(s,\mu_{\theta}(s))\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{ }^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}.\end{split}\] (1)
By unrolling (1) with infinite steps, we get
\[\bigtriangledown_{s}V^{\mu_{\theta}}(s)=\sum_{t=0}^{\infty}\int_{\mathcal{S}}{ \gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta}) \bigtriangledown_{s^{{}^{\prime}}}r(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime }}))ds^{{}^{\prime}},\]
where \(I(s,s^{{}^{\prime}},t,\mu_{\theta})\) is an indicator function that indicates whether \(s^{{}^{\prime}}\) is reached after \(t\) steps from the state \(s\) following the policy \(\mu_{\theta}\). Here, \(g(s,t,\mu_{\theta})=\prod_{i=0}^{t-1}\bigtriangledown_{s_{i}}T(s_{i},\mu_{\theta}(s_{i}))\), where \(s_{0}=s\) and \(s_{i}\) is the state after \(i\) steps following policy \(\mu_{\theta}\); the state transitions and the policy are both deterministic. We now prove that for any \(\mu_{\theta},s,s^{{}^{\prime}}\) and any \(\gamma\in[0,\frac{1}{nc})\), the series \(A(s)=\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})\) converges. We describe the proof sketch here; the complete proof is given in Appendix A. Along the trajectory generated from the initial state \(s\), each state \(s^{\prime}\) falls into one of three cases, due to the deterministic state transitions: it is never visited, visited once, or visited infinitely many times. It is easy to see that \(A(s)\) converges in the first two cases. In the last case, \(A(s)\) is a sum of powers of the matrix \({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta})\) (with \(t_{2}\) the revisit period defined in Appendix A), from which we obtain an upper bound on \(\gamma\) such that \(A(s)\) converges. By Lebesgue’s Dominated Convergence Theorem [24], we may exchange the order of the limit and the integration, \(\bigtriangledown_{s}V^{\mu_{\theta}}(s)=\int_{\mathcal{S}}\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})\bigtriangledown_{s^{{}^{\prime}}}r(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))ds^{{}^{\prime}}.\) By the continuity of \(T\), \(r\) and \(\mu_{\theta}\), the gradient of \(V^{\mu_{\theta}}(s)\) over \(s\) exists. ∎
Note that the condition proposed in Lemma 1 is indeed necessary in Example 2.1, where \(n=2,c=2\) and the gradient exists if and only if the discount factor \(\gamma<1/4\). By Lemma 1, we show that the deterministic policy gradient exists and obtain its closed form. The proof is given in Appendix B.
**Theorem 1**.: _For any policy \(\mu_{\theta}\) and MDP with deterministic state transitions, for any discount factor \(\gamma\) in \([0,\frac{1}{nc})\), the policy gradient exists, and_
\[\bigtriangledown_{\theta}J(\mu_{\theta})=\int_{\mathcal{S}}\rho^{\mu_{\theta}} (s)\bigtriangledown_{\theta}\mu_{\theta}(s)(\bigtriangledown_{a}r(s,a)|_{a=\mu _{\theta}(s)}+\gamma\bigtriangledown_{a}T(s,a)|_{a=\mu_{\theta}(s)} \bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{ \prime}}=T(s,a)})ds.\]
## 4 Deterministic Policy Gradients with general state transitions
In this section we consider a general setting where the state transition for any state \(s\) and any action \(a\) is a convex combination of a deterministic transition function \(T(s,a)\) with probability \(f(s,a)\), and a stochastic transition probability density function \(p(s^{{}^{\prime}}|s,a)\) with probability \(1-f(s,a)\). Note that this setting generalizes that of DPG. Here, \(T\) also satisfies the same condition as in Section 3. We assume that \(f(s,a)\), \(\bigtriangledown_{s}f(s,a)\) and \(\bigtriangledown_{a}f(s,a)\) are continuous and bounded. By a technique similar to the one used for deterministic state transitions, we obtain the main theorem, which proves the existence of the gradient of \(J(\mu_{\theta})\) over \(\theta\) for a set of discount factors and gives two conditions under which the policy gradient exists for all discount factors:
**Condition A.1**: \(\max_{s}f(s,\mu_{\theta}(s))\leq\frac{1}{nc}\).
**Condition A.2**: For any sequence of states \((s_{0},...,s_{t-1})\) and any timestep \(t\), the eigenvalues of \(\prod_{i=0}^{t-1}f(s_{i},\mu_{\theta}(s_{i}))\bigtriangledown_{s_{i}}T(s_{i}, \mu_{\theta}(s_{i}))\) are in \([-1,1]\).
**Theorem 2**.: **The GDPG Theorem**__
_For any MDP in the general cases and any policy \(\mu_{\theta}\), for any discount factor \(\gamma\) in \([0,\frac{1}{nc\max_{s}f(s,\mu_{\theta}(s))})\), the policy gradient exists. If the MDP satisfies Condition A.1 or Condition A.2, for any discount factor and any policy \(\mu_{\theta}\), the policy gradient exists. The form is_
\[\begin{split}\bigtriangledown_{\theta}J(\mu_{\theta})=& \int_{\mathcal{S}}\rho^{\mu_{\theta}}(s)(\bigtriangledown_{\theta }\mu_{\theta}(s)\bigtriangledown_{a}r(s,a)|_{a=\mu_{\theta}(s)}+\gamma f(s,\mu _{\theta}(s))\bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}T(s,a )|_{a=\mu_{\theta}(s)}\\ &\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime }})|_{s^{{}^{\prime}}=T(s,a)}+\gamma(1-f(s,\mu_{\theta}(s)))\int_{\mathcal{S}} \bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}p(s^{{}^{\prime}}| s,a)|_{a=\mu_{\theta}(s)}\\ & V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}+\gamma \bigtriangledown_{\theta}f(s,\mu_{\theta}(s))V^{\mu_{\theta}}(s^{{}^{\prime}}) |_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}-\gamma\bigtriangledown_{\theta}f(s, \mu_{\theta}(s))\\ &\int_{\mathcal{S}}p(s^{{}^{\prime}}|s,a)|_{a=\mu_{\theta}(s)}V^{ \mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}})ds=\int_{\mathcal{S}}\rho^{\mu_ {\theta}}(s)(\bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}Q^{ \mu_{\theta}}(s,a)|_{a=\mu_{\theta}(s)})ds.\end{split}\] (2)
The proof is given in Appendix C. It is interesting to note that the form is the same as that of the DPG gradient. The assumptions in Conditions A.1 and A.2 become weaker as the probability of the deterministic state transition becomes lower; in the extreme, purely stochastic case, where this probability is zero, the policy gradient exists without any assumption, as in [26]. Although the form of the policy gradient is the same in the deterministic setting and in the general case, the cost of computing it, given an estimator of the value function, differs. By comparing (1) with (2), the gradient in the general case is more computationally expensive than in the deterministic case: the gradient for deterministic state transitions only involves \(\bigtriangledown_{\theta}r(s,\mu_{\theta}(s))\) and \(\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})\), while the gradient in the general case introduces an additional integration over the state space.
### Direct Model-based Extension of DDPG
As discussed before, even for an environment with a low-dimensional state space, the sample complexity of DDPG is significantly high in the general case, which may limit the capability of model-free algorithms due to slow convergence. Thus, we consider a model-based augmented MDP \(\mathcal{M_{*}}\) of the original MDP \(\mathcal{M}\) with the same reward function, while the state transition function is defined as the expectation of the distribution of the next state of the original MDP, i.e., \(T_{*}(s,a)=\mathbb{E}[s^{{}^{\prime}}|s,a]\). \(\mathcal{M_{*}}\) is easier to solve, as its state transition is deterministic. Note that if the environment is indeed deterministic, \(\mathcal{M_{*}}=\mathcal{M}\). Now we define a direct model-based extension of DDPG, called MDPG. MDPG improves the policy with the gradient of the long-term rewards of \(\mathcal{M_{*}}\) under policy \(\mu_{\theta}\) instead of the deterministic policy gradient, i.e., with the sampled direction \(\bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}Q_{*}^{\mu_{\theta}}(s,a)|_{a=\mu_{\theta}(s)}\), where \(Q_{*}^{\mu_{\theta}}(s,a)\) denotes the action value function of the augmented MDP. However, it is hard to represent the transition dynamics in complex environments, and this may cause the policy to move in a wrong direction on problems with a large state space, as shown in Section 5.2.
### The GDPG Algorithm
```
1 Initialize a positive weight \(\alpha\)
2 Initialize the transition network \(T(s,a|{\theta}^{T})\) with random weights \({\theta}^{T}\)
3 Initialize the original and augmented critic networks \(Q(s,a|{\theta}^{Q})\), \(Q_{*}(s,a|{\theta}^{Q_{*}})\) with random weights \({\theta}^{Q}\), \({\theta}^{Q_{*}}\)
4 Initialize the actor network \(\mu(s|{\theta}^{\mu})\) with random weights \({\theta}^{\mu}\)
5 Initialize the target networks \(Q^{{}^{\prime}}\), \({Q_{*}}^{{}^{\prime}}\) and \(\mu^{{}^{\prime}}\) with weights \({\theta}^{Q^{{}^{\prime}}}={\theta}^{Q},{\theta}^{Q_{*}^{{}^{\prime}}}={\theta }^{Q_{*}},{\theta}^{{\mu}^{{}^{\prime}}}={\theta}^{\mu}\)
6 Initialize Experience Replay buffer \(\mathcal{B}\)
7for episode\(=0,...,N-1\) do
8 Initialize a random process \(\mathcal{N}\) for action exploration
9 Receive initial observation state \(s_{0}\).
10 for \(t=1,...,T\) do
11 Select action \(a_{t}=\mu(s_{t}|{\theta}^{\mu})+{\mathcal{N}}_{t}\) according to the current policy and exploration noise
12 Execute action \(a_{t}\), observe reward \(r_{t}\) and new state \(s_{t+1}\), and store transition \((s_{t},a_{t},r_{t},s_{t+1})\) in \(\mathcal{B}\)
13 Sample a random minibatch of \(N\) transitions \((s_{i},a_{i},r_{i},s_{i+1})\) from \(\mathcal{B}\)
14 Set \(y_{i}=r_{i}+\gamma Q^{{}^{\prime}}(s_{i+1},{\mu}^{{}^{\prime}}(s_{i+1}|{\theta }^{{\mu}^{{}^{\prime}}})|{\theta}^{Q^{{}^{\prime}}})\)
15 Update the critic \(Q\) by minimizing the loss: \(L_{1}=\frac{1}{N}\sum_{i}{(y_{i}-Q(s_{i},a_{i}|{\theta}^{Q}))}^{2}\)
16 Set \(y_{i}^{{}^{\prime}}=r_{i}+\gamma Q_{*}^{{}^{\prime}}(T(s_{i},a_{i}|{\theta}^{T }),{\mu}^{{}^{\prime}}(T(s_{i},a_{i}|{\theta}^{T})|{\theta}^{{\mu}^{{}^{\prime }}})|{\theta}^{Q_{*}^{{}^{\prime}}})\)
17 Update the augmented critic \(Q_{*}\) by minimizing the loss: \(L_{2}=\frac{1}{N}\sum_{i}{(y_{i}^{{}^{\prime}}-Q_{*}(s_{i},a_{i}|{\theta}^{Q_{ *}}))}^{2}\)
18 Update the transition \(T\) by minimizing the loss: \(L_{3}=\frac{1}{N}\sum_{i}{(s_{i+1}-T(s_{i},a_{i}|{\theta}^{T}))}^{2}\)
19 Update the actor by the sampled policy gradient and target networks:
\[\begin{split}\bigtriangledown_{\theta^{\mu}}J(\theta^{\mu})=& \frac{1}{N}\sum_{i}(1-\alpha)\bigtriangledown_{{\theta}^{\mu}}\mu (s|{\theta}^{\mu})\bigtriangledown_{a}Q_{*}(s,a|{\theta}^{Q_{*}})+\alpha \bigtriangledown_{{\theta}^{\mu}}\mu(s|{\theta}^{\mu})\bigtriangledown_{a}Q(s, a|{\theta}^{Q})\end{split}\]
\[{\theta}^{Q^{{}^{\prime}}}=\tau{\theta}^{Q}+(1-\tau){\theta}^{Q^{{}^{\prime}}} ;{\theta}^{Q_{*}^{{}^{\prime}}}=\tau{\theta}^{Q_{*}}+(1-\tau){\theta}^{Q_{*}^{ {}^{\prime}}};{\theta}^{\mu^{{}^{\prime}}}=\tau{\theta}^{\mu}+(1-\tau){\theta} ^{\mu^{{}^{\prime}}}\]
```
**Algorithm 1** GDPG algorithm
On the one hand, only solving the model-based augmented MDP may be too myopic. On the other hand, the model-free algorithm suffers from high sample complexity, as mentioned. Consequently, we consider a program that maximizes the long-term rewards of the augmented MDP, with the constraint that the long-term rewards of the augmented MDP are no larger than those of the original MDP, i.e.,
\[\max_{\theta}J_{*}(\mu_{\theta}),\ s.t.J_{*}(\mu_{\theta})\leq J(\mu_{\theta}).\] (3)
It is easy to check that the optimum of this program is no larger than \(\max_{\theta}J(\mu_{\theta})\), and that it serves as a lower bound on the long-term rewards of the original MDP. The intuition of this program is to optimize a model-based objective which is easier to solve and whose improvement guarantees the improvement of the original objective.
If the value function is convex in states, the long-term rewards of \(\mathcal{M_{*}}\) under policy \(\mu_{\theta}\), \(J_{*}(\mu_{\theta})\), are no larger than the long-term rewards of \(\mathcal{M}\), as stated in Theorem 3. In that case, the program reduces to maximizing the model-based objective. The proof is given in Appendix D.
**Theorem 3**.: _If \(V^{\mu_{\theta}}(s)\) is convex in \(s\), \(J(\mu_{\theta})\geq J_{*}(\mu_{\theta}).\)_
When the value function is not convex, it is hard to solve the program directly. Therefore, we choose to optimize its Lagrangian dual program,
\[\min_{\alpha\geq 0}\max_{\theta}J_{*}(\mu_{\theta})+\alpha(J(\mu_{\theta})-J_{ *}(\mu_{\theta})).\] (4)
Then for each choice of \(\alpha\), we use the gradient of \(J_{*}(\mu_{\theta})+\alpha(J(\mu_{\theta})-J_{*}(\mu_{\theta}))\), i.e.,
\[(1-\alpha)\bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}Q_{*}^{\mu_{\theta}}(s,a)+\alpha\bigtriangledown_{\theta}\mu_{\theta}(s)\bigtriangledown_{a}Q^{\mu_{\theta}}(s,a),\] (5)
which generalizes the gradient of the DDPG algorithm, to improve the policy by stochastic gradient ascent, where \(Q_{*}^{\mu_{\theta}}(s,a)\) denotes the action value function of the augmented MDP. However, the estimation of the value function of the augmented MDP relies on the expectation of the distribution of the next state, which is unknown. To overcome this challenge, we follow the idea of [22], where neural networks are applied to predict the next state. Unlike [22], which uses model predictive control as the control policy, we apply the learned state-transition estimator to estimate the action-value function of the augmented MDP. We now propose the Generalized Deterministic Policy Gradient (GDPG) algorithm, as shown in Algorithm 1. Apart from training the actor and the critic, we also train a transition network \(T\) which predicts the next state.
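As an illustration of the actor update implied by (5) and line 19 of Algorithm 1, here is a minimal PyTorch-style sketch (ours, not the authors' implementation; it assumes an actor module `mu` and critic modules `Q` and `Q_star` that accept batched `(state, action)` inputs, and an optimizer `actor_opt` holding only the actor's parameters):

```
# One stochastic-gradient-ascent step on the weighted objective
# (1 - alpha) * J_* + alpha * J, cf. Eqs. (4)-(5); autograd through
# Q(s, mu(s)) reproduces grad_theta mu(s) * grad_a Q(s, a)|_{a = mu(s)}.
import torch

def gdpg_actor_update(mu, Q, Q_star, actor_opt, states, alpha):
    actions = mu(states)                             # a = mu_theta(s)
    obj = ((1.0 - alpha) * Q_star(states, actions) +
           alpha * Q(states, actions)).mean()
    actor_opt.zero_grad()
    (-obj).backward()                                # ascent on obj; only the actor is stepped
    actor_opt.step()
    return obj.item()
```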
## 5 Experiments
In this section, we design a series of experiments to evaluate GDPG. We aim to investigate the following questions: (1) How does the value of \(\alpha\) affect the performance on a toy problem with general state transitions? (2) How does GDPG compare with DDPG, MDPG, and other state-of-the-art methods on continuous control benchmarks? We first illustrate the influence of the weight \(\alpha\) in a toy environment, ComplexPoint-v0, with general state transitions. Then we evaluate GDPG in a number of continuous control benchmark tasks in OpenAI Gym [2], including a classic control problem [21] and tasks in the Box2D and MuJoCo [33] simulators. The details of our benchmarks are given in Appendix E. We compare GDPG with the following baselines: (a) DDPG, (b) MDPG, (c) TRPO, (d) ACKTR. For the experiments, we run each algorithm for 1M steps on each environment over 5 random seeds. Note that the configuration of GDPG is the same as that of DDPG except for the transition network. The full configuration is given in Appendix E. We use the averaged return of the previous 100 episodes as the performance metric.
### The ablation study of GDPG
To better understand the effect of \(\alpha\) in the dual function, we evaluate GDPG with six different choices of the weight, \(\alpha=0,0.25,0.5,0.75,1,2\), in ComplexPoint-v0. Figure 1(a) shows a snapshot of this environment, where the state is the coordinates of the agent in the 5D space while the feasible action set is \([-0.1,0.1]^{5}\). The state transition is a convex combination of the deterministic transition \(T(s,a)=s+a\) with probability \(f(s,a)\), and the uniform distribution on \([-1,1]^{5}\) with probability \(1-f(s,a)\), where \(f(s,a)=||a||_{2}^{2}/0.05\). The reward function is \(r(s,a)=-||s+a||_{2}\), i.e., the negative distance between the agent and the origin. The task is terminated either when the agent enters the termination area or when the number of steps exceeds a threshold of 100 steps. Figure 1(b) shows the performance comparison, and Figure 1(c) and Figure 1(d) correspond to its earlier stage and convergence stage, which illustrate convergence and performance more clearly. As shown, \(\alpha=1\), which corresponds to DDPG, results in poor performance and slow convergence; the slow convergence is attributable to the computational complexity of the gradient in this environment. For \(\alpha=0\), the goal corresponds to optimizing only the augmented MDP, which performs better than DDPG as it efficiently reduces sample complexity. However, it is too myopic, as it solely focuses on the augmented MDP, which may deviate from the original objective and limit its performance. We observe that the best performance is achieved when \(\alpha=0.5\). For \(\alpha\in[0,1]\), the weighted objective can be viewed as a convex combination of the model-free objective and the model-based objective, and \(\alpha\) trades off convergence against performance: relying more on the model-based term reduces sample complexity but may introduce model bias, while relying more on the model-free term avoids this bias at the cost of higher sample complexity. Note that the choice \(\alpha=2\) achieves the worst performance. Recalling (5), the reason is that setting \(\alpha\) larger than 1 may push the policy gradient in the opposite direction and induce large variance.
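The toy dynamics are simple enough to sketch directly. The following is a hypothetical re-implementation (ours) based on the description above; the initial-state distribution and the radius of the termination area are not specified in the text and are placeholder assumptions here:

```
# Hypothetical ComplexPoint dynamics: 5D state, actions clipped to [-0.1, 0.1]^5,
# deterministic step s + a with probability f(s, a) = ||a||_2^2 / 0.05 (<= 1 since
# ||a||_2^2 <= 5 * 0.01), otherwise a uniform draw from [-1, 1]^5; reward -||s + a||_2.
import numpy as np

class ComplexPoint:
    def __init__(self, seed=0, max_steps=100, goal_radius=0.05):   # goal_radius: our assumption
        self.rng = np.random.default_rng(seed)
        self.max_steps, self.goal_radius = max_steps, goal_radius

    def reset(self):
        self.state = self.rng.uniform(-1.0, 1.0, size=5)           # assumed initial distribution
        self.steps = 0
        return self.state.copy()

    def step(self, action):
        action = np.clip(action, -0.1, 0.1)
        reward = -np.linalg.norm(self.state + action)
        if self.rng.random() < np.sum(action**2) / 0.05:
            self.state = self.state + action                       # deterministic branch T(s, a)
        else:
            self.state = self.rng.uniform(-1.0, 1.0, size=5)       # stochastic branch
        self.steps += 1
        done = np.linalg.norm(self.state) < self.goal_radius or self.steps >= self.max_steps
        return self.state.copy(), reward, done, {}
```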
<figure><img src="content_image/1807.03708/point_env.png"><figcaption>(a) The ComplexPoint environment.</figcaption></figure>
### Performance comparison with baselines on continuous control benchmarks
We now present and discuss the findings from our experiments on several continuous control tasks, all of which are standard benchmarks defined in OpenAI Gym [2]. The tasks range from low-dimensional to high-dimensional input spaces. For the baseline algorithms, we use the implementations from OpenAI Baselines [4]. Figures 2, 3 and 4 show the sample mean and the standard deviation of the averaged returns in each environment. As shown in Figure 2, GDPG outperforms the other baselines in tasks with low-dimensional input space, including a classic continuous control task and a task simulated by Box2D. From Figures 3 and 4, we observe that GDPG outperforms the baselines on high-dimensional tasks simulated by MuJoCo by a large margin, especially in Swimmer-v2, HalfCheetah-v2, and Humanoid-v2. This demonstrates that GDPG combines the model-based augmented MDP and the original MDP efficiently. Note that the direct model-based extension of DDPG, MDPG, performs the worst in all environments except Swimmer-v2. This shows that the model-based technique alone cannot handle complex settings like MuJoCo, as it is hard to represent the transition dynamics.
<figure><img src="content_image/1807.03708/x4.png"><figcaption>(a) Pendulum-v0.</figcaption></figure>
<figure><img src="content_image/1807.03708/x6.png"><figcaption>(a) Swimmer-v2.</figcaption></figure>
<figure><img src="content_image/1807.03708/x8.png"><figcaption>(a) HumanoidStandup-v2.</figcaption></figure>
## 6 Related Work
Model-based algorithms have been widely studied [11, 16, 19, 20] in recent years. Iterative LQG [14] applies model-based methods and assumes a specific form of both the transition dynamics and the value function, while [28, 8, 12] generate synthetic samples with the learned model. Different from traditional model-based methods, we optimize a dual function that involves both the model-based augmented MDP and the original MDP. Perhaps the model-based approach most closely related to our work is PILCO [3], which learns the transition model by Gaussian processes. With this non-parametric transition model, [3] applies policy improvement based on analytic policy gradients. However, this method does not scale well to nonlinear transition dynamics or high-dimensional state spaces. Different from [3], we do not rely on assumptions about the transition model.
## 7 Conclusion
Most existing works on policy gradients assume stochastic state transitions, while realistic settings often involve deterministic state transitions. In this paper, we study a setting with a general state transition that is a convex combination of a stochastic continuous function and a deterministic discontinuous function. We prove the existence of the deterministic policy gradient for a certain set of discount factors. We propose the GDPG algorithm to reduce the sample complexity of the deterministic policy gradient. GDPG solves a program that maximizes the long-term rewards of the model-based augmented MDP with the constraint that this objective serves as a lower bound on that of the original MDP. We compare GDPG with MDPG and state-of-the-art algorithms on several continuous control benchmarks. Results show that GDPG substantially outperforms the other baselines in terms of convergence and long-term rewards. For future work, how to determine the optimal weight in the dual program remains to be studied. It is also worth studying whether the deterministic policy gradient exists in more general settings that involve multiple deterministic state transitions. Last but not least, it is promising to apply the model-based technique presented in this paper to other model-free algorithms.
## References
* [1] S. J. Bradtke. Reinforcement learning applied to linear quadratic regulation. In _Advances in neural information processing systems_, pages 295–302, 1993.
* [2] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
* [3] M. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In _Proceedings of the 28th International Conference on machine learning (ICML-11)_, pages 465–472, 2011.
* [4] P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, and Y. Wu. Openai baselines. https://github.com/openai/baselines, 2017.
* [5] M. Fairbank. Reinforcement learning by value gradients. _arXiv preprint arXiv:0803.3539_, 2008.
* [6] A. Farnell. Limits for the characteristic roots of a matrix. _Bulletin of the American Mathematical Society_, 50(10):789–794, 1944.
* [7] S. Gu, E. Holly, T. Lillicrap, and S. Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In _Robotics and Automation (ICRA), 2017 IEEE International Conference on_, pages 3389–3396. IEEE, 2017.
* [8] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-based acceleration. In _International Conference on Machine Learning_, pages 2829–2838, 2016.
* [9] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In _Advances in Neural Information Processing Systems_, pages 2944–2952, 2015.
* [10] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. _The International Journal of Robotics Research_, 32(11):1238–1274, 2013.
* [11] J. Koutník, G. Cuccu, J. Schmidhuber, and F. Gomez. Evolving large-scale neural networks for vision-based reinforcement learning. In _Proceedings of the 15th annual conference on Genetic and evolutionary computation_, pages 1061–1068. ACM, 2013.
* [12] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. _arXiv preprint arXiv:1802.10592_, 2018.
* [13] S. Levine and V. Koltun. Guided policy search. In _International Conference on Machine Learning_, pages 1–9, 2013.
* [14] W. Li and E. Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In _ICINCO (1)_, pages 222–229, 2004.
* [15] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. _arXiv preprint arXiv:1509.02971_, 2015.
* [16] R. Lioutikov, A. Paraschos, J. Peters, and G. Neumann. Sample-based informationl-theoretic stochastic optimal control. In _Robotics and Automation (ICRA), 2014 IEEE International Conference on_, pages 3896–3902. IEEE, 2014.
* [17] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_, 2013.
* [18] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. _Nature_, 518(7540):529–533, 2015.
* [19] T. M. Moldovan, S. Levine, M. I. Jordan, and P. Abbeel. Optimism-driven exploration for nonlinear systems. In _Robotics and Automation (ICRA), 2015 IEEE International Conference on_, pages 3239–3246. IEEE, 2015.
* [20] W. Montgomery and S. Levine. Guided policy search as approximate mirror descent. _arXiv preprint arXiv:1607.04614_, 2016.
* [21] A. W. Moore. Efficient memory-based learning for robot control. 1990.
* [22] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. _arXiv preprint arXiv:1708.02596_, 2017.
* [23] J. Peters and J. A. Bagnell. Policy gradient methods. In _Encyclopedia of Machine Learning_, pages 774–776. Springer, 2011.
* [24] H. L. Royden and P. Fitzpatrick. _Real analysis_, volume 2. Macmillan New York, 1968.
* [25] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In _International Conference on Machine Learning_, pages 1889–1897, 2015.
* [26] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In _International Conference on Machine Learning (ICML_, 2014.
* [27] D. R. Song, C. Yang, C. McGreavy, and Z. Li. Recurrent network-based deterministic policy gradient for solving bipedal walking challenge on rugged terrains. _arXiv preprint arXiv:1710.02896_, 2017.
* [28] R. S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In _Machine Learning Proceedings 1990_, pages 216–224. Elsevier, 1990.
* [29] R. S. Sutton and A. G. Barto. _Reinforcement learning: An introduction_, volume 1. MIT press Cambridge, 1998.
* [30] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In _Advances in neural information processing systems_, pages 1057–1063, 2000.
* [31] G. Tesauro. Temporal difference learning and td-gammon. _Communications of the ACM_, 38(3):58–68, 1995.
* [32] E. Todorov. Linearly-solvable markov decision problems. In _Advances in neural information processing systems_, pages 1369–1376, 2007.
* [33] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In _Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on_, pages 5026–5033. IEEE, 2012.
* [34] N. Wahlström, T. B. Schön, and M. P. Deisenroth. From pixels to torques: Policy learning with deep dynamical models. _arXiv preprint arXiv:1502.02251_, 2015.
* [35] C. J. Watkins and P. Dayan. Q-learning. _Machine learning_, 8(3-4):279–292, 1992.
* [36] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In _Advances in neural information processing systems_, pages 2746–2754, 2015.
* [37] P. J. Werbos. A menu of designs for reinforcement learning over time. _Neural networks for control_, pages 67–95, 1990.
* [38] Y. Wu, E. Mansimov, R. B. Grosse, S. Liao, and J. Ba. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In _Advances in neural information processing systems_, pages 5285–5294, 2017.
* [39] T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama. Analysis and improvement of policy gradient estimation. In _Advances in Neural Information Processing Systems_, pages 262–270, 2011.
## Appendix A Proof of Lemma 1
Proof.: Recalling the definition of \(V^{\mu_{\theta}}(s)\), we have
\[V^{\mu_{\theta}}(s)=Q^{\mu_{\theta}}(s,\mu_{\theta}(s))=r(s,\mu_{\theta}(s))+\gamma V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}.\] (6)
\[\begin{split}\bigtriangledown_{s}V^{\mu_{\theta}}(s)& =\bigtriangledown_{s}r(s,\mu_{\theta}(s))+\gamma\bigtriangledown_ {s}T(s,\mu_{\theta}(s))\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{ }^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}.\end{split}\] (7)
By unrolling (7) with infinite steps, we get
\[\bigtriangledown_{s}V^{\mu_{\theta}}(s)=\sum_{t=0}^{\infty}\int_{\mathcal{S}}{ \gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta}) \bigtriangledown_{s^{{}^{\prime}}}r(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime }}))ds^{{}^{\prime}},\] (8)
where \(I(s,s^{{}^{\prime}},t,\mu_{\theta})\) is an indicator function that indicates whether \(s^{{}^{\prime}}\) is reached after \(t\) steps from the state \(s\) following the policy \(\mu_{\theta}\). Here, \(g(s,t,\mu_{\theta})=\prod_{i=0}^{t-1}\bigtriangledown_{s_{i}}T(s_{i},\mu_{\theta}(s_{i}))\), where \(s_{0}=s\) and \(s_{i}\) is the state after \(i\) steps following policy \(\mu_{\theta}\); the state transitions and the policy are both deterministic. We now prove that for any \(\mu_{\theta},s,s^{{}^{\prime}}\) and any discount factor \(\gamma\in[0,\frac{1}{nc})\), the series \(\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})\) converges.
Along the trajectory generated from the initial state \(s\), each state \(s^{\prime}\) falls into one of three cases, due to the deterministic state transitions, as analyzed below:
1. Never visited: \(\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{ \theta})=\mathbf{0}.\)
2. Visited once: Let \(t_{s^{\prime}}\) denote the number of steps that it takes to reach the state \(s^{\prime}\), then \(\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{ \theta})={\gamma}^{t_{s^{{}^{\prime}}}}g(s,t_{s^{{}^{\prime}}},\mu_{\theta}).\)
3. Visited infinitely many times: Let \(t_{1}\) denote the number of steps it takes to reach \(s^{\prime}\) for the first time. The state \(s^{\prime}\) is then revisited every \(t_{2}\) steps after the previous visit. \[\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})=\sum_{k=0}^{\infty}{\gamma}^{t_{1}+kt_{2}}g(s,t_{1}+kt_{2},\mu_{\theta}).\] (9) Since, by the definition of \(g\), \(g(s,t_{1}+kt_{2},\mu_{\theta})=g(s,t_{1},\mu_{\theta}){(g(s,t_{2},\mu_{\theta}))}^{k}\), we have \[\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})={\gamma}^{t_{1}}g(s,t_{1},\mu_{\theta})\sum_{k=0}^{\infty}{({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta}))}^{k}.\] (10) The sum of the absolute values of the entries in any row or column of the matrix \(g(s,t_{2},\mu_{\theta})\) is no larger than \({(nc)}^{t_{2}}\). If we choose \(\gamma\) such that \(\gamma<\frac{1}{nc}\), then by [6] the absolute value of any eigenvalue of \({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta})\) is at most \({\gamma}^{t_{2}}{(nc)}^{t_{2}}<1\). By representing \({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta})\) in Jordan normal form, i.e., \({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta})=MJM^{-1}\), \[{\gamma}^{t_{1}}g(s,t_{1},\mu_{\theta})\sum_{k=0}^{\infty}{({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta}))}^{k}={\gamma}^{t_{1}}g(s,t_{1},\mu_{\theta})M\sum_{k=0}^{\infty}J^{k}M^{-1}.\] (11) As the absolute value of any eigenvalue of \({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta})\) is strictly less than \(1\), \(\sum_{k=0}^{\infty}J^{k}\) converges, and hence \(\sum_{k=0}^{\infty}{({\gamma}^{t_{2}}g(s,t_{2},\mu_{\theta}))}^{k}\) and \(\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})\) converge.
By Lebesgue’s Dominated Convergence Theorem [24], we may exchange the order of the limit and the integration.
\[\bigtriangledown_{s}V^{\mu_{\theta}}(s)=\int_{\mathcal{S}}\sum_{t=0}^{\infty}{ \gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{\theta}) \bigtriangledown_{s^{{}^{\prime}}}r(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime }}))ds^{{}^{\prime}}.\] (12)
By the continuity of \(T\) , \(r\) and \(\mu_{\theta}\), the gradient of \(V^{\mu_{\theta}}(s)\) over \(s\) exists.
∎
## Appendix B Proof of Theorem 1
Proof.: By the definition,
\[\begin{split}\bigtriangledown_{\theta}V^{\mu_{\theta}}(s)=& \bigtriangledown_{\theta}Q^{\mu_{\theta}}(s,\mu_{\theta}(s))\\ =&\bigtriangledown_{\theta}(r(s,\mu_{\theta}(s))+ \gamma V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s) )})\\ =&\bigtriangledown_{\theta}\mu_{\theta}(s) \bigtriangledown_{a}r(s,a)|_{a=\mu_{\theta}(s)}+\gamma\bigtriangledown_{\theta }V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}\\ +&\gamma\bigtriangledown_{\theta}\mu_{\theta}(s) \bigtriangledown_{a}T(s,a)|_{a=\mu_{\theta}(s)}\bigtriangledown_{s^{{}^{\prime }}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,a)}.\end{split}\] (13)
With the indicator function \(I(s,s^{{}^{\prime}},t,\mu_{\theta})\), we rewrite the equation (13):
\[\begin{split}\bigtriangledown_{\theta}V^{\mu_{\theta}}(s)=& \bigtriangledown_{\theta}\mu_{\theta}(s)(\bigtriangledown_{a}r(s, a)|_{a=\mu_{\theta}(s)}+\gamma\bigtriangledown_{a}T(s,a)|_{a=\mu_{\theta}(s)} \bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{ \prime}}=T(s,a)})\\ &+\int_{\mathcal{S}}\gamma I(s,s^{{}^{\prime}},1,\mu_{\theta}) \bigtriangledown_{\theta}V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}.\\ =&\bigtriangledown_{\theta}\mu_{\theta}(s)( \bigtriangledown_{a}r(s,a)|_{a=\mu_{\theta}(s)}+\gamma\bigtriangledown_{a}T(s, a)|_{a=\mu_{\theta}(s)}\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{ }^{\prime}})|_{s^{{}^{\prime}}=T(s,a)})\\ &+\int_{\mathcal{S}}\gamma I(s,s^{{}^{\prime}},1,\mu_{\theta}) \bigtriangledown_{\theta}\mu_{\theta}(s^{{}^{\prime}})(\bigtriangledown_{a^{{} ^{\prime}}}r(s^{{}^{\prime}},a^{{}^{\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s ^{{}^{\prime}})}+\gamma\bigtriangledown_{a^{{}^{\prime}}}T(s^{{}^{\prime}},a^{ {}^{\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s^{{}^{\prime}})}\\ &\bigtriangledown_{s^{{}^{\prime\prime}}}V^{\mu_{\theta}}(s^{{}^{ \prime\prime}})|_{s^{{}^{\prime\prime}}=T(s^{{}^{\prime}},a^{{}^{\prime}})})ds ^{{}^{\prime}}+\int_{\mathcal{S}}\gamma I(s,s^{{}^{\prime}},1,\mu_{\theta}) \int_{\mathcal{S}}\gamma I(s^{{}^{\prime}},s^{{}^{\prime\prime}},1,\mu_{\theta })\bigtriangledown_{\theta}V^{\mu_{\theta}}(s^{{}^{\prime\prime}})ds^{{}^{ \prime\prime}}ds^{{}^{\prime}}.\\ =&\bigtriangledown_{\theta}\mu_{\theta}(s)( \bigtriangledown_{a}r(s,a)|_{a=\mu_{\theta}(s)}+\gamma\bigtriangledown_{a}T(s, a)|_{a=\mu_{\theta}(s)}\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{ }^{\prime}})|_{s^{{}^{\prime}}=T(s,a)})\\ &+\int_{\mathcal{S}}\gamma I(s,s^{{}^{\prime}},1,\mu_{\theta}) \bigtriangledown_{\theta}\mu_{\theta}(s^{{}^{\prime}})(\bigtriangledown_{a^{{} ^{\prime}}}r(s^{{}^{\prime}},a^{{}^{\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s ^{{}^{\prime}})}+\gamma\bigtriangledown_{a^{{}^{\prime}}}T(s^{{}^{\prime}},a^{ {}^{\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s^{{}^{\prime}})}\\ &\bigtriangledown_{s^{{}^{\prime\prime}}}V^{\mu_{\theta}}(s^{{}^{ \prime\prime}})|_{s^{{}^{\prime\prime}}=T(s^{{}^{\prime}},a^{{}^{\prime}})})ds ^{{}^{\prime}}+\int_{\mathcal{S}}\gamma^{2}I(s,s^{{}^{\prime\prime}},2,\mu_{ \theta})\bigtriangledown_{\theta}V^{\mu_{\theta}}(s^{{}^{\prime\prime}})ds^{{} ^{\prime\prime}}.\end{split}\] (14)
By unrolling (14) with infinite steps, we get
\[\begin{split}\bigtriangledown_{\theta}V^{\mu_{\theta}}(s)& =\int_{\mathcal{S}}\sum_{t=0}^{\infty}{\gamma}^{t}I(s,s^{{}^{ \prime}},t,\mu_{\theta})\bigtriangledown_{\theta}\mu_{\theta}(s^{{}^{\prime}}) (\bigtriangledown_{a^{{}^{\prime}}}r(s^{{}^{\prime}},a^{{}^{\prime}})|_{a^{{}^ {\prime}}=\mu_{\theta}(s^{{}^{\prime}})}+\gamma\bigtriangledown_{a^{{}^{\prime }}}T(s^{{}^{\prime}},a^{{}^{\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s^{{}^{ \prime}})}\\ &\bigtriangledown_{s^{{}^{\prime\prime}}}V^{\mu_{\theta}}(s^{{}^{ \prime\prime}})|_{s^{{}^{\prime\prime}}=T(s^{{}^{\prime}},a^{{}^{\prime}})})ds ^{{}^{\prime}}.\end{split}\] (15)
By the definition of \(J(\mu_{\theta})\),
\[\begin{split}\bigtriangledown_{\theta}J(\mu_{\theta})=& \bigtriangledown_{\theta}\int_{\mathcal{S}}p_{0}(s)V^{\mu_{\theta }}(s)ds\\ =&\int_{\mathcal{S}}p_{0}(s)\bigtriangledown_{\theta }V^{\mu_{\theta}}(s)ds.\end{split}\] (16)
As
\[\rho^{\mu_{\theta}}(s^{{}^{\prime}})=\int_{\mathcal{S}}\sum_{t=0}^{\infty}{ \gamma}^{t}p_{0}(s)I(s,s^{{}^{\prime}},t,\mu_{\theta})ds.\] (17)
By exchanging the order of the integration, we get
\[\bigtriangledown_{\theta}J(\mu_{\theta})=\int_{\mathcal{S}}\rho^{\mu_{\theta}} (s)\bigtriangledown_{\theta}\mu_{\theta}(s)(\bigtriangledown_{a}r(s,a)|_{a=\mu _{\theta}(s)}+\gamma\bigtriangledown_{a}T(s,a)|_{a=\mu_{\theta}(s)} \bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{ \prime}}=T(s,a)})ds.\] (18)
∎
## Appendix C Proof of Theorem 2
Proof.: We first prove that for any continuous policy \(\mu_{\theta}\), there exists a discount factor \(\gamma\) such that the gradient of \(V^{\mu_{\theta}}(s)\) over \(s\) exists. Recalling the definition of \(V^{\mu_{\theta}}(s)\), we have
\[\begin{split} V^{\mu_{\theta}}(s)&=Q^{\mu_{\theta}}( s,\mu_{\theta}(s))\\ &=r(s,\mu_{\theta}(s))+\gamma f(s,\mu_{\theta}(s))V^{\mu_{\theta} }(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}+\gamma(1-f(s,\mu_{ \theta}(s)))\\ &\int_{\mathcal{S}}p(s^{{}^{\prime}}|s,a)|_{a=\mu_{\theta}(s)}V^{ \mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}.\end{split}\] (19)
Then
\[\begin{split}\bigtriangledown_{s}V^{\mu_{\theta}}(s)=& \bigtriangledown_{s}r(s,\mu_{\theta}(s))+\gamma\bigtriangledown_{ s}f(s,\mu_{\theta}(s))V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s, \mu_{\theta}(s))}+\gamma f(s,\mu_{\theta}(s))\\ &\bigtriangledown_{s}T(s,\mu_{\theta}(s))\bigtriangledown_{s^{{}^ {\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta} (s))}+\gamma(1-f(s,\mu_{\theta}(s)))\int_{\mathcal{S}}\bigtriangledown_{s}p(s^ {{}^{\prime}}|s,\mu_{\theta}(s))\\ & V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}-\gamma \bigtriangledown_{s}f(s,\mu_{\theta}(s))\int_{\mathcal{S}}p(s^{{}^{\prime}}|s, a)|_{a=\mu_{\theta}(s)}V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}.\end{split}\] (20)
By unrolling (20) with infinite steps, we get
\[\begin{split}\bigtriangledown_{s}V^{\mu_{\theta}}(s)=& \sum_{t=0}^{\infty}\int_{\mathcal{S}}{\gamma}^{t}g(s,t,\mu_{ \theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})(\bigtriangledown_{s^{{}^{\prime}}} r(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))+\gamma\bigtriangledown_{s^{{} ^{\prime}}}f(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))V^{\mu_{\theta}}(s^ {{}^{\prime\prime}})+\\ &\gamma(1-f(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))\int_{ \mathcal{S}}\bigtriangledown_{s^{{}^{\prime}}}p(s^{{}^{\prime\prime}}\mid{s}^{ {}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))V^{\mu_{\theta}}(s^{{}^{\prime \prime}})ds^{{}^{\prime\prime}}-\gamma\bigtriangledown_{s}f(s^{{}^{\prime}}, \mu_{\theta}(s^{{}^{\prime}}))\\ &\int_{\mathcal{S}}p(s^{{}^{\prime\prime}}|s^{{}^{\prime}},a^{{}^ {\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s^{{}^{\prime}})}V^{\mu_{\theta}}(s^ {{}^{\prime\prime}})ds^{{}^{\prime\prime}})ds,\end{split}\] (21)
where \(I(s,s^{{}^{\prime}},t,\mu_{\theta})\) is an indicator function that indicates whether \(s^{{}^{\prime}}\) is obtained after \(t\) steps from the state \(s\) following the policy \(\mu_{\theta}\) and the deterministic transition. \(g(s,t,\mu_{\theta})=\prod_{i=0}^{t-1}f(s_{i},\mu_{\theta}(s_{i})) \bigtriangledown_{s_{i}}T(s_{i},\mu_{\theta}(s_{i})),\) where \(s_{0}=s\). Here, as the policy is deterministic and the calculation of the gradient with \(\theta\) only involves the deterministic state transitions, \(s_{i}\) is the state after \(i\) steps following policy \(\mu_{\theta}\). By the same technique of the proof of Lemma 1, we get that there exists a discount factor \(\gamma(0<\gamma<1)\) such that
\[\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{ \theta})\] (22)
converges. In fact, we can choose \(\gamma\) such that \(\gamma\times\max_{s}f(s,\mu_{\theta}(s))<\frac{1}{nc}\), where \(n\) denotes the dimension of the state, and \(c\) be the maximum absolute value of elements of all matrices \(\bigtriangledown_{s}T(s,\mu_{\theta}(s))\).
If the condition A.1 holds, i.e., for any state \(s\), \(\max_{s}f(s,\mu_{\theta}(s))\leq\frac{1}{nc}\), by the proof of Lemma 1, for any discount factor, \(\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{ \theta})\) converges.
If the condition A.2 holds, we have
\[\gamma^{t_{2}}\text{max}_{\lambda}|\lambda(g(s,t_{2},\mu_{\theta}))|<1.\]
Thus for any discount factor \(\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{\theta})I(s,s^{{}^{\prime}},t,\mu_{ \theta})\) converges.
By Lebesgue’s dominated convergence theorem, we exchange the order of the limit and the integration:
\[\begin{split}\bigtriangledown_{s}V^{\mu_{\theta}}(s)=& \int_{\mathcal{S}}\sum_{t=0}^{\infty}{\gamma}^{t}g(s,t,\mu_{ \theta})I(s,s^{{}^{\prime}},t,\mu_{\theta})(\bigtriangledown_{s^{{}^{\prime}}} r(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))+\gamma\bigtriangledown_{s^{{} ^{\prime}}}f(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))V^{\mu_{\theta}}(s^ {{}^{\prime\prime}})+\\ &\gamma(1-f(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))\int_{ \mathcal{S}}\bigtriangledown_{s^{{}^{\prime}}}p(s^{{}^{\prime\prime}}\mid{s}^{ {}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))V^{\mu_{\theta}}(s^{{}^{\prime \prime}})ds^{{}^{\prime\prime}}-\gamma\bigtriangledown_{s}f(s^{{}^{\prime}}, \mu_{\theta}(s^{{}^{\prime}}))\\ &\int_{\mathcal{S}}p(s^{{}^{\prime\prime}}|s^{{}^{\prime}},a^{{}^ {\prime}})|_{a^{{}^{\prime}}=\mu_{\theta}(s^{{}^{\prime}})}V^{\mu_{\theta}}(s^ {{}^{\prime\prime}})ds^{{}^{\prime\prime}})ds,\end{split}\] (23)
By the continuity of \(T\), \(r\), \(f\) and \(\mu_{\theta}\), the gradient of \(V^{\mu_{\theta}}(s)\) over \(s\) exists.
Now we derive the form of the policy gradient. By definition,
\[\begin{split}\bigtriangledown_{\theta}V^{\mu_{\theta}}(s)=& \bigtriangledown_{\theta}r(s,\mu_{\theta}(s))+\gamma \bigtriangledown_{\theta}f(s,\mu_{\theta}(s))V^{\mu_{\theta}}(s^{{}^{\prime}}) |_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}+\gamma f(s,\mu_{\theta}(s))\\ &\bigtriangledown_{\theta}T(s,\mu_{\theta}(s))\bigtriangledown_{s ^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{ \theta}(s))}+\gamma(1-f(s,\mu_{\theta}(s)))\int_{\mathcal{S}}\bigtriangledown_ {\theta}p(s^{{}^{\prime}}|s,\mu_{\theta}(s))\\ & V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}-\gamma \bigtriangledown_{\theta}f(s,\mu_{\theta}(s))\int_{\mathcal{S}}p(s^{{}^{\prime }}|s,a)|_{a=\mu_{\theta}(s)}V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}+ \\ &\gamma f(s,\mu_{\theta}(s))\bigtriangledown_{\theta}V^{\mu_{ \theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,\mu_{\theta}(s))}+\gamma(1-f(s ,\mu_{\theta}(s)))\\ &\int_{\mathcal{S}}p(s^{{}^{\prime}}|s,a)|_{a=\mu_{\theta}(s)} \bigtriangledown_{\theta}V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}. \end{split}\] (24)
By unrolling (24) with infinite steps, we get
\[\begin{split}\bigtriangledown_{\theta}V^{\mu_{\theta}}(s)=& \int_{\mathcal{S}}\sum_{t=0}^{\infty}{\gamma}^{t}p(s,s^{{}^{ \prime}},t,\mu_{\theta})(\bigtriangledown_{\theta}r(s^{{}^{\prime}},\mu_{ \theta}(s^{{}^{\prime}}))+\gamma\bigtriangledown_{\theta}f(s^{{}^{\prime}},\mu _{\theta}(s^{{}^{\prime}}))V^{\mu_{\theta}}(s^{{}^{\prime\prime}})|_{s^{{}^{ \prime\prime}}=T(s,\mu_{\theta}(s^{{}^{\prime}}))}\\ &+\gamma f(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}})) \bigtriangledown_{\theta}T(s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}})) \bigtriangledown_{s^{{}^{\prime\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime\prime}} )|_{s^{{}^{\prime\prime}}=T(s,\mu_{\theta}(s^{{}^{\prime}}))}+\gamma(1-f(s^{{} ^{\prime}},\mu_{\theta}(s^{{}^{\prime}})))\\ &\int_{\mathcal{S}}\bigtriangledown_{\theta}p(s^{{}^{\prime\prime }}|s^{{}^{\prime}},\mu_{\theta}(s^{{}^{\prime}}))V^{\mu_{\theta}}(s^{{}^{ \prime\prime}})ds^{{}^{\prime\prime}}-\gamma\bigtriangledown_{\theta}f(s^{{}^{ \prime}},\mu_{\theta}(s^{{}^{\prime}}))\\ &\int_{\mathcal{S}}p(s^{{}^{\prime\prime}}|s^{{}^{\prime}},a)|_{a =\mu_{\theta}(s^{{}^{\prime}})}V^{\mu_{\theta}}(s^{{}^{\prime\prime}})ds^{{}^{ \prime\prime}})ds^{{}^{\prime}},\end{split}\] (25)
where \(p(s,s^{{}^{\prime}},t,\mu_{\theta})\) denotes the probability density of the state \(s^{{}^{\prime}}\) after \(t\) steps following the policy \(\mu_{\theta}\). By the definition of \(J(\mu_{\theta})\) and the same technique as the proof of Theorem 1, we get (2). By definition,
\[Q^{\mu_{\theta}}(s,a)=r(s,a)+\gamma f(s,a)V^{\mu_{\theta}}(s^{{}^{\prime}})|_{ s^{{}^{\prime}}=T(s,a)}+\gamma(1-f(s,a))\int_{\mathcal{S}}p(s^{{}^{\prime}}|s, a)V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}.\] (26)
Then
\[\begin{split}\bigtriangledown_{a}Q^{\mu_{\theta}}(s,a)=&\bigtriangledown_{a}r(s,a)+\gamma\bigtriangledown_{a}f(s,a)V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,a)}+\gamma f(s,a)\bigtriangledown_{a}T(s,a)\bigtriangledown_{s^{{}^{\prime}}}V^{\mu_{\theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T(s,a)}\\&+\gamma(1-f(s,a))\int_{\mathcal{S}}\bigtriangledown_{a}p(s^{{}^{\prime}}|s,a)V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}-\gamma\bigtriangledown_{a}f(s,a)\int_{\mathcal{S}}p(s^{{}^{\prime}}|s,a)V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}}.\end{split}\] (27)
Thus, we get that the policy gradient of (2) is equivalent to the form of the DPG theorem. ∎
## Appendix D Proof of Theorem 3
Proof.: By definition, we have
\[\forall s,V^{\mu_{\theta}}(s)=r(s,\mu_{\theta}(s))+\gamma\int_{s^{{}^{\prime}}\sim D(s,\mu_{\theta}(s))}V^{\mu_{\theta}}(s^{{}^{\prime}})ds^{{}^{\prime}},\] (28)
where \(D(s,\mu_{\theta}(s))\) denotes the distribution of the next state. As the value function is convex, we get
\[\forall s,V^{\mu_{\theta}}(s)\geq r(s,\mu_{\theta}(s))+\gamma V^{\mu_{\theta}} (s^{{}^{\prime}})|_{s^{{}^{\prime}}=T^{*}(s,\mu_{\theta}(s))}.\] (29)
By definition,
\[\forall s,V_{*}^{\mu_{\theta}}(s)=r(s,\mu_{\theta}(s))+\gamma V_{*}^{\mu_{ \theta}}(s^{{}^{\prime}})|_{s^{{}^{\prime}}=T^{*}(s,\mu_{\theta}(s))}.\] (30)
Thus
\[\forall s,V^{\mu_{\theta}}(s)-V_{*}^{\mu_{\theta}}(s)\geq\gamma(V^{\mu_{\theta }}(s^{{}^{\prime}})-V_{*}^{\mu_{\theta}}(s^{{}^{\prime}}))|_{s^{{}^{\prime}}=T ^{*}(s,\mu_{\theta}(s))}.\] (31)
As these two value functions are bounded, there is a lower bound \(C\) such that
\[\forall s,V^{\mu_{\theta}}(s)-V_{*}^{\mu_{\theta}}(s)\geq C.\] (32)
Applying (31) repeatedly and bounding the remainder with (32), we get \(V^{\mu_{\theta}}(s)-V_{*}^{\mu_{\theta}}(s)\geq{\gamma}^{n}C\) for every \(n\); letting \(n\to\infty\) (recall that \(0<\gamma<1\)), we obtain
\[\forall s,V^{\mu_{\theta}}(s)\geq V_{*}^{\mu_{\theta}}(s).\] (33)
Note that
\[J(\mu_{\theta})=\int_{\mathcal{S}}p_{0}(s)V^{\mu_{\theta}}(s)ds.\] (34)
and
\[J_{*}(\mu_{\theta})=\int_{\mathcal{S}}p_{0}(s)V_{*}^{\mu_{\theta}}(s)ds.\] (35)
Thus \(J(\mu_{\theta})\geq J_{*}(\mu_{\theta}).\) ∎
## Appendix E Implementation Details
In this section we describe the details of the implementation of GDPG. The configuration of the actor network and the augmented critic network is the same as in the OpenAI Baselines implementation. Each network has two fully connected layers with 64 units per layer. The activation function is ReLU, the batch size is \(128\), the learning rate of the actor is \({10}^{-4}\), and the learning rate of the critic is \({10}^{-3}\).
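A minimal sketch of this configuration (assuming a PyTorch implementation; the class names, the Tanh output squashing of the actor, and the environment sizes used below are illustrative assumptions, not details taken from the GDPG code):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim, max_action=1.0):
        super().__init__()
        # two fully connected hidden layers with 64 units each, ReLU activations
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),   # Tanh squashing is an assumption
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),                        # scalar Q-value
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(17, 6), Critic(17, 6)   # e.g. HalfCheetah-v2 sizes from Table 2
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)     # actor learning rate 1e-4
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)   # critic learning rate 1e-3
batch_size = 128
```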
We exploit the model-based technique by estimating the state transition function using deep neural networks. For problems with a low-dimensional input space, including ComplexPoint-v0, Pendulum-v0, HalfCheetah-v2 and LunarLanderContinuous-v2, we use a two-layer fully connected structure for the transition network. For more complex problems, including Humanoid-v2 and HumanoidStandup-v2, we apply a convolutional neural network (CNN). In particular, the network contains two convolutional layers followed by a fully connected layer; the configuration of the convolutional layers is listed in Table 1, with a sketch of the resulting network given after the table. The learning rate of the transition network is \({10}^{-3}\). We also add an \(L_{2}\) norm regularizer to the loss, and the batch size is \(128\).
Since the weight \(\alpha\) of our objective affects the performance of GDPG, as discussed in Section 5.3, we tested different values of \(\alpha\) on all environments and found that \(\alpha=0.9\) achieves the best performance in all of them.
Parameter | Value
---|---
Filters for Layer 1 | 32
Filters for Layer 2 | 64
Kernel Size | 5
Padding Mode | Same
Pooling Size | 2
Strides | 2
Activation Function | ReLU
Table 1: Configurations.
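A sketch of the transition network for the high-dimensional tasks, following the paragraph above and Table 1 (two convolutional layers followed by a fully connected layer). Treating the concatenated state–action vector as a one-channel 1-D signal, regressing the next state, and using Adam's weight decay as the \(L_{2}\) regularizer are assumptions made only for illustration:

```python
import torch
import torch.nn as nn

class TransitionNet(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),   # 32 filters, "same" padding
            nn.MaxPool1d(kernel_size=2, stride=2),                   # pooling size 2, stride 2
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),  # 64 filters
            nn.MaxPool1d(kernel_size=2, stride=2),
        )
        in_len = state_dim + action_dim
        self.fc = nn.Linear(64 * (in_len // 4), state_dim)           # predict the next state

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1).unsqueeze(1)          # (batch, 1, state+action)
        return self.fc(self.conv(x).flatten(1))

model = TransitionNet(376, 17)                                       # Humanoid-v2 sizes from Table 2
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,            # transition learning rate 1e-3
                             weight_decay=1e-4)                      # stand-in for the L2 regularizer
loss_fn = nn.MSELoss()
```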
Environment | \(\dim(\mathcal{S})\) | \(\dim(\mathcal{A})\)
---|---|---
ComplexPoint-v0 | 5 | 5
Pendulum-v0 | 3 | 1
LunarLanderContinuous-v2 | 8 | 2
Swimmer-v2 | 8 | 2
HalfCheetah-v2 | 17 | 6
HumanoidStandup-v2 | 376 | 17
Humanoid-v2 | 376 | 17
Table 2: List of environments.
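The dimensionalities in Table 2 can be read directly from the Gym API; a small illustrative script (requires gym, plus the Box2D/MuJoCo extras for some tasks; ComplexPoint-v0 is a custom environment and is omitted):

```python
import gym

for name in ["Pendulum-v0", "LunarLanderContinuous-v2", "Swimmer-v2",
             "HalfCheetah-v2", "HumanoidStandup-v2", "Humanoid-v2"]:
    env = gym.make(name)
    # print environment name, state dimension, action dimension
    print(name, env.observation_space.shape[0], env.action_space.shape[0])
```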
|
0906.1706 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 43352,
"num_imgs": 5,
"llama3_tokens_count": 12598
} | [
"content_image/0906.1706/x1.png",
"content_image/0906.1706/x2.png",
"content_image/0906.1706/x3.png",
"content_image/0906.1706/x4.png",
"content_image/0906.1706/x5.png"
] | # Stretched-Gaussian asymptotics of the truncated Lévy flights for the diffusion in nonhomogeneous media
Tomasz Srokowski
Institute of Nuclear Physics, Polish Academy of Sciences, PL – 31-342 Kraków, Poland
###### Abstract
The Lévy, jumping process, defined in terms of the jumping size distribution and the waiting time distribution, is considered. The jumping rate depends on the process value. The fractional diffusion equation, which contains the variable diffusion coefficient, is solved in the diffusion limit. That solution resolves itself to the stretched Gaussian when the order parameter \(\mu\to 2\). The truncation of the Lévy flights, in the exponential and power-law form, is introduced and the corresponding random walk process is simulated by the Monte Carlo method. The stretched Gaussian tails are found in both cases. The time which is needed to reach the limiting distribution strongly depends on the jumping rate parameter. When the cutoff function falls slowly, the tail of the distribution appears to be algebraic.
Keywords: Diffusion; Fractional equation; Truncated Lévy flights. PACS: 02.50.Ey, 05.40.Fb, 05.60.-k
## 1 Introduction
Transport in physical systems can be described in terms of a jumping process which is completely defined by two probability distributions. They determine spatial and temporal characteristics of the system and are called the jumping size, \(\lambda(x)\), and waiting time, \(w(t)\), probability distributions, respectively. The stochastic trajectory consists of infinitely fast jumps which are separated by intervals when the Brownian particle is at rest (e.g., due to traps). Usually, the distributions \(\lambda\) and \(w\) are regarded as mutually independent and processes which they generate are studied in the framework of the decoupled continuous time random walk (CTRW). If the distribution \(w(t)\) is Poissonian, the jumps take place at random, uniformly distributed times and the corresponding process is Markovian. The algebraic form of \(w(t)\), which possesses long tails, is of particular interest; it leads to long rests and then to a weak, subdiffusive transport [1]. The jumping size distribution \(\lambda\) can assume – in order to be stable – either the Gaussian form, or obey the general Lévy distribution which corresponds to processes with long tails, frequently encountered in many areas of physics, sociology, medicine [2] and many others. In the diffusion limit of small wave numbers, the master equation for the decoupled CTRW, and for the case of the Lévy distributed jumping size, resolves itself to the fractional diffusion equation with a constant diffusion coefficient.
However, many physical phenomena require taking into account that the space has some structure and that the diffusion coefficient must actually be variable. It is the case when one considers the transport in porous, inhomogeneous media and in plasmas, as well as for the Lévy flights in systems with an external potential [3]. In the Hamiltonian systems, the speed of transport depends on regular structures in the phase space [4]. Any description of the diffusion on fractals must involve the variable diffusion coefficient, and that one in the power-law form [5, 6]. Similarly, the diffusion on multifractal structures can be regarded as a superposition of the solutions which correspond to the individual fractals [7]. The power-law form of the diffusion coefficient has been also used to describe e.g. the transport of fast electrons in a hot plasma [8] and the turbulent two-particle diffusion [9]. The presence of long jumps indicates a high complexity and the existence of long-range correlations. One can expect that especially in such systems the waiting time depends on the position. The jumping process, stationary and Markovian, which takes into account that dependence, has been proposed in Ref.[10]: the distribution \(w(t)\) is Poissonian with a \(x\)-dependent jumping rate \(\nu(x)\). In this paper, we consider that process for the Lévy distributed \(\lambda(x)\) and solve the corresponding fractional equation. The solution consistently takes into account that the symmetric Lévy distribution, defined by the characteristic function \(\exp(-|k|^{\mu})\), \(0<\mu\leq 2\), exhibits two qualitatively different kinds of behaviour in its asymptotic limit: it is either algebraic, \(\sim|x|^{-\mu-1}\), for \(0<\mu<2\) or Gaussian for \(\mu=2\).
The Lévy process is characterized by very long jumps since the tails of the Lévy distribution fall slowly and the second moment is divergent. In a concrete, realistic problem, however, the available space is finite and the asymptotics differs from the Lévy tail. One can take that into account by introducing a cutoff to the stochastic equations by multiplying the jumping size distribution – in the Lévy form – by a function which falls not slower than \(1/|x|^{3}\), usually by the exponential or the algebraic function. Since the resulting probability distribution has the finite variance, in the diffusion approximation the process is equivalent to the Gaussian limit \(\mu\to 2\).
Restrictions on the jump size are necessary also for problems connected with the random transport of massive particles. Since the velocity of propagation is then finite, the jump, which takes place within a given time interval, must have a finite length. If the jump size is governed by the particle velocity, a coupling between spatial and temporal characteristics of the system emerges in the framework of CTRW. That coupled form of the CTRW is known as the Lévy walk [11, 12, 13]. It has many applications, e.g. for the turbulence [14] and the chaotic diffusion in Josephson junctions [11].
The aim of this paper is to study the limit in which the algebraic tail of the Lévy process becomes the exponential function. Utilizing the result for \(\mu\to 2\) can improve the solution of the fractional equation itself and it is suited to describe the truncated Lévy flights in the diffusion approximation. The paper is organized as follows. In Sec.II we solve the fractional equation and derive the stretched Gaussian asymptotics in the limit \(\mu\to 2\). Sec.III is devoted to the truncated Lévy flights: we present the probability distributions, obtained from simulations of random walk trajectories, for both exponential and power-law form of the cutoff. The results are summarized in Sec.IV.
## 2 The stretched-Gaussian limit of the Lévy process
We consider the random walk process defined by the waiting time probability distribution \(w(t)\) and the jump-size distribution \(\lambda(x)\). They are of the form
\[w(t)=\nu(x){\mbox{e}}^{-\nu(x)t},\] (1)
where \(\nu(x)=|x|^{-\theta}\) (\(\theta>-1\)) and
\[\lambda(x)=\sqrt{2/\pi}\int_{0}^{\infty}\exp(-K^{\mu}k^{\mu})\cos(kx)dk,\] (2)
respectively. The latter expression corresponds to the symmetric Lévy distribution. The master equation for the above process is of the form [10]
\[\frac{\partial}{\partial t}p(x,t)=-\nu(x)p(x,t)+\int\nu(x^{\prime})\lambda(|x- x^{\prime}|)p(x^{\prime},t)dx^{\prime}.\] (3)
In the diffusion limit of small wave numbers, the Eq.(3) can be reduced to the fractional equation. Indeed, the Fourier transform of jump-size distribution, \({\widetilde{\lambda}}(k)=\exp(-K^{\mu}|k|^{\mu})\), can be expanded \({\widetilde{\lambda}}(k)=1-K^{\mu}|k|^{\mu}+\dots\) and the master equation in the Fourier space becomes the following equation [15]
\[\frac{\partial{\widetilde{p}}(k,t)}{\partial t}=-K^{\mu}|k|^{\mu}{\widetilde{F }}[\nu(x)p(x,t)].\] (4)
The above approximation means that the summation over jumps is substituted by the integral, and it agrees with the exact result if the jumps are small compared to the entire trajectory. One can demonstrate, by estimating the neglected terms in the Euler-Maclaurin summation formula, that the approximation fails near the origin and for \(\mu\leq 1\) [16]. In the other cases, the solution of the fractional equation converges with time to the exact result. Therefore, in the following, we assume \(\mu>1\).
The inversion of the Fourier transforms in Eq.(4) yields the fractional equation
\[\frac{\partial p(x,t)}{\partial t}=K^{\mu}\frac{\partial^{\mu}[\nu(x)p(x,t)]}{ \partial|x|^{\mu}},\] (5)
where \(\partial^{\mu}/\partial|x|^{\mu}\) is the Riesz-Weyl derivative, defined for \(1<\mu<2\) in the following way [17]
\[\frac{\partial^{\mu}}{\partial|x|^{\mu}}f(x,t)=\frac{-1}{2\cos(\pi\mu/2)\Gamma (2-\mu)}\frac{\partial^{2}}{\partial x^{2}}\int_{-\infty}^{\infty}\frac{f(x^{ \prime},t)}{|x-x^{\prime}|^{\mu-1}}dx^{\prime}.\] (6)
Eq.(5) for the constant diffusion coefficient – often generalized to take into account non-Markovian features of the trapping mechanism in the framework of CTRW by substituting the simple time derivative by the fractional Riemann-Liouville derivative [1] – is frequently considered and solved by means of a variety of methods. The solution resolves itself to the Lévy stable distribution with the asymptotic power-law \(x\)-dependence and divergent second moment. The method of solution which is especially interesting for our considerations involves the Fox functions. The well-known result of Schneider [18] states that any Lévy distribution, both symmetric and asymmetric, can be expressed as \(H_{2,2}^{1,1}\). In order to solve the Eq.(5) for the case of the variable jumping rate, \(\nu(x)=|x|^{-\theta}\), we conjecture that the solution also belongs to the class of functions \(H_{2,2}^{1,1}\) and it has the scaling property. Then the ansatz is the following
\[p(x,t)=Na(t)H_{2,2}^{1,1}\left[a(t)|x|\left|\begin{array}[]{c}(a _{1},A_{1}),(a_{2},A_{2})\\ \\ (b_{1},B_{1}),(b_{2},B_{2})\end{array}\right.\right],\] (7)
where \(N\) is the normalization constant. The Fox function is defined in the following way [19, 20]
\[H_{pq}^{mn}\left[z\left|\begin{array}[]{c}(a_{p},A_{p})\\ \\ (b_{q},B_{q})\end{array}\right.\right]=\frac{1}{2\pi i}\int_{L}\chi(s)z^{s}ds,\] (8)
where
\[\chi(s)=\frac{\prod_{1}^{m}\Gamma(b_{j}-B_{j}s)\prod_{1}^{n}\Gamma(1-a_{j}+A_{ j}s)}{\prod_{m+1}^{q}\Gamma(1-b_{j}+B_{j}s)\prod_{n+1}^{p}\Gamma(a_{j}-A_{j}s)}.\] (9)
The contour \(L\) separates the poles belonging to the two groups of the gamma function in the Eq.(9). Evaluation of the residues leads to the well-known series expansion of the Fox function:
\[H_{pq}^{mn}\left[z\left|\begin{array}[]{c}(a_{p},A_{p})\\ \\ (b_{q},B_{q})\end{array}\right.\right]=\sum_{h=1}^{m}\sum_{\nu=0}^{\infty} \frac{\prod_{j=1,j\neq h}^{m}\Gamma(b_{j}-B_{j}\frac{b_{h}+\nu}{B_{h}})\prod_{ j=1}^{n}\Gamma(1-a_{j}+A_{j}\frac{b_{h}+\nu}{B_{h}})}{\prod_{j=m+1}^{q}\Gamma( 1-b_{j}+B_{j}\frac{b_{h}+\nu}{B_{h}})\prod_{j=n+1}^{p}\Gamma(a_{j}-A_{j}\frac{ b_{h}+\nu}{B_{h}})}\frac{(-1)^{\nu}z^{(b_{h}+\nu)/B_{h}}}{\nu!B_{h}},\] (10)
We will try to solve the fractional equation (5) by inserting the function (7). This procedure, if successful, would allow us to find conditions for the coefficients and the function \(a(t)\). The idea to assume the solution in the form (7) is motivated by the following property of the Fox function
\[z^{\sigma}H_{pq}^{mn}\left[z\left|\begin{array}[]{c}(a_{p},A_{p} )\\ \\ (b_{q},B_{q})\end{array}\right.\right]=H_{pq}^{mn}\left[z\left|\begin{array}[] {c}(a_{p}+\sigma A_{p},A_{p})\\ \\ (b_{q}+\sigma B_{q},B_{q})\end{array}\right.\right]\] (11)
which eliminates the algebraic factor by means of a simple shift of the coefficients. One can demonstrate that Eq.(5) cannot be satisfied, in general, by the function (7) for any choice of the parameters. However, as long as we are interested only in the diffusion limit of small wave numbers \(|k|\) (large \(|x|\)), the higher terms in the characteristic function expansion can be neglected. Therefore we require that the Eq.(5) should be satisfied by a function which agrees with the exact solution only up to the terms of the order \(|k|^{\mu}\) in the Fourier space. Note that this condition does not introduce any additional idealization since the Eq.(5) itself has been constructed on the same assumption: the higher terms in the \(|k|\) expansion of \(\lambda(x)\) have also been neglected.
First, we need the Fourier transform of the Fox function. Since the process is symmetric, we can utilize the formula for the cosine transform which yields also a Fox function but of the enhanced order:
\[\int_{0}^{\infty}H_{pq}^{mn}\left[x\left|\begin{array}[]{c}(a_{p} ,A_{p})\\ \\ (b_{q},B_{q})\end{array}\right.\right]\cos(kx)dx=\frac{\pi}{k}H_{q+1,p+2}^{n+1 ,m}\left[k\left|\begin{array}[]{l}(1-b_{q},B_{q}),(1,1/2)\\ \\ (1,1),(1-a_{p},A_{p}),(1,1/2)\end{array}\right.\right].\] (12)
The function \({\widetilde{p}}(k,t)\) is then of the order \(H_{3,4}^{2,1}\). The Fourier transform of the function \(p_{\theta}=x^{-\theta}p(x,t)\) can be obtained in the same way. Next, we insert the appropriate functions into the Eq.(4) and expand both sides of the equation by using the formula (10). Let us denote the expansion coefficients of the functions \({\widetilde{p}}(k,t)\) and \({\widetilde{p}_{\theta}}(k,t)\) by \(h_{\sigma,\nu}\) and \(h_{\sigma,\nu}^{(\theta)}\), respectively; \(\sigma\) assumes the values 1 and 2. Applying the Eq.(10) yields
\[{\widetilde{p}}(k,t)=h_{1,0}+h_{1,1}|k|+h_{2,0}|k|^{(1-a_{1})/A_{1}-1}+h_{2,1} |k|^{(2-a_{1})/A_{1}-1}+h_{1,2}k^{2}+\dots\] (13)
and
\[{\widetilde{p}_{\theta}}(k,t)=h_{1,0}^{(\theta)}+h_{1,1}^{(\theta)}|k|+h_{2,0} ^{(\theta)}|k|^{(1-a_{1}+\theta A_{1})/A_{1}-1}+\dots.\] (14)
After inserting the above expressions to the Eq.(4), we find some simple relations among the coefficients of the Fox function by comparison of the exponents. To get the term \(|k|^{\mu}\) on lhs, which corresponds to the term \(k^{0}\) on rhs, we need the condition \((2-a_{1})/A_{1}-1=\mu\). Moreover, we attach the third term on rhs to the first one by putting \((1-a_{1}+\theta A_{1})/A_{1}-1=0\); it is not possible to balance the third term by another one on the lhs. The above conditions determine two coefficients of the Fox function: \(a_{1}=1-(1-\theta)/(\mu+\theta)\) and \(A_{1}=1/(\mu+\theta)\). The coefficients \(h_{1,1}\) and \(h_{1,1}^{(\theta)}\) vanish identically since the gamma function in the denominators has its argument equal 0. The only remaining term can be eliminated by assuming the condition \(1-b_{2}-B_{2}(1-\theta)=0\); then \(h_{2,0}=0\) since that term contains the function \(\Gamma(1-b_{2}-B_{2}(1-\theta))\) in the denominator. Therefore, such choice of the coefficients reduces the function (13) to the two terms and it makes it identical with the probability distribution for the Lévy process in the diffusion limit. Finally, the Eq.(4) becomes a simple differential equation which determines the function \(a(t)\):
\[\dot{\xi}=K^{\mu}\frac{h_{0}^{(\theta)}}{h_{2,1}}\xi^{-\theta/\mu},\] (15)
where \(h_{0}^{(\theta)}=h_{1,0}^{(\theta)}+h_{2,0}^{(\theta)}\) and \(\xi(t)=a^{-\mu}\). The solution
\[a(t)=\left[K^{\mu}\frac{h_{0}^{(\theta)}}{h_{2,1}}\left(1+\frac{\theta}{\mu} \right)t\right]^{-1/(\mu+\theta)}\] (16)
corresponds to the initial condition \(p(x,0)=\delta(x)\). The coefficient \(h_{2,1}\) can be determined directly from Eq.(10), whereas \(h_{0}^{(\theta)}=(2\pi)^{-1}a^{-\theta}{\widetilde{p}}(0,t)=\pi^{-1}a^{-\theta }\int_{0}^{\infty}p(x,t)dx\) can be expressed in terms of the Mellin transform from the Fox function, \(\chi(s)\), and then easily evaluated.
In the limit \(\mu\to 2\), the asymptotic behaviour of the fractional equation changes qualitatively. It is no longer algebraic; the tails of the distribution, and consequently the tails of the Fox function, have to assume the exponential form for \(\mu=2\). It is possible only if the algebraic contribution from the residues vanishes. If \(n=0\), all poles are outside the contour \(L\) and \(H_{p,q}^{m,0}\) is given by the integral over a vertical straight line:
\[H_{p,q}^{m,0}=\frac{1}{2\pi i}\int_{w-i\infty}^{w+i\infty}\chi(s )z^{s}ds,\] (17)
where \(w<Re(b_{i}/B_{i})\). The above integral is usually neglected if \(n>0\) because it is small compared to the contribution from the residues.
The Fox function in the required form can be obtained from (7) by applying the reduction formula if the coefficients in the main diagonal are equal. This demand imposes an additional condition on \(b_{2}\) and \(B_{2}\) which allows us to determine these coefficients: \(b_{2}=1-(1-\theta)/(2+\theta)\) and \(B_{2}=1/(2+\theta)\). Note that the above choice of \((b_{2},B_{2})\) yields \(h_{1,2}=0\) in Eq.(13) and then the solution of Eq.(5) is exact up to the order \(k^{2}\) for any \(\mu\). The most general solution of the fractional equation in the form of the function \(H^{1,1}_{2,2}\), which involves the required conditions, is the following
\[p(x,t)=Na(t)H_{2,2}^{1,1}\left[a(t)|x|\left|\begin{array}[]{c}(1 -\frac{1-\theta}{\mu+\theta},\frac{1}{\mu+\theta}),(a_{2},A_{2})\\ \\ (b_{1},B_{1}),(1-\frac{1-\theta}{2+\theta},\frac{1}{2+\theta})\end{array} \right.\right].\] (18)
The coefficients in the main diagonal have a simple interpretation. Applying Eq.(10) to (18) reveals that \(p(x,t)\) behaves as \(|x|^{b_{1}/B_{1}}\) for \(|x|\to 0\) and then the parameters \(b_{1}\) and \(B_{1}\) are responsible for the shape of the probability distribution near the origin. On the other hand, the parameters \(a_{1}\) and \(A_{1}\) determine the asymptotic shape of the distribution. It can be demonstrated by applying the following property of the Fox function
\[H_{pq}^{mn}\left[z\left|\begin{array}[]{c}(a_{p},A_{p})\\ \\ (b_{q},B_{q})\end{array}\right.\right]=H_{pq}^{mn}\left[\frac{1}{z}\left| \begin{array}[]{c}(1-b_{q},B_{q})\\ \\ (1-a_{p},A_{p})\end{array}\right.\right]\] (19)
and by expansion according to Eq.(10): the leading term is of the form \(p(x,t)\sim|x|^{-(2-a_{1})/A_{1}}=|x|^{-1-\mu}~{}~{}~{}~{}(|x|\to\infty)\). Now it becomes clear why our method of solving the Eq.(5) did not determine the parameters \((b_{1},B_{1})\). Since we neglected higher terms in the \(k\)-expansion, the region of small \(|x|\) remained beyond the scope of the approximation. However, we can supplement that solution by referring directly to the master equation (3) which reveals a simple behaviour near the origin. That equation is satisfied by the stationary solution \(1/\nu(x)=|x|^{\theta}\), for any normalizable \(\lambda(x)\). Obviously, such \(p(x,t)\) cannot be interpreted as the probability density distribution since the normalization integral diverges at infinity, but it properly reproduces that distribution for small \(|x|\). Therefore we obtain the additional condition \(b_{1}=\theta B_{1}\) which improves the agreement of our solution with the solution of the master equation. The probability distribution for the case \(\theta=0\) can be easily found by the direct solution of the fractional equation which is exact and yields the following values of the parameters: \(b_{1}=0\), \(B_{1}=1\), \(a_{2}=1/2\), and \(A_{2}=1/2\).
In the case \(\mu=2\), the main diagonal in Eq.(18) can be eliminated and the solution of the fractional equation (5) in the limit \(\mu\to 2\) takes the form
\[p(x,t)=Na(t)H_{1,1}^{1,0}\left[a(t)|x|\left|\begin{array}[]{c}(a _{2},A_{2})\\ \\ (b_{1},B_{1})\end{array}\right.\right].\] (20)
The function \(a(t)\) is given by Eq.(16) in the following form
\[a(t)=\left[K^{2}(2+\theta)\frac{h_{0}}{h_{2}}t\right]^{-1/(2+\theta)},\] (21)
where \(h_{0}=\Gamma(b_{1}+B_{1}(1-\theta))/\Gamma(a_{2}+A_{2}(1-\theta))\) and \(h_{2}=\Gamma(b_{1}+3B_{1})/\Gamma(a_{2}+3A_{2})\) are the coefficients of the \(k\)-expansion of the functions \(|x|^{-\theta}p(x,t)\) and \(p(x,t)\), respectively. The asymptotic expression for \(p(x,t)\) can be obtained from the estimation of the integral (17) by the method of steepest descents [21]. The result reads
\[H_{1,1}^{1,0}(z)\approx{\mbox{e}}^{i\pi(\alpha^{\prime}-1/2)}E(z{\mbox{e}}^{i \pi\alpha}),\] (22)
where
\[E(z)=\frac{1}{2\pi i\alpha}\sum_{j=0}^{\infty}C_{j}(\beta\alpha^{\alpha}z)^{(1 -\alpha^{\prime}-j)/\alpha}\exp(\beta\alpha^{\alpha}z)^{1/\alpha}\] (23)
and \(\alpha=B_{1}-A_{2}\), \(\alpha^{\prime}=a_{2}-b_{1}+1/2\), \(\beta=A_{2}^{A_{2}}/B_{1}^{B_{1}}\), \(z=a|x|\). The final result appears to be a stretched Gaussian, modified by an algebraic factor and a series which converges to a constant for \(z\to\infty\):
\[H_{1,1}^{1,0}(z)\approx\frac{1}{2\pi\alpha}z^{(1-\alpha^{\prime})/\alpha}\exp( -\beta^{1/\alpha}\alpha z^{1/\alpha})\beta^{(1-\alpha^{\prime})/\alpha}\alpha^ {1-\alpha^{\prime}}\sum_{j=0}^{\infty}(-1)^{j}C_{j}\beta^{-j/\alpha}\alpha^{-j }z^{-j/\alpha}.\] (24)
The above expression has been obtained by Wyss [22] as the expansion of \(H^{3,0}_{2,3}(z)\) which satisfies a generalized diffusion equation. It is the integral equation in respect to the time variable (non-Markovian) which resolves itself to the fractional diffusion equation; that equation is commonly used to handle the subdiffusive processes in the framework of the CTRW [12, 23, 1]. The function which contains the stretched Gaussian, modified by the power-law factor, is used as the asymptotic form of the propagator for diffusion on fractals, e.g. on the Sierpiński gasket [24].
The coefficients \(C_{j}\) are defined by means of the following expression
\[\frac{\Gamma(1-a_{2}+A_{2}s)}{\Gamma(1-b_{1}+B_{1}s)}(\beta\alpha^{\alpha})^{- s}\equiv G=\sum_{j=0}^{\infty}\frac{C_{j}}{\Gamma(\alpha s+\alpha^{\prime}+j)}.\] (25)
They can be explicitly evaluated by a subtraction of the consecutive terms and by taking the limit \(s\to\infty\). More precisely, \(C_{j}\) are given by the following recurrence formula
\[C_{j}=\lim_{s\to\infty}\left[G\Gamma(\alpha s+\alpha^{\prime})-C_{0}-\frac{C_{1}}{\alpha s+\alpha^{\prime}}-\dots-\frac{C_{j-1}}{(\alpha s+\alpha^{\prime})\dots(\alpha s+\alpha^{\prime}+j-2)}\right](\alpha s+\alpha^{\prime})(\alpha s+\alpha^{\prime}+1)\dots(\alpha s+\alpha^{\prime}+j-1),\] (26)
where
\[C_{0}=\sqrt{2\pi}\alpha^{\alpha^{\prime}-1/2}A_{2}^{1/2-a_{2}}B_{1}^{b_{1}-1/2}\] (27)
has been obtained from the expansion of the gamma functions by means of the Stirling formula. The exponent of the stretched Gaussian, \(1/\alpha\), is connected with higher moments of the distribution \(p(x,t)\) and it cannot be determined in the framework of the diffusion approximation.
All moments of the distribution \(p(x,t)\) are convergent. In particular, the variance, which determines the diffusion properties of the system, is given by the expansion coefficient \(h_{2}\) in a simple way:
\[\langle x^{2}\rangle=-\frac{\partial^{2}}{\partial k^{2}}{\widetilde{p}}(0,t)= h_{2}a^{-2}\sim t^{\frac{2}{2+\theta}}.\] (28)
For \(\theta=0\) the diffusion coefficient \(D=\lim_{t\to\infty}\langle x^{2}\rangle(t)/2t\) assumes a finite value and the diffusion is normal. The case \(\theta\neq 0\) means the anomalous diffusion: either the enhanced one for \(\theta<0\), or the subdiffusion for \(\theta>0\); the diffusion coefficients are then \(\infty\) or 0, respectively. The kind of diffusion depends only on \(\theta\) and it is not sensitive on free parameters. The anomalous diffusion is frequently encountered in physical phenomena, in particular in complex and disordered systems [25], as well as in dynamical systems [4].
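As a concrete illustration of Eq. (28) (the values of \(\theta\) below are simply those used in the simulations of Sec. 3),
\[\theta=1:\ \langle x^{2}\rangle\sim t^{2/3}\ \text{(subdiffusion)},\qquad\theta=0:\ \langle x^{2}\rangle\sim t\ \text{(normal diffusion)},\qquad\theta=-\tfrac{1}{2}:\ \langle x^{2}\rangle\sim t^{4/3}\ \text{(enhanced diffusion)}.\]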
On the other hand, the fractional equation (5) for \(\mu=2\) assumes a form of the simple diffusion equation
\[\frac{\partial p(x,t)}{\partial t}=K^{2}\frac{\partial^{2}[|x|^{-\theta}p(x,t) ]}{\partial x^{2}}\] (29)
which can be solved exactly just by assuming the scaling form of the solution \(p(x,t)=a(t)f(a(t)x)\). The functions \(a(t)\) and \(f(ax)\) are derived by inserting to the Eq.(29) and by separation of the variables [26]. That procedure finally yields
\[p(x,t)=N(K^{2}t)^{-\frac{1+\theta}{2+\theta}}|x|^{\theta}\exp\left(-\frac{|x|^ {2+\theta}}{K^{2}(2+\theta)^{2}t}\right).\] (30)
Eq.(29) follows from the master equation (3) with the Gaussian \(\lambda(x)\), when one neglects all terms higher than the second one in the Kramers-Moyal expansion. That procedure is justified if jumps are small and \(\nu(x)\) is a smooth function [27]. We can expect that the solution of the master equation converges with time to Eq.(30) and this convergence is fast for small \(|\theta|\). Convergence of the tails must be slow, because the contribution from large jumps is substantial for large \(|x|\). Indeed, we will demonstrate in the following that for large \(|\theta|\), in particular for \(\theta\) close to \(-1\), a very long time is required.
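As a quick consistency check (not part of the original derivation), one can verify symbolically that the scaling form (30) indeed satisfies Eq. (29); a minimal sympy sketch, restricted to \(x>0\) (the \(x<0\) branch follows by symmetry):

```python
import sympy as sp

x, t, theta, K, N = sp.symbols('x t theta K N', positive=True)

# Eq. (30) for x > 0
p = N * (K**2 * t)**(-(1 + theta) / (2 + theta)) * x**theta \
    * sp.exp(-x**(2 + theta) / (K**2 * (2 + theta)**2 * t))

# residual of Eq. (29): dp/dt - K^2 d^2/dx^2 [ x^(-theta) p ]
residual = sp.diff(p, t) - K**2 * sp.diff(x**(-theta) * p, x, 2)
print(sp.simplify(residual / p))   # expected output: 0, i.e. (30) solves (29) for x > 0
```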
The result (30) is useful for further improvement of the solution (18). The comparison of Eqs. (24) and (30) yields the conditions for the Fox function coefficients which ensure the proper limit \(\mu\to 2\): \(\lim_{\mu\to 2}(B_{1}-A_{2})=1/(2+\theta)\) and \(\lim_{\mu\to 2}(1-\alpha^{\prime})/\alpha=\theta\). By inserting the coefficients which follow from those limiting values to the Eq.(18) and by assuming, in addition, that \(B_{1}=1\), we obtain a particular solution of the fractional equation (5) in the form
\[p(x,t)=Na(t)H_{2,2}^{1,1}\left[a(t)|x|\left|\begin{array}[]{c}(1 -\frac{1-\theta}{\mu+\theta},\frac{1}{\mu+\theta}),(\frac{1}{2}+\frac{\theta(1 +\theta)}{2+\theta},1-\frac{1}{2+\theta})\\ \\ (\theta,1),(1-\frac{1-\theta}{2+\theta},\frac{1}{2+\theta})\end{array}\right. \right].\] (31)
Therefore, the above solution of the fractional equation takes into account the Kramers-Moyal result (30) in the limit \(\mu\to 2\) and its behaviour in the origin agrees with the master equation. The limit \(\theta\to 0\) corresponds to the Lévy process.
## 3 Truncated Lévy flights
The Lévy flights are characterized by the power-law tail with the exponent smaller than 3 which implies the infinite variance. However, in physical systems, which are limited in space, the variance must always be finite. The finiteness of the available space must be taken into account in any attempt to simulate the random walk in lattices, for example, in a model of turbulence which describes a transport in a Boltzmann lattice gas [28]. One can constrain the jumping size either by taking into account that the Brownian particle actually possesses a finite velocity, i.e. to substitute the Lévy flights by the Lévy walks, or by introducing some cutoff in the jumping size probability distribution. As a result, the variance becomes finite and, in the case of the mutually independent jumps, the random walk probability distribution converges to the Gaussian, according to the central limit theorem. The truncation of the Lévy flights can be accomplished either by a simple removing of the tail [29] or by multiplying the tail by some fast falling function. An obvious choice in this context is the exponential \(\exp(-\gamma|x|)\) and the algebraic function \(|x|^{-\beta}\), where \(\beta\geq 2-\mu\). The former case was considered by Koponen in Ref.[30], where an analytic expression for the characteristic function was derived. The Lévy flights with exponential truncation serve as a model for phenomena in many fields, e.g. in turbulence [31], solar systems, economy. The distribution function of velocity and magnetic-field vector differences within solar wind can be reasonably fitted in this way [32]. In the framework of the economic research, the Lévy process is a natural model of the financial assets flow and the fractional equations are applied to characterize the dynamics of stock prices, where rare, non-Gaussian events are frequently encountered, in particular if the market exhibits high volatility. One can improve in this way the Black-Scholes model, commonly used to price the options, which is restricted to the Gaussian distributions. In order to incorporate the finiteness of the financial system to the fractional equations formalism by means of the exponential truncation of the Lévy tails, models known as CGMY and KoBoL have been devised [33]. The non-Markovian fractional equation, which results from the CTRW model with the Lévy distributed and exponentially truncated jump size, has been considered in Ref.[34]. The solutions do not exhibit a typical scaling at small time but they converge, asymptotically, to the stretched Gaussian which is predicted by the subdiffusive case of CTRW with the Gaussian step-size distribution.
<figure><img src="content_image/0906.1706/x1.png"><figcaption>Figure 1: The distributions p(x,t) obtained from trajectory simulations for the power-law cutoff (35) with μ′=5.5 for t=5, t=10, t=50 and t=200 (from top to bottom). The distribution predicted by Eq.(30) is also presented and it coincides with the curve for t=200. Other parameters are the following: θ=1, σ=0.1 and a=t^{−1/(2+θ)}.</figcaption></figure>
In Ref. [30], the following jumping size distribution, in the form of the Lévy tail multiplied by the exponential factor, has been introduced:
\[\lambda(x)=N{\mbox{e}}^{-\gamma|x|}|x|^{-\mu-1}\] (32)
and the characteristic function for the process has been evaluated. In the above formula \(\gamma\geq 0\) and \(N\) is the normalization constant. For the symmetric process, the normalized Fourier transform from \(\lambda(x)\) is given by [30, 34]
\[{\widetilde{\lambda}(k)}=\frac{4}{\pi}\mu\Gamma(\mu)\tan(\pi\mu/2)[\gamma^{\mu}-(k^{2}+\gamma^{2})^{\mu/2}\cos(\mu\arctan(k/\gamma))]+1.\] (33)
We keep only the terms of the lowest order in \(|k|\). The expansion of the expression (33) produces the following result
\[{\widetilde{\lambda}(k)}\approx 1+\frac{2}{\pi}\mu\Gamma(\mu)\gamma^{\mu-2} \tan(\pi\mu/2)(\mu^{2}-\mu)k^{2}\equiv 1-K_{E}^{2}k^{2}.\] (34)
The diffusion process is then described by Eq.(29) with \(K^{2}=K_{E}^{2}\).
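For concreteness, the coefficient \(K_{E}^{2}\) defined by the expansion (34) can be evaluated directly; a small Python helper (the parameter values below are just the ones quoted for the figures, and the overall minus sign follows from \(\tan(\pi\mu/2)<0\) for \(1<\mu<2\)):

```python
import math

def K_E_squared(mu, gamma):
    # effective diffusion coefficient from Eq. (34); valid for 1 < mu < 2
    return -(2.0 / math.pi) * mu * math.gamma(mu) * gamma**(mu - 2) \
           * math.tan(math.pi * mu / 2) * (mu**2 - mu)

print(K_E_squared(1.5, 1.0))   # e.g. the parameters used for Fig. 3
```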
<figure><img src="content_image/0906.1706/x2.png"><figcaption>Figure 2: The time T needed to reach the distribution (30), as a function ofθ, for the case of the algebraic cutoff (35) with the parameters μ′=5.5 andσ=0.1.</figcaption></figure>
On the other hand, one can apply the power-law cutoff to the Lévy tail. In Ref.[35] the fractional equation of the distributed order, with the constant diffusion coefficient, was introduced; it implies the cutoff in the form of the power-law function with the exponent \(5-\mu\). Since the equation involves both the fractional and the diffusion component, there is no simple scaling. In this paper, we assume the jumping size distribution in the form of the modified Lévy tail:
\[\lambda(x)=\left\{\begin{array}[]{ll}0&\mbox{for $|x|\leq\sigma$}\\ \mu^{\prime}\sigma^{\mu^{\prime}}|x|^{-\mu^{\prime}-1}&\mbox{for $|x|>\sigma$} ,\end{array}\right.\] (35)
where \(\mu^{\prime}=\mu+\beta>2\) to ensure the existence of the second moment. The Fourier transform is given by [36]
\[{\widetilde{\lambda}(k)}=1-\frac{\mu^{\prime}}{2(\mu^{\prime}-2)}(k\sigma)^{2} -\left[\Gamma(1-\mu^{\prime})\cos(\pi\mu^{\prime}/2)\right](|k|\sigma)^{\mu^{ \prime}}+\dots.\] (36)
In this case we have \(K^{2}=\mu^{\prime}\sigma^{2}/2(\mu^{\prime}-2)\), provided we keep only the quadratic term.
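Since the magnitude of a jump drawn from (35) has the pure power-law tail \(P(|X|>r)=(\sigma/r)^{\mu^{\prime}}\) for \(r>\sigma\), jump sizes can be sampled by inversion; a short illustrative helper (function name and parameter values are ours, chosen to match Fig. 1):

```python
import numpy as np

def sample_jump(mu_prime, sigma, rng):
    u = 1.0 - rng.uniform()                      # u in (0, 1]
    r = sigma * u**(-1.0 / mu_prime)             # |X| from inverting P(|X| > r) = (sigma/r)**mu_prime
    return r if rng.uniform() < 0.5 else -r      # symmetric sign

rng = np.random.default_rng(0)
jumps = [sample_jump(5.5, 0.1, rng) for _ in range(5)]   # mu' = 5.5, sigma = 0.1 as in Fig. 1
```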
<figure><img src="content_image/0906.1706/x3.png"><figcaption>Figure 3: The shape parameter δ of the distribution tail exp(−const|ax|^δ) as a function of time for the exponential cutoff (32). The curves correspond to the cases: θ=1, θ=0 and θ=−0.5 (from top to bottom). The other parameters: μ=1.5 and γ=1.</figcaption></figure>
In the following, we evaluate the probability density distribution \(p(x,t)\) for both forms of the Lévy flight cutoff, exponential and algebraic, by means of the random walk trajectory simulations. The waiting time is sampled from Eq.(1) and the jumping size from either (32) or (35). Fig.1 presents the power-law case for \(\theta=1\), which corresponds to the subdiffusion. The results are compared with the Kramers-Moyal limiting distribution (30), which is expected to be reached at large time. We observe a rapid convergence for small and medium \(|x|\)-values, whereas the tails reach the form (30) at about \(t=200\). Nevertheless, the shape of the tails is always stretched exponential \(\sim\exp(-const|ax|^{\delta})\) and the index \(\delta\) rises with time. The speed of convergence to the distribution (30) strongly depends on the parameter \(\theta\). In Fig.2 we present the convergence time \(T\) as a function of \(\theta\). It is relatively short only for small \(|\theta|\); for this case the kernel in the master equation (3) changes weakly with \(|x|\) and the higher terms in the Kramers-Moyal expansion soon become negligible. \(T\) rises rapidly for the negative \(\theta\): the estimation presented in the figure suggests that the dependence \(T(\theta)\) is exponential, \(\sim\exp(-16\theta)\), which yields \(T\sim 10^{8}\) when \(\theta\) approaches -1. The rapid growth of the time needed to reach convergence of the tails for the negative \(\theta\) may also be related to a specific shape of the distribution. The tails become flat for \(\theta<0\), and the asymptotics emerges first for very large \(|x|\). On the other hand, the probability density to stay in the origin, \(p(0,t)\), is then infinite: we have \(p(x,t)\sim t^{-(1+\theta)/(2+\theta)}|x|^{\theta}\)\((|x|\ll 1)\), according to Eq.(30). Note that in the case \(\theta=0\) we obtain the usual result for diffusion: \(p(0,t)\sim 1/\sqrt{t}\).
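A minimal sketch of this Monte Carlo procedure (our own illustrative code, not the simulation program used for the figures): waiting times are drawn from the Poissonian distribution (1) with \(\nu(x)=|x|^{-\theta}\), \(\theta>0\), and jump sizes from the truncated tail (35); the default parameters mirror Fig. 1.

```python
import numpy as np

def simulate_positions(t_final, theta=1.0, mu_prime=5.5, sigma=0.1,
                       n_traj=10_000, seed=1):
    rng = np.random.default_rng(seed)
    xs = np.empty(n_traj)
    for i in range(n_traj):
        x, t = 0.0, 0.0                   # p(x,0) = delta(x); for theta > 0 the first jump is immediate
        while True:
            if x != 0.0:
                t += rng.exponential(abs(x) ** theta)   # waiting time, mean 1/nu(x) = |x|**theta
            if t > t_final:
                break                                    # the walker is still at x at time t_final
            r = sigma * (1.0 - rng.uniform()) ** (-1.0 / mu_prime)   # |jump| from Eq. (35)
            x += r if rng.uniform() < 0.5 else -r
        xs[i] = x
    return xs                             # a histogram of xs estimates p(x, t_final)

positions = simulate_positions(t_final=5.0)
```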
<figure><img src="content_image/0906.1706/x4.png"><figcaption>Figure 4: The distributions p(x,t) at t=100 and t=200 obtained from trajectory simulations for the exponential cutoff with μ=1.5 and γ=2. These curves coincide and assume the numerically estimated shape ∼a|x|exp(−4.7(a|x|)^{2.6}). The distribution predicted by Eq.(30) is also presented. The parameter θ=1 and a=t^{−1/(2+θ)}.</figcaption></figure>
Application of the exponential cutoff to the Lévy tail produces similar probability density distributions to those presented in Fig.1 and they are also characterized by the stretched Gaussian tails. However, the convergence rate of the index \(\delta\) to the value \(2+\theta\), predicted by the solution (30), is smaller than for the power-law truncation because the kernel in Eq.(3) is steeper in this case. In Fig.3 we present the dependence \(\delta(t)\) for all kinds of the diffusion. Initially, the exponent rises fast but then it begins to stabilize and the curves approach the asymptotic values very slowly, especially for the superdiffusive case of the negative \(\theta\). Discrepancies from Eq.(30) are more pronounced for \(\gamma=2\). This case is presented in Fig.4: though the rescaled distribution seems to be stabilized already for \(t=100\), its shape differs from (30) also for intermediate values of \(a|x|\).
<figure><img src="content_image/0906.1706/x5.png"><figcaption>Figure 5: The distributions p(x,t), plotted in the double logarithmic scale,obtained from trajectory simulations for the power-law cutoff with μ′=2.5, θ=1and σ=0.1.</figcaption></figure>
The exponential asymptotics of the probability density distributions is guaranteed by the existence of the finite second moment. However, the time needed to reach that asymptotics can be so large [29] that it cannot be observed in any practical realizations of the process. In fact, the convergence rate to the normal distribution is governed by the third moment, according to the Berry-Esséen theorem [37] which refers to the sum of mutually independent variables, sampled from the same distribution. It states that if the third moment \(\rho\) is finite, then the deviation of a given distribution from the normal one is less than \(33\rho/(4\sigma^{3}\sqrt{n})\), where \(n\) is the number of steps and \(\sigma\) is the standard deviation. If \(\rho=\infty\), the convergence to the Gaussian may be problematic in practice. To demonstrate that the case of divergent third moment is exceptional also for our process, let us consider the power-law cutoff of the Lévy tail, Eq.(35), with the parameter \(\mu^{\prime}=2.5\). The tail of the resulting random walk distribution, presented in Fig.5, is no longer exponential but it assumes the power-law form \(|x|^{-\delta}\) which persists up to the largest numerically accessible times. The parameter \(\delta\) is constant and well determined for small time: \(\delta=3.4\). Moreover, it seems to be independent of \(\theta\). The presence of an algebraic tail is not restricted to the case \(\mu^{\prime}\leq 3\), for which the third moment is divergent; also for slightly larger values of \(\mu^{\prime}\) it is clearly visible. In the case \(\mu^{\prime}=3.1\) we find \(\delta=4.0\) at small times, whereas \(\delta=4.5\) for \(\mu^{\prime}=3.5\).
The probability distributions which possess algebraic tails with \(3<\delta<4\) are of interest in the economic research. It has been suggested that such power-laws in financial data arise when the trading behaviour is performed in an optimal way [38]. The stock market data seems to confirm that expectation. Extensive studies of the US indexes indicate the power-law form of the probability distribution of stock price changes with \(2.5<\delta<4\)[39, 40, 41]. Moreover, the distributed order equation of Sokolov et al. [35] predicts the similar value: \(\delta=3.3\).
## 4 Conclusions
We have solved the fractional equation which follows from the master equation for the jumping process in the diffusion approximation of small wave numbers. The jumping size distribution has the Lévy form. The jumping rate depends on position and then the diffusion coefficient in the fractional equation is variable. We have considered the jumping rate in the algebraic form \(\nu(x)=|x|^{-\theta}\), well suited for the diffusion on self-similar structures. The generalization to other dependences \(\nu(x)\) is possible [7]. It has been demonstrated that the equation is satisfied by the scaling formula which can be expressed in terms of the Fox function \(H^{1,1}_{2,2}\), when one neglects higher terms in the \(k\)-expansion. In the limit \(\mu\to 2\), the solution reduces itself to the Fox function of the lower order and it exhibits the stretched-Gaussian asymptotic form (24). The exponent of that function, \(1/\alpha\), is related to the higher moments and it cannot be uniquely determined in the diffusion approximation. The solution predicts all kinds of diffusion, both normal and anomalous, which are distinguished by the parameter \(\theta\). The diffusion equation can be solved exactly for the case \(\mu=2\) and that solution constitutes the Kramers-Moyal approximation of the master equation. The requirement that the fractional equation solution should agree with that result in the limits \(\mu\to 2\) and \(t\to\infty\) allows us to find additional conditions for the Fox function coefficients.
In the approximation of small wave numbers, the problem of truncated Lévy flights coincides with that of the Gaussian jump sizes. We have applied two forms of the cutoff: the exponential and the algebraic ones to study the random walk process by the Monte Carlo method. In most cases, the probability density distributions converge with time to the Kramers-Moyal result which predicts \(\alpha=1/(2+\theta)\). However, that convergence appears very slow if \(\theta\) is far away from 0, especially for \(\theta<0\). If the truncation function is steep, the distribution seems to assume the stabilized asymptotic shape which differs slightly from Eq.(30) (see Fig.4) and this conclusion may indicate a limit of applicability of the Kramers-Moyal approximation. In other cases, form (30) is actually reached after a long time. For smaller time, the scaled distribution is time-dependent, in disagreement with the ansatz (7). However, the distribution tail is always power-law and the exponent \(\delta\) depends on time very weakly (Fig.3), compared to the function \(a(t)\). One can expect that in an experimental situation, when the observation time is finite, the distributions are unable to converge to (30) and they may reveal the values of \(\delta\) smaller than \(2+\theta\).
The convergence of the distribution to (30) becomes problematic not only for very sharp cutoffs but, conversely, for the functions which possess a large, in particular infinite, third moment. Numerical simulations predict in this case the power-law asymptotics and no trace of the exponential tail could be found. The value of the exponent of power-law tail agrees with observations, e.g. for the financial data.
## References
* [1] R. Metzler, J. Klafter, Phys. Rep. 339 (2000) 1.
* [2] B. J. West, W. Deering, Phys. Rep. 246 (1994) 1.
* [3] D. Brockmann, T. Geisel, Phys. Rev. Lett. 90 (2003) 170601.
* [4] G. M. Zaslavsky, Phys. Rep. 371 (2002) 461.
* [5] B. O’Shaughnessy, I. Procaccia, Phys. Rev. Lett. 54 (1985) 455.
* [6] R. Metzler, T. F. Nonnenmacher, J. Phys. A 30 (1997) 1089.
* [7] T. Srokowski, Phys. Rev. E 78 (2008) 031135.
* [8] A. A. Vedenov, Rev. Plasma Phys. 3 (1967) 229.
* [9] H. Fujisaka, S. Grossmann, S. Thomae, Z. Naturforsch. Teil A 40 (1985) 867.
* [10] A. Kamińska, T. Srokowski, Phys. Rev. E 69 (2004) 062103.
* [11] T. Geisel, J. Nierwetberg, A. Zacherl, Phys. Rev. Lett. 54 (1985) 616.
* [12] G. Zumofen, J. Klafter, Phys. Rev. E 47 (1993) 851.
* [13] J. Klafter, A. Blumen, M. F. Shlesinger, Phys. Rev. A 35 (1987) 3081.
* [14] M. F. Shlesinger, B. J. West, J. Klafter, Phys. Rev. Lett. 58 (1987) 1100.
* [15] T. Srokowski, A. Kamińska, Phys. Rev. E 74 (2006) 021103.
* [16] E. Barkai, Chem. Phys. 284 (2002) 13.
* [17] A. V. Chechkin, V. Y. Gonchar, J. Klafter, R. Metzler, L. V. Tanatarov, J. Stat. Phys. 115 (2004) 1505.
* [18] W. R. Schneider, in: S. Albeverio, G. Casati, D. Merlini (Eds.), Stochastic Processes in Classical and Quantum Systems, Lecture Notes in Physics, Vol. 262, Springer, Berlin, 1986.
* [19] C. Fox, Trans. Am. Math. Soc. 98 (1961) 395.
* [20] A. M. Mathai, R. K. Saxena, _The H-function with Applications in Statistics and Other Disciplines_, (Wiley Eastern Ltd., New Delhi, 1978).
* [21] B. L. S. Braaksma, Compos. Math. 15 (1964) 239.
* [22] W. Wyss, J. Math. Phys. 27 (1986) 2782.
* [23] J. Klafter, G. Zumofen, J. Phys. Chem. 98 (1994) 7366.
* [24] H. E. Roman, Phys. Rev. E 51 (1995) 5422.
* [25] J.-P. Bouchaud, A. Georges, Phys. Rep. 195 (1990) 12.
* [26] Kwok Sau Fa, E. K. Lenzi, Phys. Rev. E 67 (2003) 061105.
* [27] N. G. van Kampen, _Stochastic Processes in Physics and Chemistry_ (North-Holland, Amsterdam, 1981).
* [28] F. Hayot, L. Wagner, Phys. Rev E 49 (1994) 470.
* [29] R. Mantegna, H. E. Stanley, Phys. Rev. Lett. 73 (1994) 2946.
* [30] I. Koponen, Phys. Rev. E 52 (1995) 1197.
* [31] B. Dubrulle, J.-Ph. Laval, Eur. Phys. J. B 4 (1998) 143.
* [32] R. Bruno, L. Sorriso-Valvo, V. Carbone, B. Bavassano, Europhys. Lett. 66 (2004) 146.
* [33] A. Cartea, D. del-Castillo-Negrete, Physica A 374 (2007) 749.
* [34] A. Cartea, D. del-Castillo-Negrete, Phys. Rev. E 76 (2007) 041105.
* [35] I. M. Sokolov, A. V. Chechkin, J. Klafter, Physica A 336 (2004) 245.
* [36] M. Marseguerra, A. Zoia, Physica A 377 (2007) 1.
* [37] W. Feller, _An introduction to probability theory and its applications_ (John Wiley and Sons, New York, 1966), Vol.II.
* [38] X. Gabaix, P. Gopikrishnan, V. Plerou, H. E. Stanley, Nature 423 (2003) 267.
* [39] P. Gopikrishnan, M. Meyer, L. A. N. Amaral, H. E. Stanley, Eur. Phys. J. B 3 (1998) 139.
* [40] V. Plerou, P. Gopikrishnan, L. A. Nunes Amaral, M. Meyer, H. E. Stanley, Phys. Rev. E 60 (1999) 6519.
* [41] H. E. Stanley, Physica A 318 (2003) 279.
|
1101.2395 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 49453,
"num_imgs": 10,
"llama3_tokens_count": 15057
} | [
"content_image/1101.2395/x1.png",
"content_image/1101.2395/x1.png",
"content_image/1101.2395/x1.png",
"content_image/1101.2395/x1.png",
"content_image/1101.2395/x1.png",
"content_image/1101.2395/x2.png",
"content_image/1101.2395/x3.png",
"content_image/1101.2395/x4.png",
"content_image/1101.2395/x5.png",
"content_image/1101.2395/x6.png"
] | # Domain decomposition schemes for evolutionary equations of first order with not self-adjoint operators
Petr N. Vabishchevich
P.N. Vabishchevich Keldysh Institute of Applied Mathematics, 4 Miusskaya Square, 125047 Moscow, Russia
Tel.: +7-499-9781314
Fax: +7-499-9720737
Submitted to arXiv.org January 12, 2011
###### Abstract
Domain decomposition methods are essential in solving applied problems on parallel computer systems. For boundary value problems for evolutionary equations, implicit schemes are in common use, and the problems at a new time level are solved with iterative methods of domain decomposition. An alternative approach is to construct iteration-free methods based on special schemes of splitting into subdomains. Such regionally-additive schemes are constructed using the general theory of additive operator-difference schemes. Analogues of the classical alternating direction method, locally one-dimensional schemes, factorization methods, and vector and regularized additive schemes are employed. The main results were obtained here for time-dependent problems with self-adjoint elliptic operators of second order.
The paper discusses the Cauchy problem for the first order evolutionary equations with a nonnegative not self-adjoint operator in a finite-dimensional Hilbert space. Based on the partition of unit, we have constructed the operators of decomposition which preserve nonnegativity for the individual operator terms of splitting. Unconditionally stable additive schemes of domain decomposition were constructed using the regularization principle for operator-difference schemes. Vector additive schemes were considered, too. The results of our work are illustrated by a model problem for the two-dimensional parabolic equation.
Keywords: Time-dependent problems, Domain decomposition method, Additive schemes, Operator-splitting difference schemes. PACS: 02.60.Lj, 02.70.Bf. MSC: 65N06, 65M06
## 1 Introduction
Domain decomposition methods are widely used for the numerical solution of boundary value problems for partial differential equations on parallel computers. Stationary problems [11; 12; 24; 25] are the most extensively studied in the theory of domain decomposition methods. Numerical algorithms with and without overlapping of subdomains are used here in the synchronous (sequential) and asynchronous (parallel) organization of computations.
Domain decomposition methods for unsteady problems are based on two approaches [14]. In the first approach for the approximate solution of time-dependent problems we use the standard implicit approximations in time. After that domain decomposition methods are applied to solve the discrete problem at a new time level. For the optimal iterative methods of domain decomposition the number of iterations does not depend on the discretization steps in time and space [3; 4]. The iteration-free domain decomposition algorithms are constructed for unsteady problems in the second approach. In some cases we can confine ourselves to only one iteration of the Schwarz alternating method for solving boundary value problems for the parabolic equation of second order [6; 7]. Special schemes of splitting into subdomains (regionally-additive schemes [26; 27]) are constructed, too.
Construction and convergence investigation of the regionally-additive schemes is based on the general results of the theory of splitting schemes [10; 13; 34]. The most interesting for practice is the situation where the problem operator is decomposed into a sum of three or more noncommutative not self-adjoint operators. In the case of such a multi-component splitting the stable additive splitting schemes are constructed on the basis of the concept of summarized approximation. Additively-averaged schemes of summarized approximation are interesting for using on parallel computers. In the class of splitting schemes of full approximation [19] we highlight the vector additive schemes based on the transition from the single original equation to a system of similar equations [1; 2; 31]. Additive regularized operator-difference schemes are constructed in the most simple way for multi-component splitting [18; 23], where stability is achieved via perturbations of operators of the difference scheme.
Peculiarities of domain decomposition schemes result from the selection of splitting operators. To construct the operators of decomposition for boundary value problems for partial differential equations, it is convenient to use the partition of unit for the computational domain [5; 8; 16; 26; 28; 29; 33]. In the domain decomposition method with overlapping a separate subdomain is associated with a function with values lying between zero and one. Domain decomposition methods for unsteady convection-diffusion problems are studied in works [17; 20; 30]. In the limiting case the width of subdomain overlapping is equal to the discretization step. In this case the regionally-additive schemes are interpreted as the decomposition without overlapping of subdomains but with appropriate boundary conditions of exchange. Domain decomposition methods for unsteady boundary value problems are summarized in the books [14; 19]. More recent studies are presented in the work [32]. In this case we use different constructions for the splitting operators and for operators of the grid problem at a new time level.
In this paper we construct domain decomposition schemes for the first order evolutionary equations with a general nonnegative operator in a finite Hilbert space. Decomposition operators are constructed here separately for the self-adjoint and skew-symmetric parts using the partition of unit in the appropriate spaces. Two classes of unconditionally stable regionally-additive regularized schemes are proposed. Vector additive operator-difference schemes of domain decomposition are considered. The paper is organized as follows. In section 2 there is formulated the Cauchy problem for the evolutionary equation of first order and the corresponding a priori estimate of stability is derived. Decomposition operators are constructed in Section 3. Problems with the self-adjoint operator and skew-symmetric one are considered separately. Unconditionally stable regularized additive schemes of domain decomposition are constructed in Section 4, with the additive and multiplicative perturbation of the operator of transition to the new time level. Vector splitting schemes are discussed in Section 5. In section 6 we consider a model boundary value problem for the two-dimensional parabolic equation along with the results of using different domain decomposition schemes. The main results are summarized in Section 7.
## 2 The Cauchy problem for the first order evolutionary equation
Let \(H\) be a finite-dimensional real Hilbert space of grid functions with the scalar product \((\cdot,\cdot)\) and norm \(\|\cdot\|\). Let a constant (independent of time \(t\)) grid operator \(A\) be nonnegative in \(H\):
\[A\geq 0,\quad\frac{d}{dt}A=A\frac{d}{dt}\] (1)
and \(E\) is the identity operator in \(H\). We seek the solution of the Cauchy problem
\[\frac{du}{dt}+Au=f(t),\quad 0<t\leq T,\] (2)
\[u(0)=u^{0}.\] (3)
Problem (1)–(3) results from a finite-difference approximation in space for the approximate solution of boundary value problems for partial differential equations. Similar systems of ordinary differential equations arise in using the finite-element method as well as in applying the finite-volume approach. Let us obtain the standard a priori estimate for problem (1)–(3).
Multiply equation (2) scalarly by \(u\) in \(H\). In view of (1) we obtain the inequality
\[\frac{1}{2}\frac{d}{dt}\|u\|^{2}\leq(f,u).\] (4)
Taking into account
\[(f,u)\leq\|f\|\|u\|,\]
from (4) we have
\[\frac{d}{dt}\|u\|\leq\|f\|.\]
In view of the Gronwall lemma we obtain the desired estimate
\[\|u\|\leq\|u^{0}\|+\int_{0}^{t}\|f(\theta)\|d\theta,\] (5)
which expresses the stability of the solution with respect to the initial data and right-hand side.
The emphasis of our work is on constructing approximations in time for equation (2). Two-level schemes will be considered. Let \(\tau\) be a step of the uniform grid in time and let \(y^{n}=y(t^{n}),\ t^{n}=n\tau\), \(n=0,1,...,N,\ N\tau=T\). Equation (2) is approximated by the two-level scheme with weights
\[\frac{y^{n+1}-y^{n}}{\tau}+A(\sigma y^{n+1}+(1-\sigma)y^{n})=\varphi^{n},\quad n =0,1,...,N-1,\] (6)
where, for example, \(\varphi^{n}=f(\sigma t^{n+1}+(1-\sigma)t^{n})\). It is supplemented by the initial condition
\[y^{0}=u^{0}.\] (7)
Difference scheme (6), (7) has approximation error \(\mathcal{O}(\tau^{2}+(\sigma-0.5)\tau)\).
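As an illustration only (not part of the original text), a minimal Python sketch of advancing the weighted scheme (6)–(7) in time is given below; the matrix \(A\), the right-hand side \(f\), and the step parameters in the usage example are arbitrary placeholders.

```python
import numpy as np

def weighted_scheme(A, f, u0, tau, N, sigma=0.5):
    """Advance du/dt + A u = f(t) by the two-level scheme with weights (6)-(7)."""
    n = len(u0)
    E = np.eye(n)
    M_new = E + sigma * tau * A            # operator acting on y^{n+1}
    M_old = E - (1.0 - sigma) * tau * A    # operator acting on y^{n}
    y = u0.copy()
    for k in range(N):
        t_weighted = (k + sigma) * tau     # phi^n = f(sigma t^{n+1} + (1 - sigma) t^n)
        rhs = M_old @ y + tau * f(t_weighted)
        y = np.linalg.solve(M_new, rhs)    # solve (E + sigma tau A) y^{n+1} = rhs
    return y

# toy usage with a symmetric positive semidefinite matrix A (placeholder data)
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T                                # A = A^T >= 0
u0 = rng.standard_normal(5)
y_T = weighted_scheme(A, lambda t: np.zeros(5), u0, tau=1e-2, N=100, sigma=1.0)
```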
Grid analog of (5) is the estimate at the time level
\[\|y^{n+1}\|\leq\|y^{n}\|+\tau\|\varphi^{n}\|,\quad n=0,1,...,N-1.\] (8)
Let us prove the following statement.
Theorem 2.1: _Difference scheme (1), (6), (7) is unconditionally stable at \(\sigma\geq 0.5\), and estimate (8) holds for the difference solution._
Proof: We write (6) in the form
\[y^{n+1}=Sy^{n}+\tau(E+\sigma\tau A)^{-1}\varphi^{n},\] (9)
where
\[S=(E+\sigma\tau A)^{-1}(E-(1-\sigma)\tau A)\] (10)
is the operator of transition to the new time level. From (9) we have
\[\|y^{n+1}\|\leq\|S\|\,\|y^{n}\|+\tau\|(E+\sigma\tau A)^{-1}\varphi^{n}\|.\] (11)
For the last term in the right-hand side of (11), in the class of operators (1) and under the natural condition \(\sigma\geq 0\), we have
\[\|(E+\sigma\tau A)^{-1}\varphi^{n}\|\leq\|\varphi^{n}\|.\]
We show that if \(\sigma\geq 0.5\) then for the nonnegative operators \(A\) the following estimate holds
\[\|S\|\leq 1.\] (12)
In the Hilbert real space \(H\) inequality (12) is equivalent [9] to fulfilment of the operator inequality
\[SS^{*}\leq E.\]
In view of (10) this inequality takes the form
\[(E+\sigma\tau A)^{-1}(E-(1-\sigma)\tau A)(E-(1-\sigma)\tau A^{*})(E+\sigma\tau A ^{*})^{-1}\leq E.\]
Multiplying this inequality on the left by \((E+\sigma\tau A)\) and on the right by \((E+\sigma\tau A^{*})\), we obtain
\[(E-(1-\sigma)\tau A)(E-(1-\sigma)\tau A^{*})\leq(E+\sigma\tau A)(E+\sigma\tau A ^{*}).\]
It follows that
\[\tau(A+A^{*})+(\sigma^{2}-(1-\sigma)^{2})\tau^{2}AA^{*}\geq 0.\]
This inequality holds for the nonnegative operators \(A\) with \(\sigma\geq 0.5\). In view of (12) we have from (11) required estimate (8).
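The bound (12) is also easy to probe numerically. The following sketch (added for illustration; the random test operators are not from the paper) generates nonnegative, generally not self-adjoint matrices and checks that the spectral norm of the transition operator (10) does not exceed one for \(\sigma\geq 0.5\).

```python
import numpy as np

rng = np.random.default_rng(1)

def random_nonnegative(n):
    """A = D + C with D = D^T >= 0 and C = -C^T, so that (A u, u) >= 0 for all u."""
    B = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))
    return B @ B.T + (K - K.T)

def transition_norm(A, tau, sigma):
    """Spectral norm of S = (E + sigma tau A)^{-1} (E - (1 - sigma) tau A), cf. (10)."""
    E = np.eye(A.shape[0])
    S = np.linalg.solve(E + sigma * tau * A, E - (1.0 - sigma) * tau * A)
    return np.linalg.norm(S, 2)

A = random_nonnegative(6)
for sigma in (0.5, 0.75, 1.0):
    assert transition_norm(A, tau=0.3, sigma=sigma) <= 1.0 + 1e-12   # estimate (12)
```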
## 3 Operators of decomposition
To better understand the formal structure of the domain decomposition operators, we give a typical example. We consider a model unsteady convection-diffusion problem with constant (independent of time, but depending on the point of the computational domain) coefficients of diffusion and convection. Convective transport is written in the so-called symmetric form (see, for example, [21]). Let an unknown function \(u(\mathbf{x},t)\) satisfy the following equation in a bounded domain \(\Omega\):
\[\frac{\partial u}{\partial t}+\frac{1}{2}\sum_{\alpha=1}^{m}\left(v_{\alpha}({ \bf x})\frac{\partial u}{\partial x_{\alpha}}\ +\frac{\partial}{\partial x_{ \alpha}}(v_{\alpha}({\bf x})u)\right)\]
\[-\sum_{\alpha=1}^{m}\frac{\partial}{\partial x_{\alpha}}\left(k({\bf x})\frac{ \partial u}{\partial x_{\alpha}}\right)=f({\bf x},t),\quad{\bf x}\in\Omega, \quad 0<t<T,\] (13)
where \(k(\mathbf{x})\geq\kappa>0,\ {\bf x}\in\Omega\). Supplement equation (13) with the homogeneous Dirichlet boundary conditions
\[u({\bf x},t)=0,\quad{\bf x}\in\partial\Omega,\quad 0<t<T.\] (14)
In addition, we define the initial condition
\[u({\bf x},0)=u^{0}({\bf x}),\quad{\bf x}\in\Omega.\] (15)
Let us consider a set of functions \(u({\bf x},t)\) satisfying boundary conditions (14). We write the unsteady convection-diffusion problem in the form of differential-operator equation
\[\frac{du}{dt}+{\cal A}u=f(t),\quad 0<t<T.\] (16)
We consider the Cauchy problem for evolutionary equation (16):
\[u(0)=u^{0}.\] (17)
Operators of diffusive and convective transport are treated separately, so that in (16)
\[{\cal A}={\cal C}+{\cal D}.\] (18)
For the diffusion operator we set
\[{\cal D}u=-\sum_{\alpha=1}^{m}\frac{\partial}{\partial x_{\alpha}}\left(k({\bf x })\frac{\partial u}{\partial x_{\alpha}}\right).\]
On set of functions (14) in \({\cal H}={\cal L}_{2}(\Omega)\) diffusive transport operator \({\cal D}\) is self-adjoint and positive definite:
\[{\cal D}={\cal D}^{*}\geq\kappa\delta{\cal E},\quad\delta=\delta(\Omega)>0,\] (19)
where \({\cal E}\) is the identity operator in \({\cal H}\).
Convective transport operator \({\cal C}\) is defined by the expression
\[{\cal C}u=\frac{1}{2}\sum_{\alpha=1}^{m}\left(v_{\alpha}({\bf x})\frac{ \partial u}{\partial x_{\alpha}}\ +\frac{\partial}{\partial x_{\alpha}}(v_{ \alpha}({\bf x})u)\right).\]
For any \(v_{\alpha}({\bf x})\) the operator \({\cal C}\) is skew-symmetric in \({\cal H}\):
\[{\cal C}=-{\cal C}^{*}.\] (20)
Taking into account representation (18), it follows from (19), (20) that \({\cal A}>0\) in \({\cal H}\).
Domain decomposition schemes will be constructed via the partition of unit for the computational domain \(\Omega\). Let domain \(\Omega\) consist of \(p\) separate subdomains
\[\Omega=\Omega_{1}\cup\Omega_{2}\cup...\cup\Omega_{p}.\]
Individual subdomains can overlap. With separate subdomain \(\Omega_{\alpha},\ \alpha=1,2,...,p\) we associate function \(\eta_{\alpha}(\mathbf{x}),\ \alpha=1,2,...,p\) such that
\[\eta_{\alpha}(\mathbf{x})=\left\{\begin{array}[]{cc}>0,&\mathbf{x}\in\Omega_{ \alpha},\\ 0,&\mathbf{x}\notin\Omega_{\alpha},\\ \end{array}\right.\quad\alpha=1,2,...,p,\] (21)
where
\[\sum_{\alpha=1}^{p}\eta_{\alpha}(\mathbf{x})=1,\quad\mathbf{x}\in\Omega.\] (22)
In view of (21), (22) we obtain from (18) the following representation
\[{\cal A}=\sum_{\alpha=1}^{p}{\cal A}_{\alpha},\quad{\cal A}_{\alpha}={\cal C}_ {\alpha}+{\cal D}_{\alpha},\quad\alpha=1,2,...,p,\] (23)
where
\[{\cal D}_{\alpha}u=-\sum_{\beta=1}^{m}\frac{\partial}{\partial x_{\beta}}\left(k({\bf x})\eta_{\alpha}(\mathbf{x})\frac{\partial u}{\partial x_{\beta}}\right),\]
\[{\cal C}_{\alpha}u=\frac{1}{2}\sum_{\beta=1}^{m}\left(v_{\beta}({\bf x})\eta_{\alpha}(\mathbf{x})\frac{\partial u}{\partial x_{\beta}}+\frac{\partial}{\partial x_{\beta}}(v_{\beta}({\bf x})\eta_{\alpha}(\mathbf{x})u)\right).\]
Similarly to (19), (20), we have
\[{\cal D}_{\alpha}={\cal D}^{*}_{\alpha}\geq 0,\quad{\cal C}_{\alpha}=-{\cal C} ^{*}_{\alpha},\quad\alpha=1,2,...,p.\] (24)
In view of (24) in splitting (23) the following property holds
\[{\cal A}_{\alpha}\geq 0,\quad\alpha=1,2,...,p,\] (25)
and the self-adjoint part of operator \({\cal A}\) is split into a sum of nonnegative self-adjoint operators, whereas the skew-symmetric part is split into a sum of skew-symmetric operators.
It is convenient to represent diffusive transport operator \({\cal D}\) as follows
\[{\cal D}={\cal G}^{*}{\cal G},\quad{\cal G}=k^{1/2}\mathop{\rm grad}\nolimits, \quad{\cal G}^{*}=-\mathop{\rm div}\nolimits k^{1/2},\] (26)
with \({\cal G}:{\cal H}\rightarrow\widetilde{{\cal H}}\), where \(\widetilde{{\cal H}}=({\cal L}_{2}(\Omega))^{p}\) is the corresponding Hilbert space of vector functions. From this structure, for \({\cal D}_{\alpha},\ \alpha=1,2,...,p\) we obtain
\[{\cal D}_{\alpha}={\cal G}^{*}\eta_{\alpha}{\cal G},\quad\alpha=1,2,...,p.\] (27)
Similarly, \({\cal C}_{\alpha},\ \alpha=1,2,...,p\) can be represented as
\[{\cal C}_{\alpha}=\frac{1}{2}(\eta_{\alpha}{\cal C}+{\cal C}\eta_{\alpha}), \quad\alpha=1,2,...,p.\] (28)
Representations (27), (28) for the operators of diffusive and convective transport clearly demonstrate the structure of the operators in individual subdomains in the splitting based on (21), (22) and allow us to verify that (24) holds. A similar consideration can be given for the operator of problem (2), (3).
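A simple concrete choice of the functions \(\eta_{\alpha}\) is a piecewise-linear partition of unity. The sketch below (illustrative; the overlap interval is an arbitrary choice) builds two such functions of \(x_{1}\) and checks properties (21), (22) on a set of sample points.

```python
import numpy as np

def partition_of_unity(x, a, b):
    """Two functions eta_1, eta_2 on the samples x: eta_1 = 1 to the left of the
    overlap [a, b], eta_2 = 1 to the right of it, both linear inside the overlap,
    so that eta_1 + eta_2 = 1 everywhere, cf. (21), (22)."""
    eta1 = np.clip((b - x) / (b - a), 0.0, 1.0)
    return eta1, 1.0 - eta1

x = np.linspace(0.0, 1.0, 101)
eta1, eta2 = partition_of_unity(x, a=0.4, b=0.6)
assert np.allclose(eta1 + eta2, 1.0)                    # property (22)
assert np.all(eta1 >= 0.0) and np.all(eta2 >= 0.0)      # nonnegativity in (21)
```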
Let us divide the operator \(A\) into the self-adjoint and skew-symmetric parts:
\[A=C+D,\quad C=\frac{1}{2}(A-A^{*}),\quad D=\frac{1}{2}(A+A^{*}).\] (29)
For the nonnegative operator \(D\) the following representation holds
\[D=G^{*}G,\] (30)
where \(G:H\rightarrow\widetilde{H}\). For the partition of unit of the computational domain we consider the corresponding additive representation of the identity operators \(E\) and \(\widetilde{E}\) in spaces \(H\) and \(\widetilde{H}\), respectively. Let
\[\sum_{\alpha=1}^{p}\chi_{\alpha}=E,\quad\chi_{\alpha}\geq 0,\quad\alpha=1,2,.. .,p,\] (31)
\[\sum_{\alpha=1}^{p}\widetilde{\chi}_{\alpha}=\widetilde{E},\quad\widetilde{ \chi}_{\alpha}\geq 0,\quad\alpha=1,2,...,p.\] (32)
By analogy with (23)–(25), we use the splitting
\[A=\sum_{\alpha=1}^{p}A_{\alpha},\quad A_{\alpha}\geq 0,\quad\alpha=1,2,...,p,\] (33)
where
\[A_{\alpha}=C_{\alpha}+D_{\alpha},\quad D_{\alpha}=D^{*}_{\alpha}\geq 0,\quad C _{\alpha}=-C^{*}_{\alpha},\quad\alpha=1,2,...,p.\] (34)
On the basis of (32) for the terms of the self-adjoint part of the operator \(A\) we set
\[D_{\alpha}=G^{*}\widetilde{\chi}_{\alpha}G,\quad\alpha=1,2,...,p.\] (35)
Decomposition of the skew-symmetric part is based on (31):
\[C_{\alpha}=\frac{1}{2}(\chi_{\alpha}C+C\chi_{\alpha}),\quad\alpha=1,2,...,p.\] (36)
Such an additive representation is a discrete analog of (27), (28) and is interpreted as the corresponding version of the domain decomposition.
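At the matrix level, the splitting (33)–(36) can be assembled directly. The following sketch (the test matrix and the diagonal weights are arbitrary, added only to illustrate the construction; for simplicity the same weights play the role of both \(\chi_{\alpha}\) and \(\widetilde{\chi}_{\alpha}\)) builds \(D\), \(C\), a factor \(G\) with \(D=G^{*}G\), and the subdomain operators \(A_{\alpha}\), then verifies (33) and the nonnegativity of each \(A_{\alpha}\).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 2

B = rng.standard_normal((n, n))
A = B @ B.T + (B - B.T)                    # nonnegative: symmetric part B B^T >= 0

D = 0.5 * (A + A.T)                        # self-adjoint part, cf. (29)
C = 0.5 * (A - A.T)                        # skew-symmetric part

w, V = np.linalg.eigh(D)                   # symmetric square root: D = G^T G, cf. (30)
G = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

weights = rng.uniform(0.0, 1.0, size=n)    # arbitrary diagonal partition of unity (31), (32)
chi = [np.diag(weights), np.diag(1.0 - weights)]

A_parts = []
for alpha in range(p):
    D_alpha = G.T @ chi[alpha] @ G                         # (35)
    C_alpha = 0.5 * (chi[alpha] @ C + C @ chi[alpha])      # (36)
    A_parts.append(D_alpha + C_alpha)                      # (34)

assert np.allclose(sum(A_parts), A)                        # splitting (33)
for A_alpha in A_parts:
    sym = 0.5 * (A_alpha + A_alpha.T)
    assert np.min(np.linalg.eigvalsh(sym)) >= -1e-10       # A_alpha >= 0
```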
## 4 Regularized domain decomposition schemes
For the approximate solution of the Cauchy problem for equation (2), (3) under condition (33) we apply different splitting schemes. Transition to the new time level is based on solving \(p\) separate subproblems with individual operators \(A_{\alpha},\ \alpha=1,2,...,p\). Taking into account the structure of the operators (see (34)–(36)) we can say that these splitting schemes are regionally-additive and iteration-free.
Currently, the principle of regularization of difference schemes is considered as the basic methodological principle for improving difference schemes [13]. The construction of unconditionally stable additive difference schemes [19] via the principle of regularization will be implemented in the following way.
1. For the initial problem some simple difference scheme (the producing difference scheme) is constructed. This scheme does not possess the required properties. For example, in constructing additive schemes the producing scheme is not a splitting one, or it is only conditionally stable or even absolutely unstable.
2. The difference scheme is written in the form for which the stability conditions are known.
3. Quality of the scheme (its stability) is improved via perturbations of operators of the difference scheme with preserving possibility of its computational implementation as an additive scheme.
For problem (2), (3) it is natural to choose as the producing scheme the following simple explicit scheme
\[\frac{y^{n+1}-y^{n}}{\tau}+Ay^{n}=\varphi^{n},\quad n=0,1,...,N-1,\] (37)
which is supplemented by initial conditions (7). Stability of scheme (37) is provided (see the proof of Theorem 2.1) by fulfillment of inequality
\[A+A^{*}-\tau AA^{*}\geq 0.\] (38)
Inequality (38) with \(D>0\) imposes appropriate restrictions on the time step, i.e. scheme (29), (37) is conditionally stable. Note also that if \(D=0\) then scheme (29), (37) is absolutely unstable. Taking into account splitting (33), we refer this scheme to the class of additive schemes.
To construct additive schemes, we can take the more general scheme (6), (7) as a producing one. It is unconditionally stable at \(\sigma\geq 0.5\). In this case the perturbation of the scheme operators is aimed only at obtaining additive schemes that preserve the property of unconditional stability.
Regularization of a difference scheme in order to weaken the stability restriction (construction of a splitting scheme) can be performed via some perturbation of the operator \(A\). The second possibility is related to perturbation of the operator at the difference derivative in time (for our scheme (37) it is the operator \(E\)). In constructing additive schemes, it is convenient to operate with the transition operator \(S\), rewriting producing scheme (37) as follows
\[y^{n+1}=Sy^{n}+\tau\varphi^{n},\quad n=0,1,...,N-1.\] (39)
For (37) we have
\[S=E-\tau A.\] (40)
The regularized scheme is based on perturbation of the operator \(S\) and has the following form
\[y^{n+1}=\widetilde{S}y^{n}+\tau\varphi^{n},\quad n=0,1,...,N-1.\] (41)
Let us consider general restrictions for \(\widetilde{S}\).
To preserve the first-order approximation of the producing scheme (39), (40), we subordinate the selection of \(\widetilde{S}\) to the condition
\[\widetilde{S}=E-\tau A+\mathcal{O}(\tau^{2}).\] (42)
Stability of scheme (41) in the sense that estimate (8) holds, is provided by the inequality
\[\|\widetilde{S}\|\leq 1.\] (43)
In addition, the regularized scheme must be additive, i.e. the transition to the new time level is implemented via solving the individual subproblems for operators \(A_{\alpha},\ \alpha=1,2,...,p\) in decomposition (33).
The first class of regularized splitting schemes is based on the following additive representation for the transition operator of the producing scheme
\[S=\frac{1}{p}\sum_{\alpha=1}^{p}S_{\alpha},\quad S_{\alpha}=E-p\tau A_{\alpha} ,\quad\alpha=1,2,...,p.\]
We use a similar additive representation for the transition operator of the regularized scheme
\[\widetilde{S}=\frac{1}{p}\sum_{\alpha=1}^{p}\widetilde{S}_{\alpha},\quad\alpha =1,2,...,p.\] (44)
Individual terms \(\widetilde{S}_{\alpha},\ \alpha=1,2,...,p\) are constructed via perturbations of operators \(A_{\alpha},\ \alpha=1,2,...,p\). By analogy with (10) we set
\[\widetilde{S}_{\alpha}=(E+\sigma p\tau A_{\alpha})^{-1}(E-(1-\sigma)p\tau A_{ \alpha}),\quad\alpha=1,2,...,p.\] (45)
If \(\sigma\geq 0.5\) (see proof of Theorem 2.1) we have
\[\|\widetilde{S}_{\alpha}\|\leq 1,\quad\alpha=1,2,...,p.\]
In view of (44) it provides fulfilment of stability conditions (43).
Using the representation
\[\widetilde{S}_{\alpha}=E-p\tau(E+\sigma p\tau A_{\alpha})^{-1}A_{\alpha},\quad \alpha=1,2,...,p\]
we can rewrite regularized additive scheme (41), (44), (45) as follows
\[\frac{y^{n+1}-y^{n}}{\tau}+\sum_{\alpha=1}^{p}(E+\sigma p\tau A_{\alpha})^{-1} A_{\alpha}y^{n}=\varphi^{n},\quad n=0,1,...,N-1.\] (46)
The comparison with producing scheme (33), (37) shows that the regularization is provided by perturbation of the operator \(A\). Our consideration results in the following statement.
Theorem 4.1: _Additive difference scheme (7), (41), (44), (45) is unconditionally stable at \(\sigma\geq 0.5\), and stability estimate (8) with respect to the initial data and right-hand side holds for the numerical solution._
Numerical implementation of scheme (7), (46) can be conducted as follows. Assume
\[y^{n+1}=\frac{1}{p}\sum_{\alpha=1}^{p}y_{\alpha}^{n+1},\quad\varphi^{n}=\sum_{ \alpha=1}^{p}\varphi_{\alpha}^{n}.\]
In this case we obtain
\[\frac{y_{\alpha}^{n+1}-y^{n}}{p\tau}+(E+\sigma p\tau A_{\alpha})^{-1}A_{\alpha }y^{n}=\varphi_{\alpha}^{n},\quad\alpha=1,2,...,p.\] (47)
for the individual components of the approximate solution at the new time level \(y_{\alpha}^{n+1},\ \alpha=1,2,...,p\). Scheme (47) can be rewritten as follows
\[\frac{y_{\alpha}^{n+1}-y^{n}}{p\tau}+A_{\alpha}(\sigma y_{\alpha}^{n+1}+(1-\sigma)y^{n})=(E+\sigma p\tau A_{\alpha})\varphi_{\alpha}^{n}.\]
In this form we can interpret scheme (47) as a variant of the additively-averaged scheme of component-wise splitting [19].
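A sketch of one step of the additively-averaged scheme in the form (46)–(47) is given below (illustrative; the operators \(A_{\alpha}\) are assumed to be given as matrices summing to \(A\), and \(\varphi^{n}\) is split uniformly between the subproblems). Since the \(p\) subproblems are independent, the loop over \(\alpha\) is the natural unit of parallel work.

```python
import numpy as np

def additive_regularized_step(y, A_parts, phi, tau, sigma=1.0):
    """One step of scheme (46)/(47): y^{n+1} = (1/p) sum_alpha y_alpha^{n+1}."""
    p = len(A_parts)
    E = np.eye(len(y))
    y_new = np.zeros_like(y)
    for A_alpha in A_parts:                          # independent subproblems
        phi_alpha = phi / p                          # illustrative uniform splitting of phi^n
        corr = np.linalg.solve(E + sigma * p * tau * A_alpha, A_alpha @ y)
        y_alpha = y + p * tau * (phi_alpha - corr)   # solve (47) for y_alpha^{n+1}
        y_new += y_alpha / p
    return y_new
```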
The second class of regularized splitting schemes is based on using not an additive (see (44)) but a multiplicative representation of the transition operator:
\[\widetilde{S}=\prod_{\alpha=1}^{p}\widetilde{S}_{\alpha},\quad\alpha=1,2,...,p.\] (48)
Taking into account (42), we have
\[S=\prod_{\alpha=1}^{p}S_{\alpha}+\mathcal{O}(\tau^{2}),\quad S_{\alpha}=E-\tau A _{\alpha},\quad\alpha=1,2,...,p.\]
Similarly to (45), we set
\[\widetilde{S}_{\alpha}=(E+\sigma\tau A_{\alpha})^{-1}(E-(1-\sigma)\tau A_{ \alpha}),\quad\alpha=1,2,...,p.\] (49)
Under the standard restriction \(\sigma\geq 0.5\) regularized scheme (41), (48), (49) is stable.
Theorem 4.2: _Additive difference scheme (7), (41), (48), (49) is unconditionally stable at \(\sigma\geq 0.5\), and estimate (8) of stability with respect to the initial data and right-hand side is valid for the difference solution._
We present now some possible implementation of the constructed regularized scheme. Let us introduce auxiliary quantities \(y^{n+\alpha/p},\ \alpha=1,2,...,p\) and taking into account (41), (48), we define them from the equations
\[y^{n+\alpha/p}=\widetilde{S}_{\alpha}y^{n+(\alpha-1)/p},\quad\alpha=1,2,...,p-1,\]
\[y^{n+1}=\widetilde{S}_{p}y^{n+(p-1)/p}+\tau\varphi^{n}.\] (50)
Similar to (47) we obtain from (50)
\[\frac{y^{n+\alpha/p}-y^{n+(\alpha-1)/p}}{\tau}+(E+\sigma\tau A_{\alpha})^{-1}A _{\alpha}y^{n+(\alpha-1)/p}=\varphi_{\alpha}^{n},\] (51)
where
\[\varphi_{\alpha}^{n}=\left\{\begin{array}[]{ll}0,&\alpha=1,2,...,p-1,\\ \varphi^{n},&\alpha=p.\\ \end{array}\right.\]
Rewrite scheme (51) as follows
\[\frac{y^{n+\alpha/p}-y^{n+(\alpha-1)/p}}{\tau}+A_{\alpha}(\sigma y^{n+\alpha/p }+(1-\sigma)y^{n+(\alpha-1)/p})=\widetilde{\varphi}_{\alpha}^{n},\] (52)
where
\[\widetilde{\varphi}_{\alpha}^{n}=(E+\sigma\tau A_{\alpha})\varphi_{\alpha}^{n} ,\quad\alpha=1,2,...,p.\]
Scheme (52) is a special variant of the standard component-wise splitting scheme [10; 13; 34]. But unlike these schemes of summarized approximation, we have constructed here regularized schemes of full approximation. Regularized schemes (41), (44), (45), constructed using additive representation (44) for the transition operator, are more suitable for parallel computations compared with regularized schemes (41), (48), (49), which are based on multiplicative representation (48).
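For comparison, one step of the multiplicative variant in the form (50)–(51) can be sketched as follows (again with placeholder dense operators); the subproblems are now swept through sequentially, and the source term enters only on the last fractional step.

```python
import numpy as np

def multiplicative_regularized_step(y, A_parts, phi, tau, sigma=1.0):
    """One step of scheme (41), (48), (49) written in the fractional-step form (50)-(51)."""
    E = np.eye(len(y))
    p = len(A_parts)
    for alpha, A_alpha in enumerate(A_parts):        # sequential sweep over subdomains
        y = y - tau * np.linalg.solve(E + sigma * tau * A_alpha, A_alpha @ y)
        if alpha == p - 1:                           # phi^n enters on the last step, cf. (50)
            y = y + tau * phi
    return y
```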
## 5 Vector schemes of domain decomposition
Difference schemes for unsteady problems can often be treated as appropriate iterative methods for the approximate solution of stationary problems. The vector additive schemes [1; 31] provide great opportunities in this direction.
Instead of the single unknown \(u(t)\) we consider \(p\) unknowns \(u_{\alpha},\ \alpha=1,2,...,p\), which are determined from the system
\[\frac{du_{\alpha}}{dt}+\sum_{\beta=1}^{p}A_{\beta}u_{\beta}=f(t),\quad\alpha=1 ,2,...,p,\quad 0<t\leq T.\] (53)
The system of equations (53) is supplemented with the initial conditions
\[u_{\alpha}(0)=u^{0},\quad\alpha=1,2,...,p,\] (54)
which follow from (2). Obviously, each function is a solution of problem (2), (3), (33). The approximate solution of (2), (3), (33) will be constructed on the basis of one or another difference scheme for vector problem (53), (54).
To solve problem (53), (54), we use the following two-level scheme
\[\frac{y_{\alpha}^{n+1}-y_{\alpha}^{n}}{\tau}+\sum_{\beta=1}^{\alpha}A_{\beta}y _{\beta}^{n+1}+\sum_{\beta=\alpha+1}^{p}A_{\beta}y_{\beta}^{n}=\varphi^{n},\]
\[\quad\alpha=1,2,...,p,\quad n=0,1,...,N-1.\] (55)
For this difference scheme we use the initial conditions
\[y_{\alpha}(0)=u^{0},\quad\alpha=1,2,...,p.\] (56)
Numerical implementation of this scheme is based on the successive inversion of operators \(E+\tau A_{\alpha},\ \alpha=1,2,...,p\).
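The triangular structure of (55) (the \(\alpha\)-th equation involves the already-updated components \(y_{\beta}^{n+1}\), \(\beta\leq\alpha\)) leads to the following sketch of one step; the operators \(A_{\alpha}\) and the right-hand side are assumed given, and the example at the end is a placeholder.

```python
import numpy as np

def vector_additive_step(Y, A_parts, phi, tau):
    """One step of the vector scheme (55); Y is the list [y_1^n, ..., y_p^n]."""
    p = len(A_parts)
    E = np.eye(len(Y[0]))
    Y_new = []
    for alpha in range(p):
        rhs = Y[alpha] + tau * phi
        for beta in range(alpha):                    # new-level components, beta < alpha
            rhs = rhs - tau * (A_parts[beta] @ Y_new[beta])
        for beta in range(alpha + 1, p):             # old-level components, beta > alpha
            rhs = rhs - tau * (A_parts[beta] @ Y[beta])
        Y_new.append(np.linalg.solve(E + tau * A_parts[alpha], rhs))
    return Y_new

# initial condition (56): all components start from the same vector u^0,
# e.g. Y = [u0.copy() for _ in range(p)]
```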
Theorem 5.1: _Vector additive difference scheme (33), (55), (56) is unconditionally stable, and for the components of the difference solution the following stability estimate with respect to the initial data and right-hand side_
\[\|y_{\alpha}^{n+1}\|\leq\|y_{\alpha}^{n}\|+\tau\|\varphi^{0}-Au^{0}\|+\tau\sum _{k=1}^{n}\tau\left\|\frac{\varphi^{k}-\varphi^{k-1}}{\tau}\right\|,\]
\[\alpha=1,2,...,p,\quad n=0,1,...,N-1,\] (57)
_is valid._
Proof: To study vector scheme (55), (56), it is convenient to use the approach from the work [22]. Subtracting the \(\alpha\)-th equation of system (55) from the \((\alpha+1)\)-th one, we get
\[(E+\tau A_{\alpha+1})\frac{y_{\alpha+1}^{n+1}-y_{\alpha+1}^{n}}{\tau}=\frac{y_ {\alpha}^{n+1}-y_{\alpha}^{n}}{\tau},\quad\alpha=1,2,...,p-1.\] (58)
Similarly, considering the equations for \(y_{1}^{n+1}\) and \(y_{p}^{n}\), we obtain
\[(E+\tau A_{1})\frac{y_{1}^{n+1}-y_{1}^{n}}{\tau}=\frac{y_{p}^{n}-y_{p}^{n-1}}{ \tau}+\tau\frac{\varphi^{n}-\varphi^{n-1}}{\tau}.\] (59)
Taking into account nonnegativity of operators \(A_{\alpha},\ \alpha=1,2,...,p\), from (58) we obtain
\[\left\|\frac{y_{\alpha+1}^{n+1}-y_{\alpha+1}^{n}}{\tau}\right\|\leq\left\| \frac{y_{\alpha}^{n+1}-y_{\alpha}^{n}}{\tau}\right\|,\quad\alpha=1,2,...,p-1.\] (60)
Similarly, from (59) we have
\[\left\|\frac{y_{1}^{n+1}-y_{1}^{n}}{\tau}\right\|\leq\left\|\frac{y_{p}^{n}-y_ {p}^{n-1}}{\tau}\right\|+\tau\left\|\frac{\varphi^{n}-\varphi^{n-1}}{\tau} \right\|.\] (61)
From (60), (61), we derive at each time level the following estimate
\[\left\|\frac{y_{\alpha}^{n+1}-y_{\alpha}^{n}}{\tau}\right\|\leq\left\|\frac{y_ {\alpha}^{n}-y_{\alpha}^{n-1}}{\tau}\right\|+\tau\left\|\frac{\varphi^{n}- \varphi^{n-1}}{\tau}\right\|,\]
\[\alpha=1,2,...,p,\quad n=1,2,...,N-1.\] (62)
From (62) we get
\[\left\|\frac{y_{\alpha}^{n+1}-y_{\alpha}^{n}}{\tau}\right\|\leq\left\|\frac{y_ {\alpha}^{1}-y_{\alpha}^{0}}{\tau}\right\|+\sum_{k=1}^{n}\tau\left\|\frac{ \varphi^{k}-\varphi^{k-1}}{\tau}\right\|,\]
\[\alpha=1,2,...,p,\quad n=1,2,...,N-1.\] (63)
From (55) with \(\alpha=1\), taking into account splitting (33) and initial conditions (56), we obtain
\[\left\|\frac{y_{1}^{1}-y_{1}^{0}}{\tau}\right\|\leq\|\varphi^{0}-Au^{0}\|.\]
In view of (60) we can rewrite inequality (63) as follows
\[\left\|\frac{y_{\alpha}^{n+1}-y_{\alpha}^{n}}{\tau}\right\|\leq\|\varphi^{0}- Au^{0}\|+\sum_{k=1}^{n}\tau\left\|\frac{\varphi^{k}-\varphi^{k-1}}{\tau}\right\|,\]
\[\alpha=1,2,...,p,\quad n=1,2,...,N-1.\] (64)
Taking into account the obvious inequality
\[\|y_{\alpha}^{n+1}\|\leq\|y_{\alpha}^{n}\|+\tau\left\|\frac{y_{\alpha}^{n+1}-y _{\alpha}^{n}}{\tau}\right\|,\quad\alpha=1,2,...,p,\]
we obtain from (64) required estimate (57).
We emphasize that the above stability estimates (57) hold for each individual component \(y_{\alpha}^{n+1},\ \alpha=1,2,...,p\). Each of them or their linear combination
can be treated as an approximate solution of our problem (2), (3), (33) at time moment \(t=t^{n+1}\).
## 6 Model problem
To illustrate the possibilities of the domain decomposition schemes constructed here, let us consider the simplest boundary value problem for the parabolic equation. We consider the problem in a rectangle
\[\Omega=\{\ \mathbf{x}\ |\ \mathbf{x}=(x_{1},x_{2}),\ 0<x_{\alpha}<l_{\alpha}, \ \alpha=1,2\}.\]
In \(\Omega\) the following boundary value problem
\[\frac{\partial u}{\partial t}=\sum_{\alpha=1}^{2}\frac{\partial^{2}u}{\partial x ^{2}_{\alpha}},\quad{\bf x}\in\Omega,\quad 0<t<T,\] (65)
\[u({\bf x},t)=0,\quad{\bf x}\in\partial\Omega,\quad 0<t<T,\] (66)
\[u({\bf x},0)=u^{0}({\bf x}),\quad{\bf x}\in\Omega\] (67)
is solved.
The approximate solution is sought at the nodes of a uniform rectangular grid in \(\Omega\):
\[\bar{\omega}=\{\mathbf{x}\ |\ \mathbf{x}=(x_{1},x_{2}),\quad x_{\alpha}=i_{ \alpha}h_{\alpha},\quad i_{\alpha}=0,1,...,N_{\alpha},\quad N_{\alpha}h_{ \alpha}=l_{\alpha}\}\]
and let \(\omega\) be the set of internal nodes (\(\bar{\omega}=\omega\cup\partial\omega\)). For grid functions \(y(\mathbf{x})=0,\ \mathbf{x}\in\partial\omega\) we define the Hilbert space \(H=L_{2}({\omega})\) with the scalar product and norm
\[(y,w)\equiv\sum_{{\bf x}\in\omega}y({\bf x})w({\bf x})h_{1}h_{2},\quad\|y\| \equiv(y,y)^{1/2}.\]
Approximating problem (65), (66) in space, we obtain the differential-difference equation
\[\frac{dy}{dt}+Ay=0,\quad\mathbf{x}\in\omega,\quad 0<t<T,\] (68)
where
\[Ay=-\frac{1}{h_{1}^{2}}(y(x_{1}+h_{1},x_{2})-2y(x_{1},x_{2})+y(x_{1}-h_{1},x_{2}))\]
\[-\frac{1}{h_{2}^{2}}(y(x_{1},x_{2}+h_{2})-2y(x_{1},x_{2})+y(x_{1},x_{2}-h_{2})),\quad\mathbf{x}\in\omega.\] (69)
In the space \(H\) the operator \(A\) is self-adjoint and positive definite [13; 15]:
\[A=A^{*}\geq(\delta_{1}+\delta_{2})E,\quad\delta_{\alpha}=\frac{4}{h^{2}_{ \alpha}}\sin^{2}\frac{\pi h_{\alpha}}{2l_{\alpha}},\quad\alpha=1,2.\] (70)
Taking into account (67), we supplement equation (68) with the initial condition
\[y({\bf x},0)=u^{0}({\bf x}),\quad{\bf x}\in\omega.\] (71)
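For reference, the grid operator (69) can be assembled as a matrix from one-dimensional second differences via Kronecker products; the sketch below (illustrative, with the grid sizes used later as placeholders) also checks the properties (70).

```python
import numpy as np

def laplacian_2d(N1, N2, l1=1.0, l2=1.0):
    """Matrix of the operator (69) on interior grid nodes with the homogeneous
    Dirichlet conditions (66); unknowns are ordered with the x2 index fastest."""
    h1, h2 = l1 / N1, l2 / N2

    def second_difference(m, h):
        # 1D operator -(y_{i+1} - 2 y_i + y_{i-1}) / h^2 on the m - 1 interior nodes
        T = 2.0 * np.eye(m - 1) - np.eye(m - 1, k=1) - np.eye(m - 1, k=-1)
        return T / h**2

    A1, A2 = second_difference(N1, h1), second_difference(N2, h2)
    I1, I2 = np.eye(N1 - 1), np.eye(N2 - 1)
    return np.kron(A1, I2) + np.kron(I1, A2)

A = laplacian_2d(32, 32)
assert np.allclose(A, A.T)                       # A = A^*, cf. (70)
assert np.min(np.linalg.eigvalsh(A)) > 0.0       # positive definiteness, cf. (70)
```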
For simplicity, operators of domain decomposition in the investigated problem (68)–(71) will be constructed without the explicit definition of the operators \(G\) and \(G^{*}\) as well as the space \(\widetilde{H}\), focusing on decomposition (21), (22). We set
\[A_{\alpha}y=-\frac{1}{h_{1}^{2}}\eta_{\alpha}(x_{1}+0.5h_{1},x_{2})(y(x_{1}+h_ {1},x_{2})-y(x_{1},x_{2}))\]
\[+\frac{1}{h_{1}^{2}}\eta_{\alpha}(x_{1}-0.5h_{1},x_{2})(y(x_{1},x_{2})-y(x_{1} -h_{1},x_{2}))\]
\[-\frac{1}{h_{2}^{2}}\eta_{\alpha}(x_{1},x_{2}+0.5h_{2})(y(x_{1},x_{2}+h_{2})-y (x_{1},x_{2}))\]
\[+\frac{1}{h_{2}^{2}}\eta_{\alpha}(x_{1},x_{2}-0.5h_{2})(y(x_{1},x_{2})-y(x_{1} ,x_{2}-h_{2})),\quad\alpha=1,2,...,p.\] (72)
In view of (21), (22) we have
\[A=\sum_{\alpha=1}^{p}A_{\alpha},\quad A_{\alpha}=A^{*}_{\alpha},\quad\alpha=1, 2,...,p.\] (73)
Thus, we consider the class of additive schemes (33).
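The subdomain operators (72) for a decomposition in the \(x_{1}\) direction can likewise be assembled in flux form \(A_{\alpha}=G^{*}\eta_{\alpha}G\); a sketch is given below (illustrative only; the functions \(\eta_{\alpha}\) and grid sizes are arbitrary choices). The weights are sampled at the half-integer \(x_{1}\) nodes for the \(x_{1}\)-fluxes and at the integer \(x_{1}\) nodes for the \(x_{2}\)-fluxes, as in (72).

```python
import numpy as np

def forward_difference(m, h):
    """Map interior values y_1..y_{m-1} (with y_0 = y_m = 0) to the m face
    differences (y_{i+1} - y_i)/h; then G^T G is the 1D part of (69)."""
    G = np.zeros((m, m - 1))
    for i in range(m):
        if i < m - 1:
            G[i, i] = 1.0 / h
        if i > 0:
            G[i, i - 1] = -1.0 / h
    return G

def decomposition_operators(N1, N2, eta_functions, l1=1.0, l2=1.0):
    """Matrices A_alpha of (72) for eta_alpha depending on x1 only; the callables
    in eta_functions must sum to one pointwise, cf. (21), (22)."""
    h1, h2 = l1 / N1, l2 / N2
    G1, G2 = forward_difference(N1, h1), forward_difference(N2, h2)
    I1, I2 = np.eye(N1 - 1), np.eye(N2 - 1)
    Gx = np.kron(G1, I2)                       # fluxes across x1-faces (x2 index fastest)
    Gy = np.kron(I1, G2)                       # fluxes across x2-faces
    x1_faces = (np.arange(N1) + 0.5) * h1      # half-integer nodes x1 = (i + 1/2) h1
    x1_nodes = np.arange(1, N1) * h1           # interior integer nodes
    ops = []
    for eta in eta_functions:
        wx = np.kron(eta(x1_faces), np.ones(N2 - 1))
        wy = np.kron(eta(x1_nodes), np.ones(N2))
        ops.append(Gx.T @ (wx[:, None] * Gx) + Gy.T @ (wy[:, None] * Gy))
    return ops

eta1 = lambda x1: np.clip((0.6 - x1) / 0.2, 0.0, 1.0)   # overlap on [0.4, 0.6]
eta2 = lambda x1: 1.0 - eta1(x1)
A1_op, A2_op = decomposition_operators(16, 16, [eta1, eta2])
assert np.allclose(A1_op, A1_op.T) and np.allclose(A2_op, A2_op.T)   # cf. (73)
```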
Numerical calculations for problem (65)–(67) are performed in the unit square (\(l_{1}=l_{2}=1\)) where the solution has the form
\[u(\mathbf{x},t)=\sin(n_{1}\pi x_{1})\sin(n_{2}\pi x_{2})\exp(-\pi^{2}(n_{1}^{2 }+n_{2}^{2})t)\] (74)
for natural numbers \(n_{1}\) and \(n_{2}\). For this solution we set the corresponding initial condition (67). Decomposition is performed with respect to one of the two variables into four subdomains (see Fig. 1) with overlapping. Disconnected subdomains can be considered as a single subdomain, and the decomposition in Fig. 1 can be treated as a decomposition into two subdomains described via two functions: \(\eta_{\alpha}=\eta_{\alpha}(x_{1}),\ \alpha=1,2\).
For problems of type (65)–(67) two cases of domain decomposition are highlighted: decomposition with and without overlapping of subdomains. Methods without overlapping of subdomains are connected with the explicit formulation of certain conditions at the common boundaries. In our case a special problem at the interfaces is not formulated, but for algorithms without overlapping we can derive the corresponding exchange boundary conditions.
For the domain decomposition methods the fundamental issue is exchange of calculation data between different subdomains. Standard explicit schemes can be used. In this case the domain decomposition can be associated with separate subsets of grid nodes: \(\omega_{\alpha},\ \alpha=1,2\), where \(\omega=\omega_{1}\cup\omega_{2}\). In the case of (65)–(67) (the five-point stencil in space), the transition to the new time level via the explicit scheme for finding the approximate solution on grid \(\omega_{\alpha},\ \alpha=1,2\) is performed using the solution values at nodes adjacent to the interface. We need to transfer data of size \(\sim\partial\omega_{\alpha},\ \alpha=1,2\). For the approximate solution of problem (68)–(71) we can consider two possibilities with minimum overlapping of subdomains. The first employs the domain decomposition with interfaces at integer nodes: the boundary nodes belong to several subdomains (two in our case of decomposition with respect to one variable). The second possibility is realized when the boundary of subdomains passes through the half-integer nodes of the corresponding variable.
The variant of domain decomposition with boundaries through integer nodes is shown in Fig. 2. Assume that the decomposition is carried out in spatial variable \(x_{1}\), i.e. \(\theta=x_{1}\). The boundary of subdomains here passes through the node \(\theta=\theta_{i}\). Thus, for this decomposition operators (72) take the form
\[A_{1}y=\frac{1}{h_{1}^{2}}(y(x_{1},x_{2})-y(x_{1}-h_{1},x_{2}))\]
\[-\frac{1}{2h_{2}^{2}}(y(x_{1},x_{2}+h_{2})-2y(x_{1},x_{2})-y(x_{1},x_{2}-h_{2} )),\]
\[A_{2}y=-\frac{1}{h_{1}^{2}}(y(x_{1}+h_{1},x_{2})-y(x_{1},x_{2}))\]
\[-\frac{1}{2h_{2}^{2}}(y(x_{1},x_{2}+h_{2})-2y(x_{1},x_{2})-y(x_{1},x_{2}-h_{2} )),\quad x_{1}=\theta_{i}.\]
This decomposition can be associated with using the Neumann boundary conditions as the exchange boundary conditions. The relationship between the individual subdomains is minimal and requires exchange of data at \(\theta=\theta_{i}\). This case can be identified by the operators of decomposition (32) as follows:
\[R(\widetilde{\chi}_{\alpha})=[0,1],\quad\alpha=1,2,...,p.\] (75)
The values of \(\eta_{\alpha}(x_{1}\pm 0.5h_{1},x_{2})\) and \(\eta_{\alpha}(x_{1},x_{2}\pm 0.5h_{2}),\ \alpha=1,2,\) entering (72) for decomposition (75) are equal to 0 or 1.
The second possibility, which is associated with decomposition through half-integer nodes, is depicted in Fig. 3. In this case instead of (75) we have
\[R(\widetilde{\chi}_{\alpha})=[0,1/2,1],\quad\alpha=1,2,...,p.\] (76)
At node \(\theta=\theta_{i}\) we use the difference approximation with the flux reduced by half. For the decomposition in the variable \(x_{1}\) the operators of decomposition (72) take the form
\[A_{1}y=\frac{1}{2h_{1}^{2}}(y(x_{1},x_{2})-y(x_{1}-h_{1},x_{2}))\]
\[-\frac{1}{4h_{2}^{2}}(y(x_{1},x_{2}+h_{2})-2y(x_{1},x_{2})-y(x_{1},x_{2}-h_{2} )),\]
\[A_{2}y=-\frac{1}{h_{1}^{2}}(y(x_{1}+h_{1},x_{2})-y(x_{1},x_{2}))+\frac{1}{2h_{ 1}^{2}}(y(x_{1},x_{2})-y(x_{1}-h_{1},x_{2}))\]
\[-\frac{3}{4h_{2}^{2}}(y(x_{1},x_{2}+h_{2})-2y(x_{1},x_{2})-y(x_{1},x_{2}-h_{2} )),\quad x_{1}=\theta_{i}.\]
For calculations in the domain \(\Omega_{1}\) (see Fig. 3) we employ data from the domain \(\Omega_{2}\) adjacent to the interface, at node \(\theta=\theta_{i}\). Thus, for this domain decomposition exchanges are minimal and coincide with the exchanges in the explicit scheme.
The considered variants of decomposition (75), (76) correspond to the minimum overlapping of subdomains. At the discrete level the width of overlapping is governed by the mesh size (\(h\) and \(2h\), respectively). Similar variants can be constructed for a higher overlapping of subdomains. For the decomposition presented in Fig. 4 we have
\[R(\widetilde{\chi}_{\alpha})=[0,1/3,2/3,1],\quad\alpha=1,2,...,p.\] (77)
Obviously, in this case we have a greater volume of data exchange, but at the same time the transition from one domain to another is much smoother. The latter allows us to expect a higher accuracy of the approximate solution.
Consider now the results of the approximate solution of problem (65)–(67), which has the exact solution (74). Let \(n_{1}=2,\ n_{2}=1\), \(T=0.01\), and let the grid be square, \(N_{1}=N_{2}\). Calculations have been performed using regularized schemes with the additive (scheme (7), (41), (44), (45)) and multiplicative (scheme (7), (41), (48), (49)) perturbation of the transition operator at \(\sigma=1\), as well as vector additive scheme (33), (55), (56). The results are compared with the difference solution obtained via implicit scheme (1), (6), (7) at \(\sigma=1\). The error of the approximate solution was estimated via the value \(\varepsilon(t^{n})=\|y^{n}(\mathbf{x})-u(\mathbf{x},t^{n})\|\) at a particular time step.
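With the exact solution (74) available, the error \(\varepsilon(t^{n})\) is straightforward to evaluate; a sketch (assuming the approximate solution is stored as an \((N_{1}-1)\times(N_{2}-1)\) array of interior values) is given below.

```python
import numpy as np

def exact_solution(N1, N2, t, n1=2, n2=1, l1=1.0, l2=1.0):
    """Exact solution (74) sampled at the interior grid nodes."""
    x1 = np.arange(1, N1) * (l1 / N1)
    x2 = np.arange(1, N2) * (l2 / N2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    decay = np.exp(-np.pi**2 * (n1**2 + n2**2) * t)
    return np.sin(n1 * np.pi * X1) * np.sin(n2 * np.pi * X2) * decay

def grid_error(y, N1, N2, t, l1=1.0, l2=1.0):
    """eps(t^n) = ||y^n - u(., t^n)|| in the grid L2 norm introduced above."""
    h1, h2 = l1 / N1, l2 / N2
    diff = y - exact_solution(N1, N2, t, l1=l1, l2=l2)
    return np.sqrt(np.sum(diff**2) * h1 * h2)
```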
<figure><img src="content_image/1101.2395/x1.png"><figcaption>Figure 5: Error at N1=N2=32 and N=10</figcaption></figure>
Considering decomposition (75) (the width of overlapping is \(h\)) with the space grid \(N_{1}=N_{2}=32\) and time grid \(N=10\) (\(\tau=0.001\)), we can compare the error norms of the difference solution obtained using different schemes (see Fig. 5). Figures 6–8 show the local error at the final time moment. The error is localized in the area of overlapping, and it is much lower for the vector scheme of decomposition compared with the additive and multiplicative variants of regularized additive schemes.
<figure><img src="content_image/1101.2395/x2.png"><figcaption>Figure 6: Error of scheme (7), (41), (48), (49)</figcaption></figure>
<figure><img src="content_image/1101.2395/x3.png"><figcaption>Figure 7: Error of scheme (7), (41), (44), (45)</figcaption></figure>
<figure><img src="content_image/1101.2395/x4.png"><figcaption>Figure 8: Error of scheme (33), (55), (56)</figcaption></figure>
<figure><img src="content_image/1101.2395/x5.png"><figcaption>Figure 9: Error at N1=N2=64 and N=10</figcaption></figure>
In contrast to the implicit scheme, for the domain decomposition schemes the error of the approximate solution grows when the spatial grid is refined (Fig. 9). In this case the width of overlapping is reduced by half.
<figure><img src="content_image/1101.2395/x6.png"><figcaption>Figure 10: The error for N1=N2=32, N=10 and decomposition R=[0,1/3,2/3,1]</figcaption></figure>
The dependence of the results on the width of overlapping is shown in Fig. 10. It is easy to see that decomposition (77) yields an approximate solution of essentially higher accuracy compared with decomposition (75) (compare Fig. 5 with Fig. 10).
## 7 Conclusions
1. In this paper we have constructed the operators of domain decomposition for solving evolutionary problems. Splitting of the general not self-adjoint nonnegative finite-dimensional operator is performed separately for its self-adjoint and skew-symmetric parts. This preserves the property of nonnegativity for operator terms associated with individual subdomains.
2. Unconditionally stable regularized additive schemes are constructed for the Cauchy problem for evolutionary equations of first order, based on splitting the problem operator into a sum of not self-adjoint nonnegative operators. Regularization is based on the principle of regularization for operator-difference schemes with perturbation of the transition operator of the explicit scheme. Both additive and multiplicative splittings are considered. The relationship of such regularized schemes with additive schemes of summarized approximation (additively-averaged schemes as well as standard component-wise splitting ones) is highlighted.
3. Vector additive schemes of full approximation are selected among the splitting schemes for evolutionary equations. They are based on the transition to a system of similar problems with a special component-wise organization for searching the approximate solution at the new time level.
4. The numerical solution of the boundary value problem for the parabolic equation in a rectangle was carried out. The calculations allow us to compare various schemes of domain decomposition and to show how the accuracy of the approximate solution depends on the width of overlapping. The vector additive scheme of domain decomposition demonstrates the best results in terms of accuracy.
## References
* (1) Abrashin, V.: A variant of the method of variable directions for the solution of multi- dimensional problems of mathematical-physics. I. Differ. Equations **26**(2), 243–250 (1990)
* (2) Abrashin, V., Vabishchevich, P.: Vector Additive Schemes for Second-Order Evolution Equations. Differential Equations **34**(12), 1673–1681 (1998)
* (3) Cai, X.C.: Additive Schwarz algorithms for parabolic convection-diffusion equations. Numer. Math. **60**(1), 41–61 (1991)
* (4) Cai, X.C.: Multiplicative Schwarz methods for parabolic problems. SIAM J. Sci Comput. **15**(3), 587–603 (1994)
* (5) Dryja, M.: Substructuring methods for parabolic problems. In: R. Glowinski, Y.A. Kuznetsov, G.A. Meurant, J. Périaux, O. Widlund (eds.) Fourth International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA (1991)
* (6) Kuznetsov, Y.: New algorithms for approximate realization of implicit difference schemes. Sov. J. Numer. Anal. Math. Model. **3**(2), 99–114 (1988)
* (7) Kuznetsov, Y.: Overlapping domain decomposition methods for FE-problems with elliptic singular perturbed operators. Fourth international symposium on domain decomposition methods for partial differential equations, Proc. Symp., Moscow/Russ. 1990, 223-241 (1991)
* (8) Laevsky, Y.: Domain decomposition methods for the solution of two-dimensional parabolic equations. In: Variational-difference methods in problems of numerical analysis, 2, pp. 112–128. Comp. Cent. Sib. Branch, USSR Acad. Sci., Novosibirsk (1987). In Russian
* (9) Lax, P.D.: Linear algebra and its applications. 2nd edition. Pure and Applied Mathematics. A Wiley-Interscience Series of Texts, Monographs & Tracts. New York, NY: Wiley. xvi, 376 p. (2007)
* (10) Marchuk, G.: Splitting and alternating direction methods. In: P.G. Ciarlet, J.L. Lions (eds.) Handbook of Numerical Analysis, Vol. I, pp. 197–462. North-Holland (1990)
* (11) Mathew, T.: Domain decomposition methods for the numerical solution of partial differential equations. Lecture Notes in Computational Science and Engineering 61. Berlin: Springer. xiii, 764 p. (2008)
* (12) Quarteroni, A., Valli, A.: Domain decomposition methods for partial differential equations. Numerical Mathematics and Scientific Computation. Oxford: Clarendon Press. xv, 360 p. (1999)
* (13) Samarskii, A.: The theory of difference schemes. Pure and Applied Mathematics, Marcel Dekker. 240. New York, NY: Marcel Dekker. 786 p. (2001)
* (14) Samarskii, A., Matus, P., Vabishchevich, P.: Difference schemes with operator factors. Mathematics and its Applications (Dordrecht). 546. Dordrecht: Kluwer Academic Publishers. x, 384 p. (2002)
* (15) Samarskii, A., Nikolaev, E.: Numerical methods for grid equations. Birkhäuser (1989)
* (16) Samarskii, A., Vabishchevich, P.: Vector additive schemes of domain decomposition for parabolic problems. Differ. Equations **31**(9), 1522–1528 (1995)
* (17) Samarskii, A., Vabishchevich, P.: Factorized finite-difference schemes for the domain decomposition in convection-diffusion problems. Differ. Equations **33**(7), 972–979 (1997)
* (18) Samarskii, A., Vabishchevich, P.: Regularized additive full approximation schemes. Doklady. Mathematics **57**(1), 83–86 (1998)
* (19) Samarskii, A., Vabishchevich, P.: Additive schemes for problems of mathematical physics (Additivnye skhemy dlya zadach matematicheskoj fiziki). Moscow: Nauka. 320 p. (1999). In Russian
* (20) Samarskii, A., Vabishchevich, P.: Domain decomposition methods for parabolic problems. In: C.H. Lai, P. Bjorstad, M. Gross, O. Widlund (eds.) Eleventh International Conference on Domain Decomposition Methods, pp. 341–347. DDM.org (1999)
* (21) Samarskii, A., Vabishchevich, P.: Numerical methods for solution of convection-diffusion problems (Chislennye metody resheniya zadach konvekcii-diffuzii). Moscow: URSS. 247 p. (1999). In Russian
* (22) Samarskii, A., Vabishchevich, P., Matus, P.: Stability of Vector Additive Schemes. Doklady. Mathematics **58**(1), 133–135 (1998)
* (23) Samarskii, A.A., Vabishchevich, P.N.: Regularized difference schemes for evolutionary second order equations. Math. Models and Methods in Applied Sciences **2**(3), 295–315 (1992)
* (24) Smith, B.: Domain decomposition. Parallel multilevel methods for elliptic partial differential equations. Cambridge: Cambridge University Press. xii, 224 p. (1996)
* (25) Toselli, A., Widlund, O.: Domain decomposition methods – algorithms and theory. Springer Series in Computational Mathematics 34. Berlin: Springer. xv, 450 p. (2005)
* (26) Vabishchevich, P.: Difference schemes with domain decomposition for solving non-stationary problems. U.S.S.R. Comput. Math. Math. Phys. **29**(6), 155–160 (1989)
* (27) Vabishchevich, P.: Regional-additive difference schemes for nonstationary problems of mathematical physics. Mosc. Univ. Comput. Math. Cybern. (3), 69–72 (1989)
* (28) Vabishchevich, P.: Parallel domain decomposition algorithms for time-dependent problems of mathematical physics. In: Advances in Numerical Methods and Applications, pp. 293–299. World Schientific (1994)
* (29) Vabishchevich, P.: Regionally additive difference schemes with a stabilizing correction for parabolic problems. Comput. Math. Math. Phys. **34**(12), 1573–1581 (1994)
* (30) Vabishchevich, P.: Finite-difference domain decomposition schemes for nonstationary convection-diffusion problems. Differ. Equations **32**(7), 929–933 (1996)
* (31) Vabishchevich, P.: Vector additive difference schemes for first-order evolutionary equations. Computational mathematics and mathematical physics **36**(3), 317–322 (1996)
* (32) Vabishchevich, P.: Domain decomposition methods with overlapping subdomains for the time-dependent problems of mathematical physics. Comput. Methods Appl. Math. **8**(4), 393–405 (2008)
* (33) Vabishchevich, P., Verakhovskij, V.: Difference schemes for component-wise splitting-decomposition of a domain. Mosc. Univ. Comput. Math. Cybern. **1994**(3), 7–11 (1994)
* (34) Yanenko, N.: The method of fractional steps. The solution of problems of mathematical physics in several variables. Berlin-Heidelberg-New York: Springer Verlag, VIII, 160 p. with 15 fig. (1971)
|
1506.03539 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 53764,
"num_imgs": 16,
"llama3_tokens_count": 15196
} | [
"content_image/1506.03539/x1.png",
"content_image/1506.03539/x2.png",
"content_image/1506.03539/x3.png",
"content_image/1506.03539/x4.png",
"content_image/1506.03539/x5.png",
"content_image/1506.03539/x7.png",
"content_image/1506.03539/x8.png",
"content_image/1506.03539/x9.png",
"content_image/1506.03539/x11.png",
"content_image/1506.03539/x12.png",
"content_image/1506.03539/x13.png",
"content_image/1506.03539/x14.png",
"content_image/1506.03539/x15.png",
"content_image/1506.03539/x16.png",
"content_image/1506.03539/x17.png",
"content_image/1506.03539/x18.png"
] | # Reexamination of the evidence for entanglement in a quantum annealer
Tameem Albash
albash@usc.edu
Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089, USA
Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292
Center for Quantum Information Science & Technology, University of Southern California, Los Angeles, California 90089, USA
Itay Hen
Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292
Center for Quantum Information Science & Technology, University of Southern California, Los Angeles, California 90089, USA
Federico M. Spedalieri
Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292
Center for Quantum Information Science & Technology, University of Southern California, Los Angeles, California 90089, USA
Department of Electrical Engineering, University of Southern California, Los Angeles, California 90089, USA
Daniel A. Lidar
Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089, USA
Center for Quantum Information Science & Technology, University of Southern California, Los Angeles, California 90089, USA
Department of Electrical Engineering, University of Southern California, Los Angeles, California 90089, USA
Department of Chemistry, University of Southern California, Los Angeles, California 90089, USA
###### Abstract
A recent experiment [Lanting _et al._, PRX, (2014)] claimed to provide evidence of up to \(8\)-qubit entanglement in a D-Wave quantum annealing device. However, entanglement was measured using qubit tunneling spectroscopy, a technique that provides indirect access to the state of the system at intermediate times during the anneal by performing measurements at the end of the anneal with a probe qubit. In addition, an underlying assumption was that the quantum transverse-field Ising Hamiltonian, whose ground states are already highly entangled, is an appropriate model of the device, and not some other (possibly classical) model. This begs the question of whether alternative, classical or semiclassical models would be equally effective at predicting the observed spectrum and thermal state populations. To check this, we consider a recently proposed classical rotor model with classical Monte Carlo updates, which has been successfully employed in describing features of earlier experiments involving the device. We also consider simulated quantum annealing with quantum Monte Carlo updates, an algorithm that samples from the instantaneous Gibbs state of the device Hamiltonian. Finally, we use the quantum adiabatic master equation, which cannot be efficiently simulated classically, and which has previously been used to successfully capture the open system quantum dynamics of the device. We find that only the master equation is able to reproduce the features of the tunneling spectroscopy experiment, while both the classical rotor model and simulated quantum annealing fail to reproduce the experimental results. We argue that this bolsters the evidence for the reported entanglement.
Quantum Annealer, Entanglement
## I Introduction
The D-Wave processors Johnson _et al._ (2010); Berkley _et al._ (2010); Johnson _et al._ (2011) are designed to be physical quantum annealers Brooke _et al._ (1999) performing adiabatic evolution using programmable superconducting flux qubits. These devices have generated a substantial debate in the quantum computing community by laying claim to being the first large scale implementation of a quantum algorithm S. Suzuki and A. Das (guest eds.) (2015). Much effort has been directed at answering the fundamental question of whether the D-Wave processors exhibit sufficient “quantumness” to justify these claims.
Independent verification of the quantumness of the D-Wave devices is challenging in part because of their black-box nature: the user interacts with the device by presenting it with an Ising model problem instance that is programmed as input, and receives a classical bit string representing the measured state of the qubits in the computational basis as output, at the end of the computation (or quantum annealing run). This state is the device’s attempt at finding the ground state of the input Ising problem instance. This input-output interaction mode is clearly not amenable to the usual tests of quantumness emphasizing non-locality Reichardt _et al._ (2013). Nevertheless, for sufficiently small problems (\(\lesssim 20\) qubits), specific instances that emphasize quantum features of the evolution have been designed and found to show strong agreement only with quantum master equations, i.e., open quantum system models, but not with classical models Boixo _et al._ (2013); Albash _et al._ (8); Boixo _et al._ (9). However, for a class of much larger (\(>100\) qubits) random Ising model problems, the device’s output exhibited strong correlations Boixo _et al._ (10); Shin _et al._ (2014) with a classical rotor model with Monte Carlo updates due to Shin, Smith, Smolin & Vazirani (SSSV) Shin _et al._ (2014), with simulated quantum annealing (SQA) implemented using quantum Monte Carlo Martoňák _et al._ (2002); Santoro _et al._ (2002); Boixo _et al._ (10), and with Parallel Tempering simulations Martin-Mayor and Hen (2015). While a detailed study of the excited states and degenerate ground states showed significant deviations between the device and these models Albash _et al._ (15), these results nevertheless keep alive the question of whether SQA and the SSSV model provide an effective microscopic description of the device at large numbers of qubits. This question is particularly pertinent since both models are efficiently simulatable on classical computers. Another approach that has been used successfully to model the D-Wave devices is the quantum adiabatic master equation (ME) Albash _et al._ (2012). The latter is so far the only model that has successfully captured all aspects of the “quantum signature” experiments reported in Refs. Boixo _et al._ (2013); Albash _et al._ (8). Importantly, unlike SQA, this quantum model does not lend itself to an efficient classical simulation. A related master equation (based on the noninteracting blip approximation Leggett _et al._ (1987)) was successfully used to model collective tunneling in experiments involving a D-Wave device Boixo _et al._ (9).
In contrast to the black-box approach, permitting observations only at the end of each annealing run, recent experiments found evidence of entanglement generated during the evolution of the D-Wave devices from input to output, effectively opening the black box Lanting _et al._ (2014). Specifically, the experiments reported in Ref. Lanting _et al._ (2014) showed, using qubit tunneling spectroscopy Berkley _et al._ (2013), that the measured quantum spectrum and thermal populations are in strong agreement with the quantum spectrum and Gibbs state of the transverse-field Ising Hamiltonian—the Hamiltonian the device is supposed to evolve under. This, in turn, allowed Ref. Lanting _et al._ (2014) to demonstrate the existence of entanglement in the device using negativity Vidal and Werner (2002); Love _et al._ (2007) and an entanglement witness Terhal (2000); Gühne and Tóth (2009); Spedalieri (2012).
However, an important caveat is that these experiments only provide an indirect way to detect entanglement, since the underlying assumption is that the quantum transverse-field Ising Hamiltonian is an appropriate model of the device. That is, entanglement was detected under the assumption that the measured spectrum and populations arise from transverse-field Ising models whose ground states are already highly entangled, and not from some other (possibly classical) model. This raises the question of whether alternative, classical or semiclassical models would be equally effective at predicting the observed spectrum and thermal state populations. If so, the deduced entanglement witness would not be applicable.
To make this important point clearer, we might formulate it as a “model loophole,” in the tradition of the loopholes associated with nonlocality and Bell inequality tests (see, e.g., Ref. Gerhardt _et al._ (2011)). The loophole is simply the fact that entanglement was detected under the assumption that a particular, already-quantum model of the dynamics is responsible for the observed experimental results. In an attempt to close this loophole, here we numerically simulate the tunneling spectroscopy experiment using the SSSV model, SQA, and the ME. We shall demonstrate that SQA and the SSSV model both fail to capture any feature of the entanglement witness experiments reported in Ref. Lanting _et al._ (2014), whereas the ME again succeeds. In other words, the reported measurements Lanting _et al._ (2014) are consistent with the ME, but inconsistent with SSSV and SQA. If the correct model of the D-Wave device is the ME, then the reported measurements imply entanglement. SQA fails not because it isn’t a quantum model, but because it is the incorrect quantum model for the experiments at hand: unlike the ME it does not include a unitary dynamics component, and this dynamics is what is presumably responsible for the experimental observations via an adiabatic connection of eigenstates. Our strategy does not, of course, rule out the possibility that other classical models, or efficiently simulatable quantum models, might also obtain agreement with the reported measurements. Our results may be seen as an invitation to invent such models, which, if found, would further guide the study of the quantumness question of the D-Wave devices.
The structure of this paper is as follows. In Section II we review the principle behind the qubit tunneling spectroscopy technique, as well as the theory behind the entanglement measures. In Section III we describe the simulation methods, in particular the ME, SQA, and the SSSV model. In Section IV we describe the simulated experiment. We present and discuss our results in Section V, where we demonstrate that only the ME matches the experimental tunneling spectroscopy results. We conclude in Section VI. In the Appendix we demonstrate the robustness of our conclusions to various noise models, and also reject another classical model based on spin dynamics with a friction term.
## II Review
### Qubit tunneling spectroscopy
We briefly review the principle behind qubit tunneling spectroscopy Berkley _et al._ (2013), where the goal is to find the energy gaps of the quantum system Hamiltonian \(H_{\mathrm{S}}\). We take \(H_{\mathrm{S}}\) to be of the form:
\[H_{\mathrm{S}}=-A\sum_{i=1}^{N}\sigma_{i}^{x}+BH_{\mathrm{IS}}\ ,\] (1)
where \(A,B>0\) are constants and
\[H_{\mathrm{IS}}=\sum_{i}h_{i}\sigma^{z}_{i}+\sum_{i<j}J_{ij}\sigma^{z}_{i} \sigma^{z}_{j}\] (2)
is an Ising Hamiltonian acting only on the system qubits. The \(h_{i}\) and \(J_{ij}\) are the local fields and couplings, respectively, and we use \(\sigma_{i}^{x}\) (\(\sigma_{i}^{z}\)) to denote the Pauli \(x\) (\(z\)) matrix acting on qubit \(i\).
We denote the eigenstates and eigenenergies of \(H_{\mathrm{S}}\) by \(\{|E_{n}\rangle\}_{n=1}\) and \(\{E_{n}\}_{n=1}\) respectively with \(E_{1}~{}\leq~{}E_{2}~{}\leq~{}\cdots\). A probe qubit P is coupled to system qubit \(1\) to give a system+probe Hamiltonian:
\[H_{\mathrm{S+P}} =H_{\mathrm{S}}+BH_{1\mathrm{P}}\ ,\] (3a)
\[H_{1\mathrm{P}} =J_{1\mathrm{P}}\sigma^{z}_{1}\sigma^{z}_{\mathrm{P}}-h_{\mathrm{P}}\sigma^{z}_{\mathrm{P}}-J_{1\mathrm{P}}\sigma^{z}_{1}\ .\] (3b)
An offset local field \(\propto-J_{1\mathrm{P}}\) has been applied to qubit \(1\) such that, in the eigenenergy subspace where the probe qubit is in the state \(|0\rangle\), the eigenstates of the system are given by \(|E_{n}\rangle\otimes|0\rangle\) with energy \(E_{n}-Bh_{\mathrm{P}}\) (where the first ket is the state of the system qubits and the second ket is the probe qubit state). When the probe is in the \(|1\rangle\) state, the lowest energy state of the system and probe can be written as \(|\psi_{0}\rangle\otimes|1\rangle\) with eigenenergy \(\tilde{\epsilon}_{0}=\epsilon_{0}+Bh_{\mathrm{P}}\), where \(|\psi_{0}\rangle\) is the ground state of \(H_{\mathrm{S}}-2BJ_{1\mathrm{P}}\sigma_{1}^{z}\) with eigenenergy \(\epsilon_{0}\).
Let us assume that the system and probe are initialized in the state \(|\psi_{0}\rangle\otimes|1\rangle\). Introducing a small transverse field term (\(\propto\sigma_{\mathrm{P}}^{x}\)) for the probe qubit allows for transitions between the states \(|\psi_{0}\rangle\otimes|1\rangle\) and \(|E_{n}\rangle\otimes|0\rangle\). In an open quantum system we may expect that the dominant process is incoherent tunneling between these two states Harris _et al._ (2008). By tuning the value of \(h_{\mathrm{P}}\), we can make the two states degenerate, i.e., \(E_{n}-Bh_{\mathrm{P}}=\epsilon_{0}+Bh_{\mathrm{P}}\), resulting in a resonant peak in the tunneling rate. Since both \(B\) and \(h_{\mathrm{P}}\) are known, this allows us to solve for differences of the \(E_{n}\), and by finding the locations of the tunneling peaks as a function of \(h_{\mathrm{P}}\), we can map out the quantum spectrum of \(H_{\mathrm{S}}\). For example, for a pair of such tunneling peaks at \(h_{\mathrm{P}}^{(1)}\) and \(h_{\mathrm{P}}^{(2)}\), corresponding to the \(n=1\) and \(n=2\) energy eigenstates respectively, the energy gap between the eigenstates \(|E_{2}\rangle\) and \(|E_{1}\rangle\) is then given by:
\[E_{2}-E_{1}=2B\left(h_{\mathrm{P}}^{(2)}-h_{\mathrm{P}}^{(1)}\right)\ .\] (4)
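As an illustration of Eqs. (1)–(4), the following minimal Python/NumPy sketch builds the transverse-field Ising Hamiltonian for two system qubits, diagonalizes it, and converts the energy gaps into the expected spacing of tunneling peaks in \(h_{\mathrm{P}}\). The values of \(A\), \(B\) and the ferromagnetic coupling used below are illustrative placeholders, not the calibrated schedule values of the experiment.

```python
import numpy as np
from functools import reduce

# Pauli matrices; convention sigma_z |0> = +|0>, sigma_z |1> = -|1>
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

def op(single, site, n):
    """Embed a single-qubit operator at position `site` of an n-qubit register."""
    return reduce(np.kron, [single if k == site else id2 for k in range(n)])

def H_system(A, B, h, J):
    """Transverse-field Ising Hamiltonian of Eq. (1) for n system qubits."""
    n = len(h)
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H += -A * op(sx, i, n) + B * h[i] * op(sz, i, n)
    for (i, j), Jij in J.items():
        H += B * Jij * op(sz, i, n) @ op(sz, j, n)
    return H

# Placeholder schedule values at s* (energies in GHz) and an illustrative coupling
A, B = 0.8, 2.0
E = np.linalg.eigvalsh(H_system(A, B, h=[0.0, 0.0], J={(0, 1): -2.5}))
print("eigenenergies:", E)
# Eq. (4): consecutive tunneling peaks are separated in h_P by (E_{n+1} - E_n)/(2B)
print("expected peak spacings in h_P:", np.diff(E) / (2 * B))
```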
### Equilibrium distribution
Let us assume that to a very good approximation our system only populates the states \(|\psi_{0}\rangle\otimes|1\rangle\) and \(\{|E_{n}\rangle\otimes|0\rangle\}_{n=1}\), such that the populations in these states sum to unity:
\[P(|\psi_{0}\rangle\otimes|1\rangle)+\sum_{n=1}P(|E_{n}\rangle\otimes|0\rangle) =1\ .\] (5)
If we observe that the probe qubit is in the state \(|0\rangle\), the system energy eigenstate populations \(P(E_{n})\) are given by:
\[P(E_{n})=\frac{P(|E_{n}\rangle\otimes|0\rangle)}{\sum_{i=1}P(|E_{i}\rangle \otimes|0\rangle)}=\frac{P(|E_{n}\rangle\otimes|0\rangle)}{1-P(|\psi_{0} \rangle\otimes|1\rangle)}\ .\] (6)
However, if we have tuned \(h_{\mathrm{P}}\) so that the states \(|\psi_{0}\rangle\otimes|1\rangle\) and \(|E_{n}\rangle\otimes|0\rangle\) are degenerate, and we have waited long enough that their populations have thermalized (i.e., they have equal populations), then \(P(|E_{n}\rangle\otimes|0\rangle)=P(|\psi_{0}\rangle\otimes|1\rangle)\). Under these assumptions we can find the energy eigenstate populations entirely in terms of the population of the state \(|\psi_{0}\rangle\otimes|1\rangle\):
\[P(E_{n})=\frac{P(|\psi_{0}\rangle\otimes|1\rangle)}{1-P(|\psi_{0}\rangle \otimes|1\rangle)}\ .\] (7)
We expect these to match the Gibbs state populations, i.e., \(P(E_{n})=e^{-\beta E_{n}}/Z\), where \(Z\) is the partition function of \(H_{\mathrm{S}}\) and \(\beta=1/(k_{B}T)\) is the inverse temperature.
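For reference, the Gibbs populations against which the extracted \(P(E_{n})\) are compared can be evaluated as in the short sketch below; the conversion assumes that the energies \(E_{n}\) are quoted in GHz (i.e., \(E/h\)), and the spectrum used here is a placeholder.

```python
import numpy as np

kB_over_h = 1.380649e-23 / 6.62607015e-34 / 1e9   # ~0.0208 GHz per mK
T_mK = 12.5                                       # operating temperature (cf. Fig. 1)
beta = 1.0 / (kB_over_h * T_mK)                   # inverse temperature, ~3.84 /GHz

E = np.array([-5.1, -4.2, 1.3, 2.0])              # placeholder eigenenergies of H_S (GHz)
w = np.exp(-beta * (E - E.min()))                 # shift by E.min() for numerical stability
print("Gibbs populations:", w / w.sum())
```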
### Evidence for entanglement
In Ref. Lanting _et al._ (2014), the authors use the populations found in the ground state \(P_{1}\) and first excited state \(P_{2}\) to construct the density matrix of the system \(\rho=\sum_{i=1}^{2}P_{i}|E_{i}\rangle\langle E_{i}|\), which assumes that the off-diagonal components in the energy eigenbasis are zero. Under this assumption (which, as we show below, agrees with the results of our ME simulations) the authors calculate the negativity Vidal and Werner (2002) for all possible bipartitions \(A\) of the system,
\[\mathcal{N}(\rho)=\frac{1}{2}\left(\parallel\rho^{\Gamma_{A}}\parallel_{1}-1 \right)\ ,\] (8)
where \(\rho^{\Gamma_{A}}\) denotes the partial transpose of \(\rho\) with respect to the bipartition \(A\). Global entanglement is then defined as the geometric mean of the negativity of all bipartitions Love _et al._ (2007), and was shown to be non-zero in Ref. Lanting _et al._ (2014).
A drawback of this approach is that it assumes the off-diagonal elements of the density matrix vanish. To ensure the robustness of an entanglement conclusion, an entanglement witness was used in Ref. Lanting _et al._ (2014) (based on the theory formulated in Ref. Spedalieri (2012)):
\[\mathcal{W}_{A}=|\phi\rangle\langle\phi|^{\Gamma_{A}}\ ,\] (9)
where \(|\phi\rangle\) is the eigenstate of \(|E_{0}\rangle\langle E_{0}|^{\Gamma_{A}}\) with the most negative eigenvalue. The entanglement witness approach succeeds in certifying entanglement even when the off-diagonal elements of the density matrix are not constrained to vanish, and thus certifies entanglement for a wider class of states than the negativity-based approach.
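A compact sketch of the negativity of Eq. (8) via the partial transpose is given below; the two-qubit Bell state is used only as a sanity check and is not meant to represent the experimental state.

```python
import numpy as np

def partial_transpose(rho, dims, subsys):
    """Partial transpose of rho over the qubits listed in `subsys` (dims = [2, 2, ...])."""
    n = len(dims)
    rt = rho.reshape(dims + dims)            # indices (i_1..i_n, j_1..j_n)
    for q in subsys:
        rt = np.swapaxes(rt, q, n + q)       # swap ket/bra index of qubit q
    d = int(np.prod(dims))
    return rt.reshape(d, d)

def negativity(rho, dims, subsys):
    """Eq. (8): N(rho) = (||rho^{Gamma_A}||_1 - 1)/2 for Hermitian rho."""
    ev = np.linalg.eigvalsh(partial_transpose(rho, dims, subsys))
    return 0.5 * (np.abs(ev).sum() - 1.0)

# Sanity check: a two-qubit Bell state has negativity 1/2
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(negativity(np.outer(psi, psi), dims=[2, 2], subsys=[0]))   # -> 0.5
```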
Using this entanglement witness, two non-trivial checks were performed. First, experimental errors in the populations \(P_{1}\) and \(P_{2}\) impose linear constraints on the state \(\rho\):
\[P_{i}-\Delta P_{i}\leq\mathrm{Tr}\left[\rho|E_{i}\rangle\langle E_{i}|\right] \leq P_{i}+\Delta P_{i}\ .\] (10)
If \(\mathrm{Tr}\left[\mathcal{W}_{A}\rho\right]<0\) for all \(\rho\) satisfying the experimental constraints in Eq. (10), then entanglement is certified for the bipartition \(A\). Optimizing \(\mathrm{Tr}[\mathcal{W}_{A}\rho]\) subject to the above constraints is an instance of a semidefinite program, a class of convex optimization problems for which efficient algorithms are known.
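A sketch of this optimization is shown below, assuming the cvxpy package is available as the semidefinite-programming front end; the eigenstates, populations, errors and the witness operator are stand-ins for the actual quantities of Eqs. (1), (9) and (10).

```python
import numpy as np
import cvxpy as cp

d = 4                                              # two-qubit example
E1 = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # stand-in for |E_1>
E2 = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)  # stand-in for |E_2>
pops = [(0.75, 0.05), (0.20, 0.05)]                # stand-in (P_i, Delta P_i)
W = np.kron(np.diag([1.0, -1.0]), np.eye(2)) / d   # stand-in witness operator W_A

rho = cp.Variable((d, d), hermitian=True)
cons = [rho >> 0, cp.trace(rho) == 1]
for ket, (p, dp) in zip([E1, E2], pops):
    proj = np.outer(ket, ket)
    cons += [cp.real(cp.trace(proj @ rho)) >= p - dp,
             cp.real(cp.trace(proj @ rho)) <= p + dp]

# Entanglement is certified for bipartition A if even the maximum of Tr[W_A rho]
# over all states consistent with Eq. (10) remains negative.
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(W @ rho))), cons)
prob.solve()
print("max Tr[W rho] subject to the constraints:", prob.value)
```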
Second, uncertainties in the specification of the Hamiltonian in Eq. (1), which can lead to changes in the eigenstates \(|E_{i}\rangle\), were included by adding random perturbations (\(10^{4}\) samples in total) to the Hamiltonian, while ensuring that the maximum of \(\mathrm{Tr}\left[\mathcal{W}_{A}\rho\right]<0\) for all perturbations. We independently verify that such noise does not change the presence-of-entanglement conclusion in Appendix A.
## III Simulation Methods
The time-dependent Hamiltonian of the experiment is given by
\[H(s)=-A_{\mathrm{S}}(s)\sum_{i=1}^{N}\sigma_{i}^{x}-A_{\mathrm{P}}(s)\sigma_{P }^{x}+B(s)H_{\mathrm{Ising}}\ ,\] (11)
where \(s=t/t_{f}\) is the dimensionless time, and the Ising Hamiltonian \(H_{\mathrm{Ising}}\) is given by:
\[H_{\mathrm{Ising}}=H_{\mathrm{IS}}+H_{1\mathrm{P}}\] (12)
Just as in Ref. Lanting _et al._ (2014), we take \(J_{1\mathrm{P}}=-1.8\). The annealing schedule functions (\(A_{\mathrm{S}}(s),A_{\mathrm{P}}(s),B(s)\)) are shown in Fig. 1. These schedules are calculated using rf-SQUID models with independently calibrated qubit parameters Lanting (2015), and they correspond to the same schedule used in Ref. Lanting _et al._ (2014) [see their Fig. 1(d)].
<figure><img src="content_image/1506.03539/x1.png"><figcaption>Figure 1: (Color online) The annealing schedules used in the experimentLanting _et al._ (2014). The functional form for AS(s) and AP(s) in Eq. (11)is identical to the function A(s) shown. The dashed line corresponds to theexperimental temperature of 12.5mK.</figcaption></figure>
### ME
We assume that the qubit system is coupled to a bath of independent, thermal, Ohmic oscillators via a dephasing interaction. The adiabatic quantum master equation Albash _et al._ (2012) can be used to describe the system evolution in the weak coupling limit, and the evolution of the density matrix is given by:
\[\frac{d}{dt}\rho = -\frac{i}{\hbar}\left[H(t),\rho\right]\] (13)
\[+\sum_{\alpha=1}^{N}\frac{g_{\alpha}^{2}}{\hbar^{2}}\sum_{a,b} \Gamma(\omega_{ba})\left[L_{ab,\alpha}(t)\rho,\sigma^{z}_{\alpha}\right]+ \mathrm{h.c.}\ ,\]
where the index \(\alpha\) runs over the qubits, the indices \(a,b=1,\dots,2^{N}\) run over the instantaneous energy eigenvalues \(\varepsilon_{a}(t)\) of the system Hamiltonian \(H(t)\), with Bohr frequencies \(\omega_{ba}=[\varepsilon_{b}(t)-\varepsilon_{a}(t)]/\hbar\). The Lindblad operators are given by:
\[L_{ab,\alpha}(t)=\langle\varepsilon_{a}(t)|\sigma_{\alpha}^{z}|\varepsilon_{b}(t)\rangle\,|\varepsilon_{a}(t)\rangle\langle\varepsilon_{b}(t)|\ ,\] (14)
and the function \(\Gamma\) encodes the bath correlation function:
\[\Gamma(\omega)= \frac{1}{2}\gamma(\omega)+iS(\omega)\] (15a)
\[= \frac{\pi\eta\,\omega\,e^{-|\omega|/\omega_{c}}}{1-e^{-\beta\omega}}+i\int _{-\infty}^{\infty}\frac{d\omega^{\prime}}{2\pi}\gamma(\omega^{\prime}) \mathcal{P}\left(\frac{1}{\omega-\omega^{\prime}}\right)\ ,\] (15b)
where \(\mathcal{P}\) is the principal value. The important free parameters in the ME are the coupling strengths between the system and probe qubits to their respective bosonic baths. We denote these couplings by \(g_{\mathrm{S}}\) and \(g_{\mathrm{P}}\) respectively. For convenience, we take the system-bath coupling strength \(g_{\mathrm{S}}\) to be the same and fixed for all the system qubits, and vary the probe-bath coupling strength \(g_{\mathrm{P}}\) relative to \(g_{\mathrm{S}}\), expecting \(g_{\mathrm{P}}\geq g_{\mathrm{S}}\), since experimentally the probe qubit is operated in a regime where its coupling to the environment is strong (see Supplementary Information of Ref. Lanting _et al._ (2014)). In the simulations performed in this work, we fix the system-bath coupling strength to:
\[g_{\mathrm{S}}^{2}\eta/\hbar^{2}=1.2732\times 10^{-4}\ ,\] (16)
which is the value used in previous work Albash _et al._ (8).
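The ingredients of Eqs. (13)–(15) can be assembled as in the following sketch, which constructs the matrix elements entering the Lindblad operators of Eq. (14) in the instantaneous eigenbasis and evaluates the Ohmic rate \(\gamma(\omega)\); energies are assumed to be in GHz with \(\hbar=1\), and the values of \(\eta\), \(\omega_{c}\) and \(\beta\) are illustrative rather than the calibrated ones.

```python
import numpy as np

def ohmic_gamma(w, eta, beta, wc):
    """gamma(w) = 2*pi*eta*w*exp(-|w|/wc)/(1 - exp(-beta*w)), with the
    w -> 0 limit 2*pi*eta/beta taken explicitly."""
    w = np.atleast_1d(np.asarray(w, dtype=float))
    out = np.full_like(w, 2.0 * np.pi * eta / beta)
    nz = np.abs(w) > 1e-12
    out[nz] = 2.0 * np.pi * eta * w[nz] * np.exp(-np.abs(w[nz]) / wc) / (1.0 - np.exp(-beta * w[nz]))
    return out

def lindblad_data(H, sz_ops):
    """Instantaneous eigenbasis, matrix elements <eps_a|sigma^z_alpha|eps_b>
    entering Eq. (14), and the Bohr frequencies omega_ba = eps_b - eps_a."""
    evals, V = np.linalg.eigh(H)
    elems = [V.conj().T @ sz @ V for sz in sz_ops]
    bohr = evals[None, :] - evals[:, None]
    return evals, V, elems, bohr

# Illustrative single-qubit example
sx = np.array([[0., 1.], [1., 0.]]); sz = np.array([[1., 0.], [0., -1.]])
evals, V, elems, bohr = lindblad_data(-0.8 * sx + 0.6 * sz, [sz])
print(ohmic_gamma(bohr.ravel(), eta=1.27e-4, beta=3.84, wc=8 * np.pi))
```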
### SSSV
The Hamiltonian of the SSSV model Shin _et al._ (2014) is obtained by replacing \(\sigma_{i}^{x}\mapsto\sin\theta_{i}\) and \(\sigma^{z}_{i}\mapsto\cos\theta_{i}\) in Eq. (11). The system is evolved by performing Monte Carlo updates on the angles \(\theta_{i}\in[0,\pi]\). At the end of the evolution, the state is projected onto the computational basis by mapping \(\theta_{i}\leq\pi/2\to+1\) (state 0) and \(\theta_{i}>\pi/2\to-1\) (state 1). This model was derived from first principles using the Keldysh formalism in Ref. Crowley and Green (2015).
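A minimal sketch of the SSSV dynamics is given below: a Metropolis sweep over the angles \(\theta_{i}\) with the classical energy obtained from Eq. (11) under the substitution \(\sigma^{x}_{i}\mapsto\sin\theta_{i}\), \(\sigma^{z}_{i}\mapsto\cos\theta_{i}\). The schedule values, local fields and couplings in the example are illustrative placeholders, and the temperature \(T\approx 0.26\) GHz corresponds to 12.5 mK if energies are quoted in GHz.

```python
import numpy as np

def local_z(i, theta, h, J):
    """Effective z-field on rotor i: h_i + sum_j J_ij cos(theta_j)."""
    z = h[i]
    for (a, b), Jab in J.items():
        if a == i:
            z += Jab * np.cos(theta[b])
        elif b == i:
            z += Jab * np.cos(theta[a])
    return z

def sssv_sweep(theta, A, B, h, J, T, rng):
    """One Metropolis sweep; E(theta) = -A sum_i sin(theta_i)
    + B [sum_i h_i cos(theta_i) + sum_{i<j} J_ij cos(theta_i) cos(theta_j)]."""
    for i in rng.permutation(len(theta)):
        new = rng.uniform(0.0, np.pi)
        dE = (-A * (np.sin(new) - np.sin(theta[i]))
              + B * local_z(i, theta, h, J) * (np.cos(new) - np.cos(theta[i])))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            theta[i] = new
    return theta

rng = np.random.default_rng(1)
theta = np.full(3, np.pi)                      # all angles pi: the |1...1> configuration
h, J = [0.0, 0.0, 0.5], {(0, 1): -2.5, (0, 2): -1.8}
for _ in range(1000):
    sssv_sweep(theta, A=0.8, B=2.0, h=h, J=J, T=0.26, rng=rng)
print(np.where(theta <= np.pi / 2, +1, -1))    # projection onto the computational basis
```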
### SQA
We implement a discrete-time version of SQA Martoňák _et al._ (2002); Santoro _et al._ (2002); Boixo _et al._ (10). At each fixed time \(t\) in Eq. (11), Monte Carlo sampling is performed on the dual classical spin system with:
\[\beta\mathcal{H}_{S}(t) = \frac{\beta}{N_{\tau}}B(t)\sum_{\tau}\left[\sum_{i}h_{i}\mu_{i, \tau}+\sum_{i<j}J_{ij}\mu_{i,\tau}\mu_{j,\tau}\right]\] (17)
\[-\sum_{i,\tau}J_{\perp,i}(t)\mu_{i,\tau}\mu_{i,\tau+1}\ ,\]
where \(\beta\) is the inverse temperature of the Monte Carlo simulation, \(N_{\tau}\) is the number of Trotter slices used along the Trotter direction, \(\mu_{i,\tau}\) denotes the \(i\)th classical spin on the \(\tau\)th Trotter slice, and \(J_{\perp,i}\) is the nearest-neighbor coupling strength of the \(i\)th qubit along its Trotter direction and is given by:
\[J_{\perp,i}(t)\equiv-\frac{1}{2}\ln(\tanh(\beta A_{i}(t)/N_{\tau}))>0\ .\] (18)
In our simulations we fixed \(N_{\tau}=128\) (we checked that increasing it does not alter the results).
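The following sketch implements one Metropolis sweep of the discrete-time SQA dynamics defined by Eqs. (17) and (18); for brevity a single transverse-field value \(A\) is used for all spins, and the parameter values in the example are placeholders.

```python
import numpy as np

def j_perp(A, beta, n_tau):
    """Eq. (18): ferromagnetic coupling along the Trotter direction."""
    return -0.5 * np.log(np.tanh(beta * A / n_tau))

def sqa_sweep(mu, A, B, h, J, beta, rng):
    """One Metropolis sweep over the classical spins mu[tau, i] = +/-1 of Eq. (17),
    with periodic boundary conditions along the Trotter direction."""
    n_tau, n = mu.shape
    Jp = j_perp(A, beta, n_tau)
    for tau in range(n_tau):
        for i in rng.permutation(n):
            z = h[i]                              # field within the Trotter slice
            for (a, b), Jab in J.items():
                if a == i:
                    z += Jab * mu[tau, b]
                elif b == i:
                    z += Jab * mu[tau, a]
            neighbors = mu[(tau - 1) % n_tau, i] + mu[(tau + 1) % n_tau, i]
            # change of the action in Eq. (17) when flipping mu[tau, i]
            dS = -2.0 * mu[tau, i] * (beta / n_tau * B * z - Jp * neighbors)
            if dS <= 0 or rng.random() < np.exp(-dS):
                mu[tau, i] *= -1
    return mu

rng = np.random.default_rng(2)
mu = -np.ones((128, 3), dtype=int)                # N_tau = 128 replicas of |1...1>
h, J = [0.0, 0.0, 0.5], {(0, 1): -2.5, (0, 2): -1.8}
for _ in range(200):
    sqa_sweep(mu, A=0.8, B=2.0, h=h, J=J, beta=3.84, rng=rng)
print("slice-averaged spins:", mu.mean(axis=0))
```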
## IV Description of the simulated experiment
The initial Hamiltonian is given by \(H(1)\), with \(H(s)\) as in Eq. (11). We choose the initial state of our ME and SQA simulations to be \(|\mathbf{1}\rangle\equiv|1\rangle^{\otimes(N+1)}\) (the \(N\) system qubits plus the probe qubit); for SSSV we correspondingly choose all the initial angles as \(\theta_{i}=\pi\). We emphasize that this is not necessarily the ground state of \(H(1)=B(1)H_{\mathrm{Ising}}\). Note that by doing this we are able to skip the state preparation step performed in Ref. Lanting _et al._ (2014). The simulation of the experiment then proceeds as follows:
1. \(B(s)\) and \(A_{\mathrm{S}}(s)\) are evolved backward from \(s=1\) to \(s=s^{\ast}\) in a time \(\tau_{1}\). The choice of \(s^{\ast}\) determines the quantum Hamiltonian whose spectrum we wish to study.
2. \(A_{\mathrm{P}}(s)\) is evolved backward from \(s=1\) to \(s=s_{\mathrm{P}}\) in a time \(\tau_{1}\).
3. The system evolves under the constant Hamiltonian \(H=-A_{\mathrm{S}}(s^{\ast})\sum_{i\in S}\sigma_{i}^{x}-A_{\mathrm{P}}(s_{ \mathrm{P}})\sigma_{P}^{x}+B(s^{\ast})H_{\mathrm{Ising}}\) for a “hold” time \(\tau\). Note that this means that the values of \(A\) and \(B\) in Eq. (1) are given by \(A(s^{\ast})\) and \(B(s^{\ast})\) respectively.
4. \(A_{\mathrm{P}}\) is evolved forward from \(s=s_{\mathrm{P}}\) to \(s=1\) in a time \(\tau_{1}\).
5. \(B(s)\) and \(A_{\mathrm{S}}(s)\) are evolved forward from \(s=s^{\ast}\) to \(s=1\) in a time \(\tau_{1}\).
The state of the system qubits and the probe qubit is then read out. Since the states \(|\mathbf{1}\rangle\) and \(|\psi_{0}\rangle\otimes|1\rangle\) are adiabatically connected energy eigenstates, measuring the population change in the \(|\mathbf{1}\rangle\) state indicates how much incoherent tunneling Harris _et al._ (2008) (to the iso-energetic state \(|E_{n}\rangle\otimes|0\rangle\)) has occurred at \(s^{\ast}\). By repeating the experiment for different values of the hold time \(\tau\) and recording the probability \(P_{|\mathbf{1}\rangle}(\tau,h_{\mathrm{P}},s^{\ast})\) of observing the \(|\mathbf{1}\rangle\) state at the end of the experiment, we can extract the tunneling rate \(\Gamma(h_{\mathrm{P}},s^{\ast})\) by fitting \(P_{|\mathbf{1}\rangle}(\tau,h_{\mathrm{P}},s^{\ast})\) to the function \(a+be^{-\Gamma(h_{\mathrm{P}},s^{\ast})\tau}\). The experiments are repeated for a range of \(h_{\mathrm{P}}\) values in order to find the location of the peaks in \(\Gamma\).
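The last step, extracting \(\Gamma(h_{\mathrm{P}},s^{\ast})\) from the decay of \(P_{|\mathbf{1}\rangle}(\tau)\), can be done with a standard nonlinear fit, as in the sketch below; the data points are synthetic and serve only to illustrate the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(tau, a, b, gamma):
    """Fit model a + b * exp(-gamma * tau) for the |1...1> population."""
    return a + b * np.exp(-gamma * tau)

tau = np.array([1., 2., 5., 10., 20., 50., 100.])                        # hold times (us)
rng = np.random.default_rng(3)
P1 = 0.35 + 0.6 * np.exp(-0.08 * tau) + 0.01 * rng.normal(size=tau.size)  # synthetic data

(a, b, gamma), _ = curve_fit(decay, tau, P1, p0=(0.3, 0.6, 0.1))
print("tunneling rate Gamma = %.3f per microsecond" % gamma)
```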
<figure><img src="content_image/1506.03539/x2.png"><figcaption>Figure 2: (Color online) Depiction of the 2+1-qubit Ising problem. Qubits aredisplayed as (labeled) green disks, with their local field value above them.Couplings are shown as solid lines connecting qubits with their value abovethe line. Values are picked according to the experiment in Ref. Lanting _etal._ (2014), i.e., JS=−2.5, J1P=−1.8, and hP is varied.</figcaption></figure>
<figure><img src="content_image/1506.03539/x3.png"><figcaption>Figure 3: (Color online) Depiction of the 8+1-spin Ising problem. The localfields are zero, and we take the couplings to be JS=−2.5. The probe qubitcouples as in Fig. 2.</figcaption></figure>
<figure><img src="content_image/1506.03539/x4.png"><figcaption>Figure 4: (Color online) Tunneling rate calculated using the ME, SSSV, and SQAfor s∗=0.339 and gP=10gS compared against the experimental results in Ref.Lanting _et al._ (2014) [see the center figure in their Fig. 8(a)], shiftedto the left by 14.5GHz to align the first peak with the ME results (a constantterm in the Hamiltonian present in the experimental setup but not in the MEaccounts for this shift). For the experimental results, 1σ error bars areshown. The two dominant peaks correspond to resonances with the ground stateand the first excited state; the magnitude of the peak grows as gP→gS (shownin Appendix B), but the peak position remains the same regardless of the valueof gP. For SSSV, we show the results with T=12.5 mK and τ1=5. 106 runs wereperformed for each hP value, and the population of the |1⟩ state wasdetermined by counting the number of times it occurred in these runs. For SQA,we show the results with T=12.5 mK and τ1=5. We performed 106 runs for each hPvalue, and the population of the |1⟩ state was determined by counting thenumber of times it occurred in these runs. The tunneling rate for the ME ismeasured in μs−1, while for SSSV and SQA it is in inverse sweeps.</figcaption></figure>
<figure><img src="content_image/1506.03539/x5.png"><figcaption></figcaption></figure>
The simulations were performed, as in the experiment of Ref. Lanting _et al._ (2014), on two system qubits plus one probe qubit, depicted in Fig. 2, and eight system qubits plus one probe qubit, depicted in Fig. 3.
## V Results
### ME: \(2+1\) qubit results
We first analyze the 2+1 qubit system example studied in Ref. Lanting _et al._ (2014), using the adiabatic quantum master equation described in Sec. III.1. We perform the procedure outlined in Sec. IV with \(\tau_{1}=10~\mu\mathrm{s}\), and we give an example of the tunneling rate observed at a particular \(s^{\ast}\) in Fig. 4 and compare it to the experimental results of Ref. Lanting _et al._ (2014). We estimate the position of each peak by fitting the data points around the peak with a Gaussian. We find excellent agreement with the energy spectrum as computed directly from diagonalizing the Hamiltonian, as shown in Fig. 5. We then extract the energy eigenstate population distribution using Eq. (7) at the values of \(h_{\mathrm{P}}\) corresponding to the peaks in the tunneling rate, and reproduce the theoretical Gibbs state; see Fig. 5. Therefore, by reproducing the quantum spectrum and Gibbs populations, the ME is able to reproduce the spectroscopy results of the experiments performed in Ref. Lanting _et al._ (2014).
While the relative position of the dominant peaks corresponding to the ground state and first excited state agree very well, we observe that the experimental results are broadened considerably. It is shown in Ref. Lanting _et al._ (2014) that the experimental broadening is dominated by the linewidth of the probe qubit, which is claimed to be strongly affected by low frequency (\(1/f\)) noise Amin (2015). We attempted to reproduce the broadening using a variety of methods detailed in Appendix B, including the incorporation of low frequency noise, but unfortunately were unable to do so, and this remains an open problem. Furthermore, the positions of the third and fourth peak corresponding to the third and fourth excited states do not match the experimental result well, but this can be understood as being due to a breakdown of the two-level approximation of the rf-SQUIDs (see the Supplementary Information of Ref. Lanting _et al._ (2014) and also Ref. Amin _et al._ (2013)).
### SSSV: \(2+1\) qubit results
We now follow the same procedure using the SSSV model. Because this model effectively operates in a strong system-bath coupling regime, it thermalizes rapidly, so we are forced to make \(\tau_{1}\) very small (we take each of these evolution segments to be only \(5\) Monte Carlo sweeps, where a sweep is a complete update of all spins). Otherwise the SSSV model quickly forgets the initial state before it reaches \(s^{\ast}\), and also forgets the state it relaxed to at \(s^{\ast}\) when it returns to \(s=1\).
We show the tunneling rate calculated using the SSSV model in Fig. 4. Only a single peak is observed over the entire range of \(h_{\mathrm{P}}\) values studied, and this peak does not occur at the same position as any of the ME peaks. It is perhaps not surprising that the SSSV model does not reproduce multiple tunneling peaks, as its energy spectrum is continuous. The single tunneling peak represents the thermal configuration of the classical rotors at \(s^{\ast}\) that maximally depopulates the \(|\mathbf{1}\rangle\) state. However, as the SQA results will demonstrate next, the failure is ultimately due to the absence of unitary dynamics that adiabatically connects energy eigenstates.
### SQA: \(2+1\) qubit results
Unlike the SSSV model’s continuous energy spectrum, SQA’s spectrum is discrete. However, as we show in Fig. 4, SQA also fails to reproduce the experimental tunneling signature on the \(2+1\) qubit problem and has a strong similarity to the SSSV model’s result (also shown in Fig. 4). We note that SQA and SSSV also correlated strongly on the \(108\)-qubit random Ising problems studied in Ref. Boixo _et al._ (10), as further corroborated and explained in Ref. Albash _et al._ (15).
First, we check whether this failure is due to SQA somehow failing to capture the thermal expectation values of observables. This is not the case: when we hold SQA at a constant Hamiltonian, it correctly reproduces the thermal quantum Gibbs state populations of the computational states, as shown in Fig. 6. Instead, the present failure of SQA is rooted in the absence of unitary dynamics: there is no adiabatic connection between the initial \(|\mathbf{1}\rangle\) state and the state \(|\psi_{0}\rangle\otimes|1\rangle\) at the point \(s^{\ast}\). For the arguments presented earlier to work, the evolution must remain adiabatic during the anneal from \(s=1\) to \(s^{\ast}\) and \(s_{\mathrm{P}}\) and back to \(s=1\), with no loss of population from the energy eigenstates, so that the population of the state \(|\mathbf{1}\rangle\) accurately tracks the population of the state \(|\psi_{0}\rangle\otimes|1\rangle\). For the ME this is plausible, since all other states with the probe qubit in the \(|1\rangle\) state at \(s=1\) lie at much larger energies, i.e., they correspond to much higher energy eigenstates. Therefore, in a simulation with a unitary dynamics component such as the ME, we neither expect nor observe any of these states to be populated, so that the only populated state with the probe qubit in the \(|1\rangle\) state is the \(|\mathbf{1}\rangle\) state. SQA, in contrast, lacks unitary dynamics; at best it can provide an adiabatic connection between the thermal state at \(s=1\) and that at \(s=s^{\ast}\), which cannot reproduce the experimental tunneling peak signature. Indeed, we observe that SQA populates other states with the probe qubit in the \(|1\rangle\) state besides the \(|\mathbf{1}\rangle\) state. (Note that if we instead define the tunneling rate in terms of the populations of states with the probe qubit down, our SQA tunneling curves do not change.)
<figure><img src="content_image/1506.03539/x7.png"><figcaption>Figure 6: (Color online) The population of the 8 computational states for SQAwith Nτ=128 Trotter slices at 1000 sweeps for 106 repetitions, compared totheir population in the quantum Gibbs state. Both are evaluated at s∗=0.339and hP=1.03, which is very close to the resonance condition for the states|E1⟩⊗|0⟩ and |ψ0⟩⊗|1⟩.</figcaption></figure>
### ME: 8+1 qubit results
We extend our analysis to the \(8+1\) qubit problem depicted in Fig. 3, which was also studied in Ref. Lanting _et al._ (2014). We follow the same method as in the \(2+1\) qubit case, the only difference in our numerical simulations being that we truncate the energy spectrum to the lowest \(16\) energy eigenstates in order to reduce the computational effort. The results are shown in Fig. 7. As in the \(2+1\) qubit case, neither SQA nor SSSV agrees with the ME. The experimental results exhibit two broad tunneling peaks whose centers match the ME results after a constant shift (as in the \(2+1\) qubit case). While the amount of broadening is the same in the \(8+1\) and \(2+1\) qubit experimental results, the limited range of \(h_{\mathrm{P}}\) values used (extending only up to \(15\) GHz as opposed to \(45\) GHz in Fig. 4) makes the agreement appear less impressive than in the \(2+1\) qubit case.
<figure><img src="content_image/1506.03539/x8.png"><figcaption>Figure 7: (Color online) Tunneling rate calculated using the ME, SSSV, and SQAfor s∗=0.284 as well as the DW experimental results (shifted to align with thefirst ME peak) on the 8+1 qubit problem. For the experimental results, 1σerror bars are shown. We show the results with T=12.5 mK for all simulationmethods. Otherwise, the parameters for all simulation methods are as for the2+1 qubit case. The tunneling rate for the ME is measured in μs−1, while forSSSV and SQA it is in inverse sweeps.</figcaption></figure>
<figure><img src="content_image/1506.03539/x9.png"><figcaption></figcaption></figure>
The energy spectrum and the populations as derived from the ME results are shown in Fig. 8. The agreement between the ME and the results from diagonalization [in panel (a)] and the Gibbs state [in panel (b)] is again excellent.
## VI Conclusions
In this work we set out to check whether the experimental evidence underlying the conclusion that entanglement is present during the evolution of the \(2\)- and \(8\)-qubit experiments on the D-Wave device can be explained using a classical rotor model (SSSV), SQA, or the quantum adiabatic master equation (ME). This question is pertinent since the evidence for entanglement is indirect and assumes a quantum Hamiltonian. We found that neither the SSSV model nor SQA can explain the evidence. The failure of both models can ultimately be attributed to the fact that neither accurately captures the adiabatic connection between energy eigenstates during the annealing evolution. In contrast, the ME reproduces the experimental tunneling spectroscopy signature, suggesting that the underlying transverse-field Ising model Hamiltonian, with its quantized energy spectrum, is an appropriate description, along with the weak-coupling approximation used to derive the master equation, at least for the system sizes of up to \(8\) qubits we have considered.
Our results support the presence of entanglement in the evolution of the D-Wave device, as concluded in Ref. Lanting _et al._ (2014). Furthermore, the failure of SQA and the SSSV model to capture this result indicates the importance of real-time open system quantum dynamics in supporting this conclusion. We emphasize that it is important to have both the right thermalization as well as the right dynamics to reproduce the experimental results; for example, we have checked that a Langevin-O(3) model as used in Ref. Albash _et al._ (8) fails to reproduce the experimental results as well (see Appendix C).
Nevertheless, it is important to note that while it becomes computationally prohibitive to use the ME to study systems larger than about \(15\) qubits, SSSV and SQA do not have this drawback and seem to capture certain experimental results (in particular the ground state population) at scales of \(>100\) qubits Boixo _et al._ (10); Shin _et al._ (2014). This tension between the small and large system behaviors remains an open question: does the open quantum system description of the device remain valid at larger sizes, and is the agreement of SQA and the SSSV model with ground state populations due to the fact that they and the open quantum system description yield similar final-time statistics? If so, this could be an artifact of the problem instances studied so far Katzgraber _et al._ (2014), or the dominance of thermal noise in the annealing evolution. Or does the weak-coupling approximation actually break down, so that semi-classical descriptions such as the SSSV model become valid for the dynamics as well? Further progress in open quantum system modeling and more appropriate benchmarking tests are required in order to address these questions.
###### Acknowledgements.
We thank Trevor Lanting for explaining various details of the experimental procedure, for providing the D-Wave experimental data, and for providing comments on an early version of the manuscript. The computing resources were provided by the USC Center for High Performance Computing and Communications. This research also used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This research was supported under ARO grant number W911NF-12-1-0523, under ARO MURI Grant No. W911NF-11-1-0268, and under NSF grant number DMS-1529079.
## References
* Johnson _et al._ (2010)M. W. Johnson, P. Bunyk, F. Maibaum, E. Tolkacheva, A. J. Berkley, E. M. Chapple, R. Harris, J. Johansson, T. Lanting, I. Perminov, E. Ladizinsky, T. Oh, and G. Rose, Superconductor Science and Technology **23**, 065004 (2010).
* Berkley _et al._ (2010)A. J. Berkley, M. W. Johnson, P. Bunyk, R. Harris, J. Johansson, T. Lanting, E. Ladizinsky, E. Tolkacheva, M. H. S. Amin, and G. Rose, Superconductor Science and Technology **23**, 105014 (2010).
* Johnson _et al._ (2011)M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, E. M. Chapple, C. Enderud, J. P. Hilton, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C. J. S. Truncik, S. Uchaikin, J. Wang, B. Wilson, and G. Rose, Nature **473**, 194 (2011).
* Brooke _et al._ (1999)J. Brooke, D. Bitko, T. F. Rosenbaum, and G. Aeppli, Science **284**, 779 (1999).
* S. Suzuki and A. Das (guest eds.) (2015)S. Suzuki and A. Das (guest eds.), Eur. Phys. J. Spec. Top. **224**, 1 (2015).
* Reichardt _et al._ (2013)B. W. Reichardt, F. Unger, and U. Vazirani, Nature **496**, 456 (2013).
* Boixo _et al._ (2013)S. Boixo, T. Albash, F. M. Spedalieri, N. Chancellor, and D. A. Lidar, Nat. Commun. **4**, 2067 (2013).
* Albash _et al._ (2015a)T. Albash, W. Vinci, A. Mishra, P. A. Warburton, and D. A. Lidar, Physical Review A **91**, 042314 (2015a).
* Boixo _et al._ (2014a)S. Boixo, V. N. Smelyanskiy, A. Shabani, S. V. Isakov, M. Dykman, V. S. Denchev, M. Amin, A. Smirnov, M. Mohseni, and H. Neven, arXiv:1411.4036 (2014a).
* Boixo _et al._ (2014b)S. Boixo, T. F. Ronnow, S. V. Isakov, Z. Wang, D. Wecker, D. A. Lidar, J. M. Martinis, and M. Troyer, Nat. Phys. **10**, 218 (2014b).
* Shin _et al._ (2014)S. W. Shin, G. Smith, J. A. Smolin, and U. Vazirani, arXiv:1401.7087 (2014).
* Martoňák _et al._ (2002)R. Martoňák, G. E. Santoro, and E. Tosatti, Phys. Rev. B **66**, 094203 (2002).
* Santoro _et al._ (2002)G. E. Santoro, R. Martoňák, E. Tosatti, and R. Car, Science **295**, 2427 (2002).
* Martin-Mayor and Hen (2015)V. Martin-Mayor and I. Hen, “Unraveling quantum annealers using classical hardness,” (2015), arXiv:1502.02494.
* Albash _et al._ (2015b)T. Albash, T. F. Rønnow, M. Troyer, and D. A. Lidar, Eur. Phys. J. Spec. Top. **224**, 111 (2015b).
* Albash _et al._ (2012)T. Albash, S. Boixo, D. A. Lidar, and P. Zanardi, New J. of Phys. **14**, 123016 (2012).
* Leggett _et al._ (1987)A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Reviews of Modern Physics **59**, 1 (1987).
* Lanting _et al._ (2014)T. Lanting, A. J. Przybysz, A. Y. Smirnov, F. M. Spedalieri, M. H. Amin, A. J. Berkley, R. Harris, F. Altomare, S. Boixo, P. Bunyk, N. Dickson, C. Enderud, J. P. Hilton, E. Hoskinson, M. W. Johnson, E. Ladizinsky, N. Ladizinsky, R. Neufeld, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, S. Uchaikin, A. B. Wilson, and G. Rose, Phys. Rev. X **4**, 021041 (2014).
* Berkley _et al._ (2013)A. J. Berkley, A. J. Przybysz, T. Lanting, R. Harris, N. Dickson, F. Altomare, M. H. Amin, P. Bunyk, C. Enderud, E. Hoskinson, M. W. Johnson, E. Ladizinsky, R. Neufeld, C. Rich, A. Y. Smirnov, E. Tolkacheva, S. Uchaikin, and A. B. Wilson, Phys. Rev. B **87**, 020502 (2013).
* Vidal and Werner (2002)G. Vidal and R. F. Werner, Phys. Rev. A **65**, 032314 (2002).
* Love _et al._ (2007)P. Love, A. van den Brink, A. Smirnov, M. Amin, M. Grajcar, E. Il’ichev, A. Izmalkov, and A. Zagoskin, Quantum Information Processing **6**, 187 (2007).
* Terhal (2000)B. M. Terhal, Physics Letters A **271**, 319 (2000).
* Gühne and Tóth (2009)O. Gühne and G. Tóth, Physics Reports **474**, 1 (2009).
* Spedalieri (2012)F. M. Spedalieri, Physical Review A **86**, 062311 (2012).
* Gerhardt _et al._ (2011)I. Gerhardt, Q. Liu, A. Lamas-Linares, J. Skaar, V. Scarani, V. Makarov, and C. Kurtsiefer, Physical Review Letters **107**, 170404 (2011).
* Harris _et al._ (2008)R. Harris, M. W. Johnson, S. Han, A. J. Berkley, J. Johansson, P. Bunyk, E. Ladizinsky, S. Govorkov, M. C. Thom, S. Uchaikin, B. Bumble, A. Fung, A. Kaul, A. Kleinsasser, M. H. S. Amin, and D. V. Averin, Phys. Rev. Lett. **101**, 117003 (2008).
* Lanting (2015)T. Lanting, D-Wave Inc., private communications (2015).
* Crowley and Green (2015)P. J. D. Crowley and A. G. Green, arXiv:1503.00651 (2015).
* Amin (2015)M. Amin, D-Wave Inc., private communications (2015).
* Amin _et al._ (2013)M. Amin, N. Dickson, and P. Smith, Quantum Inf. Proc. **12**, 1819 (2013).
* Katzgraber _et al._ (2014)H. G. Katzgraber, F. Hamze, and R. S. Andrist, Phys. Rev. X **4**, 021008 (2014).
* King and McGeoch (2014)A. D. King and C. C. McGeoch, arXiv:1410.2628 (2014).
* O’Malley _et al._ (2015)P. J. J. O’Malley, J. Kelly, R. Barends, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A. G. Fowler, I.-C. Hoi, E. Jeffrey, A. Megrant, J. Mutus, C. Neill, C. Quintana, P. Roushan, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. N. Korotkov, A. N. Cleland, and J. M. Martinis, Phys. Rev. Applied **3**, 044009 (2015).
* Haag _et al._ (1967)R. Haag, N. M. Hugenholtz, and M. Winnink, Comm. Math. Phys. **5**, 215 (1967).
* Jayannavar (1991)A. Jayannavar, Zeitschrift für Physik B Condensed Matter **82**, 153 (1991).
* Kubo and Hashitsume (1970)R. Kubo and N. Hashitsume, Progress of Theoretical Physics Supplement **46**, 210 (1970).
* Landau and Lifshitz (1935)L. D. Landau and E. M. Lifshitz, Phys. Z. Sowjetunion **8**, 153 (1935).
## Appendix A Robustness of the entanglement
We consider adding Gaussian noise to all the fields and couplings in the \(2\)-qubit system Hamiltonian [Eq. (1)], i.e., we include an additional Hamiltonian of the form:
\[H_{\mathrm{Noise}}(s) = A(s)\left(\alpha_{1}\sigma_{1}^{x}+\alpha_{2}\sigma_{2}^{x}\right)\] (19)
\[+B(s)\left(\alpha_{3}\sigma_{1}^{z}+\alpha_{4}\sigma_{2}^{z}+ \alpha_{5}\sigma_{1}^{z}\sigma_{2}^{z}\right),\]
where \(\alpha_{i}\sim\mathcal{N}(0,0.1)\). This choice is well above the typical \(\sigma=0.05\) ICE associated with the D-Wave Two processors. We show in Fig. 9 that for the \(10^{5}\) noise samples generated, the Gibbs state never has zero negativity [as defined in Eq. (8)]. Therefore, such a noise model is unlikely to invalidate the presence-of-entanglement conclusion of Ref. Lanting _et al._ (2014).
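The robustness check can be reproduced schematically as follows: for each noise sample of Eq. (19) we diagonalize the noisy two-qubit Hamiltonian, form its Gibbs state and evaluate the negativity of Eq. (8). The schedule values \(A\), \(B\) and the inverse temperature are placeholders, and far fewer samples are drawn here than the \(10^{5}\) used above.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]]); sz = np.array([[1., 0.], [0., -1.]]); I2 = np.eye(2)
X1, X2 = np.kron(sx, I2), np.kron(I2, sx)
Z1, Z2 = np.kron(sz, I2), np.kron(I2, sz)

def h_noisy(A, B, alpha):
    """Two-qubit Hamiltonian of Eq. (1) with J_S = -2.5 plus the noise term of Eq. (19)."""
    H = -A * (X1 + X2) - 2.5 * B * Z1 @ Z2
    a1, a2, a3, a4, a5 = alpha
    return H + A * (a1 * X1 + a2 * X2) + B * (a3 * Z1 + a4 * Z2 + a5 * Z1 @ Z2)

def negativity_2q(rho):
    """Eq. (8) for the bipartition into the two individual qubits."""
    pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    return 0.5 * (np.abs(np.linalg.eigvalsh(pt)).sum() - 1.0)

rng = np.random.default_rng(4)
A, B, beta = 0.8, 2.0, 3.84                       # placeholders; 12.5 mK in GHz units
negs = []
for _ in range(1000):
    w, V = np.linalg.eigh(h_noisy(A, B, rng.normal(0.0, 0.1, size=5)))
    p = np.exp(-beta * (w - w.min())); p /= p.sum()
    negs.append(negativity_2q((V * p) @ V.T))     # Gibbs state sum_n p_n |n><n|
print("minimum negativity over the samples:", min(negs))
```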
<figure><img src="content_image/1506.03539/x11.png"><figcaption>Figure 9: (Color online) Histogram of the negativity of the Gibbs stateassociated with the 2-qubit Hamiltonian of Eq. (1) when a noise term of theform given in Eq. (19) is included. The red vertical line at 0.211 is thenegativity for the noiseless case. A total of 105 noise instances are shown.</figcaption></figure>
## Appendix B Extended noise models
While the ME is able to reproduce the positions of the tunneling rate peaks (see Figs. 4 and 7), the experimental results show a broadening of the peaks that is absent in the ME results, indicating that an important noise source is missing from the ME simulations. In the Supplementary Information of Ref. Lanting _et al._ (2014), it is made clear that the line shape of the first peak (corresponding to the ground state) in the two-qubit and eight-qubit systems is almost identical to the probe qubit’s macroscopic resonant tunneling (MRT) profile. This suggests that the broadening observed is from noise on the probe qubit and not the multi-qubit system. In our simulations, we have attempted to model this by increasing the relative system-bath coupling strength between the probe and system qubits. We show in Fig. 10 that the broadening induced by this method is not sufficient to explain the observed broadening.
<figure><img src="content_image/1506.03539/x12.png"><figcaption>Figure 10: (Color online) Tunneling rate for the 2+1 qubit problem calculatedusing the ME for s∗=0.339 and variable gP/gS compared against the experimentalresults in Ref. Lanting _et al._ (2014).</figcaption></figure>
<figure><img src="content_image/1506.03539/x13.png"><figcaption>Figure 11: (Color online) Tunneling rate for the 2+1 qubit problem calculatedusing the ME for s∗=0.339 and gP=10gS with σ=0.05 and σ=0.1 ICE on the probequbit only and without ICE compared against the experimental results in Ref.Lanting _et al._ (2014).</figcaption></figure>
<figure><img src="content_image/1506.03539/x14.png"><figcaption>Figure 12: (Color online) Tunneling rate for the 2+1 qubit problem calculatedusing the ME for s∗=0.339 and gP=10gS with σ=0.05 and σ=0.1 ICE on all qubitsand without ICE compared against the experimental results in Ref. Lanting _etal._ (2014).</figcaption></figure>
<figure><img src="content_image/1506.03539/x15.png"><figcaption>Figure 13: (Color online) Tunneling rate for the 2+1-qubit problem calculatedusing the ME for s∗=0.339 and gP=10gS averaged over α [see Eq. (21)] comparedagainst the experimental results in Ref. Lanting _et al._ (2014).</figcaption></figure>
We therefore attempt to reproduce this broadening using several other possible noise models. Our noise-model choices are phenomenological and are designed to test whether such modifications are sufficient to broaden the tunneling peaks. First, we consider introducing Gaussian noise on the Ising local fields and couplings:
<figure><img src="content_image/1506.03539/x16.png"><figcaption>Figure 14: (Color online) The modifications due to the addition of telegraphnoise [Eq. (22)] to the purely Ohmic spectral density function γ(ω).</figcaption></figure>
\[J_{ij}\to J_{ij}+\mathcal{N}(0,\sigma)\ ,\quad h_{i}\to h_{i}+\mathcal{N}(0,\sigma)\] (20)
This is a relevant source of noise for the D-Wave processors, often referred to as internal control error (ICE) King and McGeoch (2014). We run the ME with \(1000\) noise realizations for each applied (ideal) bias \(h_{\mathrm{P}}\), average the observed population in the \(|\mathbf{1}\rangle\) state over these realizations, and then extract the tunneling rate associated with that bias. We first consider the case where the noise acts purely on the local field and coupling of the probe qubit. We show in Fig. 11 results with \(\sigma=0.05\) and \(\sigma=0.1\), i.e., at and above the typical \(\sigma=0.05\) ICE associated with the D-Wave Two “Vesuvius” processors. While the introduction of ICE in the simulations reduces the height of the tunneling-rate peaks and broadens them slightly, it is insufficient to account for the broadening observed in the experiment. Similar results are obtained if ICE is introduced on all qubits (see Fig. 12), suggesting that this noise model is not sufficient to explain the observed broadening.
<figure><img src="content_image/1506.03539/x17.png"><figcaption>Figure 15: (Color online) Tunneling rate for the 3 qubit problem calculatedusing the ME for s∗=0.339 and gP=10gS using purely Ohmic and Ohmic plustelegraph spectral noise compared against the experimental results in Ref.Lanting _et al._ (2014).</figcaption></figure>
A more restrictive (and less physically reasonable) noise model is:
\[h_{i}\to h_{i}+\alpha\ ,\quad i=1,2\ ,\] (21)
where we average over three \(\alpha\in\{-0.1,0,0.1\}\) values. The result is shown in Fig. 13. While this does not sufficiently broaden the peaks, it does cause the appearance of peak splitting, which is a feature of the experimental results.
Finally, we modify the spectral density \(\gamma(\omega)\) in Eq. (15) to include a low frequency component. We consider a form of telegraph noise O’Malley _et al._ (2015), which we model as
\[\gamma_{\mathrm{tel}}(\omega)=\frac{\omega}{1-e^{-\beta\omega}}\frac{1}{\omega ^{2}+\omega_{\mathrm{IR}}^{2}}\ ,\] (22)
in order to satisfy the KMS condition Haag _et al._ (1967). We set \(\omega_{\mathrm{IR}}^{2}=0.01,0.05\) GHz\({}^{2}\) (this choice is motivated by numerical stability; making \(\omega_{\mathrm{IR}}\) smaller results in a significant slowdown of our simulations), and assume that this contribution to \(\gamma(\omega)\) has the same coupling strength \(g\) as the Ohmic component. We show in Fig. 14 how this modifies the purely Ohmic case. As we show in Fig. 15, these modifications are counterproductive and act to narrow the peak. We also included \(1/f\) noise [with a spectral density of the form \(\gamma_{1/f}(\omega)=\frac{2\pi e^{-|\omega|/\omega_{c}}}{1-e^{-\beta\omega}} \frac{|\omega|}{\omega^{2}+\omega_{\mathrm{IR}}^{2}}\)], but this did not reproduce the experimental broadening either (not shown).
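For completeness, the modified spectral densities used here can be coded directly, together with a numerical check of the KMS condition \(\gamma(-\omega)=e^{-\beta\omega}\gamma(\omega)\); the parameter values are the ones quoted above and are otherwise illustrative.

```python
import numpy as np

def gamma_telegraph(w, beta, w_ir):
    """Eq. (22): telegraph-noise spectral density."""
    w = np.asarray(w, dtype=float)
    return w / (1.0 - np.exp(-beta * w)) / (w**2 + w_ir**2)

def gamma_one_over_f(w, beta, w_ir, w_c):
    """1/f-type spectral density quoted in the text."""
    w = np.asarray(w, dtype=float)
    return (2.0 * np.pi * np.exp(-np.abs(w) / w_c) / (1.0 - np.exp(-beta * w))
            * np.abs(w) / (w**2 + w_ir**2))

beta, w_ir, w = 3.84, np.sqrt(0.01), 2.0           # omega_IR^2 = 0.01 GHz^2
print(gamma_telegraph(-w, beta, w_ir),             # KMS check: the two numbers should agree
      np.exp(-beta * w) * gamma_telegraph(w, beta, w_ir))
```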
<figure><img src="content_image/1506.03539/x18.png"><figcaption></figcaption></figure>
Since we were unable to reproduce the observed broadening we must conclude that our noise model is lacking a relevant feature, likely related to the breakdown of the weak coupling limit. Alternative methods (such as the non-interacting blip approximation Boixo _et al._ (9), though it is restricted to two-level systems) may thus need to be employed or developed to capture the broadening aspect of the experiment.
## Appendix C Spin-Langevin Model
Given the importance of the unitary dynamics component in reproducing the experimental results, we consider an alternative classical model with dynamics, specifically a (Markovian) spin-Langevin equation Jayannavar (1991); Kubo and Hashitsume (1970) with a Landau-Lifshitz friction term Landau and Lifshitz (1935); Kubo and Hashitsume (1970),
\[\frac{d}{dt}\vec{M}_{i}=-\left(\vec{H}_{i}+\vec{\xi}(t)+\chi\vec{H}_{i}\times \vec{M}_{i}\right)\times\vec{M}_{i}\ ,\] (23)
with the Gaussian noise \(\vec{\xi}=\{\xi_{i}\}\) satisfying
\[\langle\xi_{i}(t)\rangle=0\ ,\quad\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle= 2k_{B}T\chi\delta_{ij}\delta(t-t^{\prime})\ ,\] (24)
and
\[\vec{H}_{i}=2A_{i}(t)\hat{x}-2B(t)\left(h_{i}+\sum_{j\neq i}J_{ij}\vec{M}_{j} \cdot\hat{z}\right)\hat{z}\ ,\] (25)
where \(\hat{x}\) and \(\hat{z}\) are unit vectors. Like the SSSV model, this model is another classical limit that can be derived from the Keldysh path integral formalism Crowley and Green (2015). We follow the same procedure as described in Sec. IV of the main text. We find that the magnetization along the \(z\)-direction of the probe qubit does not depend on the hold time \(\tau\), as shown in Fig. 16. Therefore, we do not observe the experimental signature of exponential decay of the all-down state. Furthermore, the final outcome of the \(z\)-magnetization in fact depends smoothly on \(h_{\mathrm{P}}\), as shown in Fig. 16. Thus, although this model does include dynamics, it fails to reproduce the experimental signature.
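For reference, a minimal explicit-Euler integration of Eqs. (23)–(25) is sketched below. Units are schematic (energies in GHz, \(k_{B}=1\)), a single transverse-field value is used for all spins, independent Gaussian noise is drawn for every spin and vector component (an assumption of this sketch), and the spins are renormalized after each step since the discretized update does not exactly conserve \(|\vec{M}_{i}|\); all parameter values are placeholders.

```python
import numpy as np

def spin_langevin_step(M, A, B, h, J, chi, T, dt, rng):
    """One Euler step of dM_i/dt = -(H_i + xi + chi * H_i x M_i) x M_i, Eqs. (23)-(25)."""
    n = M.shape[0]
    zloc = np.array(h, dtype=float)
    for (a, b), Jab in J.items():
        zloc[a] += Jab * M[b, 2]
        zloc[b] += Jab * M[a, 2]
    H = np.zeros_like(M)
    H[:, 0] = 2.0 * A                              # 2 A_i(t) x_hat
    H[:, 2] = -2.0 * B * zloc                      # -2 B(t) (h_i + sum_j J_ij M_j.z) z_hat
    xi = rng.normal(0.0, np.sqrt(2.0 * T * chi / dt), size=(n, 3))
    torque = H + xi + chi * np.cross(H, M)
    M_new = M + dt * np.cross(M, torque)           # -(X) x M = M x X
    return M_new / np.linalg.norm(M_new, axis=1, keepdims=True)

rng = np.random.default_rng(5)
M = np.tile([0.0, 0.0, -1.0], (3, 1))              # all spins down: |1...1>
h, J = [0.0, 0.0, 0.5], {(0, 1): -2.5, (0, 2): -1.8}
for _ in range(5000):
    M = spin_langevin_step(M, A=0.8, B=2.0, h=h, J=J, chi=0.1, T=0.26, dt=1e-3, rng=rng)
print("z-magnetization of the third (probe) spin:", M[2, 2])
```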
|
1701.02995 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 23068,
"num_imgs": 7,
"llama3_tokens_count": 5577
} | [
"content_image/1701.02995/x1.png",
"content_image/1701.02995/x2.png",
"content_image/1701.02995/x6.png",
"content_image/1701.02995/x7.png",
"content_image/1701.02995/x8.png",
"content_image/1701.02995/x9.png",
"content_image/1701.02995/x10.png"
] | # Radio detection of extensive air showers
Tim Huege
Institut für Kernphysik, Karlsruher Institut für Technologie - Campus Nord, Postfach 3640, 76021 Karlsruhe, Germany
###### Abstract
Radio detection of extensive air showers initiated in the Earth’s atmosphere has made tremendous progress in the last decade. Today, radio detection is routinely used in several cosmic-ray observatories. The physics of the radio emission in air showers is well-understood, and analysis techniques have been developed to determine the arrival direction, the energy and an estimate for the mass of the primary particle from the radio measurements. The achieved resolutions are competitive with those of more traditional techniques. In this article, I shortly review the most important achievements and discuss the potential for future applications.
keywords: high-energy cosmic rays, radio emission, extensive air showers
## 1 Introduction
While the understanding of cosmic rays has progressed very significantly in the last decades, many questions about their origin and the physics of their acceleration are still unanswered (Blümer et al., 2009). New detection techniques help to maximize the information gathered about each detected cosmic-ray particle. This is especially important at the highest energies, where fluxes are extremely low and measurements through air showers yield only rather indirect information on the primary particles. In recent years, radio detection of air showers in the _very high frequency_ (VHF) band, typically around 30–80 MHz, has been researched with great effort. Today, radio detection has matured from a prototype stage to a well-established technique that benefits any cosmic-ray detector in which it is employed. The energy range accessible with the approaches developed so far is illustrated in Fig. 1.
<figure><img src="content_image/1701.02995/x1.png"><figcaption>Figure 1: Cosmic-ray energy spectrum overlaid with the reach of the VHF radiodetection technique. At low energies the radio signals are hidden in Galacticnoise, at very high energies concepts have yet to be devised to coverextremely large detection areas. Diagram updated and adapted from (Engel etal., 2011), reprinted from (Huege, 2016).</figcaption></figure>
In the following, I give a concise overview of the most important achievements made with the technique to date. For a detailed discussion of the state of the field, I kindly refer the reader to a previously-published extensive review (Huege, 2016).
## 2 Emission physics
The most important breakthrough of the past few years has been the detailed understanding of the radio emission processes in extensive air showers. Three effects are important:
* Electrons and positrons in the extensive air shower are accelerated by the Lorentz force in the geomagnetic field. At the same time they are decelerated by interactions with air molecules. An equilibrium arises, and the net drift of the particles in directions perpendicular to the air-shower axis leads to transverse currents. As the shower first grows in particle number, then reaches a maximum and then dies out, these transverse currents undergo a time-variation. The time-variation of the currents leads to radio emission. This is the dominant effect responsible for approximately 90% of the electric field amplitude, usually referred to as “geomagnetic emission” (Kahn and Lerche, 1966; Scholten et al., 2008).
* During the air-shower evolution, a negative charge excess builds up in the shower front. This arises mostly because ionization electrons from the ambient medium are swept along with the shower, while positive ions stay behind. Again, as the shower evolves, the net charge grows, reaches a maximum and then declines. The time-variation of the net charge excess leads to radio emission which contributes approximately 10% of the electric field amplitude. This is the so-called “Askaryan effect” which is the dominant mechanism for radio emission from particle showers in dense media (Askaryan, 1962, 1965).
* At VHF frequencies the radio emission is generally coherent. This means that electric field amplitudes from individual particles add up constructively. The total electric field amplitude thus scales linearly with the number of particles in the air shower, which in turn scales approximately linearly with the energy of the primary cosmic ray. Consequently, the radiated power (and energy) scales quadratically with the cosmic-ray energy. Coherence is governed by different scales in the air shower: the thickness of the shower pancake, the lateral width of the shower, and the time-delays arising from the geometry of the shower disk propagating with the speed of light as seen from a specific observer location. The latter is strongly influenced by the refractive index of the air, which is approximately 1.0003 at sea level and scales with the local density of the atmosphere. This leads to “Cherenkov rings” in the radio-emission footprints; observers on these rings see time-compressed radio signals for which coherence reaches up to GHz frequencies (Alvarez-Muñiz et al., 2012; de Vries et al., 2011).
<figure><img src="content_image/1701.02995/x2.png"><figcaption>Figure 2: Left: The geomagnetic radiation mechanism. The arrows indicate thedirection of linear polarization in the plane perpendicular to the air showeraxis. The emission is linearly polarized along the direction given by theLorentz force, →v×→B (east-west for vertical air showers). Right: The chargeexcess (Askaryan) emission. The arrows indicate the polarization which islinear with electric field vectors oriented radially with respect to theshower axis. Diagrams have been adapted from (Schoorlemmer, 2012) and (deVries et al., 2012) and reprinted from (Huege, 2016).</figcaption></figure>
The geomagnetic and Askaryan mechanisms have different polarization characteristics, their superposition leads to constructive interference or destructive interference depending on the location of the observer with respect to the shower axis. The resulting radio-emission “footprint” is thus asymmetric. An illustration of the mechanisms and their polarization characteristics is shown in Fig. 2.
<figure><img src="content_image/1701.02995/x6.png"><figcaption>Figure 3: Comparison of the amplitude at a lateral distance of 100 m asmeasured with LOPES and simulated with CoREAS for air showers induced byprotons. The few outlier events in the lower-right parts of the diagrams arenot understood, but constitute less than 2% of the data. Adapted from (Link etal., 2015), reprinted from (Huege, 2016).</figcaption></figure>
The interpretation discussed so far is based on macroscopic models which employ concepts such as currents and net charges in the air shower. Correct incorporation of the enormous complexity of the air shower and the resulting coherence effects in macroscopic models is, however, very difficult, and the required simplifications degrade the quality of the modelled radio signals. This is why microscopic simulations in which the radio emission from the particle shower is calculated by adding up the emission from each individual electron and positron (Alvarez-Muñiz et al., 2012; Huege et al., 2013) are most widely used in the community. These are based on first principle calculations, applying discretized formalisms of classical electrodynamics (Zas et al., 1992; James et al., 2011) to the individual moving particles in the air shower. Consequently, they predict the radio signal on an absolute scale without any free parameters in the simulation. All measurements to date have been described by such simulations within errors. An example for a comparison of amplitudes measured with LOPES (Falcke et al., 2005) and simulated with CoREAS (Huege et al., 2013) at a distance of 100 m from the shower axis is given in Fig. 3. Similar results have been reported by other experiments.
## 3 Reconstruction of cosmic-ray parameters
Any detection technique for extensive air showers is only a means to the end of determining the parameters of the primary cosmic-ray particle: the arrival direction, the energy, and a measure for the mass. And in fact, all of these parameters have been demonstrated to be reconstructable from radio measurements with resolutions competitive with those of more traditional detection techniques.
The arrival direction of the air shower can be determined from the arrival-time distribution of the radio pulses in individual antennas. It is important to note that the wavefront of the radio signal is not a plane wave but has a complex structure: it is hyperbolic (Apel et al., 2014; Corstanje et al., 2015), i.e., spherical close to the shower axis and conical further away from it. The arrival direction has been demonstrated to be reconstructable to within 0.5\({}^{\circ}\), and possibly even to 0.1\({}^{\circ}\), with radio techniques.
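As a simple illustration of timing-based direction reconstruction, the sketch below performs a least-squares plane-wave fit for a flat antenna array; it deliberately ignores the hyperbolic wavefront shape discussed above, and the antenna positions and arrival times are synthetic.

```python
import numpy as np

def plane_wave_direction(xy, t, c=0.299792458):
    """Fit c*t_i = c*t0 + v_x*x_i + v_y*y_i for the horizontal components of the
    propagation direction; v_z follows from |v| = 1 for a downward-going shower.
    Positions in m, times in ns, c in m/ns."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(t))])
    vx, vy, _ = np.linalg.lstsq(A, c * np.asarray(t), rcond=None)[0]
    vz = -np.sqrt(max(0.0, 1.0 - vx**2 - vy**2))
    zenith = np.degrees(np.arccos(-vz))
    azimuth = np.degrees(np.arctan2(-vy, -vx))     # direction the shower arrives from
    return np.array([vx, vy, vz]), zenith, azimuth

# Synthetic event: 30 degree zenith angle, 1 ns timing jitter
rng = np.random.default_rng(6)
xy = rng.uniform(-200.0, 200.0, size=(10, 2))      # ten antennas (m)
v_true = np.array([np.sin(np.radians(30.0)), 0.0]) # horizontal part of the propagation direction
c = 0.299792458
t = xy @ v_true / c + 5.0 + rng.normal(0.0, 1.0, size=10)
_, zenith, azimuth = plane_wave_direction(xy, t)
print("reconstructed zenith %.2f deg, azimuth %.1f deg" % (zenith, azimuth))
```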
<figure><img src="content_image/1701.02995/x7.png"><figcaption>Figure 4: Cosmic-ray energy determined with the Tunka-Rex radio antennas incomparison with the energy reconstructed with the Tunka-133 optical Cherenkovdetectors. Adapted from (Bezyazeekov et al., 2016), reprinted from (Huege,2016).</figcaption></figure>
<figure><img src="content_image/1701.02995/x8.png"><figcaption>Figure 5: Correlation between the radiation energy (normalized for incidenceperpendicular to the geomagnetic field) and the cosmic-ray energy measuredwith the Auger surface detector. Open circles represent air showers with radiosignals measured in three or four AERA detectors, filled circles correspond toshowers with five or more measured radio signals. Adapted from (Aab et al.,2016), reprinted from (Huege, 2016).</figcaption></figure>
Due to the coherent nature of the radio emission, the amplitude of the radio signal scales approximately linearly with the energy of the primary particle. The depth of the shower maximum (related to the mass of the primary particle) influences the steepness of the lateral distribution of the radio signal, so a measurement at a given position will exhibit intrinsic fluctuations of the measured amplitude. Two concepts have been devised to minimize these intrinsic fluctuations for a precise measurement of the energy of the cosmic ray. First, a characteristic lateral distance exists (Huege et al., 2008) at which these fluctuations are minimized. Experiments such as LOPES and Tunka-Rex (Bezyazeekov et al., 2015) have thus used amplitude measurements at this characteristic lateral distance as an energy estimator, and have reached resolutions as good as 15% using this approach, cf. Fig. 4. Second, instead of measuring the amplitude at a specific lateral distance, an area integral can be performed over the complete radio-emission footprint. This approach has been pioneered by the _Auger Engineering Radio Array_ (AERA) (Schulz and for the Pierre Auger Collaboration, 2015). The energy fluence (in units of eV/m\({}^{2}\)) as measured at individual antenna locations is fitted with a model of the two-dimensional emission footprint to then integrate over area and determine the total “radiation energy” (in units of eV) deposited in the form of radio signals on the ground. This radiation energy scales quadratically with the energy of the primary cosmic ray, the achieved energy resolution amounts to 17%, cf. Fig. 5. The radiation energy has the benefit of being a well-defined physical quantity that is independent of the observation altitude and zenith angle of the air shower (Aab et al., 2016). It is thus conceptually very attractive as a means to cross-calibrate the absolute energy scales of experiments against each other or against first-principle calculations. An interesting aside is that only a minute fraction of the energy of the primary particle is radiated in the form of radio signals in the VHF band: For a 10\({}^{18}\) eV primary particle, the radiation energy amounts to approximately 15.8 MeV (Aab et al., 2016). Nevertheless, photon statistics need not be considered as the energy of a 55 MHz photon is of order \(10^{-7}\) eV.
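The quadratic scaling of the radiation energy with the cosmic-ray energy can be illustrated by a simple power-law fit in log-log space, as sketched below; the data are synthetic, anchored only to the roughly 15.8 MeV radiation energy quoted above for a \(10^{18}\) eV primary, and the assumed scatter is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
E_cr = 10 ** rng.uniform(17.5, 19.0, size=50)                             # cosmic-ray energies (eV)
E_rad = 1.58e7 * (E_cr / 1e18) ** 2 * rng.lognormal(0.0, 0.2, size=50)    # radiation energy (eV)

# Fit E_rad = A * (E_cr / 1 EeV)^b; b ~ 2 is expected for coherent emission
slope, intercept = np.polyfit(np.log10(E_cr / 1e18), np.log10(E_rad), 1)
print("b = %.2f, E_rad(1 EeV) = %.2e eV" % (slope, 10 ** intercept))
```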
<figure><img src="content_image/1701.02995/x9.png"><figcaption>Figure 6: Atmospheric depth between shower maximum and observer altitude asdetermined with the Tunka-Rex radio measurement and the Tunka-133 Cherenkovdetectors. Adapted from (Bezyazeekov et al., 2016), reprinted from (Huege,2016).</figcaption></figure>
The most difficult challenge is the reconstruction of the depth of shower maximum (in g/cm\({}^{2}\), usually called \(X_{\mathrm{max}}\)), which is the primary estimator for particle mass used in air-shower measurements. Fluorescence and Cherenkov light detectors are able to measure X\({}_{\mathrm{max}}\) with a resolution of 20–25 g/cm\({}^{2}\). Radio signals from air showers carry information on the distance of the source of the emission and thus on the depth of the shower maximum, encoded in the steepness of the lateral amplitude distribution as well as in the wavefront structure and the radio pulse shapes. So far, most analyses only exploit the lateral signal distribution. The Tunka-Rex experiment has demonstrated a clear correlation between X\({}_{\mathrm{max}}\) measured with the Tunka Cherenkov light detectors and the Tunka-Rex radio antennas, cf. Fig. 6. The estimated X\({}_{\mathrm{max}}\) resolution of the radio reconstruction is of order 40 g/cm\({}^{2}\) and thus not yet competitive with established techniques. However, there is still room for improvement, e.g., by exploiting additional signal information. Using the dense antenna array of LOFAR (Schellart et al., 2013) and a top-down approach where individual events are compared with dozens of simulations to determine the true X\({}_{\mathrm{max}}\) value from the best-fitting simulation (see Fig. 7), X\({}_{\mathrm{max}}\) resolutions better than 20 g/cm\({}^{2}\) are achievable with radio measurements as well. With such a good measurement precision, even small data sets can be used to begin to constrain the mass composition of cosmic rays (Buitink et al., 2016).
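A toy illustration of the top-down idea described above: each "simulation" with a known X\({}_{\mathrm{max}}\) is compared to the measured energy fluences via a chi-square, and the minimum of a parabola fitted through the (X\({}_{\mathrm{max}}\), chi-square) points gives the reconstructed value. The lateral-profile model, antenna spacing, and noise level below are entirely invented; real analyses of this kind use full CoREAS simulations of the individual event.

```python
import numpy as np

def chi2(f_meas, f_sim, sigma):
    """Agreement between measured and simulated energy fluences at the antenna positions."""
    return np.sum(((f_meas - f_sim) / sigma) ** 2)

# Toy stand-in for a simulation set: a lateral fluence profile whose steepness depends on
# Xmax (deeper shower maximum -> source closer to ground -> steeper profile).
r = np.linspace(50.0, 350.0, 10)                      # antenna axis distances in m
def toy_fluence(xmax, r):
    return 100.0 * np.exp(-r / (1200.0 - xmax))       # "eV/m^2", purely illustrative

xmax_true = 680.0                                     # g/cm^2, "truth" of the toy event
rng = np.random.default_rng(0)
f_meas = toy_fluence(xmax_true, r) * (1.0 + 0.05 * rng.standard_normal(r.size))
sigma = 0.05 * f_meas

# Compare against "simulations" over a range of Xmax values and fit a parabola to chi2(Xmax)
xmax_sims = np.arange(600.0, 781.0, 20.0)
chi2_vals = np.array([chi2(f_meas, toy_fluence(x, r), sigma) for x in xmax_sims])
a, b, c = np.polyfit(xmax_sims, chi2_vals, 2)
print(f"reconstructed Xmax ~ {-b / (2 * a):.0f} g/cm^2 (toy truth {xmax_true:.0f})")
```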
<figure><img src="content_image/1701.02995/x10.png"><figcaption>Figure 7: Left: Energy fluence measured at individual LOFAR antennas (coloredcircles) in comparison with the signal distribution predicted by the best-fitting of a set of CoREAS simulations (background-color) for a particular airshower event. Middle: One-dimensional projection of the two-dimensional signaldistribution. Right: Quality of the agreement between the energy fluencedistribution measured with LOFAR and the distribution predicted by differentCoREAS simulations of the air shower event. A clear correlation between thevalue of Xmax and the quality of the fit is obvious. All diagrams adapted from(Buitink et al., 2014), reprinted from (Huege, 2016).</figcaption></figure>
## 4 Future applications
There are numerous applications in which radio detectors can benefit existing or newly built cosmic-ray detectors. In particular:
* Any particle detector will profit from additional radio antennas which allow measurements of the pure electromagnetic cascade with a very good energy resolution and reasonable X\({}_{\mathrm{max}}\) resolution. This is attractive in particular if much of the infrastructure (cabling, power, communications) can be shared between the detectors — the radio antenna and readout electronics themselves are fairly inexpensive with prices below 1,000 USD per antenna certainly achievable.
* Radio detectors can be used for a precise calibration of the absolute energy scale of a cosmic-ray detector. This is because the radio signal is not influenced by atmospheric conditions (no scattering, no absorption) and gives direct access to the calorimetric energy in the electromagnetic cascade of the air shower (Glaser et al., 2016). Cross-calibration between different experiments, e.g., via the radiation energy, or even calibration of experiments using first-principle calculations are thus very attractive.
* Horizontal air showers illuminate areas of several km\({}^{2}\)(Kambeitz and for the Pierre Auger Collaboration, 2016) and can thus be detected with antenna arrays using grid spacings of a kilometer or sparser, meaning that measurements of inclined air showers can be used to measure the electromagnetic component of ultra-high-energy cosmic rays.
* Very dense antenna arrays might allow cosmic-ray measurements with unprecedented reconstruction quality for individual air-shower events. The upcoming Square Kilometre Array is expected to reach X\({}_{\mathrm{max}}\) resolutions below 10 g/cm\({}^{2}\)(Zilles et al., 2016) and could investigate the mass composition in the region of transition from Galactic to extragalactic cosmic rays with unprecedented mass resolution.
## 5 Conclusions
Over the past decade, radio detection of extensive air showers has matured from a prototype phase with small installations and an unclear picture of the radio emission mechanisms to full-fledged detector arrays and a detailed understanding of the radio emission physics. Radio detection has by now become a routine part of several cosmic-ray detection efforts and contributes valuable information for the analysis of cosmic-ray data. Particular potential lies in a precise calibration of the absolute energy scale of cosmic-ray detectors, in the large-scale detection of horizontal air showers and in precision measurements with very dense arrays.
## References
* Blümer et al. (2009) J. Blümer, R. Engel, J. R. Hörandel, Cosmic rays from the knee to the highest energies, Progress in Particle and Nuclear Physics 63 (2009) 293 – 338.
* Engel et al. (2011) R. Engel, D. Heck, T. Pierog, Extensive air showers and hadronic interactions at high energy, Ann. Rev. Nucl. Part. Sci. 61 (2011) 467–489.
* Huege (2016) T. Huege, Radio detection of cosmic ray air showers in the digital era, Physics Reports 620 (2016) 1 – 52.
* Kahn and Lerche (1966) F. D. Kahn, I. Lerche, Radiation from cosmic ray air showers, in: Proc. Roy. Soc., volume A-289, p. 206.
* Scholten et al. (2008) O. Scholten, K. Werner, F. Rusydi, A macroscopic description of coherent geo-magnetic radiation from cosmic-ray air showers, Astropart. Phys. 29 (2008) 94–103.
* Askaryan (1962) G. A. Askaryan, Excess negative charge of an electron-photon shower and its coherent radio emission, Soviet Phys. JETP 14 (1962) 441.
* Askaryan (1965) G. A. Askaryan, Coherent radio emission from cosmic showers in air and in dense media, Soviet Phys. JETP 21 (1965) 658.
* Alvarez-Muñiz et al. (2012) J. Alvarez-Muñiz, W. R. Carvalho Jr., E. Zas, Monte Carlo simulations of radio pulses in atmospheric showers using ZHAireS, Astropart. Phys. 35 (2012) 325 – 341.
* de Vries et al. (2011) K. D. de Vries, A. M. van den Berg, O. Scholten, K. Werner, Coherent Cherenkov Radiation from Cosmic-Ray-Induced Air Showers, Phys. Rev. Lett. 107 (2011) 61101.
* Schoorlemmer (2012) H. Schoorlemmer, Tuning in on cosmic rays, Ph.D. thesis, Radboud Universiteit Nijmegen, 2012.
* de Vries et al. (2012) K. D. de Vries, O. Scholten, K. Werner, Macroscopic geo-magnetic radiation model; polarization effects and finite volume calculations, Nucl. Instr. Meth. A 662, Supplement 1 (2012) S175 – S178.
* Link et al. (2015) K. Link, T. Huege, W. D. Apel, et al., Revised absolute amplitude calibration of the LOPES experiment, in: Proceedings of the 34th ICRC, The Hague, The Netherlands, PoS(ICRC2015)311.
* Huege et al. (2013) T. Huege, M. Ludwig, C. W. James, Simulating radio emission from air showers with CoREAS, AIP Conf. Proc. (2013) 128–132.
* Zas et al. (1992) E. Zas, F. Halzen, T. Stanev, Electromagnetic pulses from high-energy showers: Implications for neutrino detection, Phys. Rev. D 45 (1992) 362–376.
* James et al. (2011) C. W. James, H. Falcke, T. Huege, M. Ludwig, General description of electromagnetic radiation processes based on instantaneous charge acceleration in “endpoints”, Phys. Rev. E 84 (2011) 056602.
* Falcke et al. (2005) H. Falcke, W. D. Apel, A. F. Badea, et al., Detection and imaging of atmospheric radio flashes from cosmic ray air showers, Nature 435 (2005) 313–316.
* Apel et al. (2014) W. D. Apel, J. C. Arteaga-Velázquez, L. Bähren, et al., The wavefront of the radio signal emitted by cosmic ray air showers, JCAP (2014) 025.
* Corstanje et al. (2015) A. Corstanje, P. Schellart, A. Nelles, et al., The shape of the radio wavefront of extensive air showers as measured with LOFAR, Astropart. Phys. 61 (2015) 22 – 31.
* Bezyazeekov et al. (2016) P. A. Bezyazeekov, N. M. Budnev, O. A. Gress, et al., Radio measurements of the energy and the depth of the shower maximum of cosmic-ray air showers by Tunka-Rex, JCAP 01 (2016) 52.
* Aab et al. (2016) A. Aab, P. Abreu, M. Aglietta, et al., Energy estimation of cosmic rays with the Engineering Radio Array of the Pierre Auger Observatory, Phys. Rev. D 93 (2016) 122005.
* Huege et al. (2008) T. Huege, R. Ulrich, R. Engel, Dependence of geosynchrotron radio emission on the energy and depth of maximum of cosmic ray showers, Astropart. Phys. 30 (2008) 96–104.
* Bezyazeekov et al. (2015) P. A. Bezyazeekov, N. M. Budnev, O. A. Gress, et al., Measurement of cosmic-ray air showers with the Tunka Radio Extension (Tunka-Rex), Nucl. Instr. Meth. A 802 (2015) 89–96.
* Schulz and for the Pierre Auger Collaboration (2015) J. Schulz, for the Pierre Auger Collaboration, Status and Prospects of the Auger Engineering Radio Array, in: Proceedings of the 34th ICRC, The Hague, The Netherlands, PoS(ICRC2015)615.
* Aab et al. (2016) A. Aab, P. Abreu, M. Aglietta, et al., Measurement of the radiation energy in the radio signal of extensive air showers as a universal estimator of cosmic-ray energy, Phys. Rev. Lett. 116 (2016) 241101.
* Schellart et al. (2013) P. Schellart, A. Nelles, S. Buitink, et al., Detecting cosmic rays with the LOFAR radio telescope, Astronomy & Astrophysics 560 (2013) A98.
* Buitink et al. (2016) S. Buitink, A. Corstanje, H. Falcke, et al., A large light-mass component of cosmic rays at 10\({}^{17}\) – 10\({}^{17.5}\) eV from radio observations, Nature 531 (2016) 70.
* Buitink et al. (2014) S. Buitink, A. Corstanje, J. E. Enriquez, et al., Method for high precision reconstruction of air shower X\({}_{max}\) using two-dimensional radio intensity profiles, Phys. Rev. D 90 (2014) 082003.
* Glaser et al. (2016) C. Glaser, M. Erdmann, J. R. Hörandel, T. Huege, J. Schulz, Simulation of radiation energy release in air showers, Journal of Cosmology and Astroparticle Physics 2016 (2016) 024.
* Kambeitz and for the Pierre Auger Collaboration (2016) O. Kambeitz, for the Pierre Auger Collaboration, Measurement of horizontal air showers with the Auger Engineering Radio Array, in: Proceedings of the ARENA2016 conference, Groningen, The Netherlands, arXiv:1609.05456.
* Zilles et al. (2016) A. Zilles, S. Buitink, T. Huege, Initial simulation study on high-precision radio measurements of the depth of shower maximum with SKA1-low, in: Proceedings of the ARENA2016 conference, Groningen, The Netherlands.
|
1708.01895 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 51464,
"num_imgs": 10,
"llama3_tokens_count": 16106
} | [
"content_image/1708.01895/x1.png",
"content_image/1708.01895/x2.png",
"content_image/1708.01895/x3.png",
"content_image/1708.01895/x4.png",
"content_image/1708.01895/x5.png",
"content_image/1708.01895/x6.png",
"content_image/1708.01895/x7.png",
"content_image/1708.01895/x8.png",
"content_image/1708.01895/x9.png",
"content_image/1708.01895/x10.png"
] | # A Jupiter-mass planet around the K0 giant Hd 208897†
[FOOTNOTE:†][ENDFOOTNOTE]
M. Yılmaz
1Ankara University, Department of Astronomy and Space Sciences, TR-06100, Ankara, Turkey 1
B. Sato
2Tokyo Institute of Technology, Ookayama, Meguro-ku, Tokyo 152-8550, Japan 2
I. Bikmaev
3Kazan Federal University, Department of Astronomy and Satellite Geodesy, 420008, Kazan, Russia 36Academy of Sciences of Tatarstan, Bauman Str, 20, 420111, Kazan, Russia 6
S. O. Selam
1Ankara University, Department of Astronomy and Space Sciences, TR-06100, Ankara, Turkey 1
H. Izumiura
4Okayama Astrophysical Observatory, Honjo 3037-5, Kamogata, Asakuchi, Okayama 719-0232, Japan 4
V. Keskin
5Ege University, Department of Astronomy and Space Sciences, TR-35100, Bornova, İzmir, Turkey 5
E. Kambe
4Okayama Astrophysical Observatory, Honjo 3037-5, Kamogata, Asakuchi, Okayama 719-0232, Japan 4
S. S. Melnikov
3Kazan Federal University, Department of Astronomy and Satellite Geodesy, 420008, Kazan, Russia 36Academy of Sciences of Tatarstan, Bauman Str, 20, 420111, Kazan, Russia 6
A. Galeev
3Kazan Federal University, Department of Astronomy and Satellite Geodesy, 420008, Kazan, Russia 36Academy of Sciences of Tatarstan, Bauman Str, 20, 420111, Kazan, Russia 6
İ. Özavcı
1Ankara University, Department of Astronomy and Space Sciences, TR-06100, Ankara, Turkey 1
E. N. Irtuganov
3Kazan Federal University, Department of Astronomy and Satellite Geodesy, 420008, Kazan, Russia 36Academy of Sciences of Tatarstan, Bauman Str, 20, 420111, Kazan, Russia 6
R. Ya. Zhuchkov
3Kazan Federal University, Department of Astronomy and Satellite Geodesy, 420008, Kazan, Russia 36Academy of Sciences of Tatarstan, Bauman Str, 20, 420111, Kazan, Russia 67Main (Pulkovo) Astronomical Observatory, Russian Academy of Sciences, Saint-Petersburg, 196140, Russia 7 mesutyilmaz@ankara.edu.tr
Received xx xx 2017 / Accepted xx xx 2017
Key Words.:**stars: individual: HD 208897 – stars: planetary systems – stars: fundamental parameters – techniques: radial velocities**†
[FOOTNOTE:†][ENDFOOTNOTE]
For over 10 years, we have carried out a precise radial velocity (RV) survey to find substellar companions around evolved G,K-type stars to extend our knowledge of planet formation and evolution. We performed high precision RV measurements for the giant star HD 208897 using an iodine (\(I_{2}\)) absorption cell. The measurements were made at TÜBİTAK National Observatory (TUG; RTT150) and Okayama Astrophysical Observatory (OAO). For the origin of the periodic variation seen in the RV data of the star, we adopted a Keplerian motion caused by an unseen companion. We found that the star hosts a planet with a minimum mass of \(m_{2}sini=1.40M_{J}\), which is relatively low compared to those of known planets orbiting evolved intermediate-mass stars. The planet is in a nearly circular orbit with a period of \(P=353\) days at about 1 AU distance from the host star. The star is metal rich and located at the early phase of ascent along the red giant branch. The photometric observations of the star at Ankara University Kreiken Observatory (AUKR) and the _HIPPARCOS_ photometry show no sign of variation with periods associated with the RV variation. Neither bisector velocity analysis nor analysis of the Ca II and H\(\alpha\) lines shows any correlation with the RV measurements.
## 1Introduction
**The radial velocity (RV) method is a technique that is widely used to detect exoplanets through the observations of Doppler shifts of spectral lines in the spectrum of the host star of a planet. The reflex motion of a star due to a planetary companion produces a periodic variation of RV with a few** \(ms^{-1}\) **velocity amplitude that depends on the mass and distance of the planet. However, this method requires both a high signal-to-noise (S/N) ratio and a high spectral resolution to achieve such a high RV precision. About 3500 exoplanets have been discovered with various methods so far (The NASA Exoplanet Archive;** **Akeson et al.****(**2017**)****), of which about 640 have been found with the RV technique. While most of the known exoplanets (**\(\sim\)**75%) orbit around G, K-type dwarfs, a small fraction (**\(\sim\)**4%) of these exoplanets have been found around evolved intermediate-mass (**\(1.3-5M_{\odot}\)**) stars (e.g.,** **Niedzielski et al.****(**2016**)**; **Ortiz et al.****(**2016**)**; **Giguere et al.****(**2015**)**; **Jones et al.****(**2015**)**; **Lee et al.****(**2014**)**; **Nowak et al.****(**2013**)**; **Mitchell et al.****(**2013**)**; **Sato et al.****(**2010, 2013**)**; **Omiya et al.****(**2012**)**; **Wang et al.****(**2012**)**; **Johnson et al.****(**2011**)**; **Wittenmyer et al.****(**2011**)****). In spite of about 130 planets discovered around G-K giant stars, only few of these planets (**\(\sim 20\)**) are close to the mass of Jupiter. Therefore, there is still a need to increase the number of planets around G-K giant to strengthen sample.**
**Intermediate-mass stars on the main sequence have very few absorption lines suitable for precise RV studies owing to their high atmospheric temperatures and rapid rotations. However, their evolved counterparts, giants and subgiants, are promising targets for precise Doppler-shift measurements because they show many useful spectral lines thanks to their lower surface temperatures and slower rotation rates.**
**Every new discovery of an exoplanet allows us not only to constrain a general picture of planet formation but also to understand how stellar evolution affects planetary systems. In particular, planets around evolved intermediate-mass stars are of great importance. Intermediate-mass stars tend to have more massive protoplanetary disks than those of sun-like stars, which provides an opportunity to investigate planet formation in different environments other than sun-like stars. At the same time, intermediate-mass stars have shorter evolutionary timescales, which causes their protoplanetary disks to also have shorter lifetimes than those of sun-like stars. In addition, the time allowed for planet formation is also much short compared to sun-like stars in intermediate-mass stars. Searches for planets around giant stars also provide a snapshot of the changes in dynamical configuration of the planetary system during evolution of the host star. In addition, surveys of evolved stars have revealed that the orbital properties of their planets seem significantly different from those of G,K dwarfs and therefore the relevant statistical outcomes** **(****Adibekyan et al.******2013; **Bowler et al.******2010; **Johnson et al.******2010; **Takeda et al.******2008; **Mortier et al.******2012; **Pasquini et al.******2007; **Udry & Santos******2007**)** **are still open to debate. For example, semimajor axes of planets orbiting evolved stars are larger than 0.6 AU. Indeed, there are almost no planets orbiting closer than 0.6 AU around stars with** \(M>1.5M_{\odot}\)******(****Johnson et al.******2007; **Sato et al.******2008; **Wright et al.******2009**)****. It has been proposed that the lack of planets close to evolved intermediate-mass stars is the result of the engulfment of inner-orbit planets by the host stars when the stars ascended the red giant branch (RGB)** **(****Sato et al.******2008; **Villaver & Livio******2009**)****. Moreover, the planet frequency–stellar metallicity correlation in intermediate-mass stars seems tighter than that in G,K dwarfs. Furthermore, the occurrence rate of planets around intermediate-mass stars increases with increasing stellar mass** **(****Reffert et al.******2015; **Johnson et al.******2010**)** **and these occupy less eccentric orbits as compared to those of planets around sun-like stars** **(****Johnson******2008**)****. Therefore, the existence of planets around evolved giants leads to a test of the viability of planet formation models (e.g.,** **Boss****(**2000**)**; **Ida & Lin****(**2004**)**; **Bodenheimer & Pollack****(**1986**)**; **Mordasini et al.****(**2008**)****) and evolution of planetary systems. So far, signs of various important properties have been revealed for planets in intermediate-mass stars, but they are still in need of confirmation based on a much larger number of samples.**
**In 2007, we started a precise Doppler survey to search for planets around evolved intermediate-mass stars using the 1.5 m Russian-Turkish Telescope (RTT150) at TÜBİTAK National Observatory (TUG) within the framework of an international collaboration between Turkey, Russia, and Japan** **(****Yılmaz et al.******2015**)****. The survey program is an extension to the ongoing Okayama Astrophysical Observatory (OAO) planet search program** **(****Sato et al.******2005**)****. About 50 G,K-type giant stars were selected for the survey from the HIPPARCOS** **(****Perryman et al.******1997**)** **catalog according to following criteria: visual magnitude of** \(V\sim 6.5\)**, color index of** \(0.6\leq B-V\leq 1.0\)**, declination of** \(\delta\geq-20^{\circ}\)**, and excluding stars known as photometric variables.**
**In this work, we report the first planet discovery around a giant star** **HD 208897** **in our planet search program using the RTT150 and 1.88 m telescope at OAO. The paper is organized as follows: In section 2, we describe our spectroscopic observations at TUG and OAO, and we also present photometric observations at AUKR (Ankara University Kreiken Observatory) and the** _HIPPARCOS_******(****van Leeuwen et al.******1997**)** **photometry database. The stellar characteristic is presented in section 3, while orbital solutions and other possible causes of the RV variation are discussed in section 4. Finally, we present our discussions and conclusions in section 5.**
## 2Observations and analysis
**Since 2007, we have been carrying out a Doppler planet search program targeting 50 G-K type giants using the RTT150 at TUG. From these observations we found that 13 targets show significant RV variations between 20 and 500** \(ms^{-1}\)**. Therefore we decided to follow up these targets with the 1.88 m telescope at the Okayama Astrophysical Observatory (OAO) after 2012. Moreover, we started to observe these targets photometrically at AUKR to check photometric variability or detect any transit phenomenon. One of these candidates is** **HD 208897****.**
### Observations from TÜBİTAK National Observatory
**We acquired 73 spectra for** **HD 208897** **from 2009 June to 2017 January using Coude Echelle Spectrograph (CES) and** \(2K\times 2K\) **Andor CCD attached to RTT150 at TUG. We used an iodine (**\(I_{2}\)**) absorption cell in front of the entrance slit of the spectrograph to obtain precise RV measurements, which superimposes thousands of molecular absorption lines over the object spectra. Using these lines as a wavelength reference, we simultaneously derived the instrumental profile and Doppler shift relative to stellar template spectrum. The TUG CES spectra covered a wavelength region from 4000 Å to 8000 Å with resolving power** \(R\sim\)**55000. The typical signal-to-noise ratios (**\(S/N\)**) were obtained as 60-120 per pixel at 5500 Å with an exposure time of 1800 seconds for the entire data set. The Doppler precision of RTT150 CES is about 10** \(ms^{-1}\) **over a time span of nine years** **(****Yılmaz et al.******2015**)****.**
### Observations from Okayama Astrophysical Observatory
**From 2014 to 2017, we used the 188 cm telescope and High Dispersion Echelle Spectrograph (HIDES) high-efficiency fiber-feeding system (hereafter HIDES-F) at OAO** **(****Izumiura******1999; **Kambe et al.******2013**)** **and obtained a total of 34 data points for** **HD 208897****. The HIDES-F instrument uses an image slicer as the entrance aperture of the spectrograph and its spectral resolution is fixed to** \(R=55000\)**. The spectra covered a wavelength region 3750 Å to 7500 Å. For precise RV measurements, we used** \(I_{2}\) **absorption cell, which provides an ideal wavelength reference in a wavelength range of 5000 Å to 5800 Å. For stars with the visual magnitude of** \(V<6.5\) **we can obtain a sufficient signal-to-noise ratio of** \(S/N>200\) **with an exposure time shorter than 30 min, with which the RV precision can reach down to 3** \(ms^{-1}\)******(****Harakawa et al.******2015**)****.**
### Ankara University Kreiken Observatory Photometric observations
**Photometric observations of** **HD 208897** **were carried out with the 35 cm T35 telescope and** \(1K\times 1K\) **Apogee ALTA U47 camera at AUKR between 2014 October and 2017 January. The plate scale of camera is** \(0\arcsec.75\) **per pixel and full field of view is** \(13\arcmin\times 13\arcmin\)**. Single color photometric data of the target were obtained using Bessel-R filter. During the observations we used the telescope defocusing technique to achieve high photometric precision by distributing the point spread function (PSF) over many pixels. This approach allowed us to minimize the effects of flat-fielding errors or seeing changes. The diameters of the defocused PSF ranged between 30 and 50 pixels.**
### Data analysis
**The stellar spectra obtained at both TUG and OAO were processed following the standard echelle reduction procedures in IRAF**¹ **software packages, i.e., bias subtraction, extraction of the scattered light produced in the optical system, division by the normalized flat-field, and wavelength calibration by Thorium-Argon (ThAr) lamp reference spectra. After these reduction processes, the spectra were normalized to the continuum, order by order, by fitting a polynomial function to remove the general shape of the aperture spectra and prepare for the precise RV measurement procedure. The precise RVs of the target were derived from the observed star spectra taken through the** \(I_{2}\) **cell via a custom IDL**² **code for CES data and a C code for HIDES-F data, which are based on the analysis technique described by** **Butler et al.****(**1996**)****,** **Sato et al.****(**2002**)****, and** **Sato et al.****(**2012**)****. In this technique, we divided the echelle spectrum into hundreds of chunks with a few Å(typically 3.5 Å) width and applied a Doppler analysis to each chunk. The final measured RV is the weighted mean of the velocities of the individual chunks and all RVs were corrected to the solar system barycenter, which is based on the Jet Propulsion Lab (JPL) ephemeris calculations. The derived RVs are listed in Table** 1 **and shown in Figure** 3 **together with the estimated uncertainties.**
[FOOTNOTE:1][ENDFOOTNOTE]
[FOOTNOTE:2][ENDFOOTNOTE]
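The final RV quoted above is a weighted mean over the hundreds of spectral chunks. Below is a minimal sketch of that last combination step only; the per-chunk Doppler modelling of the \(I_{2}\)-convolved spectrum is far more involved and is not reproduced here. The per-chunk velocities and uncertainties are made up for illustration, but with a couple of hundred chunks of order 100–200 ms\({}^{-1}\) precision each, the weighted mean reaches roughly the ~10 ms\({}^{-1}\) level quoted for the CES.

```python
import numpy as np

def combine_chunk_velocities(v_chunks, sigma_chunks):
    """Inverse-variance weighted mean of per-chunk velocities and its formal uncertainty."""
    w = 1.0 / np.asarray(sigma_chunks) ** 2
    v_mean = np.sum(w * np.asarray(v_chunks)) / np.sum(w)
    sigma_mean = 1.0 / np.sqrt(np.sum(w))
    return v_mean, sigma_mean

# Hypothetical per-chunk results (m/s) from ~3.5 A wide chunks of one spectrum
rng = np.random.default_rng(42)
n_chunks = 200
true_rv = -27.8                                    # m/s
sigma_chunks = rng.uniform(80.0, 250.0, n_chunks)  # single-chunk precisions are coarse...
v_chunks = true_rv + sigma_chunks * rng.standard_normal(n_chunks)

rv, err = combine_chunk_velocities(v_chunks, sigma_chunks)
print(f"RV = {rv:.2f} +/- {err:.2f} m/s")          # ...but the weighted mean reaches ~10 m/s
```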
**The photometric images taken AUKR were reduced via the photometric tasks of IRAF software. Then we performed standard aperture photometry with the** _ASTROLIB/APER_ **IDL routine. The magnitudes and their errors of stars within the image field were derived using this routine and after that we chose comparison stars for relative photometry. We discarded all variable or stars that are too faint from the comparison list. Finally, we obtained the relative photometry of the target by performing the weighted ensemble photometry technique for the comparison stars. The photometric data of** **HD 208897** **is comprised of 3821 measurements, spanning over eight nights and unevenly sampled. The obtained light curve is shown in the top panel of Figure** 1**. The mean AUKR photometric data exhibit a variation less than 0.005 mag, while whole data set revealed a photometric variability of** \(\sigma\sim 0.03\) **mag. As shown in the bottom panel of Figure** 1 **(black solid line), no significant periodic signal was found in the photometry of these data. In addition, we checked the** _HIPPARCOS_ **photometric variation to examine causes of the apparent RV variation other than orbital motion. The** _HIPPARCOS_ **photometry data of** **HD 208897** **obtained from December 1989 to December 1992 and consists of 84 measurements with a photometric variability of** \(\sigma\sim 8\) **mmag (middle panel of Figure** 1**). The bottom panel of Figure** 1 **shows a periodogram of the** _HIPPARCOS_ **photometry (red solid line). We could not see a significant peak at around the period of the RV variation.**
<figure><img src="content_image/1708.01895/x1.png"><figcaption>Figure 1: Top panel: The AUKR photometric observations of HD 208897. Open redsquares represent mean Bessel-R values of the individual nights. The scatterof the mean brightness is about 0.005 mag. Middle panel: The HIPPARCOSphotometric data, indicating a photometric stability down to σ∼0.008 mag.Bottom panel: Lomb-Scargle periodograms of the AUKR (black) and the HIPPARCOS(red) photometric measurements.</figcaption></figure>
BJD-2450000 [days] | Velocity [ms−1] | Uncertainty [ms−1] | Remark
---|---|---|---
5003.52970 | -27.76 | 6.97 | TUG
5003.55737 | -26.23 | 16.86 | TUG
5077.41720 | -45.88 | 11.99 | TUG
5077.43293 | -40.41 | 12.91 | TUG
5408.42261 | -48.03 | 11.29 | TUG
5408.44515 | -67.52 | 11.47 | TUG
5498.36262 | 22.78 | 18.37 | TUG
5498.38518 | 9.63 | 19.79 | TUG
5498.40774 | 14.99 | 19.26 | TUG
5841.43301 | 17.44 | 19.81 | TUG
5841.45487 | 35.24 | 18.46 | TUG
6088.45577 | -39.76 | 10.38 | TUG
6088.47848 | -9.71 | 10.86 | TUG
6088.50104 | -33.56 | 10.75 | TUG
6181.39702 | -10.27 | 16.58 | TUG
6181.41970 | -27.56 | 14.90 | TUG
6473.55194 | -46.10 | 10.89 | TUG
6473.57450 | -34.12 | 13.38 | TUG
6483.44541 | -23.32 | 15.86 | TUG
6483.46797 | -39.55 | 13.53 | TUG
6591.25732 | 7.82 | 14.91 | TUG
6591.27985 | 14.77 | 15.38 | TUG
6826.49613 | -22.77 | 11.51 | TUG
6826.51867 | -15.76 | 10.73 | TUG
6877.43245 | 13.84 | 13.11 | TUG
6877.45501 | -13.28 | 13.95 | TUG
6967.25788 | 35.85 | 16.89 | TUG
6967.28043 | 23.79 | 17.20 | TUG
6967.30298 | 43.99 | 16.10 | TUG
7159.51955 | -39.39 | 18.36 | TUG
7159.54205 | -20.62 | 14.88 | TUG
7219.40794 | -23.15 | 21.11 | TUG
7219.43044 | 0.90 | 30.78 | TUG
7238.48844 | 38.55 | 21.30 | TUG
7265.27661 | -20.96 | 25.07 | TUG
7265.29911 | -7.82 | 25.62 | TUG
7265.32161 | -24.20 | 22.20 | TUG
7265.40450 | 12.68 | 20.80 | TUG
7271.36935 | -22.67 | 14.77 | TUG
7271.39185 | 7.51 | 14.59 | TUG
7273.32616 | 18.43 | 10.27 | TUG
7273.34866 | 36.32 | 11.97 | TUG
7299.33026 | 60.47 | 20.94 | TUG
7299.35277 | 50.17 | 20.61 | TUG
7332.23936 | 38.92 | 11.52 | TUG
7332.26186 | 7.85 | 16.85 | TUG
7347.35361 | 67.23 | 14.84 | TUG
7351.22947 | 15.08 | 35.24 | TUG
7370.24002 | 79.20 | 27.69 | TUG
7370.26252 | 73.17 | 16.99 | TUG
7539.50055 | -17.34 | 14.61 | TUG
7539.52304 | 2.74 | 11.34 | TUG
7547.47297 | 27.53 | 17.72 | TUG
7547.49548 | 5.43 | 16.22 | TUG
7579.41323 | -29.49 | 16.99 | TUG
7581.39576 | -31.15 | 19.03 | TUG
7581.41828 | 13.32 | 18.11 | TUG
7647.37473 | 25.68 | 16.70 | TUG
7647.39723 | 40.75 | 27.66 | TUG
7651.44143 | 13.98 | 19.99 | TUG
7651.46394 | -6.38 | 19.36 | TUG
7656.32711 | 37.59 | 17.23 | TUG
7656.34962 | 28.01 | 22.27 | TUG
7677.26921 | 40.41 | 14.30 | TUG
7677.29172 | 48.24 | 11.36 | TUG
7678.27693 | 52.54 | 13.71 | TUG
7678.29944 | 83.37 | 22.51 | TUG
7685.36031 | 9.16 | 21.95 | TUG
7685.38282 | 43.40 | 21.48 | TUG
7738.20224 | 35.41 | 17.77 | TUG
7738.22475 | 72.46 | 23.39 | TUG
7776.18465 | 40.48 | 19.06 | TUG
Table 1: Radial velocities for HD 208897.
Table 1 Continued.
BJD-2450000 [days] | Velocity [ms−1] | Uncertainty [ms−1] | Remark
---|---|---|---
6887.23928 | -7.67 | 3.90 | OAO
6915.08478 | 4.98 | 3.13 | OAO
6964.99969 | 27.05 | 3.72 | OAO
7003.95799 | 31.91 | 4.10 | OAO
7174.26960 | -9.54 | 2.96 | OAO
7235.07821 | -12.65 | 3.36 | OAO
7250.08144 | -10.39 | 3.26 | OAO
7262.13763 | 1.81 | 3.20 | OAO
7284.09056 | 11.84 | 3.22 | OAO
7284.20209 | 14.86 | 3.20 | OAO
7307.04974 | 25.73 | 2.66 | OAO
7309.09047 | 25.54 | 3.74 | OAO
7331.04251 | 33.43 | 3.12 | OAO
7372.86249 | 33.99 | 3.90 | OAO
7403.87979 | 31.72 | 3.64 | OAO
7423.89012 | 41.73 | 9.39 | OAO
7475.33415 | 5.60 | 3.55 | OAO
7476.34130 | -8.84 | 3.34 | OAO
7484.33302 | -5.03 | 4.04 | OAO
7508.31346 | -5.27 | 3.40 | OAO
7521.28512 | -8.73 | 3.24 | OAO
7524.29154 | -14.12 | 6.53 | OAO
7542.29471 | -12.71 | 3.06 | OAO
7592.10450 | -3.20 | 3.17 | OAO
7597.17800 | -4.77 | 3.58 | OAO
7599.13811 | -11.39 | 3.00 | OAO
7625.21646 | 13.40 | 3.21 | OAO
7655.00139 | 31.92 | 3.92 | OAO
7666.97960 | 36.09 | 3.33 | OAO
7676.95567 | 31.52 | 4.01 | OAO
7688.95331 | 38.15 | 3.94 | OAO
7742.98101 | 49.39 | 3.72 | OAO
7756.92529 | 37.62 | 3.21 | OAO
7763.92901 | 32.44 | 3.55 | OAO
## 3 Stellar properties
**HD 208897** **(****HIP 108513****) is a K0 giant star with a visual magnitude of** \(V=6.51\) **and color index** \(B-V=1.01\)**. The** _HIPPARCOS_******(****van Leeuwen******2007**)** **parallax of 15.46 mas corresponds to a distance of 64.68 pc and the absolute visual magnitude obtained is** \(M_{V}=2.46\)**. The color excess** \(E(B-V)\) **was estimated from the infrared dust emission maps of** **Schlegel et al.****(**1998**)** **and was calibrated according to** **Beers et al.****(**2002**)****. Assuming the extinction to reddening ratio to be 3.1, the interstellar extinction was found to be at most** \(A_{V}=0.047\)**. The bolometric correction,** \(B.C=-0.392\)**, was taken from the** **Flower****(**1996**)** **tables.**
**The stellar properties of** **HD 208897** **were derived using the equivalent width (EW) measurements of Fe I and Fe II lines from** \(I_{2}\)**-free spectrum taken with RTT150 CES. In the analysis, we excluded lines that were too weak (**\(<5\) **mÅ) or too strong (**\(>100\) **mÅ) lines and used the ODFNEW grid of Kurucz ATLAS9 model atmospheres** **(****Castelli & Kurucz******2003**)****. In order to determine the stellar parameters, we iterated the stellar parameters (**\(T_{eff}\)**,** \(logg\)**, [Fe/H], microturbulance velocity** \(V_{t}\)**) with the help of the excitation/ionization balance of iron lines by calculating standard deviation of both** \(A(FeI)\)³ **and** \(A(FeII)\) **abundances. With this method, we obtained the best solution with the iron abundance of** \([Fe/H]=0.21\pm 0.15\) **and microturbulent velocity of** \(v_{t}=1.28\pm 0.2\) **kms**\({}^{-1}\)**. This result indicates that** **HD 208897** **is a metal-rich star. The stellar atmosphere analysis yields the stellar parameters of** \(T_{eff}=4860\pm 100\) **K,** \(vsini=3.9\pm 0.4\) **kms**\({}^{-1}\)**, and** \(logg=3.13\pm 0.1\)**. From the Stefan-Boltzmann law we derived the bolometric luminosity to be** \(L_{*}=12.3\pm 1.1L_{\odot}\) **and the stellar radius to be** \(R_{*}=4.98\pm 0.2R_{\odot}\)**. We estimated the stellar mass to be** \(M_{*}=1.25M_{\odot}\pm 0.1\) **with the derived gravity and radius. The stellar parameters of the giant star** **HD 208897** **are summarized in Table** 2**, which lists those by** **Wittenmyer et al.****(**2016**)** **for comparison. The star positions in the Hertzsprung-Russell (H-R) diagram with the theoretical stellar isochrones are shown in Figure** 2**. It is clearly shown that the star has just begun to ascend the RGB. Based on the projected rotational velocity and star radius, we derived the upper limit for rotational period of 64 days for** **HD 208897****.**
[FOOTNOTE:3]\(A(FeI)=\log[N(FeI)/N(H)]+12\)[ENDFOOTNOTE]
Parameter | This work | Wittenmyer et al. (2016)
---|---|---
Sp. Type | K0 |
V [mag] | 6.51 |
B-V | 1.01 |
π [mas] | 15.46±0.54 |
B.C. | -0.392 |
MV | 2.456 |
Aν | 0.047 |
Teff [K] | 4860±100 | 4905
logL∗ [L⊙] | 1.09±0.07 | 1.09
log g [cgs] | 3.13±0.14 | 3.38
M∗ [M⊙] | 1.25±0.11 | 1.31
R∗ [R⊙] | 4.98±0.20 | 4.88
[Fe/H] [dex] | +0.21±0.15 | +0.13
vsini [kms−1] | 3.90±0.42 | -
Vt [kms−1] | 1.28±0.24 | 1.17
Table 2: Stellar parameters of HD 208897.
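As a rough numerical cross-check of the tabulated values, the radius follows from the Stefan-Boltzmann law, \(L=4\pi R^{2}\sigma T_{eff}^{4}\), and the mass from the spectroscopic gravity, \(M=gR^{2}/G\). The sketch below only re-traces these two relations with the quoted \(L_{*}\), \(T_{eff}\), and \(\log g\) and standard constants; it is not the authors' analysis code.

```python
import numpy as np

# Solar and physical constants (SI)
L_SUN, R_SUN, M_SUN = 3.828e26, 6.957e8, 1.989e30
SIGMA_SB, G = 5.670e-8, 6.674e-11

# Values quoted for HD 208897
teff = 4860.0          # K
L = 12.3 * L_SUN       # bolometric luminosity
logg_cgs = 3.13        # log10(g), g in cm s^-2

# Stefan-Boltzmann law: R = sqrt(L / (4 pi sigma Teff^4))
R = np.sqrt(L / (4.0 * np.pi * SIGMA_SB * teff**4))

# Spectroscopic mass: M = g R^2 / G (convert g from cgs to SI)
g_si = 10.0**logg_cgs * 1e-2
M = g_si * R**2 / G

print(f"R* ~ {R / R_SUN:.2f} R_sun, M* ~ {M / M_SUN:.2f} M_sun")  # close to the tabulated 4.98 and 1.25
```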
<figure><img src="content_image/1708.01895/x2.png"><figcaption>Figure 2: Location of HD 208897 in the H-R diagram with evolutionary tracks(Lejeune & Schaerer 2001) for Z = 0.008 (solid lines) and Z = 0.02 (dashedlines) for masses of 1M⊙−3M⊙.</figcaption></figure>
## 4 Results
### Radial velocity and orbital solution
**We made the first observations of** **HD 208897** **at TUG and detected a significant RV variation. In order to verify the RV variability, we started the OAO follow-up observations. We obtained 107 RV data points in total, 34 of which were observed by HIDES at OAO. The observation dates, RVs, internal errors, and observation sites are listed in Table** 1**. Figure** 3 **shows the observed RV curve. The dark gray circles and blue squares correspond to TUG and OAO, respectively. Both RV data sets show variability with an RMS scatter of 30-40 ms**\({}^{-1}\)**, which is much larger than the measurement uncertainties and the expected jitter levels for giant stars** **(****Hekker et al.******2006**)****. Typical RV jitter for giant stars is at a level of around 10-20 ms**\({}^{-1}\)**.**
**We performed the Lomb-Scargle (L-S) periodogram** **(****Scargle******1982**)** **analysis for both TUG and OAO data, along with the whole data set, to search periodicity in the observed RV data. The L-S periodogram for the whole data set shows a significant periodicity at** \(\sim 350\) **days with a confidence level higher than 99.9% (see Figure** 5 **) by calculating the false alarm probability (FAP). We estimated the FAP of the peak with the bootstrap randomization method. We created** \(10^{5}\) **fake data sets by randomly redistributing the observed radial velocities while keeping the observation time fixed, and we subsequently applied the same periodogram analysis to these data sets. Only one fake data set showed a periodogram power higher than the observed data set. The periodic signal seen in the RV time series can be attributable to a planet that orbits around the star.**
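A minimal sketch of the periodogram and bootstrap false-alarm-probability procedure described above, using astropy's LombScargle on a synthetic RV series. The actual analysis used the 107 measured RVs and \(10^{5}\) shuffles; the invented data and the smaller number of shuffles below only keep the example fast.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(7)

# Synthetic stand-in for the RV series: ~350 d signal, 35 m/s amplitude, 15 m/s noise
t = np.sort(rng.uniform(0.0, 2800.0, 107))            # observation epochs in days
rv = 35.0 * np.sin(2.0 * np.pi * t / 352.7) + 15.0 * rng.standard_normal(t.size)

freq = np.linspace(1.0 / 1000.0, 1.0 / 50.0, 4000)    # periods between 50 and 1000 days
power = LombScargle(t, rv).power(freq)
best_period = 1.0 / freq[np.argmax(power)]

# Bootstrap FAP: shuffle the RVs while keeping the epochs fixed and count how often the
# shuffled data produce a periodogram peak at least as high as the observed one.
n_boot, n_higher = 2000, 0
for _ in range(n_boot):
    fake = rng.permutation(rv)
    if LombScargle(t, fake).power(freq).max() >= power.max():
        n_higher += 1

print(f"best period ~ {best_period:.1f} d, bootstrap FAP < {max(n_higher, 1) / n_boot:.1e}")
```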
**We determined the parameter values of the Keplerian orbit by minimizing the** \(\chi^{2}\) **statistic. We used the exofast** **(****Eastman et al.******2013**)** **IDL code, which is based on MCMC algorithm. We quadratically added stellar jitter (**\(\sigma^{2}_{error}=\sigma^{2}_{obs}+\sigma^{2}_{jitter}\)**) to intrinsic RV noise before performing the Keplerian fit to the RV data. We adopted optimal jitter value for the target when the reduced** \(\chi^{2}\) **of the fit is close to unity. The adjustable parameters in the orbital solution are the orbital period** \(P\)**, time of periastron passage** \(T_{P}\)**, eccentricity** \(e\)**, velocity amplitude** \(K_{1}\)**, argument of periastron** \(w\)**, and RV zero-point** \(V_{0}\)**. The velocity offset was also applied between the TUG and OAO because we used two different stellar templates to derive the RV measurements for TUG and OAO observations and therefore this difference directly indicates a Doppler shift between the two templates. Figure** 3 **and** 4 **show the RV variations and the best Keplerian solutions for TUG+OAO (solid red line), OAO (dotted line), and TUG (solid green line) data, respectively. From the combined data, the radial velocities of** **HD 208897** **can be well fitted by an orbit with a period** \(P=352.7\pm 2\) **days, a velocity semi-amplitude** \(K_{1}=34.7\pm 2\) **ms**\({}^{-1}\)**, and an eccentricity** \(e=0.08\pm 0.06\)**. Adopting a stellar mass of 1.25**\(M_{\odot}\)**, we obtained a minimum mass for the companion as** \(m_{2}sini=1.40M_{J}\) **and a semimajor axis as** \(a=1.05\) **AU. The RMS of the residuals after subtraction of the best Keplerian fit is about 18.13 ms**\({}^{-1}\) **and this value is almost consistent with the measurement uncertainties for TUG data. The residuals to the Keplerian fit do not show a significant periodicity. When only OAO and TUG data were used, the minimum mass of the planet are derived to be** \(m_{2}sini=1.16M_{J}\) **and** \(m_{2}sini=1.70M_{J}\)**, respectively. The RV precision of OAO data is higher than that of TUG, while the TUG observations cover a longer time span, hence allowing us to obtain more accurate periodicity for the orbit of companion (see Figure** 5**). The RV residuals from the best orbital fits of both data sets do not show any periodic variations. All uncertainties in the orbital analyses were derived using the bootstrap Monte Carlo approach by creating 1000 fake data sets. The best orbital parameters and physical properties of the proposed planet are summarized in Table** 3 **with their errors.**
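For a companion with \(m_{2}\ll M_{*}\), the semimajor axis follows from Kepler's third law and the minimum mass from the spectroscopic mass function, \(m_{2}\sin i\approx K_{1}\sqrt{1-e^{2}}\,(P/2\pi G)^{1/3}M_{*}^{2/3}\). The quick check below uses the combined-fit values quoted above; it is a sketch of these standard relations, not of the EXOFAST/MCMC machinery used by the authors.

```python
import numpy as np

G = 6.674e-11
M_SUN, M_JUP, AU, DAY = 1.989e30, 1.898e27, 1.496e11, 86400.0

# Combined TUG+OAO solution quoted in the text
P = 352.7 * DAY        # orbital period
K1 = 34.7              # RV semi-amplitude, m/s
e = 0.07
M_star = 1.25 * M_SUN

# Kepler's third law (planet mass neglected): a^3 = G M_star P^2 / (4 pi^2)
a = (G * M_star * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)

# Minimum mass in the limit m2 << M_star
m2sini = K1 * np.sqrt(1.0 - e**2) * (P / (2.0 * np.pi * G)) ** (1.0 / 3.0) * M_star ** (2.0 / 3.0)

print(f"a ~ {a / AU:.2f} AU, m2 sin i ~ {m2sini / M_JUP:.2f} M_Jup")  # ~1.05 AU, ~1.4 M_Jup
```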
<figure><img src="content_image/1708.01895/x3.png"><figcaption>Figure 3: Observed RV variations and the best-fit orbital solutions of HD208897 with its residuals (bottom) to the best-fit-model. The solid red,dotted, and green dashed lines indicate the best Keplerian fit for TUG+OAO,OAO, and TUG, respectively. Dark gray points and blue squares represent datafrom TUG and OAO, respectively. The offset of 13.63 ms−1 was introduced toHIDES data. The RMS to the fits are 18.13 ms−1, 4.81 ms−1, and 20.54 ms−1,respectively.</figcaption></figure>
<figure><img src="content_image/1708.01895/x4.png"><figcaption>Figure 4: Same as Fig 3 but shows observed RVs and best Keplerian fits atdifferent orbital phases.</figcaption></figure>
<figure><img src="content_image/1708.01895/x5.png"><figcaption>Figure 5: Lomb–Scargle periodograms for TUG, OAO, TUG+OAO RV measurements, andresiduals to the Keplerian fit. The horizontal dotted lines indicate FAPthresholds. Possible peaks (FAP ∼ 0.0001) are seen at periods of about 350days.</figcaption></figure>
Parameter | TUG+OAO | OAO | TUG
---|---|---|---
P (days) | 352.7 ±1.7 | 349.7 ±3.3 | 353.6 ±2.7
K1 (ms−1) | 34.7 ±2.2 | 28.9 ±1.2 | 42.7 ±5.5
e | 0.07 ±0.06 | 0.04 ±0.03 | 0.15 ±0.11
ω (deg) | 167 ±83 | 297 ±64 | 89 ±42
V0 (ms−1) | 12.1 ±1.8 | 14.1 ±0.9 | 11.2 ±3.8
Tp (BJD-2450000) | 5036 ±82 | 6961 ±54 | 4971 ±46
m2 sini (MJ) | 1.40 ±0.08 | 1.16 ±0.05 | 1.70 ±0.18
a (AU) | 1.05 ±0.03 | 1.04 ±0.03 | 1.05 ±0.03
f1(m) (10−9M⊙) | 1.5 ±0.3 | 0.8 ±0.1 | 1.7 ±0.6
a1sini (10−3AU) | 1.1 ±0.1 | 0.9 ±0.2 | 1.4 ±0.3
σjitter (ms−1) | 12.0 | 4.0 | 12.0
ΔRV (ms−1) | 13.63 | - | -
Nobs | 107 | 34 | 73
RMS (ms−1) | 18.13 | 4.81 | 20.54
Reduced √χ2 | 0.95 | 0.96 | 1.01
Note: ΔRV offset between TUG and OAO velocities.
Table 3: Orbital parameters of HD 208897.
<figure><img src="content_image/1708.01895/x6.png"><figcaption>Figure 6: The Ca II H (3968.5Å) absorption line region for HD 208897. The linecore does not exhibit a significant emission.</figcaption></figure>
<figure><img src="content_image/1708.01895/x7.png"><figcaption>Figure 7: Top: Variation of Hα line EWs as a function of time. Middle: EWmeasurements against the radial velocity. Bottom: Periodogram of EWmeasurements.</figcaption></figure>
<figure><img src="content_image/1708.01895/x8.png"><figcaption>Figure 8: Top panel: BVC and BVS variations for HD 208897 The solid red lineshows the linear fit of the bisectors. Bottom panel: Periodograms of thebisectors of the CCFs; the red periodogram indicates BVC and black periodogramshows BVS.</figcaption></figure>
<figure><img src="content_image/1708.01895/x9.png"><figcaption>Figure 9: Planetary mass distribution against the mass of the host star.Filled red circles represent planetary systems orbiting intermediate-massstars while blue filled circle indicates planet around HD 208897. Planetsaround less massive evolved intermediate-mass (1.0M⊙<M∗<1.3M⊙) and metal-rich([Fe/H]>0) stars are plotted with green circles. Dashed and dotted linescorrespond to the velocity semi-amplitude of 40 and 20 ms−1 for a host star,respectively, imparted by a planet at 1 AU.</figcaption></figure>
<figure><img src="content_image/1708.01895/x10.png"><figcaption>Figure 10: Planets distribution in the a−e diagram. Planets aroundintermediate-mass stars are shown with filled red circles. The blue filledcircle indicates the planetary system orbiting HD 208897, while green circlesrepresent planets around 1.0M⊙<M∗<1.3M⊙, logg<4.0 and metal-rich ([Fe/H]>0)stars. Dashed lines express the periastron distance [q=a(1−e)] of 0.5, 1.0,1.5 AU, respectively, from the left.</figcaption></figure>
### Other mechanisms for radial velocity variations
**There are many mechanisms, such as pulsation, inhomogeneous surface features, and stellar activities, that produce radial velocity variations and also create spectral line shape changes. To check these mechanisms, we examined the Ca II H and H**\(\alpha\) **lines, photometric brightness variations, and spectral line shapes of** **HD 208897****.**
**Activity-induced hot or cool spots or plages on the surface of a cool star not only create asymmetries in the stellar absorption lines but also produce brightness variations caused by rotational modulation of these features. As can be seen in Figure** 1**, both the AUKR photometric observations and the** _HIPPARCOS_ **photometric measurements of** **HD 208897** **show that the star is stable, and no significant correlation related to the observed RV variation is seen in these data. According to the RV amplitude-spot filling factor relation described by** **Hatzes****(**2002**)****, the observed RV amplitude of 35 ms**\({}^{-1}\) **would require a spot that covers 1.5% of the stellar surface with a rotational velocity of about** \(vsini=4\) **kms**\({}^{-1}\)**. The expected photometric variability in this case is** \(\Delta m=0.05\) **mag, assuming a temperature difference of** \(\Delta T=1200\) **K between the spot and the stellar photosphere. This level of variation is 1.5**\(\sigma\) **above the observed scatter in the AUKR and** _HIPPARCOS_ **photometric data and would be easily detectable. Also, from the projected rotational velocity and stellar radius we estimated an upper limit for the rotational period of** \(P_{rot}=64\) **days, which is about five times smaller than the observed 352-day period of the RV variation. Therefore we can readily discard the hypothesis that rotational modulation creates the RV variability in** **HD 208897****.**
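The quoted upper limit on the rotation period follows directly from \(P_{rot}\leq 2\pi R_{*}/(v\sin i)\), since \(v\sin i\) is a lower limit on the equatorial velocity. A one-line check with the stellar parameters above:

```python
import numpy as np

R_SUN = 6.957e5                     # km
R_star = 4.98 * R_SUN               # stellar radius in km
vsini = 3.9                         # projected rotational velocity, km/s

# sin i <= 1, so v_eq >= vsini and P_rot <= 2*pi*R / vsini
P_rot_max = 2.0 * np.pi * R_star / vsini / 86400.0   # seconds -> days
print(f"P_rot <= {P_rot_max:.1f} days")              # ~64-65 days, consistent with the text
```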
**The Ca II H and K and H**\(\alpha\) **line profiles are frequently used as chromospheric activity indicators. As shown in Figure** 6**, the Ca II H line does not show any emission feature at the line center. For these lines we only used HIDES spectra, since TUG CES does not cover the spectral region where they reside. We also measured the EW of the H**\(\alpha\) **line via a band pass of 1.0 Å centered on 6562.808 Å with the help of the TUG CES data. Figure** 7 **shows the variations of the EW measurements of the H**\(\alpha\) **line as a function of time and against RV. The L-S periodograms are also shown in the bottom panel of Figure** 7**. Clearly, the plot shows no correlation between RV and the H**\(\alpha\) **EW, and no significant periodicity exists in the H**\(\alpha\) **line. Also,** **Isaacson & Fischer****(**2010**)** **showed that** **HD 208897** **has a small activity-index value of** \(S_{HK}=0.124\)**, which is consistent with our results. These results reinforce the existence of a substellar companion.**
**Stellar intrinsic activities, such as pulsation or rotational modulation, can cause a change in the absorption line shape. Such a change may be misinterpreted as a Doppler shift of the lines by an orbital motion of the star. In order to examine spectral line shape, we performed an analysis of line bisectors based on cross-correlation function (CCF), which gives an average spectral line of the observed star** **(****Gray******2005**)****. The bisector analyses were performed with the TUG data. In the analysis, we used spectral lines that were located outside the** \(I_{2}\) **absorption region because the** \(I_{2}\) **lines affected the stellar spectrum. The CCFs were created by applying a special mask that discarded all the blended lines in the stellar spectrum and we identified 27 spectral lines that were relatively deep (**\(>0.3\)**). The bisector line was obtained by combining bisector points ranging from the core toward the wings of the CCF profile. We defined three flux levels to calculate velocity spans of the bisector:** \(V_{T}\) **top (between 65% and 85% of the line depth from top),** \(V_{C}\) **central (35-55% ), and** \(V_{B}\) **bottom zones (5-25% ). The bisector velocity span (**\(BVS\)**) measurements were performed using the velocity difference between** \(V_{T}\) **and** \(V_{B}\) **(**\(BVS=V_{T}-V_{B}\)**), and the bisector curvatures (**\(BVC\)**) were derived using the difference of the velocity spans of the upper half of the bisector and the lower half (**\(BVC=(V_{T}-V_{C})-(V_{C}-V_{B})\)**). No correlation was found either between the RV and the BVS variations or between the RV and the BVC variations, which means the RV variations are not associated with the bisector variations. Also, no significant periods were obtained from the L-S periodogram analysis; all trial periods have extremely small significance levels. Moreover, we calculated the fundamental periods of radial pulsation with the method of** **Cox et al.****(**1972**)** **and also solar-like oscillation with the relationships of** **Kjeldsen & Bedding****(**1995**)****. We estimated that the radial pulsation period of** **HD 208897** **to be less than one day and solar-like oscillation period of 0.8 days with RV amplitude of 2.5 ms**\({}^{-1}\)**. These periods are more than two orders of magnitude shorter than the observed period of the RV variations. All these results suggest that the observed RV variations are consistent with the planetary hypothesis. In Figure** 8**, we presented BVS and BVC curves and their L-S periodograms.**
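A sketch of the bisector quantities defined above, computed for a toy CCF: bisector points are the midpoints of the profile at a set of flux levels, and BVS and BVC combine the mean bisector velocities of the three depth zones. The Gaussian-blend CCF and the exact mapping of the quoted depth zones onto flux levels are my assumptions, not the authors' implementation.

```python
import numpy as np

def zone_velocity(v, ccf, f_lo, f_hi, n=10):
    """Mean bisector velocity of a CCF dip between fractional line depths f_lo and f_hi
    (0 = continuum, 1 = line core), assumed to follow the zone definitions in the text."""
    cont, core = ccf.max(), ccf.min()
    i_min = np.argmin(ccf)
    vels = []
    for f in np.linspace(f_lo, f_hi, n):
        level = cont - f * (cont - core)
        v_left = np.interp(level, ccf[:i_min + 1][::-1], v[:i_min + 1][::-1])
        v_right = np.interp(level, ccf[i_min:], v[i_min:])
        vels.append(0.5 * (v_left + v_right))
    return np.mean(vels)

def bisector_spans(v, ccf):
    v_t = zone_velocity(v, ccf, 0.65, 0.85)        # "top" zone, 65-85% of the line depth
    v_c = zone_velocity(v, ccf, 0.35, 0.55)        # central zone
    v_b = zone_velocity(v, ccf, 0.05, 0.25)        # "bottom" zone
    return v_t - v_b, (v_t - v_c) - (v_c - v_b)    # BVS, BVC

# Toy, slightly asymmetric CCF dip on a velocity grid (km/s)
v = np.linspace(-25.0, 25.0, 2001)
ccf = 1.0 - 0.35 * np.exp(-0.5 * (v / 6.0) ** 2) - 0.05 * np.exp(-0.5 * ((v - 3.0) / 6.0) ** 2)

bvs, bvc = bisector_spans(v, ccf)
print(f"BVS = {bvs * 1e3:.1f} m/s, BVC = {bvc * 1e3:.1f} m/s")
```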
## 5 Discussion and conclusions
**In this paper, we report the first planet discovery in the TUG precise Doppler survey. The variability of the RV data observed at both TUG and OAO revealed a periodic signal, which suggests the presence of an unseen and probably low-mass companion. The AUKR and the** _HIPPARCOS_ **photometric data sets of** **HD 208897** **indicate that main cause of the observed RV variation is not rotational modulation due to stellar surface inhomogeneities. Based on the rotational velocity and the radius of the star, the expected upper limit for the rotational period is about 64 days, which is much smaller than the observed RV variation period of 352 days. Also, our bisector analysis showed that there are no correlations between BVS and RV or between BVC and RV. Moreover, from the examination of the activity indicator Ca II H and H**\(\alpha\) **lines we could not find any evidence for chromospheric activity. The estimated period of the fundamental radial mode pulsations and of solar-like oscillations were both derived as a few days. These values are much smaller than the RV variation seen in our data. These results indicate that observed RV variation in** **HD 208897** **can be attributed to an unseen companion. With the estimated stellar mass of 1.25**\(M_{\odot}\) **and combined RV data set (TUG+OAO), we have found a giant planet with the mass of 1.40**\(M_{J}\) **at 1.05 AU around the evolved intermediate-mass star** **HD 208897****. We did not find any long-term RV trend in the residuals of our best-fit model. Our stellar parameters indicate that** **HD 208897** **is a metal-rich star that is at the base of RGB phase.**
**One of the most remarkable features of the** **HD 208897** **system is the mass of the host star. Intermediate-mass stars are those that have masses higher than that of the Sun (typically 1.3**\(M_{\odot}\)**-5.0**\(M_{\odot}\)**). However, the mass of** **HD 208897** **is slightly smaller than this typical value. Figure** 9 **shows the planetary mass distribution as a function of stellar mass. All of the planets orbiting around the evolved stars (**_logg\(<4.0\)_**) in this figure are shown with filled red circles. The planets around evolved stars with masses between 1.0**\(M_{\odot}\) **and 1.3**\(M_{\odot}\)**, and** \([Fe/H]>0\) **are also shown with green circles. Our results indicate that** **HD 208897** **is slightly metal-rich and has a mass at the lower limit of intermediate-mass stars range that harbor a** \(\sim\)**1.5 Jupiter mass size planetary companion. It is generally more difficult to detect such less massive planets around giants because of the relatively larger stellar jitter. However, our discovery shows that we can detect such less massive planets even around GK giants if we perform long-term observations.**
**The planet around** **HD 208897** **has an almost circular orbit (**\(e\sim 0.1\)**) and semimajor axis of about 1 AU. Figure** 10 **demonstrates the distribution of semimajor axes of planets versus orbital eccentricities. Filled red circles indicate planets orbiting evolved intermediate-mass stars, while green circles represent planets around metal-rich evolved stars with masses between 1.0**\(M_{\odot}\) **and 1.3**\(M_{\odot}\)**. As can be seen from the figure, there are only a few planets orbiting metal-rich evolved stars in this mass range and one of these is** **HD 208897****. Most of the planets discovered around the evolved metal-rich stars have an eccentricity below 0.2 and a semimajor axis** \(a>1\) **AU. Our results on** **HD 208897** **seem to reinforce the statistics. These results may be expected for evolved giants since those stars have begun to ascend the RGB and tidal influences would not become important for such distance planets yet. Planets at small orbital separations in evolved giants may be engulfed by their host stars during the stellar evolution** **(****Sato et al.******2008; **Villaver & Livio******2009**)****. The stellar parameters of** **HD 208897****, however, indicate that it is at the earliest phase of RGB evolution and still has a radius (only** \(\sim 0.025AU\)**) that is too small to tidally influence a planet at** \(\sim\)**1AU. Therefore the lack of inner Jupiter size planets or more massive planets than Jupiter in the star and the low eccentricity of the planet around** **HD 208897** **may be primordial.**
**Here, we have reported on one new planetary system around the evolved intermediate-mass star** **HD 208897** **based on precise RV measurements at TUG and OAO. This discovery will be important in understanding the planet formation around metal-rich intermediate-mass stars and the effect of stellar evolution on the planetary system configuration.**
###### Acknowledgements.
**This work was supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK), the project number of 114F099. Authors thank to TUG, KFU, and AST for partial support in using RTT150 (Russian-Turkish 1.5-m telescope in Antalya). This work was also supported by JSPS KAKENHI Grant Numbers JP23244038, JP16H02169. This work was funded by the subsidy 3.6714.2017 / 8.9 allocated to Kazan Federal University for the state assignment in the sphere of scientific activities.**
## References
* **Adibekyan et al. (2013)** **Adibekyan, V. Z., Figueira, P., Santos, N. C., et al. 2013, A&A, 560, A51**
* **Akeson et al. (2017)** **Akeson, R. L., Christiansen, J., Ciardi, D. R., et al. 2017, in American Astronomical Society Meeting Abstracts, Vol. 229, American Astronomical Society Meeting Abstracts, 146.16**
* **Beers et al. (2002)** **Beers, T. C., Drilling, J. S., Rossi, S., et al. 2002, AJ, 124, 931**
* **Bodenheimer & Pollack (1986)** **Bodenheimer, P. & Pollack, J. B. 1986, Icarus, 67, 391**
* **Boss (2000)** **Boss, A. P. 2000, ApJ, 536, L101**
* **Bowler et al. (2010)** **Bowler, B. P., Johnson, J. A., Marcy, G. W., et al. 2010, ApJ, 709, 396**
* **Butler et al. (1996)** **Butler, R. P., Marcy, G. W., Williams, E., et al. 1996, PASP, 108, 500**
* **Castelli & Kurucz (2003)** **Castelli, F. & Kurucz, R. L. 2003, in IAU Symposium, Vol. 210, Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray, A20**
* **Cox et al. (1972)** **Cox, J. P., King, D. S., & Stellingwerf, R. F. 1972, ApJ, 171, 93**
* **Eastman et al. (2013)** **Eastman, J., Gaudi, B. S., & Agol, E. 2013, PASP, 125, 83**
* **Flower (1996)** **Flower, P. J. 1996, ApJ, 469, 355**
* **Giguere et al. (2015)** **Giguere, M. J., Fischer, D. A., Payne, M. J., et al. 2015, ApJ, 799, 89**
* **Gray (2005)** **Gray, D. F. 2005, PASP, 117, 711**
* **Harakawa et al. (2015)** **Harakawa, H., Sato, B., Omiya, M., et al. 2015, ApJ, 806, 5**
* **Hatzes (2002)** **Hatzes, A. P. 2002, Astronomische Nachrichten, 323, 392**
* **Hekker et al. (2006)** **Hekker, S., Reffert, S., Quirrenbach, A., et al. 2006, A&A, 454, 943**
* **Ida & Lin (2004)** **Ida, S. & Lin, D. N. C. 2004, ApJ, 604, 388**
* **Isaacson & Fischer (2010)** **Isaacson, H. & Fischer, D. 2010, ApJ, 725, 875**
* **Izumiura (1999)** **Izumiura, H. 1999, in Observational Astrophysics in Asia and its Future, ed. P. S. Chen, 77**
* **Johnson (2008)** **Johnson, J. A. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 398, Extreme Solar Systems, ed. D. Fischer, F. A. Rasio, S. E. Thorsett, & A. Wolszczan, 59**
* **Johnson et al. (2010)** **Johnson, J. A., Aller, K. M., Howard, A. W., & Crepp, J. R. 2010, PASP, 122, 905**
* **Johnson et al. (2011)** **Johnson, J. A., Clanton, C., Howard, A. W., et al. 2011, ApJS, 197, 26**
* **Johnson et al. (2007)** **Johnson, J. A., Fischer, D. A., Marcy, G. W., et al. 2007, ApJ, 665, 785**
* **Jones et al. (2015)** **Jones, M. I., Jenkins, J. S., Rojo, P., Olivares, F., & Melo, C. H. F. 2015, A&A, 580, A14**
* **Kambe et al. (2013)** **Kambe, E., Yoshida, M., Izumiura, H., et al. 2013, PASJ, 65, 15**
* **Kjeldsen & Bedding (1995)** **Kjeldsen, H. & Bedding, T. R. 1995, A&A, 293, 87**
* **Lee et al. (2014)** **Lee, B.-C., Han, I., Park, M.-G., et al. 2014, A&A, 566, A67**
* **Lejeune & Schaerer (2001)** **Lejeune, T. & Schaerer, D. 2001, A&A, 366, 538**
* **Mitchell et al. (2013)** **Mitchell, D. S., Reffert, S., Trifonov, T., Quirrenbach, A., & Fischer, D. A. 2013, A&A, 555, A87**
* **Mordasini et al. (2008)** **Mordasini, C., Alibert, Y., Benz, W., & Naef, D. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 398, Extreme Solar Systems, ed. D. Fischer, F. A. Rasio, S. E. Thorsett, & A. Wolszczan, 235**
* **Mortier et al. (2012)** **Mortier, A., Santos, N. C., Sozzetti, A., et al. 2012, A&A, 543, A45**
* **Niedzielski et al. (2016)** **Niedzielski, A., Villaver, E., Nowak, G., et al. 2016, A&A, 589, L1**
* **Nowak et al. (2013)** **Nowak, G., Niedzielski, A., Wolszczan, A., Adamów, M., & Maciejewski, G. 2013, ApJ, 770, 53**
* **Omiya et al. (2012)** **Omiya, M., Han, I., Izumioura, H., et al. 2012, PASJ, 64, 34**
* **Ortiz et al. (2016)** **Ortiz, M., Reffert, S., Trifonov, T., et al. 2016, A&A, 595, A55**
* **Pasquini et al. (2007)** **Pasquini, L., Döllinger, M. P., Weiss, A., et al. 2007, A&A, 473, 979**
* **Perryman et al. (1997)** **Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al. 1997, A&A, 323, L49**
* **Reffert et al. (2015)** **Reffert, S., Bergmann, C., Quirrenbach, A., Trifonov, T., & Künstler, A. 2015, A&A, 574, A116**
* **Sato et al. (2008)** **Sato, B., Izumiura, H., Toyota, E., et al. 2008, PASJ, 60, 539**
* **Sato et al. (2002)** **Sato, B., Kambe, E., Takeda, Y., Izumiura, H., & Ando, H. 2002, PASJ, 54, 873**
* **Sato et al. (2005)** **Sato, B., Kambe, E., Takeda, Y., et al. 2005, PASJ, 57, 97**
* **Sato et al. (2012)** **Sato, B., Omiya, M., Harakawa, H., et al. 2012, PASJ, 64, 135**
* **Sato et al. (2010)** **Sato, B., Omiya, M., Liu, Y., et al. 2010, PASJ, 62, 1063**
* **Sato et al. (2013)** **Sato, B., Omiya, M., Wittenmyer, R. A., et al. 2013, ApJ, 762, 9**
* **Scargle (1982)** **Scargle, J. D. 1982, ApJ, 263, 835**
* **Schlegel et al. (1998)** **Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525**
* **Takeda et al. (2008)** **Takeda, Y., Sato, B., & Murata, D. 2008, PASJ, 60, 781**
* **Udry & Santos (2007)** **Udry, S. & Santos, N. C. 2007, ARA&A, 45, 397**
* **van Leeuwen (2007)** **van Leeuwen, F. 2007, A&A, 474, 653**
* **van Leeuwen et al. (1997)** **van Leeuwen, F., Evans, D. W., Grenon, M., et al. 1997, A&A, 323, L61**
* **Villaver & Livio (2009)** **Villaver, E. & Livio, M. 2009, ApJ, 705, L81**
* **Wang et al. (2012)** **Wang, L., Sato, B., Zhao, G., et al. 2012, Research in Astronomy and Astrophysics, 12, 84**
* **Wittenmyer et al. (2011)** **Wittenmyer, R. A., Endl, M., Wang, L., et al. 2011, ApJ, 743, 184**
* **Wittenmyer et al. (2016)** **Wittenmyer, R. A., Liu, F., Wang, L., et al. 2016, AJ, 152, 19**
* **Wright et al. (2009)** **Wright, J. T., Upadhyay, S., Marcy, G. W., et al. 2009, ApJ, 693, 1084**
* **Yılmaz et al. (2015)** **Yılmaz, M., Bikmaev, I., Sato, B., et al. 2015, New A, 34, 108**
# On magnetic monopoles, the anomalous \(g\)-factor of the electron and the spin-orbit coupling in the Dirac theory
Gerrit Coddens
Laboratoire des Solides Irradiés,
Ecole Polytechnique, CNRS, CEA,
Université Paris-Saclay,
91128-Palaiseau CEDEX, FRANCE
###### Abstract
We discuss the algebra and the interpretation of the anomalous Zeeman effect and the spin-orbit coupling within the Dirac theory. Whereas the algebra for the anomalous Zeeman effect is impeccable and therefore in excellent agreement with experiment, the _physical interpretation of that algebra_ uses images that are based on macroscopic intuition but do not correspond to the meaning of this algebra. The interpretation violates the Lorentz symmetry. We therefore reconsider the interpretation to see if we can render it consistent with the symmetry as well. The results confirm clearly that the traditional physical interpretation of the anomalous Zeeman effect is not correct. We give an alternative intuitive description of the meaning of this effect, which respects the symmetry and is exact. It can be summarized by stating that a magnetic field makes any charged particle spin. This is even true for charged particles “without spin”. Particles “with spin” acquire additional spin in a magnetic field. This additional spin must be combined algebraically with the pre-existing spin. We also show that the traditional discussion about magnetic monopoles confuses two issues, _viz._ the symmetry of the Maxwell equations and the quantization of charge. Each of these two issues defines a different concept of magnetic monopole. They cannot be merged together into a unique all-encompassing issue. We also generalize the minimal substitution for a charged particle, and provide some intuition for the magnetic vector potential. We finally explore the algebra of the spin-orbit coupling, whose traditional treatment turns out to be badly wrong. The traditional theory that is claimed to reproduce the Thomas half is based on a number of errors. An error-free application of the Dirac theory cannot account for the Thomas precession, because it only accounts for the instantaneous local boosts, not for the rotational component of the Lorentz transformation. This runs contrary to established beliefs, but Thomas precession can be understood in terms of the Berry phase on a path through the velocity space of the Lorentz group manifold. These results clearly reveal the limitations of the prevailing working philosophy of “shut up and calculate”.
PACS: 03.65.-w Quantum Mechanics; 02.20.-a Group Theory; 03.65.Pm Relativistic wave equations
## 1 Introduction
When Uhlenbeck and Goudsmit presented the concept of spin, Lorentz pointed out that the idea could not account for the magnetic dipole moment of the electron. Even if one were to put all the charge of a spherical electron on its equator, the current produced by the spinning motion would not be enough to match the magnitude of the anomalous Zeeman splitting observed. The algebra of the Dirac theory accounts very well for the measured values of this anomalous Zeeman splitting, but in the present paper we point out that the traditional physical interpretation of the mathematical formalism in terms of a magnetic dipole moment associated with the spin is at variance with the meaning of the algebra itself, as it violates the built-in Lorentz symmetry. It interprets a vector as a scalar, a transgression similar to interpreting a tensor as a vector. The importance of such distinctions based on symmetry is well known. In relativistic quantum mechanics one notes, e.g., that the only bilinear Lorentz covariants that exist are one-component scalars, four-component vectors, six-component tensors, four-component axial vectors and one-component pseudo-scalars. Confusing a covariant of one type with a covariant of another type violates its symmetry. Using group theory we will propose an approach that respects the symmetry.
Section 2 contains an introduction to some aspects of the representation SU(2) for the rotation group and of the Dirac representation for the homogeneous Lorentz group, which will be used in the paper. This is a subject matter that is considered to be “well known”. But we cover it from a very different, geometrical perspective than in its traditional treatment given in textbooks, which is far more abstract and algebraic. The insight gained from this different perspective will permit us to discern the error in the interpretation of the anomalous \(g\)-factor we mentioned above, and which we discuss in Subsection 6.1. Apparently this problem has escaped attention for more than 80 years, suggesting that the group theory is not as well understood as routinely assumed.
## 2 Group Representation Theory
### SU(2) and the rotations of \({\mathbb{R}}^{3}\)
#### 2.1.1 Principal idea
The general form of a SU(2) rotation matrix is:
\[{\mathbf{R}}=\begin{pmatrix}a&-b^{*}\\ b&\phantom{-}a^{*}\end{pmatrix},\qquad a,b\in{\mathbb{C}},\quad aa^{*}+bb^{*}=1.\] (1)
These matrices work on spinors:
\[{\boldsymbol{\xi}}=\begin{pmatrix}\xi_{0}\\ \xi_{1}\end{pmatrix},\qquad\xi_{0}\xi_{0}^{*}+\xi_{1}\xi_{1}^{*}=1,\] (2)
according to:
\[{\boldsymbol{\xi}}^{\prime}={\mathbf{R}}{\boldsymbol{\xi}}.\] (3)
The natural question is of course what the meaning of such a spinor is. In SO(3), the \(3\times 3\) rotation matrices of \({\mathbb{R}}^{3}\) are working on \(3\times 1\) matrices that represent vectors of \({\mathbb{R}}^{3}\), but in SU(2) the situation is different. Here the \(2\times 1\) matrices, i.e. the spinors, represent rotations. The idea can be illustrated by considering the group multiplication table for an arbitrary group \((G,\circ)\):
\begin{tabular}{|c|c c c c c c|}
\hline
\[{\circ}\] & \[g_{1}\] & \[g_{2}\] & \[g_{3}\] & \[\cdots\] & \[g_{j}\] & \[\cdots\] \\
\hline
\[\,g_{1}\,\] & \[\,g_{1}\,{\circ}\,g_{1}\] & \[\,g_{1}\,{\circ}\,g_{2}\] & \[\,g_{1}\,{\circ}\,g_{3}\] & \[\,\cdots\] & \[\,g_{1}\,{\circ}\,g_{j}\] & \[\,\cdots\] \\
\[\,g_{2}\,\] & \[g_{2}\,{\circ}\,g_{1}\] & \[g_{2}\,{\circ}\,g_{2}\] & \[g_{2}\,{\circ}\,g_{3}\] & \[\cdots\] & \[g_{2}\,{\circ}\,g_{j}\] & \[\cdots\] \\
\[\,\vdots\,\] & \[\vdots\] & \[\vdots\] & \[\vdots\] & & \[\vdots\] & \\
\hline \hline
\[\,g_{k}\,\] & \[g_{k}\,{\circ}\,g_{1}\] & \[g_{k}\,{\circ}\,g_{2}\] & \[g_{k}\,{\circ}\,g_{3}\] & \[\cdots\] & \[g_{k}\,{\circ}\,g_{j}\] & \[\cdots\] \\
\hline \hline
\[\,\vdots\,\] & \[\vdots\] & \[\vdots\] & \[\vdots\] & & \[\vdots\] & \\ \hline
\end{tabular}
\begin{tabular}{l}
\\
\\
\\
\[\leftarrow\] \[g_{k}\circ,\] \\
\end{tabular}
(4)
which illustrates that the group element \(g_{k}\) defines a function \(g_{k}\circ:G\to G;\,g_{j}\to g_{k}\,{\circ}\,g_{j}\). The notation \(g_{k}\circ\) for this function is somewhat arcane, but it has the advantage that the way it acts on a group element is obtained by mere juxtaposition of the symbols. In the specific case of the rotation group, we define then a rotation \(g_{k}\) no longer by all its function values \(g_{k}({\mathbf{r}}),\forall{\mathbf{r}}\in{\mathbb{R}}^{3}\), but by all function values \(g_{k}\circ g_{j},\forall g_{j}\in G\). More rigorously, an arbitrary group element \(g_{k}\in G\) is identified with the function \(T_{g_{k}}\in F(G,G)\) that maps \(G\) to \(G\) according to \(T_{g_{k}}:g_{j}\in G\to g_{j}^{\prime}=T_{g_{k}}(g_{j})=g_{k}\,{\circ} \,g_{j}\), where \(T_{g_{k}}\) is just a more standard notation for the function \(g_{k}\circ\) (We use here \(F(S_{1},S_{2})\) as a general notation for the set of functions from \(S_{1}\) to \(S_{2}\)). This identification implies that \(T_{g_{k}}\in F(G,G)\) represents \(g_{k}\in G\). Let us call this representation \(T_{g_{k}}\) of \(g_{k}\) the automorphism representation. The non-standard notation \(g_{k}\circ\) permits writing \(g_{k}\circ:g_{j}\in G\to g^{\prime}_{j}=g_{k}\,{\circ}\,g_{j}\) and grasping more easily the idea of interpreting a rotation as a function that works on other rotations rather than on vectors. If we represent \(g_{j}\) by the SU(2) matrix \({\mathbf{X}}\), \(g_{k}\) by the SU(2) matrix \({\mathbf{R}}\), and \(g^{\prime}_{j}\) by the SU(2) matrix \({\mathbf{X}}^{\prime}\) then we have:
\begin{tabular}{c c c c c c c c c c}
\[g^{\prime}_{j}\] & \[=\] & \[g_{k}\] & \[\circ\] & \[g_{j}\] & & \[g_{j}^{\prime}\] & \[=\] & \[T_{g_{k}}\] & \[(\,g_{j}\,)\] \\
\[\left\downarrow\rule{0.0pt}{14.226378pt}\right.\] & & \[\left\downarrow\rule{0.0pt}{14.226378pt}\right.\] & & \[\left\downarrow\rule{0.0pt}{14.226378pt}\right.\] & or & \[\left\downarrow\rule{0.0pt}{14.226378pt}\right.\] & & \[\left\downarrow\rule{0.0pt}{14.226378pt}\right.\] & \[\left\downarrow\rule{0.0pt}{14.226378pt}\right.\] \\
\[{\mathbf{X}}^{\prime}\] & \[=\] & \[{\mathbf{R}}\] & & \[{\mathbf{X}}\] & & \[{\mathbf{X}}^{\prime}\] & \[=\] & \[{\mathbf{R}}\] & \[{\mathbf{X}}\] \\
\end{tabular}
(5)
We see thus that we can represent \(T_{g_{k}}\) also by \({\mathbf{R}}\): In other words, the SU(2) matrices represent from this viewpoint two types of mathematical objects, _viz._ group elements \(g_{j}\in G\) and group automorphisms \(T_{g_{k}}\in F(G,G)\). This translates the fact that the automorphism group \((F(G,G),^{\circ})\) (where \({}^{\circ}\) is the composition of functions) of a group \((G,\circ)\), is isomorphic to \((G,\circ)\) under the mapping \(T\in F(G,F(G,G)):g_{k}\to T_{g_{k}}\). In SU(2) it is possible to remove this ambiguity between the representations of group elements and automorphisms by rewriting the second diagram in Eq. 5 as:
(6)
where we recover Eq. 3 by defining the spinor \({\boldsymbol{\xi}}\) as just a short-hand for the SU(2) rotation matrix:
(7)
by taking its first column. In fact, all the information of the matrix \({\mathbf{X}}\) is already given by its first column. When we know the first column, we know everything we need to know to write the second column. Moreover, the first column of \({\mathbf{R}}{\mathbf{X}}\) will be \({\mathbf{R}}{\boldsymbol{\xi}}\). If we denote the second column of \({\mathbf{X}}\) by \({\boldsymbol{\eta}}\), then the second column of \({\mathbf{R}}{\mathbf{X}}\) will be \({\mathbf{R}}{\boldsymbol{\eta}}\) and it will be possible to derive \({\mathbf{R}}{\boldsymbol{\eta}}\) from \({\mathbf{R}}{\boldsymbol{\xi}}\) in the same way as we could derive \({\boldsymbol{\eta}}\) from \({\boldsymbol{\xi}}\), viz. \(({\mathbf{R}}{\boldsymbol{\eta}})_{0}=-(({\mathbf{R}}{\boldsymbol{\xi}})_{1})^{*}\) and \(({\mathbf{R}}{\boldsymbol{\eta}})_{1}=(({\mathbf{R}}{\boldsymbol{\xi}})_{0})^{*}\), as the SU(2) matrices constitute a group. A spinor \({\boldsymbol{\xi}}\) in SU(2) can thus be considered as a set of parameters that define a rotation, i.e. a set of coordinates for a rotation. This raises the question of how the information about the rotations occurs inside the parameter set \({\boldsymbol{\xi}}\).
#### 2.1.2 Constructing the representation
The answer is that it is done by using the fact that any rotation can be obtained as a product of two spatial reflections in \({\mathbb{R}}^{3}\). The reflections with respect to the planes through the origin of \({\mathbb{R}}^{3}\) generate a group of rotations and reversals, of which the rotation group is a subgroup. It is easy to figure out how we write a \(2\times 2\) reflection matrix, and once we know the matrices for the reflections we can calculate the matrices for the rotations and reversals by making products. A reflection \(A\) is defined by a unit vector \({\mathbf{a}}\) that is normal to its reflection plane. The coordinates of \({\mathbf{a}}\) can be expected to occur as parameters in the \(2\times 2\) matrix \({\mathbf{A}}\) that defines the reflection \(A\) but we do not know where. We therefore write the reflection matrix \({\mathbf{A}}\) heuristically as \({\mathbf{A}}=a_{x}\sigma_{x}+a_{y}\sigma_{y}+a_{z}\sigma_{z}\). The matrix \(\sigma_{x}\) will tell us where and with which coefficients \(a_{x}\) appears in \({\mathbf{A}}\). The same is true, _mutatis mutandis_ for \(\sigma_{y}\) and \(\sigma_{z}\). To find the matrices \(\sigma_{x}\), \(\sigma_{y}\), \(\sigma_{z}\), we express that \({\mathbf{A}}^{2}={\mathds{1}}\). We find that we can meet this requirement when the matrices \(\sigma_{j}\) satisfy the conditions \(\sigma_{j}\sigma_{k}+\sigma_{k}\sigma_{j}=2\delta_{jk}{\mathds{1}}\). In other words, identifying them with the Pauli matrices will give us the representation searched for. By expressing a rotation as the product of two reflections, one can then derive the well-known Rodrigues formula:
\[{\mathbf{R}}({\mathbf{n}},\varphi)=\cos(\varphi/2)\,{\mathds{1}}-\imath\sin( \varphi/2)\,[\,{\mathbf{n}}{\boldsymbol{\cdot\sigma}}\,],\] (8)
for a rotation by an angle \(\varphi\) around an axis defined by the unit vector \({\mathbf{n}}\). To derive this result it suffices to consider two reflections \(A\) (with matrix \([{\mathbf{a}}{\boldsymbol{\cdot\sigma}}]\)) and \(B\) (with matrix \([{\mathbf{b}}{\boldsymbol{\cdot\sigma}}]\)) whose planes contain \({\mathbf{n}}\), and which have an angle \(\varphi/2\) between them, and to use the algebraic identity \([{\mathbf{b}}{\boldsymbol{\cdot\sigma}}]\,[{\mathbf{a}}{\boldsymbol{\cdot \sigma}}]=({\mathbf{b\cdot a}})\,{\mathds{1}}+\imath({\mathbf{b}}\wedge{ \mathbf{a}}){\boldsymbol{\cdot\sigma}}\). There is an infinite set of such pairs of planes, and which precise pair one chooses from this set does not matter.
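As a purely illustrative aside (not part of the original argument), the Rodrigues formula of Eq. 8 can be spot-checked numerically. The short sketch below assumes NumPy, and the axis, angle and mirror planes are arbitrary choices of ours: it builds the reflection matrices \([{\mathbf{b}}{\boldsymbol{\cdot\sigma}}]\) and \([{\mathbf{a}}{\boldsymbol{\cdot\sigma}}]\) for two planes containing \({\mathbf{n}}\) with an angle \(\varphi/2\) between them, and verifies that their product equals \(\cos(\varphi/2)\,{\mathds{1}}-\imath\sin(\varphi/2)\,[{\mathbf{n}}{\boldsymbol{\cdot\sigma}}]\).

```python
# Illustrative numerical check of Eq. 8 (a sketch assuming NumPy; the axis,
# angle and mirror planes below are arbitrary choices of ours).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def dot_sigma(v):
    """Matrix v.sigma for a 3-vector v."""
    return v[0] * sx + v[1] * sy + v[2] * sz

phi = 0.73                       # rotation angle
n = np.array([0.0, 0.0, 1.0])    # rotation axis

# Unit normals of two reflection planes that both contain n and enclose an
# angle phi/2 (measured from a to b).
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(phi / 2), np.sin(phi / 2), 0.0])

R_from_reflections = dot_sigma(b) @ dot_sigma(a)
R_rodrigues = np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * dot_sigma(n)

assert np.allclose(R_from_reflections, R_rodrigues)
```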
#### 2.1.3 A parallel formalism for vectors
By construction, this representation contains for the moment only group elements. Of course, it would be convenient if we were also able to calculate the action of the group elements on vectors. This can be done by developing a _parallel formalism for the matrices \({\mathbf{A}}\), wherein \({\mathbf{A}}\) takes a different meaning and obeys a different kind of algebra._ As the matrix \({\mathbf{A}}\) contains the components of the vector \({\mathbf{a}}\) we can conceive the idea of taking the matrix \({\mathbf{A}}\) also as the representation of the unit vector \({\mathbf{a}}\). This idea can be generalized to a vector \({\mathbf{v}}\) of arbitrary length, which is then represented by \({\mathbf{V}}=v_{x}\sigma_{x}+v_{y}\sigma_{y}+v_{z}\sigma_{z}\). We have then \({\mathbf{V}}^{2}=-(\det{\mathbf{V}}){\mathds{1}}=v^{2}{\mathds{1}}\). This idea that within SU(2) a vector \({\mathbf{v}}\in{\mathbb{R}}^{3}\) is represented by a matrix \({\mathbf{v}}{\boldsymbol{\cdot\sigma}}\) according to the isomorphism:
\[{\mathbf{v}}=(v_{x},v_{y},v_{z})\;\leftrightarrow\;{\mathbf{V}}={\mathbf{v}}{\boldsymbol{\cdot\sigma}}=\begin{pmatrix}v_{z}&v_{x}-\imath v_{y}\\ v_{x}+\imath v_{y}&-v_{z}\end{pmatrix},\] (9)
was introduced by Cartan [1]. From \({\mathbf{V}}^{2}=v^{2}{\mathds{1}}\) it follows that: . To find out how the group acts on these representations of vectors, it suffices to observe that the reflection \(A\), defined by the unit vector \({\mathbf{a}}\), transforms \({\mathbf{v}}\) into \(A({\mathbf{v}})={\mathbf{v}}-2({\mathbf{v\cdot a}})\,{\mathbf{a}}\). Expressed in the matrices this yields: \({\mathbf{V}}\rightarrow-{\mathbf{AVA}}\). We see that this transformation law for vectors \({\mathbf{v}}\) is quadratic in \({\mathbf{A}}\) in contrast with the transformation law for group elements \(g\), which is linear: \({\mathbf{G}}\rightarrow{\mathbf{AG}}\). _Vectors transform thus quadratically as rank-2 tensor products of spinors, whereas spinors transform linearly._
Both in the representation matrices \({\mathbf{A}}={\mathbf{a}}{\boldsymbol{\cdot\sigma}}\) for reflections \(A\) and \({\mathbf{V}}={\mathbf{v}}{\boldsymbol{\cdot\sigma}}\) for vectors \({\mathbf{v}}\), \(\sigma_{x}\), \(\sigma_{y}\) and \(\sigma_{z}\) are thus the Pauli matrices, and the symbol \({\hat{=}}\) serves to flag the introduction of a (rather confusing) stenographic notation \({\boldsymbol{\sigma}}=(\sigma_{x},\sigma_{y},\sigma_{z})\). The Pauli matrices are thus the images of the basis vectors \({\mathbf{e}}_{x}\), \({\mathbf{e}}_{y}\), \({\mathbf{e}}_{z}\) in the isomorphism (\({\mathbf{e}}_{j}\leftrightarrow\sigma_{j}\)) defined by Eq. 9. The drawback of the convenient convention to use the shorthand \({\boldsymbol{\sigma}}\) for \((\sigma_{x},\sigma_{y},\sigma_{z})\) is that it may create the misleading impression that \({\mathbf{v}}{\boldsymbol{\cdot\sigma}}\) represents a scalar, which it does not. It just represents the counterpart in the isomorphism of what would be a pedantic notation for \(v_{x}{\mathbf{e}}_{x}+v_{y}{\mathbf{e}}_{y}+v_{z}{\mathbf{e}}_{z}={\mathbf{v}}\).
The reader will notice that the definition \({\mathbf{V}}={\mathbf{v}}{\boldsymbol{\cdot\sigma}}\) with \({\mathbf{V}}^{2}=v^{2}{\mathds{1}}\) is analogous to Dirac’s way of introducing the gamma matrices to write the energy-momentum four-vector as \(E\gamma_{t}+c{\mathbf{p}}{\boldsymbol{\cdot\gamma}}\) and postulating \((E\gamma_{t}+c{\mathbf{p}}{\boldsymbol{\cdot\gamma}})^{2}=(E^{2}-c^{2}p^{2}){ \mathds{1}}\). In other words, it is the metric that defines the whole formalism, because we are considering groups of metric-conserving transformations (as the definition of a geometry in the philosophy of Felix Klein’s Erlangen program). For more information about the calculus on the rotation and reversal matrices, we refer the reader to reference [2]. Let us just mention that as a reflection \(A\) works on a vector \({\mathbf{v}}\) according to \({\mathbf{V}}\rightarrow-{\mathbf{AVA}}=-{\mathbf{AVA}}^{-1}\), a rotation \(R=BA\) will work on it according to \({\mathbf{V}}\rightarrow{\mathbf{BAVAB}}={\mathbf{RVR}}^{-1}={\mathbf{RVR}}^{\dagger}\). The identity \({\mathbf{R}}^{-1}={\mathbf{R}}^{\dagger}\) explains why we end up with SU(2).
In summary, there are two parallel formalisms in SU(2), one for the vectors and one for the group elements. In both formalisms a matrix \({\mathbf{V}}={\mathbf{v}}{\boldsymbol{\cdot\sigma}}\) can occur but with different meanings. In a formalism for group elements, \({\mathbf{v}}\) fulfills the rôle of the unit vector \({\mathbf{a}}\) that defines the reflection \(A\), such that we must have \(|{\mathbf{v}}|=1\), and then the reflection matrix \({\mathbf{V}}={\mathbf{A}}\) transforms according to: \({\mathbf{A}}\rightarrow{\mathbf{G}}{\mathbf{A}}\) under a group element \(g\) with matrix representation \({\mathbf{G}}\). The new group element represented by \({\mathbf{GA}}\) will then no longer be a reflection that can be associated with a unit vector like it was the case for \({\mathbf{A}}\). In a formalism of vectors, \(|{\mathbf{v}}|\) can be different from \(1\) and the matrix \({\mathbf{V}}\) (that represents now a vector) transforms according to: \({\mathbf{V}}\rightarrow{\mathbf{G}}{\mathbf{V}}{\mathbf{G}}^{-1}={\mathbf{G}}{ \mathbf{V}}{\mathbf{G}}^{\dagger}\). Here \({\mathbf{G}}{\mathbf{V}}{\mathbf{G}}^{\dagger}\) can be associated again with a vector.
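The two parallel transformation laws can likewise be checked numerically. The following sketch assumes NumPy and uses random test vectors of our own choosing: it verifies that \(-{\mathbf{A}}{\mathbf{V}}{\mathbf{A}}\) reproduces the reflected vector \({\mathbf{v}}-2({\mathbf{v\cdot a}})\,{\mathbf{a}}\) and that \({\mathbf{R}}{\mathbf{V}}{\mathbf{R}}^{\dagger}\) reproduces the classically rotated vector.

```python
# Illustrative sketch (assuming NumPy; the random test vectors are ours) of the
# two parallel transformation laws: V -> -A V A for a reflection and
# V -> R V R^dagger for a rotation, compared with the familiar formulas in R^3.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def dot_sigma(v):
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(0)
v = rng.normal(size=3)                            # arbitrary vector
a = rng.normal(size=3); a /= np.linalg.norm(a)    # unit normal of a mirror plane
A, V = dot_sigma(a), dot_sigma(v)

# Reflection: -A V A corresponds to v - 2 (v.a) a
assert np.allclose(-A @ V @ A, dot_sigma(v - 2 * np.dot(v, a) * a))

# Rotation: R V R^dagger corresponds to the classical rotation of v by phi about n
phi, n = 1.1, np.array([0.0, 0.0, 1.0])
R = np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * dot_sigma(n)
v_rot = (v * np.cos(phi) + np.cross(n, v) * np.sin(phi)
         + n * np.dot(n, v) * (1 - np.cos(phi)))
assert np.allclose(R @ V @ R.conj().T, dot_sigma(v_rot))
```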
#### 2.1.4 Other approaches
The approach outlined above is non-standard. The standard treatment follows in general a linearization procedure. One starts the development by establishing the quadratic formalism \({\mathbf{V}}\rightarrow-{\mathbf{A}}{\mathbf{V}}{\mathbf{A}}\) and \({\mathbf{V}}\rightarrow{\mathbf{R}}{\mathbf{V}}{\mathbf{R}}^{\dagger}\) for vectors. The way back to a linear formalism \({\mathbf{X}}\rightarrow{\mathbf{R}}{\mathbf{X}}\) (for group elements) or \({\boldsymbol{\xi}}\rightarrow{\mathbf{R}}{\boldsymbol{\xi}}\) (for their spinors) is then tricky and shrouded in mystery. It amounts so to say to defining a spinor as a kind of a square root of an isotropic vector.
This runs for instance as follows. One first considers a triad of normalized, mutually orthogonal basis vectors \(({\mathbf{e}}_{x},{\mathbf{e}}_{y},{\mathbf{e}}_{z})\). One then observes that \(({\mathbf{e}}^{\prime}_{x},{\mathbf{e}}^{\prime}_{y},{\mathbf{e}}^{\prime}_{z})=\)\((R({\mathbf{e}}_{x}),R({\mathbf{e}}_{y}),R({\mathbf{e}}_{z}))\) defines \(R\) unambiguously. There is a one-to-one correspondence between the rotated triads \(({\mathbf{e}}^{\prime}_{x},{\mathbf{e}}^{\prime}_{y},{\mathbf{e}}^{\prime}_{z})\) and the rotations \(R\) that produced them by acting on the chosen reference triad \(({\mathbf{e}}_{x},{\mathbf{e}}_{y},{\mathbf{e}}_{z})\). In a second stage, one considers that there is also a one-to-one correspondence between isotropic vectors \({\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e}}^{\prime}_{y}\) and triads \(({\mathbf{e}}^{\prime}_{x},{\mathbf{e}}^{\prime}_{y},{\mathbf{e}}^{\prime}_{z})\). By separating the real and imaginary parts in \({\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e}}^{\prime}_{y}\) one can reconstruct \({\mathbf{e}}^{\prime}_{x}\) and \({\mathbf{e}}^{\prime}_{y}\), while \({\mathbf{e}}^{\prime}_{z}={\mathbf{e}}^{\prime}_{x}\wedge{\mathbf{e}}^{\prime} _{y}\).
The isotropic vector is thus a parameter set that defines a rotation in a one-to-one fashion. It is thus a set of complex coordinates for a rotation. The coordinates of the isotropic vector \((x^{\prime},y^{\prime},z^{\prime})={\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e} }^{\prime}_{y}\) are thus not position coordinates but rotation coordinates. They do not define a position in \({\mathbb{R}}^{3}\) because they were not introduced to do so. The only real point \((0,0,0)\in{\mathbb{R}}^{3}\) that belongs to the isotropic cone and could define a position does not define a triad. The complex coordinates define nevertheless an object in real Euclidean space, viz. the triad. Therefore spinors, which (as we will see) represent the information about these isotropic vectors and the corresponding triads, do turn in Euclidean space, despite the widespread opinion that they should be considered as defined in some abstract internal space like in the example of isospin [3]¹.
As \({\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e}}^{\prime}_{y}\) is a vector it transforms according to the rule \({\mathbf{V}}\rightarrow{\mathbf{R}}{\mathbf{V}}{\mathbf{R}}^{\dagger}\). One can then discover spinors by noting that \(\det\,[\,({\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e}}^{\prime}_{y}){ \boldsymbol{\cdot\sigma}}\,]=0\) because \(({\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e}}^{\prime}_{y})^{2}=0\). The rows and columns of \([\,({\mathbf{e}}^{\prime}_{x}+\imath{\mathbf{e}}^{\prime}_{y}){\boldsymbol{ \cdot\sigma}}\,]=0\) must therefore be proportional. This permits us to write:
(10)
where
(11)
The numbers \(\sqrt{2}\) in Eq. 10 are introduced to satisfy the normalization condition \(\xi_{0}\xi_{0}^{*}+\xi_{1}\xi_{1}^{*}=1\). This way one linearizes the quadratic formalism \({\mathbf{V}}\rightarrow{\mathbf{R}}{\mathbf{V}}{\mathbf{R}}^{\dagger}\) for vectors in terms of a linear formalism \({\boldsymbol{\xi}}\rightarrow{\mathbf{R}}{\boldsymbol{\xi}}\) for spinors. This is then analogous to the way Dirac linearized the Klein-Gordon equation. The approach enhances our understanding of the formalism, as it permits us to see how the information about the rotated basis that defines the rotation is hidden inside the spinor. But by using it as the starting point for deriving the formalism, a spinor in SU(2) remains a mysterious object, a kind of square root of an isotropic vector, while the essential point, that it is just a rotation, remains hidden. It is conceptually much easier to understand the idea that a vector is a tensor quantity of rank 2 in terms of spinors according to our approach, than to grasp the idea that a spinor would be a kind of square root of an isotropic vector, according to the standard approach. There are several other instances where the standard approach keeps the reader at bay in puzzlement on a sidetrack. This renders the presentation abstract and purely algebraic, while the simple underlying geometrical ideas are lost. This unsatisfactory situation is just copied into quantum mechanics, which relies on the group theory. As we will see, in quantum mechanics we pay cash for the loss of geometrical insight that results from using the group theory as a black box of abstract algebra. We may mention that there is yet a third approach to spinors, based on the stereographic projection. As discussed in reference [2], this derivation also tends to conceal the geometrical ideas by installing a confusion between spinors and vectors, as it may make the reader believe that the information content of a spinor would be that of a position vector of a point of the unit sphere.
### The homogeneous Lorentz group
Also here the basic idea is that a spinor should be a set of coordinates for a group element. The conditions the analogues of the Pauli matrices will have to satisfy are now \(\gamma_{\mu}\gamma_{\nu}+\gamma_{\nu}\gamma_{\mu}=2g_{\mu\nu}{\mathds{1}}\). There is no fourth \(2\times 2\) matrix that would anti-commute with all the Pauli matrices and therefore could be used to represent all reflections in Minkowski space-time and to generate in a second stage all Lorentz transformations. This problem can be overcome in the \(4\times 4\) representation based on the Dirac matrices, where \(a_{\mu}\gamma^{\mu}\) represents the four-vector \((a_{t},{\mathbf{a}})\) and \(\gamma^{t}\neq{\mathds{1}}\). We have then to postulate \((\sum a_{\mu}\gamma^{\mu})^{2}=(a_{t}^{2}-{\mathbf{a\cdot a}})\,{\mathds{1}}\). The simplest representation of the Dirac matrices is the Weyl presentation:
\[\begin{pmatrix}0&a_{t}{\mathds{1}}+{\mathbf{a\cdot}}{\boldsymbol{\sigma}}\\ a_{t}{\mathds{1}}-{\mathbf{a\cdot}}{\boldsymbol{\sigma}}&0\end{pmatrix}=a_{t}\gamma^{t}+a_{x}\gamma^{x}+a_{y}\gamma^{y}+a_{z}\gamma^{z}=a_{\mu}\gamma^{\mu}\,{\hat{=}}\,(a_{t},{\mathbf{a}}){\boldsymbol{\cdot}}(\gamma^{t},{\boldsymbol{\gamma}}).\] (12)
This representation is much easier to manipulate than the traditional textbook representation, because due to the block structure of the Weyl representation the formalism reduces to two sets of calculations with \(2\times 2\) matrices. We can write them as \({\mathbf{A}}=a_{t}{\mathds{1}}+{\mathbf{a\cdot}}{\boldsymbol{\sigma}}\) and \({\mathbf{A}}^{\star}=a_{t}{\mathds{1}}-{\mathbf{a\cdot}}{\boldsymbol{\sigma}}\). These matrices occur as blocks on the secondary diagonal. They are both matrices that represent four-vectors in a SL(2,\({\mathbb{C}}\)) representation, but in two different types of SL(2,\({\mathbb{C}}\)) representation. Each of the two vector matrices can be used as a starting point to set up a representation SL(2,\({\mathbb{C}}\)) of the Lorentz group [4]. The matrix \({\mathbf{A}}^{\star}\) is obtained from \({\mathbf{A}}\) by the parity transformation \({\mathbf{a}}\rightarrow-{\mathbf{a}}\). The SL(2,\({\mathbb{C}}\)) representations that work on the vector matrices are tricky. The formalism no longer permits using a unit four-vector \((a_{t},{\mathbf{a}})\) to define a general reflection in SL(2,\({\mathbb{C}}\)), as there is no fourth Pauli matrix to represent reflections with respect to \({\mathbf{e}}_{t}\). Instead, \(a_{t}\) is associated with \({\mathds{1}}\).
The SL(2,\({\mathbb{C}}\)) representations thus do not permit a clear distinction between \({\mathbf{e}}_{t}\) and the identity element \({\mathds{1}}\) of the Lorentz group, which are both represented by \({\mathds{1}}\). This difficulty is removed by the introduction of the gamma matrices, where clearly \(\gamma_{t}\neq{\mathds{1}}\). Nevertheless, if one contents oneself with describing only true Lorentz transformations, which are products of an even number of space-time reflections, we can see by following the fate of the matrices within the Weyl representation that the \(2\times 2\) formalism builds a representation, whereby the four-vector \({\mathbf{A}}=a_{t}{\mathds{1}}-{\mathbf{a\cdot}}{\boldsymbol{\sigma}}\) transforms according to \({\mathbf{A}}\rightarrow{\mathbf{LAL}}^{\dagger}\), where \({\mathbf{L}}^{\dagger}\neq{\mathbf{L}}^{-1}\). In the other SL(2,\({\mathbb{C}}\)) representation, \({\mathbf{A}}^{\star}\) transforms according to \({\mathbf{A}}^{\star}\rightarrow{\mathbf{L}}^{-1}{\mathbf{A}}^{\star}{\mathbf{L}}^{-1\dagger}\). We see that in the Weyl formalism the \(2\times 2\) blocks are just sequences where the presence and absence of the symbol \(\star\) alternates, e.g. \({\mathbf{V}}_{2n}^{\star}{\mathbf{V}}_{2n-1}\cdots{\mathbf{V}}_{2}^{\star}{\mathbf{V}}_{1}\). The algebra in the other block is just given by inverting the presences and absences of the \(\star\) symbol. Everything that happens in one \(2\times 2\) block is thus defined by what happens in the other \(2\times 2\) block, such that we can use the \(2\times 2\) blocks as a shorthand for what happens in the \(4\times 4\) formalism. We may note that \({\mathbf{V}}^{\star}\) has the meaning of \((v_{t},-{\mathbf{v}})\) in the representation without stars. This justifies the use we will make of the \(2\times 2\) matrices in the following sections.
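As an illustration of the defining postulate \((a_{\mu}\gamma^{\mu})^{2}=(a_{t}^{2}-{\mathbf{a\cdot a}})\,{\mathds{1}}\), the following sketch (NumPy assumed) assembles \(a_{\mu}\gamma^{\mu}\) with \({\mathbf{A}}\) and \({\mathbf{A}}^{\star}\) on the secondary diagonal and checks the square numerically; the concrete block placement follows the usual chiral convention and is our assumption, not a prescription taken from the paper.

```python
# Illustrative sketch (assuming NumPy) of (a_mu gamma^mu)^2 = (a_t^2 - a.a) 1,
# with A = a_t 1 + a.sigma and A* = a_t 1 - a.sigma on the secondary diagonal;
# the block placement follows the usual chiral convention (our assumption).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def dot_sigma(v):
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(1)
at, a = rng.normal(), rng.normal(size=3)       # arbitrary four-vector (a_t, a)

A_mat = at * I2 + dot_sigma(a)                 # A  = a_t 1 + a.sigma
A_star = at * I2 - dot_sigma(a)                # A* = a_t 1 - a.sigma
slash_a = np.block([[Z2, A_mat], [A_star, Z2]])    # a_mu gamma^mu

assert np.allclose(slash_a @ slash_a, (at**2 - np.dot(a, a)) * np.eye(4))
```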
It is no longer possible to cram all the information about a general Lorentz transformation that is coded in a one-to-one fashion within a SL(2,\({\mathbb{C}}\)) matrix into a single \(2\times 1\) spinor as was the case in SU(2). Fortunately, we will not have to bother about this technicality in this paper. Once again, we refer the reader to reference [2] for more details about the solution of this problem and the group calculus.
We must finally point out that a representation always has its own internal self-consistent logic, such that there are no grounds to question any result correctly derived within a given representation by drawing in considerations from outside the context of that representation.
## 3 Lorentz Symmetry of Electromagnetism
### Some simple algebra in SL(2,\({\mathbb{C}}\))
#### 3.1.1 The fields
In view of the facts outlined in Subsection 2.2, in SL(2,\({\mathbb{C}}\)) the four-gradient \(({\partial\over{\partial ct}},{\mathbf{\nabla}})\) is represented by \({\partial\over{\partial ct}}{\mathds{1}}-{\mathbf{\nabla\cdot}}{\boldsymbol{ \sigma}}\). Analogously, the four-potential \((V,c{\mathbf{A}})\) is represented by \(V{\mathds{1}}-c{\mathbf{A\cdot}}{\boldsymbol{\sigma}}\). We can now check what will happen if we “multiply” these two matrices. Using the identity \([\,{\mathbf{a}}{\boldsymbol{\cdot\sigma}}\,]\,[\,{\mathbf{b}}{\boldsymbol{ \cdot\sigma}}\,]=({\mathbf{a\cdot b}}){\mathds{1}}+\imath\,[\ ({\mathbf{a}} \wedge{\mathbf{b}}){\boldsymbol{\cdot\sigma}}\,]\) we find:
\[[\,{\partial\over{\partial ct}}{\mathds{1}}-{\mathbf{\nabla\cdot}}{\boldsymbol{\sigma}}\,]\,[\,{V\over{c}}{\mathds{1}}-{\mathbf{A\cdot}}{\boldsymbol{\sigma}}\,]=\underbrace{[\,{1\over{c^{2}}}{\partial V\over{\partial t}}+{\mathbf{\nabla\cdot A}}\,]\,{\mathds{1}}}_{\text{Lorentz gauge}}\underbrace{\,-\,{1\over{c}}\,[\,({\mathbf{\nabla}}V+{\partial{\mathbf{A}}\over{\partial t}}){\boldsymbol{\cdot\sigma}}\,]}_{{1\over{c}}{\mathbf{E\cdot}}{\boldsymbol{\sigma}}}\underbrace{\,+\,\imath\,[\,({\mathbf{\nabla}}\wedge{\mathbf{A}}){\boldsymbol{\cdot\sigma}}\,]}_{\imath\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}}\] (13)
With the Lorentz gauge condition \({1\over{c^{2}}}{\partial V\over{\partial t}}+{\mathbf{\nabla\cdot A}}=0\), we obtain thus:
\[[\,{\partial\over{\partial ct}}{\mathds{1}}-{\mathbf{\nabla\cdot}}{\boldsymbol {\sigma}}\,]\,[\,{V\over{c}}{\mathds{1}}-{\mathbf{A\cdot}}{\boldsymbol{\sigma} }\,]={1\over{c}}[\,({\mathbf{E}}+\imath c{\mathbf{B}}){\boldsymbol{\cdot\sigma }}\,].\] (14)
We thus automatically recover the expressions for the Lorentz gauge condition, and for the electric and magnetic fields in terms of the potentials. The term \({\mathbf{E}}+\imath c{\mathbf{B}}\) is the electromagnetic field tensor. The presence of \(\imath\) in an expression can be seen to signal that it is a pseudo-vector or a pseudo-scalar². The vector \({\mathbf{E}}\) and pseudo-vector \({\mathbf{B}}\) are the symmetric and anti-symmetric three-component parts of the six-component field tensor. We thus see that symmetry is enough to recover all the definitions. It summarizes in a sense the reason why we need the theory of relativity, by showing that Lorentz symmetry is the symmetry that is compatible with the structure of the Maxwell equations. A whole textbook development is here elegantly summarized in one line of calculation. With this formalism, one can also write the four Maxwell equations jointly in one very simple matrix equation. It seems that this approach was first discovered by Majorana, but most of the time the presentation is less concise than here.
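The splitting in Eq. 13 can also be verified symbolically. The following sketch assumes SymPy; the symbol and function names for the potentials are our own choices. It applies the four-gradient matrix to the four-potential matrix and checks that the result is the Lorentz-gauge term plus \({1\over c}{\mathbf{E\cdot}}{\boldsymbol{\sigma}}+\imath\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\).

```python
# A symbolic check of Eq. 13 (illustrative sketch, assuming SymPy; the symbols
# and function names below are our own choices, not notation from the paper).
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
c = sp.symbols('c', positive=True)
V = sp.Function('V')(t, x, y, z)
Ax, Ay, Az = [sp.Function(f'A_{i}')(t, x, y, z) for i in 'xyz']

sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
I2 = sp.eye(2)

def dot_sigma(v):
    """Matrix v.sigma for a 3-component object v."""
    return v[0] * sx + v[1] * sy + v[2] * sz

# Four-potential matrix (V/c) 1 - A.sigma, acted upon by the four-gradient
# matrix (1/c) d/dt 1 - nabla.sigma (the derivatives act on the entries).
M = (V / c) * I2 - dot_sigma([Ax, Ay, Az])
P = M.diff(t) / c - (sx * M.diff(x) + sy * M.diff(y) + sz * M.diff(z))

gauge = V.diff(t) / c**2 + Ax.diff(x) + Ay.diff(y) + Az.diff(z)
E = [-V.diff(x) - Ax.diff(t), -V.diff(y) - Ay.diff(t), -V.diff(z) - Az.diff(t)]
B = [Az.diff(y) - Ay.diff(z), Ax.diff(z) - Az.diff(x), Ay.diff(x) - Ax.diff(y)]

# Lorentz-gauge term + (1/c) E.sigma + i B.sigma, as in Eq. 13
target = gauge * I2 + dot_sigma(E) / c + sp.I * dot_sigma(B)
assert (P - target).applyfunc(sp.simplify) == sp.zeros(2, 2)
```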
#### 3.1.2 The interactions
The charge-current four-vector \((\rho,{\mathbf{j}}/c)\) for a moving point charge \(q\) with velocity \({\mathbf{v}}\) is (up to the Lorentz factor \(\gamma=(1-v^{2}/c^{2})^{-1/2}\)) given by \((q,-q{\mathbf{v}}/c)\), which is represented by \(q{\mathds{1}}-{q\over{c}}{\mathbf{v\cdot}}{\boldsymbol{\sigma}}\). Let us now couple this quantity with the electromagnetic-field tensor and calculate³:
\[[\,q{\mathds{1}}-{q\over{c}}{\mathbf{v\cdot}}{\boldsymbol{\sigma}}\,]\,[\,({ \mathbf{E}}+\imath c{\mathbf{B}}){\boldsymbol{\cdot\sigma}}\,].\] (15)
We obtain then:
\[\boxed{-\,[\,{q\over{c}}{\mathbf{v\cdot E}}\,]\,{\mathds{1}}+q({\mathbf{E}}+{ \mathbf{v}}\wedge{\mathbf{B}}){\boldsymbol{\cdot\sigma}}-\imath\,[\,q{\mathbf{ v\cdot B}}\,]\,{\mathds{1}}+\imath cq\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma }}\,]-\imath{q\over{c}}[\,({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot \sigma}}\,]\quad}\] (16)
The whole paper is devoted to the meaning and the consequences of this single equation. Again, the presence of \(\imath\) signals here pseudo-scalars and pseudo-vectors, while the real terms correspond to scalars and vectors, as can be checked from the behaviour of the various terms under a parity transformation. We recognize here the Lorentz force \({\mathbf{F}}=q({\mathbf{E}}+{\mathbf{v}}\wedge{\mathbf{B}})\) and the power-related term \(q{\mathbf{E\cdot v}}/c\). In fact, the term \({\mathbf{F\cdot v}}=q{\mathbf{E\cdot v}}\) represents the power corresponding to the work \({\mathbf{F\cdot}}d{\mathbf{r}}\) done against the force \({\mathbf{F}}\) during an infinitesimal displacement \(d{\mathbf{r}}\) over a time interval \(dt\). It is well known that the four-vector generalization of the force three-vector \({\mathbf{F}}\) is \(({\mathbf{F\cdot v}}/c,{\mathbf{F}})\), which contains this additional power-related term (up to a constant \(c\)). As the term \(q{\mathbf{E\cdot v}}\) is here divided by \(c\), the result has again the dimension of a force. We will call such terms therefore scalar force terms. The other terms in Eq. 16 are all imaginary and they may at first sight look less familiar.
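The expansion in Eq. 16 is a purely algebraic identity and can be spot-checked for arbitrary numerical values of \(q\), \({\mathbf{v}}\), \({\mathbf{E}}\) and \({\mathbf{B}}\). The following sketch (NumPy assumed; the random test values are ours) compares the product of Eq. 15 with the five terms of Eq. 16.

```python
# Numerical spot check of Eq. 16 (illustrative sketch, assuming NumPy; q, c and
# the random vectors are arbitrary test values of our own choosing).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def dot_sigma(v):
    """Matrix v.sigma for a 3-vector v (possibly complex)."""
    return v[0] * sx + v[1] * sy + v[2] * sz

rng = np.random.default_rng(2)
q, c = 1.3, 2.0
v, E, B = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# Left-hand side: the product of Eq. 15
lhs = (q * I2 - (q / c) * dot_sigma(v)) @ dot_sigma(E + 1j * c * B)

# Right-hand side: the five terms of Eq. 16
rhs = (-(q / c) * np.dot(v, E) * I2
       + q * dot_sigma(E + np.cross(v, B))
       - 1j * q * np.dot(v, B) * I2
       + 1j * c * q * dot_sigma(B)
       - 1j * (q / c) * dot_sigma(np.cross(v, E)))

assert np.allclose(lhs, rhs)
```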
## 4 Magnetic Monopoles
### Symmetry issue
There is a surprise in Eq. 16 in that it is seen to exhibit a complete symmetry between the electric and magnetic force terms. Each of the imaginary terms in Eq. 16 corresponds to a term that is a relativistic counterpart of a term in \(({\mathbf{F\cdot v}}/c,{\mathbf{F}})\) obtained by using the substitution \({\mathbf{E}}\to c{\mathbf{B}}\), \(c{\mathbf{B}}\rightarrow-{\mathbf{E}}\). The addition of these terms is necessary to obtain full relativistic symmetry for the total result, just like adding \(\imath c{\mathbf{B}}\) to \({\mathbf{E}}\) is necessary to obtain an expression with full relativistic symmetry. Such a perfect symmetry in the forces is something that is believed to occur only if magnetic monopoles were to exist. In such an overall symmetry, the magnetic monopole would be the symmetric counterpart of the electric monopole. It will also not have escaped the attention of the reader that the imaginary three-component quantities that occur in Eq. 16 describe exactly the force exerted by an electromagnetic field on a magnetic monopole \(q_{m}\):
\[{\mathbf{F}}_{m}=q_{m}\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]-{q_{m} \over{c^{2}}}[\,({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot\sigma}}\,],\] (17)
provided we take \(q_{m}=cq\).⁴ The one-component quantity \(-\imath\,[\,q{\mathbf{v\cdot B}}\,]\,{\mathds{1}}\) is the corresponding power-related term that completes the force four-vector. It looks therefore as though we get magnetic monopoles out of Eq. 16, while at face value we have not introduced any magnetic monopoles in Eq. 15. _All we have introduced is the charge-current four-vector of one single moving point charge within a formalism that automatically accounts for Lorentz symmetry._ The terms in Eq. 16 are all forces as Eq. 15 actually generalizes a term \([\,q\,{\mathds{1}}\,]\,[\,{\mathbf{E}}{\boldsymbol{\cdot\sigma}}\,]\) in a rest frame to a moving frame by Lorentz covariance. Eq. 16 gives us thus the most general possible expression for an electromagnetic force. It is obvious that one can find a frame wherein such a most general situation is realized. It therefore looks as though invoking magnetic monopoles as a mechanism to obtain the symmetry exhibited by Eq. 16 could be a bit far-fetched. All terms in Eq. 16 are referring to phenomena that are occurring to a single particle, not to various different particles.
### Point-like rotational motion
We will discuss in Subsubsection 6.1.3 how the whole theory of electromagnetism just reduces to the description of interactions of charges and currents with other charges and currents. A magnetic field is produced by moving charges. A constant magnetic field is thus a mathematical expedient to describe moving charges as a static non-moving phenomenon. It will be argued along similar lines that a magnetic monopole is just a mathematical construct to treat a moving electric charge as a static non-moving quantity.
This may look like an absurd statement if we think of the charges as performing a uniform rectilinear motion that can be described by an appropriate Lorentz transformation. However, a charge that is in uniform circular motion in the laboratory frame, whereby the radius of the circle is very small such that it would look like a point to the naked eye, could indeed correspond intuitively to a phenomenon at rest with respect to the laboratory frame. As we will argue, this is the seed of the idea behind identifying the force terms that occur in Eq. 17 as forces acting on magnetic monopoles. Pushing the idea of a motion that cannot be detected to the extreme, a monopole at rest can be imagined as the limit of a charge traveling on a circular orbit whose radius tends to zero such that the orbit shrinks to a point. It will then just become a charge that is rotating in a fixed position (because the precession does not vanish when we take the limit). In considering such a limit we are defining a new mathematical object, somewhat in the same way as Laurent Schwartz [5] defined the mathematical distribution that would correspond to a point-like dipole. This idea of shrinking the orbit to a point will be further elaborated below in Subsection 4.5 and in Subsection 7.2.
From this point of view, magnetic monopoles and magnetic fields are both just theoretical constructions to describe confined motion as a static phenomenon. Coining such mathematical quantities might look like a stroke of genius, but history shows that this was done unwittingly, as magnetic fields were just introduced as a phenomenological tool to describe the observations in terms of static quantities before their physics was truly understood in terms of motion. They were thus defined based on a visual illusion of rest. Of course, the idea of treating motion as a phenomenon at rest is somewhat tricky and beyond guessing. If not clearly spelled out, it may thus easily lead to confusion.
### Quantization issue
We may note that Dirac’s argument that the existence of monopoles would lead to the quantization of charge amounts to postulating:
\[{qq_{m}\over{2\pi\epsilon_{0}\hbar c^{2}}}\in{\mathbb{Z}}.\] (18)
For an electron with charge \(q\) and its associated magnetic monopole charge \(q_{m}=cq\), the quantity that is required to be an integer becomes then:
\[{qq_{m}\over{2\pi\epsilon_{0}\hbar c^{2}}}=2\alpha,\quad{\text{where:}}\quad \alpha={q^{2}\over{4\pi\epsilon_{0}\hbar c}}\quad{\text{is~{}the~{}fine- structure~{}constant}}.\] (19)
As \(\alpha\approx 1/137\), the prediction \(2\alpha\in{\mathbb{Z}}\) is way off, but by rewriting Eq. 19 as:
\[{qq_{m}\over{2\pi\alpha\epsilon_{0}c^{2}}}=\hbar,\] (20)
and considering \(\alpha\) as a constant of nature, Dirac’s argument could perhaps be saved. In fact, as orbital angular momentum occurs in multiples of \(\hbar\), charge would have to come in multiples of \(q\). However, in particle physics, \(\alpha\) is not considered to be a constant.
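For the record, the numbers behind the statement that \(2\alpha\) is far from an integer are easily reproduced. The snippet below is a minimal numerical aside using CODATA values for the constants; it is not part of the argument.

```python
# Minimal numerical aside (constants are CODATA values; this merely restates
# the estimate quoted above, it is not part of the argument).
import math

q = 1.602176634e-19      # elementary charge [C]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
c = 2.99792458e8         # speed of light [m/s]

alpha = q**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha   = 1/{1 / alpha:.3f}")   # ~ 1/137.036
print(f"2*alpha = {2 * alpha:.5f}")     # ~ 0.0146, nowhere near an integer
```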
### Refining the concepts
There are several attitudes one can adopt with respect to these considerations. The first one would be to reject them altogether with contempt, on the grounds that they seem to question the official viewpoint of hard-won “established science”. However, an alternative attitude is possible. Before we can address it we must first further develop our _Weltbild_ for the magnetic monopole.
In the Appendix we show that an electron traveling at a velocity \({\mathbf{v}}\) can be associated with a singular current (that does not have to flow along a closed loop). It will become obvious from this approach that a single moving charge can indeed be considered as a “magnetic charge” with a magnetic moment. This magnetic moment could be called a _monopole moment_ because there is only one charge in motion, in contrast with the macroscopic situation in e.g. a circular current loop where there are many such charges and we talk about a dipole moment. The main difference is that one cannot really claim that there exists a torque pulling on a single charge on a circular orbit described as a loop, while one can when there is more than one charge traveling around the loop. In this respect, the magnetic moment of a single magnetic charge can be called a “magnetic monopole moment”, where it has to be emphasized that there is _no hyphen_ between “magnetic” and “monopole” because “magnetic” refers to a magnetic moment, not to a magnetic monopole.
A magnetic monopole is an altogether different and unrelated concept. Not all single “magnetic charges” can be considered as magnetic monopoles. Only the magnetic charges that correspond to point-like rotational motion will be considered as true magnetic monopoles. Magnetic monopoles will thus not be new physical particles one would have to search for to confirm Dirac’s predictions. They will appear to be just a mathematical means to deal with moving charges whose motion remains hidden to the eye by remaining confined “inside a point”, with the effect that we think that we are dealing with a purely static situation. There is then a confusion that has to be avoided. The “magnetic monopole moment” of a magnetic monopole could be called a magnetic-monopole moment _(with a hyphen)_, where the latter would be an ellipsis for a magnetic-monopole’s magnetic monopole moment.
There is an important reason for making the distinction between “magnetic charges” corresponding to currents \(q{\mathbf{v}}\) and true point-like magnetic monopoles. If we associated magnetic monopoles with the visible, not point-like currents \(q{\mathbf{v}}\), then the classification of the interaction terms would be wrong. The criterion to classify the terms would then be whether some term \({\mathbf{j}}=q{\mathbf{v}}\) occurs in them or not. The part \(q({\mathbf{v}}\wedge{\mathbf{B}})\) would then be due to the interaction of the magnetic monopole with the magnetic field, while it is conventionally attributed to the interaction of the moving electric charge with the magnetic field. On the other hand, the part \(cq{\mathbf{B}}\) would be due to the interaction of the electric charge with the magnetic field, while it is conventionally attributed to the interaction of the magnetic monopole with the magnetic field.
The classification must rather be based on a distinction between point-like hidden and not point-like visible currents. We will show in Subsection 7.2 that the terms in Eq. 17 are related to precession. Such a precession can be mentally visualized as a kind of point-like hidden motion and it has hidden energy we must account for if we want to get our calculations right. The terms in Eq. 17 correspond thus to hidden rotational motion rather than to translational motion (which is conceptually always visible). These rotational effects can be described also in terms of a vorticity (as will be discussed in Subsubsection 7.2.3). The duality between the force in Eq. 17 and the Lorentz force is thus rather based on a duality between non point-like visible (rotational and translational) and point-like invisible (rotational) motion than on a duality between electric charges and magnetic charges.
### Disentangling the two unrelated issues of symmetry and quantization
On the basis of all this, we can and _must_ now also make a further distinction between two concepts of magnetic monopoles. They should not be confused because they address two completely unrelated issues. The first issue is symmetry in the equations describing electromagnetism. As Eq. 17 shows, we get it for free. We do not need to postulate the existence of a true magnetic monopole to obtain the symmetrical counterpart of the Lorentz force in the form of Eq. 17. These terms are already there and we will show that they have already been experimentally observed in the form of the anomalous Zeeman effect and the spin-orbit coupling. Our first concept of magnetic monopole is thus only a mathematical hype. We rewrite \(cq\) as \(q_{m}\) just to enhance the symmetry, rendering it more evident. We obtain then a shiny interpretation for the symmetry revealed by the existence of the terms in Eq. 17, but as the derivations show, it is also possible to describe everything in a less glittering way that only calls for electric monopoles. There is no new physical quantity, the only truly existing physical quantity is \(q\).
The second issue is the quantization of charge, which we do not get for free at all, which is why Dirac introduced his magnetic-monopole concept. Dirac’s argument does not explain everything, as it hinges on the quantization of angular momentum, which is also just an empirical fact. We know very well how to _describe_ the quantization of angular momentum within quantum mechanics, but our intuition about it is not any better than our intuition about the quantization of charge. In fact, quantization of charge is conceptually a less difficult concept than quantization of angular momentum because charge is a fundamental quantity. Its definition does not rely on the definition of other quantities which are themselves not quantized, like \({\mathbf{r}}\) and \({\mathbf{p}}\) in the definition \({\mathbf{r}}\wedge{\mathbf{p}}\) of angular momentum. Of course, it is eventually just experimental evidence that can tell if Dirac’s construction is justified. If Dirac’s monopole existed then it would lead to a second equation that is completely analogous to Eq. 16, with \(q\) replaced by \(Q_{m}/c\) for some value of \(Q_{m}\neq q_{m}\).
The two different issues lead to two distinct concepts of magnetic monopoles, as we do not need Dirac’s monopole to get the symmetry between the electric and the magnetic force terms, while the monopole \(q_{m}\), which accounts for that symmetry, does not tally with the prediction based on Dirac’s construction needed to obtain quantization. Traditionally the two issues are mentioned in one breath. If, however, in following this tradition we merge the two _a priori completely unrelated issues_ into a single one, then it would appear as though Dirac’s construction misses the point underlying the introduction of \(q_{m}=cq\) in the first issue, which is that a magnetic monopole just serves to describe rotational motion confined to a point. In fact, Dirac’s monopole is a current (called a string) that stretches from some point to infinity, which is all but confined (why this is wrong will be further discussed in Subsubsection 8.6.2). If we blend the two issues, the existence of two equations, one with \(q_{m}\) and one with \(Q_{m}\), will also raise the question of why we use only \(Q_{m}\) and not also \(q_{m}\) in Dirac’s argument. If we do not confuse them, then one might perhaps think of an argument for why we only use \(Q_{m}\), based on the idea that \(q_{m}\) is not a “true monopole” in Dirac’s sense.
## 5 Anomalous Zeeman Effect and Spin-Orbit Coupling as Purely Classical Phenomena
### A striking similarity
There is an alluring way to interpret Eq. 16 completely differently as follows:
\[\begin{array}{l}\displaystyle -\,[\,{q\over{c}}{\mathbf{v\cdot E}}\,]\,{\mathds{1}}+q\,({\mathbf{E}}+{\mathbf{v}}\wedge{\mathbf{B}}){\boldsymbol{\cdot\sigma}}\\[2mm] \displaystyle -\,\imath\,[\,q{\mathbf{v\cdot B}}\,]\,{\mathds{1}}+\underbrace{\imath cq\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]}_{\text{looks similar to the anomalous Zeeman effect}}\;\underbrace{-\,\imath\,{q\over{c}}\,[\,({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot\sigma}}\,]}_{\text{looks similar to the spin-orbit coupling}}\end{array}\] (21)
We have labeled here the magnetic-monopole terms from Eq. 17 as “looking similar” to the anomalous Zeeman effect and to the spin-orbit coupling. What we want to refer to with the terminology “looks similar”, is that the imaginary terms on the second line of Eq. 21 correspond to force terms which are up to the proportionality factor \(-{\hbar\over{2m_{0}c}}\) equal to the energy terms derived from the Dirac theory for the anomalous Zeeman effect and for the spin-orbit coupling, _but without the correction for the Thomas precession.⁵_
As will be explained in Subsection 8, Eq. 21 is obtained from Eq. 15 which just expresses the product of the three terms \([\,q{\mathds{1}}-{q\over{c}}{\mathbf{v\cdot}}{\boldsymbol{\sigma}}\,]\,[\,{ \partial\over{\partial ct}}{\mathds{1}}-{\mathbf{\nabla\cdot}}{\boldsymbol{ \sigma}}\,]\,[\,{V\over{c}}{\mathds{1}}-{\mathbf{A\cdot}}{\boldsymbol{\sigma}}\,]\), while the correct physics for the anomalous Zeeman effect and the spin-orbit coupling are obtained by considering a variant \(-{\hbar\over{2m_{0}c}}\,[\,{\partial\over{\partial ct}}{\mathds{1}}-{\mathbf{ \nabla\cdot}}{\boldsymbol{\sigma}}\,]\,[\,q{\mathds{1}}+{q\over{c}}{\mathbf{v \cdot}}{\boldsymbol{\sigma}}\,]\,[\,{V\over{c}}{\mathds{1}}+{\mathbf{A\cdot}}{ \boldsymbol{\sigma}}\,]\), wherein the three types of terms are occurring in a different order.
For the derivation of the anomalous Zeeman term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]\), the change of order is of no consequence, such that the algebra that leads to the term \(\imath cq\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]\) in Eq. 21 is exactly the same as the one that occurs in the derivation of the anomalous Zeeman effect from the Dirac equation. That we obtain the expression for the anomalous Zeeman effect up to the factor \(-{\hbar\over{2m_{0}c}}\) in Eq. 21 is thus not a coincidence. Also for the calculation of the spin-orbit term the order of the three terms in the product does not matter. For both orders of the three terms the calculation does not account for the correction due to the Thomas precession.
There is a classical rationale for the correct calculation of the spin-orbit coupling that runs as follows. As can be seen e.g. from Eq. 22, the total magnetic field experienced by the electron in its co-moving frame is (to first approximation, whereby one neglects the factors \(\gamma\)) given by \({\mathbf{B}}^{\prime}={\mathbf{B}}-{1\over{c^{2}}}{\mathbf{v}}\wedge{\mathbf{E}}={\mathbf{B}}+{\mathbf{B}}_{n}\). The electric field of the nucleus thus gives rise to a magnetic field \({\mathbf{B}}_{n}=-{1\over{c^{2}}}\,{\mathbf{v}}\wedge{\mathbf{E}}\) in a frame that is co-moving with the traveling electron. The interaction of the electron with the magnetic field \({\mathbf{B}}_{n}\) gives rise to an anomalous Zeeman term \(-{q\hbar\over{2m_{0}}}[\,{\mathbf{B}}_{n}{\boldsymbol{\cdot\sigma}}\,]={\hbar\over{2m_{0}c}}\times{q\over{c}}[\,({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot\sigma}}\,]\). This finally leads to an interaction energy \({\hbar\over{2m_{0}^{2}c^{2}}}{1\over{r}}{\partial U\over{\partial r}}\). This has to be corrected for the Thomas precession \({\boldsymbol{\omega}}_{T}={1\over{2c^{2}}}{\mathbf{v}}\wedge{\mathbf{a}}={q\over{2m_{0}c^{2}}}{\mathbf{v}}\wedge{\mathbf{E}}\) of the electron in its orbit around the nucleus (see Eq. 30). According to Subsection 7.3, the corresponding energy is \(\hbar\omega_{T}/2={\hbar\over{4m_{0}^{2}c^{2}}}{1\over{r}}{\partial U\over{\partial r}}\), which has to be subtracted from the Zeeman term. The absolute value of the correction for the Thomas precession is half that of the Zeeman term, which is the reason why it is referred to as the Thomas half. Both terms on the second line of Eq. 21 are thus anomalous Zeeman terms due to magnetic fields in the rest frame of the electron, while the Thomas precession is a relativistic correction in the laboratory frame.
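The “Thomas half” can be made explicit by putting numbers into the two formulas quoted above. The following sketch (NumPy assumed; the units and test values are arbitrary choices of ours) merely restates those formulas and confirms that the Thomas energy \(\hbar\omega_{T}/2\) is half of the anomalous Zeeman energy scale set by \({\mathbf{B}}_{n}\), whatever \({\mathbf{v}}\) and \({\mathbf{E}}\) are.

```python
# Illustrative sketch (assuming NumPy; units and test values are arbitrary):
# it only restates the two formulas quoted above and checks that the Thomas
# energy hbar*omega_T/2 is half of the anomalous Zeeman energy scale from B_n.
import numpy as np

rng = np.random.default_rng(3)
q, m0, c, hbar = 1.0, 1.0, 137.0, 1.0     # any consistent set of units will do
v, E = rng.normal(size=3), rng.normal(size=3)

B_n = -np.cross(v, E) / c**2                                   # motional field
zeeman = (q * hbar / (2 * m0)) * np.linalg.norm(B_n)           # Zeeman scale
omega_T = (q / (2 * m0 * c**2)) * np.linalg.norm(np.cross(v, E))
thomas = hbar * omega_T / 2                                    # Thomas scale

assert np.isclose(thomas / zeeman, 0.5)                        # the Thomas half
```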
As mentioned above in Footnote 5, it is claimed in textbooks that the exact spin-orbit coupling term, including the correction for Thomas precession, can be derived from the Dirac theory (see e.g. [6]). We will show that this is a falsehood. The calculations that claim to derive the spin-orbit coupling with its correction for Thomas precession from the Dirac equation rely on wrong algebra that leads to a correct physical result. The traditional Dirac theory is in reality unable to derive either of the two terms containing \({\mathbf{v}}\wedge{\mathbf{E}}\) described above from the Dirac equation with minimal coupling. It only succeeds in deriving the correct result by introducing logical errors. Our approach is thus superior to the traditional one: by using the Dirac equation with the correct physical coupling, it is able to derive one of the two terms. The term it fails to derive is the correction for Thomas precession.
We may note that even an electron at rest within a magnetic field is subjected to the anomalous Zeeman effect. But in the absence of motion there is no Thomas precession, such that the anomalous Zeeman effect then requires no correction for Thomas precession. Of course, Thomas precession occurs in any type of motion. Thus there also exists a correction for Thomas precession in the case of a particle that is moving in a magnetic rather than in an electric field. However, the non-relativistic correction term for Thomas precession during the motion of an electron in a magnetic field \({\mathbf{B}}\) is given by \({\boldsymbol{\omega}}_{T}={1\over{2c^{2}}}{\mathbf{v}}\wedge{\mathbf{a}}={q\over{2m_{0}c^{2}}}{\mathbf{v}}\wedge({\mathbf{v}}\wedge{\mathbf{B}})\), which remains very small in the non-relativistic limit due to the presence of the factor \(v^{2}/c^{2}\).
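To quantify this smallness (the velocity value below is purely illustrative), note that \(|{\boldsymbol{\omega}}_{T}|\leq{|q|B\over{2m_{0}}}\,{v^{2}\over{c^{2}}}\), i.e. the Thomas term is suppressed by \(v^{2}/c^{2}\) relative to the frequency scale \({|q|B\over{2m_{0}}}\); for example

\[
{v^{2}\over{c^{2}}}\approx\left({10^{6}\over{3\times 10^{8}}}\right)^{2}\approx 1.1\times 10^{-5}\qquad{\rm for}\quad v=10^{6}\ {\rm m/s}.
\]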
As we want to discuss the anomalous Zeeman effect and the spin-orbit coupling in the rest of the paper, the similarities exhibited in Eq. 21 seem to be a good way to make the transition between the two parts of the paper. That we can derive the anomalous Zeeman effect and the spin-orbit effect this way from a variant of the calculation leading to Eq. 16 and Eq. 21 is a major upheaval, because these two phenomena are traditionally considered as purely due to the electron spin. Just as it looked as though we obtained magnetic monopoles from Eq. 16 without having introduced them in Eq. 15, here it looks as though _we now obtain spin-related effects without having introduced spin_ in the variant of Eq. 15.
It is extremely important to note that the matrix calculations one performs in going from Eq. 13 to Eq. 21 or from the variant to the more correct equation _are not quantum mechanical_. They are independent of any context of wave equations and entirely classical as all we have used is a group-theoretical formalism that automatically accounts for Lorentz symmetry. Very obviously, these derivations also _do not contain spin_. _We have only introduced the charge-current four-vector of a single moving point charge._ The algebraic expressions for the anomalous Zeeman effect and the spin-orbit coupling must thus be considered as purely classical and not spin-related.
We might indeed have been convinced that these terms are quantum mechanical rather than classical because they fitted nicely into quantum mechanics after their experimental discovery, while they were not covered by classical mechanics. But they become quantum mechanical only through the way we use them in quantum mechanics, where they lead to quantized discrete energy levels rather than to a continuum of energy values. This is indeed a feature of the experimental data that we are unable to understand classically. Due to this quantization, and due to the fact that quantum mechanics looks so inscrutable to classical intuition, there was perhaps not much incentive to ask oneself whether, at least in principle, the existence of these supplementary imaginary terms could not have an analogous counterpart within a purely classical relativistic context. Since the normal orbital Zeeman effect is also quantized and yet has a well-known classical counterpart, it would have been legitimate to ask that question. In Section 6 we will discuss the physics of the anomalous Zeeman effect and the spin-orbit coupling in further detail.
### The Pauli equation
We must now give the reader a first glimpse of why it is no coincidence that the anomalous Zeeman effect and the spin-orbit coupling (without the correction for Thomas precession) seem to occur in Eq. 21 with a factor \({\hbar\over{2m_{0}c}}\). The Dirac matrices just contain the SL(2,\({\mathbb{C}}\)) matrices \(c{\hat{{\mathbf{p}}}}{\boldsymbol{\cdot\sigma}}=-\imath c\hbar{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\) and \(-qc{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\) in their block structure. Squaring the Dirac equation will lead to a term \([\,c{\hat{{\mathbf{p}}}}{\boldsymbol{\cdot\sigma}}\,]\,[\,-qc{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\,]\). From this product we will obtain a term \(-c^{2}\hbar q{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\). Squaring the Dirac equation also leads to a term \(c^{2}{\hat{{\mathbf{p}}}}^{2}\), which has to be divided by \(2m_{0}c^{2}\) to reduce it to the operator \({\hat{{\mathbf{p}}}}^{2}/2m_{0}\) that is used in the Schrödinger equation in the non-relativistic limit. In fact, the transition from relativistic to classical mechanics is obtained by putting \(E=m_{0}c^{2}+E_{cl}\). Then \(E^{2}-c^{2}p^{2}=m_{0}^{2}c^{4}\) leads to \(E_{cl}^{2}+2m_{0}c^{2}E_{cl}+m_{0}^{2}c^{4}-c^{2}p^{2}=m_{0}^{2}c^{4}\). The terms \(m_{0}^{2}c^{4}\) can be dropped on both sides. After dividing both sides by \(2m_{0}c^{2}\) and neglecting the term \(E_{cl}^{2}/2m_{0}c^{2}\), based on the observation that \(E_{cl}\ll 2m_{0}c^{2}\), one then obtains \(E_{cl}={p^{2}\over{2m_{0}}}\). The quantum mechanical version of this argument is arguably obtained by introducing \(E=m_{0}c^{2}+E_{cl}\) in the wave function through \(\psi=e^{-\imath m_{0}c^{2}t/\hbar}\psi_{cl}\) in the complete Dirac equation including the minimal substitution (but we will discover later on that this procedure can contain a major pitfall). A small symbolic cross-check of the non-relativistic reduction is sketched below.
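The sketch is an illustration added here for convenience (it is not part of the original argument); it uses sympy to expand the relativistic energy for small \(p\) and recover the limit just quoted:

```python
# Minimal sympy sketch (added illustration): expand E = sqrt(m0^2 c^4 + c^2 p^2)
# for small p and confirm that E_cl = E - m0 c^2 reduces to p^2/(2 m0).
import sympy as sp

p, m0, c = sp.symbols('p m_0 c', positive=True)

E = sp.sqrt(m0**2*c**4 + c**2*p**2)
E_cl = sp.series(E - m0*c**2, p, 0, 4).removeO()

print(sp.simplify(E_cl))   # p**2/(2*m_0)
```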
In this way one can derive the Pauli equation from the Dirac equation,⁶ and due to the division by \(2m_{0}c^{2}\) this equation now contains the anomalous Zeeman term \(-{\hbar q\over{2m_{0}}}{\mathbf{B\cdot\sigma}}\). As Eq. 21 contains \(c{\mathbf{B}}\), this explains the conversion factor \({\hbar\over{2m_{0}c}}\). The Pauli equation does not contain a term that looks like the spin-orbit interaction, because it does not use the correct “minimal” substitution. We postpone the discussion of this point to Section 8, because it is rather intricate and raises several subsidiary issues.
[FOOTNOTE:6][ENDFOOTNOTE]
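Explicitly, and leaving aside the imaginary unit discussed in connection with Eq. 21, the conversion announced above is simply

\[
-{\hbar\over{2m_{0}c}}\,\times\,cq\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\;=\;-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,],
\]

which is the anomalous Zeeman term of the Pauli equation.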
## 6 Problems with the Traditional Interpretation of the Anomalous Zeeman Effect
### The anomalous Zeeman effect
#### 6.1.1 An inconvenient truth: The physical imagery violates the symmetry
The anomalous Zeeman effect \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) is traditionally attributed to a coupling between the magnetic field \({\mathbf{B}}\) and the spin \({\hbar\over{2}}{\boldsymbol{\sigma}}\). In reality \({\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) is not a scalar product but just the way the vector \({\mathbf{B}}\) is written in the group theory. Traditionally one indeed interprets the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) as analogous to the orbital Zeeman term \(-{qB\over{2m_{0}}}\,{\hat{L}}_{z}{\mathds{1}}=-{q\over{2m_{0}}}({\mathbf{B\cdot}}{\hat{{\mathbf{L}}}})\,{\mathds{1}}\),⁷ whereby one would have to replace \({\hat{{\mathbf{L}}}}\) by \({\hat{{\mathbf{S}}}}={\hbar\over{2}}{\boldsymbol{\sigma}}\) in the shorthand notation \(-{q\over{2m_{0}}}({\mathbf{B\cdot}}{\hat{{\mathbf{L}}}})\). This is done by rewriting \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) as \(-{2q\over{2m_{0}}}\,[\,{\mathbf{B\cdot}}{\hbar\over{2}}{\boldsymbol{\sigma}}\,]\), where \({\hbar\over{2}}{\boldsymbol{\sigma}}\) is postulated to be the operator \({\hat{{\mathbf{S}}}}\) corresponding to the “spin vector” \({\mathbf{S}}\), such that the term then becomes \(-{\mathbf{B\cdot}}{2q\over{2m_{0}}}{\hat{{\mathbf{S}}}}\), yielding an energy eigenvalue \(-{\mathbf{B\cdot}}{2q\over{2m_{0}}}{\mathbf{S}}=-{\boldsymbol{\mu}}_{e}{\mathbf{\cdot B}}\), where \(\mu_{e}=2\mu_{B}\). It is because the eigenvalues \(S_{z}\) of \({\hat{S}}_{z}\) are \(\pm{\hbar\over{2}}\) that one is then obliged to introduce a factor \(g=2\) into the algebra to recover the correct value \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\). This interpretation thus tries to represent the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) as corresponding to the potential energy \({\boldsymbol{\mu}}_{e}{\mathbf{\cdot B}}\) of a (spin-induced) magnetic dipole \({\boldsymbol{\mu}}_{e}\) within a magnetic field \({\mathbf{B}}\). But as we have explained in connection with Eq. 9 in Section 2, the quantity \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) represents a _pseudo-vector, not a scalar_ like \(-{q\over{2m_{0}}}({\mathbf{B\cdot}}{\hat{{\mathbf{L}}}})\,{\mathds{1}}\), whose scalar character transpires very clearly from the presence of the unit matrix \({\mathds{1}}\) in it. The picture of the magnetic dipole \({\boldsymbol{\mu}}_{e}\) that would be proportional to a “spin vector” \({\mathbf{S}}\) is based on the misleading notation \({\mathbf{a}}{\boldsymbol{\cdot\sigma}}\) we warned against above. The quantity \({\boldsymbol{\sigma}}\) is in reality the set of the three vectors \({\mathbf{e}}_{x}\), \({\mathbf{e}}_{y}\), \({\mathbf{e}}_{z}\). In the analogy one thus wrongly replaces the scalar quantity \(B_{z}{\hat{L}}_{z}\,{\mathds{1}}=-\imath\hbar B_{z}(x{\partial\over{\partial y}}-y{\partial\over{\partial x}})\,{\mathds{1}}\) by the vector quantity \(B_{z}{\hat{S}}_{z}=B_{z}{\hbar\over{2}}\sigma_{z}\) (with analogous substitutions for \(B_{x}{\hat{L}}_{x}\,{\mathds{1}}\) and \(B_{y}{\hat{L}}_{y}\,{\mathds{1}}\)). The notation \({\hbar\over{2}}{\boldsymbol{\sigma}}\) stands for a set of three vectors, while the quantity \({\hat{{\mathbf{L}}}}{\mathds{1}}\) stands for a set of three scalars.
In fact, the three scalars \({\hat{{\mathbf{L}}}}=-\imath\hbar(y{\partial\over{\partial z}}-z{\partial\over{\partial y}},z{\partial\over{\partial x}}-x{\partial\over{\partial z}},x{\partial\over{\partial y}}-y{\partial\over{\partial x}})\) are represented in matrix form in the group theory by multiplying them with the unit matrix, such that \({\hat{{\mathbf{L}}}}\,{\mathds{1}}\) becomes a set of three matrices. But despite their matrix form the quantities \({\hat{L}}_{j}\,{\mathds{1}}\) continue to represent scalars in the group theory, with an altogether different symmetry than the matrices \({\hbar\over{2}}\sigma_{j}\), which represent vectors. The term that is interpreted as \(-{\boldsymbol{\mu}}_{e}{\mathbf{\cdot B}}\) just cannot be a potential energy, as it is a pseudo-vector. While the algebra used to calculate the anomalous \(g\)-factor is exact, such that it correctly reproduces the experimental results, the proposed physical interpretation is thus mathematically unsustainable, even if it might be intuitively appealing.
[FOOTNOTE:7][ENDFOOTNOTE]
Of course these considerations clash ignominiously with accepted notions. We must therefore insist that the SL(2,\({\mathbb{C}}\)) and Dirac representations are completely self-consistent formalisms and that their algebra is a closed system that contains all it needs to contain, such that it is pointless to attack the conclusion by drawing in considerations that are external to SL(2,\({\mathbb{C}}\)). In view of the strong resistance this conclusion might provoke, we give further arguments to back it and make a strong case for it (while we will give the correct interpretation of the term \(-{q\hbar\over{2m_{0}}}{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) in Section 7).
(1) As we explained above, the quantity \({\hbar\over{2}}{\boldsymbol{\sigma}}\) in the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) does not represent the spin in SU(2). It is thus not the spin operator \({\hat{{\mathbf{S}}}}\), and it is not correct to transform the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) into \(-{q\over{m_{0}}}{\mathbf{B\cdot}}{\hat{{\mathbf{S}}}}=-g{q\over{2m_{0}}}{ \mathbf{B\cdot}}{\hat{{\mathbf{S}}}}\), with \(g=2\). We may note that in the context of the Dirac equation the presence of the term \(-{\hbar q\over{m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) is due to the minimal substitution used to derive the Dirac equation in an electromagnetic field from the free-space Dirac equation. But this is a substitution for a point charge, which is why it is called minimal in the first place (As pointed out in Subsection 8.2, it does not even account for the motion of the electron in the laboratory frame). If we had wanted to account for a potential energy within the magnetic field, of a magnetic dipole \({\boldsymbol{\mu}}_{e}\) associated with the spin, we should have introduced a more complicated substitution, with a term expressing how \({\boldsymbol{\mu}}_{e}\) couples to the electromagnetic field, or how it couples to the charge or magnetic dipole moment of another particle that is also present in its neighbourhood within the magnetic field. As we have not put such spin-related dipole effects into the formalism, they cannot come about by magic.
(2) That there is no spin-related dipole effect in the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) can also be appreciated from the fact that Eqs. 14-21 and their variant have been derived without introducing any considerations about spin. As the terms that occur in these calculations are the same ones as those that occur in the Dirac theory, none of the operators in the Dirac theory contains the spin. The physical origin of the terms we recollect from the squared Dirac equation is not the presence of spin. The terms only come about because we treat the problem in full rigor by using relativistic group theory. Within the Dirac equation, the spin occurs only implicitly, _viz._ inside the spinor wave function (as clearly explained in [2]). It is the requirement that the spinor wave function must be an eigenstate of a vector operator \({\mathbf{K}}{\boldsymbol{\cdot\sigma}}\) or a pseudo-vector operator like \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) (or their four-dimensional analogous expressions in terms of gamma matrices), that forces the spin to align itself with the vector \({\mathbf{K}}\) or pseudo-vector \({\mathbf{B}}\) in the calculations and is this way responsible for the occurrence of the up and down eigenstates.
(3) We may also appreciate that it would be extremely puzzling if it were true that we can calculate a magnetic dipole moment \({\boldsymbol{\mu}}_{e}\) produced by the electron spin with fantastic precision in quantum electrodynamics, without having to specify anything in the calculations about the internal charge-current and mass distributions inside the electron. In fact, the presence of such current distributions is intuitively the only mechanism we know to account for the existence of a magnetic dipole \({\boldsymbol{\mu}}\). The remark of Lorentz we quoted above shows that using this mechanism to explain the anomalous Zeeman effect could be wrong despite the fact that it is intuitively appealing.
(4) As will be pointed out in the Appendix, the interpretation of the orbital Zeeman term \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\), on which one tries to build here intuition about the anomalous Zeeman term by analogy, is itself flawed, because there does not exist such a thing as a potential energy with respect to a magnetic field. The correct interpretation of \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\) will be given in Subsubsection 7.2.1.
(5) We may further note that a three-component expression for the spin can never be a complete description within a fully relativistic context, due to the theorem about the bilinear covariants mentioned in the Introduction. Only covariants with \(1\), \(4\) or \(6\) components exist. Therefore, either one or three components must be missing in the three-component description. In our work [2], the relativistic generalization of the spin operator \({\hbar\over{2}}\,[{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\) in SU(2) becomes a four-component quantity \({\hbar\over{2}}\,[s_{t}{\mathds{1}}+{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\) within SL(2,\({\mathbb{C}}\)).
(6) As we have not introduced spin in Eq. 15 or its variant, all “dipole” effects we can expect to obtain from Eq. 15 or its variant are orbital effects due to moving point charges. In fact, we introduce magnetic moments into the formalism through the term \(-{q\over{c}}\,[\,{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\,]\) in Eq. 15 or its variant. This can be understood from our calculation of a magnetic moment produced by a current presented in the Appendix. The anomalous Zeeman term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) can thus not involve any magnetic dipole moment for the electron that we are studying: It cannot contain a spin-induced “intrinsic” magnetic dipole moment as it can be derived in a context without spin. It also cannot contain any orbital magnetic “dipole” moment as it does not contain \({\mathbf{v}}\). The anomalous Zeeman effect does indeed not depend on the velocity of the electron. In the measurements of the anomalous \(g\)-factor within a Penning trap [7], we can reduce the velocity of the electron such that one practically reaches the limit \({\mathbf{v}}\rightarrow{\mathbf{0}}\). The anomalous Zeeman effect would still exist.
(7) We may note that the exercise of rewriting \(-{\hbar q_{1}\over{2m_{0}}}\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]\) as \(-({\hat{{\boldsymbol{\mu}}}}_{e}{\mathbf{\cdot B}})\,{\mathds{1}}\) is not only mathematically flawed but also physically futile. The wrong interpretation of the algebra is used to introduce by brute force an intuitive classical image, while the facts of the non-interpreted algebra will eventually force us to give up on that classical image anyway. The presence of a scalar product \(-{\boldsymbol{\mu}}_{e}{\mathbf{\cdot B}}\) conjures up the image of an operator with a continuous spectrum, due to the continuum of possible angles between \({\boldsymbol{\mu}}_{e}\) and \({\mathbf{B}}\), while eventually this classical picture of a magnetic dipole in a magnetic field has to be abandoned as only the up and down states are observed experimentally.
(8) To readers who may still feel reluctant to accept the arguments formulated under points (1)-(7) because they fly in the face of the accepted notions, we may perhaps ask to consider how the inherent profound difficulty of quantum mechanics forces us to teach it a little bit like a religion. A nice example of a quasi-religious mystery is the concept of particle-wave duality. With some misplaced irony one could compare it with the Christian dogma of the mystery of the Holy Trinity. Three different persons (the Father, the Son and the Holy Ghost), are claimed to be only one God, and one is invited to accept this puzzling postulate as a factual truth, for which will not be given any further explanation because it is a mystery. In complete analogy, an electron is postulated to be both a particle and a wave. This is also quite puzzling a notion, for which we are told that we should accept it as a quantum mystery [8]. One could claim sarcastically that the two mysteries resemble one another as two peas in a pod. Of course, there is a very essential point that makes all the difference between the quantum mystery and the religious mystery, and justifies why we can postulate that one has to accept the quantum mystery without further asking and just “shut up and calculate”. That point is the agreement of the theory with experimental evidence. Quantum mechanics passes the test of the comparison with experimental data with flying colors by grinding out all the correct answers with impressive precision.
Asking if a physical theory provides the correct answers is indeed a crucial criterion to assess its value. But a formalism that turns out the correct answers is a far cry from a theory if it is mathematically flawed. The points (1)-(7) show that with respect to the criterion of mathematical self-consistency, the traditional interpretation of the anomalous \(g\)-factor of the electron in quantum mechanics is not even an option. What is wrong in this respect is not the algebra itself, which is correct, such that it indeed turns out the right answers. It is the _interpretation_ routinely given to that algebra that is absurd as it violates the mathematics, even if the algebraic expression \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) correctly accounts for the anomalous Zeeman effect. On this and countless other occasions quantum mechanics has time and again overruled the logically peremptory, true geometrical meaning of the group theory explained in Section 2 in favour of a self-cooked parallel interpretation [2]. The argument that we should accept this because the theory turns out the right answers does not hold sway. What is confirmed by the experiments is just the algebra, not the interpretation of that algebra. The real anathema resides in interpreting \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) as a scalar, not in questioning the traditional orthodoxy.
Of course these observations also raise the question how we must define the spin operator if it is not given by \({\hat{{\mathbf{S}}}}={\hbar\over{2}}{\boldsymbol{\sigma}}\). We may mention that it is possible to preserve the physical image of the spin as a vector \({\hbar\over{2}}\,{\mathbf{s}}\), by using a different definition \({\hbar\over{2}}\,[\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\) for the spin operator than \({\hat{{\mathbf{S}}}}={\hbar\over{2}}{\boldsymbol{\sigma}}\). This approach can be developed without changing a iota to the results of Pauli’s spin calculus, such that it leads to an identical agreement with experimental data. It even presents less problems of interpretation, but the issue entails a whole domino chain of related questions and answers, whose development and discussion are beyond the scope of any paper of reasonable length [2].
#### 6.1.2 Difficult questions
The problem with this discussion of the anomalous Zeeman effect is that without the traditional interpretation, the physics related to the non-relativistic Zeeman term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) becomes mysterious. The problem is twofold. There is a classical enigma, as the interaction exists in principle already within classical electromagnetism, as the calculations given in Eqs. 14-21 or their variant clearly illustrate. The question is then of course what the classical meaning of this classical effect is supposed to be. There is also a quantum mechanical enigma. In fact, the two energy levels observed in the anomalous Zeeman splitting are traditionally interpreted as corresponding to the spin-up and spin-down states. But how can we explain the anomalous Zeeman splitting if the derivation of the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) does not rely on a notion of spin?
#### 6.1.3 Not a dipole-dipole but a charge-dipole interaction symmetry
The first step in our attempts to make sense of this puzzling situation is as follows. The Lorentz transformation for the electromagnetic field under a boost with velocity \({\mathbf{v}}\) can be written as:
\[{\mathbf{E}}_{\parallel}^{\prime}={\mathbf{E}}_{\parallel},\quad{\mathbf{B}}_{ \parallel}^{\prime}={\mathbf{B}}_{\parallel},\quad{\mathbf{E}}^{\prime}_{\perp }=\gamma({\mathbf{E}}_{\perp}+{\mathbf{v}}\wedge{\mathbf{B}}_{\perp}),\quad{ \mathbf{B}}^{\prime}_{\perp}=\gamma({\mathbf{B}}_{\perp}-{1\over{c^{2}}}{ \mathbf{v}}\wedge{\mathbf{E}}_{\perp}).\] (22)
Here the indices \(\perp\) and \(\parallel\) are with respect to \({\mathbf{v}}\). Starting from a rest frame without magnetic field (\({\mathbf{B}}={\mathbf{0}}\)), we obtain a magnetic field (\({\mathbf{B}}^{\prime}\neq{\mathbf{0}}\)) in a moving frame, such that a magnetic field can be viewed (schematically) as a relativistic byproduct of the electric field. One could introduce the philosophy that only the Coulomb field really exists and that the magnetic field is just an optical illusion. By adopting this viewpoint, we can see that it is the macroscopic quantity \({\mathbf{B}}\) that contains a set of terms \(q{\mathbf{v}}_{j}\) whose sum can be associated with a macroscopic magnetic dipole moment \({\boldsymbol{\mu}}\). The whole of Eq. 21 would in essence just be due to a calculation:
\[(\,q_{1}{\mathds{1}}+{q_{1}\over{c}}[\,{\mathbf{v}}_{1}{\boldsymbol{\cdot\sigma}}\,]\,)\,\sum_{j}(\,q_{2j}{\mathds{1}}-{q_{2j}\over{c}}\,[\,{\mathbf{v}}_{2j}{\boldsymbol{\cdot\sigma}}\,]\,)=\]
\[\sum_{j}(\,q_{1}q_{2j}{\mathds{1}}+{q_{1}q_{2j}\over{c}}[\,{\mathbf{v}}_{1}{ \boldsymbol{\cdot\sigma}}\,]-{q_{1}q_{2j}\over{c}}[\,{\mathbf{v}}_{2j}{ \boldsymbol{\cdot\sigma}}\,]-{q_{1}q_{2j}\over{c^{2}}}\,({\mathbf{v}}_{1}{ \mathbf{\cdot v}}_{2j}){\mathds{1}}-\imath{q_{1}q_{2j}\over{c^{2}}}\,[\,({ \mathbf{v}}_{1}\wedge{\mathbf{v}}_{2j}){\boldsymbol{\cdot\sigma}}\,]\,).\] (23)
Here \(q_{1}\) and \({\mathbf{v}}_{1}\) are the parameters of the electron we are studying, after putting it in the electromagnetic field \(({\mathbf{E}},{\mathbf{B}})\), which is generated by a distribution of charged particles described by the parameters \(q_{2j}\) and \({\mathbf{v}}_{2j}\). The many terms \({\mathbf{v}}_{2j}\) are hidden (with their coupling terms) in the macroscopic quantity \({\mathbf{B}}\), and the terms \(q_{2j}\) in \({\mathbf{E}}\) or \({\mathbf{B}}\). The macroscopic quantities \({\mathbf{E}}\) and \({\mathbf{B}}\) are only tools to express interactions of charges and currents with other charges and currents. We can then “derive” the structure of Eq. 21 from the backbone of Eq. 23, by replacing \(\sum_{j}\,q_{2j}{\mathbf{v}}_{2j}\to c{\mathbf{B}}\) and for the remaining terms \(q_{2}\rightarrow{\mathbf{E}}\). The term \({q_{1}q_{2j}\over{c}}[\,{\mathbf{v}}_{1}{\boldsymbol{\cdot\sigma}}\,]\) will lead to two contributions \({q\over{c}}\,({\mathbf{v\cdot E}})\,{\mathds{1}}\) and \({q\over{c}}\,({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot\sigma}}\), because \({\mathbf{E}}\) will be defined by \({\mathbf{r}}_{2j}-{\mathbf{r}}_{1}\). To make a rigorous derivation one would of course have to weight the various contributions with their coupling terms.
It can be seen this way that it is much more logical to consider the anomalous Zeeman effect as a term that accounts for the interaction of the charge of the electron with the magnetic dipoles produced by the current loop of the moving electrons which generate the magnetic field. That such an effect exists is shown by the classical derivation of Eq. 21 wherein the term containing \({\mathbf{B\cdot}}{\boldsymbol{\sigma}}\) does not allow for any “intrinsic” magnetic dipole moment on behalf of an electron of charge \(q\) we subject to the magnetic field. It does not make sense to invoke another physical mechanism for the same term in the Dirac equation. The origin of the term containing \(q{\mathbf{B}}\) is obvious. We might initially just consider the force \([\,q{\mathds{1}}\,][\,{\mathbf{E}}{\boldsymbol{\cdot\sigma}}\,]\) instead of Eq. 15 and then generalize both terms independently by Lorentz symmetry. We recover then the general expression of Eq. 15, wherein \({\mathbf{B}}\) is no longer zero. We must thus conclude that there is in general an interaction between an electric point charge and a magnetic field. The pre-factor \(-{\hbar q\over{2m_{0}}}\) in the anomalous Zeeman term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]\) has been used to attribute a non-classical magnetic dipole moment \({\boldsymbol{\mu}}_{e}\) to the charge \(q\), while in reality the term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B\cdot}}{\boldsymbol{\sigma}}\,]\) corresponds to a charge-dipole interaction of a point charge \(q\) with the macroscopic dipole \({\boldsymbol{\mu}}\) corresponding to the current loops that produce \({\mathbf{B}}\). We can see that the anomalous value \(g=2\) is just due to the introduction of \({\hbar\over{2}}\) with the aim to recover \({\hbar\over{2}}{\boldsymbol{\sigma}}\) as described above. In reality, if we take the liberty to continue to use the textbook misnomer “magnetic dipoles” for the magnetic moments of single moving electrons, then all “magnetic-dipole” effects in the formalism are orbital, and there is no spin-induced magnetic dipole moment in the algebra. The term in the Dirac equation that gives rise to the anomalous Zeeman term belongs to the symmetric counterpart \(cq{\mathbf{B}}{\boldsymbol{\cdot\sigma}}+{q\over{c}}({\mathbf{v}}\wedge{ \boldsymbol{E}}){\boldsymbol{\cdot\sigma}}\), of the Lorentz force \(q({\mathbf{E}}+{\mathbf{v}}\wedge{\mathbf{B}}){\boldsymbol{\cdot\sigma}}\) where the rôles of the electric and magnetic fields have been exchanged. Within the expression \(cq{\mathbf{B}}{\boldsymbol{\cdot\sigma}}+{q\over{c}}({\mathbf{v}}\wedge{ \boldsymbol{E}}){\boldsymbol{\cdot\sigma}}\) itself, the term \(cq{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) is the symmetric counterpart of \({q\over{c}}({\mathbf{v}}\wedge{\boldsymbol{E}}){\boldsymbol{\cdot\sigma}}\) wherein the rôles of charge and “magnetic dipoles” have been exchanged. Within \(cq{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) the charge of the electron interacts with the “magnetic dipoles” of the moving electrons that generate the magnetic field. In the term \({q\over{c}}({\mathbf{v}}\wedge{\boldsymbol{E}}){\boldsymbol{\cdot\sigma}}\) the “magnetic dipole” generated by the moving electron interacts with the charges of the electric field. The symmetry of the anomalous Zeeman effect is intrinsically different from that of the orbital Zeeman effect, as the latter depends on two velocities, rather than one velocity. 
In other words, the orbital Zeeman effect corresponds to a “dipole”-dipole interaction, while the anomalous Zeeman effect stems from a charge-dipole interaction and the spin-orbit coupling from a “dipole”-charge interaction. This clearly shows that the anomalous \(g\)-factor should not be visualized in terms of a “dipole”-dipole interaction, as Dirac has done.
## 7 The Physical Meaning of the Anomalous Zeeman Effect
### Introduction
Of course, getting a feeling for the anomalous Zeeman effect and explaining why it leads to two Zeeman levels is more complicated, even if it follows very clearly from the algebra. All terms in Eq. 21 we “understood” classically are in reality facts of life that we had to accept after discovering them experimentally and to which we got used. The anomalous Zeeman effect and the spin-orbit coupling should thus be considered on the same footing. However, the spin-orbit term can be understood in terms of precession, as explained in Subsection 8.1. It is a current-charge interaction, and therefore the symmetrical counterpart of the anomalous Zeeman effect, which is of the charge-current type. Let us therefore check if we can also interpret the anomalous Zeeman effect in terms of precession. To do this we need a more detailed understanding of the magnetic potential and its vorticity.
### The magnetic potential
#### 7.2.1 Derivation
How do we deal with the kinetic energy of a moving charge in circular motion when we describe it as a stationary magnetic phenomenon? As we will show, it is done by expressing the kinetic energy in terms of a fake potential energy \(U\). Potential energy is a scalar. As the moving charge corresponds to a current \(q{\mathbf{v}}\), which is a vector quantity, the only way to create a scalar quantity out of this current is to combine it with another vector quantity \({\mathbf{A}}\) into a scalar product, e.g. \(U=-q\,({\mathbf{v\cdot A}})\). For the moment we consider \({\mathbf{A}}\) as a general vector that has not yet been specified any further. In a constant magnetic field \({\mathbf{B}}=B{\mathbf{e}}_{z}\) with \(B>0\), the moving charge \(q<0\), whose velocity \({\mathbf{v}}=v{\mathbf{e}}_{y}\) with \(v>0\) corresponds to a current \(q{\mathbf{v}}\), will perform a uniform circular motion at the cyclotron frequency \(\omega_{c}>0\), which in the non-relativistic limit is given by \(\omega_{c}=-{qB\over{m_{0}}}\). The velocity is then \(v=-{qBr\over{m_{0}}}\). Let us rewrite \(-q{\mathbf{v\cdot A}}\) as \(-q{\mathbf{A\cdot v}}=-{q\over{m_{0}}}{\mathbf{A}}{\mathbf{\cdot p}}\). To make this correspond to the kinetic energy \({\mathbf{p}}^{2}/2m_{0}\) we must thus have \(-{q\over{m_{0}}}{\mathbf{A}}={\mathbf{p}}/2m_{0}={\mathbf{v}}/2=-{qB\over{2m_{0}}}r{\mathbf{e}}_{y}\). From this it follows that \({\mathbf{A}}={1\over{2}}rB{\mathbf{e}}_{y}=-{1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}\). This is exactly the magnetic vector potential that corresponds to a constant magnetic field \({\mathbf{B}}\), as can be checked by calculating \({\mathbf{B}}={\mathbf{\nabla}}\wedge{\mathbf{A}}\). Using \({\mathbf{A}}=-{1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}\) it is easy to rewrite \(U=-q\,({\mathbf{v\cdot A}})\) as \(U={\boldsymbol{\mu\cdot}}{\mathbf{B}}\), where \({\boldsymbol{\mu}}=-{q\over{2m_{0}}}{\mathbf{L}}\). This derivation does not require any consideration of a current loop. We thus see that the intuitive picture we use for this term in terms of a true potential energy is wrong, as anticipated under point (4) in Subsubsection 6.1.1.
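The check \({\mathbf{B}}={\mathbf{\nabla}}\wedge{\mathbf{A}}\) mentioned above is elementary; the following minimal sympy sketch (an added illustration, with an arbitrary constant field) carries it out explicitly:

```python
# Minimal sympy sketch (added illustration): for a constant field B and
# A = -(1/2) r ^ B, check that curl A = B, as stated above.
import sympy as sp

x, y, z, Bx, By, Bz = sp.symbols('x y z B_x B_y B_z', real=True)
r = (x, y, z)
B = (Bx, By, Bz)   # constant magnetic field

def wedge(u, v):
    """Cross product u ^ v of two 3-vectors given as tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

A = [-sp.Rational(1, 2)*comp for comp in wedge(r, B)]   # A = -(1/2) r ^ B

curl_A = (sp.diff(A[2], y) - sp.diff(A[1], z),
          sp.diff(A[0], z) - sp.diff(A[2], x),
          sp.diff(A[1], x) - sp.diff(A[0], y))

print([sp.simplify(curl_A[k] - B[k]) for k in range(3)])   # [0, 0, 0]
```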
The sign used in the expression \(-q{\mathbf{v\cdot A}}\) may surprise, but we must recall here the reason why we introduce the minimal substitution. In the absence of a magnetic field, the correct parameters to write the Lorentz transformation would be \((E-qV({\mathbf{r}}),c{\mathbf{p}})\). In a first non-relativistic approximation the quantity \(E-qV\) we want to obtain becomes \(E-qV\approx m_{0}c^{2}+{{\mathbf{p}}^{2}\over{2m_{0}}}-qV=m_{0}c^{2}+T-U\), just like in Lagrangian dynamics, where one justifies its introduction merely by showing that it makes things work. In the case of a purely magnetic field there is no electric potential, such that this expression then becomes \(m_{0}c^{2}+{{\mathbf{p}}^{2}\over{2m_{0}}}\) in a frame wherein the centre of the orbit is at rest, such that it can be considered as a “static” magnetic situation. The expression we want to obtain thus requires adding the kinetic energy, and this can be achieved by subtracting the “potential energy” of the current, which is the negative kinetic energy, as defined above.
Of course the preceding lines only explain the case where \({\mathbf{v}}\parallel{\mathbf{A}}\); the meaning of the cosine term in \(U=-q\,({\mathbf{v\cdot A}})\) is less easy to grasp. In principle, in a constant magnetic field \({\mathbf{v}}\) must be parallel to \({\mathbf{A}}\). Therefore, if \({\mathbf{v}}\not\parallel{\mathbf{A}}\), the motion of the charged particle must be a forced motion with respect to the magnetic field. Let us therefore consider a charge in uniform motion on a circular orbit with a velocity \({\mathbf{v}}\) in a constant magnetic field \({\mathbf{B}}=B_{z}{\mathbf{e}}_{z}+B_{x}{\mathbf{e}}_{x}\). This motion takes place in a plane perpendicular to \({\mathbf{B}}\). We will then have \({\mathbf{A}}=-{1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}=-{1\over{2}}{\mathbf{r}}\wedge B_{z}{\mathbf{e}}_{z}-{1\over{2}}{\mathbf{r}}\wedge B_{x}{\mathbf{e}}_{x}\). Put \({\mathbf{A}}_{1}=-{1\over{2}}{\mathbf{r}}\wedge B_{z}{\mathbf{e}}_{z}\), \({\mathbf{A}}_{2}=-{1\over{2}}{\mathbf{r}}\wedge B_{x}{\mathbf{e}}_{x}\), such that \({\mathbf{A}}={\mathbf{A}}_{1}+{\mathbf{A}}_{2}\). It is then obvious that \({\mathbf{A\cdot v}}={\mathbf{A}}_{1}{\mathbf{\cdot v}}+{\mathbf{A}}_{2}{\mathbf{\cdot v}}\). This explains why the part of the kinetic energy corresponding to the velocity \({\mathbf{v}}\) that one can attribute to \({\mathbf{B}}_{1}=B_{z}{\mathbf{e}}_{z}\) is given by \({\mathbf{A}}_{1}{\mathbf{\cdot v}}\). If we consider \({\mathbf{B}}_{1}\) and we observe the circular orbit with velocity \({\mathbf{v}}\), then only the part \({\mathbf{A}}_{1}{\mathbf{\cdot v}}\) of the kinetic energy can be attributed to \({\mathbf{B}}_{1}\). The rest must be attributed to the second force, which forces the charged particle to follow an orbit in a plane that is not perpendicular to \({\mathbf{B}}_{1}\), and which in this analysis is based on \({\mathbf{B}}_{2}\). But in other situations, the origin of the force could be a different kind of field than a magnetic field \({\mathbf{B}}_{2}\), e.g. an electric field. It is only in such cases that a negative cosine term (leading to a “negative kinetic energy”) can have physical meaning. The field could also vary with time rather than being constant, as in our example of \({\mathbf{B}}_{2}\).
The magnetic vector potential \({\mathbf{A}}\) is often presented as a meaningless mathematical quantity that has just been introduced to simplify the calculations and whose use is justified because it makes things work. After the discovery of the Aharonov-Bohm effect, this viewpoint has been challenged by Feynman [12] who stated that \({\mathbf{A}}\) is for quantum mechanics more significant than \({\mathbf{B}}\). In relativity, \(c{\mathbf{A}}\) builds a four-vector with \(V\), such that its meaning should be as physical as the meaning of \(V\), and conceptually related to it. Despite all this, the vector potential has remained a concept that is not very intuitive. Here we have discovered a clear meaning for it. As often pointed out, \({\mathbf{A}}\) is only defined up to a constant, just like \(V\). Its meaning is thus indeed as clear as the meaning of \(V\), and both quantities are quite intuitive. Several authors [13]-[14] have tried to make a case for this viewpoint, based e.g. on an experiment by Blondel [15].
#### 7.2.2 Larmor frequency
The kinetic energy \({p^{2}\over{2m_{0}}}={1\over{2}}m_{0}\omega_{c}^{2}r^{2}\) of the electron on the circular orbit can also be written in terms of the angular momentum \(L=m_{0}\omega_{c}r^{2}\) as \({p^{2}\over{2m_{0}}}={1\over{2}}\omega_{c}L\). To express the kinetic energy for a particle with an angular momentum \(L\) in the form \(L\omega\), we must thus not use the true cyclotron frequency \(\omega_{c}\) for \(\omega\), but the fictive Larmor frequency \(\omega_{L}=\omega_{c}/2\). This quantity pops up in all quantum mechanical calculations. One may feel tempted to infer from spotting this quantity in the equations that the orbital rotational motion in the physical problem studied is happening at the frequency \(\omega_{L}\) instead of \(\omega_{c}\), which is quite puzzling. One might ask oneself why the electron is turning more slowly on its orbit than one would expect based on classical mechanics. Is this just one more quantum mystery? The solution to this riddle is that \(\omega_{L}\) is only an auxiliary quantity introduced to simplify the notation.
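Spelled out, with \(p=m_{0}\omega_{c}r\) and \(L=m_{0}\omega_{c}r^{2}\) as above, the arithmetic behind this statement is just

\[
{p^{2}\over{2m_{0}}}={1\over{2}}m_{0}\omega_{c}^{2}r^{2}={1\over{2}}\omega_{c}\,(m_{0}\omega_{c}r^{2})={1\over{2}}\omega_{c}L=\omega_{L}L,\qquad\omega_{L}={\omega_{c}\over{2}},
\]

so the appearance of \(\omega_{L}\) is purely a matter of rewriting \({1\over{2}}\omega_{c}\) and does not signal any slower physical rotation.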
There is another way to highlight the point that the Larmor frequency is just an auxiliary concept. Larmor’s construction starts from the calculation of the fictitious forces in a rotating frame. In this frame there will be a fictitious centrifugal force and a fictitious Coriolis force. The expression for the force in the moving frame is \(m_{0}({d^{2}{\mathbf{r}}\over{dt^{2}}}+2{\boldsymbol{\omega}}\wedge{d{\mathbf{r}}\over{dt}}+{\boldsymbol{\omega}}\wedge({\boldsymbol{\omega}}\wedge{\mathbf{r}}))\). He then requires that there should be a frequency of rotation at which the magnetic force is completely canceled by the fictitious forces. This is the Larmor frequency. The Larmor frame “erases” the magnetic field. In this frame there will be no position and no velocity that make the particle feel the presence of the magnetic field or the fictitious Coriolis and centrifugal forces, because all these forces cancel each other exactly. Of course, at every point of the inertial frame the particle is not at rest but moving at a velocity \({\mathbf{v}}({\mathbf{r}})\), rather than being at rest with a velocity \({\mathbf{v}}^{\prime}={\mathbf{0}}\). The relation between the velocities in the two frames is \({\mathbf{v}}({\mathbf{r}})={\mathbf{v}}^{\prime}({\mathbf{r}})+{\boldsymbol{\omega}}\wedge{\mathbf{r}}\).
It looks contradictory that the orbital motion of a particle in a magnetic field takes place at the cyclotron frequency, while we state that the particle does not feel the magnetic force in the frame that is rotating at the Larmor frequency. Following the idea that the particle does not feel the magnetic force, it would appear that the particle could be at rest in the Larmor frame, while from what we know in the inertial frame, the particle should appear to be moving at a residual frequency \(\omega_{c}-\omega_{L}=\omega_{c}/2\) in the Larmor frame. If the particle has a residual velocity in the Larmor frame, should it then not have to feel an attractive force that ensures that it stays on the circular orbit at this residual velocity? As Larmor’s construction shows, it does not erase the centrifugal force \(m_{0}\,{\boldsymbol{\omega}}\wedge({\boldsymbol{\omega}}\wedge{\mathbf{r}})\). It remains present in the form \(-{q^{2}\over{2m_{0}}}{\mathbf{B}}\wedge({\mathbf{r}}\wedge{\mathbf{B}})\) in the rotating frame. When a much larger central electric force is present in the frame, we can neglect the term \(-{q^{2}\over{2m_{0}}}{\mathbf{B}}\wedge({\mathbf{r}}\wedge{\mathbf{B}})\). It is this possibility of neglecting the term that leads to Larmor’s idea that we erase all fictitious forces. But of course when there is no electric field we can no longer neglect the term \(-{q^{2}\over{2m_{0}}}{\mathbf{B}}\wedge({\mathbf{r}}\wedge{\mathbf{B}})\). In this expression we can substitute \({1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}={\mathbf{A}}\). Here \(-{q\over{m_{0}}}{\mathbf{A}}\) corresponds to \({\mathbf{v}}_{*}=-{\mathbf{r}}\wedge{\boldsymbol{\omega}}_{L}\). We thus obtain \({q\over{2}}{\mathbf{v}}\wedge{\mathbf{B}}\), which is exactly the missing force that will yield the missing frequency \(\omega_{L}\). This solves the paradox classically. We can also solve the paradox relativistically. Relativity shows that there is also an electric field in the Larmor frame, due to the local Lorentz transformation (with boost vector \({\mathbf{v}}^{\prime}({\mathbf{r}})\)) of the magnetic field in the inertial frame (see Eq. 22). This electric field exerts a local attractive and central force on the particle. It is this electric field that is responsible for the fact that the particle that is in cyclotron motion in the inertial frame is not at rest in the Larmor frame but moving at the residual frequency \(\omega_{c}-\omega_{L}=\omega_{L}\). The two solutions of the paradox coincide in the non-relativistic limit.
That the Larmor frequency appears in the equations is due to the fact that we must divide all terms that occur in the squared Dirac equation by \(2m_{0}c^{2}\) to be able to reduce it to the Pauli equation. We see here the same factor of \(2\) entering the calculations as the one that occurs in the derivation of the expression of the magnetic potential. It is the factor \(2\) that occurs in the expression \({p^{2}\over{2m_{0}}}\). And in both cases, the true frequency of the motion is \(\omega_{c}\). An analysis of the original meaning of the Larmor frequency shows that drawing the conclusion that the true frequency would be \(\omega_{L}\) rather than \(\omega_{c}\), just because this is the quantity that comes to the fore in the calculation, is wrong because it fails to discern that the rotating frame introduces an electric field.
#### 7.2.3 Larmor precession as the vorticity of the magnetic potential
The traditional minimal substitution is given by:
\[E\to E-qV,\quad{\mathbf{p}}\rightarrow{\mathbf{p}}-q{\mathbf{A}}.\] (24)
Non-relativistically we can write
\[{\mathbf{p}}-q{\mathbf{A}}=m_{0}({\mathbf{v}}-{q\over{m_{0}}}\,{\mathbf{A}}).\] (25)
From this we can appreciate that \(-{q\over{m_{0}}}\,{\mathbf{A}}({\mathbf{r}})\) behaves as a velocity field \({\mathbf{v}}_{*}({\mathbf{r}})\). For a constant magnetic field \({\mathbf{B}}\) we have \({\mathbf{A}}=-{1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}\). At first sight this looks a bit mysterious, because we can choose the origin at will and calculate \({\mathbf{r}}\) with respect to this origin; the result will always be correct and independent of the choice of origin. But this fact is well known under the name of gauge invariance and expresses the fact that the vector potential is defined up to an arbitrary constant. As the scalar potential and the vector potential are related to each other by Lorentz transformation, it is evident that we cannot choose both arbitrary constants simultaneously at will, which is why they are related through a gauge condition. We will return to this arbitrary constant later on. For the constant magnetic field, the velocity field \(-{q\over{m_{0}}}\,{\mathbf{A}}({\mathbf{r}})\) is of the type:
\[{\mathbf{v}}_{*}({\mathbf{r}})=+{\mathbf{r}}\wedge{q\over{2m_{0}}}\,{\mathbf{B }}=-{\mathbf{r}}\wedge{\boldsymbol{\omega}}_{L},\] (26)
where we have introduced:
\[{\boldsymbol{\omega}}_{L}=-{q\over{2m_{0}}}\,{\mathbf{B}}.\] (27)
In other words, in the non-relativistic approximation, the velocity field is the same as the one we would observe in a frame that is rotating at an angular velocity \({\boldsymbol{\omega}}_{L}\) corresponding to the Larmor frequency. According to the thoughts described above, we can consider this as a fictive auxiliary quantity. The rotating frame we discover here would then be rotating at a fictive frequency, while the true rotational motion would take place at the cyclotron frequency.
Let us now consider the velocity field \({\mathbf{v}}({\mathbf{r}})\) of a liquid and imagine that we put a paddlewheel inside this liquid. The velocity field has vorticity and it will make the paddlewheel turn, as discussed in [16]. In this reference Auroux considers a circular path of radius \(r\) on which he calculates the average velocity \({\mathbf{v}}\) of the paddlewheel. He then takes the limit \(r\to 0\). The paddlewheel then becomes infinitesimal. According to this calculation, the infinitesimal paddlewheel will rotate at a frequency:
\[{\boldsymbol{\omega}}={1\over{2}}\nabla\wedge{\mathbf{v}}({\mathbf{r}}).\] (28)
When \({\mathbf{v}}={\boldsymbol{\omega}}\wedge{\mathbf{r}}\), such that the liquid bodily rotates with the frequency \({\boldsymbol{\omega}}\), the paddlewheel will locally spin with the same frequency as the rotation of the liquid body. We have a daily-life confirmation of this idea on a merry-go-round. Such a merry-go-round is actually a better realization of the idea of a liquid that bodily rotates at a constant frequency than a real-life liquid where the angular frequencies might be radius-dependent. Let us associate the merry-go-round with a global rotating frame. When you are on a merry-go-round your personal local frame will not describe a circular motion with the directions of its axes \(x,y,z\) fixed. The axes of your local frame are also rotating: Your local frame exhibits precession. Most of the time we gloss over this fact by using a reference frame based on a local basis \({\mathbf{e}}_{\phi},{\mathbf{e}}_{r}\) in a symmetry-adapted system of curvilinear coordinates, which already incorporates this effect. But the fact that this is a varying basis must be taken into account in the mathematics, and motivates the introduction of covariant derivatives. The merry-go-round argument just expresses Berry’s phase [17] on a circular orbit, which is a geodesic.
Imagine now that you have a point-like charged particle without spin that is put into circular orbital motion within a magnetic field. It is then conceivable that the merry-go-round effect could give it spin, whereby the angular frequency of that spin would be the cyclotron frequency. That spinning motion would have a kinetic energy \(L\omega_{c}/2=L\omega_{L}\), where again the Larmor frequency is an auxiliary quantity. The spinning motion is intrinsic and if we wanted to describe the spinning point particle geometrically, we would have to describe it as a spinning point. But in geometry, points are not spinning. However, we can describe the spinning motion in terms of vorticity. We can treat the Larmor frequency geometrically by using the same reasoning on \({\mathbf{v}}_{*}\) as on \({\mathbf{v}}\). We see thus that the “vorticity” of the velocity field of a constant magnetic field is such that a charged point-like particle will spin in the field at the cyclotron frequency. Any energy calculations that could be based on the equation \(E=\hbar\omega\) will have to be done using the Larmor frequency and this Larmor frequency can be calculated from:
\[{\boldsymbol{\omega}}_{L}={1\over{2}}\nabla\wedge{\mathbf{v}}_{*}({\mathbf{r}} )=-{1\over{2}}\nabla\wedge{q\over{m_{0}}}{\mathbf{A}}({\mathbf{r}})=-{q\over{2 m_{0}}}\,{\mathbf{B}}.\] (29)
Of course, in the case of a magnetic field there is no real liquid that would push on the charge as on a paddlewheel, but we can also obtain the results without assuming the presence of a liquid. The idea of a liquid is only introduced to obtain a velocity field. As we already have a velocity field \(-{q\over{m_{0}}}{\mathbf{A}}\), we can dispense with the introduction of a liquid to obtain the mathematical result derived. The idea is to describe precession induced by circular motion. Such precession is certainly considered in magnetism, and one even calculates relativistic corrections for it in terms of Thomas precession.
Within a frame in global rotation, the physical effects of rotation that affect the paddlewheel or the charge are independent of the choice of the origin for the local frame. This corresponds to the notion that the magnetic potential is defined up to a constant. Of course, the force that is needed to counterbalance the “centrifugal force” of the rotating frame is provided by the Lorentz force, and this “centrifugal force” will indicate what the true centre of the rotation is. For the calculation of the physical effects of precession, however, it does not matter where the centre of the fictive merry-go-round is. We could take any point as the centre of rotation; it would not make any difference. We may also note that in the limit \(r\to 0\), we reduce the circular current \(q{\mathbf{v}}\) to a point-like monopole, exactly according to the idea introduced above.
Let us further explore these ideas we used to define monopoles and vector potentials, and make the radius of the circular motion of the charged particle in the magnetic field shrink to zero. What will happen then? As the non-relativistic cyclotron frequency is independent of the radius of the circular motion, this will leave us with a point-like spinning motion, even for a spin-less charged particle at rest! A charge in circular motion on an orbit with a diameter that is so small that we cannot see it with the naked eye will be a hidden motion. This hidden motion can be treated by the previous arguments, which lead to the idea that a magnetic field could make a spin-less particle with charge \(q\) at rest spin at an angular frequency \({\boldsymbol{\omega}}_{c}=-{q\over{m_{0}}}\,{\mathbf{B}}\). The point-like hidden motion can be treated as the interaction between the scalar charge \(q\) and the vorticity of the magnetic potential \({\mathbf{A}}\), such that we end up with an interaction of the electric charge with the magnetic field \({\mathbf{B}}\). Energy calculations have to be done, however, with \({\boldsymbol{\omega}}_{L}=-{q\over{2m_{0}}}\,{\mathbf{B}}\). This energy \(\hbar\omega_{L}\) will not be the kinetic energy of the orbit shrunk to zero, because that kinetic energy is also zero. It is the energy due to the precession. The moment of inertia that intervenes in the precession is not related to a mass distribution that corresponds to a point mass at a distance \(r\) from the centre of the circular orbit. It is the moment of inertia of the mass distribution inside the spinning top that visualizes the spinning electron (see Subsection 7.3). Simultaneously, a point-like monopole is not the magnetic charge of a circular current loop whose radius shrinks to zero. In taking this limit, \(v\to 0\), such that there is no current or magnetic charge left, and no true magnetic monopole. As \(L\to 0\) as well, there is also no magnetic moment left. It is the precession that corresponds to the magnetic monopole. The quantity \(cq\) can be symbolically identified with a monopole \(q_{m}\), as we have done. The concept of the magnetic monopole is useful to distinguish a geometrical point modeling a point charge from a spinning point charge. We may note that the precession terms which correspond to the anomalous Zeeman effect and the spin-orbit coupling are thus both related to the hidden rotational energy of the hidden rotational motion. This thus explains why the terms in Eq. 17 exist and how they can be associated with magnetic monopoles, such that their classification is correct.
We may finally note why the merry-go-round metaphor fails for the spin-orbit term. In fact, for a circular motion in a central Coulomb field \({\mathbf{F}}={q_{1}q_{2}\over{4\pi\epsilon_{0}r^{3}}}{\mathbf{r}}\), the rotational frequency \(\omega({\mathbf{r}})\) is not a constant, but depends on \({\mathbf{r}}\), such that the image of a liquid that is bodily rotating no longer holds. The consequence hereof is that the paddlewheel will no longer rotate at the same frequency as the liquid.
### Precession
Precession changes the rest mass of a particle. In [2] we have shown that the Dirac equation can be derived from the Rodrigues equation (Eq. 8 in Section 2) by putting \(\varphi=\omega_{0}\tau\) in the rest frame of the electron and making the assumption \({\hbar\omega_{0}\over{2}}=m_{0}c^{2}\). This assumption was introduced in a hand-waving way, based on the idea that the rest mass of the electron would correspond to the kinetic energy stored in its spinning motion.
The derivation of the Dirac equation proposed in [2] is entirely classical, which is a very surprising fact, as the Dirac equation is the core of the whole machinery of quantum mechanics. The Schrödinger equation can be derived from it. How can these equations then possibly be classical? It is explained in [2] that the properties of quantum mechanics that make it different from classical mechanics are only due to the way we use the Dirac and Schrödinger equations in the calculations when we apply them to specific problems. In fact, in following the motto “shut up and calculate” we unwittingly introduce features into the algebra that are meaningless from the viewpoint of the classical meaning of the algebra. One of those puzzling features is the superposition principle. Just as adding rotation matrices does not lead to a new rotation matrix, adding spinors has _a priori_ no clear meaning. Another of the weird things that we cannot understand classically is the quantization of spin and angular momentum. The algebra of the Dirac equation forces the spin and the angular momentum to align with the magnetic field, as discussed above, and it is difficult to understand classically why it could not be otherwise.
Let us thus here just accept the fact that the spin must be aligned with the magnetic field, as quantum mechanics tells us. In fact, when we consider in the calculations the possibility that the spin or the angular momentum is not aligned, we find that the spin or angular momentum must precess, but that this does not lead to a constant energy. To recover a constant energy, one must introduce another force, and the motion then becomes forced with respect to the magnetic field. If we do not draw this force into the calculations, they are incomplete and it is then vain to try to understand them. What it would mean for the spin not to be aligned without forcing, such that the energy is not constant, is not clear, but this problem is in a sense evaded by the alignment condition. We may thus assume that the treatment ceases to be classical at the moment we accept the alignment condition.
We must now point out that we already know the physical meaning of the anomalous Zeeman effect. In fact, it is now time to remember that the Larmor precession term \({\boldsymbol{\omega}}_{L}{\boldsymbol{\cdot\sigma}}=-{q\over{2m_{0}}}{\mathbf{ B}}{\boldsymbol{\cdot\sigma}}\) we obtained from the discussion in Subsection 7.2 is identical to the term obtained from quantum mechanics in the non-relativistic limit of the Dirac equation for an electron in a magnetic field. When the particle is rotating in its rest frame the precession frequency \(\omega_{c}\) will add up algebraically to the rotation frequency \(\omega_{0}\) of the particle, changing the apparent frequency of its rotation, which is why it eventually entails a correction for the energy \(E={\hbar\omega_{0}\over{2}}\to E=\hbar(\omega_{0}\pm\omega_{c})/2\)\(=m_{0}c^{2}\pm\hbar\omega_{L}\), where the \(\pm\) sign is due to the existence of spin up and spin down states. This gives a very neat explanation for the anomalous Zeeman splitting for an electron at rest. The very important point that follows from this argument is that the amplitude of the anomalous Zeeman effect in the Dirac theory must be strictly identical to that of the orbital Zeeman effect, because on orbit the precession is in phase with the orbital motion due to the merry-go-round effect, and the precession for the particle at rest is obtained from the precession during the orbital motion by letting the radius of the orbit shrink to zero. Moreover the anomalous Zeeman effect is not due to the spin of the electron, but due to _the additional spin_ given to the electron by the interaction of its charge with the magnetic field.
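To get a feeling for the orders of magnitude involved, the following minimal Python sketch (an illustration added here, with CODATA constants and an arbitrarily chosen field of 1 T) evaluates the precession energy \(\hbar\omega_{L}={\hbar qB\over{2m_{0}}}\) for an electron and compares it with the rest energy \(m_{0}c^{2}\):

```python
import math

# physical constants (SI units, CODATA values)
hbar = 1.054571817e-34      # J s
q    = 1.602176634e-19      # C   (elementary charge)
m0   = 9.1093837015e-31     # kg  (electron rest mass)
c    = 2.99792458e8         # m/s
eV   = 1.602176634e-19      # J per electronvolt

B = 1.0                                     # magnetic field in tesla (illustrative value)
omega_L = q * B / (2.0 * m0)                # Larmor frequency |omega_L| = qB/(2 m0)
E_split = hbar * omega_L                    # precession energy hbar * omega_L
E_rest  = m0 * c**2                         # rest energy m0 c^2

print(f"hbar*omega_L = {E_split/eV:.3e} eV at B = 1 T")   # ~5.79e-05 eV (Bohr magneton times B)
print(f"m0*c^2       = {E_rest/eV/1e3:.1f} keV")          # ~511 keV
```

The splitting is the familiar Bohr-magneton value of roughly \(5.8\times 10^{-5}\) eV per tesla, minute compared with the 511 keV rest energy, which is why it can be treated as a small correction to \(\hbar\omega_{0}/2\).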
### Conclusion about the anomalous Zeeman effect
We thus see that the theory is able to calculate the energy values of the two equilibrium states \(m_{0}c^{2}\pm{\hbar qB\over{2m_{0}}}\) of the spinning electron in terms of a charge-dipole interaction, without associating a magnetic dipole with the spin. The idea is thus that a magnetic field makes any charge turn⁸ at a frequency \({\boldsymbol{\omega}}_{c}=-{\gamma q\over{m_{0}}}{\mathbf{B}}\), and with a kinetic energy described by the length of the pseudo-vector \(\hbar{\boldsymbol{\omega}}_{L}=-{\hbar\gamma q\over{2m_{0}}}{\mathbf{B}}\), independently of whether it already has spin or not. But when the particle has spin, only the two states wherein the precession vector and the intrinsic-spin vector are aligned give rise to well-defined energies. The force that makes the particle spin should not be confused with a torque, if only for reasons of dimension. The reason for its existence is purely due to relativistic symmetry.
[FOOTNOTE:8][ENDFOOTNOTE]
We did not introduce spin in Eqs. 15-21. The only quantity in the Dirac equation that contains the spin is the wave function. This is also very clear from reference [2]. This also explains why we can calculate the \(g\)-factor so accurately in quantum electrodynamics: We just do not use a dipole moment \({\boldsymbol{\mu}}_{e}\) due to the electron spin in the calculation of the equilibrium state. At the same time this answers Lorentz’s objection that the hypothetical magnetic dipole moment of the electron is too large to allow for an explanation in terms of a current loop inside the electron: There is simply no spin-associated magnetic dipole moment in the formalism.
We may note that the correct theory for ferromagnetism also does not rely on magnetic dipoles, but on Heisenberg’s mechanism of an exchange interaction which is based on the Pauli principle and the Coulomb interaction. In the Dirac formalism, charge and spin occur in mere juxtaposition, without blending into a more complex quantity like a dipole. The interaction is just defined by the charge, while the formalism shows that the spin \({\mathbf{s}}\) has to line up with the magnetic field \({\mathbf{B}}\), because the wave function must be an eigenstate of the pseudo-vector operator \({\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) or \({\mathbf{B}}{\boldsymbol{\cdot\gamma}}\).
The mere juxtaposition of spin and charge in the Dirac equation looks similar to that in the exchange mechanism. The image of a dipole moment is just not present in the algebra of the Dirac equation. One may speculate about defining a magnetic dipole moment \({\boldsymbol{\mu}}_{e}\), because it flatters our intuition of a little magnet, but this idea that \(-{q\over{2m_{0}}}{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) must willy-nilly correspond to a magnetic dipole moment is a preconceived notion that is just flawed mathematically. As already pointed out above, the appropriate interpretation is that there are three types of interaction: charge-charge, dipole-dipole, and charge-dipole. The anomalous Zeeman effect is a charge-dipole interaction and one should refrain from identifying this charge-dipole interaction with a dipole-dipole interaction. There is thus always a precession energy associated with the presence of a charge \(q\) in a magnetic field \({\mathbf{B}}\), which is the reason why we find this term already in Eq. 21, which does not contain spin.
After all this, there is yet another argument in favor of the interpretation of the term \(-{q\over{2m_{0}}}{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\) proposed in this paper. Dirac’s theory works only well for the leptons [18], as the true value of \(g\) for the electron is given by: \(g_{e}=2.00231930436153(53)\). It works almost as well for the muon \(g_{\mu}=2.0023318416(13)\), and much less well for the neutron (\(g_{n}=3.82608545(90)\)) and the proton (\(g_{p}=-5.585694713(56)\)). It is tempting to assume that in the cases of the proton and the neutron, a true dipole moment \({\boldsymbol{\mu}}\) due to internal currents might intervene⁹ and give rise to some term \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\), while this is just not the case for the electron, which is truly a point particle or is so small that the effect of such currents, if they exist, can be neglected, and only the charge term \(-{\hbar q\over{2m_{0}}}\,[\,{\mathbf{B}}{\boldsymbol{\cdot\sigma}}\,]\) plays a significant rôle.
[FOOTNOTE:9][ENDFOOTNOTE]
## 8 Problems with the traditional derivation of the spin-orbit coupling
### Preliminary remarks
The spin-orbit term \({1\over{m_{0}c}}\,{1\over{r}}\,{\partial U\over{\partial r}}{\mathbf{L}}{\boldsymbol{\cdot\sigma}}\) is easily shown to be equal to \({q\over{c}}\,[\,({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot\sigma}}\,]\), where \(U=qV\) is the potential energy.¹⁰ Following the discussion in Subsubsection 6.1.3 the spin-orbit interaction is of the “dipole”-charge type, due to the presence of the term \(q{\mathbf{v}}\wedge{\mathbf{E}}\). Here \(q{\mathbf{v}}\) represents the “dipole” and \({\mathbf{E}}\) contains the charge. Also here the interpretation of \({\boldsymbol{\sigma}}\) in \({\mathbf{L}}{\boldsymbol{\cdot\sigma}}\) in terms of a “spin operator” \({\mathbf{S}}={\hbar\over{2}}{\boldsymbol{\sigma}}\) is not appropriate. The term \({\mathbf{L}}{\boldsymbol{\cdot\sigma}}\) within \({1\over{m_{0}c}}\,{1\over{r}}\,{\partial U\over{\partial r}}{\mathbf{L}}{\boldsymbol{\cdot\sigma}}\) just represents the angular momentum. Picturing the spin-orbit coupling as a “dipole”-dipole interaction containing a term \({\mathbf{L\cdot S}}\) is in conflict with the symmetry. First of all \({1\over{m_{0}c}}\,{1\over{r}}\,{\partial U\over{\partial r}}{\mathbf{L}}{\boldsymbol{\cdot\sigma}}\) is a vector rather than a scalar. Secondly, its “dipole”-charge interaction symmetry is not compatible with a “dipole”-dipole interaction symmetry. An analogous problem of an over-interpretation of an operator that violates the symmetry occurs in the definition of helicity \({\mathbf{u}}{\boldsymbol{\cdot\sigma}}\), with \({\mathbf{u}}={\mathbf{p}}/p\) for neutrinos in particle physics.
[FOOTNOTE:10][ENDFOOTNOTE]
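The equality quoted above can also be verified symbolically. The following sympy sketch (an illustration; the central potential \(U(r)=k/r\) and the symbol names are choices made here, not taken from the text) checks the underlying vector identity \({1\over{m_{0}c}}{1\over{r}}{\partial U\over{\partial r}}\,{\mathbf{L}}={q\over{c}}\,{\mathbf{v}}\wedge{\mathbf{E}}\) with \({\mathbf{L}}=m_{0}\,{\mathbf{r}}\wedge{\mathbf{v}}\) and \({\mathbf{E}}=-{1\over{q}}{\mathbf{\nabla}}U\):

```python
import sympy as sp

x, y, z, k, m0, c, q = sp.symbols('x y z k m_0 c q', real=True, nonzero=True)
vx, vy, vz = sp.symbols('v_x v_y v_z', real=True)
rs = sp.symbols('r', positive=True)

r_expr = sp.sqrt(x**2 + y**2 + z**2)
U_r = k / rs                                   # illustrative central potential energy U(r) = k/r
U_xyz = U_r.subs(rs, r_expr)

r_vec = sp.Matrix([x, y, z])
v_vec = sp.Matrix([vx, vy, vz])
E_vec = -sp.Matrix([sp.diff(U_xyz, s) for s in (x, y, z)]) / q   # E = -(1/q) grad U
L_vec = m0 * r_vec.cross(v_vec)                                  # L = m0 (r ^ v)

dUdr = sp.diff(U_r, rs).subs(rs, r_expr)                         # dU/dr evaluated on r(x,y,z)
lhs = (1/(m0*c)) * (1/r_expr) * dUdr * L_vec                     # (1/(m0 c)) (1/r) U'(r) L
rhs = (q/c) * v_vec.cross(E_vec)                                 # (q/c) (v ^ E)

print((lhs - rhs).applyfunc(sp.simplify))   # expect the zero vector: Matrix([[0], [0], [0]])
```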
The frequency \(\omega_{T}\) of the Thomas precession correction is given [9; 10] by:
\[\omega_{T}={\gamma^{2}\over{\gamma+1}}{1\over{c^{2}}}{\mathbf{v}}\wedge{ \mathbf{a}}.\] (30)
In an electric field \({\mathbf{E}}\), this can be written as \({\gamma\over{\gamma+1}}{1\over{m_{0}c^{2}}}{\mathbf{v}}\wedge q{\mathbf{E}}\). For \(\gamma\approx 1\) this corresponds to \({1\over{2m_{0}c^{2}}}{\mathbf{v}}\wedge q{\mathbf{E}}\). To correct the spin-orbit term for the Thomas precession we must subtract \(\hbar\omega_{T}/2\) from its absolute value. This has thus absolutely nothing to do with an electric dipole moment induced by the relativistic motion of a magnetic dipole moment associated with the spin of the electron, as has often been claimed [11]. As we have argued all along, such a hypothetical magnetic dipole moment does not even exist in the context of the Dirac equation. The Thomas precession is a relativistic correction term that contributes to the global spin-orbit precession along the orbit. The global expression is equal to the Thomas precession with the reversed sign.
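As a small numerical aside (an illustration with arbitrarily chosen velocities), one can see how quickly the prefactor \({\gamma^{2}\over{\gamma+1}}\) of Eq. 30 approaches the non-relativistic value \({1\over{2}}\) used above:

```python
import math

def thomas_prefactor(beta):
    """Prefactor gamma^2/(gamma+1) appearing in Eq. 30."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma**2 / (gamma + 1.0)

for beta in (0.01, 0.1, 0.3, 0.6, 0.9):
    print(f"beta = {beta:4.2f}   gamma^2/(gamma+1) = {thomas_prefactor(beta):.4f}")
# beta -> 0 gives 0.5, reproducing the non-relativistic limit (1/(2 m0 c^2)) v ^ qE quoted above
```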
#### 8.1.1 Simple derivation of the expressions for the Thomas precession
For what will follow it might be useful to explain carefully what Thomas precession is. There are actually two equations for the Thomas precession that one can prove. The first one summarizes the conceptual definition of the effect, which is that the composition of two boosts is a composition of a boost and a rotation: \({\mathbf{B}}({\mathbf{v}}_{2}){\mathbf{B}}({\mathbf{v}}_{1})={\mathbf{R}}({ \mathbf{n}},\vartheta_{T}){\mathbf{B}}({\mathbf{v}})\). The rotation axis \({\mathbf{n}}\) is here perpendicular to both \({\mathbf{v}}_{1}\) and \({\mathbf{v}}_{2}\), such that it is just the rotation angle \(\vartheta_{T}\) and the boost parameter \({\mathbf{v}}\) that have to be calculated. But specifying the Thomas precession requires only the calculation of \(\vartheta_{T}\). The second equation formulates the effect of Thomas precession on an orbit, and states that \({d\vartheta_{T}\over{dt}}={\gamma^{2}\over{(\gamma+1)c^{2}}}{\mathbf{v}}\wedge {\mathbf{a}}\).
The following derivation of the first equation follows the argument in Subsection VIII D of [9]. We note \(\gamma=\cosh w\), \(\beta\gamma=\sinh w\), \(\beta=\tanh w\). We have then \(\cosh{w\over{2}}=\sqrt{{\gamma+1\over{2}}}\), \(\sinh{w\over{2}}=\sqrt{{\gamma-1\over{2}}}\), \(\tanh{w\over{2}}=\sqrt{{\gamma-1\over{\gamma+1}}}\). In SL(2,\({\mathbb{C}}\)), a boost with velocity \({\mathbf{v}}=v{\mathbf{u}}\) is given by \({\mathbf{B}}({\mathbf{v}})=\cosh{w\over{2}}\,{\mathds{1}}-\sinh{w\over{2}}\,[ \,{\mathbf{u}}{\boldsymbol{\cdot\sigma}}\,]\). Here \({\mathbf{u}}\) is the unit vector parallel to \({\mathbf{v}}\). We must now calculate \({\mathbf{B}}({\mathbf{v}}_{2}){\mathbf{B}}({\mathbf{v}}_{1})\). For simplicity, we can take \({\mathbf{u}}_{1}={\mathbf{e}}_{x}\), \({\mathbf{u}}_{2}=\cos\alpha\,{\mathbf{e}}_{x}+\sin\alpha\,{\mathbf{e}}_{y}\). We find then:
\[{\mathbf{B}}({\mathbf{v}}_{2}){\mathbf{B}}({\mathbf{v}}_{1}) =(\cosh{w_{2}\over{2}}\,\cosh{w_{1}\over{2}}+\sinh{w_{2}\over{2}} \,\sinh{w_{1}\over{2}}\,\cos\alpha)\,{\mathds{1}}\]
\[-\imath\,\sinh{w_{2}\over{2}}\,\sinh{w_{1}\over{2}}\,\sin\alpha\, [\,{\mathbf{e}}_{z}{\boldsymbol{\cdot\sigma}}\,]-\cosh{w_{2}\over{2}}\,\sinh{w _{1}\over{2}}\,[\,{\mathbf{e}}_{x}{\boldsymbol{\cdot\sigma}}\,]\]
\[-\sinh{w_{2}\over{2}}\,\cosh{w_{1}\over{2}}\,[\,(\cos\alpha\,{ \mathbf{e}}_{x}+\sin\alpha\,{\mathbf{e}}_{y}){\boldsymbol{\cdot\sigma}}\,].\] (31)
This must be equal to the product \({\mathbf{R}}(\vartheta_{T},{\mathbf{e}}_{z})\,{\mathbf{B}}({\mathbf{v}})\)\(=\)\((\,\cos{\vartheta_{T}\over{2}}\,{\mathds{1}}-\imath\sin{\vartheta_{T}\over{2}}\,[\,{\mathbf{e}}_{z}{\boldsymbol{\cdot\sigma}}\,]\,)\)\(\,(\cosh{w\over{2}}\,{\mathds{1}}-\sinh{w\over{2}}\,[\,{\mathbf{u}}{\boldsymbol{\cdot\sigma}}\,])\) of a rotation around the \(z\)-axis and a boost:
\[{\mathbf{R}}(\vartheta_{T},{\mathbf{e}}_{z})\,{\mathbf{B}}({ \mathbf{v}}) =\cos{\vartheta_{T}\over{2}}\,\cosh{w\over{2}}\,{\mathds{1}}- \imath\sin{\vartheta_{T}\over{2}}\,\cosh{w\over{2}}\,[\,{\mathbf{e}}_{z}{ \boldsymbol{\cdot\sigma}}\,]\]
\[-\cos{\vartheta_{T}\over{2}}\,\sinh{w\over{2}}\,[\,{\mathbf{u}}{ \boldsymbol{\cdot\sigma}}\,]-\sin{\vartheta_{T}\over{2}}\,\sinh{w\over{2}}\,[ \,({\mathbf{e}}_{z}\wedge{\mathbf{u}}){\boldsymbol{\cdot\sigma}}\,].\] (32)
By identifying the parts containing the unit matrix \({\mathds{1}}\) and the parts containing \(\imath[\,{\mathbf{e}}_{z}{\boldsymbol{\cdot\sigma}}\,]\) one obtains:
\[\tan{\vartheta_{T}\over{2}}={\tanh{w_{1}\over{2}}\,\tanh{w_{2}\over{2}}\,\sin\alpha\over{1+\tanh{w_{1}\over{2}}\,\tanh{w_{2}\over{2}}\cos\alpha}},\] (33)
which is Eq. 145 in reference [9].
The second equation can be derived from this equation. But one can calculate the identity \({d\vartheta_{T}\over{dt}}={\gamma^{2}\over{(\gamma+1)c^{2}}}{\mathbf{v}}\wedge{\mathbf{a}}\) also directly, by considering the identity: \({\mathbf{B}}(d{\mathbf{v}}_{\perp}){\mathbf{B}}({\mathbf{v}})={\mathbf{R}}({\mathbf{n}},d\vartheta_{T}){\mathbf{B}}({\mathbf{v}}\oplus d{\mathbf{v}}_{\perp})\), where \({\mathbf{v}}\oplus{\mathbf{w}}\) denotes the boost vector associated with the composition of boosts \(B({\mathbf{w}})\circ B({\mathbf{v}})\). We are considering here only \(d{\mathbf{v}}_{\perp}\) as collinear boosts do not lead to Thomas precession. By using Taylor expansions one can show that to first order \({\mathbf{B}}(d{\mathbf{v}}_{\perp})\)\(=\)\({\mathds{1}}-{1\over{2c}}\,dv_{\perp}\,[\,({\mathbf{e}}_{z}\wedge{\mathbf{u}}){\boldsymbol{\cdot\sigma}}\,]\). One obtains then \({\mathbf{B}}(d{\mathbf{v}}_{\perp}){\mathbf{B}}({\mathbf{v}})=\sqrt{{\gamma+1\over{2}}}\,{\mathds{1}}\)\(-{1\over{2c}}\sqrt{{\gamma+1\over{2}}}\,dv_{\perp}\,[\,({\mathbf{e}}_{z}\wedge{\mathbf{u}}){\boldsymbol{\cdot\sigma}}\,]\)\(-\sqrt{{\gamma-1\over{2}}}\,[\,{\mathbf{u}}{\boldsymbol{\cdot\sigma}}\,]\)\(-{\imath\over{2c}}\,\sqrt{{\gamma-1\over{2}}}\,dv_{\perp}\,[\,{\mathbf{e}}_{z}{\boldsymbol{\cdot\sigma}}\,]\). Here \(\gamma\) is the Lorentz factor that corresponds to \(v\). The identification of this result with Eq. 32 then yields: \({d\vartheta_{T}\over{2}}={1\over{2c}}dv_{\perp}\sqrt{{\gamma-1\over{\gamma+1}}}\) (whereby we have of course replaced \(\vartheta_{T}\) by \(d\vartheta_{T}\)). From this one obtains \({d\vartheta_{T}\over{d\tau}}={\gamma\over{\gamma+1}}{1\over{c^{2}}}{\mathbf{v}}\wedge{\mathbf{a}}\) in the co-moving frame, i.e. the rest frame of the electron. Here \(\tau\) is the proper time of the electron, i.e. the time in the co-moving frame. Taking into account the time dilatation this yields \({d\vartheta_{T}\over{dt}}={\gamma^{2}\over{\gamma+1}}{1\over{c^{2}}}{\mathbf{v}}\wedge{\mathbf{a}}\) in the laboratory frame. Here \({\mathbf{a}}_{\perp}\) takes the same value in the laboratory frame and in the co-moving frame because it is perpendicular to \({\mathbf{v}}\). This is presumably the simplest derivation possible of the expression for the Thomas precession. It avoids using hyperbolic geometry to derive this result, as was done in reference [9]. We can also see from this derivation why the Dirac equation with the minimal substitution cannot be used to derive the Thomas precession. As discussed in Subsection 8.2 (and explained in reference [2]), the minimal substitution accounts for the instantaneous value of \({\mathbf{B}}({\mathbf{v}})\) in \(({\mathbf{r}},t)\), but does not take care of \({\mathbf{a}}_{\perp}\) or \({\mathbf{R}}(\vartheta_{T},{\mathbf{e}}_{z})\).
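Equation 33 can also be checked numerically. The following sketch (an illustration, with arbitrarily chosen rapidities and angle, using the SL(2,\({\mathbb{C}}\)) conventions introduced above) composes two boosts as \(2\times 2\) matrices, extracts the Wigner angle from the coefficients of \({\mathds{1}}\) and \([\,{\mathbf{e}}_{z}{\boldsymbol{\cdot\sigma}}\,]\) as in Eq. 32, and compares it with Eq. 33:

```python
import numpy as np

# Pauli matrices and identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def boost(w, u):
    """SL(2,C) boost with rapidity w along unit vector u: cosh(w/2) 1 - sinh(w/2) [u.sigma]."""
    u_sigma = u[0]*s1 + u[1]*s2 + u[2]*s3
    return np.cosh(w/2)*I2 - np.sinh(w/2)*u_sigma

w1, w2, alpha = 0.8, 1.3, 0.6                 # illustrative rapidities and angle between the boosts
u1 = np.array([1.0, 0.0, 0.0])                # u1 = e_x
u2 = np.array([np.cos(alpha), np.sin(alpha), 0.0])

M = boost(w2, u2) @ boost(w1, u1)             # B(v2) B(v1), as in Eq. 31

# Eq. 32: coefficient of 1 is cos(theta_T/2) cosh(w/2),
#         coefficient of -i [e_z.sigma] is sin(theta_T/2) cosh(w/2).
coef_1 = (np.trace(M) / 2).real
coef_z = -(np.trace(M @ s3) / 2).imag
theta_numeric = 2*np.arctan2(coef_z, coef_1)

t1, t2 = np.tanh(w1/2), np.tanh(w2/2)
theta_formula = 2*np.arctan2(t1*t2*np.sin(alpha), 1 + t1*t2*np.cos(alpha))   # Eq. 33

print(theta_numeric, theta_formula)           # both print the same value
```

Letting \(w_{2}\) shrink in this sketch reproduces, to first order, the infinitesimal result \(d\vartheta_{T}/2=(1/2c)\,dv_{\perp}\sqrt{(\gamma-1)/(\gamma+1)}\) derived above.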
### Generalization of Dirac’s minimal substitution for the case of a moving charge
As already stated in Subsection 5.2, Dirac’s minimal substitution is not general enough as it does not account for the interactions of the current \(q{\mathbf{v}}\) of the moving electron with the electromagnetic potential. It is explained in [2] that in the context of the Dirac equation, the minimal substitution must not be seen as an abstract rule that one copies mechanically from Lagrangian dynamics and that one can justify with hindsight by the fact that it works. The goal of the minimal substitution is to introduce the true instantaneous kinetic parameters \((\gamma,\gamma{\mathbf{v}}/c)\), that will permit us to write the true instantaneous Lorentz transformation for the time. As explained in [2], the primary idea is that the transformation of the time \(ct^{\prime}=\gamma(ct-{\mathbf{v\cdot x}}/c)\) under a free-space Lorentz boost can also be written using the parameters \((E,c{\mathbf{p}})=m_{0}c^{2}(\gamma,\gamma{\mathbf{v}}/c)\) and \(m_{0}c^{2}\) instead of \((\gamma,\gamma{\mathbf{v}}/c)\). In fact, the Dirac equation is derived from the Rodrigues equation by transforming \(({d\over{dc\tau}},0,0,0)\)\(\rightarrow\)\(({\partial\over{\partial ct}},{\partial\over{\partial x}},{\partial\over{\partial y}},{\partial\over{\partial z}})\) according to the boost that transforms \((m_{0}c^{2},0,0,0)\rightarrow(E,c{\mathbf{p}})\).¹¹ But in a potential \(V\), the rest energy of a particle is no longer \(m_{0}c^{2}\) but \(m_{0}c^{2}+qV\), such that the parameters \((E,c{\mathbf{p}})\) of a moving particle are no longer the correct set of kinetic parameters to describe the instantaneous Lorentz transformation, because \(E\) is no longer purely kinetic. To be able to write the correct Lorentz transformation, one must know the “true kinetic energy”, i.e. that part of the total energy that is not potential energy. This leads to the substitution \(E\to E-qV\), \({\mathbf{p}}\rightarrow{\mathbf{p}}-q{\mathbf{A}}\). The idea is that for an electron at rest in an electric potential, the equation is \(-{\hbar\over{\imath}}{d\over{d\tau}}\gamma_{t}\psi=(m_{0}c^{2}+qV)\,\psi\), where \(\tau\) is the time in the rest frame. By generalizing this to a general frame by Lorentz covariance we then arguably obtain the general Dirac equation with the minimal substitution \(E\to E-qV\), \({\mathbf{p}}\rightarrow{\mathbf{p}}-q{\mathbf{A}}\).
[FOOTNOTE:11][ENDFOOTNOTE]
The problem is here that \((qV,cq{\mathbf{A}})\) is not exactly what one would obtain from \((qV,{\mathbf{0}})\) by transforming it to a general frame. Whereas the four-potential \((V,c{\mathbf{A}})\) is Lorentz covariant, the quantity \(q\) is not, because \(q\) is part of a charge-current four-vector \((\gamma q,\gamma q{\mathbf{v}}/c)\). Therefore \((qV,cq{\mathbf{A}})\) is not the correct Lorentz covariant generalization for the expression that must be used in the most general substitution. In the most general substitution we would need instead of \(qV\) and \(q{\mathbf{A}}\) the terms that we can obtain by considering: \([\,\gamma q\,{\mathds{1}}+\gamma{q\over{c}}{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\,]\,[\,V{\mathds{1}}+c{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\,]\). The result will lead to two terms \(qV\) and \(qc{\mathbf{A}}\) with the same sign. This is needed because the terms \(qV\) and \(qc{\mathbf{A}}\) are used with the same signs in the traditional minimal substitution. Furthermore, we obtain a term \(\gamma q\,({\mathbf{v\cdot A}})\,{\mathds{1}}\), which after the substitutions \(E\to E-qV,{\mathbf{p}}\rightarrow{\mathbf{p}}-q{\mathbf{A}}\), will lead to a term \(-\gamma q\,({\mathbf{v\cdot A}})\,{\mathds{1}}\), whose sign is in conformity with the discussion in Subsubsection 7.2.1. It may look confusing that in the derivation of Eq. 13 and Eq. 34 we have not used the strategy of alternating signs as would follow from the rules that prevail for SL(2,\({\mathbb{C}}\)). For Eq. 13 this may be due to the way the magnetic potential has been historically defined. For Eq. 34, we note that we can make juxtapositions like \([\,\gamma q\,{\mathds{1}}+\gamma{q\over{c}}{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\,]\,[\,V{\mathds{1}}+c{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\,]\) with any number of terms and any choice of signs. The goal is to check to what kind of multi-vectors this leads. That is also what we do here, and the choice made leads to the correct results for the further developments. These considerations lead to:
\[[\,\gamma q{\mathds{1}}+\gamma q{{\mathbf{v}}\over{c}}{\boldsymbol{\cdot\sigma }}\,]\,[\,V\,{\mathds{1}}+c{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\,]=\]
\[\gamma qV\,{\mathds{1}}+\gamma qc{\mathbf{A}}{\boldsymbol{\cdot\sigma}}+\gamma {q\over{c}}V{\mathbf{v}}{\boldsymbol{\cdot\sigma}}+\gamma q\,({\mathbf{v\cdot A }})\,{\mathds{1}}+\imath\gamma q({\mathbf{v}}\wedge{\mathbf{A}}){\boldsymbol{ \cdot\sigma}}.\] (34)
These are the quantities that one would have to use in the substitution that generalizes the minimal substitution.
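Equation 34 is just the product rule \([\,{\mathbf{a}}{\boldsymbol{\cdot\sigma}}\,]\,[\,{\mathbf{b}}{\boldsymbol{\cdot\sigma}}\,]=({\mathbf{a\cdot b}})\,{\mathds{1}}+\imath({\mathbf{a}}\wedge{\mathbf{b}}){\boldsymbol{\cdot\sigma}}\) applied term by term, which a short numerical sketch (with randomly chosen, purely illustrative values) confirms:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

def dot_sigma(a):
    """Return [a.sigma] = a_x sigma_x + a_y sigma_y + a_z sigma_z."""
    return sum(a[i] * s[i] for i in range(3))

rng = np.random.default_rng(0)
gamma, q, c, V = 1.7, -1.0, 1.0, 0.4          # illustrative numbers
v = rng.normal(size=3)                         # velocity vector (arbitrary)
A = rng.normal(size=3)                         # magnetic potential vector (arbitrary)

lhs = (gamma*q*I2 + gamma*(q/c)*dot_sigma(v)) @ (V*I2 + c*dot_sigma(A))

rhs = (gamma*q*V*I2                            # gamma q V 1
       + gamma*q*c*dot_sigma(A)                # gamma q c [A.sigma]
       + gamma*(q/c)*V*dot_sigma(v)            # gamma (q/c) V [v.sigma]
       + gamma*q*np.dot(v, A)*I2               # gamma q (v.A) 1
       + 1j*gamma*q*dot_sigma(np.cross(v, A))) # i gamma q [(v ^ A).sigma]

print(np.allclose(lhs, rhs))   # True: the five terms of Eq. 34 are recovered
```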
### How we obtain the spin-orbit-coupling term from the Dirac equation with the generalized substitution
The generalized substitution was given in Eq. 34. The term \(\gamma{q\over{c}}V{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\) in Eq. 34 is a vector and cannot contribute to the potential energy. However, after squaring the Dirac equation, which will produce the announced variant \([\,{\partial\over{\partial ct}}{\mathds{1}}-{\mathbf{\nabla\cdot}}{\boldsymbol{\sigma}}\,]\,[\,\gamma q{\mathds{1}}+\gamma q{{\mathbf{v}}\over{c}}{\boldsymbol{\cdot\sigma}}\,]\,[\,V\,{\mathds{1}}+c{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\,]\), it will lead to a term containing \([\,{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,[\,\gamma{q\over{c}}V{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\,]\). For \({\mathbf{A}}={\mathbf{0}}\), this will lead in the non-relativistic limit to a term \({q\over{c}}({\mathbf{\nabla\cdot}}(V{\mathbf{v}}))\,{\mathds{1}}\) and a term \(\imath{q\over{c}}({\mathbf{\nabla}}\wedge V{\mathbf{v}}){\boldsymbol{\cdot\sigma}}\). The first term leads to \(-{q\over{c}}({\mathbf{v\cdot E}})\,{\mathds{1}}\)\(+{qV\over{c}}{\mathbf{\nabla\cdot v}}\,{\mathds{1}}\). In the former term we can recognize the power term in Eq. 21. The calculation of the term \(\imath{q\over{c}}({\mathbf{\nabla}}\wedge V{\mathbf{v}}){\boldsymbol{\cdot\sigma}}\) yields \(\imath{q\over{c}}[\,V\,({\mathbf{\nabla}}\wedge{\mathbf{v}})+{\mathbf{v}}\wedge{\mathbf{E}}\,]{\boldsymbol{\cdot\sigma}}\). For uniform circular motion \({\mathbf{\nabla}}\wedge{\mathbf{v}}=2{\boldsymbol{\omega}}\). The term \(V({\mathbf{\nabla}}\wedge{\mathbf{v}})\) becomes then \(2V{\boldsymbol{\omega}}=-2{\mathbf{v}}\wedge{\mathbf{E}}\). In total we have thus \(V\,({\mathbf{\nabla}}\wedge{\mathbf{v}})+{\mathbf{v}}\wedge{\mathbf{E}}=-{\mathbf{v}}\wedge{\mathbf{E}}\), such that \(\imath{q\over{c}}({\mathbf{\nabla}}\wedge V{\mathbf{v}}){\boldsymbol{\cdot\sigma}}=-\imath{q\over{c}}({\mathbf{v}}\wedge{\mathbf{E}}){\boldsymbol{\cdot\sigma}}\). After multiplication by \(-{\hbar\over{2m_{0}c}}\) the term that goes with \(\imath\) becomes \({\hbar\over{2m_{0}^{2}c^{2}}}({1\over{r}}{\partial U\over{\partial r}}){\mathbf{L}}\), which is the spin-orbit precession in an electric field, _but without the correction for Thomas precession_.
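The intermediate step \({\mathbf{\nabla}}\wedge(V{\mathbf{v}})=-{\mathbf{v}}\wedge{\mathbf{E}}\) can be checked with sympy for a rigid-rotation velocity field \({\mathbf{v}}=\omega\,{\mathbf{e}}_{z}\wedge{\mathbf{r}}\) and a Coulomb-type potential \(V=k/r\), restricted to the orbital plane \(z=0\) (the explicit field choices are illustrative, consistent with the uniform circular motion assumed above):

```python
import sympy as sp

x, y, z, k, w = sp.symbols('x y z k omega', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

V = k / r                                         # Coulomb-type potential
E = [-sp.diff(V, s) for s in (x, y, z)]           # E = -grad V
v = [-w*y, w*x, sp.Integer(0)]                    # rigid rotation: v = omega e_z ^ r

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

lhs = curl([V*vi for vi in v])                    # curl(V v)
rhs = [-c_i for c_i in cross(v, E)]               # -(v ^ E)

print([sp.simplify((l - r_).subs(z, 0)) for l, r_ in zip(lhs, rhs)])   # [0, 0, 0] in the plane z = 0
```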
### Apparent failure with respect to the traditional derivation
If we derive the Pauli equation from the Dirac equation with the generalized substitution, it will thus also contain the spin-orbit term, with the correct sign, but without the correction for Thomas precession. Traditionally, the spin-orbit term is derived from the Dirac equation with the minimal substitution \(E\to E-qV,{\mathbf{p}}\rightarrow{\mathbf{p}}-q{\mathbf{A}}\). This seems to entirely discredit our approach as it makes it look as though it contradicts our criticism that this substitution does not take into account the velocity of the electron. By combining the traditional approach with the present approach it may even seem that we get the result for the spin-orbit coupling twice. But as already stated previously, all this is not true because the traditional approach is wrong and _it is not possible at all_ to derive the spin-orbit term from the Dirac equation based on the traditional minimal substitution. Let us now explain why.
### Errors in the traditional derivation
#### 8.5.1 How a change of basis offers a first hint about the existence of an error
Let us point out in which way the traditional derivation is based on flawed mathematics. The way we are writing the Dirac equation in this paper is different from the traditional one. In the Weyl representation we have:
\[\begin{pmatrix}0&-{\hbar\over{\imath}}{\partial\over{\partial ct}}\,{\mathds{1}}+{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\\ -{\hbar\over{\imath}}{\partial\over{\partial ct}}\,{\mathds{1}}-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}&0\end{pmatrix}\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}=m_{0}c\,\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix},\] (35)
while in the traditional representation we have:
\[\begin{pmatrix}-{\hbar\over{\imath}}{\partial\over{\partial ct}}\,{\mathds{1}}&-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\\ {\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}&{\hbar\over{\imath}}{\partial\over{\partial ct}}\,{\mathds{1}}\end{pmatrix}\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}=m_{0}c\,\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}.\] (36)
The difference just corresponds to a change of basis in \({\mathbb{R}}^{5}\) provided with a metric \(x_{4}^{2}+x_{5}^{2}-x^{2}-y^{2}-z^{2}\), whereby we change the orientation of the \(ct\)-axis between the axes \(Ox_{4}\) and \(Ox_{5}\) in the \(Ox_{5}x_{4}\) plane, e.g. by a rotation over \({\pi\over{2}}\). We can consider space-time as embedded into the five-dimensional space, and its mathematical properties do not depend on the way it is embedded: Meaningful mathematical properties do not depend on a choice of basis. The two representations are therefore equivalent by a similarity transformation. This argument actually indicates how one can try to prove Pauli’s theorem that all choices of gamma matrices are equivalent. Equivalent choices of gamma matrices correspond just to different choices of a basis. Now, in the Weyl representation, the Dirac equation for an electron in a Coulomb field will be:
\[\begin{pmatrix}0&(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c}})\,{\mathds{1}}+{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\\ (-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c}})\,{\mathds{1}}-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}&0\end{pmatrix}\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}=m_{0}c\,\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}.\] (37)
We have thus:
\[[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c} })\,{\mathds{1}}+{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma }}\,]\,\psi_{2} =m_{0}c\,\psi_{1},\]
\[[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c}})\,{\mathds{1}}-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{1}=m_{0}c\,\psi_{2},\] (38)
which leads to:
\[[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c} })\,{\mathds{1}}+{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma }}\,]\,[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c}})\,{ \mathds{1}}-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,] \,\psi_{1} =m_{0}^{2}c^{2}\,\psi_{1},\]
\[[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c}})\,{\mathds{1}}-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}}-{qV\over{c}})\,{\mathds{1}}+{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{2}=m_{0}^{2}c^{2}\,\psi_{2}.\] (39)
The point is now that these equations are completely decoupled. The terms that do not contain \(V\) are e.g. \([\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}})\,{\mathds{1}}+{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,[\,(-{\hbar\over{\imath}}{\partial\over{\partial ct}})\,{\mathds{1}}-{\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{1}=[\,\hbar^{2}(\Delta-{1\over{c^{2}}}{\partial^{2}\over{\partial t^{2}}})\,{\mathds{1}}\,]\,\psi_{1}\), for the first equation. The terms that combine \((-{\hbar\over{\imath}}{\partial\over{\partial ct}})\,{\mathds{1}}\) and \({qV\over{c}}\,{\mathds{1}}\) give rise to: \({2\hbar\over{\imath}}{qV\over{c^{2}}}{\partial\psi_{1}\over{\partial t}}\), where we have assumed that \(V\) does not vary with time. The interesting terms with vector symmetry are those that combine \({\hbar\over{\imath}}{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\) and \({qV\over{c}}\,{\mathds{1}}\). They give rise to \({\hbar\over{\imath}}{q\over{c}}\,[\,{\mathbf{E}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{1}\). There is thus a term \([\,{\mathbf{E}}{\boldsymbol{\cdot\sigma}}\,]\psi_{j}\), but no term containing \([\,({\mathbf{E}}\wedge{\mathbf{p}}){\boldsymbol{\cdot\sigma}}\,]\psi_{j}\). There is not even a term containing \([\,{\mathbf{p}}{\boldsymbol{\cdot\sigma}}\,]\psi_{j}\), because the two terms containing \({qV\over{c}}{\hbar\over{\imath}}\,[\,{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\) have opposite signs and cancel in the algebra. This is an exact result. There is here no fuss about approximations or series expansions. The vector operators working on \(\psi_{j}\) are all based on vector quantities in the plane of motion, while \(({\mathbf{E}}\wedge{\mathbf{p}}){\boldsymbol{\cdot\sigma}}\) is perpendicular to it. The term \(({\mathbf{E}}\wedge{\mathbf{p}}){\boldsymbol{\cdot\sigma}}\) can thus never occur in the algebra if it is carried out correctly.
The operator \({2\hbar\over{\imath}}{qV\over{c^{2}}}{\partial\over{\partial t}}\,{\mathds{1}}\) will admittedly lead to a vector term that is perpendicular to the plane of motion, that becomes equal to \(-\imath\,[\,{\boldsymbol{\omega}}{\boldsymbol{\cdot\sigma}}\,]=-\imath\omega\, [\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\) for a particle at rest. Here \({\mathbf{s}}\) is the spin axis, which is in general identified with \({\mathbf{e}}_{z}\), while the plane of motion is identified with the \(Oxy\) plane. After the necessary algebra this term will lead to a change \(m_{0}\to m_{0}+qV/c^{2}\) for the rest mass or \(\hbar\omega_{0}\rightarrow\hbar\omega=\hbar\omega_{0}+qV\) for the rest energy of the particle in a potential. As explained in Subsection 8.2, the purpose of the minimal substitution was exactly to reproduce this result. Also the term \({2\hbar\over{\imath}}{qV\over{c^{2}}}{\partial\psi_{1}\over{\partial t}}\,\) does thus not incorporate the spin-orbit coupling.
As the traditional representation is equivalent to the Weyl representation up to a similarity transformation, this conclusion must also be valid in the traditional representation. Note that the difference between the Weyl representation and the traditional interpretation does not affect the coordinates \((x,y,z)\), as it corresponds only to a change of the orientation of the time axis in the \((x_{4},x_{5})\)-plane. The two representations must thus lead to the same results and to the same expressions for the results. This suggests that something with the traditional derivation of the spin-orbit term must be wrong.
We can further argue that the only way to derive the terms for the spin-orbit coupling correctly from a Dirac equation is using the extended substitution we introduced in Eq. 34. In fact, to obtain a term \([\,({\mathbf{E}}\wedge{\mathbf{p}}){\boldsymbol{\cdot\sigma}}\,]\psi_{j}\) one needs a succession \([\,{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,V\,[\,{\mathbf{p}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{j}\). This can never be obtained from a succession of two \(2\times 2\) SL(2,\({\mathbb{C}}\)) operators like the one that occurs in Eq. 39. A succession of three such operators is needed. With the extended substitution one obtains the succession \([\,{\mathbf{\nabla}}{\boldsymbol{\cdot\sigma}}\,]\,V\,[\,{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{j}\) after squaring the Dirac equation, and this will lead to the required spin-orbit term. The traditional approach is unable to accomplish this because it misses the presence of \(\gamma\,[\,{qV\over{c}}\,{\mathbf{v}}{\boldsymbol{\cdot\sigma}}\,]\) in the minimal substitution.
#### 8.5.2 The origin of the error: the solutions are mixed states
Nevertheless, the traditional approach presents a derivation of the spin-orbit term whereby it even seems to account correctly for the Thomas precession. This traditional derivation of the spin-orbit coupling is a piece of wrong algebra that by good fortune produces the correct physical result desired. As explained in the preceding lines, it starts from the wrong minimal substitution which can never yield the desired result because it does not contain the interactions with the current. It then introduces a second error to obtain the result that agrees with the experimental observations by brute force. What the second error is can be discovered by considering the Dirac equation for an electron at rest in the absence of any electromagnetic field. In the Weyl representation this leads to:
\[-{\hbar\over{\imath}}{\partial\over{\partial ct}}\,\psi_{2}=m_{0}c\,\psi_{1},\]
\[-{\hbar\over{\imath}}{\partial\over{\partial ct}}\,\psi_{1}=m_{0}c\,\psi_{2},\] (40)
and after “squaring” to:
\[-\hbar^{2}{\partial^{2}\over{\partial(ct)^{2}}}\,\psi_{1}=m_{0}^{2}c^{2}\,\psi_{1},\]
\[-\hbar^{2}{\partial^{2}\over{\partial(ct)^{2}}}\,\psi_{2}=m_{0}^{2}c^{2}\,\psi_{2}.\] (41)
A viable simultaneous solution of Eqs. 40 and 41 is:
(42)
In the traditional representation, the Dirac equation becomes:
\[-{\hbar\over{\imath}}{\partial\over{\partial ct}}\,\psi_{1}=m_{0}c\,\psi_{1},\]
\[-{\hbar\over{\imath}}{\partial\over{\partial ct}}\,\psi_{2}=-m_{0}c\,\psi_{2}.\] (43)
This needs no further squaring as the equations are already decoupled. One possible solution is:
(44)
What transpires from the solution in the traditional presentation is that the spinor is a mixed state. It contains two pure states that lead to the same rest energy \(m_{0}c^{2}\) for the electron, one with the electron spinning counterclockwise and one with the electron spinning clockwise around the \(z\)-axis:
(45)
The mixed state must of course still be normalized. In the mixed state each pure state has thus the same probability \({1\over{2}}\). In fact, the rotation angle \(\omega_{0}t\) is an algebraic quantity. Feynman found out also that one has to use such mixed states composed of pure states with opposite sign and equal probabilities in quantum electrodynamics. In following the tradition to associate negative-energy states with anti-particles, the mixed state contains particles and anti-particles with equal probabilities.
Feynman wondered what the meaning of this would be. The solution of that riddle is that the mixed states describe statistical ensembles of electrons. Negative frequencies already occur in SU(2), which is just Euclidean geometry and does not contain anti-particles. There is therefore absolutely no necessity to associate negative frequencies with anti-particles. It is more appropriate to associate negative frequencies with particles that spin the other way around than those that give rise to positive frequencies. Both signs of the frequency will however yield the same kinetic rotational energy, and therefore the same rest mass and energy for the electron. Considering such a mixed state makes thus perfect physical sense, while the mixed state based on antiparticles is puzzling.
Of course, the interpretation of the negative frequencies we are proposing here clashes once more acrimoniously with currently accepted notions. But in reference [2] we give a derivation of the Dirac equation, that does not rely at all on the existence of anti-particles. Charge conjugation symmetry is just not part of the picture. As we explain in reference [2] we can consider anti-particles in a later stage, and then the traditional arguments can be used to show that we should associate them with negative frequencies. But this is then a different representation than the original one. The situation is a little bit like that with the two different versions of SL(2,\({\mathbb{C}}\)). Just as the two versions of SL(2,\({\mathbb{C}}\)) are related one to another by parity transformation and should not be merged into a single SL(2,\({\mathbb{C}}\)) formalism, the two particle and anti-particle representations are related by charge-conjugation and should not be merged into a single formalism, because a negative frequency would then acquire two different, mutually exclusive interpretations. Moreover in the merged formalism, the rest energies of a positron and an electron are truly considered as opposite, such that they would add up to zero, while they should add up to twice 511 keV. Of course, the alternative interpretation of the negative frequencies puts the whole discussion about Majorana and Dirac neutrinos in a different context.¹²
[FOOTNOTE:12][ENDFOOTNOTE]
Also in the case of the Weyl representation the states are mixed, even if it is less obvious here because the two pure states carry the same sign for \(\omega_{0}t\). But in the Weyl representation, the equation for the pure state of a single electron in its rest frame is:
(46)
Here the \(\Psi\) is a two-column SL(2,\({\mathbb{C}}\)) matrix representing the spinning electron, while \(\Psi^{-1\dagger}\) represents the spinning electron in the SL(2,\({\mathbb{C}}\)) representation of opposite handedness; \(\tau\) is the proper time. The block-diagonal \(4\times 4\) matrix \({\mathbf{D}}(g)\) with the blocks \(\Psi\) and \(\Psi^{-1\dagger}\) on the diagonal is the Weyl representation of a group element \(g\) obtained from an even number of reflections and represents the spinning electron. This equation can never lead to the Dirac equation, because the right-hand side can never be simplified according to:
(47)
In fact, let us divide out \(m_{0}c\) on both sides. Then on the left-hand side the non-zero blocks are off-diagonal, while on the right-hand side they are diagonal. The two sides have different symmetries and they can therefore never be identical. In the traditional representation this kind of observation becomes hidden by the fact that there are no longer vanishing blocks. Due to the fact that the simplification expressed in Eq. 47 is not possible, one is forced to adopt a linear combination of single-electron states to obtain the Dirac equation. This linear combination is (see reference [2]):
(48)
and we will have then:
(49)
This shows that also in the Weyl representation the solutions of the Dirac equation are mixed states.
Mixed states do not have a direct obvious meaning in the pure group theory. E.g. the sum of two rotation matrices is not a rotation matrix, and therefore the sum of two spinors is _a priori_ not a new spinor. Group elements cannot be added, they can only be multiplied. Linear combinations of representations of group elements do not represent new group elements. They belong to the so-called group ring. One can give such a mixed state however a meaning by considering it as defining a statistical ensemble, just as is done in the traditional interpretation of such mixed states in quantum mechanics. We can thus consider the free-space solution as degenerate, and the presence of electromagnetic fields can lift this degeneracy.
We may note that in the simpler formalism of SU(2), the solution of the eigenvalue equation \({\hbar\over{2}}\,[\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\psi=\pm{\hbar \over{2}}\psi\) for a spin aligned with the unit vector \({\mathbf{s}}\) is also a mixed state. The not normalized “spinor” \(\psi\) is the first column of the sum \({\mathbf{S}}={\mathds{1}}+{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\) of the two group elements \({\mathds{1}}\) and \({\mathbf{s}}{\boldsymbol{\cdot\sigma}}\). For this sum, we will have \([\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\,{\mathbf{S}}={\mathbf{S}}\). In other words: \(\psi=({\mathds{1}}+{\mathbf{s}}{\boldsymbol{\cdot\sigma}})\,\psi_{1}\), where \(\psi_{1}\) is the first column of \({\mathbf{s}}{\boldsymbol{\cdot\sigma}}\). This leads indeed to \(\psi=[\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\,\psi\). When \({\mathbf{s}}={\mathbf{e}}_{z}\), the mixed nature of \(\psi\) becomes concealed after normalization by the fact that \(\psi_{1}\) and \([\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\,\psi_{1}\) accidentally take the same numerical values. This is also further discussed in reference [2].
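This SU(2) statement is easy to check numerically; the sketch below (an illustration with a randomly chosen unit vector \({\mathbf{s}}\)) verifies that \([\,{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\,]\,{\mathbf{S}}={\mathbf{S}}\) and that the first column of \({\mathbf{S}}={\mathds{1}}+{\mathbf{s}}{\boldsymbol{\cdot\sigma}}\) is indeed an eigenvector with eigenvalue \(+1\):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

rng = np.random.default_rng(1)
s = rng.normal(size=3)
s = s / np.linalg.norm(s)                 # random unit vector s
s_sigma = s[0]*s1 + s[1]*s2 + s[2]*s3     # [s.sigma]

S = I2 + s_sigma                          # sum of the two group elements 1 and [s.sigma]
print(np.allclose(s_sigma @ S, S))        # True: [s.sigma] S = S

psi = S[:, 0]                             # first column of S (the non-normalized "spinor")
print(np.allclose(s_sigma @ psi, psi))    # True: psi is an eigenvector with eigenvalue +1
```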
#### 8.5.3 Problems in taking the Schrödinger limit of the mixed states
Now, the non-relativistic Schrödinger limit is obtained by removing the rest mass from the equation. If we want to get rid of the rest mass in the Weyl representation, we can define the classical wave function:
(50)
After such a substitution, the rest mass will be removed from the calculations. But using the same trick in the traditional representation (see e.g. [6], page 65) will lead to a meaningless statistical ensemble containing states with rest masses \(0\) and \(2m_{0}\). The correct substitution to be used in the traditional representation is thus:
(51)
rather than Eq. 50, because this is the substitution that leads to a meaningful statistical ensemble. This substitution would look absurd if we thought that the state given by Eq. 45 is a pure state. But for the mixed state, this substitution does make sense. This difference between the substitutions needed in Eqs. 50 and 51 just reflects the difference between the choices \(\gamma_{4}\) and \(\gamma_{5}\) for the gamma matrix that has to be associated with \({\partial\over{\partial ct}}\) in the algebra. It is actually much more logical to write the Dirac equation in the traditional representation as:
(52)
thereby considering \(-{\hbar\over{\imath}}\,{\partial\over{\partial ct}}\,\gamma_{t}\) rather than \(-{\hbar\over{\imath}}\,{\partial\over{\partial ct}}\) as the energy operator that generalizes the prescriptions that are valid in the context of the Schrödinger equation. All this is of course a major watershed because it invalidates the whole philosophy of small and large \(2\times 2\) representations in the Dirac equation. However, there are just two SL(2,\({\mathbb{C}}\)) representations in a \(4\times 4\) representation based on gamma matrices, and these two SL(2,\({\mathbb{C}}\)) representations are mathematically of “the same size”. As can be seen from Subsection 2.2, they are defined in a completely symmetrical way. They are just of different handedness. If we refuse to accept this, then the Weyl representation would have two representations that are equally large, while the traditional representation would have a small and a large one. Why should such differences all at once pop up in the application of the mathematics if the physics in the two choices of gamma matrices must be the same?
It is further very important to realize that it is absolutely not necessary to make the substitution of Eqs. 50 or 51 in order to obtain the non-relativistic limit, as we showed above. One can do the calculations fully relativistically. It suffices then to consider afterwards the limit whereby \({\mathbf{v}}\) becomes small and to subtract \(m_{0}c^{2}\) from the final result. But when we do the calculations that way, we will never get a term \(2m_{0}\) into the algebra. It is the division by the term \(2m_{0}\) in the equations that one obtains from the wrong substitution which leads to the illusion that the algebra can describe the Thomas half correctly. This shows that it is not possible to obtain the Thomas half from the Dirac formalism, and that this is true both in the Weyl and in the traditional representation!
Of course it could be argued that we could solve Eq. 43 by taking another solution than Eq. 44, whereby we keep \(\psi_{1}\) and put \(\psi_{2}=0\). We would then obtain a pure state. But this solution will also no longer lead to a term \(2m_{0}\) in the algebra, and it could only be used for an electron at rest.
Finally, the traditional approach suggests that up to a certain order the expression for the Thomas precession in terms of the quantity \({\mathbf{v}}\wedge{\mathbf{E}}\) would be generally valid and not depend on the geometry of the orbit, while the further developments will show that it is in many instances necessary to assume that the motion is uniform and circular.
#### 8.5.4 Methodological remarks
The conclusion that the traditional calculation of the spin-orbit coupling is wrong is very unsettling. The reason why it has not been noticed that this error crept in, is that in the traditional approach the Dirac equation has been guessed. This means that we use it without knowing on what kind of precise assumptions it is built. In the approach described in reference [2], the equation has been derived. This enables discerning a number of issues that in the traditional approach are not even suspected. It is very hard to suspect that the solutions of the Dirac equation have to be mixed states. And it remains difficult to spot that this leads to errors creating the _illusion_ that the spin-orbit coupling would be treated correctly by the Dirac equation, even when one knows the underlying assumptions. All this reveals the limits of the nonchalant advice that one should just “shut up and calculate” because it would “work”. But contrary to the claims, it does not work. As seen here, following the advice in a blindfolded way can lead to errors of appreciation. Spotting such errors and teasing out their consequences can prove a very difficult task.
Of course it is not easy to decide which approach is correct and which one is wrong when we pit two different approaches one against another. It is in this respect always better to give several arguments in support of a conclusion, in order to prove the internal consistency of the logic. Here we are giving four arguments.
(1) The first one is based on the fact that the solutions of the Dirac equation are mixed states. This may be less convincing to the reader because he does not know the results derived in reference [2], but again these results have an internal consistency.
(2) The second one is based on the fact that the approach based on the Weyl representation should yield the same result as the approach based on the traditional representation. This is the argument that should be really clear to all readers.
(3) A third one is based on the fact that the derivation of the Dirac equation ignores instantaneous accelerations. One should indeed not be surprised that the calculation does not take into account the Thomas precession. As mentioned above and in Subsubsection 7.2.1, the Dirac equation is derived by using only boost parameters, like e.g. \((E,c{\mathbf{p}})\) in the case of the free-space equation. It considers only instantaneous boosts, not instantaneous accelerations. The equation makes sure that we get the instantaneous local boost correct in every point \(({\mathbf{r}},t)\). Thomas precession can only occur in a sequence of non-collinear boosts. It is thus the result of an instantaneous acceleration perpendicular to the velocity. An instantaneous boost as used to derive the Dirac equation can thus not account for Thomas precession. This is also discussed in further detail in [2].
(4) One can appreciate the point that the Dirac equation with a four-potential only treats boosts and not Thomas precession also as follows. Defining a general Lorentz transformation requires six independent parameters. The electromagnetic four-potential is defined by four parameters, but they are linked by the Lorentz gauge condition, such that the four-potential contains three independent parameters. These three parameters define the boost part of the Lorentz transformation, while the three remaining independent parameters of the general Lorentz transformation correspond to the rotational part of it. These can thus not be defined by the Dirac equation with a four-potential, because there are just no parameters in the equation that would permit defining them.
A weak point of the Dirac equation is thus that it is covariant with respect to boosts, but not with respect to instantaneous accelerations that are perpendicular to the instantaneous velocity.
### A more physical approach to the spin-orbit coupling
#### 8.6.1 Road map
Thomas precession is by definition an effect in the co-moving frame. It is a mis-appreciation of the correct clock rate of the electron due to the fact that the co-moving frame is rotating. According to Purcell’s calculation of the Thomas precession [10] the Thomas precession corresponds exactly to the difference of the merry-go-round effects in the laboratory frame and in the co-moving frame. The two merry-go-round effects are different due to Lorentz contraction (or time dilatation). Of course effects of the motion on the rest mass must be made in the co-moving frame, as it is in this frame that the electron is at rest. These effects must then be transformed back to the laboratory frame. But whereas Purcell’s calculation gives the correct algebraic result, it does not tell us why we can make the calculation the way he does. Why do we need to consider the difference between two merry-go-round effects?
A similar problem exists for the other part of the spin-orbit effect, the part that occurs (up to a proportionality factor) in Eq. 21 and is not corrected for Thomas precession. In our derivation of Eq. 21 it is mere algebra. As this is not satisfactory, one would like a physical explanation for it. Such a classical derivation for it has been proposed in Subsection 5.1. But also here the argument developed leads to the correct algebraic result, while it is not completely explained what the idea is behind the calculation. We may wonder e.g. why one should consider the magnetic field experienced by the electron in the co-moving frame in the first place, while we want to calculate results in the laboratory frame. Is it not possible to make a calculation in the laboratory frame right ahead? In the calculation only the magnetic part of the electromagnetic field in the co-moving frame intervenes. Due to our previous discussion of the anomalous Zeeman effect, we understand very well the effects of the magnetic field on the electron in its rest frame. In this calculation we only consider the instantaneous boost. This result must of course be back-transformed to the laboratory frame. What remains to calculate, is the effect of the accelerations perpendicular to the instantaneous velocity, and this is the Thomas precession. The Thomas precession due to the magnetic field can to a first approximation be neglected. What remains is thus the effect of the component of the electric field that is perpendicular to the instantaneous velocity, and that will be the Thomas precession.
In the following we will talk a lot about phases corresponding to the merry-go-round effect as opposed to Berry phases. We will however not consider such phases accumulated over a whole loop, but rather the instantaneous rates of change of them, because these correspond to the idea of precession. What we want to explain are the following points:
(1) One can consider two types of precession, a merry-go-round effect in position space \({\mathbb{R}}^{3}\) (which is not curved) and a precession in velocity space. The velocity space is curved and helps actually to visualize the group manifold. The precession on this curved space yields the Berry phase after an integration over a closed loop.
(2) The instantaneous rate of change in phase due to the merry-go-round effect in position space corresponds to the part of the spin-orbit effect without Thomas precession. It is the part that occurs in Eq. 21.
(3) This instantaneous rate of change in phase due to the merry-go-round effect in position space does not correspond to the exact rate of change of the geometrical phase due to parallel transport. The exact calculation of the latter can be made by first considering the merry-go-round effect in position space and then making a correction to it.
(4) The correction we have to carry out to obtain the correct rate of change of geometrical phase is the Thomas precession. According to Purcell’s calculation, it is the difference between the merry-go-round effects in velocity space in the co-moving frame and in the laboratory frame. Perhaps it is worth pointing out that there is no real merry-go-round effect on the group. When we make a closed orbit on the group, we are getting back to a Lorentz transformation that is identical to the one we started from, such that the phase difference can only be a multiple of \(2\pi\). That we have to worry about the Thomas precession is due to the fact that we are dealing with a closed orbit in \({\mathbb{R}}^{3}\) rather than on the group and that we have constructed the Dirac equation by only taking into account the boost part of the Lorentz transformations (see Footnote 11).
In a merry-go-round calculation one basis vector of the co-moving frame remains always aligned with the instantaneous velocity. The problem is thus that such a continuous alignment with the tangent to the orbit in position space does in general not correspond to parallel transport. It is possible to get a feeling for this by an analogy with the motion of a triad of basis vectors along a closed loop on the surface of a sphere. The closed loop could e.g. be a small circle. Let us assume we have defined spherical coordinates \((r,\theta,\phi)\) and local frames with basis vectors \({\mathbf{e}}_{r},{\mathbf{e}}_{\theta},{\mathbf{e}}_{\phi}\) as usual, such that the \(z\)-axis corresponds to \(\theta=0\). In a trip with uniform velocity \(v\) along a small circle defined by \(\theta=\theta_{0}\neq 0\), the geometrical phase will not be given by \(2\pi\) as one could be tempted to conclude from the permanent alignment of \({\mathbf{e}}_{\phi}\) with the instantaneous velocity \({\mathbf{v}}\). This permanent alignment corresponds to the merry-go-round effect, and does not account correctly for the value of the geometrical phase. As shown by the Gauss-Bonnet theorem, the geometrical phase will be given by \(\Omega=2\pi(1-\cos\theta_{0})\), where \(\Omega\) is the solid angle subtended by the small circle. Only when \(\theta_{0}={\pi\over{2}}\) will the procedure of aligning \({\mathbf{e}}_{\phi}\) with \({\mathbf{v}}\) yield the correct result, because then the path is a geodesic. The calculation \(\Omega=2\pi(1-\cos\theta_{0})\) reproduces exactly what happens e.g. in Foucault’s pendulum. This will be further developed in Subsubsection 8.6.2.
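The difference between the merry-go-round prescription and parallel transport can be made concrete numerically. The sketch below (an illustration using a simple first-order project-and-renormalize transport scheme chosen here) parallel-transports \({\mathbf{e}}_{\theta}\) around the small circle \(\theta=\theta_{0}\) on the unit sphere and compares the resulting holonomy angle with the Gauss-Bonnet value \(2\pi(1-\cos\theta_{0})\):

```python
import numpy as np

def holonomy_angle(theta0, n_steps=200000):
    """Parallel-transport e_theta around the circle theta = theta0 on the unit
    sphere (project-and-renormalize scheme) and return the rotation angle of
    the transported vector relative to the local (e_theta, e_phi) frame."""
    def e_R(phi):
        return np.array([np.sin(theta0)*np.cos(phi),
                         np.sin(theta0)*np.sin(phi),
                         np.cos(theta0)])
    v = np.array([np.cos(theta0), 0.0, -np.sin(theta0)])   # e_theta at phi = 0
    for phi in np.linspace(0.0, 2*np.pi, n_steps + 1)[1:]:
        n = e_R(phi)
        v = v - np.dot(v, n) * n          # project onto the new tangent plane
        v /= np.linalg.norm(v)            # keep unit length
    e_theta = np.array([np.cos(theta0), 0.0, -np.sin(theta0)])   # frame at phi = 2*pi
    e_phi   = np.array([0.0, 1.0, 0.0])
    return np.arctan2(np.dot(v, e_phi), np.dot(v, e_theta))

theta0 = np.radians(75.0)                                       # illustrative colatitude
print("numerical holonomy     :", holonomy_angle(theta0) % (2*np.pi))
print("2*pi*(1 - cos(theta0)) :", 2*np.pi*(1 - np.cos(theta0)))  # agree up to discretization error
```

The project-and-renormalize step converges to Levi-Civita parallel transport as the step size shrinks, which is why a large number of steps is used here.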
We may note that the calculation of the global Wigner rotation over a closed path in reference [9] is based on a theorem of hyperbolic geometry that just rephrases the Gauss-Bonnet theorem. This shows very clearly that the angle built up by Thomas precession along a path in velocity space is the Berry phase. As the hyperbolic geometry is the geometry of velocity space, we are getting here a first clue for the existence of two precession effects. One in the hyperbolic velocity space, and one in position space. Instead of a path in a curved space of positions, Thomas precession occurs on a path in a curved space of velocities. This curved space represents actually the parameters that define an element of the Lorentz group. The velocities are the boost parameters and correspond to the boosts, the Berry angles correspond to the rotations.
In summary, we can state that the Dirac equation is wrong because it does not account properly for the parallel transport, such that it gets the Berry phase wrong. The reason for this is that the Dirac equation just calculates the merry-go-round effect in position space. This idea is already present in reference [2], but I was just unable to find the appropriate words to verbalize it. In reference [2], it was also discussed how strange it is to postulate that the wave function is a function. It means that the Berry angle over a given path must be a multiple of \(2\pi\) and it is this feature that leads to quantization. But phase differences that are a multiple of \(2\pi\) occur only on geodesics. Hence, quantum mechanics postulates that the orbits must be geodesics with respect to the spinning motion on the group manifold, and are therefore quantized. General relativity postulates that orbits must be geodesics with respect to displacement motion. This also implies rotations to a certain extent, but it does not consider the rotation due to the spin. The fully correct picture would thus imply that the orbits must be geodesics with respect to the combined effect of rotation and translation.
#### 8.6.2 The Berry phase on the surface of a sphere
The merry-go-round effect on a circle with radius \(r\) in the \(Oxy\) plane can be obtained by considering the infinitesimal angle \(\Delta\theta=\Delta r_{\perp}/r\). In vector form this can be rewritten as \(\Delta\theta\,{\mathbf{e}}_{z}={\mathbf{r}}\wedge\Delta{\mathbf{r}}_{\perp}/r^{2}\). This leads to \({\boldsymbol{\omega}}={d\theta\over{dt}}\,{\mathbf{e}}_{z}\)\(=\)\({1\over{r^{2}}}{\mathbf{r}}\wedge{\mathbf{v}}\). But such a calculation only corresponds to the correct value for the geometrical phase in flat space (which is here the \(Oxy\) plane). It is no longer true if we consider the geometrical phase along a small circle with radius \(r\) on the surface of a sphere with radius \(R\), as it does not account for the curvature of the surface of the sphere.
How the correct calculation of the geometrical phase runs in curved space can be illustrated with this example of a motion along a small circle with radius \(r\) on a sphere with radius \(R\). This case study corresponds exactly to the description of Foucault’s pendulum. Let us take the rotation axis of the Earth as the \(z\)-axis. The locally vertical direction on the surface of the sphere is given by the vector \({\mathbf{e}}_{R}\). The precession of Foucault’s pendulum can be trivially calculated from \(({\boldsymbol{\omega}}{\mathbf{\cdot e}}_{R})\,{\mathbf{e}}_{R}\), which is the component of \({\boldsymbol{\omega}}=\omega{\mathbf{e}}_{z}\) along \({\mathbf{e}}_{R}\). This shows that the calculation in curved space cannot be made by using \({1\over{r^{2}}}{\mathbf{r}}\wedge{\mathbf{v}}\), whereby \({\mathbf{r}}\) would be the position vector of the pendulum with respect to the centre of the small circle.
In the calculation one must take \(d{\mathbf{r}}=dr\,{\mathbf{e}}_{\phi}\) where \(r=R\sin\theta\), and \(dr=r\,d\phi\). The term \(d{\mathbf{r}}/r\) must then be multiplied by \({\mathbf{e}}_{\phi}\wedge{\mathbf{R}}/R={\mathbf{e}}_{\theta}\), rather than \({\mathbf{r}}/r\), because what one wants to calculate is the rotation in the local tangent plane spanned by \({\mathbf{e}}_{\phi},{\mathbf{e}}_{\theta}\). This is a rotation around the local vertical defined by the vector \({\mathbf{e}}_{R}\). This calculation leads to \({\mathbf{e}}_{\theta}\wedge{\mathbf{e}}_{\phi}\,d\phi=d\phi\,{\mathbf{e}}_{R}\). The integral over the circle of this quantity yields \(2\pi\cos\theta\,{\mathbf{e}}_{z}\). The de-phasing is then \(\Delta\phi=2\pi-2\pi\cos\theta\), or \(2\pi(1-\cos\theta)\,{\mathbf{e}}_{z}\) in vector form. This is equal to \(\iint{1\over{R^{2}}}{\mathbf{e}}_{R}{\mathbf{\cdot}}d{\mathbf{S}}\), where \(d{\mathbf{S}}=R^{2}\sin\theta\,d\theta\,d\phi\,{\mathbf{e}}_{R}\), in agreement with the Gauss-Bonnet theorem. The quantity \(\Delta\phi\) is the Berry phase, which can be expressed as \(\oint\,{\mathbf{A\cdot}}d{\mathbf{r}}\), where the vector quantity \({\mathbf{A}}\) is analogous to a vector potential. Putting \({1\over{R^{2}}}{\mathbf{e}}_{R}={\mathbf{\nabla}}\wedge{\mathbf{A}}\) makes it possible to rewrite \(\Delta\phi=\iint\,({\mathbf{\nabla}}\wedge{\mathbf{A}})\,{\mathbf{\cdot}}d{\mathbf{S}}=\oint\,{\mathbf{A}}{\mathbf{\cdot}}d{\mathbf{r}}\). Here \(d{\mathbf{r}}=R\,\sin\theta\,d\phi\,{\mathbf{e}}_{\phi}\), such that one must thus take \({\mathbf{A}}={1-\cos\theta\over{R\,\sin\theta}}{\mathbf{e}}_{\phi}\) to obtain the correct result. Using the general expression for \({\mathbf{\nabla}}\wedge{\mathbf{A}}\) in spherical coordinates one can verify that this value of \({\mathbf{A}}\) leads indeed to \({\mathbf{\nabla}}\wedge{\mathbf{A}}={1\over{R^{2}}}{\mathbf{e}}_{R}\). As \({\mathbf{A}}\) is a vector, one is tempted to interpret it as a vector potential.
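That this choice of \({\mathbf{A}}\) indeed yields \({\mathbf{\nabla}}\wedge{\mathbf{A}}={1\over{R^{2}}}{\mathbf{e}}_{R}\) can be confirmed with sympy; the sketch below writes out the standard curl formula in spherical coordinates by hand rather than relying on any particular vector-calculus package:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# components of A = (1 - cos(theta)) / (r sin(theta)) e_phi
A_r, A_th, A_ph = sp.Integer(0), sp.Integer(0), (1 - sp.cos(th)) / (r*sp.sin(th))

# curl in spherical coordinates (r, theta, phi)
curl_r  = (sp.diff(A_ph*sp.sin(th), th) - sp.diff(A_th, ph)) / (r*sp.sin(th))
curl_th = (sp.diff(A_r, ph)/sp.sin(th) - sp.diff(r*A_ph, r)) / r
curl_ph = (sp.diff(r*A_th, r) - sp.diff(A_r, th)) / r

print(sp.simplify(curl_r), sp.simplify(curl_th), sp.simplify(curl_ph))
# r**(-2) 0 0   i.e.  curl A = (1/r^2) e_r, as claimed for r = R
```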
The vector \({\mathbf{A}}\) shows many similarities with a magnetic vector potential. But of course it is not a magnetic vector potential, because there are no magnetic fields in our problem. The vector \({\mathbf{A}}\) thus does not define a magnetic monopole. This can be seen from Gauss's theorem for an electric field: \(\oiint{q\over{4\pi\varepsilon_{0}R^{2}}}{\mathbf{e}}_{R}{\mathbf{\cdot}}d{ \mathbf{S}}={q\over{\varepsilon_{0}}}=\iiint\,\rho({\mathbf{r}})\,d{\mathbf{r}}\). Here \(\rho({\mathbf{r}})=q\delta({\mathbf{r}})\). This is the sum of two surface integrals \(\iint_{S_{1}}\,{\mathbf{E\cdot}}d{\mathbf{S}}={q\over{4\pi\varepsilon_{0}}}\,2\pi(1- \cos\theta)\) and \(\iint_{S_{2}}\,{\mathbf{E\cdot}}d{\mathbf{S}}={q\over{4\pi\varepsilon_{0}}}\,2\pi(1+ \cos\theta)\) over the two areas \(S_{1}\) and \(S_{2}\) on both sides of the small circle, which together make up the full sphere. The result for one of the two areas is then perfectly analogous to the calculation we have performed above. As there is by definition only an electric field \({\mathbf{E}}={q\over{4\pi\varepsilon_{0}R^{2}}}\,{\mathbf{e}}_{R}\) within this problem, the quantity \({\mathbf{A}}={q\over{4\pi\varepsilon_{0}}}{1\over{R}}\tan{\theta\over{2}}\,{ \mathbf{e}}_{\phi}\), which leads to \({\mathbf{\nabla}}\wedge{\mathbf{A}}={\mathbf{E}}\), is not a magnetic potential. We have always \({\mathbf{\nabla\cdot E}}={\rho({\mathbf{r}})\over{\varepsilon_{0}}}\), with \(\rho({\mathbf{r}})=q\delta({\mathbf{r}})\). For all points different from the origin, the fact that \({\mathbf{E}}={\mathbf{\nabla}}\wedge{\mathbf{A}}\) leads to \({\mathbf{\nabla\cdot}}({\mathbf{\nabla}}\wedge{\mathbf{A}})=0\). It can indeed be checked that \({\mathbf{\nabla\cdot E}}=0\) for \({\mathbf{r}}\neq{\mathbf{0}}\) by using the general expression for the divergence in spherical coordinates. The pitfall is now to conclude from this that \(\oiint({\mathbf{\nabla}}\wedge{\mathbf{A}}){\mathbf{\cdot}}d{\mathbf{S}}= \iiint{\mathbf{\nabla\cdot({\mathbf{\nabla}}\wedge{\mathbf{A}})}}\,d{\mathbf{r }}=0\). In fact, one expects \({\mathbf{\nabla\cdot({\mathbf{\nabla}}\wedge{\mathbf{A}})}}=0\) by analogy with \({\mathbf{a\cdot}}({\mathbf{a}}\wedge{\mathbf{b}})=0\), but this is not true at the origin where \({\mathbf{A}}\) has a singularity. By putting \({\mathbf{\nabla\cdot({\mathbf{\nabla}}\wedge{\mathbf{A}})}}=0\), we overlook the singularity in \({\mathbf{\nabla\cdot E}}\) at the origin. As \(\oiint\,({\mathbf{\nabla}}\wedge{\mathbf{A}}){\mathbf{\cdot}}d{\mathbf{S}}= \iiint{\mathbf{\nabla\cdot E}}\,d{\mathbf{r}}\), we obtain the correct result when we use \({\mathbf{\nabla\cdot E}}={\rho({\mathbf{r}})\over{\varepsilon_{0}}}\). The same reasoning can be applied to the magnetic charge \(q_{m}\,\delta({\mathbf{r}})=cq\,\delta({\mathbf{r}})\) following the analogy developed in Footnote 4.
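As a small numerical cross-check of this decomposition of Gauss's theorem (a sketch only; \(q\) and \(\varepsilon_{0}\) are set to 1 and the polar angle of the small circle is an arbitrary test value), one can verify that the fluxes through the two caps are \({q\over{4\pi\varepsilon_{0}}}2\pi(1\mp\cos\theta)\) and that they add up to \(q/\varepsilon_{0}\):

```python
# A small numerical cross-check (a sketch; q and epsilon_0 are set to 1, and the
# polar angle theta0 of the small circle is an arbitrary test value): the flux of
# E = q/(4 pi eps0 R^2) e_R through the two caps equals
# (q/(4 pi eps0)) 2 pi (1 -/+ cos(theta0)), and the two contributions add up to
# q/eps0, as Gauss's theorem requires.
import numpy as np
from scipy.integrate import quad

q, eps0 = 1.0, 1.0
theta0 = 0.7

# E.dS = q/(4 pi eps0 R^2) * R^2 sin(theta) dtheta dphi: the radius R cancels
integrand = lambda th: q / (4 * np.pi * eps0) * np.sin(th) * 2 * np.pi

flux_cap1, _ = quad(integrand, 0.0, theta0)      # cap on one side of the circle
flux_cap2, _ = quad(integrand, theta0, np.pi)    # cap on the other side

print(flux_cap1, q / (4 * np.pi * eps0) * 2 * np.pi * (1 - np.cos(theta0)))
print(flux_cap2, q / (4 * np.pi * eps0) * 2 * np.pi * (1 + np.cos(theta0)))
print(flux_cap1 + flux_cap2, q / eps0)           # total flux = q/eps0
```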
The mathematical difficulties that arise here are due to the abstraction of representing the charge distribution by \(q\delta({\mathbf{r}})\). The delta “function” can be described as a limit of test charge distributions \(\lim_{\lambda\rightarrow\lambda_{0}}\,\rho_{\lambda}({\mathbf{r}})\). The problem is that for integration and differentiation we are not allowed to assume that the derivative of the limit will be the limit of the derivative, or the integral of the limit will be the limit of the integral: \(\lim_{\lambda\rightarrow\lambda_{0}}\int\,f({\mathbf{r}})\rho_{\lambda}({ \mathbf{r}})d{\mathbf{r}}\)\(\neq\)\(\int\,[\,\lim_{\lambda\rightarrow\lambda_{0}}f({\mathbf{r}})\rho_{\lambda}({ \mathbf{r}})\,]\,d{\mathbf{r}}\), and \(\lim_{\lambda\rightarrow\lambda_{0}}\,D[\,f({\mathbf{r}})\rho_{\lambda}({ \mathbf{r}})\,]\)\(\neq\)\(D\,[\,f({\mathbf{r}})\lim_{\lambda\rightarrow\lambda_{0}}\rho_{\lambda}({ \mathbf{r}})\,]\). Changing the order of the operations is in general not a valid procedure. This error occurs in Dirac’s definition of the “delta function”, and now also here in the differentiation procedure, where it gives rise to the confusion that \({\mathbf{\nabla\cdot({\mathbf{\nabla}}\wedge{\mathbf{A}})}}\) would be zero at \({\mathbf{r}}={\mathbf{0}}\). The limits must be defined in the sense of distributions, and then these difficulties will disappear. In fact, \({\mathbf{\nabla\cdot({\mathbf{\nabla}}\wedge{\mathbf{A}})}}\) is not zero at the origin, even though \(\lim_{{\mathbf{r}}\rightarrow{\mathbf{0}}}{\mathbf{\nabla\cdot({\mathbf{\nabla }}\wedge{\mathbf{A}}({\mathbf{r}}))}}=0\). The correct result for the example of the electric monopole is \({\mathbf{\nabla\cdot({\mathbf{\nabla}}\wedge{\mathbf{A}})}}={\mathbf{\nabla \cdot E}}={q\over{\epsilon_{0}}}\,\delta({\mathbf{r}})\).
In complete analogy, there is another way to deal with the existence of a magnetic monopole than inventing a Dirac string. It suffices to accept that \({\mathbf{\nabla\cdot B}}=0\) is no longer generally valid: \({\mathbf{\nabla\cdot B}}\neq 0\) when there is a monopole, and \({\mathbf{\nabla\cdot B}}=0\) when there is none. This then tallies exactly with the meaning of the equation \({\mathbf{\nabla\cdot B}}=0\), which expresses that there are no magnetic monopoles.
The introduction of the Dirac string is illogical and runs contrary to the principle of Occam’s razor. On symmetry grounds one wonders whether magnetic monopoles could exist. As we have shown in Section 4, the equations already contain for each term its symmetric counterpart. The fields and potentials of the electric and the magnetic monopoles have exactly the same mathematical structure. In the further development one then stumbles on the result \({\mathbf{\nabla\cdot B}}=q_{m}\,\mu_{0}\,\delta({\mathbf{r}})\neq 0\), where it had been \({\mathbf{\nabla\cdot B}}=0\) without the monopoles. This result presents a golden opportunity to enhance the symmetry even further by rendering it completely analogous to \({\mathbf{\nabla\cdot E}}={q\over{\epsilon_{0}}}\,\delta({\mathbf{r}})\neq 0\). But instead of accepting this tried and proved solution one rejects it and postulates that \({\mathbf{\nabla\cdot B}}=0\) should remain universally valid. This choice is mathematically wrong, it breaks the symmetry, and to cover up for the error one is then forced to introduce the stunning concept of a Dirac string. It is altogether an egregious procedure and boils down to inventing “new physics”, just to explain away a trivial mathematical paradox inherent to the use of singular distributions.
#### 8.6.3 Spin-orbit coupling without Thomas precession
The following demonstration that the merry-go-round effect is equivalent to the part of the spin-orbit coupling that occurs in Eq. 21 proceeds in several steps. We will often neglect the relativistic \(\gamma\)-factors and the relativistic effects on the mass, and consider only uniform circular motion. From the treatment it will transpire that for non-circular motion the calculations could be off the mark.
(1) According to Subsubsection 7.2.1, \({\mathbf{\nabla}}\wedge{{\mathbf{p_{0}}}\over{2}}={\mathbf{\nabla}}\wedge q{ \mathbf{A}}=q{\mathbf{B}}\). Furthermore, Eq. 28 shows that, continuing to neglect relativistic modifications of the mass, we have \({\mathbf{\nabla}}\wedge{{\mathbf{p_{0}}}\over{2}}=m_{0}{\boldsymbol{\omega}}\). Combining the two we obtain \({\boldsymbol{\omega}}={q{\mathbf{B}}\over{m_{0}}}\).
(2) For circular motion in cylinder coordinates we have \({\mathbf{\nabla}}\wedge{{\mathbf{p_{0}}}\over{2}}={1\over{2r}}{dL\over{dr}}{ \mathbf{e}}_{z}\).
(3) For uniform circular motion we can put \({d\phi\over{dt}}=\omega\). We have then \(L=m_{0}\omega r^{2}\). From this it follows that \({1\over{2r}}{dL\over{dr}}=m_{0}\omega={L\over{r^{2}}}\).
(4) The calculation of the merry-go-round effect yields \({\boldsymbol{\omega}}={1\over{r^{2}}}{\mathbf{r}}\wedge{\mathbf{v}}\), such that \(m_{0}{\boldsymbol{\omega}}={1\over{r^{2}}}{\mathbf{r}}\wedge{\mathbf{p}}_{0}\).
(5) From (1)-(4) it follows that the merry-go-round effect leads to \(m_{0}{\boldsymbol{\omega}}=q{\mathbf{B}}={\mathbf{\nabla}}\wedge q{\mathbf{A}}\). In deriving the Pauli equation we divide this by \(2m_{0}\), such that the frequency that intervenes in the energy calculations is the Larmor frequency \({q{\mathbf{B}}\over{2m_{0}}}\).
(6) When there is no magnetic field in the laboratory frame, the electric field will yield a magnetic field in the co-moving frame which is given by \({\mathbf{B}}^{\prime}=\gamma{1\over{c}}{\mathbf{v}}\wedge{\mathbf{E}}\). We neglect the factor \(\gamma\) here. The calculation of the merry-go-round effect in the co-moving frame for an electron that has a relative velocity \(d{\mathbf{w}}\) with respect to it does not depend on \(d{\mathbf{w}}\) and is just given by \(m_{0}{\boldsymbol{\omega}}=q{\mathbf{B}}^{\prime}={q\over{c}}{\mathbf{v}} \wedge{\mathbf{E}}\), as derived above. By taking the limit \(d{\mathbf{w}}\to 0\), we then obtain the merry-go-round effect in the moving frame. The corresponding Larmor frequency is \({q\over{2m_{0}c}}{\mathbf{v}}\wedge{\mathbf{E}}\). Transforming this frequency to the lab frame involves a factor \(\gamma\), which we again neglect. This way we have shown that the merry-go-round effect corresponds to the part of the spin-orbit coupling that appears in Eq. 21. Of course this calculation neglects the radial contribution of \(\gamma{\mathbf{E}}\) in the co-moving frame, but as the electron is at rest in the co-moving frame, this electric field will not lead to precession. This shows that the reason we transform to the co-moving frame is that we want to eliminate the effects due to the radial part of \({\mathbf{E}}\) from the calculation. The radial part still plays a rôle in the Thomas precession.
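A brief numerical sanity check of steps (1)-(4) is given below; it is only a sketch with illustrative values and uses the rigid-rotation velocity field \({\mathbf{v}}={\boldsymbol{\omega}}\wedge{\mathbf{r}}\).

```python
# A brief numerical sanity check (illustrative values only) of steps (1)-(4):
# for the rigid-rotation velocity field v = omega e_z ^ r, the curl of p0/2 = m0 v/2
# equals m0*omega e_z, and (1/r^2) r ^ v equals omega e_z.
import numpy as np

m0, omega = 1.0, 0.3
h = 1e-5                                  # finite-difference step

def p0_half(x, y):
    # p0/2 = (m0/2) * omega * (-y, x, 0) for uniform rotation about e_z
    return 0.5 * m0 * omega * np.array([-y, x, 0.0])

def curl_z(x, y):
    # (curl F)_z = dF_y/dx - dF_x/dy, by central differences
    dFy_dx = (p0_half(x + h, y)[1] - p0_half(x - h, y)[1]) / (2 * h)
    dFx_dy = (p0_half(x, y + h)[0] - p0_half(x, y - h)[0]) / (2 * h)
    return dFy_dx - dFx_dy

x, y = 1.3, -0.7                          # arbitrary test point
r = np.array([x, y, 0.0])
v = omega * np.array([-y, x, 0.0])

print(curl_z(x, y), m0 * omega)                   # steps (1)-(2): curl(p0/2) = m0*omega
print(np.cross(r, v)[2] / np.dot(r, r), omega)    # step (4): (1/r^2) r ^ v = omega
```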
#### 8.6.4 Thomas precession as the Berry phase on the velocity space hyperboloid
We will consider motion in the \(Oxy\) plane. We can then drop the \(z\)-coordinate from the velocity four-vector \((\gamma,\)\(\gamma v_{x}/c,\)\(\gamma v_{y}/c,\)\(\gamma v_{z}/c)\). We have then \(\gamma^{2}-(\gamma v_{x}/c)^{2}-(\gamma v_{y}/c)^{2}=1\). This is the equation of a hyperboloid in \({\mathbb{R}}^{3}\). In formal analogy with what has been done for the calculation of the Berry phase on a sphere, we will introduce here hyperbolic coordinates \((R,u,\phi)\) for points \((X,Y,Z)\) in \({\mathbb{R}}^{3}\). The transformation between the Cartesian coordinates \((X,Y,Z)\) and their corresponding hyperbolic coordinates is given by: \(Z=R\cosh u\), \(X=R\sinh u\cos\phi\), \(Y=R\sinh u\sin\phi\). This can be considered as an abstract change of coordinates, whereby the real meaning of \((X,Y,Z)\) is irrelevant. This change of coordinates is not a 1-1-mapping between \({\mathbb{R}}^{3}\) and \({\mathbb{R}}^{3}\) as for spherical coordinates, as it implies that \(Z^{2}-X^{2}-Y^{2}=R^{2}>0\). The coordinate transformation is thus only defined for points inside the cone \(Z^{2}-X^{2}-Y^{2}=0\). For the special choice \((X_{0},Y_{0},Z_{0})=(\gamma v_{x}/c,\gamma v_{y}/c,\gamma)\), we have then: \(\gamma=\cosh u\), \(\gamma v_{x}/c=\sinh u\cos\phi\), \(\gamma v_{y}/c=\sinh u\sin\phi\) such that the velocity space is just the surface \(Z^{2}-X^{2}-Y^{2}=1\) (corresponding to \(R=1\)) in the vector space \({\mathbb{R}}^{3}\) with coordinates \((X,Y,Z)\). We can define \({\mathbf{e}}_{R}\), \({\mathbf{e}}_{u}\) and \({\mathbf{e}}_{\phi}\) as usual. Note that \({\mathbf{e}}_{R}\) is here no longer orthogonal to \({\mathbf{e}}_{u}\) and \({\mathbf{e}}_{\phi}\) in the Euclidean sense as the metric of the velocity space is not Euclidean. The basis vectors are rather mutually orthogonal with respect to the metric \(Z^{2}-X^{2}-Y^{2}\). Calculation of the Jacobian matrix shows that the volume element is given by \(R^{2}\sinh u\,du\,d\phi\,dR\). The oriented surface element on the revolution hyperboloid \(R=1\) is given by: \(d{\mathbf{S}}=\sinh u\,du\,d\phi\,{\mathbf{e}}_{z}\).¹³ The surface surrounded by the closed loop defined by \(\gamma=\cosh u_{0}\) is thus \(2\pi\,(\cosh u_{0}-1)\). From \(v_{y}/v_{x}=\tan\phi\), one can calculate \((1+\tan^{2}\phi)\,d\phi=(v_{x}dv_{y}-v_{y}dv_{x})/v_{x}^{2}\), which leads to \(d\phi={\gamma^{2}\over{c^{2}(\gamma^{2}-1)}}\,({\mathbf{v}}\wedge{\mathbf{a}})\,dt\). As the Thomas precession along the closed loop \(u=u_{0}\) is given by \((\cosh u_{0}-1)\,2\pi\), along a part \(d\phi\) of this closed loop it is therefore given by \(d\vartheta_{T}=(\cosh u_{0}-1)\,d\phi={\gamma^{2}\over{c^{2}(\gamma+1)}}\,({\mathbf{v}}\wedge{\mathbf{a}})\,dt\). This is completely analogous to what we did for the small circle on a sphere. The coordinate lines \(\phi=\phi_{0}\) correspond to accelerations \({\mathbf{a}}\parallel{\mathbf{v}}\). The contributions \(du\) therefore do not contribute to the Thomas precession. The integration of the Thomas precession over a closed loop of any shape will give the total Berry phase according to the Gauss-Bonnet theorem. Not taking into account the contribution \({\mathbf{a}}\parallel{\mathbf{v}}\) is here just analogous to the way we define an integral as the limit of a procedure whereby we use the Simpson rule with ever smaller meshes.¹⁴
[FOOTNOTE:13][ENDFOOTNOTE]
[FOOTNOTE:14][ENDFOOTNOTE]
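The statement that the Thomas precession integrates to the Berry phase \(2\pi(\cosh u_{0}-1)=2\pi(\gamma-1)\) over a closed loop can be illustrated numerically for uniform circular motion. The following sketch (with the time element \(dt\) written explicitly and an arbitrary orbital speed) is not part of the original derivation:

```python
# A numerical illustration (a sketch; the orbital speed is an arbitrary test value
# and the time element dt is written explicitly): integrating the Thomas rate
# (gamma^2/(c^2 (gamma+1))) |v ^ a| over one period of uniform circular motion
# reproduces the Berry phase 2*pi*(cosh(u0) - 1) = 2*pi*(gamma - 1).
import numpy as np

c = 1.0
beta = 0.8                                # v/c of the circular orbit
gamma = 1.0 / np.sqrt(1.0 - beta**2)
R = 1.0                                   # orbit radius
v = beta * c
omega_orb = v / R
T = 2 * np.pi / omega_orb                 # one orbital period

t = np.linspace(0.0, T, 200001)
vx, vy = -v * np.sin(omega_orb * t),  v * np.cos(omega_orb * t)
ax, ay = -v * omega_orb * np.cos(omega_orb * t), -v * omega_orb * np.sin(omega_orb * t)

rate = gamma**2 / (c**2 * (gamma + 1.0)) * np.abs(vx * ay - vy * ax)
total_phase = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))   # trapezoidal rule

print(total_phase, 2 * np.pi * (gamma - 1.0))     # the two values should agree
```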
#### 8.6.5 A critical remark
After all this a critical question remains. In the fully relativistic solution of the hydrogen problem, the spin-orbit coupling terms do not show up in the calculations. Where are they? And why do we get the correct solutions from the minimal substitution if it is incomplete? The answer is that by solving the problem in cylindrical or spherical coordinates, without introducing covariant derivatives¹⁵, we actually introduce the spin-orbit effect (without the correction for Thomas precession) for circular orbits.
[FOOTNOTE:15][ENDFOOTNOTE]
As a matter of fact, the transition between Cartesian coordinates and cylindrical coordinates is given by the entirely geometric transformation:
\[{\partial f\over{\partial r}}{\mathbf{e}}_{r}+{1\over{r}}{\partial f\over{ \partial\phi}}{\mathbf{e}}_{\phi}+{\partial f\over{\partial z}}{\mathbf{e}}_{z }={\partial f\over{\partial x}}{\mathbf{e}}_{x}+{\partial f\over{\partial y}}{ \mathbf{e}}_{y}+{\partial f\over{\partial z}}{\mathbf{e}}_{z},\] (53)
which shows that \(\nabla\equiv{\partial\over{\partial r}}{\mathbf{e}}_{r}+{1\over{r}}{\partial \over{\partial\phi}}{\mathbf{e}}_{\phi}+{\partial\over{\partial z}}{\mathbf{e} }_{z}\). But when we perform dynamical calculations and calculate temporal derivatives, we must take into account the fact that \(({\mathbf{e}}_{r}(t),{\mathbf{e}}_{\phi}(t),{\mathbf{e}}_{z}(t))\) are functions of time. In the dynamical calculations we must correct for this temporal dependence by introducing covariant derivatives. When we use \(({\mathbf{e}}_{r}(t),{\mathbf{e}}_{\phi}(t),{\mathbf{e}}_{z}(t))\) in dynamical calculations of the position \({\mathbf{r}}(t)\) of a particle, \(({\mathbf{e}}_{r}(t),{\mathbf{e}}_{\phi}(t),{\mathbf{e}}_{z}(t))\) will be a precessing frame. Let us now consider a co-moving copy of this frame at the position \(P({\mathbf{r}}(t))\) of the particle rather than at the origin \(O\). By making the calculations in cylindrical coordinates without introducing the covariant derivatives, this co-moving frame will for uniform circular motion carry out exactly the merry-go-round motion in position space. We have seen that this corresponds to the part of the spin-orbit coupling that occurs in Eq. 21. This is fortunate, as the Dirac equation based on the minimal substitution does not correct for the merry-go-round effect. The Dirac equation Lorentz transforms \(({\partial\over{\partial\tau}},0)\) to \(({d\over{dt}},{\mathbf{\nabla}})\) in \(({\mathbf{r}},t)\) by using the local instantaneous boost with parameter \({\mathbf{v}}({\mathbf{r}},t)\) and without considering the Thomas precession part of the Lorentz transformation. In these instantaneous boosts, the triads remain just all the time parallel to the \(({\mathbf{e}}_{x},{\mathbf{e}}_{y},{\mathbf{e}}_{z})\), such that any kind of spin-orbit effect remains ignored. By introducing cylindrical coordinates without introducing covariant derivatives, we are thus making up for this shortcoming of the Dirac equation in the case of uniform circular motion, as we are including the merry-go-round effect in the laboratory frame. But as we already pointed out, this fails to account for the Thomas precession and it is thus not completely exact, even for uniform circular motion.
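Eq. (53) is a purely geometric identity and can be checked directly. A minimal sketch follows (the test function and the evaluation point are arbitrary choices):

```python
# A quick check (a sketch; the test function and evaluation point are arbitrary)
# of the purely geometric identity Eq. (53): the cylindrical-coordinate form of
# the gradient agrees with the Cartesian one.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2)

f = x**2 * y + z * sp.sin(x * y)          # arbitrary test function

# e_r, e_phi expressed in the Cartesian basis
e_r   = sp.Matrix([x / r, y / r, 0])
e_phi = sp.Matrix([-y / r, x / r, 0])
e_z   = sp.Matrix([0, 0, 1])

# df/dr and df/dphi at fixed z, via the chain rule x = r cos(phi), y = r sin(phi)
df_dr   = (sp.diff(f, x) * x + sp.diff(f, y) * y) / r
df_dphi = -sp.diff(f, x) * y + sp.diff(f, y) * x

lhs = df_dr * e_r + (df_dphi / r) * e_phi + sp.diff(f, z) * e_z   # left side of Eq. (53)
rhs = sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])    # Cartesian gradient

pt = {x: 1.2, y: -0.5, z: 0.8}
print((lhs - rhs).subs(pt).evalf())       # -> numerically the zero vector
```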
We thus see that the solution of the problem of the hydrogen atom is based on a completely different approach than the one that tries to calculate the various contributions to the spin-orbit coupling explicitly, as attempted in our approach. By not introducing the covariant derivatives we take a short cut to a part of the calculation. We are making an error by using the minimal substitution in the Dirac equation, with the result that we fail to take into account the spin-orbit effect (without the correction for the Thomas precession). But by making a second error when we forget to use covariant derivatives we finally get a result that includes all effects, except the Thomas precession. We may finally note that aligning the reference frame with \({\mathbf{v}}\) will not be correctly accounted for by introducing symmetry-adapted coordinates when the orbit is no longer circular. There will then be a further error in the calculation of the spin-orbit coupling. The problem of the spin-orbit coupling could perhaps be addressed more conveniently by a completely different approach whereby one tries to write \({d\over{d\tau}}\) in the presence of a potential in a moving frame by introducing Christoffel symbols and following the approach Einstein used to write dynamical equations of motion in general relativity.
### Can one correct the Dirac equation for the errors?
Of course one could ask how one should modify the Dirac equation to take into account the Thomas precession correctly. But treating the Thomas correction with exacting rigor is not a simple matter. The mass no longer changes only by translation (i.e. displacement) but also by rotation (i.e. spin). Let us explain this in more detail. The expression given in Eq. 30 is correct. From \(q{\mathbf{E}}={d{\mathbf{p}}\over{dt}}=m{\mathbf{a}}+{\mathbf{v}}{dm\over{dt}}\), it follows that \({\mathbf{v}}\wedge q{\mathbf{E}}=m{\mathbf{v}}\wedge{\mathbf{a}}\). However, due to the Thomas precession and the presence of the potential, \(m\) is no longer given by \(m=\gamma m_{0}\). Let us assume that \(m=\gamma m_{0}+(\gamma qV\pm{\hbar\omega_{T}\over{2}})/c^{2}\). As \(m>0\) no longer contains the sign of \(\omega\), the sign \(\pm\) must be taken as negative if the Thomas precession takes place in the same sense as the spin, and as positive otherwise. Hence we have:
\[\omega_{T}={\gamma^{2}\over{\gamma+1}}\,\cdot\,{{\mathbf{v}}\wedge q{\mathbf{E }}\over{\gamma m_{0}c^{2}+\gamma qV\pm{\hbar\omega_{T}\over{2}}}}\] (54)
We would like to solve this equation for \(\omega_{T}\). To do so, we must first eliminate all reference to \({\mathbf{v}}\) and \(v\) from the equation. As explained in Footnote 10, \(v\) cannot be calculated from a conservation law between kinetic and potential energy as in classical mechanics, because the electron might have emitted radiation. However, quantized emission of radiation can be taken into account in the value of \({\mathbf{L}}\), if we admit that the emission of radiation conserves total angular momentum. One can thus use \({\mathbf{v}}\wedge q{\mathbf{E}}={1\over{m}}{1\over{r}}{\partial U\over{ \partial r}}{\mathbf{L}}\), where \(mc^{2}=\gamma m_{0}c^{2}+\gamma qV\pm{\hbar\omega_{T}\over{2}}\). When \({\mathbf{r}}\perp{\mathbf{v}}\) we can also use \({\mathbf{L}}={\mathbf{r}}\wedge m{\mathbf{v}}\) to put \(v=Lc^{2}/(r(\gamma m_{0}c^{2}+\gamma qV\pm{\hbar\omega_{T}\over{2}}))\) and solve this as an equation for \(v\) in terms of \(\omega_{T}\) and \(r\). This expression for \(v\) must then be used consistently to replace all occurrences of \(v\) in the equation that results from Eq. 54 after the replacement \({\mathbf{v}}\wedge q{\mathbf{E}}={1\over{m}}{1\over{r}}{\partial U\over{ \partial r}}{\mathbf{L}}\). This way we obtain a complicated equation for \(\omega_{T}\), with \(r\) as a parameter. If the degree of this equation is larger than \(4\), it will only be possible to solve it by numerical methods.
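A possible numerical strategy for this self-consistent problem is a simple fixed-point iteration. The sketch below is only an illustration: the reduction to scalar magnitudes, the choice of the minus-sign branch, and the hydrogen-like input values for \(r\), \(L\), the field and the potential are assumptions made for demonstration purposes and are not prescribed by the text.

```python
# A numerical sketch of the self-consistent determination of omega_T. The reduction
# to scalar magnitudes, the choice of the "-" sign branch, and the hydrogen-like
# input values (r, L, the Coulomb field and potential) are assumptions made purely
# for illustration; they are not prescribed by the text.
import numpy as np

hbar = 1.054571817e-34     # J s
m0   = 9.1093837015e-31    # electron mass, kg
c    = 2.99792458e8        # m/s
e    = 1.602176634e-19     # C
k_e  = 8.9875517923e9      # 1/(4 pi eps0), N m^2 C^-2

r = 5.29e-11               # orbit radius (Bohr-like, illustrative)
L = hbar                   # orbital angular momentum (illustrative)
E_field = k_e * e / r**2   # radial electric field magnitude at r
qV = -k_e * e**2 / r       # potential energy of the electron at r (illustrative)

omega_T = 0.0              # initial guess
for _ in range(200):       # outer fixed-point iteration for omega_T
    v = L / (m0 * r)       # nonrelativistic starting point for the inner loop
    for _ in range(100):   # inner loop: v, gamma and m c^2 must be mutually consistent
        gamma = 1.0 / np.sqrt(1.0 - (v / c)**2)
        mc2 = gamma * m0 * c**2 + gamma * qV - hbar * omega_T / 2.0
        v = L * c**2 / (r * mc2)
    omega_T = gamma**2 / (gamma + 1.0) * (e * v * E_field) / mc2

print(omega_T)                                    # rad/s
print(0.5 * (e * v * E_field) / (m0 * c**2))      # gamma -> 1, V -> 0 limit of Eq. (54)
```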
It is not obvious that it would be a correct procedure to add this term to the Dirac equation, because after squaring it will lead to new terms. The other contributions to the spin-orbit coupling only enter the scene after squaring. This raises the question whether \(m=\gamma m_{0}+(\gamma qV\pm{\hbar\omega_{T}\over{2}})/c^{2}\) is actually the correct _ansatz_. Should we perhaps not just use \(E=mc^{2}\), where \(E\) is the total energy, to calculate the mass? At first sight, the equation \(m=\gamma m_{0}+(\gamma qV\pm{\hbar\omega_{T}\over{2}})/c^{2}\) seems to account for all corrections to the rest mass in the co-moving frame after a subsequent transformation to the laboratory frame, provided we introduce spherical or cylindrical coordinates in the solution of the Dirac equation obtained by a minimal substitution. When we rather solve the equation in Cartesian coordinates, we must use the generalized substitution in order to get the spin-orbit coupling correct.
### Further remarks on the generalized substitution
The purely magnetic part of Eq. 34 is:
\[\gamma cq{\mathbf{A}}{\boldsymbol{\cdot\sigma}}+\gamma q\,({\mathbf{v\cdot A}} )\,{\mathds{1}}+\imath\gamma q({\mathbf{v}}\wedge{\mathbf{A}}){\boldsymbol{ \cdot\sigma}}.\] (55)
When we use Eq. 34 in its full generality to make the correct substitution in the Dirac equation, it will lead to a staggering amount of algebra after squaring the Dirac equation, even when the electromagnetic fields are not varying with time, because the quantities \({\mathbf{v}}({\mathbf{r}},t)\) and \(\gamma({\mathbf{r}},t)\) will in general still depend on space and time. There will in any case also be a term with \({\mathbf{a}}{\boldsymbol{\cdot\sigma}}\) due to the combination \([\,{\partial\over{\partial t}}{\mathds{1}}\,]\,[\,{\mathbf{v}}{\boldsymbol{ \cdot\sigma}}\,]\) in the SL(2,\({\mathbb{C}}\)) matrices. We have therefore just discussed the anomalous Zeeman effect and the spin-orbit coupling in the non-relativistic limit. Some further aspects of Eq. 55 will be discussed in the Appendix. We will give there e.g. an extremely simple derivation of the term \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\) that occurs in the literature. This derivation is exact, in contrast with wishy-washy derivations based on a treatment of a current loop.
## 9 Conclusion
The traditional interpretation of the anomalous \(g\)-factor in the Dirac theory might be physically attractive as it corresponds to our macroscopic intuition, but it does not agree with the true meaning of the algebra and it violates the Lorentz symmetry. We have shown this by developing an alternative approach to the physics of the anomalous \(g\)-factor by just respecting the correct geometrical interpretation of the algebra, a feat that the traditional approach is not able to accomplish as illustrated by the symmetry violation mentioned. We have proposed an interpretation that respects the Lorentz symmetry. We have thereby stuck to the working philosophy that the algebra should remain strictly the same such that only the geometrical interpretation of the algebra can be changed (in such a way that it agrees with the _given_ geometrical meaning of the algebra) and agreement with experiment is automatically preserved.
However, this working philosophy fails when we try to apply the same methods to the algebra of the spin-orbit coupling. In searching for an explanation for this failure we discover that it is not our approach but the traditional approach that contains a number of flaws. A similar analysis of Dirac’s theory of the magnetic monopole also raises some troubling issues. The origin of these problems can be traced back to a craze for abstraction whereby any feeling for the original geometrical meaning of the algebra used in the group representation theory is lost.
The whole study presented in this paper just ensues from the natural wish to make sense of Eq. 16/Eq. 21 which was obtained from a few lines of algebra based on group representation theory. This group representation theory provides also all the necessary tools to solve the many problems encountered along this search for better insight. In conclusion we think that this work shows what a powerful tool group theory can be in the quest of trying to make sense of quantum mechanics.
_Acknowledgements._ I wish to thank Prof. Dr. J.-E. Wegrowe for fruitful discussions.
## 10 Appendix. Additional Calculations
### Criticism of the calculation of the energy \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\) of a magnetic moment \({\boldsymbol{\mu}}\) based on a current loop
Many textbooks propose that the potential energy of a current loop within a magnetic field would be given by \(U_{dipole}=-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\). This is really problematic for several reasons:
(1) The force of a magnetic field \({\mathbf{B}}\) on a moving charge \(q\) with velocity \({\mathbf{v}}\) is \({\mathbf{F}}=q({\mathbf{v}}\wedge{\mathbf{B}})\). As \({\mathbf{F\cdot}}d{\boldsymbol{\ell}}={\mathbf{F\cdot}}{\mathbf{v}}dt=0\), a magnetic field cannot do work on a moving charge. It is therefore puzzling how we could define a potential energy \(U_{dipole}\) with respect to a magnetic field.
(2) Some textbooks argue that the magnetic field exerts a torque on the loop and on the dipole moment \({\boldsymbol{\mu}}\) of the loop. But the torque is calculated in a special pair of points of the loop without discussing what happens in all the other pairs of points. The special points correspond to the maximal value of the torque. By analyzing what happens in other pairs of points, one can find a pair of points where the torque is zero. This is the minimal possible value for the torque. For all other intermediate values of the torque, one can find a corresponding pair of points. That one has to select the pair of points with the maximal torque to obtain the correct value for the “potential energy” of the current loop with dipole moment \({\boldsymbol{\mu}}\) makes the calculation look hand-waving. Moreover, a torque is not a force such that it is not clear what its relation with a potential energy ought to be.
(3) In both cases where one applies these arguments to a single electron in circular orbit around the nucleus of an atom, one has to assume that the charge \(q\) of the electron is smeared out over the whole loop. The very definition of \({\boldsymbol{\mu}}\) used in all the atomic calculations is built on this idea, which is certainly not correct. Such a single electron is not a true current loop and not a true dipole. Moreover, only one force is acting on it, not a torque. One thus needs at least two electrons to define a dipole moment and a torque as for a macroscopic current loop. The correct definitions must thus be based on the analysis of a single electron. After that, one can put several electrons on the orbit defined for a single electron in order to recover the macroscopic quantities in terms of dipole moments and torques.
### Magnetic moment of a current carried by a single electron
We will try here to remove these weird conceptual problems by a better description of the problem. First we address the recurring issue that the charge of the electron is not smeared out over a loop. In an atom, the charge density will not be uniform but singular, \(\rho(\ell)=q\delta(\ell-\ell_{0})\), where \(\ell_{0}\) is the position of the charge, and \(\ell\) denotes the curvilinear length along the orbit. This leads to a singular current loop, which we can use to describe the real situation of the moving charge. The singular loop will have a singular magnetic moment. Relativistically, we will have \(I=\gamma I_{0}\).
\[{\boldsymbol{\mu}}=\oint\gamma I_{0}{1\over{2}}\,{\mathbf{r}}\wedge d{\mathbf{ r}}=\oint I_{0}{1\over{2}}\,{\mathbf{r}}\wedge\gamma{\mathbf{v}}\,dt=\oint{1 \over{m_{0}}}{1\over{2}}\,{\mathbf{r}}\wedge{\mathbf{p}}\,dq\,=\,{q\over{2m_{0 }}}{\mathbf{r}}_{0}\wedge{\mathbf{p}}_{0}={q\over{2m_{0}}}{\mathbf{L}}_{0}.\] (56)
We have used here \(I_{0}\,dt=dq\). We can consider \({\mathbf{r}}_{0}\) as the position vector with respect to the centre of the circular orbit that defines the loop that can be associated with it. In an atom, it would be the position vector of the electron with respect to the nucleus. While \(I\) is singular, and the charge distribution \(dq=\rho(\ell)d\ell=q\delta(\ell-\ell_{0})d\ell\) is also singular, the integral \(\oint dq=\oint q\delta(\ell-\ell_{0})d\ell=q\) is perfectly finite. We then obtain \({\boldsymbol{\mu}}=-{q\over{2m_{0}}}{\mathbf{L}}_{0}\). We can replace \({\mathbf{L}}_{0}\) by \({\mathbf{L}}\) as \({\mathbf{L}}\) is a constant of motion. We may note that it is also possible to define \({\boldsymbol{\mu}}\) classically such that it does not account for \(\gamma\) in \({\mathbf{L}}\). One must then write \(\gamma{\boldsymbol{\mu}}\) instead of \({\boldsymbol{\mu}}\). The quantity \({\boldsymbol{\mu}}\) is not a dipole moment, because there is only one moving charge. It is a monopole moment.
This calculation of \({\boldsymbol{\mu}}\) is exact, while from most presentations one gets the impression that it would be a back-of-the-envelope calculation that is only a rough estimate. It may be noted that we do not need the loop. All we have to do is to integrate over a small segment \(d{\mathbf{r}}\). In fact, in the reasoning followed above, the singular magnetic moment was only defined for a line segment \(d{\mathbf{r}}\) and we could choose the loop at will. The rest of the loop that one may add can be chosen arbitrarily as it will give a zero contribution to the contour integral. And this is true at any moment. But taking the orbit for the contour integral has the advantage that we never have to change the arbitrary choice and that the definition will be valid over the whole orbit. It will then be a definition that suits the description of a stationary situation. The further manipulations introduce \({\mathbf{L}}\) which is a constant. And in the end we integrate \(dq\). In conclusion, with the necessary provisos, the current density of a single moving charged particle is singular and can always be considered as giving rise to a magnetic moment \({\boldsymbol{\mu}}\) which is a magnetic monopole moment (without hyphen).
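The exactness of Eq. (56) can also be confirmed numerically in the non-relativistic limit; the following is a sketch with illustrative particle and orbit parameters only.

```python
# A direct numerical check of Eq. (56) in the nonrelativistic limit (gamma -> 1);
# the particle parameters and orbit are illustrative values only: the contour
# integral I0 (1/2) \oint r ^ dr equals (q/(2 m0)) L with L = m0 v r0.
import numpy as np

q, m0 = 1.602e-19, 9.109e-31       # electron-like charge and mass (illustrative)
r0, v = 1.0e-10, 2.0e6             # orbit radius and speed (illustrative)

T = 2 * np.pi * r0 / v             # orbital period
I0 = q / T                         # current carried by the single charge

phi = np.linspace(0.0, 2 * np.pi, 100001)
x, y = r0 * np.cos(phi), r0 * np.sin(phi)
dx, dy = np.diff(x), np.diff(y)
xm, ym = 0.5 * (x[:-1] + x[1:]), 0.5 * (y[:-1] + y[1:])

# z-component of (1/2) \oint r ^ dr (equals the enclosed area pi*r0^2)
half_r_cross_dr = 0.5 * np.sum(xm * dy - ym * dx)

mu_loop = I0 * half_r_cross_dr     # magnetic moment from the singular current loop
mu_L    = q / (2 * m0) * (m0 * v * r0)

print(mu_loop, mu_L)               # the two values should agree
```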
As explained by Griffiths and Hnizdo [19], in Gilbert’s description a magnetic dipole moment consists just of two monopole moments. When we consider the current loop from the viewpoint of a moving Lorentz frame, the symmetry between the two monopole moments becomes broken, which leads to a “hidden momentum”. The conclusions reached in our approach are completely in line with these ideas and show that a single electron in circular motion corresponds to a magnetic monopole moment.
We still have to address issue (1). The solution of this conundrum is that \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\) does not represent the potential energy, but the kinetic energy, as already pointed out in the main text. The following subsection will show how we get it into the formalism by applying the correct substitution for the coupling of the charge to the free-space Dirac equation.
### Further discussion of Eq. 55 and the origin of the term \(-{\boldsymbol{\mu\cdot}}{\mathbf{B}}\)
We consider the terms in Eq. 55 with the opposite sign, such that the signs will be those that will appear in the equation after making the substitution \(E\to E-qV,{\mathbf{p}}\rightarrow{\mathbf{p}}-q{\mathbf{A}}\). The term \(-\gamma q{\mathbf{A}}{\boldsymbol{\cdot\sigma}}\) has already been discussed in terms of a velocity field with a vorticity. As it is a vector, it cannot contribute to the potential energy. However, the vorticity of this term can affect the energy. The term \(-\gamma q\,({\mathbf{v\cdot A}})\,{\mathds{1}}\) can be rewritten as \(-{\mathbf{j\cdot A}}\), such that it can be understood as the “potential energy” of a current, while in reality it describes its kinetic energy, as verified on a simple example in the main text. By introducing \({\mathbf{A}}=-{1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}\), the term \(-\gamma q\,({\mathbf{v\cdot A}})\,{\mathds{1}}\) can also be written as \(-{\mathbf{L\cdot}}{q{\mathbf{B}}\over{2m_{0}}}{\mathds{1}}\), where \({\mathbf{L}}={\mathbf{r}}\wedge\gamma m_{0}{\mathbf{v}}\). This angular momentum \({\mathbf{L}}\) is defined with respect to an arbitrary origin, as \({\mathbf{r}}\) is defined with respect to an arbitrary origin. The term \(-{\mathbf{L\cdot}}{q{\mathbf{B}}\over{2m_{0}}}\) has the dimension of an energy. We can write it in terms of the “potential energy” term considered above:
\[-U_{dipole}=-{q\over{2m_{0}}}{\mathbf{L}}{\mathbf{\cdot B}}=\gamma{\boldsymbol {\mu}}{\mathbf{\cdot B}}=-{\mathbf{L}}{\boldsymbol{\cdot\omega}}_{L}.\] (57)
The term \(\gamma\) associated with \({\boldsymbol{\mu}}{\mathbf{\cdot B}}\) is motivated by the fact that \({\boldsymbol{\mu}}\) has been defined with \({\mathbf{L}}=m_{0}{\mathbf{r}}\wedge{\mathbf{v}}\). The minimal substitution will lead to \(E\to E+U_{dipole}=E-\gamma{\boldsymbol{\mu}}{\mathbf{\cdot B}}\). This quantity \(U_{dipole}\) is determined up to an arbitrary constant, just like a Coulomb potential is determined up to an arbitrary constant. The arbitrary constant is related to the fact that we can define \({\mathbf{r}}\) in \({\mathbf{A}}=-{1\over{2}}{\mathbf{r}}\wedge{\mathbf{B}}\) with respect to an arbitrarily chosen origin. The constant can be fixed by choosing an origin. When we choose an origin to calculate the Coulomb potential, this will simultaneously determine an origin for the potential \(-{\boldsymbol{\mu}}{\mathbf{\cdot B}}\). If there is no Coulomb potential, then we must find a different criterion to define the origin. The term \({q\over{2m_{0}}}{\mathbf{L}}\) can be rewritten as \({{\mathbf{L}}\over{2m_{0}c}}q_{m}\). Here \({{\mathbf{L}}\over{2m_{0}c}}\) contains \({\mathbf{r}}\) as a position vector with respect to the arbitrary origin, such that \({\boldsymbol{\mu}}\) is the monopole moment of the magnetic charge with respect to this origin. Magnetic moments can be defined with respect to arbitrary points, just like angular momentum is in principle defined with respect to an arbitrary origin. This magnetic moment is just the product of the magnetic charge \(q{\mathbf{v}}\) with a position vector \({\mathbf{r}}\). Just as only differences of potential energy make physical sense, only differences of magnetic moments make physical sense. These differences will occur from the moment we have two magnetic charges. It is thus when we have two moving charges at points \({\mathbf{r}}_{1}\) and \({\mathbf{r}}_{2}\) that the arbitrariness disappears, because \({\mathbf{r}}_{2}-{\mathbf{r}}_{1}\) then no longer depends on the choice of the origin, such that we can then really define a magnetic dipole moment. It is because it consists of many magnetic moments that a macroscopic current loop can be identified with a magnetic dipole moment. But the singular magnetic moment of a single moving charge is not a magnetic dipole moment. One could call it an arbitrary magnetic monopole moment (which is not the same as a magnetic-monopole moment). One can split the magnetic dipole moment into two such magnetic monopole moments, just as we can split a dipole into two monopoles.
Finally, \(-\imath\gamma q({\mathbf{v}}\wedge{\mathbf{A}}){\boldsymbol{\cdot\sigma}}\)\(=\)\(\imath\gamma{q\over{2}}(({\mathbf{r\cdot v}}){\mathbf{B}}-({\mathbf{B\cdot v}} ){\mathbf{r}}){\boldsymbol{\cdot\sigma}}\). This term is also determined up to an arbitrary constant. While it has here the dimension of an energy, its pseudo-vector character precludes using it as a potential energy. For a circular motion of the charge due to the Lorentz force exerted by the magnetic field, this term reduces to zero if we choose the origin at the centre of the circle. In fact, \({\mathbf{r}}\), \({\mathbf{v}}\), \({\mathbf{B}}\) are then all mutually orthogonal. Once again, the case \({\mathbf{v}}\not\parallel{\mathbf{A}}\) can only have meaning in a context of forced motion. In the non-relativistic limit, where we can neglect \(\gamma\), the force term will lead to four terms: two pseudo-scalar terms and two vector terms. One of the vector terms will be the Lorentz force \(q({\mathbf{v}}\wedge({\mathbf{\nabla}}\wedge{\mathbf{A}})){\boldsymbol{\cdot \sigma}}\). This will lead to the torque on a non-aligned current loop considered by many authors, from the moment we consider two or more charges rather than one in the current loop. The other vector term will be \(q(({\mathbf{\nabla}}\wedge{\mathbf{v}})\wedge{\mathbf{A}}){\boldsymbol{\cdot \sigma}}\). The two pseudo-scalar terms can be written together as \(-\imath q{\mathbf{\nabla\cdot}}({\mathbf{v}}\wedge{\mathbf{A}})\,{\mathds{1}}\)\(=\)\(\imath q\,({\mathbf{v\cdot B}})\,{\mathds{1}}\)\(-\imath q{\mathbf{A\cdot}}({\mathbf{\nabla}}\wedge{\mathbf{v}})\,{\mathds{1}}\).
We can analyze this non-aligned situation again in terms of two mutually orthogonal fields \({\mathbf{B}}_{1}\) and \({\mathbf{B}}_{2}\). The motion of a single charged particle in the field \({\mathbf{B}}\) will then just be a circular motion in the field \({\mathbf{B}}={\mathbf{B}}_{1}+{\mathbf{B}}_{2}\). It will take place in the plane orthogonal to \({\mathbf{B}}\) with velocity \({\mathbf{v}}\). We can decompose \({\mathbf{A}}={\mathbf{A}}_{1}+{\mathbf{A}}_{2}\). The global term \(-\imath\gamma q({\mathbf{v}}\wedge{\mathbf{A}}){\boldsymbol{\cdot\sigma}}\) will be zero, as \({\mathbf{v}}\parallel{\mathbf{A}}\). However, the two terms \(-\imath\gamma q({\mathbf{v}}\wedge{\mathbf{A}}_{j}){\boldsymbol{\cdot\sigma}}\) will both be different from zero, with their sum adding up to zero. The terms \(\gamma q({\mathbf{v}}\wedge{\mathbf{A}}_{1})\) and \(\gamma q\,{\mathbf{v\cdot A}}_{1}\) have norms \(\gamma qvA_{1}\sin\chi\) and \(\gamma qvA_{1}\cos\chi\), where \(\chi\) is the angle between \({\mathbf{v}}\) and \({\mathbf{A}}_{1}\). The term \(\gamma q({\mathbf{v}}\wedge{\mathbf{A}}_{1})\) thus accounts for that part of the energy that can be associated with \({\mathbf{v}}\), other than the kinetic energy \(\gamma q\,{\mathbf{v\cdot A}}_{1}\), which has to be attributed to the motion in the field \({\mathbf{B}}_{1}\).
To conclude, we can state that in the absence of an electric field we obtain, in the non-relativistic limit, the required substitution:
\[E\to E-{\boldsymbol{\mu}}{\mathbf{\cdot B}},\quad c{\mathbf{p}} \to c{\mathbf{p}}-cq{\mathbf{A}}.\] (58)
This calculation is exact and the picture is clear. We do not understand why the electron spin must be aligned (such that the energies are quantized and the imaginary terms become zero), because we do not understand the electron spin. The real vector term \(cq{\mathbf{A}}\) in \(c{\mathbf{p}}\to c{\mathbf{p}}-cq{\mathbf{A}}\) corresponds in principle to the minimal substitution for a spin-less particle but its vorticity yields the frequency of a spinning motion that after multiplication by \(\hbar/2\) can represent energy pumped into the spin. Finally, we can see from all this that in the non-relativistic limit the Zeeman effect is given by: \((L+1+2S){qB\over{2m_{0}}}\), where \(L\geq 0\).
## References
* (1) E. Cartan, in _The Theory of Spinors_, Dover, New York, 1981.
* (2) G. Coddens, in _From Spinors to Quantum Mechanics_, Imperial College Press, (2015).
* (3) L.C. Biedenharn and J.D. Louck, in _Angular Momentum in Quantum Mechanics, Theory and Application_, Encyclopedia of Mathematics and its Applications, Vol. 8, Addison-Wesley, Reading, MA, (1981), and Cambridge University Press, (1985).
* (4) H.F. Jones, in _Groups, Representations and Physics_, Adam Hilger, Bristol (1990).
* (5) L. Schwartz, in _Théorie des Distributions_, (Hermann, Paris, 1973).
* (6) C. Itzykson and J.-B. Zuber, in _Quantum Field Theory_, McGraw-Hill, New York (1980).
* (7) L.S. Brown, and G. Gabrielse, Rev. Mod. Phys. **58**, 233 (1986).
* (8) In the experiments of Couder _et al._ (Y. Couder, S. Protière, A. Fort, and A. Boudaoud, Nature **437**(7056), 208-208 (2005)), one observes an _association_ of a particle with a wave, which gives a more intuitive reading for the possibility of having particles and waves at the same time. This is reminiscent of de Broglie’s theory of the pilot wave, but it is still different. Following these ideas, an electron is not both a particle and a wave. The physical situation is just that of a particle associated with a wave. In [2] we describe a natural association of this type without having to postulate that the wave would guide the particle.
* (9) J.A. Rhodes, and M.D. Semon, Am. J. Phys. **72**, 945 (2004).
* (10) E.M. Purcell, unpublished note (January 1975).
* (11) G. Fisher, Am. J. Phys. **39**, 1533 (1971); V. Hnizdo, Am. J. Phys. **80**, 645 (2012).
* (12) R.P. Feynman, R. Leighton, and M. Sands, in _The Feynman Lectures on Physics, Vol. 2_, Addison-Wesley, Reading, MA (1964).
* (13) E.J. Konopinski, Am. J. Phys. **45**(5), 499-502.
* (14) G. Giuliani, Eur. J. Phys. **31** (2010) 871-880.
* (15) A. Blondel, Compt. Rend. Acad. Sci. **159**, 674-679 (1914).
* (16) D. Auroux, in _Multivariable Calculus_, MIT Course 18.02SC, Massachusetts Institute of Technology (2010).
* (17) M.V. Berry, Proc. R. Soc. Lond. **A 392**, 45-57 (1984).
* (18) W. Greiner, and J. Reinhardt, in _Quantum Electrodynamics (4th ed.)_ (Springer, Heidelberg, 2008).
* (19) D.J. Griffiths, and V. Hnizdo, Am. J. Phys. **81**, 570 (2013).
|
1908.08934 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 43245,
"num_imgs": 7,
"llama3_tokens_count": 11466
} | [
"content_image/1908.08934/x1.png",
"content_image/1908.08934/x2.png",
"content_image/1908.08934/x3.png",
"content_image/1908.08934/x4.png",
"content_image/1908.08934/x5.png",
"content_image/1908.08934/x6.png",
"content_image/1908.08934/x7.png"
] | # Evolving roles and dynamics for catch and slip bonds during adhesion cluster maturation
Elizaveta A. Novikova\({}^{1,2}\) and Cornelis Storm\({}^{2,3}\)
\({}^{1}\)Institut de Biologie de l’Ecole Normale Superieure (IBENS), Département de Biologie, Ecole Normale Supérieure, CNRS, Inserm, PSL Research University, 46 rue d’Ulm, 75005 Paris, France
\({}^{2}\)Department of Applied Physics, Eindhoven University of Technology, P. O. Box 513, NL-5600 MB Eindhoven, The Netherlands
\({}^{3}\)Institute for Complex Molecular Systems, Eindhoven University of Technology, P. O. Box 513, NL-5600 MB Eindhoven, The Netherlands
February 28, 2024
###### Abstract
Focal adhesions are the loci of cellular adhesion to the extracellular matrix. At these sites, various integrins forge connections between the intracellular cytoskeleton and the outside world: large patches of multiple types of integrins together grip hold of collagen, fibronectin and other extracellular matrix components. The mixture of integrins composing the FA will, in general, contain both slip bond integrins, whose lifetime decreases with applied load, and catch bond integrins, whose lifetime increases when forced. Prior work suggests that catch bonds are essential for proper FA stability and mechanosensory functionality. In the present work, we investigate, numerically, the interplay between the two distinct types of bonds and ask how the presence, in the same FA cluster, of slip bonds augments the behavior of the catch bonds. We show that mixing the two components provides low-force mechanical integrity, lacking in purely catch-bond systems, while preserving the potential to strengthen the FA bond by force as well as the mechanosensory qualities of the catch bonds. We investigate the spatial distribution of integrins in mixed-integrin FAs and show that the differential response to loading leads, via an excluded volume interaction, to a dependence of the individual integrin diffusivities on the applied load, an effect that has been reported in experiments.
## I Introduction
Cells are able to sense and react to the stiffness of the extracellular environment Lo et al. (2000); Discher et al. (2005); Kong et al. (2005). Through their focal adhesions (FAs) cells are able to exert mechanical forces on the extracellular matrix (ECM). Inside each FA Kanchanawong et al. (2010) transmembrane proteins called integrins provide direct links between the cells’ internal contractile machinery and various ECM components. Integrins are heterodimers, composed of two subunits called \(\alpha\) and \(\beta\); each of these comes in various kinds. Together, there are about 25 different integrins in the vertebrates, allowing their cells to form robust adhesions to ECM components like collagen, fibronectin and vitronectin. Within a single FA, multiple integrin species are generally represented Cluzel et al. (2005) and previous work suggests that this mixed nature of the FA provides enhanced functionality. For instance, the signaling pathways of two widely studied and commonly cohabiting integrin types— \(\alpha_{5}\beta_{1}\) and \(\alpha_{V}\beta_{3}\)—interfere White et al. (2007), and their roles in adhesion and motility complement each other Roca-Cusachs et al. (2009); Balcioglu et al. (2015). Interactions—direct or indirect—between integrins of different types have been implicated in guiding force generation and rigidity sensing Elosegui-Artola et al. (2014). To better understand the roles of integrins in adhesion, models of the mechanosensing and mechanotransduction mechanisms on the level of the cell Walcott and Sun (2010); Marcq et al. (2011) and on the level of the focal adhesion Schwarz et al. (2006); Erdmann and Schwarz (2004); Schwarz and Erdmann (2007); Elosegui-Artola et al. (2014) have been developed. In this work we consider a focal adhesion with two types of integrins, a system analogous to the one that was treated in Elosegui-Artola et al. (2014). We complement the findings of Elosegui-Artola et al. (2014) with an analysis of mixed-cluster stability and a model of integrin mobility inside the focal adhesion. Our results exploit and extend our simulations in Novikova and Storm (2013): we supply a more realistic model and develop a method to determine macroscopic, stiffness-dependent parameters of the focal adhesion. The central question that we answer here is: how does the force exerted on a focal adhesion influence the diffusivity of free integrins inside it? We consider the force response of a focal adhesion consisting of two integrin types under a constant load. Using the assumption of uniform load sharing, we determine the equilibrium bond numbers of such a mixed cluster as a function of the individual properties of the bonds in it and of the cluster composition. We explore the stability of a mixed cluster under load and then include simple lateral diffusion of integrins on a two-dimensional lattice. We determine the diffusivity of free (unbound) integrins as a function of the applied force, and suggest that the diffusivity of integrins inside a focal adhesion is a macroscopic parameter which reflects the force exerted on it. Assuming that a cell invests equal energy in every focal adhesion, we conclude that the diffusivity of integrins inside a focal adhesion depends on the stiffness of the extracellular matrix and the level of force, which increases as the adhesion matures.
## II Binding and unbinding of single catch and slip bonds
The binding and unbinding rates \(k_{b}\) and \(k_{u}\) characterize the equilibrium kinetics of a single, noncovalent molecular bond. These rates are load-dependent; in response to an applied pulling force \(f\) the unbinding rate of so-called _slip-bonds_ is predicted, according to Kramers’ rate theory Kramers (1940), to increase exponentially as
\[k^{\rm sb}_{u}=k_{0}^{\rm sb}\exp{\left(\frac{+f\xi_{\rm sb}}{{k_{\textrm{B}}T }}\right)}\,.\] (1)
In this expression, \(\xi_{\rm sb}\) is a microscopic unbinding length; \(k_{\rm B}\) is the Boltzmann constant and \(T\) is the absolute temperature. \(k_{0}^{\rm sb}\) is the unforced unbinding rate: the rate at which the bond opens up under the effect of spontaneous fluctuations. It is set by a bare attempt frequency \(k_{0}\) and \(\Delta U_{\rm sb}\), the height of the energetic barrier corresponding to the dissociation of the bond
\[k_{0}^{\rm sb}=k_{0}\exp{\left(\frac{\Delta U_{\rm sb}}{{k_{\textrm{B}}T}} \right)}\,.\] (2)
In the case of a _catch-bond_, the unbinding behavior Dembo et al. (1988) is quite different: When a moderate tension is applied to this bond, the bond dissociation rate initially _decreases_, corresponding to an increase in the single-bond lifetime. Using the so-called ’two pathway model’ Pereverzev et al. (2005), the total unbinding rate of such a catch bond may be computed as
\[k^{\rm cb}_{u}=k_{0,1}^{\rm cb}\exp{\left(\frac{+f\xi_{1}}{{k_{\textrm{B}}T}} \right)}+k_{0,2}^{\rm cb}\exp{\left(\frac{-f\xi_{2}}{{k_{\textrm{B}}T}}\right) }\,,\] (3)
that is, a sum of two rates corresponding to two parallel processes. Process 1 (with bare unbinding rate \(k_{0,1}\) and dissociation length \(\xi_{1}\)) captures dissociation along a slip-like path, as may be surmised from the increase in rate with increasing force. Process 2 (with bare unbinding rate \(k_{0,2}\) and dissociation length \(\xi_{2}\)) describes dissociation along a catch path, different in the sense that the force-dependence in the exponent carries a minus sign which leads to a decreasing catch unbinding rate.
In previous work Novikova and Storm (2013), we show how to re-express Eq. (3) in terms of the normalized catch bond unbinding rate (in the case that \(\xi_{1}=\xi_{2}\equiv\xi_{\rm cb}\)) as a function of the dimensionless force \(\phi={f\xi_{\rm cb}}/{{k_{\textrm{B}}T}}\), using only two parameters \(\phi_{1}\) and \(\phi_{2}\) reflecting, respectively, the dissociation energy barriers for the slip and the catch path:
\[k^{\rm cb}_{u}(\phi)=e^{(\phi-\phi_{1})}+e^{-(\phi-\phi_{2})}\,.\] (4)
In the present work, we are interested in coupling these catch bonds with slip bonds. Their unbinding rate Eq. (1), likewise, may be re-expressed in terms of the same nondimensional forces and rates
\[k^{\rm sb}_{u}(\phi)=e^{(\phi/\rho_{\xi}-u_{\rm sb})}\,,\] (5)
with two additional parameters: \(\rho_{\xi}=\xi_{\rm cb}/\xi_{\rm sb}\) is the ratio of the catch and slip bond dissociation lengths, and \(u_{\rm sb}=-\Delta U_{\rm sb}/{{k_{\textrm{B}}T}}\) sets the zero-force unbinding rate of the slip bond. Once its unbinding rate \(k_{u}\) is known, the average lifetime of a single bond is computed as
\[k_{u}(\phi)\equiv(k_{0}\tau(\phi))^{-1}\,.\] (6)
After their discovery, single biological catch-bonds have received considerable attention in the community. Recent experiments T. et al. (2003); Kong et al. (2009); Nordin et al. (2012) measured catch-bond characteristics by pulling a single integrin-ligand bond with an AFM tip. In this work we use the parameters of an individual integrin-fibronectin catch bond, which were obtained in one of these experiments T. et al. (2003). As earlier in Novikova and Storm (2013), we use the two-pathway model from Pereverzev et al. (2005), and fit it to the data from Kong et al. (2009), with fit parameters \(\phi_{1}\) and \(\phi_{2}\). The dimensionless force \(\phi\) is computed as \(\phi=f/f^{\star}\), where \(f^{\star}=5.38\) is a scaling force. As also noted in Elosegui-Artola et al. (2014), compared to catch-bonds, slip-bonds formed by integrins have not been studied in as much detail; for demonstrational purposes we will, in the present paper, fix the catch bond parameters at the aforementioned values and vary the slip bond parameter \(\rho_{\xi}\) to set the relative force responsivity. Throughout this paper, we set \(u_{\rm sb}=1\) as the reference, zero-force unbinding rate for slip bonds. In Fig. 1, we plot the resulting catch- and slip-bond lifetimes for various values of \(\rho_{\xi}\). The distinct force-lifetime responses are clearly visible, with the catch bond showing the characteristic maximum of the unbinding lifetime at finite force.
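The force-lifetime curves of Fig. 1 follow directly from Eqs. (4)-(6). A minimal script that reproduces them is sketched below (the plotting choices are illustrative; the parameter values are those quoted above).

```python
# A minimal script (the plotting choices are illustrative) that reproduces the
# force-lifetime curves of Fig. 1 from Eqs. (4)-(6): tau = 1/k_u in units of 1/k_0,
# with phi_1 = 7.78, phi_2 = 4.02 for the catch bond and u_sb = 1,
# rho_xi in {1, 3.8, 6.6} for the slip bonds.
import numpy as np
import matplotlib.pyplot as plt

phi = np.linspace(0.0, 30.0, 600)
phi1, phi2 = 7.78, 4.02            # catch-bond parameters quoted in the text
u_sb = 1.0                         # zero-force slip-bond parameter

def k_u_catch(phi):
    return np.exp(phi - phi1) + np.exp(-(phi - phi2))       # Eq. (4)

def k_u_slip(phi, rho_xi):
    return np.exp(phi / rho_xi - u_sb)                      # Eq. (5)

plt.plot(phi, 1.0 / k_u_catch(phi), 'r-', label='catch bond')
for rho_xi, ls in [(1.0, ':'), (3.8, '--'), (6.6, '-')]:
    plt.plot(phi, 1.0 / k_u_slip(phi, rho_xi), 'g' + ls,
             label=f'slip bond, rho_xi = {rho_xi}')
plt.xlabel('dimensionless force phi')
plt.ylabel('lifetime k_0 tau')
plt.legend()
plt.show()
```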
<figure><img src="content_image/1908.08934/x1.png"><figcaption>FIG. 1: Average lifetimes τc,τs and of catch- (red) and slip (green) bonds asa function of dimensionless force, ϕ. Parameter values for catch bonds(described by Eq. (4)): ϕ2=4.02, ϕ1=7.78; for slip bonds (described by Eq.(5)) short dashed: ρξ=1, usb=1, long dashed: ρξ=3.8,usb=1, solid line: ρξ=6.6,usb=1.</figcaption></figure>
With these preliminaries in place, we turn to the behavior of a mixed cluster containing both catch and slip bonds, at finite force.
## III A mixed catch-slip cluster at fixed force: Mean field theory
Following the approach laid out in Schwarz et al. (2006), we consider a fixed total number of bonds (bound or unbound) \(N_{\rm t}\), out of which \(N_{\rm ct}\) are catch-bonds and \(N_{\rm st}\) are slip-bonds; \(N_{\rm ct}\) and \(N_{\rm st}\) are individually conserved. We will let \(i\) denote the number of bound catch bonds, and \(j\) the number of bound slip bonds at time \(t\). We denote the probability of having \(i\) closed catch bonds and \(j\) closed slip bonds at a given time \(t\) by \(p_{i,j}(t)\); its evolution is governed by a one-step, two-variate master equation:
\[\frac{\mathrm{d}p_{i,j}(t)}{\mathrm{d}t} = r^{s}_{i,j+1}(F_{\rm t})p_{i,j+1}+r^{c}_{i+1,j}(F_{\rm t})p_{i+1 ,j}\] (7)
\[+g^{c}_{i-1,j}p_{i-1,j}+g^{s}_{i,j-1}p_{i,j-1}\]
\[-\left[r^{c}_{i,j}(F_{\rm t})+r^{s}_{i,j}(F_{\rm t})+g^{c}_{i,j}+ g^{s}_{i,j}\right]p_{i,j}\,,\]
where \(r^{s/c}(F)\) are the force-dependent unbinding rates for slip (\(s\)) and catch (\(c\)) bonds, and \(g^{s/c}\) are the rebinding rates setting the typical time for the formation of a new catch or slip attachment to an extracellular ligand. \(F_{\rm t}\) is the total force applied to all bonds. As such, the first line of the RHS of Eqs. (7) describes the change in \(p_{i,j}(t)\) due to the unbinding of either a catch or a slip bond from a state with one additional bound bond compared to \(\{i,j\}\); the second line represents rebinding of either a catch or a slip bond from a state with one fewer bound bond compared to \(\{i,j\}\), and the third line represents unbinding _and_ rebinding of either type of bond from the state \(\{i,j\}\) itself. Eqs. (7) describe a stochastic process underlying the temporal evolution of the probability distribution \(p_{i,j}(t)\). Derived from it are the quantities that we will initially be most interested in, the expectation values for the total number of bound bonds \(N\), and those for the numbers of bound catch (\(N_{\rm c}\)) and slip bonds (\(N_{\rm s}\)) individually
\[N_{\rm c}(t) \equiv \langle i\rangle(t)=\sum_{\{i,j\}}i\,p_{i,j}(t)\]
\[N_{\rm s}(t) \equiv \langle j\rangle(t)=\sum_{\{i,j\}}j\,p_{i,j}(t)\]
\[N(t) \equiv \langle i+j\rangle(t)=\sum_{\{i,j\}}(i+j)\,p_{i,j}(t)\,.\] (8)
We assume the rate of rebinding to be independent of both the applied force (because new bonds form, by definition, at zero tension) and of the type of bond. This helps simplify the initial conditioning of the system, and although it may be necessary to revisit this assumption to permit quantitative analysis we are, for the purpose of this paper, interested first in establishing the qualitative effects of mixing slip and catch bonds in adhesive clusters. Force-independent rebinding is enforced by setting
\[g^{c}_{i,j} = g^{c}_{i}=k_{0}\gamma(N_{\rm ct}-i)\]
\[g^{s}_{i,j} = g^{s}_{j}=k_{0}\gamma(N_{\rm st}-j)\] (9)
i.e., rebinding is proportional to the instantaneous number of available, unbound bonds of the same type. Again, we simplify the system by assuming that \(\gamma\) is independent of the force, and is the same for both types of bond. Of course, there is no reason for this to hold in real life; the kinetics of integrin-ligand bond formation will differ by type.
The force-dependent unbinding rates \(r^{s}(F)\) and \(r^{c}(F)\) are where the differential characteristics of catch- and slip-bond manifest themselves. From now on we describe the process in terms of the total dimensionless force \(\Phi=F_{t}/f^{\star}\), and define
\[r^{c}_{i,j}(\Phi) = r^{c}_{i}(\Phi)\equiv i\,k_{0}\,k^{\rm cb}_{u}(\bar{\phi})\]
\[r^{s}_{i,j}(\Phi) = r^{s}_{j}(\Phi)\equiv j\,k_{0}\,k^{\rm sb}_{u}(\bar{\phi})\,,\] (10)
where the normalized rates \(k^{\rm cb}_{u}\) and \(k^{\rm sb}_{u}\) are evaluated at the average loading force, which we obtain by assuming a uniform distribution of the total load across all bound bonds, _i.e._
\[\bar{\phi}=\frac{\Phi}{i+j}\] (11)
Nonuniformly distributed load may well be present in focal adhesions, and may be implemented by a spatially varying distribution of \(\Phi\); again we start from the simplest scenario here. With these conventions, we derive directly from Eq. (7) an evolution equation for \(N(t)\), the expected number of bound bonds
\[\frac{\mathrm{d}}{\mathrm{d}t}N=\sum_{\{i,j\}}(i+j)\left(\frac{\mathrm{d}p_{i, j}}{\mathrm{d}t}\right)=-\langle r^{c}_{i,j}\rangle+\langle g^{c}_{i,j}\rangle -\langle r^{s}_{i,j}\rangle+\langle g^{s}_{i,j}\rangle\,,\] (12)
where the summation is over all of the possible numbers \(\{i,j\}\) of bound catch and slip bonds in a cluster, and \(\langle\cdot\rangle\) denotes averages in the distribution \(p_{i,j}(t)\). Eq. (12) can be split into two separate equations, describing the expected numbers of catch (\(N_{\rm c}=\langle i\rangle\)) and slip (\(N_{\rm s}=\langle j\rangle\)) bonds separately. Assuming that all rate functions vary slowly around their equilibrium values, we make the mean-field approximation by replacing \(\langle r^{c}_{i,j}\rangle\), \(\langle r^{s}_{i,j}\rangle\), \(\langle g^{c}_{i,j}\rangle\), and \(\langle g^{s}_{i,j}\rangle\) by the first terms in their Taylor expansions around \(\{\langle i\rangle,\langle j\rangle\}\): \(\langle r^{c}_{i,j}\rangle\approx r^{c}_{\langle i\rangle,\langle j\rangle}\), \(\langle g^{c}_{i,j}\rangle\approx g^{c}_{\langle i\rangle,\langle j\rangle}\), etc. This transforms Eq. (12) into the following coupled system
\[\left\{\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}{N_{\rm c}}= &-N_{\rm c}k^{\rm cb}_{u}\left(\frac{\Phi}{N_{\rm c}+N_{\rm s}} \right)+\gamma(N_{\rm ct}-N_{\rm c})\\ \frac{\mathrm{d}}{\mathrm{d}t}{N_{\rm s}}=&-N_{\rm s }k^{\rm sb}_{u}\left(\frac{\Phi}{N_{\rm c}+N_{\rm s}}\right)+\,\gamma(N_{\rm st }-N_{\rm s})\,.\end{aligned}\right.\] (13)
Here the time \(t\) is actually the nondimensionalized time \(tk_{0}\), but we may set \(k_{0}=1\,{\rm s}^{-1}\) without loss of generality. Note, also, the nature of the coupling: in our model, the different types of bonds are aware of each other only through the shared total force \(\Phi\). At equilibrium, the RHSs of both equations in the system above vanish. At zero overall force, the equations fully decouple, and the number of bound slip bonds becomes independent of \(\rho_{\xi}\). For general forces, the coupled system (13) has two solutions for each value of the force. One of the solutions is unstable; the other corresponds to the local equilibrium and is stable. These two solution branches are readily obtained by direct numerical solution of Eqs. (13), with the RHSs equated to zero.
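As an aside, the steady-state branches can be located with a generic root finder once functional forms for the unbinding laws are supplied. The minimal Python sketch below assumes, purely for illustration, a Bell-type slip law and a two-pathway-style catch law; these are stand-ins for the experimentally derived catch-bond curve and the slip-bond law (parametrized by \(\rho_{\xi}\) and \(u_{\rm sb}\)) used in our actual calculations.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative placeholder unbinding laws (NOT the rate functions used in the
# paper): a Bell-type slip bond and a two-pathway-style catch bond.
def k_u_slip(phi):
    return np.exp(phi)                       # unbinding accelerates with load

def k_u_catch(phi, phi_c=2.0):
    # two competing pathways: lifetime first increases, then decreases with load
    return np.exp(-(phi - phi_c)) + np.exp(0.3 * (phi - phi_c))

def steady_state(Phi, N_ct, N_st, gamma=1.0, guess=0.6):
    """Roots of the RHS of Eqs. (13) for a given total dimensionless force Phi."""
    def rhs(x):
        Nc, Ns = x
        phi_bar = Phi / max(Nc + Ns, 1e-9)   # uniform load sharing, Eq. (11)
        return [-Nc * k_u_catch(phi_bar) + gamma * (N_ct - Nc),
                -Ns * k_u_slip(phi_bar) + gamma * (N_st - Ns)]
    return fsolve(rhs, x0=[guess * N_ct, guess * N_st])

print(steady_state(Phi=50.0, N_ct=50, N_st=50))
```

Starting the root finder inside the basin of attraction of the bound branch returns the (meta)stable solution; a different initial guess locates the unstable branch.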
As shown in Fig. 2, at the equilibrium level the effect of mixing catch and slip bonds is that slip bonds provide most of the adhesion at low forces, while the catch bonds take over at intermediate and high forces. This is a marked increase in functionality over having just catch bonds; while these are able to stabilize adhesions at high forces they must pass through an extended, weakly bound regime to get there. Mixed catch-slip adhesion clusters always have an appreciable number of the integrins bound and as such provide stability at all force levels. We now compare these numerical solutions to the results of stochastic simulations of the mixed bonds system.
## IV Stochastic simulations of mixed clusters: Equilibrium bond numbers
While the mean-field approximation can teach us something about equilibrium behavior and expectation values, it says nothing about the dynamic behavior and, in particular, is not able to address the lifetime of the stable state. As we have demonstrated in earlier work, the mechanism for cluster unbinding is fluctuation-driven: what we have called the stable solution branch is actually a metastable branch, and a sufficiently large bond-number fluctuation, which will come along at some point, prompts unbinding of the entire cluster. In order to address the lifetime of mixed clusters, we therefore turn to stochastic simulations, for which we use the Gillespie algorithm Gillespie (1977). We initiate the system at a certain total number of bound bonds of each type, and specify the cluster composition (total numbers of available catch and slip bonds). The choice of the initial value of bound bonds determines the typical evolution of the simulation, in the sense that in order to reach the (meta)stable solution branch the initial values must be chosen within the basin of attraction of that branch. A typical simulation allows us to compute the typical evolution of the number of bound catch and slip bonds with time, as Fig. 2 demonstrates. The solid lines are the equilibrium predictions from Eqs. (13), and indeed the system is seen to converge onto the predicted values after a brief equilibration period. For this particular choice of parameters, the cluster is stable over the entire time of the simulation. However, the stochastic simulations also capture cluster unbinding, as is shown in Fig. 3, where an initially stabilized cluster unbinds after a spontaneous supercritical bond-number fluctuation. Repeating these simulations multiple times, for different total forces and different parameter values, we collect statistics on both the average values of the number of bound bonds of each type and the lifetime of the composite cluster. Fig. 3 shows that, as predicted by the mean-field model, the average relative numbers of bound catch (\(n_{c}=\langle N_{\rm c}(t)\rangle_{t}/N_{\rm ct}\)) and slip (\(n_{s}=\langle N_{\rm s}(t)\rangle_{t}/N_{\rm st}\)) bonds in a stable adhesive cluster follow the expected behavior, and that catch and slip bonds preserve their tendencies even when coupled to each other via the force applied to a composite cluster. The number of bound catch bonds still peaks at some finite force, while the equilibrium fraction of bound slip bonds decreases monotonically with increasing force. In measuring these average bound-bond numbers, we take into account only the times during which a stable adhesion is present; should the cluster unbind, we stop measuring. Thus, what this simulation bears out is that the composition of stably adherent clusters is reliably predicted by Eqs. (13).
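The simulation loop itself is compact; a minimal sketch is given below. It propagates one trajectory of the two-variate process behind Eq. (7), with rebinding propensities as in Eq. (9) and unbinding propensities as in Eq. (10) (with \(k_{0}=1\)), and uses the same illustrative placeholder unbinding laws as the mean-field sketch above rather than the rate functions of our actual calculations.

```python
import numpy as np

rng = np.random.default_rng(1)
k_u_slip = lambda phi: np.exp(phi)
k_u_catch = lambda phi, phi_c=2.0: np.exp(-(phi - phi_c)) + np.exp(0.3 * (phi - phi_c))

def gillespie(Phi, N_ct, N_st, i0, j0, gamma=1.0, t_max=1e5):
    """Evolve (i, j) = (bound catch, bound slip) until full unbinding or t_max."""
    i, j, t = i0, j0, 0.0
    while t < t_max and (i + j) > 0:
        phi_bar = Phi / (i + j)                     # shared load, Eq. (11)
        rates = np.array([i * k_u_catch(phi_bar),   # catch unbinding, Eq. (10)
                          j * k_u_slip(phi_bar),    # slip unbinding, Eq. (10)
                          gamma * (N_ct - i),        # catch rebinding, Eq. (9)
                          gamma * (N_st - j)])       # slip rebinding, Eq. (9)
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # exponential waiting time
        move = rng.choice(4, p=rates / total)        # which event fires
        i += (-1, 0, 1, 0)[move]
        j += (0, -1, 0, 1)[move]
    return t, i, j       # if i + j == 0, t is the cluster lifetime

print(gillespie(Phi=50.0, N_ct=50, N_st=50, i0=30, j0=30))
```

Time-averaging \(i\) and \(j\) over the stable portion of such trajectories yields the bound fractions, while the first-passage time to \(i+j=0\) yields the cluster lifetime discussed below.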
<figure><img src="content_image/1908.08934/x2.png"><figcaption>FIG. 2: Relative fraction of closed catch and slip bonds as a function of theforce in a cluster of 2048 catch and 2048 slip bonds. Orange/blue pointscorrespond to simulation results for catch/slip bonds, starting from all bondsclosed for zero force. Pink/green lines - deterministic solution, obtained bysolving equations 13, for experimentally derived catch bond force-lifetimecurves and slip bonds with ρξ=3.8 and usb=1. Rebinding rate for both catch andslip bonds is γ=1. The vertical line at Φ=12×103 is the total force at whichwe simulate the dynamics for Fig. 3; the two black dots where this lineintersects the stable branches for catch and slip bonds represent thepredicted equilibrium binding fractions.</figcaption></figure>
<figure><img src="content_image/1908.08934/x3.png"><figcaption>FIG. 3: Relative fraction of closed catch and slip bonds as a function of timefor Φ=12×103,ρξ=3.8 and usb=1, γ=1, 2048 catch bonds and 2048 slip bonds.Bound fractions for both were initialized at 0.6. The red and green linesrepresent the evolution of the fraction of bound catch (red) and slip (green)bonds over time. The horizontal black dotted lines represent the predictedequilibrium values for these fractions according to Eq. (13). These valuesalso correspond to the black dots in Fig. 2.</figcaption></figure>
## V Stochastic simulations of mixed clusters: Cluster lifetimes
With the force-dependent numbers of bound bonds and their partitioning between catch and slip now clear, we may ask what functional advantage, if any, the presence of both types of bonds offers over a single species of integrin. Is it true that the increased presence of bound bonds (mostly slip) at low forces translates into increased lifetimes in this region, and does this provide additional, previously missing low-force stability? Our stochastic simulations allow us to measure the lifetime of a mixed cluster and compare it to the lifetimes of clusters containing only catch, or only slip, bonds. Representative results are collected in Fig. 4.
<figure><img src="content_image/1908.08934/x4.png"><figcaption>FIG. 4: Comparison between the lifetimes of a cluster consisting of 100 catchbonds (red curve, parameters: ρξ=1, γ=0.2,usb=1), 100 slip bonds (green curve,parameters: ρξ=1,γ=0.2,usb=1) and a cluster containing 50 slip bonds and 50catch bonds (blue curve, same parameters as the pure systems).</figcaption></figure>
<figure><img src="content_image/1908.08934/x5.png"><figcaption>FIG. 5: Lifetime of a cluster consisting of 50 catch and 50 slip bondsdepending on force. Each of the points represent the result of 100 simulationtrajectories started out from Nc=25 and Ns=25 after 4⋅106 steps each, and ρξ=1(red) ρξ=3.8 (green), ρξ=6.6 (blue), solid line of matching color correspondto T25,25 \- the solution of Eq. (18).</figcaption></figure>
As Fig. 4 illustrates, mixing catch and slip bonds provides additional functionality compared to either of the two single-component systems. At low forces, the slip bonds provide initial stability to a nascent cluster compared to a catch-only cluster. This eliminates the weakly bound low-force regime from the pure catch system; the slip bonds ensure immediate and effective adhesion. As the force rises and slip bonds are gradually replaced by catch-bond integrins, the behavior of the entire cluster increasingly reflects their high-force stabilizing effect; the blue curve lies above the green curve. The advantage of mixing is thus obvious from the mean lifetime: the catch bonds in the mixed cluster provide additional stability and a far higher threshold force for unbinding at high forces compared to pure slip-bond systems, while the slip bonds provide greatly enhanced stability at lower forces.
That the lifetime of the mixed cluster is nowhere longer than that of either the pure catch or the pure slip system should not come as a surprise; in the regimes where the behavior of one type of bond dominates, this behavior is always going to be diluted to some extent by the presence of the other, subdominant bond type. We speculate that the overall improvement of both low- and high-force stability takes precedence over further increases in lifetime in one particular force regime.
Fig. 4 also suggests a particular sequence to the dynamics of integrin recruitment to developing focal adhesions. As the tension builds in the stress fiber attached to the focal adhesion, the system travels along the \(\Phi\)-axis. Based on our mixed-cluster model, we suggest this phase of tension-buildup drives a shift in FA composition, or at least in the partitioning of those integrins that are bound to the substrate. Younger focal adhesions benefit most from bound slip-type bonds, whereas mature focal adhesions will rely more on catch bonds.
The partitioning of bound bonds will be exceedingly difficult to measure directly. Their complement—the unbound bonds—may well be a better target to validate the predicted behavior. In the following sections, we detail how careful observation of the diffusive behavior of both bond types inside the FA may reveal a force-dependent compositional shift.
## VI Lateral diffusion of catch and slip bonds in the adhesive zone
Why should the force-dependent composition of a mixed-cluster adhesion affect the mean diffusivity of integrins inside a FA? To see this, consider an area densely covered with integrins of both types, some bound and some unbound. In-plane hopping of one integrin to a neighboring site then requires it to exchange places with a neighbor that is also not bound, and therefore also free to move. An abundance of bound bonds, which are immobilized by their connection to the ECM, in this environment reduces the opportunities for such hops, and thus strongly suppresses the diffusivity of unbound integrins. Indeed, single-protein tracking experiments Rossier et al. (2012) report clear changes in the diffusivity depending on the applied tension. To model the diffusion, we include now the spatial distribution of integrins in our model by putting the integrins on a square lattice, with lattice spacing \(\lambda\).
In these simulations, the binding and unbinding behavior is as it was before in the Gillespie approach, but now we add as a potential update move the exchange of position between two neighbouring lattice sites provided both are unbound. In such a simulation, the diffusion coefficient \(D\) may be computed following Haus and Kehr (1987) as the coefficient of proportionality between the mean residence time at a lattice site \(\langle t_{\rm res}\rangle\) and the squared lattice spacing:
\[D=\frac{\lambda^{2}}{2d\langle t_{\rm res}\rangle}\,,\] (14)
where \(d=2\) is the dimensionality of the lattice. For a single, unbound integrin on an otherwise empty lattice, the transition rate \(r_{0}\) for hopping between neighboring sites is set to
\[r_{0}=\frac{D_{0}}{\lambda^{2}}\,;\] (15)
we shall refer to \(D_{0}\) as the free diffusion constant. A simulation run with many binding and unbinding integrins of both types then proceeds as follows: neighboring unbound bonds exchange sites with a rate \(r_{0}\). To be able to put some actual numbers on the quantities we compute, we choose the lattice spacing \(\lambda\) such that the total density \(\rho_{\rm tot}\) of integrins matches the value reported in Elosegui-Artola et al. (2014), setting \(\lambda=1/\sqrt{\rho_{\rm tot}}\approx 20\) nm. The free diffusion constant is set to \(D_{0}=0.32\). Subject to the rule that exchanges are only permitted if both neighbors are unbound, we measure how long each bond spends at a single lattice site before moving to a neighboring site. Averaging over all bonds of a single type (catch or slip), we compute the mean residence time, \(\langle t_{\rm res,c/s}\rangle\), from which, according to Eq. (14), the diffusion coefficients for this 2D system follow as:
\[D_{c/s}=\frac{\lambda^{2}}{4\langle t_{\rm res,c/s}\rangle}\,.\] (16)
The diffusion coefficient for either bond type, in a system with a given number of catch- and slip bonds is determined by two factors: how many bonds of a given type are able to move (_i.e._, are unbound), and how many unbound neighbors of either type are in the direct vicinity. Fig. 6 shows the resulting behavior. The dots in this figure represent simulation data and show that the changing composition of the cluster, as the force rises, is indeed reflected directly in the diffusive behavior of the free integrins. Initially, the mobility of the catch bonds is considerably higher, reflecting the fact that many of them are not yet bound and thus able to diffuse. The slip bonds, in contrast, are mostly bound and thus a large fraction of them is immobile. As the force increases, this picture is reversed and while the slip bonds are, on average, becoming increasingly mobile more and more catch bonds are becoming bound and immobile.
This simple physical picture can be summarized in the following formula for the effective, force-dependent diffusion coefficient of catch and slip integrins in adhesion sites densely covered in integrins
\[D_{\rm c/s}(\Phi)=D_{0}\biggl{(}1\!-\!n_{c/s}(\Phi)\biggr{)}\biggl{[}1\!-\!\frac{N_{\rm ct}\,n_{c}(\Phi)}{N_{\rm ct}+N_{\rm st}}-\frac{N_{\rm st}\,n_{s}(\Phi)}{N_{\rm ct}+N_{\rm st}}\biggr{]}\,,\] (17)
where \(n_{c}(\Phi)\) and \(n_{s}(\Phi)\) are the fraction of bound catch- or slip- bonds, respectively. The term between round brackets accounts for the availability of nonbound bonds, the term between square brackets accounts for the availability of nonbound neighbours. The predictions of Eq. (17), after plugging in the equilibrium values of \(n_{c}(\Phi)\) and \(n_{s}(\Phi)\) computed earlier, are graphed with solid lines in Fig. 6, confirming the agreement with our simulations. Comparing Fig. 6 with Fig. 2 confirms the intuitive correspondence between diffusivity and the bound/unbound fractions of both species.
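Evaluating Eq. (17) is then a one-liner once the bound fractions are known; in the minimal sketch below the values of \(n_{c}\) and \(n_{s}\) are placeholders that would in practice come from the steady state of Eqs. (13) or from simulation averages, while \(D_{0}=0.32\) follows the value quoted above.

```python
def effective_diffusivities(D0, n_c, n_s, N_ct, N_st):
    """Eq. (17): the round bracket requires the tracked bond itself to be unbound,
    the square bracket an unbound neighbour to swap places with."""
    blocked = (N_ct * n_c + N_st * n_s) / (N_ct + N_st)   # fraction of immobile (bound) integrins
    D_c = D0 * (1.0 - n_c) * (1.0 - blocked)
    D_s = D0 * (1.0 - n_s) * (1.0 - blocked)
    return D_c, D_s

# e.g. a high-force situation: most catch bonds bound, most slip bonds open
print(effective_diffusivities(D0=0.32, n_c=0.8, n_s=0.25, N_ct=2048, N_st=2048))
```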
<figure><img src="content_image/1908.08934/x6.png"><figcaption>FIG. 6: Mean diffusivity of catch and slip bonds as a function of the forceexerted on the cluster. Parameter values: ρξ=3.8 and usb=1, γ=1. Dots aresimulation results, solid lines are calculated from Eq. (17).</figcaption></figure>
## VII Discussion
We have studied the behavior of adhesive clusters composed of a mixture of catch and slip bonds. Our results show that such mixed clusters provide increased functionality over either of the two pure systems: the bonds, in fact, complement each other, in the sense that the addition of slip bonds provides additional stability at low forces to purely catch systems, while the addition of catch bonds provides increased load-bearing capacity and strength at higher forces. While our model does not include direct interactions between the two types of bonds, they _do_ interact indirectly, via the shared force.
As a result of this indirect, nonlinear coupling between the bond types, the fractions of bound bonds, for both species, change as the force is increased. Our model therefore suggests that the two types of bonds not only play different roles within a composite cluster, but that they are also differentially engaged depending on the applied force. Because the force exerted at a given focal adhesion increases as the adhesion matures, this implies that the engagement (or activation) of different integrin species automatically becomes organized in time, with early adhesions featuring mostly bound slip bonds and late-stage adhesions featuring more adherent catch bonds.
In experiments, this differential engagement will be exceedingly difficult to quantify or even image directly, because all of this may happen against a background of constant overall focal adhesion composition. What changes over time are the fractions of bonds of either type that are actually _bound_ to the ECM. To circumvent this difficulty, we suggest instead measuring the average diffusivity of the different types of integrins inside a focal adhesion, which we find to report directly on their instantaneous activation (engagement). Moreover, _changes_ in these diffusivities might be used to assess force-dependent changes in the contributions of these different species. While, to be sure, this is still by no means straightforward, it has actually been demonstrated in previous experiments Rossier et al. (2012). Our results show that similar measurements executed at different times can compare nascent, early and mature focal adhesions and have the potential to verify the differential engagement of various integrin types during adhesion. Again, we stress that engagement and concentration are two distinct quantities; the presence of an integrin does not imply its state of activation.
While it most certainly oversimplifies the spectacular biophysics of the focal adhesion, our model is a first attempt to quantitatively assess the benefits of complexity and redundancy in cellular adhesion. We find that, even with only two species, such benefits are readily identified, may be intuitively understood and modeled, and that the evolution of the system is robustly self-organized, encoded through physical, statistical-mechanical principles rather than specific biochemical regulation. Experiments well within reach of the current state of the art should be able to confirm some of the predictions we make here.
###### Acknowledgements.
This work was supported by funds from the Netherlands Organization for Scientific Research (NWO-FOM) within the program on Mechanosensing and Mechanotransduction by Cells (FOM-E1009M). We thank Prof. Ulrich Schwarz, Prof. Erik Danen, Dr. Thorsten Erdmann and Dr. Emrah Balcioglu for valuable discussions.
## References
* H. E. Balcioglu, H. van Hoorn, D. M. Donato, T. Schmidt, and E. H. J. Danen (2015)The integrin expression profile modulates orientation and dynamics of force transmission at cell–matrix adhesions. Journal of Cell Science128 (7), pp. 1316–1326. External Links: Document, ISSN 0021-9533 Cited by: §I.
* C. Cluzel, F. Saltel, J. Lussi, F. Paulhe, B. A. Imhof, and B. Wehrle-Haller (2005)The mechanisms and dynamics of \(\alpha_{v}\beta_{3}\) integrin clustering in living cells. The Journal of Cell Biology171 (2), pp. 383–392. Cited by: §I.
* M. Dembo, D.C. Torney, K. Saxman, and D. Hammer (1988)The reaction-limited kinetics of membrane-to-surface adhesion and detachment. Proceedings of the Royal Society B: Biological Sciences234, pp. 55–83. Cited by: §II.
* D.E. Discher, P.A. Janmey, and Y. Wang (2005)Tissue cells feel and respond to the stiffness of their substrate. Science310 (5751), pp. 1139–1143. Cited by: §I.
* A. Elosegui-Artola, E. Bazelliares, M. D. Allen, I. Andreu, R. Oria, R. Sunyer, J. J. Gomm, J. F. Marshall, J. L. Jones, X. Trepat, and P. Roca-Cusachs (2014)Rigidity sensing and adaptation through regulation of integrin types. Nature Materials14 (6). External Links: ISSN 1476-4660, Document Cited by: §I, §II, §VI.
* T. Erdmann and U. S. Schwarz (2004)Stability of adhesion clusters under constant force. Physical Review Letters92 (10), pp. 108102. Cited by: §I.
* D. T. Gillespie (1977)Exact stochastic simulation of coupled chemical reactions. Journal of Physical Chemistry81 (25), pp. 2340–2361. Cited by: §IV.
* J. W. Haus and K. W. Kehr (1987)Diffusion in regular and disordered lattices. Physics Reports150 (5), pp. 263–406. Cited by: §VI.
* N. G. van Kampen (1987)Stochastic processes in physics and chemistry. North-Holland Physics Publishing. Cited by: Appendix A.
* P. Kanchanawong, G. Shtengel, A. M. Pasapera, E. B. Ramko, M. W. Davidson, H. F. Hess, and C. M. Waterman (2010)Nanoscale architecture of integrin-based cell adhesions. Nature468 (7323), pp. 580–584. Cited by: §I.
* F. Kong, A. J. Garcia, A. P. Mould, M. J. Humphries, and C. Zhu (2009)Demonstration of catch bonds between an integrin and its ligand. J. Cell. Biol.185 (7), pp. 1275–1284. Cited by: §II.
* H. J. Kong, J. Liu, K. Riddle, T. Matsumoto, K. Leach, and D. J. Mooney (2005)Non-viral gene delivery regulated by stiffness of cell adhesion substrates. Nature Materials4 (6), pp. 460–464. Cited by: §I.
* H. A. Kramers (1940)Brownian motion in a field of force and the diffusion model of chemical reactions. Physica VII4, pp. 284–304. Cited by: §II.
* C. Lo, H. Wang, M. Dembo, and Y. Wang (2000)Cell movement is guided by the rigidity of the substrate. Biophysical Journal79 (1), pp. 144–152. Cited by: §I.
* P. Marcq, N. Yoshinaga, and J. Prost (2011)Rigidity sensing explained by active matter theory. Biophysical Journal101, pp. L33–L35. Cited by: §I.
* D. Nordin, L. Donlon, and D. Frankel (2012)Characterising single fibronectin-integrin complexes. Soft Matter8, pp. 6151–6160. External Links: Document Cited by: §II.
* E. A. Novikova and C. Storm (2013)Contractile fibers and catch-bond clusters: a biological force sensor?. Biophysical Journal105 (6), pp. 1336 – 1345. Note: External Links: ISSN 0006-3495, Document Cited by: §I, §II.
* Y. V. Pereverzev, O. V. Prezhdo, M. Forero, E. V. Sokurenko, and W. E. Thomas (2005)The two-pathway model for the catch-slip transition in biological adhesion. Biophysical Journal89 (3), pp. 1446 – 1454. Cited by: §II, §II.
* P. Roca-Cusachs, N. C. Gauthier, A. del Rio, and M. P. Sheetz (2009)Clustering of \(\alpha_{5}\beta_{1}\) integrins determines adhesion strength whereas \(\alpha_{v}\beta_{3}\) and talin enable mechanotransduction. Proceedings of the National Academy of Sciences106 (38), pp. 16245–16250. External Links: Document Cited by: §I.
* O. Rossier, V. Octeau, J.-B. Sibarita, C. Leduc, B. Tessier, D. Nair, V. Gatterdam, O. Destaing, C. Albigès-Rizo, R. Tampé, L. Cognet, D. Choquet, B. Lounis, and G. Giannone (2012)Integrins \(\beta_{1}\) and \(\beta_{3}\) exhibit distinct dynamic nanoscale organizations inside focal adhesions. Nature Cell Biology14 (10), pp. 1057–1067. Cited by: §VI, §VII.
* U. S. Schwarz, T. Erdmann, and I.B. Bischofs (2006)Focal adhesions as mechanosensors: the two-spring model. Biosystems83 (2-3), pp. 225–232. Cited by: §I, §III.
* U. S. Schwarz and T. Erdmann (2007)Impact of receptor-ligand distance on adhesion cluster stability. European Physical Journal E22, pp. 123–137. Cited by: §I.
* B. T. Marshall, M. Long, J. W. Piper, T. Yago, R. P. McEver, and C. Zhu (2003)Direct observation of catch bonds involving cell-adhesion molecules. Nature423 (6936), pp. 190–193. Cited by: §II.
* S. Walcott and S. X. Sun (2010)A mechanical model of actin stress fiber formation and substrate elasticity sensing in adherent cells. Proceedings of the National Academy of Sciences USA107 (17), pp. 7757–7762. Cited by: §I.
* D. P. White, P. T. Caswell, and J. C. Norman (2007)\(\alpha_{v}\beta_{3}\) And \(\alpha_{5}\beta_{1}\) integrin recycling pathways dictate downstream rho kinase signaling to regulate persistent cell migration. The Journal of Cell Biology177 (3), pp. 515–525. Cited by: §I.
## Appendix A Analytical calculation of the lifetime of a mixed cluster
<figure><img src="content_image/1908.08934/x7.png"><figcaption>FIG. 7: Sketch of the configurational space that cluster with two types ofbonds explores. The parameters of the system are the number of bound catchbonds (along the x-axis) and the number of bound slip-bonds (along they-axis). All unbinding pathways correspond to trajectories that end up in theorigin at the lower left corner. The example trajectory of unbinding (blueline) starts from (i,j) bound bonds(black point) and ends at an absorbingboundary at (0,0) (the red point); it is subject to reflecting boundariesalong the red lines. The trajectory is confined to be inside the phase spaceat all times when 0⩽i⩽Nst and 0⩽j⩽Nct.</figcaption></figure>
The time \(T_{i,j}\) that it takes a cluster of \(i\) bound catch and \(j\) bound slip bonds to reach the state where all catch and slip bonds are unbound obeys a recursive equation, which may be derived using the methods set out in Kampen (1987). This relation reads
\[T_{i,j}=\frac{g_{i}\,T_{i+1,j}+g_{j}\,T_{i,j+1}+r^{s}_{i,j}\,T_{i,j-1}+r^{c}_{i,j}\,T_{i-1,j}+1}{g_{i}+g_{j}+r^{c}_{i,j}+r^{s}_{i,j}}\,,\] (18)
with \(g\) and \(r\) the binding and unbinding rates as defined in the main text. The last term in Eqs. (18) corresponds to the time that it takes to leave state \({i,j}\) to any of its neighboring states in configurational space, and the first four terms represent the lifetimes of those four neighboring states, multiplied by the transition probabilities to those states. Writing this out for all possible combinations of catch- and slipbonds, one obtains \(N_{\rm c}\times N_{\rm s}\) equations for \(T_{i,j}\). This system of coupled algebraic equations is to be solved subject to a number of boundary conditions:
\[T_{0,0} = 0\quad:\quad\text{absorbing boundary},\] (19)
\[T_{-1,0} = 0\quad:\quad\text{no negative }i,\] (20)
\[T_{0,-1} = 0\quad:\quad\text{no negative }j,\] (21)
\[g^{s}_{N_{\rm st}} = 0\quad:\quad\text{reflecting boundary},\] (22)
\[g^{c}_{N_{\rm ct}} = 0\quad:\quad\text{reflecting boundary},\] (23)
\[r^{s}_{i,0} = 0\quad:\quad\text{reflecting boundary},\] (24)
\[r^{c}_{0,j} = 0\quad:\quad\text{reflecting boundary}.\] (25)
Eq. (19) reflects that the cluster does not rebind after all its bonds are unbound. Eqs. (20) and (21) express the condition that the number of bound bonds is never negative. Eqs. (22) and (23) ensure that the cluster cannot rebind more bonds than are available, and finally Eqs. (24) and (25) ensure that the rupture rates vanish when no bonds of the corresponding type are bound.
The analytical solution of the system (18) is quite bulky and cannot be expressed in a compact form for each of the \(T_{i,j}\). However, Eq. (18) is straightforwardly solved numerically for a given total number of catch and slip bonds. These solutions are graphed in Fig. 5, where we calculate the lifetime of a cluster consisting of 50 catch bonds and 50 slip bonds for various parameters and confirm the analytical outcome by comparing to stochastic simulations.
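A minimal sketch of this direct solution is the following: the recursion (18) is assembled as a linear system in the \(T_{i,j}\), with the boundary conditions (19)–(25) entering through the vanishing rates and the absorbing state at \((0,0)\). The unbinding laws are once more the illustrative placeholders used in the sketches of the main text, not the parametrization of the actual calculations.

```python
import numpy as np

def mean_unbinding_times(Phi, N_ct, N_st, gamma=1.0):
    k_u_slip = lambda phi: np.exp(phi)
    k_u_catch = lambda phi, phi_c=2.0: np.exp(-(phi - phi_c)) + np.exp(0.3 * (phi - phi_c))
    idx = lambda i, j: i * (N_st + 1) + j
    n = (N_ct + 1) * (N_st + 1)
    A = np.zeros((n, n))
    b = np.ones(n)
    for i in range(N_ct + 1):
        for j in range(N_st + 1):
            k = idx(i, j)
            if i == 0 and j == 0:                  # absorbing state, Eq. (19)
                A[k, k], b[k] = 1.0, 0.0
                continue
            phi_bar = Phi / (i + j)
            rc = i * k_u_catch(phi_bar)             # vanishes for i = 0, Eq. (25)
            rs = j * k_u_slip(phi_bar)              # vanishes for j = 0, Eq. (24)
            gc = gamma * (N_ct - i)                 # reflecting at i = N_ct, Eq. (23)
            gs = gamma * (N_st - j)                 # reflecting at j = N_st, Eq. (22)
            A[k, k] = rc + rs + gc + gs
            if i < N_ct: A[k, idx(i + 1, j)] = -gc
            if j < N_st: A[k, idx(i, j + 1)] = -gs
            if i > 0:    A[k, idx(i - 1, j)] = -rc
            if j > 0:    A[k, idx(i, j - 1)] = -rs
    return np.linalg.solve(A, b).reshape(N_ct + 1, N_st + 1)

T = mean_unbinding_times(Phi=20.0, N_ct=50, N_st=50)
print(T[25, 25])    # T_{25,25}, cf. Fig. 5 (up to the placeholder rate laws)
```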
|
1910.10220 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 69109,
"num_imgs": 8,
"llama3_tokens_count": 20949
} | [
"content_image/1910.10220/x1.png",
"content_image/1910.10220/x3.png",
"content_image/1910.10220/x4.png",
"content_image/1910.10220/x6.png",
"content_image/1910.10220/x9.png",
"content_image/1910.10220/x12.png",
"content_image/1910.10220/x15.png",
"content_image/1910.10220/x18.png"
] | # Quantitative comparison of Anderson impurity solvers applied to transport in quantum dots
Bruno Max de Souza Melo
Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói, RJ, Brazil
Luis G. G. V. Dias da Silva
Instituto de Física, Universidade de São Paulo, C.P. 66318, 05315–970 São Paulo, SP, Brazil
Alexandre Reily Rocha
Instituto de Física Teórica, São Paulo State University (UNESP), São Paulo SP, Brazil
Caio Lewenkopf
Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói, RJ, Brazil
###### Abstract
We study the single impurity Anderson model (SIAM) using the equations of motion method (EOM), the non-crossing approximation (NCA), the one-crossing approximation (OCA), and Wilson’s numerical renormalization group (NRG). We calculate the density of states and the linear conductance, focusing on their dependence on the chemical potential and on the temperature, and paying special attention to the Kondo and Coulomb blockade regimes for a large range of model parameters. We report that some standard approximations based on the EOM technique display an unexpectedly poor behavior in the Coulomb blockade regime, even at high temperatures. Our study offers a critical comparison between the different methods as well as a detailed compilation of the shortcomings and limitations due to the approximations involved in each technique, thus allowing for a cost-benefit analysis of the different solvers that considers both numerical precision and computational performance.
## I Introduction
The single impurity Anderson model (SIAM) is one of the most recurrent models in condensed matter physics, playing an important role in the study of strongly correlated systems. Tsvelick and Wiegmann (1983); Hewson (1997); Georges _et al._ (1996) Originally put forward to describe the physics of doped metals with magnetic impurities Anderson (1961); Krishna-murthy _et al._ (1975, 6, 7), the model has achieved remarkable prominence, being used to study the mixed valence regime in rare earth compounds Newns and Read (1987); Coleman (1984); Hewson (1997) as well as the Coulomb blockade and the Kondo effect in quantum dots. Meir _et al._ (1991); Pustilnik and Glazman (2004); Meir _et al._ (1993); Ng and Lee (1988); Costi _et al._ (1994); Goldhaber-Gordon _et al._ (15, 16); Pustilnik and Glazman (2001); Glazman and Pustilnik (2003) In the context of the dynamical mean field theory (DMFT) Georges _et al._ (1996) the model has gained new traction, contributing to the description of the properties of a huge variety of strongly correlated materials such as transition metal oxides Kugler _et al._ (2019), heavy-fermion systems, high-temperature superconductors, etc. Kotliar _et al._ (2006); Georges _et al._ (1996) In the field of molecular electronics Liang _et al._ (2002); Park _et al._ (2002); Thoss and Evers (2018) the numerical solution of the SIAM is an important step in studying the transport properties of strongly correlated molecular devices through an approach that combines density functional theory (DFT), dynamical mean field theory (DMFT) and the non-equilibrium Green’s function formalism (NEGF). Droghetti and Rungger (2017); Chioncel _et al._ (2015); Appelt _et al._ (2018); David Jacob (2015); Jacob _et al._ (2009, 2010)
The large variety of scenarios in which the model can be applied demands the calculation of spectral densities, thermodynamical quantities like the free energy and entropy as well as transport characteristics. In order to compute this variety of physical quantities, several impurity solvers have been developed for the SIAM over the years such as the numerical renormalization group (NRG) Krishna-murthy _et al._ (7, 6), the equations of motion (EOM) Theumann (1969); Lacroix (1981, 1982), slave boson mean field approximation Coleman (1984), non-crossing (NCA)Bickers (1987) and one-crossing approximations (OCA) Pruschke and Grewe (1989), exact diagonalization method Georges _et al._ (1996), quantum Monte-Carlo algorithms Hirsch and Fye (1986); Kotliar _et al._ (2006), iterative perturbation theory Yamada (1975); Yosida and Yamada (1970, 1975), to name a few.
These impurity solvers differ mainly in the computational cost involved and in the range of model parameters over which each method is reliable. Indeed, the search for different methods is mainly a consequence of the fact that no single method can provide reliable results over the whole physically relevant parameter space Georges _et al._ (1996); Kotliar _et al._ (2006). In many cases it is necessary to employ more than one method to better understand a specific problem Meir _et al._ (1993).
In view of the variety of available methods and of scenarios in which the Anderson model can be applied, the task of benchmarking impurity solvers in each context is very important and far from being exhausted. Choosing a specific solver requires knowing which aspects make it more competitive than the others in a given range of parameters.
In this work we carry out a systematic comparison of the accuracy of the standard impurity solvers in the literature, namely, the equations of motion method Kashcheyevs _et al._ (2006), the non crossing approximation Bickers (1987), the one-crossing approximation Pruschke and Grewe (1989); Haule _et al._ (2010), and the numerical renormalization group (NRG)Bulla _et al._ (2008). We compute the Green’s function of a quantum dot using the different methods and obtain the density of states and the linear conductance of the dot as well as its dependence on the chemical potential and the temperature. We discuss the strengths and weaknesses, as well as the computational efficiency of the above mentioned impurity solvers for a broad range of parameters of physical interest.
This paper is organized as follows. In Sec. II we briefly present the model system and the impurity solvers assessed in this study. In Sec. III we compare their accuracy and numerical efficiency by computing the density of states and the linear conductance as a function of charging energy, chemical potential, temperature, etc., covering all the standard regimes of the model. Finally, in Sec. IV we present a summary of our findings and our conclusions.
## II Model and methods
The model Hamiltonian we use to benchmark the impurity solvers is the single-impurity Anderson model Anderson (1961). We write the SIAM as
\[H=H_{\rm imp}+H_{\rm C}+H_{\rm B}.\] (1)
Here, \(H_{\rm imp}\) reads
\[H_{\rm imp}=\sum_{\sigma}\varepsilon_{\sigma}f^{\dagger}_{\sigma}f_{\sigma}+Un _{\uparrow}n_{\downarrow},\] (2)
where \(f^{\dagger}_{\sigma}\)\((f_{\sigma})\) creates (annihilates) an electron with spin projection \(\sigma\) at the impurity site, \(n_{\sigma}=f^{\dagger}_{\sigma}f_{\sigma}\) is the occupation number operator, and \(U\) is the Coulomb charging energy, the energy cost for double occupancy of the impurity site.
We address a two-probe setup, where the system of interest is connected to electronic reservoirs in thermal and chemical equilibrium by considering two semi-infinite electrodes. The corresponding Hamiltonian reads
\[H_{\text{B}}=\sum_{\alpha,\mathbf{k},\sigma}\varepsilon_{\alpha\mathbf{k} \sigma}c^{\dagger}_{\alpha\mathbf{k}\sigma}c_{\alpha\mathbf{k}\sigma},\] (3)
where \(\alpha=\left\{L,R\right\}\) labels the leads, \(c^{\dagger}_{\alpha\mathbf{k}\sigma}\)\((c_{\alpha\mathbf{k}\sigma})\) creates (annihilates) an electron of wave number \(\bf k\) and spin projection \(\sigma\) at the \(\alpha\) lead, and \(\varepsilon_{\alpha\mathbf{k}\sigma}\) is the corresponding single-particle energy.
Finally,
\[H_{\text{C}}=\sum_{\alpha,\mathbf{k},\sigma}\left(V_{\alpha\mathbf{k}\sigma}c_ {\alpha\mathbf{k}\sigma}^{\dagger}f_{\sigma}+{\rm H.c.}\right),\] (4)
represents the coupling between the impurity and the electrodes, where \(V_{\alpha\mathbf{k}\sigma}\) is the so-called hybridization matrix element.
The nonequilibrium Green’s function theory allows one to write the linear conductance \(\mathcal{G}\) of the system as Meir and Wingreen (1992); Wingreen and Meir (1994)
\[\mathcal{G}=\frac{e^{2}}{h}\sum_{\sigma}\int d\omega\left(-\frac{\partial n_{\rm F}}{\partial\omega}\right)\Gamma_{\sigma}(\omega)\left[-2\,{\rm Im}\,G^{r}_{\sigma}(\omega)\right],\] (5)
where the retarded Green’s function \(G_{\sigma}^{r}(\omega)\) is the Fourier transform of
\[G_{\sigma}^{r}(t)=-i\theta(t)\langle\{f_{\sigma}(t)f_{\sigma}^{\dagger}(0)\}\rangle,\] (6)
\(n_{\rm F}(\omega)=1/(1+e^{\beta\omega})\) is the Fermi distribution, with \(\beta=(k_{B}T)^{-1}\). Throughout this work the chemical potential is set to \(\mu=0\) and \(k_{B}=1\). Finally, \(\Gamma_{\sigma}\) stands for
\[\Gamma_{\sigma}(\omega)=\frac{\Gamma_{\sigma}^{L}(\omega)\Gamma_{\sigma}^{R}( \omega)}{\Gamma_{\sigma}^{L}(\omega)+\Gamma_{\sigma}^{R}(\omega)}.\] (7)
The decay widths \(\Gamma_{\sigma}^{\alpha}(\omega)\) are given by
\[\Gamma^{\alpha}_{\sigma}(\omega)=2\pi\sum_{\mathbf{k}}|V_{\alpha\mathbf{k} \sigma}|^{2}\delta(\omega-\varepsilon_{\alpha\mathbf{k}\sigma})\] (8)
and are proportional to the imaginary part of the embedding self-energy \(\Sigma^{\alpha}_{\sigma}(\omega)\) due to the scattering processes between the impurity and the leads Haug and Jauho (2008)
\[\Sigma^{\alpha}_{\sigma}(\omega)=\sum_{{\bf k}}V_{\alpha\mathbf{k}\sigma}\frac {1}{\omega-\varepsilon_{\alpha\mathbf{k}\sigma}}V_{\alpha\mathbf{k}\sigma}^{*}.\] (9)
As usual, by making the substitution \(\omega\to\omega\pm i\eta\) with \(\eta\to 0^{+}\), one obtains the retarded and advanced counterparts of the Green’s functions and self-energies presented in this paper.
The standard derivation of Eq. (5) relies on assuming proportional coupling, namely, \(\Gamma^{R}(\omega)=\lambda\Gamma^{L}(\omega)\)Meir and Wingreen (1992). It has been recently shown Dias da Silva _et al._ (2017) that, within the linear response regime, Eq. (5) is valid as long as \(\Gamma_{\sigma}(\omega)\) varies slowly on the scale of \(kT\).
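For later reference, evaluating the thermally broadened conductance integral of Eq. (5) is straightforward once \({\rm Im}\,G^{r}_{\sigma}(\omega)\) is known on a frequency grid. The short sketch below assumes energy-independent, proportional couplings and uses a noninteracting resonant level as a stand-in for the interacting Green's function; the result is in units of \(e^{2}/h\) per spin channel.

```python
import numpy as np

def linear_conductance(w, ImGr, Gamma_L, Gamma_R, T):
    """Linear conductance of one spin channel, in units of e^2/h, for
    energy-independent couplings (so Eq. (7) is a single number)."""
    nF = 1.0 / (1.0 + np.exp(np.clip(w / T, -60.0, 60.0)))
    minus_dnF = -np.gradient(nF, w)                        # -dn_F/dw
    Gam = Gamma_L * Gamma_R / (Gamma_L + Gamma_R)          # Eq. (7)
    return np.trapz(minus_dnF * Gam * (-2.0 * ImGr), w)

# Sanity check with a noninteracting resonant level of half-width (Gamma_L+Gamma_R)/2
w = np.linspace(-5.0, 5.0, 20001)
eps0, GL, GR = 0.0, 0.1, 0.1
Delta = 0.5 * (GL + GR)
ImGr = -Delta / ((w - eps0) ** 2 + Delta ** 2)
print(linear_conductance(w, ImGr, GL, GR, T=1e-3))   # ~ 1, i.e., e^2/h per spin
```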
It is sometimes useful to take the wide-band limit and consider \(V_{\alpha\mathbf{k}\sigma}\) to be independent of \(\mathbf{k}\) as well as taking the (non-interacting) density of states of the electrons to be constant within the band-width:
\[\rho^{\alpha}_{\sigma}(\omega)=\sum_{\mathbf{k}}\delta(\omega-\varepsilon_{ \alpha\mathbf{k}\sigma})\equiv\rho^{\alpha}_{\sigma}(0)\mbox{ for }-D\leq \omega\leq D\;,\] (10)
where \(D\) is the half-bandwidth and \(\omega\!=\!0\) is taken to be the Fermi energy in the leads. In this case, \(\Gamma^{\alpha}_{\sigma}\equiv 2\pi|V_{\alpha\sigma}|^{2}\rho^{\alpha}_{\sigma }(0)\) becomes independent of the energy \(\omega\).
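For a single flat-band lead the embedding self-energy of Eqs. (9) and (12) has a simple closed form, which the small helper below evaluates with illustrative parameters; in the \(D\to\infty\) limit it reduces to the energy-independent wide-band result \(\Sigma^{\alpha}_{\sigma}=-i\Gamma^{\alpha}_{\sigma}/2\).

```python
import numpy as np

def lead_self_energy(w, Gamma_alpha, D):
    """Retarded embedding self-energy of one flat-band lead:
    Re Sigma = (Gamma_alpha/2pi) ln|(w+D)/(w-D)|, Im Sigma = -Gamma_alpha/2 inside the band.
    Sigma_0 of Eq. (12) is the sum of this function over the two leads."""
    re = (Gamma_alpha / (2.0 * np.pi)) * np.log((np.abs(w + D) + 1e-12) /
                                                (np.abs(w - D) + 1e-12))
    im = -(Gamma_alpha / 2.0) * (np.abs(w) < D)
    return re + 1j * im

w = np.linspace(-2.0, 2.0, 5)
print(lead_self_energy(w, Gamma_alpha=0.2, D=100.0))   # ~ -0.1j: wide-band limit
```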
Next we briefly review the methods we use to calculate the Green’s functions \(G_{\sigma}^{r}\), namely, equations of motion (EOM) Haug and Jauho (2008), slave-boson approximations (SBA) Bickers (1987); Pruschke and Grewe (1989), and numerical renormalization group (NRG) Bulla _et al._ (2008). Since most of this material is well known, we present only the elements necessary for the discussion of the results that follow, referring the interested reader to the original literature.
### Equations of motion (EOM) method
The most straightforward technique to calculate equilibrium and nonequilibrium Green’s functions is the equations of motion method Haug and Jauho (2008). This technique consists in taking time-derivatives of the Green’s functions to generate a set of coupled equations of motion. For bilinear Hamiltonians the system of equations can be solved exactly Haug and Jauho (2008); Hernández _et al._ (2007). For many-body Hamiltonians the hierarchy of equations is infinite and it is necessary to truncate the equations by means of physical arguments, yielding different approximate solutions to the problem.
The simplest EOM approximation for the SIAM is to obtain the exact Green’s function of the uncoupled impurity Hamiltonian and to include an embedding self-energy by hand Haug and Jauho (2008), namely
\[G_{\sigma}(\omega)=\frac{1-\langle n_{\bar{\sigma}}\rangle}{\omega-\varepsilon _{\sigma}-\Sigma_{0}(\omega)}+\frac{\langle n_{\bar{\sigma}}\rangle}{\omega- \varepsilon_{\sigma}-U-\Sigma_{0}(\omega)},\] (11)
where
\[\Sigma_{0}(\omega)=\Sigma^{L}_{\sigma}\left(\omega\right)+\Sigma^{R}_{\sigma} \left(\omega\right)=\sum_{\alpha\mathbf{k}}\frac{\left|V_{\alpha\mathbf{k}} \right|^{2}}{\omega-\epsilon_{\alpha\mathbf{k}}},\] (12)
and the occupation is
\[\langle n_{\bar{\sigma}}\rangle=\int_{-\infty}^{\infty}d\omega n_{F}(\omega) \rho_{\sigma}({\omega}),\] (13)
with the impurity density of states given by
\[\rho_{\sigma}({\omega})=-\frac{1}{\pi}\text{Im}[G_{\sigma}^{r}(\omega)]~{}.\] (14)
Equations (11) and (13) are solved self-consistently. This approach, hereafter called EOM0, is frequently regarded as a good approximation to the SIAM in the weak coupling regime Haug and Jauho (2008).
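A complete EOM0 calculation therefore amounts to iterating Eqs. (11), (13) and (14). The minimal sketch below does so in the wide-band limit, where \(\Sigma_{0}=-i(\Gamma^{L}+\Gamma^{R})/2\), for a paramagnetic solution and illustrative parameters.

```python
import numpy as np

Gamma, U, eps, T = 0.2, 2.0, -0.5, 0.05        # Gamma = Gamma_L + Gamma_R (wide band)
w = np.linspace(-8.0, 8.0, 8001)
nF = 1.0 / (1.0 + np.exp(np.clip(w / T, -60.0, 60.0)))
Sigma0 = -0.5j * Gamma * np.ones_like(w)

n_bar = 0.5                                     # paramagnetic: <n_up> = <n_down>
for _ in range(500):
    G = (1.0 - n_bar) / (w - eps - Sigma0) + n_bar / (w - eps - U - Sigma0)   # Eq. (11)
    rho = -G.imag / np.pi                                                      # Eq. (14)
    n_new = np.trapz(nF * rho, w)                                              # Eq. (13)
    if abs(n_new - n_bar) < 1e-10:
        break
    n_bar = 0.5 * (n_bar + n_new)               # linear mixing for stability

print(n_bar, np.trapz(rho, w))                  # occupation and total spectral weight (~1)
```

Feeding the resulting \(\rho_{\sigma}(\omega)\) and \(\Gamma_{\sigma}\) into the conductance quadrature sketched earlier then gives \(\mathcal{G}(T)\).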
A more consistent approach is to write the EOM for the impurity Green’s function, Eq. (6), using the full Hamiltonian. The simplest truncation of the resulting hierarchy of coupled equations is to consider the two-particle Green’s functions at the Hartree-Fock level Haug and Jauho (2008). This approximation, that we call EOM1, is formally equivalent to the Hubbard I approximation Hubbard (1963); Gebhard (1997). The resulting Green’s function is
\[G_{\sigma}(\omega)=\frac{\omega-\varepsilon_{\sigma}-U(1-\langle n_{\bar{ \sigma}}\rangle)}{(\omega-\varepsilon_{\sigma})(\omega-\varepsilon_{\sigma}-U) -\Sigma_{0}(\omega)\left[\omega-\varepsilon_{\sigma}-U(1-\langle n_{\bar{ \sigma}}\rangle)\right]}.\] (15)
It is often stated that Eq. (11) is a good approximation to (15) Haug and Jauho (2008). As we discuss in the next section, the agreement is at most qualitative.
The next level of complexity is to neglect spin correlations in the two-particle Green’s functions Meir _et al._ (1991) to write
\[G_{\sigma}(\omega)=\frac{\omega-\varepsilon_{\sigma}-U(1-\langle n_{\bar{\sigma}}\rangle)-\Sigma_{0}(\omega)-\Sigma_{3}(\omega)}{[\omega-\varepsilon_{\sigma}-\Sigma_{0}(\omega)][\omega-\varepsilon_{\sigma}-U(1-\langle n_{\bar{\sigma}}\rangle)-\Sigma_{0}(\omega)-\Sigma_{3}(\omega)]+U\Sigma_{1}(\omega)},\] (16)
where the self-energies \(\Sigma_{1,3}\) are given by Meir _et al._ (1991):
\[\Sigma_{i}(\omega)=\sum_{\mathbf{q}\beta}A^{(i)}_{\mathbf{q}\beta \sigma}|V_{\mathbf{q}\beta\sigma}|^{2} \Bigg{(}\frac{1}{\omega+\varepsilon_{\mathbf{q}\beta\sigma}- \varepsilon_{\overline{\sigma}}-\varepsilon_{\sigma}-U}+\frac{1}{\omega- \varepsilon_{\mathbf{q}\beta\sigma}+\varepsilon_{\overline{\sigma}}- \varepsilon_{\sigma}}\Bigg{)}\] (17)
with \(A^{(1)}_{\mathbf{q}\beta\sigma}=n_{\rm F}(\varepsilon_{\mathbf{q}\beta\sigma})\) and \(A^{(3)}_{\mathbf{q}\beta\sigma}=1\). We call this truncation scheme EOM2. We note that it is formally equivalent to the Hubbard III approximation Hubbard (1964).
A more sophisticated truncation scheme that includes spin correlations has been introduced in Ref. Kashcheyevs _et al._, 2006. In the absence of a magnetic field, the impurity Green’s function reads
\[G_{\sigma}(\omega)=\frac{u(\omega)-\langle n_{\bar{\sigma}}\rangle-P_{\bar{ \sigma}}(\omega)-P_{\bar{\sigma}}(\omega_{2})}{u(\omega)[\omega-\epsilon_{\bar {\sigma}}-\Sigma_{0}(\omega)]+[P_{\bar{\sigma}}(\omega)+P_{\bar{\sigma}}( \omega_{2})]\Sigma_{0}(\omega)-Q_{\bar{\sigma}}(\omega)+Q_{\bar{\sigma}}( \omega_{2})}~{},\] (18)
where \(u(\omega)\equiv U^{-1}[U-\omega+\epsilon_{\sigma}+2\Sigma_{0}(\omega)-\Sigma_{ 0}(\omega_{2})]\) and \(\omega_{2}=-\omega+\epsilon_{\sigma}+\varepsilon_{\bar{\sigma}}+U\). The functions \(P_{\sigma}(\omega)\) and \(Q_{\sigma}(\omega)\) are given by:
\[P_{\sigma}(\omega)\equiv \frac{i}{2\pi}\oint_{C}n_{\rm F}(z)G_{\sigma}(z)\frac{\Sigma_{0}( z)-\Sigma_{0}(\omega)}{\omega-z}dz\]
\[Q_{\sigma}(\omega)\equiv \frac{i}{2\pi}\oint_{C}n_{\rm F}(z)[1+\Sigma_{0}(z)G_{\sigma}(z)] \frac{\Sigma_{0}(z)-\Sigma_{0}(\omega)}{\omega-z}dz\,.\] (19)
In this approximation, called hereafter EOM3, \(G_{\sigma}(\omega)\) is obtained by solving equations (18), (19), and (13) self-consistently.
Let us briefly address the particular case of particle-hole symmetry in which \(\varepsilon_{0}=-U/2\) and \(\langle n_{\bar{\sigma}}\rangle=1/2\) for the approximation schemes presented above. In this case, the EOM0 Green’s function, Eq. (11), reduces to
\[[G^{\rm sym}_{\sigma}(\omega)]^{-1}=\omega-\Sigma_{0}(\omega)-\frac{U^{2}}{4[ \omega-\Sigma_{0}(\omega)]}~{},\] (20)
Interestingly, the EOM1 approximation, Eq. (15), simplifies to
\[[G^{\rm sym}_{\sigma}(\omega)]^{-1}=\omega-\Sigma_{0}(\omega)-\frac{U^{2}}{4 \omega}~{},\] (21)
Finally, the EOM2 and EOM3 Green’s functions, Eqs. (16) and (18), respectively, coincide and can be written as
\[[G^{\rm sym}_{\sigma}(\omega)]^{-1}=\omega-\Sigma_{0}(\omega)-\frac{U^{2}}{4[ \omega-3\Sigma_{0}(\omega)]}~{}.\] (22)
It is surprising and somewhat unexpected that the EOM0 result is closer to the Green’s function of the more involved EOM2 and EOM3 schemes than the EOM1 one since Eq. (11) is frequently regarded as an approximation to the exact form in Eq. (15).
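The relative closeness of Eqs. (20) and (22) noted above is easy to check numerically: the sketch below evaluates the three symmetric-point spectral functions in the wide-band limit, writing \(\Sigma_{0}=-i\Delta\) with \(\Delta=(\Gamma^{L}+\Gamma^{R})/2\) and illustrative parameters.

```python
import numpy as np

U, Delta = 2.0, 0.1
w = np.linspace(-3.0, 3.0, 6000)               # even count avoids w = 0 exactly
S0 = -1j * Delta                               # wide-band Sigma_0

G_eom0 = 1.0 / (w - S0 - U**2 / (4.0 * (w - S0)))           # Eq. (20)
G_eom1 = 1.0 / (w - S0 - U**2 / (4.0 * w))                   # Eq. (21)
G_eom23 = 1.0 / (w - S0 - U**2 / (4.0 * (w - 3.0 * S0)))     # Eq. (22)

rho = lambda G: -G.imag / np.pi
dev = lambda Ga, Gb: np.trapz(np.abs(rho(Ga) - rho(Gb)), w)
# integrated deviation from the EOM2/EOM3 result: EOM0 lies closer than EOM1
print(dev(G_eom0, G_eom23), dev(G_eom1, G_eom23))
```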
Clearly, these Green’s functions are temperature independent and hence the EOM solutions cannot describe Kondo physics at the particle-hole symmetric point Theumann (1969); Kashcheyevs _et al._ (2006).
### Slave boson approximation (SBA)
Let us now discuss the slave-boson method for solving the SIAM Hamiltonian Coleman (1984) in both the non-crossing (NCA) Pruschke and Grewe (1989); Gerace _et al._ (2002); Aguado and Langreth (2003) and the one-crossing approximations (OCA) Haule _et al._ (2001); Sposetti _et al._ (2016).
In some contexts, the interaction part of the SIAM is one of the largest energy scales of the system. Hence, a perturbation expansion in \(U\) may be problematic. Alternatively, noting that the hybridization terms are small compared to \(U\) and to the kinetic part of the Hamiltonian Bickers (1987), one can treat them as the perturbation. However, we cannot use the standard machinery of perturbation theory, since the SIAM Hamiltonian then entails an unperturbed term that is not bilinear and, in this case, Wick’s theorem does not apply Barnes (1976).
It is possible to circumvent this problem by employing another representation for the impurity operators, the so-called slave boson approximation (SBA). In the SBA one writes the impurity operator as Sposetti _et al._ (2016); Barnes (1976, 1977)
\[f^{\dagger}_{\sigma}=s_{\sigma}^{\dagger}b+d^{\dagger}_{\bar{\sigma}\sigma}s_{ \bar{\sigma}}~{},\] (23)
where \(b^{\dagger}\), \(s_{\sigma}^{\dagger}\) and \(d^{\dagger}_{\bar{\sigma}\sigma}\) are auxiliary or pseudoparticle (PP) operators which create, over the vacuum \(|vac\rangle\), the states \(|0\rangle\), \(|\sigma\rangle\) and \(|\bar{\sigma}\sigma\rangle\), respectively. \(b^{\dagger}\) and \(d^{\dagger}_{\bar{\sigma}\sigma}\) are bosonic operators, while \(s_{\sigma}^{\dagger}\) is a fermionic operator.
In this representation the SIAM Hamiltonian reads
\[H= \sum_{\mathbf{k}\sigma}\varepsilon_{\mathbf{k}\sigma}c_{\mathbf{k }\sigma}^{\dagger}c_{\mathbf{k}\sigma}+\sum_{\sigma}\varepsilon_{0}s^{\dagger} _{\sigma}s_{\sigma}+(2\varepsilon_{0}+U)d^{\dagger}_{\bar{\sigma}\sigma}d_{ \bar{\sigma}\sigma}+\]
\[+\sum_{\mathbf{k}\sigma}\left(V_{\mathbf{k}\sigma}s^{\dagger}_{ \sigma}b\;c_{\mathbf{k}\sigma}+{\rm H.c.}\right)\]
\[+\sum_{\mathbf{k}\sigma}\left(V_{\mathbf{k}}d^{\dagger}_{\bar{ \sigma}\sigma}s_{\bar{\sigma}}c_{\mathbf{k}\sigma}+{\rm H.c.}\right)~{}.\] (24)
Here, for simplicity, we consider the case where \(\varepsilon_{0}=\varepsilon_{\uparrow}=\varepsilon_{\downarrow}\). The subspace of impurity states is formed by four states, namely, \(|0\rangle\), \(\mid\uparrow\rangle\), \(\mid\downarrow\rangle\), and \(\mid\uparrow\downarrow\rangle\) representing an empty, singly-occupied with spin up/down, and doubly-occupied impurity states, respectively. This implies that, at any given time Aguado and Langreth (2003); Haule _et al._ (2001, 2010); Jacob _et al._ (2010)
\[Q\equiv b^{\dagger}b+\sum_{\sigma}s_{\sigma}^{\dagger}s_{\sigma}+d_{\bar{\sigma}\sigma}^{\dagger}d_{\bar{\sigma}\sigma}=1~{},\] (25)
where \(Q\) is called pseudo-particle charge. The constraint \(Q=1\) has to be enforced in the calculation of the expectation value of any given observable \(\langle O\rangle\). In practice, one first calculates an unrestricted expectation value, for instance, using diagrammatic techniques in the grand canonical ensemble to obtain \(\langle O\rangle_{G}\). In this ensemble a chemical potential \(-\lambda\) is associated to the pseudo-particle charge \(Q\). Next, one projects the result into the \(Q=1\) canonical subspace to find \(\langle O\rangle_{C}\) using the so-called Abrikosov’s trick, namely Coleman (1984); Kroha and Wölfle (1998); Abrikosov (1965); Hewson (1997); Wingreen and Meir (1994)
\[\langle O\rangle_{C}=\lim_{\lambda\to\infty}\frac{\langle O\rangle_{G}}{ \langle Q\rangle_{G}}.\] (26)
The impurity Green’s function is written in terms of the pseudo-particle Green’s functions Kroha and Wölfle (1998)
\[G_{b}(\omega) =[\omega-\lambda-\Sigma_{b}(\omega)]^{-1}\]
\[G_{s_{\sigma}}(\omega) =[\omega-\lambda-\varepsilon_{0}-\Sigma_{s_{\sigma}}(\omega)]^{-1}\]
\[G_{d}(\omega) =[\omega-\lambda-2\varepsilon_{0}-U-\Sigma_{d}(\omega)]^{-1},\] (27)
whose self-energies are obtained by analytic continuation and projection into the \(Q=1\) subspace, namely Sposetti _et al._ (2016)
\[\begin{split}\Sigma_{b}(\omega)&=\int_{ -\infty}^{\infty}\frac{d\epsilon}{\pi}n_{\rm F}(\epsilon)\sum_{\sigma}\Delta_{ \sigma}(\epsilon)G_{s_{\sigma}}(\epsilon+\omega)\Lambda_{\sigma}^{(0)}(\omega, \epsilon),\\ \Sigma_{s_{\sigma}}(\omega)&=\int_{-\infty}^{\infty} \frac{d\epsilon}{\pi}n_{\rm F}(\epsilon)\Big{[}\Delta_{\sigma}(-\epsilon)G_{b} (\epsilon+\omega)\times\\ &\times\Lambda_{\sigma}^{(0)}(\epsilon+\omega,-\epsilon)+\Delta_{ \bar{\sigma}}(\epsilon)G_{d_{\sigma\bar{\sigma}}}(\epsilon+\omega)\times\\ &\times\Lambda_{\sigma\bar{\sigma}}^{(2)}(\epsilon+\omega, \epsilon)\Big{]},\\ \Sigma_{d_{\sigma\bar{\sigma}}}(\omega)&=\int_{- \infty}^{\infty}\frac{d\epsilon}{\pi}n_{\rm F}(\epsilon)\left[\Delta_{\sigma}( -\epsilon)G_{s_{\bar{\sigma}}}(\epsilon+\omega)\Lambda_{\sigma\bar{\sigma}}^{( 2)}(\omega,-\epsilon)+\right.\\ &+\left.\Delta_{\bar{\sigma}}(-\epsilon)G_{s_{\sigma}}(\epsilon+ \omega)\Lambda_{\bar{\sigma}\sigma}^{(2)}(\omega,-\epsilon)\right],\\ \end{split}\] (28)
where \(\Delta_{\sigma}(\varepsilon)\equiv\sum_{\alpha}\Gamma_{\sigma}^{\alpha}(\varepsilon)/2\). Equations (27) and (28) are solved self-consistently.
In the simplest diagrammatic perturbation theory approximation, which neglects vertex corrections, \(\Lambda_{\bar{\sigma}\sigma}^{(2)}(\omega,\omega^{\prime})=\Lambda_{\sigma}^{( 0)}(\omega,\omega^{\prime})=1\). This simplification corresponds to the so-called non-crossing approximation (NCA) Bickers (1987). By including higher order diagrams, one obtains
\[\begin{split}\Lambda_{\sigma}^{(0)}(\omega,\omega^{ \prime})&=1+\int_{-\infty}^{\infty}\frac{d\epsilon}{\pi}n_{\rm F} (\epsilon)\Delta_{\bar{\sigma}}(\epsilon)G_{s_{\bar{\sigma}}}(\omega+\epsilon) \\ &~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\times G_{ d_{\bar{\sigma}}\sigma}(\omega+\omega^{\prime}+\epsilon)\\ \Lambda_{\sigma\bar{\sigma}}^{(2)}(\omega,\omega^{\prime})& =1+\int_{-\infty}^{\infty}\frac{d\epsilon}{\pi}n_{\rm F}(- \epsilon)\Delta_{\sigma}(\epsilon)\times\\ &~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\times G_{ s_{\bar{\sigma}}}(\omega-\epsilon)G_{b}(\omega-\omega^{\prime}-\epsilon).\end{split}\] (29)
which contain vertex corrections at the order of one-crossing approximation (OCA) Haule _et al._ (2001); Sposetti _et al._ (2016).
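To make the structure of this self-consistency explicit, the sketch below iterates the retarded pseudo-particle propagators of Eqs. (27) and (28) in the NCA limit (\(\Lambda^{(0)}=\Lambda^{(2)}=1\)), assuming a flat, spin-independent hybridization \(\Delta(\epsilon)=\Gamma/2\) inside a band of half-width \(D\) and setting \(\lambda=0\), which only shifts the pseudo-particle frequencies. The projected physical spectral function of Eq. (30) is deliberately left out: it additionally requires the \(\lambda_{0}\) shift and the careful handling of the Boltzmann factors discussed further below. Parameters are illustrative and the temperature is kept moderate so that plain mixing converges.

```python
import numpy as np

Gamma, U, eps0, T, D = 0.3, 4.0, -2.0, 0.5, 10.0   # illustrative parameters
N = 2001
w = np.linspace(-20.0, 20.0, N)
dw = w[1] - w[0]
iz = N // 2                                         # index of w = 0
nF = 1.0 / (1.0 + np.exp(np.clip(w / T, -60.0, 60.0)))
Delta = 0.5 * Gamma * (np.abs(w) < D)               # flat Delta(e); note Delta(-e) = Delta(e)

def conv(kernel, G):
    """(1/pi) * int de kernel(e) G(e + w), evaluated on the uniform grid
    (G is taken to vanish outside the grid)."""
    Gpad = np.zeros(3 * N, dtype=complex)
    Gpad[N:2 * N] = G
    out = np.empty(N, dtype=complex)
    for n in range(N):
        out[n] = np.sum(kernel * Gpad[N - iz + n:2 * N - iz + n]) * dw / np.pi
    return out

eta = 1e-2                                          # small regularizing broadening
Sb = np.zeros(N, complex); Ss = np.zeros(N, complex); Sd = np.zeros(N, complex)
for it in range(80):
    Gb = 1.0 / (w + 1j * eta - Sb)                  # Eqs. (27) with lambda = 0
    Gs = 1.0 / (w + 1j * eta - eps0 - Ss)
    Gd = 1.0 / (w + 1j * eta - 2 * eps0 - U - Sd)
    # Eqs. (28) with Lambda = 1; the two spin projections are degenerate here
    Sb_new = 2.0 * conv(nF * Delta, Gs)
    Ss_new = conv(nF * Delta, Gb) + conv(nF * Delta, Gd)
    Sd_new = 2.0 * conv(nF * Delta, Gs)
    err = max(np.abs(Sb_new - Sb).max(),
              np.abs(Ss_new - Ss).max(), np.abs(Sd_new - Sd).max())
    Sb = 0.5 * (Sb + Sb_new)                        # linear mixing
    Ss = 0.5 * (Ss + Ss_new)
    Sd = 0.5 * (Sd + Sd_new)
    if err < 1e-8:
        break
print(it, err)
```

The OCA corrections enter through the vertex functions of Eq. (29), which turn each convolution above into a double integral.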
The impurity density of states, Eq. (14), is given in terms of the pseudo-particle spectral functions, namely Sposetti _et al._ (2016); Haule _et al._ (2001); Pruschke and Grewe (1989)
\[\rho_{\sigma}(\omega)=\frac{Z_{C}(0)}{Z_{C}(1)}\int_{-\infty}^{\infty}d\epsilon~{}e^{-\beta\epsilon}\Big{[}A_{b}(\epsilon)A_{s_{\sigma}}(\epsilon+\omega)+A_{s_{\sigma}}(\epsilon)A_{d}(\epsilon+\omega)\Big{]}\] (30)
where \(Z_{C}(1)\) is the canonical partition function for a system with one impurity and \(Z_{C}(0)\) is the canonical partition function of the conduction electrons (no impurity). The ratio \(Z_{C}(1)/Z_{C}(0)\), which appears from the projection to the \(Q=1\) subspace, can be written as
\[\frac{Z_{C}(1)}{Z_{C}(0)}=\int_{-\infty}^{\infty}d\epsilon\,e^{-\beta\epsilon} \Big{[}A_{b}(\epsilon)+\sum_{\sigma}A_{s_{\sigma}}(\epsilon)+A_{d}(\epsilon) \Big{]}.\] (31)
These elements allow us to compute the conductance \(\mathcal{G}\) given by Eq. (5).
The energy integrals in Eqs. (30) and (31) have to be evaluated carefully because of the exponential divergences of the statistical factors appearing in both expressions. Refs. Hettler _et al._, 1998; Costi _et al._, 1996 discussed how to deal with large negative values of \(\beta\varepsilon\) in the integrand. The exponential factor is compensated by the threshold behavior of the auxiliary spectral functions, widely reported in the literature. Hettler _et al._ (1998); Costi _et al._ (1996); Kroha and Wölfle (1998) The singular threshold structure poses serious difficulties for the convergence of the NCA and OCA equations Hettler _et al._ (1998). The trick to achieve convergence exploits the fact that these equations are invariant under a frequency-argument shift of the PP spectral functions, namely, \(\omega\to\omega+\lambda_{0}\). Hettler _et al._ (1998); Costi _et al._ (1996); Kroha and Wölfle (1998) In the implementation used here Haule _et al._ (2010), \(\lambda_{0}\) can be determined in two ways: (i) \(\lambda_{0}\) is fixed and \(Q\) is calculated, and (ii) \(\lambda_{0}\) is calculated and \(Q\) is fixed. In both methods, the parameter \(\lambda_{0}\) shifts the peak of the main PP to \(\omega=0\). The \(\lambda_{0}\) calculation, combined with a non-uniform frequency mesh with a high density of points around \(\omega=0\), greatly improves the convergence of the equations. However, it is worth emphasizing that the sharp peaks in the PP spectral functions can still lead to instabilities in the calculation, in particular for very low temperatures and away from the particle-hole symmetric point.
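A simple way to generate such a mesh, not the one of the implementation cited above but sufficient for illustration, is to map a uniform auxiliary grid through a tangent function, which concentrates points around \(\omega=0\) while still reaching large frequencies:

```python
import numpy as np

def tan_mesh(n_points, w_max, w_0):
    """Frequency mesh dense near w = 0: the spacing at the origin is roughly
    2*w_0*arctan(w_max/w_0)/n_points and grows towards |w| = w_max."""
    x_max = np.arctan(w_max / w_0)
    x = np.linspace(-x_max, x_max, n_points)
    return w_0 * np.tan(x)

w = tan_mesh(801, w_max=100.0, w_0=0.05)
print(w[399:402])        # fine spacing around w = 0
print(w[-1] - w[-2])     # coarse spacing at the outer edge
```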
Despite its success in capturing important low-temperature features of the SIAM, the SBA has several known problems. Tosi _et al._ (2011); Grewe _et al._ (2008); Anders (1995); Schmitt _et al._ (2009); Vildosola _et al._ (2015) The NCA, for instance, fails to give the correct Kondo scale \(T_{K}\) at finite \(U\)Haule _et al._ (2001). The inclusion of vertex corrections solves this pathology to some extent. Vildosola _et al._ (2015); Grewe _et al._ (2008) However, for sufficiently low temperatures (\(T\lesssim 0.1T_{K}\)) the height of the Kondo resonance of the OCA spectral functions slightly overestimates the limit imposed by Friedel’s sum rule. Haule _et al._ (2010); Tosi _et al._ (2011); Schmitt _et al._ (2009); Vildosola _et al._ (2015) As pointed out in Ref. Rüegg _et al._, 2013, the OCA also violates the sum rules for the coefficients of the high-frequency expansion of the electronic self-energy, especially in the case of multi-orbital Anderson models.
### Numerical renormalization group (NRG)
Last, we apply Wilson’s numerical renormalization group (NRG) method Bulla _et al._ (2008) to the SIAM in order to calculate the impurity spectral function \(\rho_{\sigma}(\omega)\) as well as the conductance \(\mathcal{G}\), given by Eq. (5).
We can summarize the main ingredients of the NRG scheme as follows. The first step (i) is to perform a logarithmic discretization (in energy) of the non-interacting conduction electron Hamiltonian [Eq. (3)] by considering discrete energy intervals \(D\Lambda^{-(m+1)}\leq\omega\leq D\Lambda^{-m}\) where \(\Lambda>1\) is a discretization parameter. Step (ii) involves mapping the impurity+discretized band to an effective 1D tight-binding chain (usually dubbed the “Wilson chain”), whose first site is coupled to the interacting impurity. Due to the logarithmic discretization in the original band, the mapping produces couplings \(t_{n}\) between sites \(n\) and \(n+1\) which decay as \(t_{n}\sim\Lambda^{-n/2}\). This feature also defines a characteristic energy scale for a given Wilson chain length \(N\) as \(D_{N}=\frac{1}{2}\left(1+\Lambda^{-1}\right)\Lambda^{-(N-1)/2}D\) where \(D\) is the half-bandwidth of the metallic band appearing in Eq. (10).
Finally, step (iii) amounts to an iterative numerical diagonalization of the resulting Hamiltonian \(H(N)\) of the impurity plus a Wilson chain of length \(N\), and the subsequent mapping to a system with one extra site \(H(N+1)\), given by the renormalization condition:
\[H(N+1)=\sqrt{\Lambda}H(N)+\xi_{N}\sum_{\sigma}(f^{\dagger}_{N+1,\sigma}f_{N, \sigma}+f^{\dagger}_{N,\sigma}f_{N+1,\sigma}),\] (32)
where \(f^{\dagger}_{N,\sigma}\) creates an electron with spin \(\sigma\) on site \(N\) of the Wilson chain (sometimes referred to as “shell \(N\)”) and \(\xi_{N}\propto t_{N}\Lambda^{N/2}\sim 1\) is the renormalized coupling. The final approximation is to use the 1000–2000 lowest-energy states of \(H(N)\) (the “kept” states) to generate a basis for \(H(N+1)\). The process is then repeated for \(N+2,N+3,\cdots,N_{\rm max}\) until the desired lowest energy scale \(D_{N_{\rm max}}\) is achieved.
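To make the separation of energy scales explicit, the sketch below evaluates the characteristic scale \(D_{N}\) defined above together with the asymptotic decay \(t_{n}\sim\Lambda^{-n/2}\) of the Wilson-chain couplings. The numerical values (\(\Lambda=2.5\), \(D=1\)) are illustrative choices and not parameters of the calculations reported here.

```python
import numpy as np

def wilson_scale(N, Lam=2.5, D=1.0):
    """Characteristic energy scale of an N-site Wilson chain,
    D_N = (1/2)(1 + Lambda^{-1}) Lambda^{-(N-1)/2} D."""
    return 0.5 * (1.0 + 1.0 / Lam) * Lam ** (-(N - 1) / 2.0) * D

Lam, D = 2.5, 1.0
for N in (1, 10, 20, 40, 60):
    print(f"N = {N:2d}:  D_N = {wilson_scale(N, Lam, D):.3e},  "
          f"t_N ~ Lambda^(-N/2) = {Lam ** (-N / 2.0):.3e}")
```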
The key result out of the NRG algorithm is the calculation of the (many-body) energy spectrum \(\{|r\rangle_{N}\}\) for each \(H(N)\) in the form \(H(N)|r\rangle_{N}\!=\!E^{N}_{r}|r\rangle_{N}\) along with matrix elements \(\langle r|\hat{\mathcal{O}}|r^{\prime}\rangle_{N}\) for local operators \(\hat{\mathcal{O}}\) within the approximations involved (logarithmic discretization of the conduction band and truncation of the spectrum at each \(N\)).
#### ii.3.1 Impurity spectral density
With the many-body spectrum at hand, one can formally write the leading contribution of the NRG spectrum to the impurity spectral function at a frequency \(\omega\sim D_{N}\) in the Lehmann representation as
\[\rho^{N}_{\sigma}(\omega)\!=\!\sum_{r,r^{\prime}}|\mathcal{A}^{N}_{rr^{\prime} }|^{2}\frac{e^{-\beta E^{N}_{r}}+e^{-\beta E^{N}_{r^{\prime}}}}{Z_{N}}\delta( \omega-E^{N}_{r^{\prime}}+E^{N}_{r})\;,\] (33)
where \(Z_{N}=\sum_{r}e^{-\beta E^{N}_{r}}\) and \(\mathcal{A}^{N}_{rr^{\prime}}=\langle r|f_{\sigma}|r^{\prime}\rangle_{N}\) is the matrix element of the impurity operator \(f_{\sigma}\).
As it stands, Eq. (33) yields a sum of delta functions centered at the excitation energies \(\Delta E^{N}_{r,r^{\prime}}\equiv E^{N}_{r^{\prime}}-E^{N}_{r}\) in the spectrum for \(N\)-sites in the Wilson chain. In order to obtain the continuous spectral function \(\rho_{\sigma}(\omega)\) at energy \(\omega\), some additional approximations are needed. First, the delta functions in Eq. (33) need to be broadened, a procedure which might introduce overbroadening errors Žitko (2011). In order to minimize such errors, Gaussian or log-Gaussian kernels with small broadening widths are typically used Bulla _et al._ (2008); Peters _et al._ (2006); Weichselbaum and Von Delft (2007).
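The broadening step can be illustrated with the following minimal Python sketch, which replaces a set of delta-function contributions [positions and weights as in Eq. (33)] by normalized kernels and sums them. For simplicity a plain Gaussian kernel of fixed width is used here, whereas production codes typically employ log-Gaussian kernels; the peak positions and weights below are made-up placeholder values.

```python
import numpy as np

def broaden(omega, peak_pos, peak_weight, width=0.02):
    """Replace each delta(omega - E) by a normalized Gaussian of the given
    width and sum the weighted contributions (schematic stand-in for the
    log-Gaussian kernels used in practice)."""
    omega = np.asarray(omega, dtype=float)[:, None]      # (n_omega, 1)
    E = np.asarray(peak_pos, dtype=float)[None, :]       # (1, n_peaks)
    w = np.asarray(peak_weight, dtype=float)[None, :]
    kernel = np.exp(-((omega - E) / width) ** 2) / (width * np.sqrt(np.pi))
    return (w * kernel).sum(axis=1)

# Made-up excitation energies and weights mimicking two Hubbard bands
# plus a narrow resonance at the Fermi level
peaks, weights = [-0.5, 0.0, 0.5], [0.45, 0.10, 0.45]
omega = np.linspace(-1.0, 1.0, 2001)
rho = broaden(omega, peaks, weights)
print("integrated weight ~", rho.sum() * (omega[1] - omega[0]))  # ~ sum(weights)
```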
The second approximation is how to collect the spectral contributions from different shells into the spectral function at a given energy \(\omega\). Early NRG calculations Costi _et al._ (1994) employed the so-called “single-shell approximation”, which amounts to taking a single contribution from each shell \(N\) in the form \(\rho_{\sigma}(\omega\!\approx\!D_{N})\approx\rho^{N}_{\sigma}(\omega\!=\!a(\Lambda)D_{N})\), where \(a(\Lambda)\sim 2-3\) is a \(\Lambda\)-dependent numerical prefactor. Modern NRG implementations Peters _et al._ (2006); Weichselbaum and Von Delft (2007) use contributions from several shells, weighted by the reduced density matrix at iteration \(N\) and separating contributions from states “kept” and “discarded” during the truncation process at each \(N\) [the “complete Fock space” (CFS) approach], in order to avoid double counting between contributions of different shells.
In this work, we employ the “full-density-matrix NRG” (FDM-NRG) approach Weichselbaum and Von Delft (2007) to calculate the spectral functions at zero and finite temperatures. For zero-temperature calculations, the FDM-NRG and CFS approaches are equivalent Peters _et al._ (2006); Weichselbaum and Von Delft (2007). For finite temperatures, FDM-NRG is usually the method of choice, with a caveat: as is true for all NRG implementations, the energy resolution of the spectral function is limited for \(\omega\ll T\) due to the appearance of spurious peaks related to errors introduced by the logarithmic discretization procedure Žitko (2011). As such, even the FDM-NRG procedure with “z-trick averaging” Yoshida _et al._ (1990) produces spurious peaks for \(\omega\ll T\). Thus, good-quality data are typically limited to energies \(\omega\gtrsim T\).
#### ii.3.2 Conductance
In order to obtain the conductance from NRG data, we can combine Eqs. (5) and (33) and use the delta functions to perform the integrals in energy for each shell. As a result, we obtain the contribution to the conductance from shell \(N\) as: Zawadzki and Oliveira (2018)
\[g_{N}(T)=\frac{\pi\beta}{Z_{N}}\sum_{r,r^{\prime}}\frac{|\mathcal{A}^{N}_{rr^{ \prime}}|^{2}}{e^{\beta E^{N}_{r}}+e^{\beta E^{N}_{r^{\prime}}}}\;,\] (34)
and
\[{\mathcal{G}}(T)=\frac{2e^{2}}{h}\Gamma\sum_{N}g_{N}(T)\;.\] (35)
Within the NRG approximations, Eq. (35) gives \({\mathcal{G}}(T)\) down to arbitrarily low temperatures of order \(T\sim D_{N_{\rm max}}\), fully describing the regime \(T\ll T_{K}\), where \(T_{K}\) is the Kondo temperature. Notice that no broadening of the delta functions is necessary when using Eqs. (34) and (35). For both the spectral function and conductance calculations, it is advisable to perform \(z\)-averaging in order to get rid of the spurious oscillations arising from the NRG discretization errors. In our calculations, we have averaged over \(N_{z}\!=\!5-10\) values of \(z\).
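For concreteness, the sketch below evaluates the shell contribution \(g_{N}(T)\) of Eq. (34) from a list of many-body energies and impurity matrix elements, and sums the shells into \(\mathcal{G}(T)\) according to Eq. (35). The spectra used here are random placeholders standing in for actual NRG output, so the printed number is only meant to show the bookkeeping, not a physical result.

```python
import numpy as np

def shell_conductance(energies, A, T):
    """g_N(T) of Eq. (34): many-body energies E_r of shell N, matrix elements
    A[r, r'] = <r| f_sigma |r'>, temperature T (units with k_B = 1)."""
    beta = 1.0 / T
    Z = np.exp(-beta * energies).sum()
    Er, Erp = energies[:, None], energies[None, :]
    return np.pi * beta / Z * (np.abs(A) ** 2
                               / (np.exp(beta * Er) + np.exp(beta * Erp))).sum()

def total_conductance(shells, T, Gamma=0.2):
    """G(T) of Eq. (35) in units of 2e^2/h, summing over Wilson shells."""
    return Gamma * sum(shell_conductance(E, A, T) for E, A in shells)

# Placeholder "NRG data": a few shells with random energies / matrix elements
rng = np.random.default_rng(0)
shells = [(np.sort(rng.uniform(0.0, 1.0, 8)), 0.1 * rng.normal(size=(8, 8)))
          for _ in range(5)]
print("G(T = 0.01) / (2e^2/h) =", total_conductance(shells, T=0.01))
```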
## III Results
In this section we critically compare calculations of the density of states and the conductance for the SIAM obtained by the methods discussed above. For the OCA we used the implementation of Haule and collaborators Haule _et al._ (2010), while for the others we used in-house codes. All calculations were performed in the wide-band approximation and in the absence of a magnetic field, namely, \(\varepsilon_{\uparrow}=\varepsilon_{\downarrow}=\varepsilon_{0}\).
When studying the SIAM it is usual to distinguish two limits based on the characteristic Kondo temperature of the system: the high- and the low-temperature regimes, where \(T\gg T_{K}\) and \(T\lesssim T_{K}\), respectively. We proceed accordingly, using the Haldane estimate Haldane (1978) for \(T_{K}\), namely,
\[T_{K}\sim\sqrt{U\Gamma}\exp\!\left(\!-\pi\frac{|\varepsilon_{0}(\varepsilon_{0 }+U)|}{U\Gamma}\right),\] (36)
as a guideline.
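For orientation, the few lines below evaluate the Haldane estimate (36) for the parameter sets used in this section (energies measured in units of \(U\)); for \(\Gamma=0.2U\) and \(\varepsilon_{0}=-U/2\) this reproduces the value \(T_{K}\approx 0.009U\) quoted later in the text.

```python
import numpy as np

def kondo_temperature(eps0, U=1.0, Gamma=0.2):
    """Haldane estimate of Eq. (36):
    T_K ~ sqrt(U*Gamma) * exp(-pi |eps0 (eps0 + U)| / (U*Gamma))."""
    return np.sqrt(U * Gamma) * np.exp(-np.pi * abs(eps0 * (eps0 + U))
                                       / (U * Gamma))

for Gamma in (0.1, 0.2):
    print(f"Gamma = {Gamma}U:  T_K(eps0 = -U/2) = "
          f"{kondo_temperature(-0.5, Gamma=Gamma):.2e} U")
```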
### Density of states
#### iii.1.1 High-temperature limit
Let us begin with the high temperature limit, \(T\gg T_{K}\). Figure 1 shows \(\rho_{\sigma}(\omega)\) for the particle-hole symmetric case, \(\varepsilon_{0}=-U/2\). We find that all methods, except NRG, give similar qualitative results. The main feature is the double peak structure with resonances at \(\omega=\varepsilon_{0}\) and \(\omega=\varepsilon_{0}+U\), both with a width \(\Gamma\) characteristic of the Coulomb blockade regime in quantum dots.
Despite the qualitative similarities, even in the high-temperature limit there are significant differences between the approximation schemes. We find that EOM0 and EOM1 give very poor results, as we discuss below. Fig. 1b shows that both peaks have the same height, implying that \(\langle n_{\sigma}\rangle=0.5\). Using this input, one can obtain an analytical estimate for the EOM DOS at \(\omega=\pm U/2\), namely,
\[\rho^{\rm EOM0}\left(\omega=\pm U/2\right)=\frac{1}{\pi\Gamma}\]
\[\rho^{\rm EOM1}\left(\omega=\pm U/2\right)=\frac{2}{\pi\Gamma}\]
\[\rho^{\rm EOM2}(\omega=\pm U/2) =\rho^{\rm EOM3}(\omega=\pm U/2)=\frac{1}{2\pi\Gamma}.\] (37)
This indicates that, at odds with claims found in the literature Haug and Jauho (2008), the different EOM truncation schemes render very discrepant densities of states, see Fig. 1b.
Figure 1 shows that EOM2, EOM3, NCA, and OCA give very similar results for \(\rho(\omega)\), while NRG overbroadens the Hubbard peaks, a well-known limitation of the method Grewe _et al._ (2008); Georges _et al._ (1996). We point out that, as discussed in Sec. II.3.1, the FDM-NRG procedure for obtaining spectral functions suffers from spurious oscillations in the \(\omega\ll T\) range Weichselbaum and Von Delft (2007). Thus, only NRG data for \(|\omega|\gtrsim T\) is shown in Figs. 1a and 2.
<figure><img src="content_image/1910.10220/x1.png"><figcaption>Figure 1: (a) Density of states of the SIAM for different methods for ε0=−U/2, Γ=0.1U, and T=0.1U. (b) Same parameters as above, comparison between different EOM schemes.</figcaption></figure>
<figure><img src="content_image/1910.10220/x3.png"><figcaption>Figure 2: Density of states of the SIAM for Γ=0.1U and T=0.1U for ε0=−U/4.</figcaption></figure>
As \(\varepsilon_{0}\) is moved away from the particle-hole symmetry point, the DOS peaks become asymmetric. For \(\varepsilon_{0}=-U/4\), keeping the same values of \(\Gamma\) and \(T\), the different solvers still give very similar results, in line with the analysis presented for the symmetric case. The results are shown in Fig. 2. The differences between the approximation schemes become more pronounced for \(\varepsilon_{0}=0\), see Fig. 3a. In particular, the EOM2 density of states exhibits a smaller peak height at the Fermi level as compared with the one obtained by the other methods.
Note that for \(T\lesssim\Gamma\) the most relevant resonance configurations for the conductance calculations are the asymmetric ones, where the DOS peak matches \(\mu=0\). The failure of the EOM2 approximation in reproducing the DOS obtained by more accurate approaches in the asymmetric case indicates that, even in the high-temperature regime, spin correlations cannot be entirely neglected.
Figure 3b sheds further light on the EOM results. It clearly shows that since the EOM0 self-energy is just the embedding \(\Sigma_{0}(\omega)\), this approximation underestimates \(\Sigma_{\sigma}(\omega)\) and, hence, overestimates the DOS peak heights. An analysis of \(\rho_{\sigma}(\omega)\) at \(\omega=0\) and \(\omega=U\) reveals that Eq. (15) has another rather unphysical feature: at the peaks, \(\rho^{\rm EOM1}_{\sigma}\) becomes independent of \(\langle n_{\overline{\sigma}}\rangle\), that is, both DOS peaks have the same height. Curiously, the simplest heuristic approximation EOM0 gives superior results in this respect.
<figure><img src="content_image/1910.10220/x4.png"><figcaption>Figure 3: Impurity density of states for ε0=0, T=0.1U. In (a) we use Γ=0.1U. In (b) we set Γ=0.02U and compute the DOS near the Fermi level ω=0 for different EOM schemes and also for the NCA. Here the estimated Kondo temperature is TK∼10−18U for ε0=−U/2. In the inset we show the density of states in a more extended range of energies.</figcaption></figure>
#### iii.1.2 Low-temperature limit
Now we turn to the low-temperature regime, Fig. 4. Besides the usual resonances at the energies \(\omega\approx\varepsilon_{0}\) and \(\omega\approx\varepsilon_{0}+U\) the \(\rho_{\sigma}(\omega)\) calculated using the SBA and NRG exhibit a pronounced and sharp peak at the Fermi energy, a clear signature of the Kondo effect. Interestingly, while the slave-boson methods suffer from instabilities for \(T\ll T_{K}\) (see, for instance, the discussion at the end of Sec. II.2), this is the regime where NRG works best.
<figure><img src="content_image/1910.10220/x6.png"><figcaption>Figure 4: Density of states of the SIAM for Γ=0.2U, T=0.001U, and (a) ε0=−U/2, (b) ε0=−0.26U, and (c) ε0=0. The insets show a zoom near the region ω=0.</figcaption></figure>
Figure 4 shows the results we obtain for \(\Gamma=0.2U\) and \(T=0.001U\). For the particle-hole symmetric case, \(\varepsilon_{0}=-U/2\), the Haldane estimate gives \(T_{K}\approx 0.009U\gg T\). This \(T_{K}\) value is consistent with the NRG estimate of \(T_{K}\approx 0.00208U\) obtained from the impurity magnetic susceptibility \(\chi(T)\) (not shown) using the criterion \(T_{K}\chi(T_{K})=0.0701\) of Krishna-murthy _et al._ (1980a). The \(T\!\ll\!T_{K}\) regime is precisely where the NRG is expected to work best, giving an accurate description of the \(\omega\!=\!0\) peak all the way to \(T/T_{K}\to 0\)Tsvelick and Wiegmann (1983); Hewson (1997); Grewe _et al._ (2008). As such, we will use the \(T\!=\!0\) NRG results as a benchmark for our \(T\lesssim T_{K}\) analysis.
As shown in Fig. 4a, the OCA significantly improves the width and the height of the Kondo resonance in comparison with the NCA. In addition, it improves the estimate of \(T_{K}\) by at least an order of magnitude, as previously reported. David Jacob (2015); Haule _et al._ (2001); Kotliar _et al._ (2006) The peak at \(\omega=0\) is absent in the EOM solutions. It is well known Kashcheyevs _et al._ (2006) that, in the truncation schemes (and the corresponding neglected correlation processes) discussed in this paper, the EOM Green’s function cannot exhibit a resonance at \(\omega=0\) due to particle-hole symmetry. More recently, it has been shown Van Roermund _et al._ (2010); Lavagna (2015) that one can overcome this shortcoming of the EOM method by considering processes beyond the EOM3 truncation scheme.
We also compare the different methods in the particle-hole asymmetric case. In Fig. 4b we plot the spectral functions for \(\varepsilon_{0}=-0.26U\approx-U/4\). In accordance with the NRG results, both NCA and OCA show a Kondo peak at \(\omega\!\approx\!0\) in addition to the usual single-particle peaks at \(\omega\!=\!\varepsilon_{0}+U=0.74U\) and \(\omega\!=\!\varepsilon_{0}=-0.26U\). The latter looks more like a “shoulder” due to the broadening and the overlap with the Kondo peak at the Fermi energy. The EOM spectral functions also exhibit a resonance at the Fermi level. In addition, there is an unphysical peak at \(\omega=2\varepsilon_{0}+U=0.48U\), which is enhanced in the EOM3 solution. This spurious singularity can be identified with the singularities of \(\Sigma_{1}(\omega)\) in EOM2, Eq. (17), and with those of the functions \(P(z)\) and \(Q(z)\), Eq. (19), in EOM3. Spurious peaks in EOM have already been reported in a different context Czycholl (1985).
Figure 4c shows the computed density of states for \(\varepsilon_{0}\!=\!0\). In this “mixed-valence” regime, no clear Kondo effect is expected since the net magnetic moment on the impurity is suppressed. As such, the NRG \(T\!=\!0\) spectral density shows only a broad, single-particle-like peak centered at \(\omega\!=\!\varepsilon_{0}\!=\!0\). It is also in this regime that the differences between the methods at energies \(\omega\) very close to the Fermi level are most remarkable.
Both the NCA and OCA density of states show a sharp peak near \(\omega\!=\!0\), while there is no such peak in the EOM3 and NRG results (see inset of Fig. 4c). The presence of these peaks in the NCA and OCA results strongly influences the conductance calculations, as discussed in the next section. The EOM2 density of states exhibits a somewhat unexpected result, with an anti-peak (dip) at the Fermi level. This rather artificial feature severely compromises the performance of the EOM2 scheme in the conductance calculations, as will become clear in the following section.
The results of this section can be summarized as follows. While NRG is the best method to describe the low-energy excitations of the spectrum at low temperatures, the SBA approaches (OCA and NCA) give a quite reasonable qualitative picture of the physics, with the OCA remarkably improving over the NCA results. The EOM schemes, by contrast, perform very poorly in capturing the low-energy properties. On the other hand, the most appropriate methods to describe the high-energy features of the spectrum are the SBA, followed by the EOM schemes, while NRG shows its characteristically poor performance in describing the spectral features in this limit.
Let us conclude this section with a discussion of the computational times involved in the spectral function calculations. The EOM solvers take a few seconds to converge and run very well on a single core (Intel® Xeon® X5650 2.67 GHz processor). The simple implementation and the low computational cost explain why the EOM methods, despite their limitations, are still widely used. The NCA solver also converges quickly and its performance is similar to that of the EOM. Using the same processor and \(N=1060\) points in the pseudo-particle frequency mesh, the NCA converges in 9 s within 20 iterations. In contrast, the OCA converges in 76 minutes within 15 iterations, that is, it typically consumes about 400 times more CPU time than the NCA. The main reason is the double convolutions in the OCA equations, which involve overlapping integrations over functions with sharp peak structures. Grewe _et al._ (2008) The NRG calculations were performed on a cluster with a similar processor unit, namely, an Intel(R) Xeon(R) CPU E5620 @ 2.40 GHz with 8 GB RAM. The zero-temperature spectral functions with DM-NRG, which is a bit less precise but numerically much less expensive, require \(\sim\) 30 min per \(z\) value \(\times\) 5 values to average, i.e., \(\sim\) 2h30 per spectral curve. In turn, the zero-temperature spectral functions with CFS consumed \(\sim\) 2h40 per \(z\) value \(\times\) 5 values to average, i.e., \(\sim\) 13h-14h per spectral curve. Finally, the finite-temperature FDM-NRG cost \(\sim\) 5h per \(z\) value \(\times\) 5 values to average, i.e., \(\sim\) 25h per spectral curve. All CPU times refer to single-core calculations. Table 1 summarizes our findings.
Method | CPU time
---|---
EOM | <1 min
NCA | <1 min
OCA | ∼1 h
CFS-NRG | ∼13−14 h
FDM-NRG | ∼25 h
Table 1: Typical times taken by each method to deliver a converged spectral
function. All CPU times refer to single-core calculations.
### Conductance
Let us now consider the calculation of two-point electrical conductance in the linear response regime. Hereafter, we assume that the single-particle energy \(\varepsilon_{0}\) can be tuned by an external applied gate voltage, \(\varepsilon_{0}=\varepsilon_{0}(V_{g})\) and set \(\mu=0\).
The linear conductance \(\mathcal{G}\), given by Eq. (5), is a convolution of the DOS and the derivative of the Fermi distribution. Hence, the features discussed in the previous section give insight on the behavior of \(\mathcal{G}\). It is important to stress that the NRG computes the conductance directly from Eq. (35), avoiding the difficulties of the method in calculating high-energy properties of the spectral function. Thus, the NRG results will serve as a benchmark to the conductance calculations in all regimes studied here.
In quantum dots, the high-temperature limit corresponds to the Coulomb blockade regime. Here, the conductance peaks are close to the resonances at \(\varepsilon_{0}\) and \(\varepsilon_{0}+U\). The Coulomb blockade conductance peak at \(\varepsilon_{0}\approx 0\) corresponds to a dot with either empty or single electron occupation, while the resonance at \(\varepsilon_{0}+U=0\) is associated with a single or double occupied configuration. The low conductance in the mid-valley region arises mainly due to cotunneling processes Aleiner _et al._ (2002); Foa Torres _et al._ (2003), that are also nicely described by a real-time diagrammatic technique Schoeller and Schön (1994). As the temperature is lowered, Kondo physics sets in. Its main manifestation is to increase the mid-valley conductance with decreasing \(T/T_{K}\), reaching the unitary regime as \(T/T_{K}\to 0\).
Let us begin by analyzing the high-temperature limit. All panels of Fig. 5 exhibit the main features of the Coulomb blockade regime: two peaks separated by an energy \(\sim U\) with low mid-valley conductance. Figure 5a shows the conductance obtained by the methods presented in Sec. II for \(T=0.1U\). All approximations show good qualitative agreement, except for the EOM2 result, which presents pronounced discrepancies, particularly near the charge-fluctuation regime. This rather unexpected behavior of the EOM2 result is a consequence of the deviation of its density of states from that of the other methods, as discussed in the previous section.
<figure><img src="content_image/1910.10220/x9.png"><figcaption>Figure 5: Conductance G in units of 2e2/h as a function of the single-particle energy ε0 in units of U for μ=0, Γ=0.1U, and different temperature values (a) T=0.1U, (b) T=0.01U, and (c) T=0.001U.</figcaption></figure>
As we decrease the temperature towards the Kondo regime, the discrepancies between the results delivered by the different methods become more pronounced. The conductance peak heights given by the EOM are consistently smaller than those obtained by NRG, while SBA tends to overestimate them.
As expected, EOM fails to describe the rise of the mid-valley conductance with decreasing \(T/T_{K}\), while within the SBA methods, only OCA gives good results compared with NRG. We stress that the EOM3 gives better results than the SBA for the empty (\(\varepsilon_{0}>0\)) and double occupation (\(\varepsilon_{0}<-U\)) quantum dot configurations.
In order to address the Kondo regime, \(T\lesssim T_{K}\), we increase the coupling strength to \(\Gamma=0.2U\) such that \(T_{K}\) increases above \(\sim 0.002U\) for all \(\varepsilon_{0}\) values in the range of interest. This speeds up the convergence of the NCA and OCA algorithms, while avoiding the low-\(T\) instabilities of these methods; see the discussion at the end of Sec. II.2. The results are presented in Fig. 6. In this temperature regime, the limitations of the EOM method in treating correlations become even more manifest, particularly at \(\varepsilon_{0}=-U/2\), which corresponds to the particle-hole symmetry point. We emphasize that, as in the previous case, EOM3 is very accurate in describing the \(\varepsilon_{0}>0\) and \(\varepsilon_{0}<-U\) regions.
<figure><img src="content_image/1910.10220/x12.png"><figcaption>Figure 6: Conductance G in units of 2e2/h as a function of the impurity energy level ε0/U for μ=0, Γ=0.2U, and (a) T=0.04U, (b) T=0.003U, and (c) T=0.0004U.</figcaption></figure>
Taking the NRG conductance as benchmark, we find that the OCA represents a significant improvement over the NCA results. This is a direct consequence of the better description of the Kondo peak by the OCA, as discussed in Sec. III.1: The Kondo peak in the OCA spectral function is higher and wider than in the NCA one. However, a closer comparison with the NRG result shows that the OCA conductance fails to develop a Kondo plateau, as can be seen in Fig. 6c.
When the temperature is further decreased, the OCA conductance exhibits a plateau, but it exceeds the maximum conductance \({\cal G}_{\rm max}=2e^{2}/h\) (see also Ref. Melo, 2019). This unphysical feature is related to the violation of the Friedel sum rule by the NCA/OCA procedure discussed in Sec. II.2, which results in an overestimation of the height of the Kondo peak Tosi _et al._ (2011).
The NCA and OCA results are also at odds with NRG for \(\varepsilon_{0}<-U\) and \(\varepsilon_{0}>0\), showing an increase of \({\cal G}\) as the temperature is decreased. This behavior can be understood by looking at the spectral functions in these parameter regions. In the discussion of the results of Fig. 4c, we noted that the density of states obtained by the NCA approach exhibits a sharp peak at the Fermi level and that this peak grows as the temperature decreases. As shown in Fig. 4c, there is no such peak in the \(T\!=\!0\) NRG data. We conclude that the observed increase in the conductance shown in the NCA and OCA data is related to this spurious resonance. Intriguingly, we have not been able to determine the numerical origin of this behavior of the NCA/OCA density of states close to the charge fluctuation points.
Let us now investigate the temperature dependence of the conductance in more detail. For this purpose we select three different configurations, namely, \(\varepsilon_{0}+U/2=0\), \(\varepsilon_{0}+U/4=0\), and \(\varepsilon_{0}=0\). These correspond to the mid-valley, a crossover, and the resonance-peak configurations, respectively.
Figure 7 shows that in the high-temperature limit all methods agree very well, as already suggested by Fig. 5a. In the present analysis we do not include the NRG results, which we discuss later when addressing the scaling behavior of the conductance.
As the temperature decreases, the differences are enhanced. As expected, at the particle-hole symmetric point (Fig. 7a), both EOM schemes dramatically fail to capture the increase of the conductance due to the missing Kondo peak in the spectral function. We notice that these methods render nearly temperature-independent spectral functions. The OCA method gives significantly better results than the NCA, capturing the onset of the conductance increase with good accuracy. When the temperature is decreased below about \(T_{K}/10\), the OCA conductance saturates, although at an unphysical value greater than the conductance quantum, \(G_{\text{plateau}}>2e^{2}/h\). This feature is related to the violation of the Friedel sum rule in the OCA procedure mentioned above. Tosi _et al._ (2011); Melo (2019)
<figure><img src="content_image/1910.10220/x15.png"><figcaption>Figure 7: Conductance G in units of 2e2/h as a function of the temperature T for (a) ε0=−U/2, (b) ε0=−U/4, (c) ε0=0 corresponding respectively to the mid-valley, a crossover, and the resonance peak configuration. The dashed lines indicate the unitary limit. Here Γ=0.2U.</figcaption></figure>
For \(\varepsilon_{0}=-U/4\) (Fig. 7b), the EOM2 conductance starts to decrease below a certain temperature. The EOM3 method corrects this unexpected behavior to a certain extent, but its previously mentioned unphysical features prevent it from yielding conductance values close to the OCA/NCA ones. The difference between the NCA and OCA results becomes smaller, with the NCA curve always lying below the OCA one, which is consistent with the idea that the NCA yields a smaller Kondo temperature. For \(\varepsilon_{0}=0\), Fig. 7c, EOM2 performs well only in the high-temperature limit \(10^{-1}\lesssim T/U<1\), while EOM3 gives good results for \(T/U\gtrsim 10^{-2}\). Remarkably, the NCA conductance is very similar to the OCA curve.
The analogy between quantum dots and impurity systems extends to the scaling behavior of their properties. Indeed, in Ref. Kouwenhoven and Glazman, 2001 it has been experimentally verified that the conductance of a quantum dot, in the low temperature limit, depends only on the ratio \(T/T_{K}\).
According to a semi-empirical result obtained from NRG calculations for a spin-\(1/2\) system, the conductance of a quantum dot is given by Costi _et al._ (1994); Goldhaber-Gordon _et al._ (1998b); Roura-Bas (2010); Tosi _et al._ (2011)
\[\mathcal{G}(T)=\frac{2e^{2}}{h}{\frac{1}{[1+(2^{1/s}-1)(T/T_{K})^{2}]^{s}}},\] (38)
with \(s=0.22\) chosen to fit the experimental data. Goldhaber-Gordon _et al._ (1998b)
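The empirical form (38) is evaluated directly in the short sketch below; note that, by construction, \(\mathcal{G}(T=T_{K})=\tfrac{1}{2}(2e^{2}/h)\), which is precisely the criterion used further down to extract \(T_{K}\) from the computed conductance curves. The temperature points chosen for the printout are arbitrary illustrative values.

```python
import numpy as np

def g_empirical(T_over_TK, s=0.22):
    """Empirical NRG fit of Eq. (38), in units of 2e^2/h."""
    return 1.0 / (1.0 + (2.0 ** (1.0 / s) - 1.0) * T_over_TK ** 2) ** s

for t in (0.01, 0.1, 1.0, 10.0):
    print(f"T/T_K = {t:5.2f}:  G / (2e^2/h) = {g_empirical(t):.3f}")

# Sanity check of the T_K extraction criterion used in the text: G(T_K) = 1/2
assert abs(g_empirical(1.0) - 0.5) < 1e-9
```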
Let us discuss the scaling behavior of \(\mathcal{G}\) obtained by the OCA and NCA methods in comparison with the NRG formula above. For that purpose, it is important to note that the Kondo temperature given by the OCA method is larger than the one given by NCA (see discussion in Sec. III.1).
In order to calculate the Kondo temperature we adopt the procedure put forward in Ref. Tosi _et al._, 2011 which defines \(T_{K}\) as the temperature for which \(\mathcal{G}/(2e^{2}/h)=1/2\). In this way, for \(\Gamma=0.2U\) and \(\varepsilon_{0}=-U/2\), we obtain \(T_{K}^{\text{OCA}}\approx 0.0028U\) and \(T_{K}^{\text{NCA}}\approx 0.000097U\). (It is interesting to recall that we obtain \(T_{K}^{\text{NRG}}\approx 0.0021U\) from the NRG susceptibility analysis.) We scale the conductance curves for each method by the corresponding Kondo temperature.
In Fig. 8 we show the conductance as function of \(T/T_{K}\) obtained in NRG, OCA, and NCA methods. Since the OCA method gives unphysical conductance values when \(T\lesssim 0.1T_{K}\), our analysis of the scaling starts from this temperature Tosi _et al._ (2011). Clearly, the scaling obtained using OCA is better than the one yielded by the NCA solver in the sense that the former is closer to the NRG result than the latter. However, as previously reported in Ref. Tosi _et al._, 2011, the inclusion of vertex corrections is not enough to recover the NRG prediction.
<figure><img src="content_image/1910.10220/x18.png"><figcaption>Figure 8: Conductance G in units of 2e2/h as a function of the scaled temperature T/TK for ε0=−U/2. Here Γ=0.2U and TOCAK≈0.0028U and TNCAK≈0.000097U. The Haldane estimate gives TK=0.009U.</figcaption></figure>
Let us finish by discussing CPU times. For the EOM and SBA methods, the CPU times involved in the conductance calculations for a given value of \(\varepsilon_{0}/U\) are essentially the same as those for the spectral functions, discussed in the previous section. Since \({\cal G}\) is calculated by NRG using Eq. (35), its computation is much faster than that of the DOS. The approximate wall-clock times are about 20 minutes per calculation using an Intel(R) Xeon(R) CPU E5620 @ 2.40 GHz (one core per job) with 8 GB RAM.
## IV Summary and conclusions
In this work we study the EOM, SBA, and NRG impurity solvers, comparing their results for the spectral function and linear conductance of the SIAM over all physically relevant regimes.
In summary, as stated in Ref. Georges _et al._, 1996, there is no “universal” impurity solver that can be considered the most appropriate choice to describe both the spectral function and transport properties in all situations of interest. We provide a thorough analysis that serves as a guide for choosing the most adequate approximation, depending on the required accuracy and computational time for a given regime and/or observable.
We find that in the high-temperature regime most approximations give comparable results for \(\rho_{\sigma}(\omega)\) and \(\mathcal{G}\). Surprisingly, the widely used EOM0, EOM1, and EOM2 approaches give poor results for the spectral functions in the mixed-valence regime, a drawback that is fixed to a large extent by EOM3. As is well known, NRG fails to give reliable high-energy spectral functions, which is a problem for calculation schemes that combine NRG with DMFT.
At intermediate temperatures, corresponding to the onset of Kondo physics, only the SBA and NRG lead to quantitatively accurate results. Even so, the NCA dramatically underestimates \(T_{K}\) and, hence, gives poor results for the mid-valley conductance. It is worth mentioning that EOM3 gives surprisingly good results in the charge-fluctuation regime.
For low temperatures, \(T\ll T_{K}\), the SBA violates the Friedel sum rule and gives \({\cal G}\) values larger than the maximum predicted by the unitary limit. Hence, as one approaches \(T/T_{K}\to 0\) the SBA results can only capture the qualitative behavior of the physical observables of interest.
In practice, the choice of an impurity solver must certainly take the computational cost into account. Table 1 compares the processing times of the methods analyzed here.
###### Acknowledgements.
We thank Rok Zitko for valuable comments and suggestions in the implementation of the FDM-NRG approach. This work was supported by the Brazilian funding agencies CNPq (Grant Nos. 308801/2015-6, 308351/2017-7, 423137/2018-2), FAPERJ (Grant E-26/202.882/2018), and FAPESP (Grant Nos. 2016/18495-4, 2016/01343-7, 2017/02317-2).
## References
* Tsvelick and Wiegmann (1983)A. M. Tsvelick and P. B. Wiegmann, Adv. Phys. **32**, 453 (1983).
* Hewson (1997)A. C. Hewson, _The Kondo Problem to Heavy Fermions_, 2nd ed. (Cambridge University Press, 1997).
* Georges _et al._ (1996)A. Georges, G. Kotliar, W. Krauth, and M. Rozenberg, Rev. Mod. Phys. **68**, 13 (1996).
* Anderson (1961)P. W. Anderson, Phys. Rev. **124**, 41 (1961).
* Krishna-murthy _et al._ (1975)H. R. Krishna-murthy, K. G. Wilson, and J. W. Wilkins, Phys. Rev. Lett. **35**, 1101 (1975).
* Krishna-murthy _et al._ (1980a)H. R. Krishna-murthy, J. W. Wilkins, and K. G. Wilson, Phys. Rev. B **21**, 1003 (1980a).
* Krishna-murthy _et al._ (1980b)H. R. Krishna-murthy, J. W. Wilkins, and K. G. Wilson, Phys. Rev. B **21**, 1044 (1980b).
* Newns and Read (1987)D. M. Newns and N. Read, Advances in Physics **36**, 799 (1987).
* Coleman (1984)P. Coleman, Phys. Rev. B **29**, 3035 (1984).
* Meir _et al._ (1991)Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. **66**, 3048 (1991).
* Pustilnik and Glazman (2004)M. Pustilnik and L. Glazman, J. Phys. Condens. Matter **16**, R513 (2004).
* Meir _et al._ (1993)Y. Meir, N. S. Wingreen, and P. a. Lee, Phys. Rev. Lett. **70**, 2601 (1993).
* Ng and Lee (1988)T. K. Ng and P. A. Lee, Phys. Rev. Lett. **61**, 1768 (1988).
* Costi _et al._ (1994)T. A. Costi, A. C. Hewson, and V. Zlatic, J. Phys.: Condens. Matter **6**, 2518 (1994).
* Goldhaber-Gordon _et al._ (1998a)D. Goldhaber-Gordon, H. Shtrikman, D. Mahalu, D. Abusch-Magderand, U. Meirav, and M. A. Kastner, Nature **391**, 156 (1998a).
* Goldhaber-Gordon _et al._ (1998b)D. Goldhaber-Gordon, J. Göres, M. A. Kastner, H. Shtrikman, D. Mahalu, and U. Meirav, Phys. Rev. Lett. **81**, 5225 (1998b).
* Pustilnik and Glazman (2001)M. Pustilnik and L. I. Glazman, Phys. Rev. Lett. **87**, 216601 (2001).
* Glazman and Pustilnik (2003)L. I. Glazman and M. Pustilnik, _New Directions in Mesoscopic Physics (Towards Nanoscience)_, edited by R. Fazio, V. F. Gantmakher, and Y. Imry (Springer Netherlands, Dordrecht, 2003) Chap. Coulomb blockade and Kondo effect in quantum dots, pp. 93–115.
* Kugler _et al._ (2019)F. B. Kugler, M. Zingl, H. U. R. Strand, S.-S. B. Lee, J. von Delft, and A. Georges, arXiv/1909.02389 (2019).
* Kotliar _et al._ (2006)G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti, Rev. Mod. Phys. **78**, 865 (2006).
* Liang _et al._ (2002)W. Liang, M. P. Shores, M. Bockrath, J. R. Long, and H. Park, Nature **417**, 725 (2002).
* Park _et al._ (2002)J. Park, A. N. Pasupathy, J. I. Goldsmith, C. Chang, Y. Yaish, J. R. Petta, M. Rinkoski, J. P. Sethna, H. D. Abruña, P. L. McEuen, and D. C. Ralph, Nature **417**, 722 (2002).
* Thoss and Evers (2018)M. Thoss and F. Evers, J. Chem. Phys. **148**, 030901 (2018).
* Droghetti and Rungger (2017)A. Droghetti and I. Rungger, Phys. Rev. B **95**, 085131 (2017).
* Chioncel _et al._ (2015)L. Chioncel, C. Morari, A. Östlin, W. H. Appelt, A. Droghetti, M. M. Radonjić, I. Rungger, L. Vitos, U. Eckern, and A. V. Postnikov, Phys. Rev. B **92**, 054431 (2015).
* Appelt _et al._ (2018)W. H. Appelt, A. Droghetti, L. Chioncel, M. M. Radonjić, E. Muñoz, S. Kirchner, D. Vollhardt, and I. Rungger, Nanoscale **10**, 17738 (2018).
* David Jacob (2015)David Jacob, J. Phys. Condens. Matter **27**, 245606 (2015).
* Jacob _et al._ (2009)D. Jacob, K. Haule, and G. Kotliar, Phys. Rev. Lett. **103**, 3 (2009).
* Jacob _et al._ (2010)D. Jacob, K. Haule, and G. Kotliar, Phys. Rev. B **82**, 195115 (2010).
* Theumann (1969)A. Theumann, Phys. Rev. **178**, 978 (1969).
* Lacroix (1981)C. Lacroix, J. Phys. F: Met. Phys. **11**, 2389 (1981).
* Lacroix (1982)C. Lacroix, J. Appl. Phys. **53**, 2131 (1982).
* Bickers (1987)N. E. Bickers, Rev. Mod. Phys. **59**, 845 (1987).
* Pruschke and Grewe (1989)T. Pruschke and N. Grewe, Z. Phys. B **74**, 439 (1989).
* Hirsch and Fye (1986)J. E. Hirsch and R. M. Fye, Phys. Rev. Lett. **56**, 2521 (1986).
* Yamada (1975)K. Yamada, Prog. Theor. Phys. **53**, 970 (1975).
* Yosida and Yamada (1970)K. Yosida and K. Yamada, Prog. Theor. Phys. Sup. **46**, 244 (1970).
* Yosida and Yamada (1975)K. Yosida and K. Yamada, Prog. Theor. Phys. **53**, 1286 (1975).
* Kashcheyevs _et al._ (2006)V. Kashcheyevs, A. Aharony, and O. Entin-Wohlman, Phys. Rev. B **73**, 125338 (2006).
* Haule _et al._ (2010)K. Haule, C. H. Yee, and K. Kim, Phys. Rev. B **81**, 195107 (2010).
* Bulla _et al._ (2008)R. Bulla, T. A. Costi, and T. Pruschke, Rev. Mod. Phys. **80**, 395 (2008).
* Meir and Wingreen (1992)Y. Meir and N. S. Wingreen, Phys. Rev. Lett. **68**, 2512 (1992).
* Wingreen and Meir (1994)N. S. Wingreen and Y. Meir, Phys. Rev. B **49**, 11040 (1994).
* (44)In this paper we use \(\Gamma\), the typical scattering theory notation used in quantum dots and molecular electronics, instead of the hybridization function \(\Delta\), more familiar to the strongly correlated systems community. Note that \(\Gamma=2\Delta\).
* Haug and Jauho (2008)H. Haug and A.-P. Jauho, _Quantum Kinectics in Transport and Optics of Semiconductors_, 2nd ed. (Springer, 2008).
* Dias da Silva _et al._ (2017)L. G. G. V. Dias da Silva, C. Lewenkopf, E. Vernek, G. J. Ferreira, and S. E. Ulloa, Phys. Rev. Lett. **119**, 116801 (2017).
* Hernández _et al._ (2007)A. Hernández, V. M. Apel, F. A. Pinheiro, and C. H. Lewenkopf, Physica A **385**, 148 (2007).
* Hubbard (1963)J. Hubbard, Proc. R. Soc. Lond. A **276**, 238 (1963).
* Gebhard (1997)F. Gebhard, _The Mott metal-insulator transition: models and methods_ (Springer, 1997).
* Hubbard (1964)J. Hubbard, Proc. R. Soc. Lond. A **277**, 237 (1964).
* Gerace _et al._ (2002)D. Gerace, E. Pavarini, and L. C. Andreani, Phys. Rev. B **65**, 155331 (2002).
* Aguado and Langreth (2003)R. Aguado and D. C. Langreth, Phys. Rev. B **67**, 245307 (2003).
* Haule _et al._ (2001)K. Haule, S. Kirchner, J. Kroha, and P. Wölfle, Phys. Rev. B **64**, 155111 (2001).
* Sposetti _et al._ (2016)C. N. Sposetti, L. O. Manuel, and P. Roura-Bas, Phys. Rev. B **94**, 085139 (2016).
* Barnes (1976)S. E. Barnes, J. Phys. F: Met. Phys. **6**, 1375 (1976).
* Barnes (1977)S. E. Barnes, J. Phys. F: Met. Phys. **7**, 2637 (1977).
* Kroha and Wölfle (1998)J. Kroha and P. Wölfle, Acta Physica Polonica B **29**, 3781 (1998).
* Abrikosov (1965)A. Abrikosov, Physics Physique Fizika **2**, 5 (1965).
* Hettler _et al._ (1998)M. H. Hettler, J. Kroha, and S. Hershfield, Phys. Rev. B **58**, 5649 (1998).
* Costi _et al._ (1996)T. Costi, J. Kroha, and P. Wölfle, Phys. Rev. B **53**, 1850 (1996).
* (61)See, for example, the Appendix of Ref. Sposetti _et al._, 2016.
* (62)See, for example, Appendix A of Ref. Hettler _et al._, 1998.
* (63)The main pseudo-particle is the one with the lowest energy. Note that \(\varepsilon_{b}=0\), \(\varepsilon_{s_{\sigma}}=\varepsilon_{0}\), and \(\varepsilon_{d}=2\varepsilon_{0}+U\).
* Tosi _et al._ (2011)L. Tosi, P. Roura-Bas, A. M. Llois, and L. O. Manuel, Phys. Rev. B **83**, 73301 (2011).
* Grewe _et al._ (2008)N. Grewe, S. Schmitt, T. Jabben, and F. B. Anders, J. Phys. Condens. Matter **20**, 365217 (2008).
* Anders (1995)F. B. Anders, J. Phys. Condens. Matter **7**, 2801 (1995).
* Schmitt _et al._ (2009)S. Schmitt, T. Jabben, and N. Grewe, Phys. Rev. B **80**, 1 (2009).
* Vildosola _et al._ (2015)V. Vildosola, L. V. Pourovskii, L. O. Manuel, and P. Roura-Bas, J. Phys. Condens. Matter **27**, 485602 (2015).
* Rüegg _et al._ (2013)A. Rüegg, E. Gull, G. A. Fiete, and A. J. Millis, Phys. Rev. B **87**, 1 (2013).
* Žitko (2011)R. Žitko, Phys. Rev. B **84**, 085142 (2011).
* Peters _et al._ (2006)R. Peters, T. Pruschke, and F. B. Anders, Phys. Rev. B **74**, 1 (2006).
* Weichselbaum and Von Delft (2007)A. Weichselbaum and J. Von Delft, Phys. Rev. Lett. **99**, 076402 (2007).
* Yoshida _et al._ (1990)M. Yoshida, M. A. Whitaker, and L. N. Oliveira, Phys. Rev. B **41**, 9403 (1990).
* Zawadzki and Oliveira (2018)K. Zawadzki and L. Oliveira, Eur. Phys. J. B **91**, 136 (2018).
* Haldane (1978)F. D. M. Haldane, Phys. Rev. Lett. **40**, 416 (1978).
* Van Roermund _et al._ (2010)R. Van Roermund, S.-y. Shiau, and M. Lavagna, Phys. Rev. B **81**, 165115 (2010).
* Lavagna (2015)M. Lavagna, J. Phys.: Conf. Ser. **592**, 012141 (2015).
* Czycholl (1985)G. Czycholl, Phys. Rev. B **31**, 2867 (1985).
* Aleiner _et al._ (2002)I. L. Aleiner, P. W. Brouwer, and L. I. Glazman, Phys. Rep. **358**, 309 (2002).
* Foa Torres _et al._ (2003)L. E. F. Foa Torres, C. H. Lewenkopf, and H. M. Pastawski, Phys. Rev. Lett. **91**, 116801 (2003).
* Schoeller and Schön (1994)H. Schoeller and G. Schön, Phys. Rev. B **50**, 18436 (1994).
* Melo (2019)B. Melo, _Impurity solvers for the single impurity Anderson model: comparison and applications_, Ph.D. thesis, Universidade Federal Fluminense (2019).
* Kouwenhoven and Glazman (2001)L. Kouwenhoven and L. Glazman, Physics World **14**, 33 (2001).
* Roura-Bas (2010)P. Roura-Bas, Phys. Rev. B **81**, 1 (2010).
|
0909.3553 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 45,
"num_imgs": 0,
"llama3_tokens_count": 9
} | [] | # Developments in Quantum Phase Transitions
|
1704.07988 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 29502,
"num_imgs": 5,
"llama3_tokens_count": 8539
} | [
"content_image/1704.07988/x1.png",
"content_image/1704.07988/x2.png",
"content_image/1704.07988/x4.png",
"content_image/1704.07988/x6.png",
"content_image/1704.07988/x8.png"
] | # Joint Hybrid Precoder and Combiner Design for mmWave Spatial Multiplexing Transmission
Zihuan Wang\({}^{{\dagger}}\), Ming Li\({}^{{\dagger}}\), Xiaowen Tian\({}^{{\dagger}}\), and Qian Liu\({}^{{\ddagger}}\)
\({}^{{\dagger}}\)School of Information and Communication Engineering
Dalian University of Technology, Dalian, Liaoning 116024, China
E-mail: {wangzihuan, tianxw}@mail.dlut.edu.cn, mli@dlut.edu.cn
\({}^{{\ddagger}}\) School of Computer Science and Technology
Dalian University of Technology, Dalian, Liaoning 116024, China
E-mail: qianliu@dlut.edu.cn
###### Abstract
Millimeter-wave (mmWave) communications have been considered a key technology for future 5G wireless networks because of the orders-of-magnitude wider bandwidth than current cellular bands. In this paper, we consider the problem of codebook-based joint analog-digital hybrid precoder and combiner design for spatial multiplexing transmission in a mmWave multiple-input multiple-output (MIMO) system. We propose to jointly select an analog precoder and combiner pair for each data stream successively, aiming at maximizing the channel gain while suppressing the interference between different data streams. After all analog precoder/combiner pairs have been determined, we obtain the effective baseband channel. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further mitigate the interference and maximize the sum-rate. Simulation results demonstrate that our proposed algorithm exhibits prominent advantages in combating interference between different data streams and offers satisfactory performance improvement compared to the existing codebook-based hybrid beamforming schemes.
Millimeter-wave communication, hybrid precoder, multiple-input multiple-output (MIMO), antenna arrays, beamforming.
## I Introduction
The rapid proliferation of wireless devices has created a high demand for increasingly high data rates. Millimeter-wave (mmWave) wireless communications, operating in the frequency bands from 30-300 GHz, have been demonstrated in recent experiments to be an excellent candidate for solving the spectrum congestion problem, and are defining a new era of wireless communications because of the significantly large and unexploited mmWave frequency bands [1]-[3]. Economical and energy-efficient analog/digital hybrid precoding and combining transceiver architectures have been widely adopted in mmWave massive multiple-input multiple-output (MIMO) systems.
The hybrid precoding approach applies a large number of analog phase shifters to implement high-dimensional radio frequency (RF) precoders that compensate the large path loss at mmWave bands, and a small number of RF chains for low-dimensional digital precoders that provide the flexibility needed to perform advanced multiplexing/multiuser techniques. The major challenge in designing hybrid precoders lies in the practical constraints on the RF precoder, such as the constant-modulus constraint, which is usually imposed by the phase shifters. A popular approach to maximizing the spectral efficiency of mmWave communications is to minimize the Euclidean distance between the hybrid precoder and the full-digital precoder. Thus, the hybrid precoder design is cast as various matrix factorization problems with constant-modulus constraints on the analog precoder [4]-[7].
Existing hybrid precoding designs typically assume that the analog beamformers are implemented with infinite-resolution phase shifters. However, the components required to realize accurate phase shifters can be very complicated and expensive. Therefore, low-resolution phase shifters with discrete/quantized tunable phases are cost-effective and typically adopted in realistic systems. In particular, exploiting the special characteristics of mmWave channels, more practical codebook-based hybrid precoder designs have been widely used [8]-[15], in which the columns of the analog precoder are selected from certain candidate vectors, such as the array response vectors of the channel or discrete Fourier transform (DFT) beamformers, which have constant modulus and discrete phases.
In the existing codebook-free and codebook-based algorithms mentioned above, the hybrid precoder and combiner are designed individually to approximate, in the best Frobenius-norm sense, the right and left singular vectors of the channel matrix, respectively. While this separate design of the hybrid precoder and combiner can provide satisfactory performance in terms of spectral efficiency, the orthogonality of the resulting spatial multiplexing channel cannot be guaranteed. Therefore, the conventional hybrid precoder and combiner designs may cause significant performance loss in realistic mmWave multiplexing systems. This motivates us to reconsider the hybrid precoder and combiner design and find a better approach for spatial multiplexing in mmWave MIMO communications.
In this paper, we consider the problem of codebook-based joint hybrid precoder and combiner design for spatial multiplexing transmission in mmWave MIMO systems. We propose to jointly select the analog precoder and combiner pair for each data stream successively, which can maximize the channel gain as well as suppress the interference between different data streams. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further mitigate the interference and maximize the sum-rate. Simulation results demonstrate that our proposed algorithm exhibits prominent advantages in combating the interference between different data streams and offers satisfactory performance improvement compared to the existing codebook-based hybrid beamforming schemes.
The following notation is used throughout this paper. Boldface lower-case letters indicate column vectors and boldface upper-case letters indicate matrices; \(\mathbb{C}\) denotes the set of all complex numbers; \((\cdot)^{T}\) and \((\cdot)^{H}\) denote the transpose and transpose-conjugate operation, respectively; \(\mathbf{A}(:,l)\) denotes the \(l\)-th column of matrix \(\mathbf{A}\); \(\mathbf{I}_{L}\) is the \(L\times L\) identity matrix; \(\mathbb{E}\{\cdot\}\) represents statistical expectation. Finally, \(|\cdot|\), \(\|\cdot\|\), and \(\|\cdot\|_{F}\) are the scalar magnitude, vector norm, and Frobenius norm, respectively.
## II System Model and Problem Formulation
### _System Model_
We consider a single-user mmWave MIMO multiplexing system with a hybrid precoder and combiner, as illustrated in Fig. 1. The transmitter employs \(N_{t}\) antennas and \(N^{RF}_{t}\) RF chains to simultaneously transmit \(N_{s}\) data streams to the receiver, which is equipped with \(N_{r}\) antennas and \(N^{RF}_{r}\) RF chains. To ensure efficient communication with a limited number of RF chains, the number of data streams is constrained as \(N_{s}=N_{t}^{RF}=N_{r}^{RF}\), although the results can be applied to more general cases.
<figure><img src="content_image/1704.07988/x1.png"><figcaption>Fig. 1: A typical mmWave MIMO system with hybrid precoder and combiner.</figcaption></figure>
The transmitted symbols are first processed by a baseband precoder \(\mathbf{F}_{BB}\in\mathbb{C}^{N_{t}^{RF}\times N_{s}}\), then up-converted to the RF domain via \(N_{t}^{RF}\) RF chains before being precoded with an analog precoder \(\mathbf{F}_{RF}\) of dimension \(N_{t}\times N_{t}^{RF}\). While the baseband precoder \(\mathbf{F}_{BB}\) enables both amplitude and phase modifications, the analog precoder \(\mathbf{F}_{RF}\) has a constant amplitude \(\frac{1}{\sqrt{N_{t}}}\) for each element since it is implemented using analog phase shifters.
The discrete-time transmitted signal can be written as
\[\mathbf{x}=\sqrt{P}\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{s}\] (1)
where \(\mathbf{s}\) is the \(N_{s}\times 1\) symbol vector such that \(\mathbb{E}\{\mathbf{s}\mathbf{s}^{H}\}=\frac{1}{N_{s}}\mathbf{I}_{N_{s}}\). \(P\) represents transmit power, and the total transmit power constraint is enforced by normalizing \(\mathbf{F}_{BB}\) such that \(\|\mathbf{F}_{RF}\mathbf{F}_{BB}\|^{2}_{F}=N_{s}\).
For simplicity, we consider a narrowband block-fading propagation channel, which yields the received signal as
\[\mathbf{y}=\sqrt{P}\mathbf{H}\mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{s}+\mathbf{n}\] (2)
where \(\mathbf{y}\) is the \(N_{r}\times 1\) received vector, \(\mathbf{H}\) is the \(N_{r}\times N_{t}\) channel matrix, and \(\mathbf{n}\thicksim\mathcal{CN}(\mathbf{0},\sigma^{2}\mathbf{I}_{N_{r}})\) is the complex Gaussian noise vector corrupting the received signal.
The receiver uses its \(N_{r}^{RF}\) RF chains and phase shifters to obtain the processed receive signal which has a form of
\[\mathbf{\widehat{s}}=\sqrt{P}\mathbf{W}^{H}_{BB}\mathbf{W}^{H}_{RF}\mathbf{H} \mathbf{F}_{RF}\mathbf{F}_{BB}\mathbf{s}+\mathbf{W}^{H}_{BB}\mathbf{W}^{H}_{RF }\mathbf{n}\] (3)
where \(\mathbf{W}_{BB}\) is the \(N_{r}^{RF}\times Ns\) digital baseband combiner, \(\mathbf{W}_{RF}\) is the \(N_{r}\times N_{r}^{RF}\) analog RF combiner.
### _Millimeter-Wave MIMO Channel Model_
Due to high free-space pathloss and large tightly-packed antenna arrays, the mmWave propagation in a massive MIMO system is well characterized by a limited spatial selectivity or scattering model, e.g. the Saleh-Valenzuela model, which allows us to accurately capture the mathematical structure of mmWave channels [8]. The matrix channel \(\mathbf{H}\) is assumed to be a sum contribution of \(N_{cl}\) scattering clusters, each of which provides \(N_{ray}\) propagation paths to the channel matrix \(\mathbf{H}\). Therefore, the discrete-time narrow-band mmWave channel \(\mathbf{H}\) can be formulated as
\[\mathbf{H}=\sqrt{\frac{N_{t}N_{r}}{N_{cl}N_{ray}}}\sum_{i=1}^{N_{cl}}\sum_{l=1}^{N_{ray}}\alpha_{il}\mathbf{a}_{r}(\theta_{il}^{r})\mathbf{a}_{t}(\theta_{il}^{t})^{H}\] (4)
where \(\alpha_{il}\thicksim\mathcal{CN}(0,\sigma_{\alpha,i}^{2})\) is the complex gain of the \(l\)-th propagation path (ray) in the \(i\)-th scattering cluster, and the gains are independent and identically distributed (i.i.d.). Let \(\sigma_{\alpha,i}^{2}\) denote the average power of the \(i\)-th cluster; the total power satisfies \(\sum_{i=1}^{N_{cl}}\sigma_{\alpha,i}^{2}=N_{cl}\). \(\theta_{il}^{t}\) and \(\theta_{il}^{r}\) are the angle of departure (AoD) and the angle of arrival (AoA), respectively, which are assumed to be Laplacian-distributed with mean cluster angles \(\theta_{i}^{t}\) and \(\theta_{i}^{r}\) and angle spreads \(\sigma_{\theta_{i}^{t}}\) and \(\sigma_{\theta_{i}^{r}}\), respectively. Finally, \(\mathbf{a}_{r}(\theta)\) and \(\mathbf{a}_{t}(\theta)\) are the receive and transmit antenna array response vectors, which depend only on the antenna array structure. When the commonly used uniform linear arrays (ULAs) are considered, the receive antenna array response vector can be written as
\[\mathbf{a}_{r}(\theta)=\frac{1}{\sqrt{N_{r}}}[1,{e}^{j\frac{2\pi}{\lambda}d \sin(\theta)},\ldots,{e}^{j(N_{r}-1)\frac{2\pi}{\lambda}d\sin(\theta)}]^{T} \vspace{-0.1 cm}\] (5)
where \(\lambda\) is the signal wavelength, and \(d\) is the distance between antenna elements. The transmit array response vector \(\mathbf{a}_{t}(\theta)\) can be written in a similar fashion.
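To make the channel model concrete, the following Python sketch draws one narrowband clustered channel realization following Eqs. (4)-(5) for ULAs with half-wavelength element spacing. The uniform cluster powers, the fixed 7.5° angle spread, the uniformly drawn cluster angles, and the array sizes are illustrative assumptions, not parameters taken from this paper's simulation setup.

```python
import numpy as np

def ula_response(theta, N):
    """ULA array response vector of Eq. (5) with d = lambda/2."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def mmwave_channel(Nt, Nr, Ncl=5, Nray=10, spread=np.deg2rad(7.5), seed=0):
    """Narrowband clustered (Saleh-Valenzuela-type) channel of Eq. (4),
    with unit average power per cluster (illustrative choice)."""
    rng = np.random.default_rng(seed)
    H = np.zeros((Nr, Nt), dtype=complex)
    for _ in range(Ncl):
        # mean AoD / AoA of the cluster, drawn uniformly over [0, 2*pi)
        phi_t, phi_r = rng.uniform(0.0, 2.0 * np.pi, size=2)
        for _ in range(Nray):
            # Laplacian-distributed ray offsets with standard deviation = spread
            dt = rng.laplace(scale=spread / np.sqrt(2.0))
            dr = rng.laplace(scale=spread / np.sqrt(2.0))
            alpha = (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)
            H += alpha * np.outer(ula_response(phi_r + dr, Nr),
                                  ula_response(phi_t + dt, Nt).conj())
    return np.sqrt(Nt * Nr / (Ncl * Nray)) * H

H = mmwave_channel(Nt=64, Nr=16)
print("H shape:", H.shape, " ||H||_F^2 =", round(np.linalg.norm(H, "fro")**2, 2))
```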
### _Problem Formulation_
We consider the problem of codebook-based hybrid precoder and combiner design in a mmWave multiplexing system. Specifically, let \(\mathcal{F}\) and \(\mathcal{W}\) denote the beamsteering codebooks for the analog precoder and combiner, respectively. If \(B^{RF}_{t}\) (\(B^{RF}_{r}\)) bits are used to quantize the AoD (AoA), \(\mathcal{F}\) and \(\mathcal{W}\) will consist of all possible analog precoding and combining vectors, which can be presented as
\[\mathcal{F} = \{\mathbf{a}_{t}(2\pi i/2^{B_{t}^{RF}}):i=1,\ldots,2^{B_{t}^{RF}}\},\] (6)
\[\mathcal{W} = \{\mathbf{a}_{r}(2\pi i/2^{B_{r}^{RF}}):i=1,\ldots,2^{B_{r}^{RF}}\}.\] (7)
The columns of analog precoding (combining) matrix \(\mathbf{F}_{RF}\) (\(\mathbf{W}_{RF}\)) are picked from candidate vectors in \(\mathcal{F}\) (\(\mathcal{W}\)), i.e. \(\mathbf{F}_{RF}(:,l)\in\mathcal{F},\forall l=1,\ldots,N_{t}^{RF}\), \(\mathbf{W}_{RF}(:,l)\in\mathcal{W},\forall l=1,\ldots,N_{r}^{RF}\).
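One possible construction of the beamsteering codebooks (6)-(7) is sketched below; the ULA response function is repeated here so the snippet is self-contained, and the quantization resolutions \(B_{t}^{RF}=6\) and \(B_{r}^{RF}=4\) bits are illustrative assumptions.

```python
import numpy as np

def ula_response(theta, N):
    """ULA response vector with d = lambda/2, as in Eq. (5)."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

def beamsteering_codebook(N, B):
    """Codebook of Eqs. (6)-(7): columns a(2*pi*i / 2^B) for i = 1, ..., 2^B."""
    angles = 2.0 * np.pi * np.arange(1, 2 ** B + 1) / 2 ** B
    return np.column_stack([ula_response(a, N) for a in angles])

F_cb = beamsteering_codebook(N=64, B=6)   # candidate analog precoding vectors
W_cb = beamsteering_codebook(N=16, B=4)   # candidate analog combining vectors
print("|F| =", F_cb.shape[1], " |W| =", W_cb.shape[1],
      " column norm =", round(np.linalg.norm(F_cb[:, 0]), 3))
```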
When Gaussian symbols are transmitted over the mmWave MIMO channel, the achieved spectral efficiency is given by
\[R = \mathrm{log}_{2}\Bigg{(}\bigg{|}\mathbf{I}_{N_{s}}+\frac{P}{N_{s} }\mathbf{R}_{n}^{-1}\mathbf{W}^{H}_{BB}\mathbf{W}^{H}_{RF}\mathbf{H}\mathbf{F} _{RF}\mathbf{F}_{BB}\times\] (8)
\[\mathbf{F}_{BB}^{H}\mathbf{F}_{RF}^{H}\mathbf{ H}^{H}\mathbf{W}_{RF}\mathbf{W}_{BB}\vspace{1cm}\bigg{|}\Bigg{)},\]
where \(\mathbf{R}_{n}\triangleq\sigma_{n}^{2}\mathbf{W}^{H}_{BB}\mathbf{W}^{H}_{RF} \mathbf{W}_{RF}\mathbf{W}_{BB}\) is the noise covariance matrix after combining.
While most existing hybrid precoder and combiner design algorithms aim to maximize the spectral efficiency in (8), we should point out that it is actually a performance upper bound for a general MIMO system. Even though precoder and combiner designs that approximate the right and left singular vectors of the channel matrix can provide satisfactory spectral efficiency, they cannot guarantee the orthogonality of the resulting effective spatial multiplexing channel for the transmission of multiple data streams. Therefore, a practical spatial multiplexing system needs a more appropriate performance metric, such as the sum-rate over the data streams, which is described as follows.
Given the received signal in (3), the signal-to-interference-plus-noise ratio (SINR) of the \(k\)-th data stream is formulated by
\[\gamma_{k}=\frac{\frac{P}{N_{s}}\mid\mathbf{W}(:,k)^{H}\mathbf{H}\mathbf{F}(:, k)\mid^{2}}{\frac{P}{N_{s}}\sum\limits_{\begin{subarray}{c}i=1,i\neq k \end{subarray}}^{N_{s}}\mid\mathbf{W}(:,k)^{H}\mathbf{H}\mathbf{F}(:,i)\mid^{2 }+\sigma_{n}^{2}\|\mathbf{W}(:,k)\|^{2}}\] (9)
where \(\mathbf{F}\triangleq\mathbf{F}_{RF}\mathbf{F}_{BB}\) and \(\mathbf{W}\triangleq\mathbf{W}_{RF}\mathbf{W}_{BB}\). The achievable sum-rate of the spatial multiplexing system is
\[R_{\mathrm{sum}}=\sum_{k=1}^{N_{s}}\log(1+\gamma_{k}).\] (10)
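Given overall precoding and combining matrices \(\mathbf{F}=\mathbf{F}_{RF}\mathbf{F}_{BB}\) and \(\mathbf{W}=\mathbf{W}_{RF}\mathbf{W}_{BB}\), the per-stream SINR of Eq. (9) and the sum-rate of Eq. (10) can be evaluated as sketched below. The random channel and the random, unconstrained \(\mathbf{F}\) and \(\mathbf{W}\) at the bottom are placeholders standing in for an actual hybrid design, used only to exercise the function.

```python
import numpy as np

def sum_rate(H, F, W, P=1.0, sigma2=1.0):
    """Sum-rate of Eqs. (9)-(10) for Ns = F.shape[1] data streams (log base 2)."""
    Ns = F.shape[1]
    rate = 0.0
    for k in range(Ns):
        wk = W[:, k]
        gains = np.abs(wk.conj() @ H @ F) ** 2        # |w_k^H H f_i|^2 for all i
        signal = (P / Ns) * gains[k]
        interference = (P / Ns) * (gains.sum() - gains[k])
        noise = sigma2 * np.linalg.norm(wk) ** 2
        rate += np.log2(1.0 + signal / (interference + noise))
    return rate

rng = np.random.default_rng(1)
Nr, Nt, Ns = 16, 64, 4
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2.0)
F = rng.normal(size=(Nt, Ns)) + 1j * rng.normal(size=(Nt, Ns))
F *= np.sqrt(Ns) / np.linalg.norm(F)                  # enforce ||F||_F^2 = Ns
W = rng.normal(size=(Nr, Ns)) + 1j * rng.normal(size=(Nr, Ns))
print("sum-rate [bits/s/Hz]:", round(sum_rate(H, F, W, P=10.0), 2))
```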
In this paper, we aim to jointly design the precoders \(\mathbf{F}_{RF}\), \(\mathbf{F}_{BB}\) and combiners \(\mathbf{W}_{RF}\), \(\mathbf{W}_{BB}\) to maximize the sum-rate of the mmWave multiplexing system, which can be formulated as
\[\hskip-17.071654pt\left\{\mathbf{F}_{RF}^{\star},\mathbf{F}_{BB}^ {\star},\mathbf{W}_{RF}^{\star},\mathbf{W}_{BB}^{\star}\right\}=\arg\max\sum_{ k=1}^{N_{s}}\log\left(1+\gamma_{k}\right)\] (11)
\[\hskip 22.762205pt\textrm{s. t.}~{}~{}~{}~{}\mathbf{F}_{RF}(:,l) \in\mathcal{F},\forall l=1,\ldots,N_{t}^{RF},\]
\[\hskip 51.214961pt\mathbf{W}_{RF}(:,l)\in\mathcal{W},\forall l=1, \ldots,N_{r}^{RF},\]
\[\hskip 51.214961pt\|\mathbf{F}_{RF}\mathbf{F}_{BB}\|^{2}_{F}=N_{s}.\]
The optimization problem (11) is a non-convex, NP-hard problem. In the next section, we therefore seek a sub-optimal joint hybrid precoder and combiner design that reduces the complexity while achieving satisfactory performance.
## III Proposed Joint Hybrid Precoder and Combiner Design
To implement an efficient multiplexing system and maximize the sum-rate in (10), we need to design precoders and combiners that enhance the channel gain of each data stream while suppressing the interference between streams. To this end, we propose to decompose the difficult optimization problem (11) into a series of sub-problems that are much easier to solve. In particular, considering transmit/receive RF chain pairs one by one, we successively select the analog precoder and combiner to maximize the corresponding channel gain while suppressing the co-channel interference. Then, the baseband digital precoder and combiner are computed to further mitigate the interference and maximize the sum-rate.
For the first data stream channel (i.e. \(k=1\)), we attempt to find the optimal analog precoder and combiner pair \((\mathbf{f}_{{RF}_{1}}^{\star},\mathbf{w}_{{RF}_{1}}^{\star})\) from codebooks in order to obtain the largest beamforming gain:
\[\left\{\mathbf{f}_{{RF}_{1}}^{\star},\mathbf{w}_{{RF}_{1}}^{\star}\right\}= \textrm{arg}\underset{\begin{subarray}{c}\mathbf{w}_{RF}\in\mathcal{W}\\ \mathbf{f}_{RF}\in\mathcal{F}\end{subarray}}{\textrm{max}}|\mathbf{w}^{H}_{RF} \mathbf{H}\mathbf{f}_{RF}|\] (12)
which can be easily solved by searching all candidate vectors in \(\mathcal{F}\) and \(\mathcal{W}\) with computational complexity \(O(|\mathcal{F}||\mathcal{W}|)\). Then, we assign \(\mathbf{f}^{\star}_{RF_{1}}\) and \(\mathbf{w}^{\star}_{RF_{1}}\) to the precoding and combining matrices
\[\mathbf{F}^{\star}_{RF}(:,1)=\mathbf{f}^{\star}_{RF_{1}},\] (13)
\[\mathbf{W}^{\star}_{RF}(:,1)=\mathbf{w}^{\star}_{RF_{1}}.\] (14)
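A minimal sketch of the exhaustive codebook search in (12) is given below, under the assumption that the codebooks are stored column-wise in matrices; the function and variable names are hypothetical.

```python
import numpy as np

def select_first_pair(H, F_cb, W_cb):
    """Exhaustive solution of (12): the codeword pair maximizing |w^H H f|.

    F_cb : (Nt, |F|) analog precoding codebook, one candidate per column
    W_cb : (Nr, |W|) analog combining codebook, one candidate per column
    """
    gains = np.abs(W_cb.conj().T @ H @ F_cb)        # all |w^H H f| values at once
    iw, jf = np.unravel_index(np.argmax(gains), gains.shape)
    return F_cb[:, jf].copy(), W_cb[:, iw].copy()
```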
For the remaining \(K-1\) data streams, we successively select precoders and combiners that actively avoid interference with the data streams whose precoders and combiners have already been determined. In particular, the components of the previously determined precoders and combiners should be removed from the other data streams’ channels so that similar analog precoders and combiners will not be selected for the other data streams. To achieve this goal, we first initialize \(\mathbf{\widetilde{H}}=\mathbf{H}\), which will be updated successively in each step of selecting an analog beamformer pair. Let \(\mathbf{p}_{1}\triangleq\mathbf{f}^{\star}_{{RF}_{1}}\) and \(\mathbf{q}_{1}\triangleq\mathbf{w}^{\star}_{{RF}_{1}}\) be the components of the determined analog precoder and combiner for the first data stream, respectively. Then, before choosing the next analog precoder and combiner pair, the channel should be updated by eliminating the orthogonal components of the previously determined analog beamformer pairs. Particularly, before finding the second (i.e. \(k=2\)) analog precoder and combiner pair, we need to update the channel as
\[\mathbf{\widetilde{H}}=(\mathbf{I}_{N_{r}}-\mathbf{q}_{1}\mathbf{q}_{1}^{H}) \mathbf{\widetilde{H}}(\mathbf{I}_{N_{t}}-\mathbf{p}_{1}\mathbf{p}_{1}^{H})\] (15)
and then execute the search process as
\[\left\{\mathbf{f}_{{RF}_{2}}^{\star},\mathbf{w}_{{RF}_{2}}^{\star}\right\}= \textrm{arg}\underset{\begin{subarray}{c}\mathbf{w}_{RF}\in\mathcal{W}\\ \mathbf{f}_{RF}\in\mathcal{F}\end{subarray}}{\textrm{max}}|\mathbf{w}^{H}_{RF} \mathbf{\widetilde{H}}\mathbf{f}_{RF}|.\] (16)
The analog precoders and combiners for the remaining data streams can be successively selected using the above procedure. Note that when \(k>1\), the orthonormal components \(\mathbf{p}_{k}\) and \(\mathbf{q}_{k}\) of the selected precoder \(\mathbf{f}_{{RF}_{k}}^{\star}\) and combiner \(\mathbf{w}_{{RF}_{k}}^{\star}\) should be obtained by a Gram-Schmidt based procedure:
\[\mathbf{p}_{k}=\mathbf{f}_{{RF}_{k}}^{\star}-\sum\limits_{i=1}^{k-1}\mathbf{p} ^{H}_{i}\mathbf{f}_{{RF}_{k}}^{\star}\mathbf{p}_{i},\]
\[\mathbf{p}_{k}=\mathbf{p}_{k}/\|\mathbf{p}_{k}\|,k=2,\ldots,K;\] (17)
\[\mathbf{q}_{k}=\mathbf{w}_{{RF}_{k}}^{\star}-\sum\limits_{i=1}^{k-1}\mathbf{q} ^{H}_{i}\mathbf{w}_{{RF}_{k}}^{\star}\mathbf{q}_{i},\]
\[\mathbf{q}_{k}=\mathbf{q}_{k}/\|\mathbf{q}_{k}\|,k=2,\ldots,K.\] (18)
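The successive selection described by (12)-(18) could be sketched as follows. This is only an illustrative implementation under the assumption that the codebooks are stored column-wise; it combines the codebook search, the Gram-Schmidt step of (17)-(18), and the channel update of (15).

```python
import numpy as np

def successive_analog_design(H, F_cb, W_cb, Ns):
    """Successive analog precoder/combiner selection following (12)-(18)."""
    Nt, Nr = F_cb.shape[0], W_cb.shape[0]
    F_RF = np.zeros((Nt, Ns), dtype=complex)
    W_RF = np.zeros((Nr, Ns), dtype=complex)
    P_comp, Q_comp = [], []          # orthonormal components p_k and q_k
    H_tilde = H.copy()
    for k in range(Ns):
        # Codebook search on the current (deflated) channel, as in (12)/(16)
        gains = np.abs(W_cb.conj().T @ H_tilde @ F_cb)
        iw, jf = np.unravel_index(np.argmax(gains), gains.shape)
        f, w = F_cb[:, jf].copy(), W_cb[:, iw].copy()
        F_RF[:, k], W_RF[:, k] = f, w
        # Gram-Schmidt step (17)-(18) against previously stored components
        p, q = f.copy(), w.copy()
        for pi, qi in zip(P_comp, Q_comp):
            p -= (pi.conj() @ f) * pi
            q -= (qi.conj() @ w) * qi
        p /= np.linalg.norm(p)
        q /= np.linalg.norm(q)
        P_comp.append(p)
        Q_comp.append(q)
        # Channel update (15): remove the chosen directions for later streams
        H_tilde = (np.eye(Nr) - np.outer(q, q.conj())) @ H_tilde \
                  @ (np.eye(Nt) - np.outer(p, p.conj()))
    return F_RF, W_RF
```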
[TABLE:S3.T1][ENDTABLE]
After all analog beamformer pairs have been determined, we can obtain the effective baseband channel as
\[\mathbf{H}_{eff}=(\mathbf{W}_{RF}^{\star})^{H}\mathbf{H}\mathbf{F}_{RF}^{\star}\] (19)
where \(\mathbf{F}_{RF}^{\star}\triangleq[\mathbf{f}^{\star}_{RF_{1}},\ldots,\mathbf{f }^{\star}_{RF_{N_{s}}}]\) and \(\mathbf{W}_{RF}^{\star}\triangleq[\mathbf{w}^{\star}_{RF_{1}},\ldots,\mathbf{w }^{\star}_{RF_{N_{s}}}]\). For the baseband precoder and combiner design, we perform singular value decomposition (SVD)
\[\mathbf{H}_{eff}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{H},\] (20)
where \(\mathbf{U}\) and \(\mathbf{V}\) are unitary matrices whose dimensions match the effective baseband channel \(\mathbf{H}_{eff}\) (\(N_{s}\times N_{s}\) with the definitions above), and \(\mathbf{\Sigma}\) is a diagonal matrix of singular values arranged in decreasing order. Then, an SVD-based baseband digital precoder and combiner are employed to further suppress the interference and maximize the sum-rate:
\[\mathbf{F}^{\star}_{{BB}}=\mathbf{V}(:,1:N_{s}),\] (21)
\[\mathbf{W}^{\star}_{{BB}}=\mathbf{U}(:,1:N_{s}).\] (22)
Finally, we normalize the baseband precoder \(\mathbf{F}^{\star}_{{BB}}\) by
\[\mathbf{F}^{\star}_{{BB}}=\frac{\sqrt{N_{s}}\mathbf{F}^{\star}_{{BB}}}{\| \mathbf{F}^{\star}_{RF}\mathbf{F}^{\star}_{{BB}}\|_{F}}.\] (23)
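A short sketch of the baseband stage in (19)-(23) is shown below. It assumes `numpy.linalg.svd`, which returns the singular values in decreasing order, and treats the effective channel as the matrix defined in (19); names are illustrative.

```python
import numpy as np

def baseband_design(H, F_RF, W_RF, Ns):
    """Baseband stage (19)-(23): SVD of the effective channel plus normalization."""
    H_eff = W_RF.conj().T @ H @ F_RF                 # effective channel, (19)
    U, s, Vh = np.linalg.svd(H_eff)                  # H_eff = U diag(s) Vh
    F_BB = Vh.conj().T[:, :Ns]                       # right singular vectors, (21)
    W_BB = U[:, :Ns]                                 # left singular vectors, (22)
    F_BB *= np.sqrt(Ns) / np.linalg.norm(F_RF @ F_BB, 'fro')   # power constraint, (23)
    return F_BB, W_BB
```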
This joint hybrid precoder and combiner design algorithm is summarized in Table I.
## IV Simulation Results
In this section, we illustrate the simulation results of the proposed joint hybrid precoder and combiner design. Both the transmitter and receiver are equipped with a \(128\)-antenna ULA and the antenna spacing is \(d=\frac{\lambda}{2}\). The numbers of RF chains at the transmitter and receiver are \(N_{t}^{RF}=N_{r}^{RF}=4\), as is the number of data streams, \(N_{s}=4\). The channel parameters are set as \(N_{cl}=10\) clusters, \(N_{ray}=10\) rays per cluster, and the average power of the \(i\)-th cluster is \(\sigma^{2}_{\alpha,i}=c\left(\frac{7}{10}\right)^{i}\), where \(c=N_{cl}\left(\sum_{i=1}^{N_{cl}}(\frac{7}{10})^{i}\right)^{-1}\). The azimuths of the AoAs/AoDs within a cluster are assumed to be Laplacian-distributed with an angle spread of \(\sigma_{\theta_{i}^{r}}=\sigma_{\theta_{i}^{t}}=2.5^{\circ}\). The mean cluster AoDs are assumed to be uniformly distributed over \([0,2\pi]\), while the mean cluster AoAs are uniformly distributed over an arbitrary \(\frac{\pi}{3}\) sector. Finally, we employ a codebook consisting of array response vectors with \(64\) uniformly quantized angle resolutions.
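For reference, one possible way to build the codebook of ULA array response vectors with uniformly quantized angles is sketched below; the angle grid over \([-\pi/2,\pi/2)\) and the function name are assumptions, since the exact quantization used in the simulations is not specified beyond the 64-level resolution.

```python
import numpy as np

def ula_codebook(N, n_angles=64, d_over_lambda=0.5):
    """Codebook of ULA array response vectors at uniformly quantized angles.

    a(theta)_n = (1/sqrt(N)) * exp(j * 2*pi * (d/lambda) * n * sin(theta)),
    n = 0, ..., N-1.  Returns an (N, n_angles) matrix, one codeword per column.
    """
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    n = np.arange(N)[:, None]
    phase = 2.0 * np.pi * d_over_lambda * n * np.sin(thetas)[None, :]
    return np.exp(1j * phase) / np.sqrt(N)
```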
<figure><img src="content_image/1704.07988/x2.png"><figcaption>Fig. 2: Spectral efficiency versus SNR (Nt=Nr=128, NRFt=NRFr=4, Ns=4).</figcaption></figure>
<figure><img src="content_image/1704.07988/x4.png"><figcaption>Fig. 4: Spectral efficiency versus SNR (Nt=Nr=128, NRFt=NRFr=8, Ns=8).</figcaption></figure>
<figure><img src="content_image/1704.07988/x6.png"><figcaption>Fig. 6: Spectral efficiency versus the number of antennas (SNR=20dB,NRFt=NRFr=4, Ns=4).</figcaption></figure>
<figure><img src="content_image/1704.07988/x8.png"><figcaption>Fig. 8: Spectral efficiency versus Ns (Nt=Nr=128, SNR=20dB).</figcaption></figure>
Fig. 2 shows the spectral efficiency versus signal-to-noise ratio (SNR) over \(10^{6}\) channel realizations. For comparison, we also include two state-of-the-art algorithms: _i_) Spatially Sparse Precoding (SSP) [8], which is a classic codebook-based hybrid precoding design; _ii_) the Alternating Minimization using Phase Extraction (PE AltMin) algorithm [6], which is a codebook-free hybrid precoding design. The optimal (OPT) full-digital beamforming scheme with the unconstrained SVD algorithm is also plotted as the performance benchmark. It can be observed that our proposed algorithm outperforms the codebook-based SSP algorithm. Note that the PE AltMin algorithm uses continuous phases in the RF beamformer and can achieve performance extremely close to the optimal full-digital approach. Therefore, we consider it only as a reference for codebook-free algorithms and do not compare it directly with our proposed algorithm. Fig. 3 presents the sum-rate versus SNR with the same system settings as Fig. 2. We can see that the proposed joint hybrid precoder and combiner design has a significant performance advantage over the other two hybrid beamforming designs. This is because our joint analog precoder and combiner selection approach aims to mitigate the interference between different data streams. We also consider a different setting in which the numbers of RF chains and data streams are both set to \(8\). In Figs. 4 and 5, the spectral efficiency and sum-rate are presented, respectively. From these two figures, we can draw similar conclusions: the proposed algorithm can significantly suppress the inter-stream interference and take advantage of spatial multiplexing.
In Figs. 6 and 7, we turn to illustrate how the number of transmit and receive antennas affects the spectral efficiency and sum-rate performance. We assume \(N_{t}=N_{r}=N\), varying from \(16\) to \(256\). The SNR is set at \(20\)dB and \(N_{t}^{RF}=N_{r}^{RF}=N_{s}=4\). It can be observed from these two figures that the proposed algorithm has a significant advantage over the SSP approach in both spectral efficiency and sum-rate performance. Interestingly, we also notice that the SSP scheme may exhibit severe performance degradation when the resolution of the codebook is less than the number of antennas. This phenomenon is clearly shown in Fig. 7 with a turning point around \(N=64\).
Figs. 8 and 9 provide the spectral efficiency and sum-rate versus the number of data streams \(N_{s}\), respectively. The number of transmit and receive RF chains varies along with \(N_{s}\). We can see that all three approaches achieve comparable performance in terms of spectral efficiency. However, only our proposed algorithm maintains a satisfactory sum-rate as the number of data streams increases. In addition, the strong interference between different data streams even causes a performance loss for the SSP algorithm.
## V Conclusions
This paper investigated the problem of codebook-based joint hybrid precoder and combiner design for spatial multiplexing transmission in mmWave MIMO systems. We proposed to jointly select an analog precoder and combiner pair for each data stream successively, aiming to maximize the channel gain while suppressing the interference between different data streams. Then, the digital precoder and combiner were computed based on the obtained effective baseband channel to further mitigate the interference and maximize the sum-rate. Simulation results demonstrated the performance improvement of our proposed algorithm compared to existing codebook-based hybrid beamforming schemes.
## References
* [1] Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” _IEEE Commun. Mag._, vol. 49, no. 6, pp. 101-107, June 2011.
* [2] T. Rappaport, S. Sun, R. Mayzus, H. Zhao, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi and F. Gutierrez “Millimeter wave mobile communications for 5G cellular: It will work!” _IEEE Access_, vol. 1, pp. 335-349, 2013.
* [3] R. W. Heath Jr., N. González-Prelcic, S. Rangan, W. Roh, and A. M. Sayeed, “An overview of signal processing techniques for millimeter wave MIMO systems,” _IEEE J. Sel. Topics Signal Process._, vol. 10, no. 3, pp. 436-453, Apr. 2016.
* [4] X. Gao, L. Dai, S. Han, C.-L. I, and R. W. Heath Jr., “Energy-efficient hybrid analog and digital precoding for mmWave MIMO systems with large antenna arrays,” _IEEE J. Sel. Areas Commun._, vol. 34, no. 4, pp. 998-1009, April 2016.
* [5] L. Dai, X. Gao, J. Quan, S. Han, and C.-L. I, “Near-optimal hybrid analog and digital precoding for downlink mmWave massive MIMO systems,” in _Proc. IEEE Int. Conf. Commun. (ICC)_, London UK, June 2015, pp. 1334-1339.
* [6] X. Yu, J.-C. Shen, J. Zhang, and K. B. Letaief, “Alternating minimization algorithms for hybrid precoding in millimeter wave MIMO systems,” _IEEE J. Sel. Topics Signal Process._, vol. 10, no. 3, pp. 485-500, Apr. 2016.
* [7] C. Rusu, R. Méndez-Rial, N. González-Prelcic, and Robert W. Heath Jr., “Low Complexity Hybrid Precoding Strategies for Millimeter Wave Communication Systems,” _IEEE Trans. Wireless Commun._, vol. 15, no. 12, pp. 8380-8393, Dec. 2016.
* [8] O. E. Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath Jr., “Spatially sparse precoding in millimeter wave MIMO systems,” _IEEE Trans. Wireless Commun._, vol. 13, no. 3, pp. 1499-1513, Mar. 2014.
* [9] A. Alkhateeb, O. El Ayach, G. Leus, and R. W. Heath Jr., “Channel estimation and hybrid precoding for millimeter wave cellular systems,” _IEEE Journal of Selected Topics in Signal Processing_, vol. 8, no. 5, pp. 831-846, Oct. 2014.
* [10] A. Alkhateeb, G. Leus, and R. W. Heath Jr., “Limited feedback hybrid precoding for multi-user millimeter wave systems,” _IEEE Trans. Wireless Commun._, vol. 14, no. 11, pp. 6481-6494, Nov. 2015.
* [11] A. Alkhateeb and R. W. Heath Jr., “Frequency selective hybrid precoding for limited feedback millimeter wave systems,” _IEEE Trans. Commun._, vol. 64, no. 5, pp. 1801-1818, May 2016.
* [12] Y. Lee, C.-H. Wang, and Y.-H. Huang, “A hybrid RF/baseband precoding processor based on parallel-index-selection matrix-inversion-bypass simultaneous orthogonal matching pursuit for millimeter wave MIMO systems,” _IEEE Trans. Signal Process._, vol. 63, no. 2, pp. 305-317, Jan. 2015.
* [13] M. Kim and Y. Lee, “MSE-based hybrid RF/baseband processing for millimeter wave communication systems in MIMO interference channels,” _IEEE Trans. Veh. Technol._, vol. 64, no. 6, pp. 2714-2720, June 2015.
* [14] X. Gao, L. Dai, C. Yuen, and Z. Wang, “Turbo-like beamforming based on Tabu search algorithm for millimeter-wave massive MIMO systems,” _IEEE Trans. Veh. Technol._, vol. 65, no. 7, pp. 5731-5737, July 2016.
* [15] O. El Ayach, R. W. Heath Jr., S. Rajagopal, and Z. Pi, “Multimode precoding in millimeter wave MIMO transmitters with multiple antenna sub-arrays,” in _Proc. IEEE Global Commun. Conf. (GLOBECOM)_, Atlanta, USA, Dec. 2013, pp. 3476-3480.
|
1606.02294 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 30083,
"num_imgs": 4,
"llama3_tokens_count": 8728
} | [
"content_image/1606.02294/x1.png",
"content_image/1606.02294/x2.png",
"content_image/1606.02294/x3.png",
"content_image/1606.02294/x4.png"
] | # Beyond the Kuiper Belt Edge: New High Perihelion Trans-Neptunian Objects With Moderate Semi-major Axes and Eccentricities
Scott S. Sheppard¹ , Chadwick Trujillo² and David J. Tholen³
[FOOTNOTE:1][ENDFOOTNOTE]
[FOOTNOTE:2][ENDFOOTNOTE]
[FOOTNOTE:3][ENDFOOTNOTE]
###### Abstract
We are conducting a survey for distant solar system objects beyond the Kuiper Belt edge (\(\sim 50\) AU) with new wide-field cameras on the Subaru and CTIO telescopes. We are interested in the orbits of objects that are decoupled from the giant planet region in order to understand the structure of the outer solar system, including whether a massive planet exists beyond a few hundred AU as first reported in Trujillo and Sheppard (2014). In addition to discovering extreme trans-Neptunian objects detailed elsewhere, we have found several objects with high perihelia (\(q>40\) AU) that differ from the extreme and inner Oort cloud objects due to their moderate semi-major axes (\(50<a<100\) AU) and eccentricities (\(e\lesssim 0.3\)). Newly discovered objects 2014 FZ71 and 2015 FJ345 have the third and fourth highest perihelia known after Sedna and 2012 VP113, yet their orbits are not nearly as eccentric or distant. We found several of these high perihelion but moderate orbit objects and observe that they are mostly near Neptune mean motion resonances and have significant inclinations (\(i>20\) degrees). These moderate objects likely obtained their unusual orbits through combined interactions with Neptune’s mean motion resonances and the Kozai resonance, similar to the origin scenarios for 2004 XR190. We also find the distant 2008 ST291 has likely been modified by the MMR+KR mechanism through the 6:1 Neptune resonance. We discuss these moderately eccentric, distant objects along with some other interesting low inclination outer classical belt objects like 2012 FH84 discovered in our ongoing survey.
Kuiper belt: general – Oort Cloud – comets: general – minor planets, asteroids: general – planets and satellites: individual (Sedna, 2012 VP113, 2004 XR190, 2008 ST291, 2014 FZ71, 2015 FJ345, 2012 FH84)
## 1 Introduction
The Kuiper Belt is composed of small icy bodies just beyond Neptune. It has been dynamically and collisionally processed (Morbidelli et al. 2008; Petit et al. 2011). Much of the structure of the Kuiper Belt can be explained through interactions with Neptune (Dawson and Murray-Clay 2012; Nesvorny 2015a,2015b; Nesvorny and Vokrouhlicky 2016). The Neptune resonant objects were likely emplaced by Neptune’s outward migration (Malhotra 1995; Gomes et al. 2005; Gladman et al. 2012; Sheppard 2012). The scattered objects not in resonance have large eccentricities with perihelia near Neptune (\(q<38\) AU) suggesting strong interactions with the planet (Gomes et al. 2008; Brasil et al. 2014a). Extreme trans-Neptunian (ETNOs) or inner Oort cloud objects have high perihelia (\(q>40\) AU), large semi-major axes (\(a>150\) AU) and large eccentricities (Gladman et al. 2002; Morbidelli and Levison 2004; Brown et al. 2004; Gomes et al. 2005,2006; Trujillo and Sheppard 2014). These extreme objects are currently decoupled from the giant planets but must have interacted with something in the past to obtain their extreme orbits (Kenyon and Bromley 2004; Gladman and Chan 2006; Schwamb et al. 2010; Brasser et al. 2012; Soares and Gomes 2013). The similarity in the extreme objects’ orbital angles suggests they are being shepherded by an unseen massive distant planet (Trujillo and Sheppard 2014; Batygin and Brown 2016). There is an edge to the Kuiper Belt for low to moderately eccentric objects around 48 AU (Jewitt et al. 1998; Trujillo and Brown 2001; Allen et al. 2002).
Early dynamical simulations showed objects scattered by Neptune could obtain high perihelia, moderately eccentric orbits from Neptune interactions (Torbett & Smoluchowski 1990; Holman and Wisdom 1993; Malhotra 1995). Until now, only one object, 2004 XR190, was known to have a perihelion significantly beyond the Kuiper Belt edge yet only have a moderate eccentricity and moderate semi-major axis (Allen et al. 2006). 2004 XR190 likely obtained its high perihelion during Neptune’s outward migration, where the combined effect of the 8:3 Neptune Mean Motion Resonance (MMR) along with the Kozai (or Lidov-Kozai) Resonance (KR) modified the eccentricity and inclination of 2004 XR190 to obtain a very high perihelion (Gomes et al. 2008; Gomes 2011). The MMR+KR high perihelion objects may allow insights into the past migrational history of Neptune. In this letter we report several new high perihelia objects (\(q>40\) AU) that have only moderate eccentricities (\(e\lesssim 0.3\)) and semi-major axes (\(50<a<100\) AU) showing a significant population of these objects exist. This work is part of our ongoing survey and here we focus on the moderate objects found beyond the Kuiper Belt edge.
## 2 Observations
The basic methodology of the survey has been published in Trujillo and Sheppard (2014) and further details will be published elsewhere (Sheppard and Trujillo in prep). The majority of the area was surveyed with the CTIO 4m Blanco telescope in Chile using the 2.7 square degree Dark Energy Camera (DECam). DECam has 62 \(2048\times 4096\) pixel CCD chips with a scale of 0.26 arcseconds per pixel (Flaugher et al. 2015). The r-band filter was used during the early observing runs (November and December 2012 and March, May and November 2013), reaching to about 24th magnitude, while the wide VR filter was used in the later observations (March and September 2014 and April 2015) to about 24.5 magnitudes. In addition, we have used the Subaru 8m telescope in Hawaii with its 1.5 square degree HyperSuprimeCam. HyperSuprimeCam has 110 CCD chips with a scale of 0.17 arcseconds per pixel. The observations were obtained in March and May 2015 to just over 25th magnitude in the r-band. We covered 1078 and 72 square degrees at CTIO and Subaru, respectively, for a total of 1150 square degrees.
Most fields had three images of similar depth obtained over 3 to 6 hours. Observations were within 1.5 hours of opposition, which means the dominant apparent motion would be parallactic, and thus inversely related to distance. The seeing was between 0.6 and 1.2 arcseconds for most fields allowing us to detect objects moving faster than 0.28 arcseconds per hour, which corresponds to about 500 AU at opposition, though many fields would have detected objects to over 1000 AU (determined by placing artificial slow objects in the fields). Anything discovered beyond 50 AU was flagged for future recovery. Most of the survey fields were between 5 and 20 degrees from the ecliptic with fairly uniform longitudinal coverage.
## 3 Results
The new objects discovered in our survey are shown with the well known outer solar system objects in Figure 1. The region of orbital space beyond 50 AU in semi-major axis but with moderate to low eccentricities (\(e\lesssim 0.3\)) has been called the Kuiper Belt edge since only 2004 XR190 was known to occupy this area until now. Several of our new objects have perihelia well above the generally accepted perihelion limit where Neptune has significant influence (\(q>40-41\) AU: Gomes et al. 2008; Brasser and Schwamb 2015). Though they have high perihelia, they only have moderate semi-major axes (\(50<a<100\) AU) unlike the extreme and inner Oort cloud objects with \(a>150\) AU that likely have a different history and were detailed in Trujillo and Sheppard (2014). As seen in Figure 2, it appears most of these moderate objects beyond the Kuiper Belt edge are near strong Neptune MMRs (Table 1). This suggests these moderate orbits were created through MMR interactions.
This situation is similar to that of the high perihelion object 2004 XR190 (Gomes 2011), though the new objects do not have exceptionally high inclinations like 2004 XR190 (\(i=46.7\) degrees). A high inclination of over 40 degrees is required for the KR mechanism to operate efficiently and modify orbits by itself (Kozai 1962; Lidov 1962). More moderately inclined objects, with inclinations of 20 to 40 degrees, can have their orbits significantly modified by the KR if they are also in an MMR (Duncan & Levison 1997; Fernandez et al. 2004). The combined MMR+KR mechanism could allow objects to obtain perihelia up to 60 AU (Gomes et al. 2008).
The new objects have been observed for one to three years and thus their orbital elements are secure. We used the MERCURY numerical integrator (see appendix) to look at the behaviour of all the new objects shown in Table 1. We found all of the new orbits to be very stable over the age of the solar system. As detailed later, we examined the resonance argument angles for signs of libration, which would indicate MMR membership (Chiang et al. 2003; Elliot et al. 2005; Gladman et al. 2008; Pike et al. 2015). But the objects only need to have been in a Neptune MMR in the past to have had their orbits significantly modified by the MMR+KR mechanism (Gallardo 2006a). If they are near but not in an MMR today, the objects could either have escaped the MMR or Neptune could have migrated away and removed them from it, as suggested for 2004 XR190’s orbit (Gomes 2011). Based on the Neptune MMR maps shown in Gallardo (2006b), all the new very high perihelion, moderate semi-major axis objects are near strong Neptune MMRs (Figure 2).
### The Very High Perihelion of 2014 FZ71
One of the most interesting new objects is 2014 FZ71, which has the highest perihelion of any known object after Sedna and 2012 VP113 (Figure 1). But 2014 FZ71’s moderate eccentricity and semi-major axis compared to Sedna and 2012 VP113 suggests it has a different origin. 2014 FZ71 is very close to the 4:1 MMR with Neptune and thus 2014 FZ71’s orbit was likely modified through interactions with it. Interestingly, the large perihelion of 55.9 AU suggests 2014 FZ71 would not currently have any strong interaction with Neptune. The relatively moderate inclination and eccentricity of 2014 FZ71 make it harder to invoke the Kozai mechanism for the high perihelion of 2014 FZ71. In our numerical simulations with ten one sigma orbit clones, we find some of the basic 4:1 resonance argument angles, called \(e^{3}\), \(es^{2}\) and \(e^{2}e_{N}\) in Elliot et al. (2005), showed signs of libration in some clones. This indicates 2014 FZ71 likely still interacts with the 4:1 Neptune MMR. We found all one sigma 2014 FZ71 clones showed constant semi-major axis but some showed large variations in \(i\) and \(e\) (\(8<i<32\) degrees and \(0.23<e<0.50\) giving \(38<q<58\) AU).
If 2014 FZ71 does have both the Kozai and Neptune MMR acting on it, the eccentricity of the object could vary and would be coupled to the inclination following
\(H=\sqrt{1-e^{2}}\cos(i)\)
where H is constant (Kozai 1962; Morbidelli and Thomas 1995; Gomes et al. 2008). In this formalism, the perihelion of 2014 FZ71 could have been near 38.5 AU if its eccentricity was higher in the past (Figure 4). Indeed, a perihelion of around 38 AU is exactly what we find as the lower perihelion limit for the librating clones of 2014 FZ71 in our numerical simulations. This distance is just below the 40 AU upper limit Gomes et al. (2008) suggest for the KR and Neptune MMR objects. 2014 FZ71 is an interesting case that appears to be near the limits of effectiveness for the MMR+KR mechanism to operate. It is possible that 2014 FZ71 is a more extreme case of (145480) 2005 TB190, which Gomes et al. (2008) suggest was created by the Neptune 4:1 MMR+KR interactions.
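As an illustration of this relation, the short sketch below evaluates the conserved quantity H for a given (e, i) pair and the perihelion q = a(1-e) reached when the inclination is changed while H is held fixed. The numerical values are only loosely representative of a 4:1 MMR orbit and are not the measured elements of 2014 FZ71.

```python
import numpy as np

def kozai_perihelion(a_au, e, inc_deg, new_inc_deg):
    """Perihelion q = a(1-e') after moving along H = sqrt(1-e^2) cos(i) = const."""
    H = np.sqrt(1.0 - e**2) * np.cos(np.radians(inc_deg))
    ratio = H / np.cos(np.radians(new_inc_deg))      # must satisfy |ratio| <= 1
    e_new = np.sqrt(1.0 - ratio**2)
    return a_au * (1.0 - e_new)

# Illustrative numbers only: an orbit near the 4:1 MMR (a ~ 76 AU) with
# e ~ 0.26 and i ~ 26 deg drops to q ~ 40 AU if i is lowered to ~ 10 deg.
print(kozai_perihelion(76.0, 0.26, 26.0, 10.0))
```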
### A Large Population of MMR+KR 3:1 Resonance Objects
2015 FJ345, 2013 FQ28 and 2015 KH162 all have orbits near the 3:1 MMR with Neptune. In our numerical simulations, some of 2015 FJ345’s and 2013 FQ28’s one sigma clones showed oscillating resonant argument angles with Neptune’s 3:1 MMR. 2015 KH162 and its clones showed no oscillating resonant argument angles, and it is thus likely a fossilized 3:1 MMR+KR object from Neptune’s outward migration. The likely 3:1 object (385607) 2005 EO297 was previously suggested by Gomes et al. (2008) to have been created from MMR+KR interactions. 2013 FQ28 and especially 2015 FJ345 have much higher perihelia and less eccentric orbits and thus have commonalities with 2004 XR190 and 2014 FZ71. 2015 FJ345 has the lowest eccentricity and highest perihelion of the 3:1 objects, which is consistent with the MMR+KR being responsible since 2015 FJ345 also has the highest inclination. The minimum perihelia for all these 3:1 objects could be below 35 AU through the MMR+KR mechanism, allowing strong interactions with Neptune (Figure 4). Our new discoveries 2015 FJ345 and 2013 FQ28 are the first two objects found to have obtained very high perihelion orbits through the 3:1 Neptune MMR+KR (both also have inclinations above 25 degrees). As seen in Figure 2, there is also a cluster of objects near the 3:1 Neptune MMR with perihelia just below 40 AU.
### Other High Perihelion, Moderate Objects
There are a few objects in Figure 3 that have moderately high perihelia but are not near Neptune MMRs. The closest resonance for 2014 FC69 and 2013 JD64 is the 11:3. Both these objects have very high inclinations of 30.1 and 50.3 degrees, respectively, strongly suggesting their orbits have been created through some interaction with the KR. 2014 QR441 is also not near any major Neptune MMR, though the moderately strong 10:3 resonance is nearby. 2014 QR441 has a very high inclination of 42.2 degrees, again showing the KR is likely involved.
We also find that 2008 ST291 is likely a 6:1 resonance or fossilized resonance object that has probably been modified by the MMR+KR mechanism. Although the nominal orbital position does not, clones a few tenths of an AU lower in semi-major axis show resonant argument librations (the \(e^{5}\) argument) for 2008 ST291’s orbit in our numerical simulations, where \(10<i<35\) degrees, \(0.40<e<0.65\) and \(35<q<58\) AU occurred over 1 Gyr. 2010 ER65 could be a similar 6:1 case, but we found no significant resonant argument librations.
### The Outer Classical Belt
Our new discovery 2012 FH84 also has a high perihelion and a moderate semi-major axis and eccentricity (Table 1). But 2012 FH84 has a very low inclination of only 3.6 degrees and is between the 5:2 and 8:3 Neptune MMRs, which makes it less likely to have been created by the MMR+KR. Its minimum perihelion would be about 42 AU through this mechanism, so strong interactions with Neptune would be unlikely. 2012 FH84 is similar to 1995 TL8 (\(a=52.3\) AU, \(e=0.234\), \(i=0.2\) deg), which cannot be explained by the MMR+KR mechanism (Gomes et al. 2008). 2012 FH84 is thus likely a new member of the rare outer classical belt of objects. These are non-resonant objects that have semi-major axes just beyond the 2:1 resonance, with moderate to low eccentricities and low inclinations. This outer belt might be related to the low inclination objects in the main classical Kuiper belt as they have similar dynamics and very red colors (Gomes et al. 2008; Morbidelli et al. 2008; Sheppard 2010). 2002 CP154 and 2001 FL193 are the only other objects beyond 50 AU that have perihelia higher than 40 AU and low inclinations like 2012 FH84 and 1995 TL8 (Figure 3). 2014 FA72, 2013 GQ136 and 2003 UY291 are also near this region with low inclinations and perihelia above 40 AU but have semi-major axes just below 50 AU.
The new object 2015 GP50 has a very similar semi-major axis and eccentricity to 2012 FH84, but 2015 GP50’s significantly higher inclination could allow it to obtain a much lower perihelion. 2015 GP50 again is not obviously near a Neptune MMR but the strong 5:2 resonance is nearby. 2005 CG81’s and 2007 LE38’s similarly high perihelia and highly inclined orbits, are also close to the 12:5 Neptune MMR.
## 4 Discussion and Conclusions
The moderate eccentricity space just beyond the Kuiper Belt edge at 50 AU is shown to be populated with objects other than 2004 XR190. All the new moderate eccentricity, very high perihelion objects (\(q>45\) AU) are near strong N:1 Neptune MMRs. We find all the moderate eccentricity objects with perihelia above 40 AU and semi-major axes beyond 53 AU have inclinations above 20 degrees (except the outer classical 2012 FH84 detailed above). Those away from Neptune N:1 MMRs generally have the highest inclinations, which presents evidence that the KR alone can raise the perihelion of high inclination objects while more moderate inclinations require the addition of MMRs (Figure 3). We used our observational bias simulator detailed in Trujillo and Sheppard (2014) to examine the distribution of inclinations of the MMR+KR objects in Table 1. Using the sin i / single Gaussian functional form for inclinations in Gulbis et al (2010), we find the debiased inclination distribution of the MMR+KR objects to be \(\mu_{1}=28^{+2}_{-1}\) degrees and \(\sigma_{1}=2.5^{+2.2}_{-0.8}\). This is significantly greater than the scattered objects with \(\mu_{1}=19.1^{+3.9}_{-3.6}\) and \(\sigma_{1}=6.9^{+4.1}_{-2.7}\) (Gulbis et al. 2010).
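For reference, a minimal sketch of the sin i times single-Gaussian functional form used in the debiasing is given below, with the fitted parameters quoted above; the exact normalization convention of Gulbis et al. (2010) is assumed rather than reproduced, so the sketch is only illustrative.

```python
import numpy as np

def sin_i_gaussian(i_deg, mu_deg, sigma_deg):
    """Unnormalized sin(i) times single-Gaussian inclination distribution."""
    return np.sin(np.radians(i_deg)) * \
        np.exp(-((i_deg - mu_deg) ** 2) / (2.0 * sigma_deg ** 2))

# Compare the MMR+KR fit quoted above with the scattered-object fit.
grid = np.linspace(0.0, 90.0, 901)
for mu, sig in [(28.0, 2.5), (19.1, 6.9)]:
    f = sin_i_gaussian(grid, mu, sig)
    f /= np.trapz(f, grid)                 # normalize to unit integral
    print(f"mu={mu:5.1f} deg, sigma={sig:3.1f} deg: "
          f"distribution peaks near i = {grid[np.argmax(f)]:.1f} deg")
```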
The few colors that have been obtained for these high perihelion, moderate orbit objects show them to be typical of scattered disk objects (Sheppard 2010). If these two populations of objects were both originally from the same population, this suggests it is the action of the MMR+KR that is responsible for the larger inclinations seen in Table 1. These objects were likely scattered into these orbits and captured into resonances. Whatever created the Kuiper Belt edge likely occurred during or before the emplacement of MMR+KR fossilized objects like 2004 XR190 as these fossilized objects would likely have been lost like any other objects beyond the edge. This would suggest the edge was created before Neptune finished migrating outwards and created the fossilized MMR+KR objects.
Our observational bias simulator was further used to obtain a crude estimate of the MMR+KR population. We used a uniform simulated orbit distribution with a minimum eccentricity of 0.1 and a minimum perihelion of 40 AU, with the inclination distribution described above. We would only detect the objects when beyond 50 AU and expect no longitudinal bias as our survey is fairly uniform. Because some MMRs are closer than others, we would expect detection ratios of 1.0/0.97/0.79/0.38/0.17/0.09 for the MMRs 5:2/8:3/3:1/4:1/5:1/6:1 assuming equal populations. The odds of finding three 3:1 high perihelion MMR objects and no 5:2 or 8:3 objects by chance are 2.5% if their populations are equal (though this increases to 7% if 2015 GP50 is in the 5:2). This suggests the 3:1 may harbor many more MMR+KR objects than the 5:2 or 8:3 MMR, which is surprising as Volk et al. (2016) find a large 5:2 MMR population with lower perihelia and Brasil et al. (2014b) suggest the 3:1 and 5:2 should be the most populated with MMR+KR objects. However, the low order N:1 resonances like the 3:1 are the strongest for diffusing scattered objects via the MMR+KR (Gallardo 2006a). We find that about \(2400^{+1500}_{-1000}\) MMR+KR 3:1 and about \(1600^{+2000}_{-1200}\) 4:1 objects larger than 100 km in diameter likely exist with perihelia greater than 40 AU, with the 5:2 and 8:3 populations significantly smaller.
Trujillo and Sheppard (2014) first noticed that the extreme trans-Neptunian objects exhibit a clustering in their orbital angles and predicted that a super-Earth planet exists beyond a few hundred AU to create this clustering. Recently Batygin and Brown (2016) obtained a possible rudimentary orbit for the planet predicted by Trujillo and Sheppard (2014). In our numerical integrations (see appendix) we found this planet (a=700 AU, e=0.6 and i=30 degrees) has no significant impact on the current MMR+KR objects, including the most distant 2008 ST291. We note that all five of our new MMR+KR objects along with 2004 XR190 have longitudes of perihelion (\(LP=\omega+\Omega\)) between about 80 and 190 degrees, which is about 180 degrees from the longitude of perihelion for the ETNOs.
## Acknowledgments
We thank Y. Ramanjooloo and D. Hung for help in recovery of 2015 KH162 at the Hawaii 88inch telescope. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the DOE and NSF (USA), MISE (Spain), STFC (UK), HEFCE (UK). NCSA (UIUC), KICP (U. Chicago), CCAPP (Ohio State), MIFPA (Texas A&M), CNPQ, FAPERJ, FINEP (Brazil), MINECO (Spain), DFG (Germany) and the collaborating institutions in the DES, which are Argonne Lab, UC Santa Cruz, University of Cambridge, CIEMAT-Madrid, University of Chicago, University College London, DES-Brazil Consortium, University of Edinburgh, ETH Zurich, Fermilab, University of Illinois, ICE (IEEC-CSIC), IFAE Barcelona, Lawrence Berkeley Lab, LMU Munchen and the associated Excellence Cluster Universe, University of Michigan, NOAO, University of Nottingham, Ohio State University, University of Pennsylvania, University of Portsmouth, SLAC National Lab, Stanford University, University of Sussex, and Texas A&M University. Based in part on observations at CTIO, NOAO, which is operated by AURA under a cooperative agreement with the NSF. Based in part on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. C.T. was supported by the Gemini Observatory. This research was funded by NASA grant NN15AF446. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
## Appendix A Appendix
Our simple numerical simulations were performed in order to determine the basic orbital properties and behaviour of the newly discovered objects. We used the MERCURY numerical integrator (Chambers 1999). In our basic simulations we used the four giant planets Jupiter, Saturn, Uranus and Neptune and added the mass of the terrestrial planets to the Sun. An additional simulation was run with all the same conditions but adding in a 15 Earth mass planet on an eccentric \(e=0.6\) orbit at 700 AU to see how it might affect the orbits of the MMR+KR objects. The time step used was 20 days and all integrations ran for over 1 billion years. The orbital elements used were heliocentric, converted from the barycentric output of the orbit fitting program by Bernstein and Khushalani (2000). In our simulations of the nominal orbits and ten clones within 1 sigma of each new object’s orbit we found no significant semi-major axis variability over 1 billion years. For most nominal orbits and clones the \(e\) for all the new objects only varied by 0.01 to 0.02 and the \(i\) at most by about 3 degrees over 1 billion years. But some 1 sigma clones did show large variations in \(e\) and \(i\), indicating significant interactions with Neptune’s MMRs. A clone of 2014 FZ71 with a semi-major axis a tenth of an AU larger than the nominal position showed variations of \(8<i<32\) degrees, \(0.50>e>0.23\) inversely with \(i\) and \(38<q<58\) AU over 100 Myr timescales, indicating interactions with Neptune’s 4:1 MMR. A tenth of an AU smaller clone of 2013 FQ28 near the 3:1 Neptune MMR had \(i\) vary from 20 to 30 degrees and \(e\) inversely from 0.2 to 0.4, giving a perihelion from 38 to 50 AU over 1 billion years. Clones of 2008 ST291 a few tenths of an AU smaller in semi-major axis than the nominal position showed significant orbital variability in \(e\) and \(i\). These clones were in the Neptune 6:1 MMR, where the MMR+KR mechanism allowed \(i\) to vary from 35 to 10 degrees and \(e\) inversely from 0.40 to 0.65 over 100 Myrs (with perihelia ranging between 35 and 58 AU). Including the distant massive planet did not cause the clones of 2008 ST291 to escape the 6:1 Neptune MMR or 2014 FZ71 to escape the 4:1 Neptune MMR, and their basic orbital behaviour was similar to the simulations without the putative distant massive planet.
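As an illustration of the kind of diagnostic described above, the sketch below computes the lowest-order eccentricity-type resonant argument for an exterior p:1 Neptune resonance (e.g. phi = 4*lambda - lambda_N - 3*varpi for the 4:1) from integrator output and applies a crude libration test. The input arrays, the binning, and the 350-degree threshold are assumptions, not the actual analysis pipeline used here.

```python
import numpy as np

def resonant_argument(lam_deg, lam_nep_deg, varpi_deg, p, q=1):
    """Eccentricity-type argument phi = p*lam - q*lam_N - (p-q)*varpi, wrapped to [0, 360)."""
    phi = p * lam_deg - q * lam_nep_deg - (p - q) * varpi_deg
    return np.mod(phi, 360.0)

def librates(phi_deg, max_occupied_deg=350.0):
    """Crude libration test: the argument never fills the whole circle."""
    hist, _ = np.histogram(np.mod(phi_deg, 360.0), bins=36, range=(0.0, 360.0))
    occupied = 10.0 * np.count_nonzero(hist)        # each bin spans 10 degrees
    return occupied < max_occupied_deg

# Hypothetical usage with time series (in degrees) from an integration:
# phi = resonant_argument(lam_obj, lam_neptune, varpi_obj, p=4)   # 4:1 argument
# print("librating" if librates(phi) else "circulating")
```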
## References
* (2) Allen, R., Bernstein, G., & Malhotra, R. 2002, AJ, 124, 2949.
* (4) Allen, R., Gladman, G., Kavelaars, J., Petit, J., Parker, J. & Nicholson, P. 2006, ApJ, 640, L83.
* (6) Batygin, K. & Brown, M. 2016, AJ, 151, 22.
* (8) Bernstein, G. & Khushalani, B. 2000, AJ, 120, 3323.
* (10) Brasil, P., Nesvorny, D., & Gomes, R. 2014a, AJ, 148, 56.
* (12) Brasil, P., Gomes, R., & Soares, J. 2014b, AA, 564, A44.
* (14) Brasser, R., Duncan, M., Levison, H., Schwamb, M. & Brown, M. 2012, Icarus, 217, 1.
* (16) Brasser, R. & Schwamb, M. 2015, MNRAS, 446, 3788.
* (18) Brown, M., Trujillo, C. & Rabinowitz, D. 2004, ApJ, 617, 645.
* (20) Chambers, J. 1999, MNRAS, 304, 793.
* (22) Chiang, E., Jordan, A., Millis, R. et al. 2003, AJ, 126, 430.
* (24) Dawson, R. & Murray-Clay, R. 2012, ApJ, 750, 43.
* (26) Duncan, M. & Levison, H. 1997, Science, 276, 1670.
* (28) Elliot, J., Kern, S., Clancy, K., et al. 2005, AJ, 129, 1117.
* (30) Flaugher, B., Diehl, H., Honscheid, K., et al. 2015, AJ, 150, 150.
* (32) Fernandez, J., Gallardo, T. & Brunini, A. 2004, Icarus, 172, 372.
* (34) Gallardo, T. 2006a, Icarus, 181, 205.
* (36) Gallardo, T. 2006b, Icarus, 184, 29.
* (38) Gallardo, T., Hugo, G. & Pais, P. 2012, Icarus, 220, 392.
* (40) Gladman, B., Holman, M., Grav, T., Kavelaars, J., Nicholson, P., Aksnes, K. & Petit, J. 2002, Icarus, 157, 269.
* (42) Gladman, B. & Chan, C. 2006, ApJ, 643, L135.
* (44) Gladman, B., Marsden, B. & VanLaerhoven, C. 2008, in The Solar System Beyond Neptune, eds. M. Barucci, H. Boehnhardt, D. Cruikshank and A. Morbidelli (Tucson: Univ of Arizona Press), 43-57.
* (46) Gladman, B., Lawler, S., Petit, J. et al. 2012, AJ, 144, 23.
* (48) Gomes, R., Gallardo, T., Fernandez, J., & Bruini, A. 2005, Celest. Mech. Dynam. Astron., 91, 109.
* (50) Gomes, R., Matese, J., & Lissauer, J. 2006, Icarus, 184, 589.
* (52) Gomes, R., Fernandez, J., Gallardo, T. and Brunini, A. 2008, in The Solar System Beyond Neptune, eds. M. Barucci, H. Boehnhardt, D. Cruikshank and A. Morbidelli (Tucson: Univ of Arizona Press), 259-273.
* (54) Gomes, R. 2011, Icarus, 215, 661.
* (56) Gulbis, A., Elliot, J., Adams, E., Benecchi, S., Buie, M., Trilling, D. & Wasserman, L. 2010, AJ, 140, 350.
* (58) Holman, M. & Wisdom, J. 1993, AJ, 105, 1987.
* (60) Jewitt, D., Luu, J., & Trujillo, C. 1998, AJ, 115, 2125.
* (62) Kenyon, S. & Bromley, B. 2004, Nature, 432, 598.
* (64) Kozai, Y. 1962, AJ, 67, 591.
* (66) Lidov, M. 1962, P&SS, 9, 719.
* (68) Malhotra, R. 1995, AJ, 110, 420.
* (70) Morbidelli, A. and Thomas, F. 1995, Icarus, 118, 322.
* (72) Morbidelli, A. & Levison, H. 2004, AJ, 128, 2564.
* (74) Morbidelli, A., Levison, H. and Gomes, R. 2008, in The Solar System Beyond Neptune, eds. M. Barucci, H. Boehnhardt, D. Cruikshank and A. Morbidelli (Tucson: Univ of Arizona Press), 275-292.
* (76) Nesvorny, D. 2015a, AJ, 150, 68.
* (78) Nesvorny, D. 2015b, AJ, 150, 73.
* (80) Nesvorny, D. & Vokrouhlicky, D. 2016, arXiv160206988.
* (82) Petit, J., Kavelaars, J., Gladman, B. et al. 2011, AJ, 142, 131.
* (84) Pike, R., Kavelaars, J., Petit, J., Gladman, B., Alexandersen, M., Volk, K. & Shankman, C. 2015, AJ, 149, 202.
* (86) Schwamb, M., Brown, M., Rabinowitz, D. & Ragozzine, D. 2010, ApJ, 720, 1691.
* (88) Sheppard, S. 2010, AJ, 139, 1394.
* (90) Sheppard, S. 2012, AJ, 144, 169.
* (92) Soares, J., and Gomes, R. 2013, AA, 553, 110.
* (94) Torbett, M. & Smoluchowski, A. 1990, Nature, 345, 49.
* (96) Trujillo, C. & Brown, M. 2001, ApJ, 554, L95.
* (98) Trujillo, C. & Sheppard, S. 2014, Nature, 507, 471.
* (100) Volk, K., Murray-Clay, R., Gladman, B. et al. 2016, arXiv:1604:08177.
[TABLE:A1.T1][ENDTABLE]
<figure><img src="content_image/1606.02294/x1.png"><figcaption>Figure 1: The perihelion versus eccentricity. Red circles are objectsdiscovered during this survey (large red circles are the focus of this work).Objects above the dashed line are considered extreme with a>150 AU. Objectswith high perihelia beyond the Kuiper Belt edge at 50 AU but only moderateeccentricity are likely created by a combination of Neptune Mean MotionResonances (MMR) and the Kozai Resonance (KR).</figcaption></figure>
<figure><img src="content_image/1606.02294/x2.png"><figcaption>Figure 2: The semi-major axis versus eccentricity. Red circles show the newobjects discovered in this survey. Larger circles show objects with periheliaabove 40 AU. Dashed lines show strong mean motion resonances with Neptune. Thedotted line shows a constant perihelion of 40 AU. Objects to the right of thedotted line have perihelia above 40 AU and thus are mostly decoupled fromNeptune. Uncertainties on the orbital parameters are smaller than the symbols.</figcaption></figure>
<figure><img src="content_image/1606.02294/x3.png"><figcaption>Figure 3: Similar to Figure 2 but showing inclination. Larger circles showobjects that have perihelia above 40 AU. All high perihelion objects beyond 53AU have inclinations greater than 20 degrees except for our newly discovered2012 FH84, which is likely a rare outer classical belt object along with 1995TL8, 2002 CP154 and 2001 FL193 (large blue circles with very low inclinationsbetween 50 and 53 AU).</figcaption></figure>
<figure><img src="content_image/1606.02294/x4.png"><figcaption>Figure 4: The MMR+KR curves for some objects. An object that interacts with aNeptune MMR and the KR can have its inclination and perihelion (linked toeccentricity) altered through the relation H=√1−e2cos(i), where H is aconstant and q=a(1−e). An object needs to come within about 40 AU for theMMR+KR to be viable. This fails for the extreme or inner Oort cloud objectslike 2012 VP113 or for the outer classical belt objects like 2012 FH84. 2014FZ71 is near the limit for the MMR+KR mechanism. The MMR+KR mechanism cansufficiently explain 2015 FJ345’s and 2013 FQ28’s high perihelion orbits.</figcaption></figure>
|
1003.0420 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 56644,
"num_imgs": 16,
"llama3_tokens_count": 14599
} | [
"content_image/1003.0420/x1.png",
"content_image/1003.0420/x2.png",
"content_image/1003.0420/x3.png",
"content_image/1003.0420/x4.png",
"content_image/1003.0420/x5.png",
"content_image/1003.0420/x6.png",
"content_image/1003.0420/x7.png",
"content_image/1003.0420/x8.png",
"content_image/1003.0420/x9.png",
"content_image/1003.0420/x10.png",
"content_image/1003.0420/x11.png",
"content_image/1003.0420/x12.png",
"content_image/1003.0420/x13.png",
"content_image/1003.0420/x14.png",
"content_image/1003.0420/x15.png",
"content_image/1003.0420/x16.png"
] | # Doppler Shift, Intensity, and Density Oscillations Observed With the Euv Imaging Spectrometer on _Hinode_
John T. Mariska and K. Muglach¹
Space Science Division, Code 7673, Naval Research Laboratory, Washington, DC 20375; mariska@nrl.navy.mil
[FOOTNOTE:1][ENDFOOTNOTE]
###### Abstract
Low-amplitude Doppler-shift oscillations have been observed in coronal emission lines in a number of active regions with the EUV Imaging Spectrometer (EIS) on the _Hinode_ satellite. Both standing and propagating waves have been detected and many periods have been observed, but a clear picture of all the wave modes that might be associated with active regions has not yet emerged. In this study, we examine additional observations obtained with EIS in plage near an active region on 2007 August 22–23. We find Doppler-shift oscillations with amplitudes between 1 and 2 km s\({}^{-1}\) in emission lines ranging from Fe xi 188.23 Å, which is formed at \(\log T=6.07\) to Fe xv 284.16 Å, which is formed at \(\log T=6.32\). Typical periods are near 10 minutes. We also observe intensity and density oscillations for some of the detected Doppler-shift oscillations. In the better-observed cases, the oscillations are consistent with upwardly propagating slow magnetoacoustic waves. Simultaneous observations of the Ca ii H line with the _Hinode_ Solar Optical Telescope Broadband Filter Imager show some evidence for 10-minute oscillations as well.
Subject headings:Sun: corona — Sun: oscillations — Sun: UV radiation †
[FOOTNOTE:†][ENDFOOTNOTE]
## 1. Introduction
Because the outer regions of the solar atmosphere are threaded by a magnetic field, they can support a wide range of oscillatory phenomena. Theoretical aspects of these waves and oscillations have been the subject of extensive investigations (e.g., Roberts et al., 1983, 1984). These initial studies were mainly driven by radio observations of short period oscillations (e.g., Rosenberg, 1970). More recent detections of coronal oscillatory phenomena have resulted in considerable additional theoretical work. Recent reviews include Roberts (2000) and Roberts & Nakariakov (2003). Observational detections of coronal oscillatory phenomena include detections of spatial oscillations of coronal structures, which have been interpreted as fast kink mode MHD disturbances (e.g., Aschwanden et al., 1999); intensity oscillations, which have been interpreted as propagating slow magnetoacoustic waves (e.g., DeForest & Gurman, 1998; Berghmans & Clette, 1999); and Doppler shift oscillations, which have been interpreted as slow mode MHD waves (e.g., Wang et al., 2002). The interaction between the theoretical work and the growing body of oscillation observations provides fertile ground for testing our understanding of the structure and dynamics of the corona.
The EUV Imaging Spectrometer (EIS) on the _Hinode_ satellite is an excellent tool for studying oscillatory phenomena in the corona. Culhane et al. (2007) provides a detailed description of EIS, and the overall _Hinode_ mission is described in Kosugi et al. (2007). Briefly, EIS produces stigmatic spectra in two 40 Å wavelength bands centered at 195 and 270 Å. Two slits (1″ and 2″) provide line profiles, and two slots (40″ and 266″) provide monochromatic images. Moving a fine mirror mechanism allows EIS to build up spectroheliograms in selected emission lines by rastering a region of interest. With typical exposure times of 30 to 90 s, however, it can take considerable time to construct an image. Higher time cadences can be achieved by keeping the EIS slit or slot fixed on the Sun and making repeated exposures. This sit-and-stare mode is ideal for searching for oscillatory phenomena.
EIS Doppler shift data have already been used for a number of investigations of oscillatory phenomena. Van Doorsselaere et al. (2008) have detected kink mode MHD oscillations with a period near 5 minutes. Mariska et al. (2008) observed damped slow magnetoacoustic standing waves with periods of about 35 minutes. Wang et al. (35) have detected slow mode magnetoacoustic waves with 5 minute periods propagating upward from the chromosphere to the corona in an active region. Wang et al. (36) have also observed propagating slow magnetoacoustic waves with periods of 12 to 25 minutes in a large fan structure associated with an active region. Analysis of oscillatory data with EIS is still just beginning. The amplitudes observed have all been very small—typically 1 to 2 km s\({}^{-1}\). Thus a clear picture of the nature of the low-amplitude coronal oscillations has yet to emerge. In this paper, we add to that picture by analyzing portions of an EIS sit-and-stare active region observation that shows evidence for Doppler-shift oscillations in a number of EUV emission lines. We also use data from the _Hinode_ Solar Optical Telescope (SOT) to relate the phenomena observed in the corona with EIS to chromospheric features and the magnetic field.
## 2. Observations
<figure><img src="content_image/1003.0420/x1.png"><figcaption>Figure 1.— Context EIS spectroheliograms in 9 wavelength windows obtained from13:33:46 to 17:56:57 UT on 2007 August 22 (left) and from 01:55:43 to 06:18:53UT on 2007 August 23 (right). The location of the EIS slit for the sit-and-stare observation is marked with the vertical line. The horizontal lines alongthe slit mark the location of the oscillation data analyzed in this paper. Thepost-flare loops discussed in the text are the bright features east and southof the marked region in the three bottom right panels.</figcaption></figure>
The observations discussed in this paper were taken in an area of enhanced EUV and soft X-ray emission just west of NOAA active region 10969. The complete EIS data set consists of a set of \(256\arcsec\times 256\arcsec\) context spectroheliograms in 20 spectral windows, followed by 7.4 h of sit-and-stare observations in 20 spectral windows, and finally a second set of context spectroheliograms. All the EIS sit-and-stare data were processed using software provided by the EIS team. This processing removes detector bias and dark current, hot pixels, dusty pixels, and cosmic rays, and then applies a calibration based on the prelaunch absolute calibration. The result is a set of intensities in ergs cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) Å\({}^{-1}\). The data were also corrected for the EIS slit tilt and the orbital variation in the line centroids. For emission lines in selected wavelength windows, the sit-and-stare data were fitted with Gaussian line profiles plus a background, providing the total intensity, location of the line center, and the width.
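A minimal sketch of the single-Gaussian-plus-background fit and the conversion of the fitted centroid to a Doppler velocity is given below; the initial-guess values and function names are illustrative, and the sketch is not the EIS team software referred to above.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458          # speed of light in km/s

def gauss_bg(wl, peak, cen, sigma, bg):
    """Single Gaussian line profile on a constant background."""
    return peak * np.exp(-0.5 * ((wl - cen) / sigma) ** 2) + bg

def doppler_from_profile(wl, spec, rest_wl):
    """Fit Gaussian + background; return (integrated intensity, v [km/s], width)."""
    p0 = [spec.max() - spec.min(), wl[np.argmax(spec)], 0.03, spec.min()]
    popt, _ = curve_fit(gauss_bg, wl, spec, p0=p0)
    peak, cen, sigma, bg = popt
    intensity = peak * abs(sigma) * np.sqrt(2.0 * np.pi)
    velocity = C_KMS * (cen - rest_wl) / rest_wl      # positive = redshift
    return intensity, velocity, abs(sigma)
```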
Figure 1 shows the pre- and post-sit-and-stare spectroheliograms and provides a more detailed view of the area on the Sun covered by the sit-and-stare observations. The spectroheliograms were obtained with the 1″ EIS slit using an exposure time of 60 s. Note that the two sets of spectroheliograms have small differences in the pointing. Examination of the two sets of images shows that considerable evolution has taken place in the observed region between the times each set was taken. In particular, new loop structures have developed at locations to the south of the bright core of the emitting region, suggesting that some flare-like heating may have taken place between the two sets of context observations. Space Weather Prediction Center data show that there was one B1.2 class flare between 07:50 and 08:10 UT on 2007 August 22 at an unknown location. Since other flares occurred in this active region, it is likely that this one is also associated with it. There were, however, no events recorded during the time of the EIS observations.
<figure><img src="content_image/1003.0420/x2.png"><figcaption>Figure 2.— A portion of an MDI magnetogram taken on 2007 August 22 at 14:27 UTshowing the area covered by the EIS spectroheliograms and the approximatelocation of the EIS slit during the sit-and-stare observation. The horizontallines along the slit mark the location of the oscillation data analyzed inthis paper, and the area of weak magnetic flux that is the focus of thisanalysis is marked with an arrow. The image has been scaled so that the rangeof magnetic fluxes displayed is ±100 Gauss.</figcaption></figure>
<figure><img src="content_image/1003.0420/x3.png"><figcaption>Figure 3.— TRACE 171 Å (FeIX/X) image taken on 2007 August 22 at 20:01:19 UT.The box shows the location of the EIS spectroheliograms taken before and afterthe sit-and-stare observations, and the vertical line with the box shows thelocation of the sit-and stare observation. The horizontal lines along the slitmark the location of the oscillation data analyzed in this paper.</figcaption></figure>
Both magnetogram data from the _SOHO_ Michelson Doppler Imager (MDI) and _Hinode_ Solar Optical Telescope (SOT) and coronal images taken in the 171 Å (Fe\(\;\)IX/X) filter with the _Transition Region and Coronal Explorer_ (_TRACE_) were also examined. The magnetograms, one of which is shown in Figure 2 with the location of the EIS slit for the sit-and-stare observation indicated, show an extended bipolar plage region without any visible sunspots. As can be seen in the magnetogram, the EIS slit crosses two regions of positive (white) polarity. The weaker one, which has a size of about 6″and is marked with an arrow, is the focus of this analysis.
<figure><img src="content_image/1003.0420/x4.png"><figcaption>Figure 4.— Coalignment of EIS and SOT. Top left: EIS spectroheliogram taken inO vi 184.12 Å at the same time as the spectroheliograms shown in Figure 1, thedark vertical line near the center marks the slit location for the sit-and-stare observations. Top right: MDI full disk magnetogram taken at 14:27 UT.Bottom right: Hinode SP magnetogram, taken between 18:03 UT and 19:01 UT, atthe same time the Ca ii H time sequence was taken. Bottom left: temporalaverage of the time sequence in Ca ii H taken between 19:50 and 21:40 UT. Theblack circles in the bottom images encompass a small network bright point thatoverlaps with the EIS slit.</figcaption></figure>
The _TRACE_ images, one of which is shown in Figure 3, show that the western part of the region near the right edge of the box that outlines the edges of the second EIS spectroheliogram exhibits fan-like coronal structures extending radially out in all directions, while the eastern part consists of shorter loop structures that connect the opposite polarities. The full set of _TRACE_ images that cover most of the time period covered by the context and sit-and-stare observations are included as a movie. The movie shows generally quiescent behavior in the fan-like structures, but multiple brightenings in the eastern loop system. The largest of these took place at around 01:31 UT on August 23 and shows a filament eruption, which resulted in a system of post-flare loops that are visible in the Fe xv 284.16 Å and Fe xvi 262.98 Å spectroheliograms in the right panels of Figure 1. Some of the loops in the eastern part cross the EIS slit location for the sit-and-stare observations, but they are for the most part north of the area we focus on.
We have also examined images taken with the Extreme Ultraviolet Imaging Telescope (EUVI, Wuelser et al., 2004) on the Ahead (A) and Behind (B) spacecraft of the _Solar Terrestrial Relations Observatory_ (_STEREO_). EUVI observes the entire solar disk and the corona up to 1.4 R\({}_{\sun}\) with a pixel size of 1.59″. On 2007 August 22, the separation angle of _STEREO_ with Earth was 14.985\(\arcdeg\) for spacecraft A and 11.442\(\arcdeg\) for spacecraft B. Our target region was rather near the limb as seen from spacecraft A. EUVI/B, however, provided continuous on-disk images between 2007 August 22, 11:00 UT and 2007 August 23, 11:30 UT, which allowed us to study the development of the coronal structures of the region during the time of the EIS observations.
We produced movies in all four EUVI channels (171 Å, 195 Å, 284 Å, and 304 Å), with 171 Å having the highest cadence of 2.5 min. The EUVI movies generally show the same behavior as that shown in the _TRACE_ data. EUVI captured two larger events during this period, the first one started at around 12:24 UT on August 22, showing multiple brightenings of the eastern loop system although it is not clear if a CME had been launched. The second one took place at around 01:31 UT on August 23, involving the western part of the region and shows a filament eruption with a system of post-flare loops evolving at the time the second EIS context spectroheliogram was taken.
SOT obtained Ca ii H observations at a one-minute cadence during the time interval of the sit-and-stare observations that are the focus of this study. To coalign those observations we use an EIS O vi 184.12 Å spectroheliogram, MDI magnetograms, SOT Spectro-Polarimeter (SP) magnetograms, and SOT Ca ii H line filtergrams. The EIS O vi 184.12 Å spectroheliogram was part of the scan shown in Figure 1 (left), taken several hours before the sit-and-stare observations. Coalignment was carried out by rescaling the images, using cross-correlation techniques to overlap the selected subfields and finally blinking the images to ensure minimal offsets.
Figure 4 displays the O vi 184.12 Å spectroheliogram in the top left corner, with the dark vertical line representing the slit position of the sit-and-stare observations. The top right image is from a full-disk MDI magnetogram taken at 14:27 UT. The three small structures in the lower left corner of the magnetogram were used to coalign the O vi image, which shows bright emission structures at the same locations. In the next step we use SP data to coalign with the MDI magnetogram. The SP was scanning the target region on 2007 August 22 between 18:03 UT and 19:01 UT, at the same time the Ca ii H image sequence was taken. The lower right image in Figure 4 gives the apparent longitudinal field strength as derived from the Stokes spectra. Comparing the two magnetograms we can see that the overall structure of the bi-polar region has hardly evolved over time and that the SP data show considerable additional fine structure in the magnetic field. We also note that in the time between the MDI magnetogram and SP magnetogram flux cancellation has taken place at the polarity inversion line of the bi-polar region (at around \(x=-610\)″). The eruption observed a few hours later in EUVI and _TRACE_ was probably triggered by this flux cancellation.
<figure><img src="content_image/1003.0420/x5.png"><figcaption>Figure 5.— EIS sit-and-stare Doppler-shift data in four emission linescovering the entire sit-and-stare observation. The Doppler shift in eachwindow has been adjusted so that the zero value is the average over thewindow. The maximum and minimum values plotted in each case are +20 and −20 kms−1, respectively. This study focuses on the spatial range from −200″ to −215″in the time interval from 20:07 to 21:53 UT.</figcaption></figure>
<figure><img src="content_image/1003.0420/x6.png"><figcaption>Figure 6.— Doppler shift data averaged over the 16 detector rows from −200″ to−215″. The emission lines shown from top to bottom are Fe xi 188.23 Å, Fe xii195.12 Å, Fe xiii 202.04 Å, Fe xiv 274.20 Å, and Fe xv 284.16 Å. Time ismeasured in minutes from the start of the sit-and-stare data at 18:13:06 UT.The zero value for Doppler shifts is set to the average Doppler shift in eachline and is shown using the horizontal lines. Data for each emission line aredisplaced from the next set by 4 km s−1.</figcaption></figure>
The target area for the oscillation study is in the black circle, a small magnetic region of positive polarity. In the final coalignment step the SOT Ca ii H images were coaligned with the SP scan. The bottom left image in Figure 4 shows the temporal average of the time sequence taken between 19:50 UT and 21:40 UT. The black circle encompasses a small network bright point that overlaps with the EIS slit and the region averaged along the slit in the analysis that is the focus of this paper. As we have already noted, the EIS spectroheliograms taken before and after the sit-and-stare observation show that considerable structural evolution has taken place. In the Fe xv 264.78 Å spectroheliogram taken after the sit-and-stare observation, there is significant emission within the 6″ region of the EIS slit that is the focus of this paper, suggesting that the location is near one end of one of the coronal loops seen in the lower three panels in the right side of Figure 1. On the other hand the images in the EIS spectroheliograms taken immediately before the sit-and-stare observation show much less evidence for a loop rooted at that location. Thus we can only conclude that the bright feature we observe in the Ca ii H line may be the footpoint of a coronal loop. We do note, however, that there are no other strong magnetic concentrations close to the location of the region that is the focus of this study. The ones to the south-east are about 10″ away—more than our co-alignment errors. We believe that loop footpoints will be anchored in strong flux concentrations.
The basic building block for the sit-and-stare observation was EIS Study ID 56. This study takes fifty 30 s exposures in 20 spectral windows with a solar \(y\)-position coverage of 400″. The entire sit-and-stare observation consisted of four executions of the study, with each execution invoked with a repetition count of four. Thus, the full sit-and-stare data set consists of 16 repetitions of the basic building block. There was a brief delay between each invocation of the study, leading to gaps in the resulting time series.
Over both orbital periods and shorter time intervals, the _Hinode_ pointing fluctuates in both the solar \(x\)- and \(y\)-directions by up to about 3″ peak-to-peak. This is due to both spacecraft pointing variations and changes in the location of EIS relative to the location of the Sun sensors on the spacecraft. Studies of the latter variations have shown that they are well-correlated with similar variations observed in data from the _Hinode_ X-Ray Telescope (XRT). We have therefore used a modified version of the software provided by the XRT team to compute the average \(y\)-position of each pixel along the slit over each exposure and then interpolated the fitted centroid positions onto a uniform \(y\)-position grid.
It is not, of course, possible to correct the sit-and-stare observations for fluctuations in the \(x\)-position on the Sun. Plots of the \(x\)-position pointing fluctuations show that they tend to be smoother than the \(y\)-direction fluctuations, and that they are dominated by the orbital period. If periodic Doppler shifts are present only in very small structures, we would thus expect the signal to show a modulation with the orbital period. Larger structures, \(\geq 3\)″ in the \(x\)-direction, that display coherent Doppler shifts should not be affected by the spacecraft \(x\)-position pointing variations.
Over much of the EIS slit during the sit-and-stare observation there is no evidence for interesting dynamical behavior in the Doppler shift observations. The portion of the slit that covers the brighter core area of the region does, however, show evidence for periodic changes in the Doppler shifts. Figure 5 shows the measured Doppler-shift data in four emission lines over this \(y\)-position range as a function of time. The gaps between each invocation of the study appear as wider pixels near 20, 22, and 0 UT. Note that the data are also affected by passage of the spacecraft through the South Atlantic Anomaly (SAA). A small region of compromised data appears near 19:30 UT, and major SAA passages are evident near 21:00 and 22:45 UT.
The display shows clear evidence for periodic fluctuations in all the emission lines at \(y\)-positions of roughly \(-170\)″ to \(-180\)″ during the first hour shown in the plots. Periodic fluctuations are also visible over \(y\)-locations centered near \(-210\)″. Note that there is some evidence of longer period fluctuations, for example in the Fe xv 284.16 Å emission line. These fluctuations are probably the result of the correction for orbital line centroid shifts not fully removing those variations. In the remainder of this paper, we focus on the area near \(-210\)″ that shows evidence for oscillatory phenomena.
## 3. Analysis
### Doppler Shift Oscillations
As we noted earlier, solar \(y\)-positions between \(-200\)″ and \(-215\)″ show considerable oscillatory behavior, particularly in the set of data taken beginning at 20:07:01 UT. Figure 6 shows the averaged Doppler shift data over the time period of this set of observations for, from top to bottom, Fe xi 188.23 Å, Fe xii 195.12 Å, Fe xiii 202.04 Å, Fe xiv 274.20 Å, and Fe xv 284.16 Å. The Doppler shifts have been averaged over the 16 detector rows from \(-200\)″ to \(-215\)″. Data from 172 to 180 minutes were taken during SAA passage and have been removed from the plot.
The Doppler shift data over this portion of the EIS slit show clear evidence for low-amplitude, roughly 2–4 km s\({}^{-1}\), oscillatory behavior with a period near 10 minutes. For some of the time period, particularly after 180 minutes, there appears to be a clear trend for the oscillations to display increasing amplitude as a function of increasing temperature of line formation.
| Ion | Wavelength (Å) | Log T (K) | PD (minutes) | δv (km s−1) | PI (minutes) | δI/I (%) |
|---|---|---|---|---|---|---|
| Fe xi | 188.23 | 6.07 | 10.0 | 1.2 | 11.8 | 1.1 |
| Fe xii | 195.12 | 6.11 | 10.1 | 1.1 | 11.4 | 0.9 |
| Fe xiii | 202.04 | 6.20 | 9.1 | 1.3 | 11.2 | 1.4 |
| Fe xiv | 274.20 | 6.28 | 9.1 | 1.3 | 11.4 | 2.0 |
| Fe xv | 284.16 | 6.32 | 9.0 | 1.4 | 7.6 | 2.4 |

Table 1. Periods and Amplitudes Detected in Doppler Shift and Intensity Data
Because there is a significant gap in the Doppler shift data, neither Fourier time series analysis nor wavelet analysis is appropriate. Instead, we examine the time series by calculating periodograms using the approach outlined in Horne & Baliunas (1986) and Press & Rybicki (1989). Figure 7 shows the periodograms calculated from the Doppler shift data shown in Figure 6. Also plotted on the figure are the 99% and 95% significance levels. All but one of the time series show a peak in the periodogram at the 99% confidence level, and the largest peak in the Fe xv 284.16 Å emission line data is at the 95% confidence level. Table 1 lists the period of the most significant peak in each of the panels shown in the figure.
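For readers who want to reproduce this kind of analysis, the sketch below shows one way to compute a periodogram and its false-alarm levels for an unevenly sampled Doppler series in Python. It uses astropy's Lomb-Scargle implementation as a stand-in for the Horne & Baliunas (1986) / Press & Rybicki (1989) algorithm used here; the function names, frequency range, and synthetic example series are our own illustrative assumptions, not part of the original analysis.

```python
import numpy as np
from astropy.timeseries import LombScargle

def doppler_periodogram(t_min, v_kms, fap=(0.01, 0.05)):
    """Lomb-Scargle periodogram of an unevenly sampled Doppler series.

    t_min : sample times in minutes (gaps, e.g. SAA passages, simply left out)
    v_kms : Doppler shifts in km/s
    Returns periods (minutes), normalized power, and the power levels
    corresponding to the requested false-alarm probabilities.
    """
    ls = LombScargle(t_min, v_kms)
    # Search periods between roughly 4 and 40 minutes, bracketing the
    # 7-14 minute signals discussed in the text.
    freq, power = ls.autopower(minimum_frequency=1.0/40.0,
                               maximum_frequency=1.0/4.0)
    levels = ls.false_alarm_level(fap)
    return 1.0 / freq, power, levels

# Synthetic example standing in for a row-averaged Fe XII Doppler series:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 110.0, 200))                  # minutes, uneven
v = 1.2 * np.sin(2 * np.pi * t / 10.1) + 0.5 * rng.standard_normal(t.size)
periods, power, (lev99, lev95) = doppler_periodogram(t, v)
print("strongest period: %.1f min" % periods[np.argmax(power)])
print("peak power %.2f vs 1%% FAP level %.2f" % (power.max(), lev99))
```

Because the Lomb-Scargle method works on the irregular time grid directly, the data gaps only reduce sensitivity rather than invalidating the spectrum, which is the reason a periodogram is preferred here over an FFT or wavelet transform.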
To estimate the amplitude of the oscillations, we detrend the Doppler shift time series by subtracting a background computed using averaged data in a 10-minute window centered on each data point and then compute the rms amplitude as the standard deviation of the mean. For a sine wave, the peak velocity is the rms value multiplied by \(\sqrt{2}\). These peak velocity values are also listed in the table in the \(\delta v\) column. Visual inspection of the data suggests that the numbers in the table are smaller than what might be obtained by fitting the data. The numbers do confirm the impression that the oscillation amplitude increases with increasing temperature of line formation.
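A minimal sketch of the detrending and amplitude estimate just described, reading the rms as the standard deviation of the series about its 10-minute running mean (our interpretation) and scaling by \(\sqrt{2}\) for a sine wave; the helper names are illustrative.

```python
import numpy as np

def running_mean_detrend(t_min, y, window_min=10.0):
    """Subtract a running mean computed in a time window centred on each point."""
    background = np.array([y[np.abs(t_min - t0) <= window_min / 2.0].mean()
                           for t0 in t_min])
    return y - background

def peak_velocity(t_min, v_kms, window_min=10.0):
    """rms of the detrended series about the running mean, times sqrt(2)."""
    resid = running_mean_detrend(t_min, v_kms, window_min)
    return np.sqrt(2.0) * resid.std()
```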
It is clear from the Doppler shift plots in Figure 6 that the oscillations are not always present. Instead, they appear for a few periods and then disappear. To understand better this behavior, we have fitted the time intervals where oscillations are obvious with a combination of a damped sine wave and a polynomial background. Thus, for each time period where oscillations are present we assume that the data can be fitted with a function of the form
\[v(t)=A_{0}\sin(\omega t+\phi)\exp(-\lambda t)+B(t),\] (1)
where
\[B(t)=b_{0}+b_{1}t+b_{2}t^{2}+b_{3}t^{3}+\cdots\] (2)
is the trend in the background data. Time is measured from an initial time \(t_{0}\), which is different for each set of oscillations we fit. The fits were carried out using Levenberg-Marquardt least-squares minimization (Bevington, 1969). Generally only two terms in the background polynomial were necessary.
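A possible implementation of the fit in equations (1) and (2) is sketched below with scipy's `curve_fit`, which defaults to Levenberg-Marquardt minimization for unbounded problems, keeping two background terms as the text indicates is usually sufficient. The initial guesses and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A0, period, phi, lam, b0, b1):
    """Eq. (1) with a two-term background B(t) = b0 + b1*t (eq. 2)."""
    return A0 * np.sin(2*np.pi*t/period + phi) * np.exp(-lam*t) + b0 + b1*t

def fit_oscillation(t_min, v_kms, t0, t1, period_guess=9.0):
    """Fit a damped sine plus linear background inside [t0, t1] (minutes).

    Time is measured from t0, as in Table 2.  Returns the best-fit
    parameters and their one-sigma uncertainties.
    """
    sel = (t_min >= t0) & (t_min <= t1)
    t, v = t_min[sel] - t0, v_kms[sel]
    p0 = [np.sqrt(2.0)*v.std(), period_guess, 0.0, 0.0, v.mean(), 0.0]
    popt, pcov = curve_fit(damped_sine, t, v, p0=p0)  # Levenberg-Marquardt
    return popt, np.sqrt(np.diag(pcov))
```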
<figure><img src="content_image/1003.0420/x7.png"><figcaption>Figure 7.— Periodograms computed for the Doppler shift data shown in Figure 6.The solid horizontal line on each plot indicates the power equivalent to acalculated false alarm probability of 1% and the dashed line is for a falsealarm probability of 5%.</figcaption></figure>
<figure><img src="content_image/1003.0420/x8.png"><figcaption>Figure 8.— Decaying sine wave fits to the Doppler shift data beginning roughly142 minutes after the start of the sit-and-stare observation. The plotted datahas had the polynomial background removed. Vertical dashed lines show the timerange used in the fitting.</figcaption></figure>
Figure 8 shows the results of this fitting for the Doppler shift data beginning 142.5 minutes after the start of the sit-and-stare observation. All the fits show roughly the same amplitudes, periods, and phases. Emission lines formed at the higher temperatures (Fe xiv and Fe xv) show clear evidence for more than one full oscillation period. At lower temperatures, the oscillatory signal damps much more rapidly.
Table 2 lists the amplitudes, \(A_{0}\), periods, \(P\), phases, \(\phi\), and the inverse of the decay rate, \(\lambda\), that result from fitting all the time periods in the data for which a reasonable fit to the Doppler shift data could be obtained. The periods are consistent with the results of the periodogram analysis for the entire time interval. Generally, the amplitudes are larger than the \(\delta v\) values shown in Table 1. Note that some of the fits show negative decay times, indicating that some of the oscillations show a tendency to grow with time. In these cases, this is not followed by a decay, but rather a rapid loss of the oscillatory signal.
All the fits use the start times \(t_{0}\) listed in the table and thus the phase values for each time interval can not be directly compared. When the phases are adjusted to a common start time, the values do not agree. Thus while the periods are similar for each time interval in which oscillations are observed, it appears that each event is being independently excited.
| t0 (min) | Ion | A0 (km s−1) | P (min) | ϕ (rad) | λ−1 (min) |
|---|---|---|---|---|---|
| 113.9 | Fe xi | 0.9 | 8.8 | 1.8 | +19.0 |
| 113.9 | Fe xii | 0.7 | 9.2 | 2.3 | −216 |
| 113.9 | Fe xiii | 0.7 | 8.8 | 2.3 | −73.3 |
| 113.9 | Fe xiv | 0.3 | 10.7 | 3.9 | −10.9 |
| 113.9 | Fe xv | 0.5 | 9.5 | 2.9 | −24.5 |
| 142.5 | Fe xi | 2.4 | 10.4 | 1.0 | +5.1 |
| 142.5 | Fe xii | 2.0 | 8.4 | 0.9 | +6.6 |
| 142.5 | Fe xiii | 1.6 | 8.2 | 1.2 | +15.0 |
| 142.5 | Fe xiv | 2.0 | 8.7 | 1.9 | +23.9 |
| 142.5 | Fe xv | 2.2 | 8.9 | 2.1 | +23.3 |
| 190.0 | Fe xi | 0.3 | 8.8 | 4.7 | −411 |
| 190.0 | Fe xii | 0.8 | 7.5 | 3.5 | +36.8 |
| 190.0 | Fe xiii | 1.2 | 7.3 | 3.1 | +36.3 |
| 190.0 | Fe xiv | 1.5 | 7.3 | 3.2 | +43.6 |
| 190.0 | Fe xv | 2.2 | 7.5 | 3.4 | +12.8 |

Table 2. Doppler Shift Oscillation Properties
Examining the amplitude of the oscillations in each time interval as a function of the temperature of line formation shows no clear trend. The data set starting at 113.9 minutes seems to show evidence for a decrease in amplitude with increasing temperature, while the data set beginning at 190.0 minutes shows the opposite trend. Similarly, there is no clear trend in the periods of the oscillations in each data set as a function of temperature.
### Intensity Oscillations
If the observed Doppler shift oscillations are acoustic in nature, then they should also be visible in the intensity data. For a linear sound wave, \(v=c_{\textrm{s}}\delta\rho/\rho\), where \(v\) is the amplitude of the wave, \(c_{\textrm{s}}\) is the sound speed, and \(\delta\rho\) is the density perturbation on the background density \(\rho\). Taking an amplitude of 2 km s\({}^{-1}\) yields values of \(\delta\rho/\rho\) of around 1%. Since the optically thin emission scales as the density squared, the intensity fluctuation is \(\delta I/I\approx 2\delta\rho/\rho\), so we expect only about a 2% fluctuation in the measured intensity. This number could of course increase if the actual velocity is much larger due to a large angle between the line of sight and the direction of the coronal structure being measured. Figure 9 shows the measured intensity data averaged over the same locations as the Doppler shift data shown in Figure 6. The data show little or no evidence for oscillations with the periods measured in the Doppler shift data. This is borne out by a periodogram analysis of the time series in the figure, which shows no significant peaks.
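The arithmetic behind this estimate can be reproduced in a few lines. The temperature, mean molecular weight, and adiabatic index used for the sound speed below are our assumed values for 1-2 MK coronal plasma, not numbers taken from the paper.

```python
import numpy as np

k_B = 1.380649e-23   # J/K
m_p = 1.672622e-27   # kg

def sound_speed_kms(T_K, gamma=5.0/3.0, mu=0.6):
    """Adiabatic sound speed for a fully ionized plasma (assumed gamma, mu)."""
    return np.sqrt(gamma * k_B * T_K / (mu * m_p)) / 1.0e3

T = 10**6.2                 # K, near the Fe XIII formation temperature (assumed)
cs = sound_speed_kms(T)     # roughly 190 km/s
v = 2.0                     # km/s, observed Doppler amplitude
drho = v / cs               # linear sound wave: delta-rho / rho
dI = 2.0 * drho             # emission ~ density squared: delta-I / I
print("c_s = %.0f km/s, drho/rho = %.1f%%, dI/I = %.1f%%" % (cs, 100*drho, 100*dI))
```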
If, however, we detrend the data by subtracting the gradually evolving background signal, there is some evidence for an oscillatory signal. Figure 10 shows the data in Figure 9 with a background consisting of a 10-minute average of the data centered on each data point subtracted. All the emission lines show some evidence for an oscillatory signal, with the Fe xiii 202.04 Å emission line being the most obvious.
<figure><img src="content_image/1003.0420/x9.png"><figcaption>Figure 9.— Normalized intensity data averaged over the 16 detector rows from−200″ to −215″. The emission lines shown from top to bottom are Fe xi 188.23Å, Fe xii 195.12 Å, Fe xiii 202.04 Å, Fe xiv 274.20 Å, and Fe xv 284.16 Å.Time is measured in minutes from the start of the sit-and-stare data at18:13:06 UT. Data for each emission line are displaced from the next set by0.2.</figcaption></figure>
<figure><img src="content_image/1003.0420/x10.png"><figcaption>Figure 10.— Detrended intensity data averaged over the 16 detector rows from−200″ to −215″. The emission lines shown from top to bottom are Fe xi 188.23Å, Fe xii 195.12 Å, Fe xiii 202.04 Å, Fe xiv 274.20 Å, and Fe xv 284.16 Å.Time is measured in minutes from the start of the sit-and-stare data at18:13:06 UT. Data for each emission line are displaced from the next set by100 ergs cm−2 s−1 sr−1.</figcaption></figure>
<figure><img src="content_image/1003.0420/x11.png"><figcaption>Figure 11.— Periodograms computed for the detrended intensity data shown innormalized form in Figure 9. The solid horizontal line on each plot indicatesthe power equivalent to a calculated false alarm probability of 1% and thedashed line is for a false alarm probability of 5%.</figcaption></figure>
<figure><img src="content_image/1003.0420/x12.png"><figcaption>Figure 12.— Decaying sine wave fits to the detrended intensity data beginningroughly 138 minutes after the start of the sit-and-stare observation. Theintensities have been converted to residual intensities by taking thedifference between the intensity at each data point and the running meanintensity and dividing it by the running mean. The plotted data has also hadthe polynomial background used in the fit removed. Vertical dashed lines showthe time range used in the fitting. Also plotted as green curves on the panelsfor the Fe xiv and Fe xv intensity data are the fitting results for the Fe xivand Fe xv Doppler shift data. Those plots show −v(t).</figcaption></figure>
Figure 11 shows periodograms constructed for the emission lines shown in Figure 10. Each periodogram shows a significant peak. The periods for the strongest peak in each periodogram are listed in Table 1. The periods are generally consistent with those determined from the Doppler shift data. Also listed in the table is an estimate of the intensity fluctuation in each emission line. This was obtained by computing the standard deviation of the detrended intensity in the time series for each emission line and then dividing the result by the average intensity. The values are roughly consistent with those expected based on the \(\delta v\) estimates listed in Table 1.
While the oscillatory signal is much less strong in the detrended intensity data than in the Doppler shift data, it is possible to fit some of the data in roughly the same time intervals that were used for the results listed in Table 2. Table 3 shows the resulting fit information. Note that intensity oscillation fits were not possible for all the lines for which the Doppler shift data could be fitted. Figure 12 shows an example of the fits to the detrended intensity data. For these plots the intensity has been converted to a residual intensity expressed in % by taking the difference between each data point and the 10-minute average and dividing it by the 10-minute average. The plotted points have also had the polynomial background subtracted. To facilitate comparisons with the Doppler shift data we have also plotted as green curves on the panels for the Fe xiv and Fe xv intensity data the fitting results for the Fe xiv and Fe xv Doppler shift data.
Comparison of the fits in Figure 12 with those in Figure 8 shows some similarities and many differences between the two data sets. For the lines with a reasonably strong signal, the periods measured for the residual intensity data are similar to those measured for the Doppler shifts. In addition, the residual intensities show the same trend to larger amplitudes as the temperature of line formation increases. The intensity oscillations clearly start earlier than the Doppler shift oscillations. Moreover, while the Doppler shift oscillations are damped, the intensity oscillations appear to grow with time over the same interval. This appears to be the case for the other time intervals as well. Fitting the detrended intensity signal is more challenging than fitting the Doppler shift signal. Also, implicit in the fitting is the idea that a damped sine wave can fully represent what is probably a more complex signal. Thus, we are reluctant to read too much into this growth in the intensity until we can determine from additional data sets if it is a common phenomenon.
| t0 (min) | Ion | A0 (%) | P (min) | ϕ (rad) | λ−1 (min) |
|---|---|---|---|---|---|
| 113.9 | Fe xii | 2.1 | 8.0 | 4.2 | +8.2 |
| 113.9 | Fe xiii | 1.0 | 12.3 | 4.4 | +37.6 |
| 137.7 | Fe xi | 0.2 | 12.7 | 0.0 | −11.4 |
| 137.7 | Fe xii | 0.4 | 10.3 | 3.9 | −17.8 |
| 137.7 | Fe xiii | 1.1 | 9.9 | 3.5 | −50.2 |
| 137.7 | Fe xiv | 2.1 | 9.3 | 3.0 | −41.8 |
| 137.7 | Fe xv | 2.6 | 9.3 | 3.0 | −48.6 |
| 190.0 | Fe xi | 0.5 | 8.8 | 5.0 | −81.1 |
| 190.0 | Fe xii | 0.4 | 13.7 | 3.7 | −32.6 |
| 190.0 | Fe xiii | 1.1 | 13.5 | 3.8 | −83.6 |
| 190.0 | Fe xiv | 0.8 | 12.8 | 2.5 | −20.3 |
| 190.0 | Fe xv | 1.3 | 6.9 | 5.6 | −22.8 |

Table 3. Intensity Oscillation Properties
An important factor in determining the nature of the oscillations is the phase difference between the Doppler shift signal and the intensity signal. Comparing the phases of the fits listed in Table 2 with those in Table 3 is difficult because the periods are not identical, but, since the periods are close, the differences do not significantly alter any conclusions that we might draw. To facilitate this comparison we have plotted as green curves on the panels for the Fe xiv and Fe xv intensity data in Figure 12 the fitting results for the Fe xiv and Fe xv Doppler shift data. For the Doppler shift data, we plot \(-v(t)\). The curves show that for these two ions, the intensity variations are close to \(180\arcdeg\) out of phase with the Doppler shift variations. Since we define the Doppler shift as \(c\,\delta\lambda/\!\lambda\), this means that the peak intensity corresponds to a blueshift, indicating an upward propagating wave. For the other two time intervals, the situation is more ambiguous. Examination of the tables shows that in many cases, the periods are significantly different for the same Doppler shift and intensity data in the same line. In those cases where the periods are close (e.g., Fe xii at \(t_{0}=113.9\) minutes and Fe xv at \(t_{0}=190.0\) minutes), examination of the plots similar to Figures 8 and 12 shows the same \(180\arcdeg\) phase shift, again indicating upwardly propagating oscillations.
Even for the cases where the periods are close, the agreements in the phases are only approximate. For the Fe xiv and Fe xv intensity and Doppler shift fits shown in Figure 12, the intensity oscillation leads the Doppler shift oscillation by a small fraction of a period. For both the Fe xii data at \(t_{0}=113.9\) minutes and the Fe xv data at \(t_{0}=190.0\) minutes, the Doppler shift oscillation leads the intensity oscillation by a fraction of a period. In both cases, this difference is less than the \(1/4\) period expected for a standing-mode MHD wave (Sakurai et al., 2002). Wang et al. (35) observed propagating waves with periods in the four to six minute range in EIS active region observations. They noted that for most cases the Doppler shift and intensity oscillations were nearly in phase. In the cases where there was a difference, the phase of the intensity was earlier than the Doppler shift, as is the case for the data shown in Figures 8 and 12. Theoretical modeling of propagating slow waves with periods near five minutes shows that thermal conduction can produce phase shifts between the intensity and Doppler shifts (Owen et al., 2009). Further study of EIS data sets where both the Doppler shift and intensity can be fitted could provide valuable constraints on these models.
### Density Oscillations
The electron density is one factor in determining the Alfvén speed in the oscillating plasma. Moreover, for magnetoacoustic fluctuations, we expect the density to also oscillate. Thus a direct measurement can aid in disentangling the nature of the oscillations. The sit-and-stare observations included density-sensitive line pairs of Fe xii (186.88 Å/195.12 Å) and Fe xiii (203.83 Å/202.04 Å). Using data from version 5.2 of the CHIANTI database (Landi et al., 2006; Dere et al., 1997), we computed the electron density at each time for the row-averaged data. For Fe xii, CHIANTI uses energy levels and radiative decay rates from Del Zanna & Mason (2005), electron collision strengths from Storey et al. (2005), and proton collision rate coefficients from Landman (1978). For Fe xiii, CHIANTI uses energy levels from Penn & Kuhn (1994), Jupen et al. (1993), and version 1.0 of the NIST database; radiative decay rates from Young (2004), electron collision strengths from Gupta & Tayal (1998), and proton collision rate coefficients from Landman (1975). These diagnostics are discussed in detail in Young et al. (2009).
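The CHIANTI atomic calculation itself is beyond a short example, but the final step, turning a measured line ratio into a density, is simple interpolation on a theoretical ratio-density curve. The sketch below illustrates only that inversion; the tabulated curve and the intensities are made-up placeholders, not CHIANTI output or measured values.

```python
import numpy as np

# Placeholder theoretical curve: ratio R(n_e) on a log10(n_e) grid.  In the
# actual analysis this curve would come from CHIANTI for, e.g., the
# Fe XII 186.88/195.12 pair; the numbers below are illustrative only.
log_ne_grid = np.linspace(8.0, 11.0, 31)
ratio_grid = 0.02 + 0.6 / (1.0 + 10**(9.8 - log_ne_grid))   # monotonic, fake

def density_from_ratio(measured_ratio):
    """Invert a monotonic ratio-vs-density curve by interpolation."""
    return 10**np.interp(measured_ratio, ratio_grid, log_ne_grid)

# One density per exposure from hypothetical row-averaged intensities:
I_186, I_195 = 120.0, 800.0
print("n_e ~ %.1e cm^-3" % density_from_ratio(I_186 / I_195))
```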
<figure><img src="content_image/1003.0420/x13.png"><figcaption>Figure 13.— Electron density determined using the Fe xii 186.88 Å/195.12 Åratio (top) and the Fe xiii 203.83 Å/202.04 Å ratio (bottom) for the databeginning roughly 138 minutes after the start of the sit-and-stareobservation.</figcaption></figure>
Figure 13 shows the derived electron densities as a function of time for the same time interval shown in Figures 8 and 12. Both sets of derived densities show the same overall time behavior, but the absolute values differ by nearly a factor of three. Differences between these diagnostics have been noted before (e.g., Young et al., 2009), and are thought to be due to issues with the atomic data. It is not yet clear which of the values should be considered the most reliable.
<figure><img src="content_image/1003.0420/x14.png"><figcaption>Figure 14.— Decaying sine wave fits to the smoothed electron densitydetermined using the Fe xii 186.88 Å/195.12 Å ratio (top) and the Fe xiii203.83 Å/202.04 Å ratio (bottom) for the data beginning 190 minutes after thestart of the sit-and-stare observation along with. The plotted data has hadthe polynomial background removed. Vertical dashed lines show the time rangeused in the fitting.</figcaption></figure>
Neither of the density time series shown in the figure displays any evidence for the oscillations detected in the detrended intensity data. If we smooth the density time series with a 10-minute running mean, there is some evidence for oscillatory behavior over the time range beginning at 190 minutes. Decaying sine wave fits to that region are shown in Figure 14. For the Fe xii time series the fit has an amplitude of \(1.9\times 10^{7}\) cm\({}^{-3}\), a period of 13.5 minutes, a decay time of \(-46.4\) minutes, and a phase of 3.5 radians, all consistent with the values listed for the Fe xii data in Table 3 for this time interval. For the Fe xiii time series the fit has an amplitude of \(5.1\times 10^{6}\) cm\({}^{-3}\), a period of 10.9 minutes, a decay time of \(-25.1\) minutes, and a phase of 1.9 radians. With the exception of the phase, these values are generally consistent with the Fe xiii data in Table 3 for this time interval. The amplitudes for the Fe xii and Fe xiii fits are 0.5% and 0.4% of the average density in the time interval. Since we expect \(\delta I/I\) to be roughly \(2\delta\rho/\rho\), these density amplitudes are consistent with the observed intensity fluctuations of about 1%.
### Underlying Chromospheric Behavior
As we pointed out earlier, SOT obtained a time sequence of Ca ii H images that is co-temporal with the EIS sit-and-stare observations starting at 19:50 UT and ending at 21:40 UT, with a constant cadence of 60 s. To further use this data, we applied the standard reduction procedure provided by the SOT team that is available in SolarSoft. The images of the Ca ii H sequence were then carefully aligned using Fourier cross-correlation techniques to remove residual jitter and the drift of the SOT correlation tracker.
As can be seen in the lower left panel of Figure 4, the EIS slit covers the network bright point, which is the focus of the chromospheric analysis. Considering the accuracy of the coalignment and the spatial averaging applied to the EIS data, we also average the Ca ii H signal.
Figure 15 shows the time history of the Ca ii H-line intensity for three different sized spatial areas centered on the feature shown in Figure 4. The sizes of the regions are given in SOT pixels, which are \(0.10896\)″ in size. As expected for chromospheric lines, all three averaged data sets show evidence for intensity oscillations with a period near 5 minutes. The data, however, are quite noisy and periodograms constructed from them show no significant peaks. Detrending the data, however, results in periodograms with significant peaks. Figure 16 shows periodograms for the three data sets with each detrended by subtracting a 9-minute running mean from each data point. The periodograms show clear evidence for oscillations near 5 minutes.
<figure><img src="content_image/1003.0420/x15.png"><figcaption>Figure 15.— Ca ii H intensity data averaged over different spatial areascentered on the magnetic feature highlighted in Figure 4. Each data set hashad the mean value subtracted. Spatial areas are given in SOT pixels, whichare 0.10896″in size, giving areas of 2.18\arcsec×2.18\arcsec,3.77\arcsec×3.77\arcsec, and 4.36\arcsec×4.36\arcsec, respectively.</figcaption></figure>
<figure><img src="content_image/1003.0420/x16.png"><figcaption>Figure 16.— Periodograms computed for the detrended Ca ii H intensity data.The size of the area averaged over in pixels in indicated in each plot. Thesolid horizontal line on each plot indicates the power equivalent to acalculated false alarm probability of 1% and the dashed line is for a falsealarm probability of 5%. Spatial areas are given in SOT pixels, which are0.10896″in size, giving areas of 2.18\arcsec×2.18\arcsec,3.77\arcsec×3.77\arcsec, and 4.36\arcsec×4.36\arcsec, respectively.</figcaption></figure>
There is also some evidence for power at periods between 9 and 10 minutes, but with more than a 5% false alarm probability. If we instead detrend the data with an 11-minute running mean, then the peak between 9 and 10 minutes becomes more prominent and is at or above the 5% false alarm probability level for the \(30~{}\mathrm{pixel}\times 30~{}\mathrm{pixel}\) and \(20~{}\mathrm{pixel}\times 20~{}\mathrm{pixel}\) data sets. Wavelet analysis of the data sets detrended with both a 9- and 11-minute running mean shows significant signal near 9 minutes.
Note that the data in Figure 15 shows three larger peaks in all three data sets. The first peak, near 110 minutes, occurs before the beginning of the EIS data plots in Figures 6 and 10. The other two, at roughly 140 and 170 minutes, come just before significant oscillatory signal is observed in the EIS Doppler shift and detrended intensity data. It is tempting to suggest that these enhancements correspond to chromospheric events that resulted in the oscillations observed with EIS.
In principle the EIS He ii 256.32 Å data can bridge the gap in temperature between the SOT Ca ii data and the Fe lines formed at higher temperatures that are the main focus of this study. In practice the He ii data are challenging to analyze. The line is closely blended with a Si x line at 256.37 Å along with a smaller contribution from Fe x and Fe xiii ions at slightly longer wavelengths (Brown et al., 2008). In an effort to see if a connection can be made, we have made two-component Gaussian fits to the row-averaged He ii data. Periodograms of the resulting Doppler shift data show no significant periods. Periodograms of the He ii fitted intensities detrended with a 10-minute running mean show no peaks at the 1% false alarm probability level and one peak with a period between 12 and 13 minutes at the 5% false alarm probability level. Examining the detrended He ii intensity data, we do not see a peak in the data at 140 minutes, but do see a significant increase at 170 minutes. Thus the He ii data only weakly support our suggestion that the enhancements seen in the Ca ii data correspond to chromospheric events that result in the oscillations observed with EIS in the Fe lines.
## 4. Discussion and Conclusions
As we pointed out in §1, a number of investigations have detected Doppler shift oscillations with EIS. Based on the phase differences between the Doppler shift and the intensity, we believe that the signals we have detected are upwardly propagating magnetoacoustic waves. The periods we detect are between 7 and 14 minutes. For a set of observations that begins at a particular time, there is considerable scatter in the measured periods and amplitudes. This is probably due to the relatively weak signal we are analyzing. But it may also be an indication that a simple sine wave fit is not a good representation of the data. It is likely that each line-of-sight passes through a complex, time-dependent dynamical system. While a single flux tube may respond to an oscillatory signal by exhibiting a damped sine wave in the Doppler shift, a more complex line-of-sight may display a superposition of waves.
Coalignment of the EIS data with both SOT and MDI magnetograms shows that the portion of the EIS slit analyzed in this study corresponds to a unipolar flux concentration. SOT Ca ii images show that the intensity of this feature exhibits 5-minute oscillations typical of chromospheric plasma, but also exhibits some evidence for longer period oscillations in the time range detected by EIS. Moreover, the Ca ii intensity data show that the oscillations observed in EIS are related to significant enhancements in the Ca ii intensities, suggesting that a small chromospheric heating event triggered the observed EIS response.
Wang et al. (35) also detected propagating slow magnetoacoustic waves in an active region observed with EIS, which they associated with the footpoint of a coronal loop. While the oscillation periods they measured—5 minutes—were smaller than those detected here, many of the overall characteristics we see are the same. In each case, the oscillation only persists for a few cycles and the phase relationship indicates an upwardly propagating wave. In contrast with their results, however, we do not see a consistent trend for the oscillation amplitude to decrease with increasing temperature of line formation. Examination of both the Doppler shift data in Figure 6 and the results in Table 2 shows that in one case the amplitude has a tendency to decrease with increasing temperature of line formation (oscillation beginning at 113.9 minutes) and in another case the amplitude clearly increases with increasing temperature of line formation (oscillation beginning at 190 minutes). Thus the behavior reported by Wang et al. (35) does not appear to hold in all cases. O’Shea et al. (2002) noted that for oscillations observed above a sunspot the amplitude decreased with increasing temperature until the temperature of formation of Mg x, which is formed at roughly 1 MK. They then saw an increase in amplitude in emission from Fe xvi. All the EIS lines we have included in this study have temperatures of formation greater than 1 MK.
Combined EIS and SUMER polar coronal hole observations have also shown evidence for propagating slow magnetoacoustic waves (Banerjee et al., 2009). These waves have periods in the 10–30 minute range and appear to be more like those we observe, in that their periods are longer than those studied by Wang et al. (35).
Wang et al. (35) suggested that the waves they observed were the result of leakage of photospheric p-mode oscillations upward into the corona. The longer periods that we and Banerjee et al. (2009) observe are probably not related to p-modes. Instead, we speculate that the periods of the waves are related to the impulsive heating that may be producing them. If an instability sits near the threshold where, rather than producing a catastrophic release of energy, it wanders back and forth between heating and shutting off, waves would be created. The heating source could be at a single location or, for example, at neighboring locations where instability in one place causes a second nearby location to go unstable and begin heating plasma. In this view, the periods provide some insight into the timescale over which the heating rises and falls and thus may place limits on possible heating mechanisms.
The behavior of slow magnetoacoustic oscillations as a function of temperature has been the subject of considerable theoretical work. It is generally believed that the damping of the waves is due to thermal conduction (e.g., De Moortel & Hood, 2003; Klimchuk et al., 2004). Because thermal conduction scales as a high power of the temperature, conductive damping should be stronger for oscillations detected in higher temperature emission lines (e.g., Porter et al., 1994; Ofman & Wang, 2002). Earlier EIS observations of the damping of standing slow magnetoacoustic waves, however, show that this is not always the case (Mariska et al., 2008). Thus the temperature behavior of both the oscillation amplitude and the damping differs from some earlier results. We believe that additional observations will be required to understand fully the physical picture of what is occurring in the low corona when oscillations are observed. Given the complex set of structures that may be in the line of sight to any given solar location under the EIS slit, we are not entirely surprised that different data sets should yield different results, which in some cases differ from models. For example, none of the current models for oscillations in the outer layers of the solar atmosphere take into account the possibility that what appear to be single structures in the data might actually be bundles of threads with differing physical conditions.
Our observations along with others (e.g., Wang et al., 35, 36; Banerjee et al., 2009) show that low-amplitude upwardly propagating slow magnetoacoustic waves are not uncommon in the low corona. The periods observed to date range from 5 minutes to 30 minutes. In all cases, however, the wave amplitudes are too small to contribute significantly to coronal heating. But understanding how the waves are generated and behave as a function of line formation temperature and the structure of the magnetic field should lead to a more complete understanding of the structure of the low corona and its connection with the underlying portions of the atmosphere. Instruments like those on _Hinode_ that can simultaneously observe both the chromosphere and the corona should provide valuable additional insight into these waves as the new solar cycle rises and more active regions become available for study.
_Hinode_ is a Japanese mission developed, launched, and operated by ISAS/JAXA in partnership with NAOJ, NASA, and STFC (UK). Additional operational support is provided by ESA and NSC (Norway). The authors acknowledge support from the NASA _Hinode_ program. CHIANTI is a collaborative project involving NRL (USA), RAL (UK), MSSL (UK), the Universities of Florence (Italy) and Cambridge (UK), and George Mason University (USA). We thank the anonymous referee for his or her very helpful comments.
## References
* Aschwanden et al. (1999) Aschwanden, M. J., Fletcher, L., Schrijver, C. J., & Alexander, D. 1999, ApJ, 520, 880
* Banerjee et al. (2009) Banerjee, D., Teriaca, L., Gupta, G. R., Imada, S., Stenborg, G., & Solanki, S. K. 2009, A&A, 499, L29
* Berghmans & Clette (1999) Berghmans, D., & Clette, F. 1999, Sol. Phys., 186, 207
* Bevington (1969) Bevington, P. R. 1969, Data reduction and error analysis for the physical sciences (New York: McGraw-Hill)
* Brown et al. (2008) Brown, C. M., Feldman, U., Seely, J. F., Korendyke, C. M., & Hara, H. 2008, ApJS, 176, 511
* Culhane et al. (2007) Culhane, J. L., et al. 2007, Sol. Phys., 243, 19
* De Moortel & Hood (2003) De Moortel, I., & Hood, A. W. 2003, A&A, 408, 755
* DeForest & Gurman (1998) DeForest, C. E., & Gurman, J. B. 1998, ApJ, 501, L217
* Del Zanna & Mason (2005) Del Zanna, G., & Mason, H. E. 2005, A&A, 433, 731
* Dere et al. (1997) Dere, K. P., Landi, E., Mason, H. E., Monsignori Fossi, B. C., & Young, P. R. 1997, A&AS, 125, 149
* Gupta & Tayal (1998) Gupta, G. P., & Tayal, S. S. 1998, ApJ, 506, 464
* Horne & Baliunas (1986) Horne, J. H., & Baliunas, S. L. 1986, ApJ, 302, 757
* Jupen et al. (1993) Jupen, C., Isler, R. C., & Trabert, E. 1993, MNRAS, 264, 627
* Klimchuk et al. (2004) Klimchuk, J. A., Tanner, S. E. M., & De Moortel, I. 2004, ApJ, 616, 1232
* Kosugi et al. (2007) Kosugi, T., et al. 2007, Sol. Phys., 243, 3
* Landi et al. (2006) Landi, E., Del Zanna, G., Young, P. R., Dere, K. P., Mason, H. E., & Landini, M. 2006, ApJS, 162, 261
* Landman (1975) Landman, D. A. 1975, A&A, 43, 285
* Landman (1978) —. 1978, ApJ, 220, 366
* Mariska et al. (2008) Mariska, J. T., Warren, H. P., Williams, D. R., & Watanabe, T. 2008, ApJ, 681, L41
* Ofman & Wang (2002) Ofman, L., & Wang, T. 2002, ApJ, 580, L85
* O’Shea et al. (2002) O’Shea, E., Muglach, K., & Fleck, B. 2002, A&A, 387, 642
* Owen et al. (2009) Owen, N. R., De Moortel, I., & Hood, A. W. 2009, A&A, 494, 339
* Penn & Kuhn (1994) Penn, M. J., & Kuhn, J. R. 1994, ApJ, 434, 807
* Porter et al. (1994) Porter, L. J., Klimchuk, J. A., & Sturrock, P. A. 1994, ApJ, 435, 482
* Press & Rybicki (1989) Press, W. H., & Rybicki, G. B. 1989, ApJ, 338, 277
* Roberts (2000) Roberts, B. 2000, Sol. Phys., 193, 139
* Roberts et al. (1983) Roberts, B., Edwin, P. M., & Benz, A. O. 1983, Nature, 305, 688
* Roberts et al. (1984) —. 1984, ApJ, 279, 857
* Roberts & Nakariakov (2003) Roberts, B., & Nakariakov, V. M. 2003, in NATO Science Series: II: Mathematics, Physics and Chemistry, Vol. 124, Turbulence, Waves and Instabilities in the Solar Plasma, ed. R. Erdelyi, K. Petrovay, B. Roberts, & M. J. Aschwanden (Kluwer Academic Publishers, Dordrecht), 167–192
* Rosenberg (1970) Rosenberg, H. 1970, A&A, 9, 159
* Sakurai et al. (2002) Sakurai, T., Ichimoto, K., Raju, K. P., & Singh, J. 2002, Sol. Phys., 209, 265
* Storey et al. (2005) Storey, P. J., Del Zanna, G., Mason, H. E., & Zeippen, C. J. 2005, A&A, 433, 717
* Van Doorsselaere et al. (2008) Van Doorsselaere, T., Nakariakov, V. M., Young, P. R., & Verwichte, E. 2008, A&A, 487, L17
* Wang et al. (2002) Wang, T., Solanki, S. K., Curdt, W., Innes, D. E., & Dammasch, I. E. 2002, ApJ, 574, L101
* Wang et al. (2009a) Wang, T. J., Ofman, L., & Davila, J. M. 2009a, ApJ, 696, 1448
* Wang et al. (2009b) Wang, T. J., Ofman, L., Davila, J. M., & Mariska, J. T. 2009b, A&A, 503, L25
* Wuelser et al. (2004) Wuelser, J.-P., et al. 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. S. Fineschi & M. A. Gummin, Vol. 5171, 111
* Young (2004) Young, P. R. 2004, A&A, 417, 785
* Young et al. (2009) Young, P. R., Watanabe, T., Hara, H., & Mariska, J. T. 2009, A&A, 495, 587
|
1303.3296 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 43310,
"num_imgs": 0,
"llama3_tokens_count": 15714
} | [] | **Enhanced asymptotic symmetry algebra of \(AdS_{3}\).**
Cédric Troessaert
_Centro de Estudios Científicos (CECs)_
_Arturo Prat 514, Valdivia, Chile_
_troessaert@cecs.cl_
Abstract. A generalization of the Brown-Henneaux boundary conditions is introduced for pure gravity with negative cosmological constant in 3 dimensions. This leads to new degrees of freedom and to an enhancement of the symmetry algebra. Up to the zero modes, it consists of two copies of the semi-direct product of a Virasoro algebra with a \(U(1)\) current algebra. The associated surface charge algebra now contains three non-zero central charges: the two usual Brown-Henneaux central charges and one new quantity.
## 1 Introduction
Einstein’s gravity in 2+1 dimensions is an interesting toy model to understand some features of higher dimensional gravity as, in three dimensions, this theory doesn’t have local degrees of freedom but still has dynamical global objects [1].
For instance, in the presence of a negative cosmological constant, there exists the famous BTZ black-hole solution [2, 3]. Those black-holes possess the same characteristics as their higher dimensional cousins, such as temperature and entropy. One hopes that understanding the thermodynamical properties of the BTZ black-holes in this simpler setup will help us in the more physically relevant cases.
Another surprise came even earlier with the study by Brown and Henneaux of the symmetry algebra of asymptotically \(AdS_{3}\) space-times [4]. They showed that this algebra is not the expected \(so(2,2)\) symmetry algebra of the background \(AdS_{3}\) but is enhanced to the full conformal algebra in two dimensions. Furthermore, the algebra acquires classical central charges with the famous value of \(c^{\pm}=\frac{3l}{2G}\). Having the full 2D conformal group as a symmetry of the theory allows for the use of the powerful techniques of 2D CFTs. One of the main results is Strominger's computation, which reproduced the Bekenstein-Hawking entropy of the BTZ black-holes using the Cardy formula and the explicit value of the Brown-Henneaux central charges [5].
In the last fifteen years, a lot has been done to further improve our understanding. For instance, using the Chern-Simons description of gravity in 3D, it was shown that gravity in this case is equivalent to a Liouville theory on the boundary [6, 7, 8, 9]. However, Liouville theory does not contain enough degrees of freedom to fully account for the entropy of the black-holes [10, 11]. More recently, a direct computation of the partition function of the theory was done but, in most cases, the results are not sensible [12, 13]. Those are some of the more recent results but, in general, we still don’t have a description of the fundamental degrees of freedom of the theory.
All the results described above are strongly dependent on the Brown-Henneaux boundary conditions and the resulting asymptotic symmetry algebra. Attempts have been made to relax them, but with no impact on the number of global degrees of freedom and no change to the asymptotic symmetry algebra [14]. A few days ago, the authors of [15] proposed a new set of chiral boundary conditions for asymptotically \(AdS_{3}\) space-times. Those new conditions are associated with a different problem, as they only contain part of the solutions to Einstein’s equations satisfying the Brown-Henneaux boundary conditions.
In this paper, we want to present a set of boundary conditions that generalizes the one of Brown-Henneaux. Those boundary conditions describe a theory with more degrees of freedom. Moreover, there is a second enhancement of the asymptotic symmetry algebra. Up to the zero modes, the new algebra is generated by two Virasoro algebras and two \(U(1)\) current algebras. At the level of the charges, the algebra acquires a central extension characterized by three non-zero numbers: the two usual Brown-Henneaux central charges plus one new quantity.
## 2 New asymptotic conditions
The action for gravity in 3 dimensions with a cosmological constant is the Einstein-Hilbert action:
\[S[g]=\frac{1}{16\pi G}\int_{\mathcal{M}}d^{3}x\,\sqrt{-g}\left(R-2\Lambda\right),\quad \Lambda=-\frac{1}{l^{2}},\quad\mathcal{M}=\mathbb{R}^{3}.\] (2.1)
This action is not well defined without additional boundary terms and fall-off conditions for the fields. The usual setup is given by the Brown-Henneaux boundary conditions [4]:
\[g_{AB} = r^{2}\bar{\gamma}_{AB}+O(1),\] (2.2)
\[g_{rA} = O(r^{-3}),\] (2.3)
\[g_{rr} = \frac{l^{2}}{r^{2}}+O(r^{-4}),\] (2.4)
where \(\bar{\gamma}_{AB}\) is a fixed metric on the cylinder at spatial infinity \((r\rightarrow\infty)\) and \(x^{A}=(\tau,\phi)\). Choosing \(\bar{\gamma}_{AB}\) as the flat metric corresponds to asymptotically \(AdS_{3}\) space-times. With the metric \(\bar{\gamma}_{AB}\) fixed, the action can be supplemented with the Gibbons-Hawking boundary term to make it well defined [16, 17]:
\[S[g]=\frac{1}{16\pi G}\int_{\mathcal{M}}d^{3}x\,\sqrt{-g}\left(R-2\Lambda \right)+\frac{1}{16\pi G}\oint_{\partial\mathcal{M}}d^{2}x\,\sqrt{-h}\left(-2K +{}_{0}K\right),\] (2.5)
where \(h_{AB}=g_{AB}\) is the induced metric and \(K=h^{AB}K_{AB}\) is the trace of the extrinsic curvature of the boundary. We only consider the time-like part of \(\partial\mathcal{M}\), which is the cylinder at spatial infinity. The quantity \({}_{0}K=\frac{-2}{l}\) acts as a counter-term to make the boundary stress energy tensor finite [18, 19, 20].
We will argue that a more general possibility is to use the same asymptotic behavior (2.2)-(2.4) but fixing only the conformal structure of the induced metric on the boundary [21, 22]:
\[g_{AB}=r^{2}\gamma_{AB}+O(1),\qquad\gamma_{AB}=e^{2\varphi}\bar{\gamma}_{AB}\] (2.6)
where \(\varphi\) will be a dynamical field. Varying the action (2.5) now leads to
\[\delta S[g]=\frac{1}{16\pi G}\int_{\mathcal{M}}d^{3}x\,\sqrt{-g} \delta g_{\mu\nu}\left(-G^{\mu\nu}-\Lambda g^{\mu\nu}\right)\\ +\frac{1}{16\pi G}\oint_{\partial\mathcal{M}}d^{2}x\,r^{2}\sqrt{- \gamma}\,2\delta\varphi\left(-K+{}_{0}K\right).\] (2.7)
In order to have a well defined action, we need to impose \(K+\frac{2}{l}=o(r^{-2})\). This corresponds to a simple change from Dirichlet to mixed boundary conditions. On space-like boundaries, it is equivalent to a canonical transformation but, as we will see, on time-like boundaries it changes the number of global degrees of freedom drastically.
The asymptotic conditions discussed above can be summarized by:
\[g_{rr} = \frac{l^{2}}{r^{2}}+C_{rr}r^{-4}+o(r^{-4}),\] (2.8)
\[g_{rA} = O(r^{-3}),\] (2.9)
\[g_{AB} = r^{2}\gamma_{AB}+C_{AB}+o(1),\] (2.10)
\[K-{}_{0}K = o(r^{-2}),\] (2.11)
where \(\gamma_{AB}=e^{2\varphi}\bar{\gamma}_{AB}\) and \(\bar{\gamma}\) is a fixed metric on the cylinder. The last condition is a constraint on the functions \(\varphi(x^{A}),C_{rr}(x^{A}),C_{AB}(x^{C})\):
\[\rho\equiv\gamma^{AB}C_{AB}+\frac{1}{l^{2}}C_{rr}=0.\] (2.12)
To describe asymptotically \(AdS_{3}\) space-times, the metric \(\bar{\gamma}\) has to be fixed to the flat metric:
\[\bar{\gamma}_{AB}dx^{A}dx^{B}=-d\tau^{2}+d\phi^{2}=-dx^{+}dx^{-}\] (2.13)
where \(x^{\pm}=\tau\pm\phi\) are light-cone coordinates on the cylinder. The BTZ black-hole [2, 3] satisfies those asymptotic conditions with \(\varphi=0\) and
\[C_{\tau\tau}=8Gl^{2}M,\quad C_{\phi\phi}=0,\quad C_{\tau\phi}=4GlJ\quad\text{ and}\quad C_{rr}=8Gl^{4}M,\] (2.14)
or, in light-cone coordinates:
\[C_{+-}=2Gl^{2}M,\quad C_{\pm\pm}=2Gl^{2}\left(M\pm\frac{J}{l}\right).\] (2.15)
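As a concrete check, the sympy sketch below verifies that the BTZ values (2.14), with \(\varphi=0\) so that \(\gamma_{AB}=\mathrm{diag}(-1,+1)\) in \((\tau,\phi)\) coordinates, satisfy the constraint (2.12) and reproduce the light-cone components (2.15). The script is only an illustration written by us; the symbols are the paper's.

```python
import sympy as sp

G, l, M, J = sp.symbols('G l M J', positive=True)

# BTZ data in (tau, phi) coordinates, eq. (2.14), with varphi = 0:
C_tt, C_pp, C_tp = 8*G*l**2*M, 0, 4*G*l*J
C_rr = 8*G*l**4*M

# Constraint (2.12): gamma^{AB} C_AB + C_rr/l^2 with gamma = diag(-1, +1).
print(sp.simplify(-C_tt + C_pp + C_rr/l**2))          # -> 0

# Light-cone components: x^{+-} = tau +- phi, so
# d tau = (dx^+ + dx^-)/2 and d phi = (dx^+ - dx^-)/2.
dtau = [sp.Rational(1, 2), sp.Rational(1, 2)]
dphi = [sp.Rational(1, 2), -sp.Rational(1, 2)]
C_lc = [[C_tt*dtau[a]*dtau[b]
         + C_tp*(dtau[a]*dphi[b] + dphi[a]*dtau[b])
         + C_pp*dphi[a]*dphi[b] for b in range(2)] for a in range(2)]
print(sp.simplify(C_lc[0][0] - 2*G*l**2*(M + J/l)))   # C_++ -> 0
print(sp.simplify(C_lc[1][1] - 2*G*l**2*(M - J/l)))   # C_-- -> 0
print(sp.simplify(C_lc[0][1] - 2*G*l**2*M))           # C_+- -> 0
```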
As we will see in section 4, any solution of Einstein’s equations satisfying the Brown-Henneaux boundary conditions (2.2)-(2.4) also satisfies the supplementary condition \(\rho=0\). In that sense, those new boundary conditions (2.8)-(2.11) are a generalization of the usual ones.
## 3 Asymptotic symmetries
The infinitesimal diffeomorphisms leaving the conditions (2.8)-(2.11) invariant are generated by vector fields \(\xi^{\mu}\) satisfying
\[\mathcal{L}_{\xi}g_{rr}=\eta_{rr}r^{-4}+o(r^{-4}),\quad\mathcal{L }_{\xi}g_{rA}=O(r^{-3}),\] (3.1)
\[\quad\mathcal{L}_{\xi}g_{AB}=2\omega\,\gamma_{AB}r^{2}+\eta_{AB}+ o(1),\] (3.2)
\[-2\omega\,\gamma^{AB}C_{AB}+\gamma^{AB}\eta_{AB}+\frac{1}{l^{2}} \eta_{rr}=0,\] (3.3)
where \(\mathcal{L}_{\xi}\) is the Lie derivative. For convenience, we have denoted the induced variations on \(C_{AB}\), \(C_{rr}\) and \(\varphi\) by \(\eta_{AB}\), \(\eta_{rr}\) and \(\omega\) respectively. Equation (3.3) is coming from the variation of the supplementary condition (2.12). Equations (3.1)-(3.2) lead to
\[\xi^{r}=-\frac{1}{2}\psi\,r+O(r^{-1}),\qquad\xi^{A}=Y^{A}-\frac{l^{2}}{4r^{2}}\gamma^{AB}\partial_{B}\psi+O(r^{-4}),\] (3.4)
where \(Y^{A}\) is a conformal Killing vector of \(\gamma_{AB}\). The last equation (3.3) implies that \(\psi\) is a harmonic function: a solution of \(\Delta\psi=D_{A}D^{A}\psi=0\) where the derivative \(D_{A}\) is the covariant derivative associated with \(\gamma_{AB}\). Those conditions on \(Y^{A}\) and \(\psi\) depend only on the conformal structure of the cylinder: \(\bar{\gamma}_{AB}\).
We expect the above vectors to form a closed algebra under the Lie bracket. As in [23], the vectors explicitly depend on the dynamical part of the metric \(g_{\mu\nu}\). In that case, the usual Lie bracket has to be modified to take this dependence into account. The relevant bracket is given by:
\[[\xi_{1},\xi_{2}]^{\mu}_{M}=[\xi_{1},\xi_{2}]^{\mu}-\delta^{g}_{\xi_{1}}\xi^{ \mu}_{2}+\delta^{g}_{\xi_{2}}\xi^{\mu}_{1},\] (3.5)
where we denoted by \(\delta^{g}_{\xi_{1}}\xi^{\mu}_{2}\) the change induced in \(\xi^{\mu}_{2}(g)\) due to the variation \(\delta^{g}_{\xi_{1}}g_{\mu\nu}=\mathcal{L}_{\xi_{1}}g_{\mu\nu}\). Under this bracket, the vectors (3.4) satisfy:
\[\begin{array}[]{rclrcl}\left[\xi_{1},\xi_{2}\right]^{r}_{M}&=&-\frac{1}{2} \widehat{\psi}r+O(r^{-1}),&\widehat{\psi}&=&Y_{1}^{A}\partial_{A}\psi_{2}-Y_{2 }^{A}\partial_{A}\psi_{1},\\ \left[\xi_{1},\xi_{2}\right]^{A}_{M}&=&\widehat{Y}^{A}-\frac{l^{2}}{4r^{2}} \gamma^{AB}\partial_{B}\widehat{\psi}+O(r^{-4}),&\widehat{Y}{}^{A}&=&Y^{B}_{1} \partial_{B}Y_{2}^{A}-Y^{B}_{2}\partial_{B}Y_{1}^{A}.\end{array}\] (3.6)
One can easily prove that \(\widehat{\psi}\) is again a solution of \(\Delta\widehat{\psi}=0\).
The set of transformations for which \(Y^{A}=0=\psi\) is an ideal of the full algebra. This is the subalgebra of pure gauge transformations; as we will see in section 5, their associated charges are zero. The asymptotic symmetry algebra is defined as the quotient of the full algebra given by (3.4) with the ideal of the pure gauge transformations [24, 25]. This quotient is parametrized by \((Y^{A},\psi)\) and the induced Lie bracket is
\[\left[(Y_{1},\psi_{1}),(Y_{2},\psi_{2})\right]=(Y^{B}_{1}\partial_{B}Y_{2}^{A} -Y^{B}_{2}\partial_{B}Y_{1}^{A},Y_{1}^{A}\partial_{A}\psi_{2}-Y_{2}^{A} \partial_{A}\psi_{1}).\] (3.7)
This algebra is the semi-direct product of the two dimensional conformal algebra with the harmonic Weyl transformations. It is a subalgebra of the Penrose-Brown-Henneaux algebra introduced in [26] which is the semi-direct product of the two dimensional conformal algebra with all Weyl transformations.
In the case of asymptotically \(AdS_{3}\) space-times \(\bar{\gamma}_{AB}dx^{A}dx^{B}=-dx^{+}dx^{-}\), the conformal Killing equation for \(Y^{A}\) gives as usual \(Y^{+}(x^{+})\) and \(Y^{-}(x^{-})\). The harmonic equation for \(\psi\) takes the form
\[\Delta\psi=-4e^{-2\varphi}\partial_{+}\partial_{-}\psi=0.\] (3.8)
Using Fourier expansion, we easily find the general solution:
\[\psi=\sum_{n}\left(\psi^{+}_{n}e^{inx^{+}}+\psi^{-}_{n}e^{inx^{-}}\right)+V\tau,\] (3.9)
where \(\psi^{\pm}_{n}\) and \(V\) are constants. Denoting \(W^{\pm}(x^{\pm})=\sum_{n}\psi^{\pm}_{n}e^{inx^{\pm}}\), the algebra (3.7) takes the form:
\[\widehat{Y}^{\pm} = Y_{1}^{\pm}\partial_{\pm}Y_{2}^{\pm}-Y_{2}^{\pm}\partial_{\pm}Y_ {1}^{\pm},\] (3.10)
\[\widehat{W}^{\pm} = Y_{1}^{\pm}(\partial_{\pm}W_{2}^{\pm}+\frac{1}{2}V_{2})-Y_{2}^{ \pm}(\partial_{\pm}W_{1}^{\pm}+\frac{1}{2}V_{1}),\] (3.11)
\[\widehat{V} = 0.\] (3.12)
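As a quick consistency check, the general solution (3.9) can be verified symbolically: with \(\tau=(x^{+}+x^{-})/2\), every term is annihilated by \(\partial_{+}\partial_{-}\), which is all the harmonic condition (3.8) requires on the flat cylinder. A minimal sympy sketch, with generic symbols for the mode number and coefficients:

```python
import sympy as sp

xp, xm, n, V, psi_p, psi_m = sp.symbols('x_p x_m n V psi_p psi_m')

tau = (xp + xm) / 2
# One generic Fourier mode of each chirality plus the linear-in-tau piece:
psi = psi_p*sp.exp(sp.I*n*xp) + psi_m*sp.exp(sp.I*n*xm) + V*tau

print(sp.simplify(sp.diff(psi, xp, xm)))   # -> 0, i.e. Delta psi = 0
```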
In terms of the basis vectors \(l^{\pm}_{n}\), \(p^{\pm}_{n}\) and \(q\) defined as
\[Y^{\pm}(x^{\pm})\partial_{\pm}=\sum_{n\in\mathbf{Z}}c^{n}_{\pm}l ^{\pm}_{n}, l^{\pm}_{n}=e^{inx^{\pm}}\partial_{\pm},\] (3.13)
\[W^{\pm}=\sum_{n}\psi^{\pm}_{n}p^{\pm}_{n}, p^{\pm}_{n}=e^{inx^{\pm}},\] (3.14)
\[\psi=W^{+}+W^{-}+Vq, q=\tau,\] (3.15)
the algebra reads
\[\boxed{\begin{array}[]{rclcrcl}i\left[l^{\pm}_{m},l^{\pm}_{n}\right]&=&(m-n)l^ {\pm}_{m+n},&&i[l^{+}_{m},l^{-}_{n}]&=&0,\\ i\left[l^{\pm}_{m},p^{\pm}_{n}\right]&=&-np^{\pm}_{m+n},&&i[l^{\pm}_{m},p^{\mp }_{n}]&=&0,\\ i\left[p^{\pm}_{m},p^{\pm}_{n}\right]&=&0,&&i[p^{+}_{m},p^{-}_{n}]&=&0,\\ i\left[l^{\pm}_{m},q\right]&=&\frac{i}{2}p^{\pm}_{m},&&i[p^{\pm}_{m},q]&=&0. \end{array}}\] (3.16)
The two generators \(p^{+}_{0}\) and \(p^{-}_{0}\) are identical. Each chiral copy \((l_{m},p_{m})\) is the semi-direct product of a Virasoro algebra with a current algebra. One copy of this semi-direct product already appeared in the study of asymptotically warped \(AdS_{3}\)[27, 28, 29] and in the study of the new chiral boundary conditions for \(AdS_{3}\)[15].
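The commutation relations (3.16) follow directly from the bracket (3.7) restricted to one chiral copy, with \(\partial_{+}\tau=1/2\). A short sympy sketch of that check is given below; the helper names are ours, and \(q\) is represented by its \(x^{+}\)-dependence only, which is all that enters the bracket.

```python
import sympy as sp

x, m, n = sp.symbols('x m n')

def bracket(Y1, W1, Y2, W2):
    """One chiral copy of the algebra (3.7): parameters are pairs (Y, W)."""
    Yh = Y1*sp.diff(Y2, x) - Y2*sp.diff(Y1, x)
    Wh = Y1*sp.diff(W2, x) - Y2*sp.diff(W1, x)
    return sp.simplify(Yh), sp.simplify(Wh)

zero = sp.Integer(0)
l_ = lambda k: (sp.exp(sp.I*k*x), zero)   # l_k = e^{ikx} d/dx
p_ = lambda k: (zero, sp.exp(sp.I*k*x))   # p_k = e^{ikx}
q_ = (zero, x/2)                          # q = tau: only its x^+ dependence
                                          # enters, and d(tau)/dx^+ = 1/2

# i[l_m, l_n] = (m - n) l_{m+n}
Yh, Wh = bracket(*l_(m), *l_(n))
print(sp.simplify(sp.I*Yh - (m - n)*sp.exp(sp.I*(m + n)*x)))   # -> 0

# i[l_m, p_n] = -n p_{m+n}
Yh, Wh = bracket(*l_(m), *p_(n))
print(sp.simplify(sp.I*Wh + n*sp.exp(sp.I*(m + n)*x)))         # -> 0

# i[l_m, q] = (i/2) p_m
Yh, Wh = bracket(*l_(m), *q_)
print(sp.simplify(Wh - sp.exp(sp.I*m*x)/2))                    # -> 0
```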
## 4 Asymptotic solutions to the EOM
We will solve Einstein’s equations asymptotically for metrics of the form (2.8)-(2.10) with the last constraint (2.12) only imposed at the end (see [30] for a similar analysis). To do that, it is useful to introduce explicitly the first order of \(g_{rA}\):
\[g_{rr} = \frac{l^{2}}{r^{2}}+C_{rr}r^{-4}+o(r^{-4}),\] (4.1)
\[g_{rA} = C_{rA}r^{-3}+o(r^{-3}),\] (4.2)
\[g_{AB} = r^{2}\gamma_{AB}+C_{AB}+o(1).\] (4.3)
For those metrics, the Ricci tensor takes the following form
\[R_{rr} = -\frac{2}{l^{2}}\left(\frac{l^{2}}{r^{2}}+C_{rr}r^{-4}\right)+o(r ^{-4}),\] (4.4)
\[R_{rA} = \left(-\gamma^{CB}D_{B}C_{CA}+\gamma^{CB}D_{A}C_{BC}+\frac{1}{2l^ {2}}\partial_{A}C_{rr}\right)r^{-3}\] (4.5)
\[\quad-\frac{2}{l^{2}}C_{rA}r^{-3}+o(r^{-3}),\]
\[R_{AB} = -\frac{2}{l^{2}}\left(r^{2}\gamma_{AB}+C_{AB}\right)\] (4.6)
\[\quad+{}^{\gamma}R_{AB}+\frac{1}{l^{2}}\gamma_{AB}\left(\gamma^{ CD}C_{CD}+\frac{1}{l^{2}}C_{rr}\right)+o(1),\]
where \({}^{\gamma}R_{AB}\) is the Ricci tensor associated to the metric \(\gamma_{AB}\). The EOM \(G_{\mu\nu}-\frac{1}{l^{2}}g_{\mu\nu}=0\) reduce asymptotically to two simple conditions:
\[{}^{\gamma}R=-\frac{2}{l^{2}}\left(\frac{1}{l^{2}}C_{rr}+\gamma^{ BC}C_{BC}\right),\] (4.7)
\[D_{B}(\gamma^{BC}C_{CA}-\frac{1}{2}\delta^{B}_{A}\gamma^{CD}C_{ CD})=\frac{1}{2}\partial_{A}\left(\frac{1}{l^{2}}C_{rr}+\gamma^{BC}C_{BC} \right).\] (4.8)
They are easily rewritten in terms of \(\varphi\) and \(\bar{\gamma}_{AB}\):
\[\bar{\Delta}\varphi = \frac{1}{2}\bar{R}+\frac{2e^{2\varphi}}{l^{2}}\rho,\] (4.9)
\[\bar{D}_{B}\Xi^{B}_{A} = \frac{e^{2\varphi}}{2}\partial_{A}\rho,\qquad\Xi^{B}_{A}\equiv \bar{\gamma}^{BC}C_{CA}-\frac{1}{2}\delta^{B}_{A}\bar{\gamma}^{CD}C_{CD}\] (4.10)
where the barred quantities refer to the metric \(\bar{\gamma}_{AB}\). The quantity \(\Xi^{B}_{A}\) is a symmetric trace-less tensor. For our asymptotic conditions, we have to add the constraint \(\rho=0\). This gives us equations of motion for \(\varphi\) and \(\Xi^{A}_{B}\):
\[\bar{\Delta}\varphi=\frac{1}{2}\bar{R},\qquad\bar{D}_{B}\Xi^{B}_{A}=0.\] (4.11)
Using a pure gauge transformation, one can always send a metric satisfying (4.1)-(4.3) to the Fefferman-Graham gauge-fixed form where \(g_{rr}=\frac{l^{2}}{r^{2}}\) and \(g_{rA}=0\)[31, 32]. In that case, one can show that Einstein’s equations impose
\[g_{AB}=r^{2}\gamma_{AB}+\widetilde{C}_{AB}+r^{-2}S_{AB},\] (4.12)
where \(\widetilde{C}_{AB}\) and \(S_{AB}\) are given in terms of \(\varphi\) and \(\Xi^{A}_{B}\) [33, 34, 35, 36, 23]. In that sense, \(\varphi\) and \(\Xi^{B}_{A}\) satisfying (4.11) contain all the gauge invariant degrees of freedom of the theory.
In the usual Brown-Henneaux boundary conditions, one doesn’t impose \(\rho=0\) but instead \(\gamma_{AB}dx^{A}dx^{B}=-d\tau^{2}+d\phi^{2}\) which implies \(\varphi=0\) and \(\bar{R}=0\). Inserting this in the full EOM (4.9)-(4.10), we obtain:
\[\rho=0,\qquad\bar{D}_{B}\Xi^{B}_{A}=0.\] (4.13)
**Theorem 4.1**.: _Any solution of Einstein’s equations with negative cosmological constant satisfying the usual Brown-Henneaux boundary conditions with \(\bar{\gamma}_{AB}dx^{A}dx^{B}=-d\tau^{2}+d\phi^{2}\) will also satisfy the generalized boundary conditions (2.8)-(2.11)._
In that sense, we can say that our new boundary conditions are a generalization of the usual ones: we are not losing any solutions. Using light-cone coordinates, we can put the EOM (4.11) in the simple form
\[\partial_{+}\partial_{-}\varphi=0,\quad\partial_{+}\Xi^{+}_{-}=0,\quad\partial _{-}\Xi^{-}_{+}=0.\] (4.14)
Those quantities for the BTZ black-hole are given by:
\[\varphi=0,\quad\Xi^{-}_{+}=2Gl^{2}\left(M+\frac{J}{l}\right),\quad\Xi^{+}_{-}= 2Gl^{2}\left(M-\frac{J}{l}\right).\] (4.15)
We would like to emphasize the difference between the two approaches. In the Brown-Henneaux boundary conditions, one imposes \(\varphi=0\). The EOM then imply \(\rho=0\) and \(D_{B}\Xi^{B}_{A}=0\). In the new boundary conditions, one imposes \(\rho=0\), which leads to the EOM (4.11), which are equations of motion for both \(\varphi\) and \(\Xi^{B}_{A}\). We have new degrees of freedom in \(\varphi\).
## 5 Surface charges
For the surface charges, we follow [37], up to a global change of sign. The technique allows us to compute the variation of the surface charges \(\delta\hskip-5.0pt/\hskip-0.5pt\mathcal{Q}_{\xi}[h,g]\) associated to a vector field \(\xi\) under a variation of the metric \(h_{\mu\nu}=\delta g_{\mu\nu}\). When this variation is integrable on field space [38], we can define the charges as:
\[\mathcal{Q}_{\xi}[g]=\int_{\gamma_{s}}\delta\hskip-5.0pt/\hskip-0.5pt\mathcal{ Q}_{\xi}[\delta^{s}g,g(\gamma_{s})]\] (5.1)
where the integration is done along a path \(\gamma_{s}\) in field space joining the background metric \(\bar{g}\) to the metric \(g\) that we are considering. For the variation of the charge, we use the expression
(5.2)
where the indices are raised and lowered with the full metric \(g_{\mu\nu}\) and \(\nabla_{\mu}\) is the covariant derivative associated to it. We also define
\[(d^{n-2}x)_{\mu\nu}=\frac{1}{2!(n-2)!}\epsilon_{\mu\nu\alpha_{1}\dots\alpha_{n-2}}dx^{\alpha_{1}}\wedge\dots\wedge dx^{\alpha_{n-2}},\quad\epsilon_{01\dots n-1}=1,\]
with \(n=3\) and the surface of integration \(\partial\Sigma\) is taken to be a circle on the cylinder at spatial infinity \(C_{\infty}\).
A straightforward computation leads to
\[\delta\hskip-5.0pt/\hskip-0.5pt\mathcal{Q}_{\xi}[h,g]=\frac{1}{16 \pi Gl}\delta\int_{C_{\infty}}\epsilon_{AB}dx^{B}\,\sqrt{-\bar{\gamma}}\Big{[} l^{2}\bar{\gamma}^{AC}(\partial_{C}\psi\varphi-\psi\partial_{C}\varphi)+2Y^{A} \bar{\gamma}^{CD}C_{CD}-2Y^{C}\bar{\gamma}^{AD}C_{CD}\Big{]}\\ +\frac{1}{16\pi Gl}\int_{C_{\infty}}\epsilon_{AB}dx^{B}\,\sqrt{- \bar{\gamma}}Y^{A}\left(\frac{1}{l^{2}}e^{2\varphi}\delta C_{rr}-2\delta \varphi\bar{\gamma}^{CD}C_{CD}\right)\] (5.3)
where we used \(\epsilon_{AB}=-\epsilon_{rAB}\). The second line contains a non-integrable term that is easily removed using the variation of the constraint (2.12). The final result, after integrating in field space, is given by:
\[\mathcal{Q}_{\xi}[g]=\frac{1}{16\pi Gl}\int_{C_{\infty}}\epsilon_ {AD}dx^{D}\,\sqrt{-\bar{\gamma}}\Big{[}l^{2}(\bar{D}^{A}\psi\varphi-\psi\bar{D }^{A}\varphi)-2Y^{B}\Xi^{A}_{B}\Big{]},\] (5.4)
where we raised and lowered indices with \(\bar{\gamma}_{AB}\) and its inverse. We see that the two relevant dynamical quantities for the charges are, as expected, \(\varphi\) and \(\Xi^{A}_{B}\). We normalized our charges using the BTZ black-hole with \(M=0=J\) as a background (or, using (4.15), \(\varphi=0\) and \(\Xi^{A}_{B}=0\)). The charges depend only on the leading orders of \(\xi\): the pure gauge transformations \(\xi^{r}=O(r^{-1})\) and \(\xi^{A}=O(r^{-4})\) give zero.
In the asymptotically \(AdS_{3}\) case, with \(\bar{\gamma}_{AB}dx^{A}dx^{B}=-d\tau^{2}+d\phi^{2}\) and \(C_{\infty}\) being the circle at \(\tau\) constant, we obtain:
(5.5)
The last two terms are the usual contribution coming from the two Virasoro algebras. Using the time translation and angular rotation symmetry vectors \(Y=\frac{1}{l}\partial_{\tau}\) and \(Y=\partial_{\phi}\), we can evaluate the mass and the angular momentum of the BTZ black-hole:
\[\mathcal{Q}_{\frac{1}{l}\partial_{\tau}}[g_{BTZ}]=M,\quad\text{and}\quad \mathcal{Q}_{\partial_{\phi}}[g_{BTZ}]=J.\] (5.6)
In light-cone coordinates and using the parametrization of the asymptotic symmetry group in term of \((Y^{\pm},W^{\pm},V)\) introduced in section 3, the charges (5.5) can be rewritten as
\[\mathcal{Q}_{\xi}[g]\approx\frac{1}{8\pi Gl}\int_{0}^{2\pi}d\phi \,\Big{[}l^{2}\left(W^{+}\partial_{+}\varphi^{+}+W^{-}\partial_{-}\varphi^{-} \right)+Y^{+}\Xi_{++}+Y^{-}\Xi_{--}\Big{]}\\ +\frac{\alpha}{16\pi Gl}\int_{0}^{2\pi}d\phi\,(W^{+}+W^{-})-\frac {V}{16\pi Gl}\int_{0}^{2\pi}d\phi\,(\varphi^{+}+\varphi^{-}).\] (5.7)
To obtain this result, we used some integrations by parts and solved the EOM (4.14) with \(\varphi=\varphi^{+}(x^{+})+\varphi^{-}(x^{-})+\alpha\tau\), \(\alpha\) being a constant.
## 6 Centrally extended algebra
As in [37, 38], we expect the charges built in the previous section to form a representation of the asymptotic symmetry algebra, or more precisely, that
\[\Big{[}\mathcal{Q}_{\xi_{1}}[g],\mathcal{Q}_{\xi_{2}}[g]\Big{]} \equiv\delta\hskip-5.0pt/\hskip-0.5pt\mathcal{Q}_{\xi_{1}}[\mathcal{L}_{\xi_{2 }}g,g]\approx\mathcal{Q}_{[\xi_{1},\xi_{2}]_{M}}[g]+K_{\xi_{1},\xi_{2}},\] (6.1)
where \(K_{\xi_{1},\xi_{2}}\) is a possible central extension.
For vectors satisfying (3.4), \(\mathcal{L}_{\xi}g_{AB}\) leads to the following variations:
\[\delta_{\xi}\varphi = Y^{A}\partial_{A}\varphi+\frac{1}{2}\bar{D}_{A}Y^{A}-\frac{1}{2}\psi,\] (6.2)
\[\delta_{\xi}\Xi_{AB} = \mathcal{L}_{Y}\Xi_{AB}-\frac{l^{2}}{2}\bar{D}_{A}\partial_{B}\psi\] (6.3)
\[+\frac{l^{2}}{2}\left(\partial_{A}\varphi\partial_{B}\psi+ \partial_{B}\varphi\partial_{A}\psi-\bar{\gamma}_{AB}\bar{D}^{C}\varphi \partial_{C}\psi\right).\]
Using those in (5.4) and integration by parts, we get
\[\delta\hskip-5.0pt/\hskip-0.5pt\mathcal{Q}_{\xi_{1}}[\mathcal{L}_ {\xi_{2}}g,g]=\frac{1}{16\pi Gl}\int_{C_{\infty}}\epsilon_{AD}dx^{D}\,\sqrt{- \bar{\gamma}}\Big{[}l^{2}(\bar{D}^{A}\widehat{\psi}\varphi-\widehat{\psi}\bar{ D}^{A}\varphi)-2\widehat{Y}^{B}\Xi^{A}_{B}\\ +\frac{l^{2}}{2}\left(\psi_{1}\bar{D}^{A}\psi_{2}-\psi_{2}\bar{D} ^{A}\psi_{1}\right)+l^{2}\left(Y^{B}_{1}\bar{D}^{A}\partial_{B}\psi_{2}-Y^{B}_ {2}\bar{D}^{A}\partial_{B}\psi_{1}\right)\\ -l^{2}\psi_{1}Y^{A}_{2}\bar{D}_{B}\bar{D}^{B}\varphi+\frac{l^{2}} {2}Y^{A}_{2}\psi_{1}\bar{R}-2Y^{B}_{1}Y^{A}_{2}\bar{D}_{E}\Xi^{E}_{B}\Big{]}.\] (6.4)
On shell, this reproduces (6.1) with
\[K_{\xi_{1},\xi_{2}} = \frac{l}{16\pi G}\int_{C_{\infty}}\epsilon_{AD}dx^{D}\,\sqrt{- \bar{\gamma}}\Big{[}\frac{1}{2}\psi_{1}\bar{D}^{A}\psi_{2}+Y^{B}_{1}\bar{D}^{A }\partial_{B}\psi_{2}-(1\leftrightarrow 2)\Big{]},\] (6.5)
which satisfies the cyclic identity:
\[K_{[\xi_{1},\xi_{2}]_{M},\xi_{3}}+K_{[\xi_{2},\xi_{3}]_{M},\xi_{1}}+K_{[\xi_{3 },\xi_{1}]_{M},\xi_{2}}=0.\] (6.6)
As expected, the algebra closes and we obtain a non-zero central extension. However, as one can see clearly in the expression (6.5), there are no central terms in the conformal subalgebra parametrized by \(Y^{A}\).
In the asymptotically \(AdS_{3}\) case, using equation (5.7) for the charges and some integration by parts, we obtain
\[\begin{gathered}\Big{[}\mathcal{Q}_{\xi_{1}}[g],\mathcal{Q}_{\xi_ {2}}[g]\Big{]}\approx\mathcal{Q}_{[\xi_{1},\xi_{2}]_{M}}[g]+K^{+}_{\xi_{1},\xi _{2}}+K^{-}_{\xi_{1},\xi_{2}}+K^{0}_{\xi_{1},\xi_{2}},\\ K^{+}_{\xi_{1},\xi_{2}}=\frac{l}{16\pi G}\int_{0}^{2\pi}d\phi\,( W_{1}^{+}\partial_{+}^{2}Y_{2}^{+}-W_{2}^{+}\partial_{+}^{2}Y_{1}^{+}-W_{1}^{+ }\partial_{+}W^{+}_{2}),\\ K^{-}_{\xi_{1},\xi_{2}}=\frac{l}{16\pi G}\int_{0}^{2\pi}d\phi\,( W_{1}^{-}\partial_{-}^{2}Y_{2}^{-}-W_{2}^{-}\partial_{-}^{2}Y_{1}^{-}-W_{1}^{- }\partial_{-}W^{-}_{2}),\\ K^{0}_{\xi_{1},\xi_{2}}=\frac{lV_{1}}{32\pi G}\int_{0}^{2\pi}d \phi\,(W^{+}_{2}+W^{-}_{2})-\frac{lV_{2}}{32\pi G}\int_{0}^{2\pi}d\phi\,(W^{+} _{1}+W^{-}_{1}),\end{gathered}\] (6.7)
where \(K^{\pm,0}_{\xi_{1},\xi_{2}}\) are the central extensions. Expanding this result in terms of the charges \((L^{\pm}_{m},P^{\pm}_{m},Q)\) associated to the basis \((l^{\pm}_{m},p^{\pm}_{m},q)\) introduced in section 3, we obtain explicitly
\[\boxed{\begin{array}[]{rclcrcl}i\left[L^{\pm}_{m},L^{\pm}_{n}\right]&=&(m-n)L^ {\pm}_{m+n},&&i[L^{+}_{m},L^{-}_{n}]&=&0,\\ i\left[L^{\pm}_{m},P^{\pm}_{n}\right]&=&-nP^{\pm}_{m+n}+\frac{l}{8G}im^{2} \delta_{m+n,0},&&i[L^{\pm}_{m},P^{\mp}_{n}]&=&0,\\ i\left[P^{\pm}_{m},P^{\pm}_{n}\right]&=&-\frac{l}{8G}m\delta_{m+n,0},&&i[P^{+} _{m},P^{-}_{n}]&=&0,\\ i\left[L^{\pm}_{m},Q\right]&=&\frac{i}{2}P^{\pm}_{m},&&i\left[P^{\pm}_{m},Q \right]&=&-i\frac{l}{16G}\delta_{m,0}.\end{array}}\] (6.8)
As we saw earlier, adding dynamics to the conformal factor of the boundary metric sends the Virasoro central charges to zero. This effect is similar to the one described in [39] where Liouville theory is coupled to gravity in two dimensions.
However, as we will see in the next section, there is more than one 2D conformal algebra hidden in this algebra and it is possible to recover the usual Brown-Henneaux central extension.
## 7 Brown-Henneaux central charges recovered
At first sight, the final algebra (6.8) is not very promising. The central charges in the Virasoro algebras are a key point of the various results obtained in asymptotically \(AdS_{3}\) space-times and we lost them. However, as the boundary conditions studied are a generalization of the usual ones, the central charges must be hidden somewhere.
The answer comes from studying the exact Killing vectors of the background \(AdS_{3}\). The original Virasoro algebras studied by Brown-Henneaux are built on the Killing vectors of \(AdS_{3}\), in the sense that \(l^{\pm}_{-1},l^{\pm}_{0}\) and \(l^{\pm}_{1}\) are the generators of the \(so(2,2)\) algebra leaving \(AdS_{3}\) invariant. As we will show in the following, the Virasoro generators present in the basis used to write our algebra (3.16) do not satisfy this property. Nevertheless, it is possible to recover it by a change of basis of the algebra. This will also reproduce the usual Brown-Henneaux result for the central extension of the 2D conformal subalgebra.
The \(AdS_{3}\) metric is given by:
\[ds^{2}=-(\frac{r^{2}}{l^{2}}+1)dt^{2}+\frac{1}{\frac{r^{2}}{l^{2}}+1}dr^{2}+r^ {2}d\phi^{2}.\] (7.1)
In our asymptotic expansion, it corresponds to:
\[\varphi=0,\quad\Xi_{++}=-\frac{l^{2}}{4},\quad\Xi_{--}=-\frac{l^{2}}{4}.\] (7.2)
The Killing vectors of \(AdS_{3}\) are asymptotic symmetries that preserve those three quantities. Using the variations (6.2) and (6.3), we obtain the following equations:
\[\delta_{\xi}\varphi = \partial_{+}Y^{+}+\partial_{-}Y^{-}-\psi=0,\] (7.3)
\[\delta_{\xi}\Xi_{++} = -\frac{l^{2}}{2}\left(\partial_{+}Y^{+}+\partial_{+}^{2}\psi \right)=0,\] (7.4)
\[\delta_{\xi}\Xi_{--} = -\frac{l^{2}}{2}\left(\partial_{-}Y^{-}+\partial_{-}^{2}\psi \right)=0.\] (7.5)
It is obvious that the \(l^{\pm}_{\pm 1}\) defined in section 3 are not solutions to those equations: they are not Killing vectors of \(AdS_{3}\). The general solution is given by the set of vectors \((Y^{A},\psi=\partial_{A}Y^{A})\) with \(Y^{A}\) satisfying \(\partial_{\pm}Y^{\pm}=-\partial_{\pm}^{3}Y^{\pm}\). In terms of our generators \((l^{\pm}_{m},p^{\pm}_{m})\) the Killing vectors of \(AdS_{3}\) are given by:
\[l^{\pm}_{0},\quad l^{\pm}_{1}+ip^{\pm}_{1},\quad l^{\pm}_{-1}-ip^{\pm}_{-1}.\] (7.6)
We can build two full Virasoro algebras on those vectors as follows:
\[\widetilde{l}^{\pm}_{m}\equiv l^{\pm}_{m}+imp^{\pm}_{m}.\] (7.7)
The generators \((\widetilde{l}^{\pm}_{m},p^{\pm}_{m},q)\) form a new basis of our asymptotic symmetry algebra for which the commutators (3.16) take the same form:
\[\begin{array}[]{rclcrcl}i[\widetilde{l}^{\pm}_{m},\widetilde{l}^{\pm}_{n}]&=&( m-n)\widetilde{l}^{\pm}_{m+n},&&i[\widetilde{l}^{+}_{m},\widetilde{l}^{-}_{n}] &=&0,\\ i[\widetilde{l}^{\pm}_{m},p^{\pm}_{n}]&=&-np^{\pm}_{m+n},&&i[\widetilde{l}^{ \pm}_{m},p^{\mp}_{n}]&=&0,\\ i[p^{\pm}_{m},p^{\pm}_{n}]&=&0,&&i[p^{+}_{m},p^{-}_{n}]&=&0,\\ i[\widetilde{l}^{\pm}_{m},q]&=&\frac{i}{2}p^{\pm}_{m},&&i[p^{\pm}_{m},q]&=&0. \end{array}\] (7.8)
On the level of the associated charges \((\widetilde{L}^{\pm}_{m}=L^{\pm}_{m}+imP^{\pm}_{m},P^{\pm}_{m},Q)\), we recover the usual result for the Virasoro central charges:
\[\boxed{\begin{array}[]{rclcrcl}i[\widetilde{L}^{\pm}_{m},\widetilde{L}^{\pm}_{ n}]&=&(m-n)\widetilde{L}^{\pm}_{m+n}+\frac{c^{\pm}}{12}m^{3}\delta_{m+n,0},&&i [\widetilde{L}^{+}_{m},\widetilde{L}^{-}_{n}]&=&0,\\ i[\widetilde{L}^{\pm}_{m},P^{\pm}_{n}]&=&-nP^{\pm}_{m+n},&&i[\widetilde{L}^{ \pm}_{m},P^{\mp}_{n}]&=&0,\\ i[P^{\pm}_{m},P^{\pm}_{n}]&=&k\,m\delta_{m+n,0},&&i[P^{+}_{m},P^{-}_{n}]&=&0, \\ i[\widetilde{L}^{\pm}_{m},Q]&=&\frac{i}{2}P^{\pm}_{m},&&i\left[P^{\pm}_{m},Q \right]&=&i\frac{k}{2}\delta_{m,0}.\end{array}}\] (7.9)
This central extension is a particular case of the general central extension studied in appendix A. Here, only 3 of the 6 possible central charges are non-zero: the Brown-Henneaux central charges \(c^{\pm}=\frac{3l}{2G}\) and one new quantity \(k=-\frac{l}{8G}\). The factor of \(m^{3}\) is coming from our normalization for \(\widetilde{L}^{\pm}_{0}\): using \(AdS_{3}\) as a background would lead to the standard \(m^{3}-m\).
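As a quick consistency check - added here, using only the brackets (6.8) and the definition \(\widetilde{L}^{\pm}_{m}=L^{\pm}_{m}+imP^{\pm}_{m}\) - one can expand
\[\begin{array}[]{rcl}i[\widetilde{L}^{\pm}_{m},\widetilde{L}^{\pm}_{n}]&=&i[L^{\pm}_{m},L^{\pm}_{n}]+in\,i[L^{\pm}_{m},P^{\pm}_{n}]+im\,i[P^{\pm}_{m},L^{\pm}_{n}]-mn\,i[P^{\pm}_{m},P^{\pm}_{n}]\\&=&(m-n)\widetilde{L}^{\pm}_{m+n}+\frac{l}{8G}\,mn^{2}\,\delta_{m+n,0}=(m-n)\widetilde{L}^{\pm}_{m+n}+\frac{l}{8G}\,m^{3}\,\delta_{m+n,0},\\i[\widetilde{L}^{\pm}_{m},P^{\pm}_{n}]&=&i[L^{\pm}_{m},P^{\pm}_{n}]+im\,i[P^{\pm}_{m},P^{\pm}_{n}]=-nP^{\pm}_{m+n}+\frac{l}{8G}\left(im^{2}-im^{2}\right)\delta_{m+n,0}=-nP^{\pm}_{m+n},\end{array}\]
which reproduces \(c^{\pm}/12=l/(8G)\), i.e. \(c^{\pm}=\frac{3l}{2G}\), and shows that the \(m^{2}\) central term of (6.8) is removed by the change of basis.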
Similar algebras appear in the study of higher spin gravity in three dimensions [40, 41]. As in their case and in the result of [15], the central charges \(c^{\pm}\) and \(k\) have opposite signs which, in general, leads to non-unitary representations [29].
## 8 Conclusions
The boundary conditions studied in this paper are a generalization of the usual Brown-Henneaux boundary conditions. Those extended boundary conditions lead to a second enhancement of the asymptotic symmetry algebra. Up to the zero modes, it is generated by two Virasoro algebras and two \(U(1)\) current algebras. At the level of the charges, the algebra is centrally extended and we recover the Brown-Henneaux central charges \(c^{\pm}=\frac{3l}{2G}\) plus one new number \(k=-\frac{l}{8G}\). In general, the negative value for \(k\) leads to non-unitary representations.
Those boundary conditions give us two interesting features. First, there are more degrees of freedom, which might account for what we are missing in our understanding of gravity in three dimensions. The second improvement is a bigger symmetry algebra, which gives us more tools to control and understand the theory.
For the future, it would be interesting to see how the results obtained in the study of asymptotically \(AdS_{3}\) space-times change. As most of those results rely heavily on the result of Brown-Henneaux, a change in boundary conditions can have a strong impact.
The new chiral boundary conditions of [15] describe a different problem from the one presented here. One way of seeing it is that the time translation symmetry is the zero mode of a \(U(1)\) current algebra in their case, whereas it is part of the conformal algebra in this generalized Brown-Henneaux case. However, there are still striking differences in the number of degrees of freedom and in the size of the algebra. It might be possible to generalize the chiral boundary conditions to allow for more degrees of freedom and maybe enhance the associated asymptotic symmetry algebra.
## Acknowledgements
I would like to thank G. Barnich, S. Detournay, H. González, A. Perez, P. Ritter, D. Tempo, R. Troncoso and J. Zanelli for useful discussions. The Centro de Estudios Científicos (CECs) is funded by the Chilean Government through the Centers of Excellence Base Financing Program of Conicyt.
## Appendix A Central Extension
Let’s consider an algebra \(\mathcal{G}\) generated by \(T_{a}\):
\[\left[T_{a},T_{b}\right]=f^{c}_{ab}T_{c}.\] (A.1)
A central extension of \(\mathcal{G}\) by an abelian algebra of dimension 1 is given by a set of complex numbers \(K_{a,b}=-K_{b,a}\) such that the following extended algebra closes:
\[\left[T_{a},T_{b}\right] = f^{c}_{ab}T_{c}+K_{a,b}I,\] (A.2)
\[\left[T_{a},I\right] = 0,\] (A.3)
where \(I\) is the new abelian generator [42, 43]. As it is customary, we will not write it in the rest of the computation. Two such central extensions are equivalent if they can be related by a redefinition of the generators of \(\mathcal{G}\) : \(T_{c}\to T_{c}+\alpha_{c}I\).
To compute the most general central extension of the algebra (3.16) up to equivalence, we will start with the general form:
\[\begin{array}[]{rclcrcl}i[L^{\pm}_{m},L^{\pm}_{n}]&=&(m-n)L^{\pm}_{m+n}+K^{\pm }_{m,n},&&i[L^{+}_{m},L^{-}_{n}]&=&K^{+-}_{m,n},\\ i[L^{\pm}_{m},P^{\pm}_{n}]&=&-nP^{\pm}_{m+n}+V^{\pm}_{m,n},&&i[L^{\pm}_{m},P^{ \mp}_{n}]&=&V^{\pm\mp}_{m,n},\\ i[P^{\pm}_{m},P^{\pm}_{n}]&=&W^{\pm}_{m,n},&&i[P^{+}_{m},P^{-}_{n}]&=&W^{+-}_{ m,n},\\ i[L^{\pm}_{m},Q]&=&\frac{i}{2}P^{\pm}_{m}+X^{\pm}_{m},&&i\left[P^{\pm}_{m},Q \right]&=&Y^{\pm}_{m},\end{array}\] (A.4)
with \(K^{\pm}_{m,n}=-K^{\pm}_{n,m}\) and \(W^{\pm}_{m,n}=-W^{\pm}_{n,m}\). Using redefinitions of the generators:
\[L^{\pm}_{n} \rightarrow L^{\pm}_{n}-\frac{1}{n}K^{\pm}_{0,n}\qquad\text{for}\quad n\neq 0,\] (A.5)
\[L^{\pm}_{0} \rightarrow L^{\pm}_{0}+\frac{1}{2}K^{\pm}_{1,-1},\] (A.6)
\[P^{\pm}_{n} \rightarrow P^{\pm}_{n}-\frac{1}{n}V^{\pm}_{0,n}\qquad\text{for}\quad n\neq 0,\] (A.7)
\[P_{0} \rightarrow P_{0}-i(X^{+}_{0}+X^{-}_{0}),\] (A.8)
we can put the following quantities to zero: \(K^{\pm}_{0,m}\), \(K^{\pm}_{1,-1}\), \(V^{\pm}_{0,n}\) for \(n\neq 0\) and \(X^{+}_{0}+X^{-}_{0}\). Because the generator \(Q\) never appears on the right hand side, a redefinition of \(Q\) will not produce any central term: we have used all our freedom.
The extended algebra (A.4) is an algebra if and only if it satisfies the Jacobi identity. Let’s check this step by step:
* The various Jacobi identities that we can write with \(L^{\pm}_{m}\) give the following equations: \[0 = (m-n)K^{\pm}_{m+n,p}+(n-p)K^{\pm}_{n+p,m}+(p-m)K^{\pm}_{p+m,n},\] (A.9) \[0 = (m-n)K^{+-}_{m+n,p}.\] (A.10) Using the fact that \(K^{\pm}_{0,m}=K^{\pm}_{1,-1}=0\), one can easily prove that: \[K^{\pm}_{m,n}=\frac{c^{\pm}}{12}(m^{3}-m)\delta_{m+n,0},\qquad K^{+-}_{m,n}=0,\] (A.11) which is the usual result. The two numbers \(c^{\pm}\) are arbitrary and are the two central charges of the conformal group in 2D. Using another choice for the redefinition of \(L^{\pm}_{0}\), \(L^{\pm}_{0}\to L^{\pm}_{0}-\frac{c^{\pm}}{24}\), we can put \(K^{\pm}_{m,n}=\frac{c^{\pm}}{12}m^{3}\delta_{m+n,0}\).
* The Jacobi identities involving two Virasoro generators \(L^{\pm}_{m}\) and one current generator \((P^{\pm}_{m},Q)\) give: \[0 = (m-n)V^{\pm}_{m+n,p}+p(V^{\pm}_{m,n+p}-V^{\pm}_{n,m+p}),\] (A.12) \[0 = (m-n)V^{\pm\mp}_{m+n,p},\] (A.13) \[0 = (m-n)X^{\pm}_{m+n}-\frac{i}{2}V^{\pm}_{m,n}+\frac{i}{2}V^{\pm}_{n ,m}.\] (A.14) This time, the solution is parametrized by 3 numbers \(d^{\pm}\) and \(d_{0}\): \[V^{\pm}_{m,n}=\left(d^{\pm}m^{2}\mp id_{0}\,m(m+2)\right)\delta_{m+n,0},\quad X ^{\pm}_{m}=\pm d_{0}\,\delta_{m,0},\quad V^{\pm\mp}_{m,n}=0.\] (A.15)
* The Jacobi identities involving only one Virasoro generator and two current generators give: \[0 = -nW^{\pm}_{m+n,p}+pW^{\pm}_{m+p,n},\] (A.16) \[0 = -nW^{\pm\mp}_{m+n,p},\] (A.17) \[0 = -nY^{\pm}_{m+n}-\frac{i}{2}W^{\pm}_{m,n}.\] (A.18) Because \(P^{+}_{0}\) and \(P^{-}_{0}\) represent the same generator, we have \(Y^{+}_{0}=Y^{-}_{0}\). This restricts the solution to \[W^{\pm}_{m,n}=km\,\delta_{m+n,0},\qquad W^{\pm\mp}_{m,n}=0,\qquad Y^{\pm}_{m}= \frac{i}{2}k\,\delta_{m,0},\] (A.19) which is parametrized by only one number: \(k\).
* The Jacobi identities involving only current generators are automatically satisfied.
The final result is then that, up to redefinition of the generators, the most general central extension of the algebra (3.16) is parametrized by \(6\) numbers \(c^{\pm}\), \(d^{\pm}\), \(d_{0}\) and \(k\):
\[\begin{array}[]{rclcrcl}i[L^{\pm}_{m},L^{\pm}_{n}]&=&(m-n)L^{\pm}_{m+n}+\frac{ c^{\pm}}{12}m^{3}\delta_{m+n,0},&&i[L^{+}_{m},L^{-}_{n}]&=&0,\\ i[L^{\pm}_{m},P^{\pm}_{n}]&=&-nP^{\pm}_{m+n}+\left(d^{\pm}m^{2}\mp id_{0}\,m(m +2)\right)\delta_{m+n,0},&&i[L^{\pm}_{m},P^{\mp}_{n}]&=&0,\\ i[P^{\pm}_{m},P^{\pm}_{n}]&=&km\delta_{m+n,0},&&i[P^{+}_{m},P^{-}_{n}]&=&0,\\ i[L^{\pm}_{m},Q]&=&\frac{i}{2}P^{\pm}_{m}\pm d_{0}\,\delta_{m,0},&&i\left[P^{ \pm}_{m},Q\right]&=&\frac{i}{2}k\delta_{m,0}.\end{array}\] (A.20)
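A quick numerical spot-check - added for illustration - of the cocycle condition (A.9) for the solution (A.11):

```python
# Numerical spot-check (added) that K_{m,n} = (c/12)(m^3 - m) delta_{m+n,0}
# satisfies the cocycle condition (A.9) on random integer triples (m, n, p).
import random

def K(m, n, c=1.0):
    return c / 12.0 * (m**3 - m) if m + n == 0 else 0.0

random.seed(0)
for _ in range(1000):
    m, n, p = (random.randint(-10, 10) for _ in range(3))
    lhs = (m - n) * K(m + n, p) + (n - p) * K(n + p, m) + (p - m) * K(p + m, n)
    assert abs(lhs) < 1e-12, (m, n, p)
print("cocycle condition (A.9) holds on all sampled triples")
```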
## References
* [1] S. Deser and R. Jackiw, “Three-Dimensional Cosmological Gravity: Dynamics of Constant Curvature,” _Annals Phys._**153** (1984) 405–416.
* [2] M. Banados, C. Teitelboim, and J. Zanelli, “The Black hole in three-dimensional space-time,” _Phys.Rev.Lett._**69** (1992) 1849–1851, hep-th/9204099.
* [3] M. Banados, M. Henneaux, C. Teitelboim, and J. Zanelli, “Geometry of the (2+1) black hole,” _Phys.Rev._**D48** (1993) 1506–1525, gr-qc/9302012.
* [4] J. D. Brown and M. Henneaux, “Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity,” _Commun. Math. Phys._**104** (1986) 207–226.
* [5] A. Strominger, “Black hole entropy from near horizon microstates,” _JHEP_**9802** (1998) 009, hep-th/9712251.
* [6] O. Coussaert, M. Henneaux, and P. van Driel, “The Asymptotic dynamics of three-dimensional Einstein gravity with a negative cosmological constant,” _Class.Quant.Grav._**12** (1995) 2961–2966, gr-qc/9506019.
* [7] M. Henneaux, L. Maoz, and A. Schwimmer, “Asymptotic dynamics and asymptotic symmetries of three-dimensional extended AdS supergravity,” _Annals Phys._**282** (2000) 31–66, hep-th/9910013.
* [8] M. Rooman and P. Spindel, “Holonomies, anomalies and the Fefferman-Graham ambiguity in AdS(3) gravity,” _Nucl.Phys._**B594** (2001) 329–353, hep-th/0008147.
* [9] G. Barnich and H. Gonzalez, “Dual dynamics of three dimensional asymptotically flat Einstein gravity at null infinity,” 1303.1075.
* [10] M. Banados, “Three-dimensional quantum geometry and black holes,” hep-th/9901148.
* [11] S. Carlip, “Conformal field theory, (2+1)-dimensional gravity, and the BTZ black hole,” _Class.Quant.Grav._**22** (2005) R85–R124, gr-qc/0503022.
* [12] A. Maloney and E. Witten, “Quantum Gravity Partition Functions in Three Dimensions,” _JHEP_**1002** (2010) 029, 0712.0155.
* [13] A. Castro, M. R. Gaberdiel, T. Hartman, A. Maloney, and R. Volpato, “The Gravity Dual of the Ising Model,” _Phys.Rev._**D85** (2012) 024032, 1111.1987.
* [14] A. P. Porfyriadis and F. Wilczek, “Effective Action, Boundary Conditions, and Virasoro Algebra for AdS\({}_{3}\),” 1007.1031.
* [15] G. Compère, W. Song, and A. Strominger, “New Boundary Conditions for AdS3,” 1303.2662.
* [16] G. Gibbons and S. Hawking, “Action Integrals and Partition Functions in Quantum Gravity,” _Phys.Rev._**D15** (1977) 2752–2756.
* [17] J. D. Brown and J. York, James W., “Quasilocal energy and conserved charges derived from the gravitational action,” _Phys. Rev._**D47** (1993) 1407–1419, gr-qc/9209012.
* [18] M. Henningson and K. Skenderis, “Holography and the Weyl anomaly,” _Fortsch.Phys._**48** (2000) 125–128, hep-th/9812032.
* [19] V. Balasubramanian and P. Kraus, “A stress tensor for anti-de Sitter gravity,” _Commun. Math. Phys._**208** (1999) 413–428, hep-th/9902121.
* [20] S. de Haro, K. Skenderis, and S. N. Solodukhin, “Gravity in warped compactifications and the holographic stress tensor,” _Class.Quant.Grav._**18** (2001) 3171–3180, hep-th/0011230.
* [21] I. Papadimitriou and K. Skenderis, “Thermodynamics of asymptotically locally AdS spacetimes,” _JHEP_**0508** (2005) 004, hep-th/0505190.
* [22] G. Compere and D. Marolf, “Setting the boundary free in AdS/CFT,” _Class.Quant.Grav._**25** (2008) 195014, 0805.1902.
* [23] G. Barnich and C. Troessaert, “Aspects of the BMS/CFT correspondence,” _JHEP_**05** (2010) 062, 1001.1541.
* [24] T. Regge and C. Teitelboim, “Role of Surface Integrals in the Hamiltonian Formulation of General Relativity,” _Annals Phys._**88** (1974) 286.
* [25] R. Benguria, P. Cordero, and C. Teitelboim, “Aspects of the Hamiltonian Dynamics of Interacting Gravitational Gauge and Higgs Fields with Applications to Spherical Symmetry,” _Nucl.Phys._**B122** (1977) 61.
* [26] C. Imbimbo, A. Schwimmer, S. Theisen, and S. Yankielowicz, “Diffeomorphisms and holographic anomalies,” _Class.Quant.Grav._**17** (2000) 1129–1138, hep-th/9910267.
* [27] M. Henneaux, C. Martinez, and R. Troncoso, “Asymptotically warped anti-de Sitter spacetimes in topologically massive gravity,” _Phys.Rev._**D84** (2011) 124016, 1108.2841.
* [28] G. Compere and S. Detournay, “Boundary conditions for spacelike and timelike warped \(AdS_{3}\) spaces in topologically massive gravity,” _JHEP_**0908** (2009) 092, 0906.1243.
* [29] S. Detournay, T. Hartman, and D. M. Hofman, “Warped Conformal Field Theory,” _Phys.Rev._**D86** (2012) 124018, 1210.0539.
* [30] J. Navarro-Salas and P. Navarro, “A Note on Einstein gravity on AdS(3) and boundary conformal field theory,” _Phys.Lett._**B439** (1998) 262–266, hep-th/9807019.
* [31] C. Fefferman and C. Graham, “Conformal invariants,” _Elie Cartan et les Mathématiques d’aujourd’hui_ (1985) 95–116.
* [32] C. Graham and J. Lee, “Einstein metrics with prescribed conformal infinity on the ball,” _Adv. Math._**87** (1991) 186–225.
* [33] K. Skenderis and S. N. Solodukhin, “Quantum effective action from the AdS / CFT correspondence,” _Phys.Lett._**B472** (2000) 316–322, hep-th/9910023.
* [34] C. R. Graham, “Volume and area renormalizations for conformally compact einstein metrics,” _ArXiv Mathematics e-prints_ (Sept., 1999) arXiv:math/9909042.
* [35] M. Rooman and P. Spindel, “Aspects of (2+1)-dimensional gravity: AdS(3) asymptotic dynamics in the framework of Fefferman-Graham-Lee theorems,” _Annalen Phys._**9** (2000) 161–167, hep-th/9911142.
* [36] K. Bautier, F. Englert, M. Rooman, and P. Spindel, “The Fefferman-Graham ambiguity and AdS black holes,” _Phys.Lett._**B479** (2000) 291–298, hep-th/0002156.
* [37] G. Barnich and F. Brandt, “Covariant theory of asymptotic symmetries, conservation laws and central charges,” _Nucl. Phys._**B633** (2002) 3–82, hep-th/0111246.
* [38] G. Barnich and G. Compere, “Surface charge algebra in gauge theories and thermodynamic integrability,” _J. Math. Phys._**49** (2008) 042901, 0708.2378.
* [39] S. Carlip, “Liouville lost, Liouville regained: Central charge in a dynamical background,” _Phys.Lett._**B508** (2001) 168–172, gr-qc/0103100.
* [40] A. Castro, E. Hijano, and A. Lepage-Jutier, “Unitarity Bounds in \(AdS_{3}\) Higher Spin Gravity,” _JHEP_**1206** (2012) 001, 1202.4467.
* [41] H. Afshar, M. Gary, D. Grumiller, R. Rashkov, and M. Riegler, “Semi-classical unitarity in 3-dimensional higher-spin gravity for non-principal embeddings,” 1211.4454.
* [42] L. Brink and M. Henneaux, _Principles of string theory._ Plenum, New York, 1988.
* [43] J. A. de Azcárraga and J. M. Izquierdo, _Lie Groups, Lie algebras, cohomology, and some applications in physics._ Cambridge University Press, 1995.
|
0706.3632 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 10335,
"num_imgs": 4,
"llama3_tokens_count": 2728
} | [
"content_image/0706.3632/x1.png",
"content_image/0706.3632/x2.png",
"content_image/0706.3632/x3.png",
"content_image/0706.3632/x4.png"
] | ###### Abstract
The excitation of pulsation modes in \(\beta\) Cephei and Slowly Pulsating B stars is known to be very sensitive to opacity changes in the stellar interior where \(T\sim 2\times 10^{5}\,\rm K\). In this region differences in opacity up to \(\sim 50\%\) can be induced by the choice between OPAL and OP opacity tables, and between two different metal mixtures (Grevesse & Noels 1993 and Asplund et al. 2005). We have extended the non-adiabatic computations presented in Miglio et al. (2007) towards models of higher mass and pulsation modes of degree \(\ell=3\), and we present here the instability domains in the HR- and \(\log{P}\)-\(\log{T_{\rm eff}}\) diagrams resulting from different choices of opacity tables, and for three different metallicities.
## Revised instability domains of SPB and \(\beta\) Cephei stars
A. Miglio\({}^{1}\), J. Montalbán\({}^{1}\) and M.-A. Dupret\({}^{2}\)
\({}^{1}\)Institut d’Astrophysique, Allée du 6 Août, 17, B-4000 Liège, Belgium
\({}^{2}\)Observatoire de Paris, LESIA, CNRS UMR 8109, 92195 Meudon, France
### 1 Introduction
The detection of B-type pulsators in low metallicity environments (see e.g. Kołaczkowski et al. 2006 and references therein), and the large number of pulsation modes detected in B stars, are now revealing new discrepancies between theory and observations that challenge standard stellar models. For instance, the two \(\beta\) Cep stars 12 Lacertae and \(\nu\) Eridani present low-order p-modes with frequencies higher than those predicted by pulsation models, as well as high-order g-modes (SPB type oscillation) (Jerzykiewicz et al., 2005; Handler et al., 2006, and references therein).
The interpretation of observations in the framework of standard stellar models must take into consideration the uncertainties in the basic input physics. In fact, pulsation modes in SPBs and \(\beta\) Cep stars are excited by the \(\kappa\)-mechanism (see e.g. Dziembowski et al., 1993) due to the Fe-group opacity bump at \(T\sim 2\times 10^{5}\) K, and in the last three years there have been two important updates of the basic physics that can affect the study of B-type pulsators: _i)_ the revised solar metal mixture (Asplund et al., 2005) that implies a 25% larger Fe mass fraction for a given metallicity \(Z\); and _ii)_ the new Fe data included in OP opacity computations that lead to an opacity in the Z-bump increased by 18% with respect to the previous values (Badnell et al., 2005).
Miglio et al. (2007) (hereafter Paper I) and Pamyatnykh & Ziomek (2007) showed that the combination of these updates has a remarkable effect on the instability domains of SPB and \(\beta\) Cephei pulsators compared to the results obtained using OPAL opacities (Iglesias & Rogers, 1996) and the Grevesse & Noels (1993) metal mixture.
In Paper I we analyzed the role of chemical composition and opacity computations on the instability strip of B-type pulsators and on the frequency domain of expected excited modes. In the present paper we have extended the computations presented in Paper I by considering stellar masses up to 18 \(\rm M_{\odot}\) instead of 12 \(\rm M_{\odot}\), and by carrying out the non-adiabatic analysis also for \(\ell=3\) modes. Only through comparisons with observations will we be able to assess whether, and to what extent, the current uncertainties on opacity calculations and on the assumed metal mixture are able to explain the discrepancies between recent observations and standard stellar models. For this purpose we present in the following sections the instability strips in the HR (\(\log{L}\)-\(\log{T_{eff}}\)) and in the period-effective temperature (\(\log{P}\)-\(\log{T_{eff}}\)) diagrams resulting from the non-adiabatic calculations presented in Paper I and extended as mentioned above.
### 2 Stellar models and opacities
We computed stellar models with the code CLES (Code Liégeois d’Evolution Stellaire, Scuflaire et al. 2007). The main physical inputs are: the OPAL2001 equation of state (Rogers & Nayfonov, 2002) and the Caughlan & Fowler (1988) nuclear reaction rates, with Formicola et al. (2004) for the \({}^{14}\)N(p,\(\gamma\))\({}^{15}\)O cross-section. Convective transport is treated by using the classical Mixing Length Theory of convection (Böhm-Vitense, 1958), and a convective overshooting parameter of 0.2 pressure scale height was assumed in all the models. For the chemical composition we have considered Grevesse & Noels (1993) (GN93) and Asplund et al. (2005) corrected with the Ne abundance determined by Cunha et al. (2006) (AGS05+Ne). We have computed models with: _i)_ OPAL opacity tables with the GN93 mixture, _ii)_ OPAL tables with the AGS05+Ne mixture, _iii)_ OP tables assuming GN93, and _iv)_ OP tables assuming AGS05+Ne. All the opacity tables are completed at \(\log T<4.1\) with the corresponding GN93 and AGS05 low temperature tables by Ferguson et al. (2005).
The masses considered span from 2.5 to 18 \(\rm M_{\odot}\), and the chemical compositions considered are: \(X=0.70\) for the hydrogen mass fraction, and three different metal mass fractions: \(Z=0.02\), 0.01 and 0.005. For all the models the evolution was followed from the Pre-Main Sequence.
We recall that the differences between opacities computed with OPAL, OP and with the metal mixtures considered can reach nearly 50% in the region where the driving of pulsations occurs (\(T\sim 2\times 10^{5}\,\rm K\)) for a typical \(\beta\) Cep star (see Paper I for a detailed comparison). Though not included in the calculations presented here, it is worth recalling that the effect of considering the Asplund et al. (2005) metal mixture without the higher Neon abundance proposed by Cunha et al. (2006) is to further increase the Fe relative mass fraction by \(\sim 5\%\). In a 10 \(\rm M_{\odot}\) model (and for a given value of \(Z\)) this induces a further increase of the opacity at \(T\sim 2\times 10^{5}\,\rm K\) of up to \(7\%\), which only slightly modifies the instability strips presented here.
### 3 Results: Updated SPBs and \(\beta\)-Cep instability domains
We carry out a pulsational stability analysis of main-sequence models from our grid using the non-adiabatic code MAD (Dupret et al., 2003). As mentioned above, in these computations we fixed the overshooting parameter \(\alpha_{\rm ov}\) at 0.2 and the initial hydrogen mass fraction \(X\) at 0.70. For a discussion of the effect of assuming different \(\alpha_{\rm ov}\) or \(X\) on the stability domain, as well as for the stability study in post-MS models, we refer to the work by Pamyatnykh (1999). We checked the stability of radial modes and of non-radial p- and g-modes of degree \(1\leq\ell\leq 3\).
The location of the instability strip in the HR diagram and the frequency of the excited modes are determined by the properties of the metal opacity bump. The effects of the choice of the metal mixture (GN93 or AGS05+Ne) and of the opacity computations (OPAL or OP) on the HR location of instability domains are shown in figures 1, 2, and 3 for models with metallicity \(Z=\)0.02, 0.01, and 0.005, respectively. The combined effects on the excited modes of OP opacity and AGS05+Ne metal mixture, compared with the standard OPAL with GN93, are also shown by means of the Period-\(T_{\rm eff}\) diagram in Fig. 4.
The results presented in these figures can be summarized as follows:
1. Since the region where \(\kappa_{\rm T}=\left(\partial\log{\kappa_{\rm R}}/\partial\log{T}\right)_{\rho}\) increases outwards is found deeper in the star for OP models than for models computed with the OPAL tables, the blue borders of the instability strips are hotter for OP models than for OPAL ones (a short numerical sketch of how \(\kappa_{\rm T}\) can be evaluated from tabulated opacities is given after this list).
2. The \(T_{\rm eff}\) domain for which we find SPB pulsators using OP opacities is \(\sim 3000\) K larger than for OPAL models. As a consequence, the number of expected hybrid \(\beta\) Cep–SPB objects is also larger for OP models.
3. The impact of the different OP–OPAL opacities is more important for low metallicity. As shown in Fig. 2, while OPAL–GN93 models with Z=0.01 are hardly able to produce a narrow instability strip at the end of MS, with excited modes only for \(\ell>1\), the OP models present \(\ell=\)0–3 excited modes already for an evolutionary state corresponding to \(X_{c}\simeq 0.3\).
4. The Fe-mass fraction enhancement in the AGS05+Ne mixture, compared with GN93, has the main effect of extending towards higher overtones the range of excited frequencies.
5. Furthermore, while the different profile of \(\kappa\) in OP and OPAL computations modifies the blue border of the instability strip, a larger Fe-mass fraction in the metal mixture provides slightly wider instability bands, and this effect increases as the metallicity decreases. Thus, the number of \(\beta\) Cep pulsators expected with AGS05+Ne is more than three times larger than with GN93.
6. Computations for the lowest metallicity considered (Z=0.005) show that none of the different OP/OPAL and GN93/AGS05+Ne evolutionary tracks for masses up to 18 \(M_{\odot}\) predicts \(\beta\) Cep pulsators, whereas we find SPB-type modes excited when considering OP with AGS05+Ne.
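The following short numerical sketch - added for illustration and not part of the original analysis - shows one way to evaluate \(\kappa_{\rm T}\) from a tabulated opacity; the synthetic table only mimics a Fe-bump-like feature near \(\log T\sim 5.3\), and a real application would interpolate the OPAL or OP tables instead.

```python
# Illustrative sketch: kappa_T = (dlog kappa_R / dlog T)_rho from a tabulated opacity.
# The table below is synthetic (a Gaussian "Z-bump" near log T ~ 5.3), used only to
# demonstrate the finite-difference evaluation; it is not OPAL or OP data.
import numpy as np

logT = np.linspace(4.0, 7.0, 301)          # log10 T grid
logR = np.linspace(-6.0, 1.0, 71)          # log10 R, with R = rho / T6^3 (usual table variable)
LT, LR = np.meshgrid(logT, logR, indexing="ij")
logkappa = -1.0 + 0.3 * LR + 0.8 * np.exp(-((LT - 5.3) / 0.15) ** 2)

dk_dlogT_R = np.gradient(logkappa, logT, axis=0)   # d log kappa / d log T at fixed R
dk_dlogR_T = np.gradient(logkappa, logR, axis=1)   # d log kappa / d log R at fixed T

# at fixed rho: log R = log rho - 3 log T + const, hence
# (dlog kappa / dlog T)_rho = (dlog kappa / dlog T)_R - 3 (dlog kappa / dlog R)_T
kappa_T = dk_dlogT_R - 3.0 * dk_dlogR_T

i = np.argmin(np.abs(logR + 3.0))                  # pick one density track (log R = -3)
j = np.argmax(kappa_T[:, i])
print("max kappa_T = %.2f at log T = %.2f on this track" % (kappa_T[j, i], logT[j]))
```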
The instability strips presented in this work will be made available to the community via the HELAS (European Helio- and Asteroseismology Network) website and are also available upon request to the authors.
###### Acknowledgements.
The authors are thankful to R. Scuflaire for his kind help with CLES. A.M. and J.M. acknowledge financial support from the Prodex-ESA Contract Prodex 8 COROT (C90199).
<figure><img src="content_image/0706.3632/x1.png"><figcaption>Figure 1: Instability strips of β Cep- and SPB-type pulsations in the HRdiagram for Z=0.02. Evolutionary tracks are represented by dotted lines.</figcaption></figure>
<figure><img src="content_image/0706.3632/x2.png"><figcaption>Figure 2: Same as Fig. 1 but for Z=0.01</figcaption></figure>
<figure><img src="content_image/0706.3632/x3.png"><figcaption>Figure 3: Same as Fig. 1 but for Z=0.005</figcaption></figure>
<figure><img src="content_image/0706.3632/x4.png"><figcaption>Figure 4: Instability strips represented in a logTeff-logP diagram forZ=0.02,0.01 and different degree ℓ. In each panel, the two regions of unstablemodes represent β Cep- and SPB-type pulsations.</figcaption></figure>
|
1804.05881 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 45,
"num_imgs": 0,
"llama3_tokens_count": 15
} | [] | See pages 1-last of ISWCS2017_submitted.pdf
|
1410.0594 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 83427,
"num_imgs": 1,
"llama3_tokens_count": 24358
} | [
"content_image/1410.0594/csascheme2.jpg"
] | # Generalized Dynkin game of switching type representation for defaultable claims in presence of contingent CSA
###### Abstract
We study the existence of the solution for a generalized Dynkin game of switching type, which is shown to be the natural representation for a general defaultable OTC contract with contingent CSA. This is a theoretical counterparty risk mitigation mechanism that allows the counterparty of a general OTC contract to switch from zero to full/perfect collateralization and switch back whenever she wants until contract maturity, paying some switching costs and taking into account the running costs that emerge over time. In this paper we allow for the strategic interaction between the counterparties of the underlying contract, which makes the solution of the problem considerably harder. We are motivated in this research by the importance of showing the economic sense - in terms of optimal contract design - of a contingent counterparty risk mitigation mechanism like ours. In particular, we show that the existence of the solution and of the game's Nash equilibrium is connected with the solution of a system of non-linear reflected BSDEs, which remains an open problem. We then provide the basic ideas for numerically searching the game equilibrium via an _iterative optimal stopping_ approach, and we show the existence of the solution for our problem under strong conditions, in the so-called _symmetric case_.
GIOVANNI MOTTOLA¹
Sapienza University of Rome
Piazzale Aldo Moro
00185, Rome, Italy
## 1 Introduction
### Aim of the work
In this paper we analyze a theoretical contract in which the counterparties want to set a contingent CSA (_credit support annex_) in order to gain the flexibility and the possibility to manage the counterparty risk optimally. We refer specifically to a contingent risk mitigation mechanism that allows the counterparties to switch from zero to full/perfect (or even partial) collateralization and switch back whenever they want until maturity \(T\), paying some _switching costs_ and taking into account the _running costs_ that emerge over time. The running costs that we model and consider in the analysis of this problem are, on one side, those related to the CVA, namely the _counterparty risk hedging costs_, and, on the other side, the _collateral and funding/liquidity costs_ that emerge when collateralization is active.
We can summarize the characteristics and the basic idea underlying the problem - which we show to admit a natural formulation as a _stochastic differential game of switching type_ - through the so-defined _contingent CSA scheme_ shown below (Fig. 1.1), in which - bringing the funding issue into the picture - a third party is present, an _external funder_ assumed to be _default free_ (\(\lambda=0\)) in order to reduce the dimension and the technical issues of the problem.
<figure><img src="content_image/1410.0594/csascheme2.jpg"><figcaption></figcaption></figure>
Here, motivated by the results obtained in the unilateral case, we analyze the problem in a generalized setting, allowing for the strategic interplay between the parties of the contingent CSA scheme.
This has led us to study the existence of the solution and the equilibrium for the _stochastic differential game of switching type_ that we are going to define in _section two_. We can anticipate that our game's solution (its existence and uniqueness) remains an open problem that deserves further study and research. We also highlight the basic ideas of the game's numerical solution via an iterative optimal stopping approach. Further analysis of the game equilibrium is then carried out in _section three_ and some model applications are discussed in _section four_.
### Literature review
The body of literature on stochastic differential games theory is very wide. The roots of the theory are found in the pioneering works of von Neumann (1944) - for (mainly cooperative) _zero-sum games_ - and Nash (1950) - for (non-cooperative) _non-zero-sum games_ - and in the work of Isaacs (see Isaacs (2000)), who first studied _differential games_ in a deterministic setting. In the stochastic framework, the seminal work of Eugene Dynkin (Dynkin (1967)) is worth mentioning: he first analyzed stochastic differential games where the agents' control set is given by _stopping times_, so that these "games on stopping" are known as _Dynkin games_ in his honor.
Certainly, it is impossible to mention here all the numerous important contributions in this field of research and to give a systematic account of the theory and of the literature. For a complete treatment of the different types of games we refer to the book of Isaacs on differential games. In the following we therefore give just a simplified classification, restricting ourselves to the literature most related to our stochastic switching control problem.
This classification is based on the following main categories:
a) _game and equilibrium type_: it includes zero-sum and non-zero-sum games, whose solution can be searched mainly in terms of a cooperative or non-cooperative (Nash) equilibrium. This also depends on the characteristics of the game, which are mainly the system dynamics - which can be _markovian/non-markovian_ - and the controls, which can be state controls, stopping controls (as in Dynkin games) or both, the latter case being called _mixed control and stopping_. The number of players is also relevant; here we focus on the case \(p=2\).
b) _solution approaches_: the main one is the _analytical approach_, which allows one - typically under a markovian framework - to formulate the stochastic differential game (SDG) as a system of (second order) _Hamilton-Jacobi-Bellman equations_ or _variational inequalities_ to be solved, proving existence and (possibly) uniqueness of the solution, namely of the equilibrium of the game. The main solution techniques are those related to PDE theory, the dynamic programming principle and viscosity solutions.
Worth mentioning, from the analytical point of view, are the important works of Bensoussan and Friedman (1977), who first showed the existence of a Nash equilibrium for a non-zero-sum SDG with stopping times \(\{\tau_{1},\tau_{2}\}\) as controls, formulating the problem as a system of quasi-variational inequalities (solved through _fixed point_ methods) and assuming continuous and bounded running and terminal rewards; Fleming and Souganidis (1989), instead, first showed the existence and uniqueness of the solution/equilibrium for zero-sum SDGs through the dynamic programming and viscosity solution approach. These techniques have become very popular and widely used in the recent literature, given the deep connection with probabilistic tools, as we have already mentioned in chapter five.
In fact, the _probabilistic approach_ is the other, more general one, which makes use of martingale theory (also via _duality methods_) and of the Snell envelope and, in addition, of the deep results of forward-backward SDE theory in order to derive existence and uniqueness of the optimal control/stopping strategy for the game.
The main works worth mentioning - other than the already cited work of Cvitanic and Karatzas (1996), which first highlighted the connection between the solution of a zero-sum Dynkin game and that of a _doubly reflected BSDE_ (besides its analytical solution) - are those of: Hamadene (1998), who shows how the solution of an SDG is related to that of a backward-forward SDE; Hamadene and Lepeltier (2000), who extend the analysis through reflected BSDEs to "_mixed game_" problems; El Karoui and Hamadene (2003), who generalize the existence and uniqueness results for zero-sum and non-zero-sum games with "_risk sensitive_" controls; Hamadene and Zhang (2008), who use the Snell envelope technique to show the existence of a Nash equilibrium for non-zero-sum Dynkin games in a non-markovian framework; and Hamadene and Zhang (2010), who tackle the solution of a general switching control problem via systems of interconnected (nonlinear) RBSDEs (with _oblique reflection_).
To conclude the section, we also recall the monographs of Pham (2009) and Yong and Zhu (1999) for a clear and detailed analysis of backward SDEs, and we remark that much of this literature has been inspired by financial valuation problems. We refer mainly to the _american game option_ problem as defined in Kifer (2000) (also known as the _israeli option_). This has given impulse to the literature related mainly to _convertible and switchable bond valuation_, whose solution can be related to that of a zero-sum Dynkin game.
### Some examples of Dynkin games
Let us briefly recall that stochastic differential games are a family of dynamic, continuous-time versions of _differential games_ (as defined by Isaacs) incorporating randomness in both the states and the rewards. The random states are typically described by adapted diffusion processes whose dynamics are known (or assumed). By playing a game, a player receives a _running reward_ cumulated at some rate until the end of the game and a _terminal reward_ granted at the end of the game. The rewards are related to both the _state process_ and the _controls_ chosen by the players, as deterministic or random functions or functionals of them. A control represents a player's action in an attempt to influence his rewards. Assuming rationality, a player acts in the most profitable way based on his knowledge, represented by his information filtration. Before starting the formulation and the analysis of our generalized _Dynkin game of switching type_, let us recall - in a markovian framework - an example of both a non-zero-sum and a zero-sum Dynkin game (with \(p=2\)) and the relative equilibrium characterization. For all the details we refer in particular to the works of Bensoussan and Friedman (1977) and Fleming and Souganidis (1989).
- _Non-zero-sum Dynkin game_: Consider a standard probability space represented by the triple \((\Omega,\mathcal{F},\mathbb{P})\) on which we define \(W=(W_{t})_{0\leq t\leq T}\), a standard \(d\)-dimensional Brownian motion adapted to the space filtration. We assume the usual conditions on the drift function \(\mu(.)\) and the volatility function \(\sigma(.)\), such that the following SDE admits a unique solution
\[dy(t) = \mu(y(t),t)dt+\sigma(y(t),t)dW(t),\;\;t\in[0,T]\]
\[y(0) = y_{0}.\]
Let (for \(p=1,2\)) \(f_{p}(y,t)\) be the running reward functions and \(\phi_{p}(y,t)\), \(\psi_{p}(y,t)\) the reward functions obtained by the players upon stopping the game, all _continuous_ and _bounded_ in \(\mathbb{R}^{d}\times[0,T]\), with \(f_{p}\in\mathbb{L}^{2}\) square integrable and \(\psi_{p}\leq\phi_{p}\) for all \((y,t)\in\mathbb{R}^{d}\times[0,T]\). Then let \(g_{p}(y(T))\) be the terminal reward functions, also continuous and bounded.
In a game of this kind, the two players have to decide optimally when to stop the game, finding the optimal controls given by the stopping times \((\tau_{1},\tau_{2})\) that yield the maximum expected reward. So let us set the payoff functional for the two players of this Dynkin game as follows
\[J^{p}(y,\tau_{1},\tau_{2}) = \mathbb{E}\bigg{[}\int_{t}^{\tau_{1}\wedge\tau_{2}\wedge T}f_{p}( y(s),s)ds+\mathbbm{1}_{\{\tau_{i}<\tau_{j}\}}\phi_{p}(y(\tau_{i}),\tau_{i})\]
\[+ \mathbbm{1}_{\{\tau_{i}\geq\tau_{j},T>\tau_{j}\}}\psi_{p}(y(\tau_ {j}),\tau_{j})+\mathbbm{1}_{\{\tau_{1}=\tau_{2}=T\}}g_{p}(y(T))\bigg{]}\:\;for \;j\neq i(\:\in\{1,2\}),\]
and for \(t\leq\tau_{i}\leq T\). Being in a non-zero-sum game, with the players aiming to maximize their payoffs \(J^{p}(.)\) without cooperation, the problem here is to find a _Nash equilibrium point_ (NEP) for the game, that is, to determine the couple of optimal stopping times \((\tau_{1}^{*},\tau_{2}^{*})\) such that
\[J^{1}(y,\tau_{1}^{*},\tau_{2}^{*}) \geq J^{1}(y,\tau_{1},\tau_{2}^{*}),\;\:\forall\:\tau_{1}\in[t,T]\]
\[J^{2}(y,\tau_{1}^{*},\tau_{2}^{*}) \geq J^{2}(y,\tau_{1}^{*},\tau_{2}),\;\:\forall\:\tau_{2}\in[t,T]\]
namely the supremum of each payoff functional over the set of stopping times. In other words, at the NEP no player has an incentive to change his strategy, given that the other one has already optimally defined his own.
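To make the objects above concrete, the following Monte Carlo sketch - added for illustration only - estimates the payoffs \(J^{p}(y,\tau_{1},\tau_{2})\) for hitting-time stopping rules \(\tau_{p}=\inf\{t:y_{t}\geq b_{p}\}\). The geometric Brownian state dynamics, the reward functions and the threshold values are all assumptions made for the example; a best-response search over the thresholds \((b_{1},b_{2})\) would be one crude way to look numerically for a NEP.

```python
# Illustrative Monte Carlo sketch: estimate (J^1, J^2) for *given* threshold
# stopping rules tau_p = inf{t : y_t >= b_p}. Dynamics and rewards below are
# placeholder assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 250, 5_000
dt = T / n_steps
mu, sigma, y0 = 0.05, 0.30, 1.0

f   = [lambda y: 0.02 * y, lambda y: 0.01 * y]   # running rewards f_p
phi = [lambda y: 1.1 * y,  lambda y: 0.9 * y]    # reward when you stop strictly first
psi = [lambda y: 0.8 * y,  lambda y: 1.0 * y]    # reward when the other stops no later than you
g   = [lambda y: y,        lambda y: y]          # terminal rewards g_p

def payoffs(b1, b2):
    """Monte Carlo estimate of (J^1, J^2) for hitting-time strategies."""
    J = np.zeros(2)
    for _ in range(n_paths):
        y, running, tau = y0, np.zeros(2), [None, None]
        for k in range(n_steps):
            for p, b in enumerate((b1, b2)):
                if tau[p] is None and y >= b:
                    tau[p] = k * dt
            if tau[0] is not None or tau[1] is not None:
                break
            running += dt * np.array([f[0](y), f[1](y)])
            y += mu * y * dt + sigma * y * np.sqrt(dt) * rng.standard_normal()
        for p in range(2):
            o = 1 - p
            if tau[p] is None and tau[o] is None:              # nobody stopped before T
                J[p] += running[p] + g[p](y)
            elif tau[o] is None or (tau[p] is not None and tau[p] < tau[o]):
                J[p] += running[p] + phi[p](y)                 # player p stopped strictly first
            else:
                J[p] += running[p] + psi[p](y)                 # the other stopped no later
    return J / n_paths

print("J^1, J^2 at thresholds (1.3, 1.5):", payoffs(1.3, 1.5))
```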
This type of game, as shown in Bensoussan and Friedman (1977), has an analytical representation given by a system of _variational inequalities_, but it also admits a stochastic counterpart through a _system of BSDEs with reflecting barriers_. We return to its formal definition in the next section in relation to our problem. The Nash equilibrium defined above can be fairly generalized to the case of _mixed games of control and stopping_. We show this below in relation to zero-sum games.
- _Zero-sum mixed game_: A zero-sum game is characterized by the antagonistic interaction of the players, who in this case share the same payoff functional, but their objectives differ: for one player the payoff is a reward (think typically of the buyer of a convertible bond) that he wants to _maximize_, while for the other one it is a cost that he intends to _minimize_.
In the generalized case of mixed games of control and stopping, the set of controls is enriched by the \(\mathcal{F}_{t}\)-progressively measurable processes \((\alpha_{t})_{t\leq T}\) and \((\beta_{t})_{t\leq T}\), which are the intervention functions, namely the state controls of players \(p_{1}\) and \(p_{2}\) respectively. In addition, the players have to decide optimally when to stop the game, setting the stopping times \(\tau\) (for \(p_{1}\)) and \(\sigma\) (for \(p_{2}\)). Indeed, the system dynamics, being controlled by the agents, can be expressed as the following _controlled diffusion_ (remaining in a markovian framework):
\[dy(t)^{\alpha,\beta} = \mu(t,y_{t}^{\alpha,\beta},\alpha_{t},\beta_{t})dt+\eta(t,y_{t}^{ \alpha,\beta},\alpha_{t},\beta_{t})dW(t),\;\;t\in[0,T]\]
\[y(0)^{\alpha,\beta} = y_{0}.\]
The zero-sum game payoff, being the same for both players, will be
\[\Gamma(\alpha,\tau;\beta,\sigma) := \mathbb{E}\bigg{[}\int_{t}^{T\wedge\tau\wedge\sigma}f(s,y_{s}^{ \alpha,\beta},\alpha_{s},\beta_{s})ds+\mathbbm{1}_{\{\tau\leq\sigma<T\}}\phi( \tau,y_{\tau}^{\alpha,\beta})\]
\[+ \mathbbm{1}_{\{\sigma<\tau\}}\psi(\sigma,y_{\sigma}^{\alpha,\beta })+\mathbbm{1}_{\{\tau=\sigma=T\}}g(y^{\alpha,\sigma}(T))\bigg{]}\:\;(t\leq \tau\leq\sigma<T)\]
where the running and reward functions are intended to be the same as in the non-zero-sum case but clearly now they are the same for both players.
The solution of this SDG is typically tackled by studying the upper and lower _value function_ of the players, which are
\[\mathcal{U}(t,y) := \sup_{\alpha}\inf_{\beta}\sup_{\tau}\inf_{\sigma}\Gamma(\alpha, \tau;\beta,\sigma)\;\;(upper\:value\:p1)\]
\[\mathcal{L}(t,y) := \inf_{\beta}\sup_{\alpha}\inf_{\sigma}\sup_{\tau}\Gamma(\alpha, \tau;\beta,\sigma)\;\;(lower\:value\:p2)\]
Under some standard condition on the reward function and on controls, the problem has been tackled analytically representing the lower and upper value of the game as a system of nonlinear PDE with two obstacles/barriers, defined as follows
\[\begin{cases}\min\bigg{\{}u(t,y)-\phi(t,y),\max\bigg{\{}\frac{\partial u}{ \partial t}(t,y)-H^{-}(t,y,u,Du,D^{2}u),u(t,y)-\psi(t,y)\bigg{\}}\bigg{\}}=0\\ u(T,y)=g(y),\end{cases}\\\]
\[\begin{cases}\min\bigg{\{}v(t,y)-\phi(t,y),\max\bigg{\{}\frac{\partial v}{ \partial t}(t,y)-H^{+}(t,y,v,Dv,D^{2}v),v(t,y)-\psi(t,y)\bigg{\}}\bigg{\}}=0\\ v(T,y)=g(y),\end{cases}\\\]
where \(H^{+}(.)\) and \(H^{-}(.)\) are the _hamiltonian operators_ (as defined in chapter four) associated to the upper and lower value function of the SDG. To solve the system, the unknown solution function \(u\) and \(v\) can be shown (under some techinical assumptions) to be _viscosity solutions_ of the above two PDE with obstacles and to coincide with the value functions \(\mathcal{U}(t,y)\) and \(\mathcal{L}(t,y)\) of the game.
In particular, when the _Isaacs condition_ holds, namely
\[H^{-}(t,y,u,q,X)=H^{+}(t,y,u,q,X)\]
then the two value functions coincide and the SDG has a value, namely
\[V:=\sup_{\alpha}\inf_{\beta}\sup_{\tau}\inf_{\sigma}\Gamma(\alpha,\tau;\beta, \sigma)=\inf_{\beta}\sup_{\alpha}\inf_{\sigma}\sup_{\tau}\Gamma(\alpha,\tau; \beta,\sigma)\]
which is called the _saddle point equilibrium_ of the mixed zero-sum game.
We also mention that in this case, as in the non-zero-sum case, the SDG has a stochastic representation expressed in terms of a _doubly reflected BSDE_ (\(2RBSDE\))⁴. In particular, setting the terminal reward \(\xi\), the _early exercise rewards_ \(\phi_{t}=U_{t}\) and \(\psi_{t}=L_{t}\) (which represent the two barriers of the value process), the _generator function_ \(f\), the _value process_ \(Y_{t}\), the process \(Z_{t}\) (_the conditional expectation/volatility process_ that makes \(Y_{t}\) \(\mathcal{F}_{t}\)-measurable) and the _compensator process_ \(K\), it can be shown that the solution of the corresponding \(2RBSDE\)⁵ characterizes the value of the game.
[FOOTNOTE:4][ENDFOOTNOTE]
[FOOTNOTE:5][ENDFOOTNOTE]
## 2 Defaultable Dynkin game of switching type
### Framework and assumptions
To begin, it is convenient to describe the framework in which we work and to give the main definitions of the processes and variables involved. The framework setting follows strictly the _reduced-form models_ literature, and we refer to the classical monograph of Bielecki and Rutkowski (2004) for details.
Let us consider a probability space described by the triple \((\Omega,\mathcal{G}_{t},\mathbb{P})\), where the full filtration is given by \(\mathcal{G}_{t}=\sigma(\mathcal{F}_{t}\vee\mathcal{H}^{A}\vee\mathcal{H}^{B})_{t\geq 0}\) for \(t\in[0,T]\) and \(\mathbb{P}\) is the real-world probability measure defined on this space. On it live two strictly positive random times \(\tau_{i}\), \(i\in\{A,B\}\), which represent the _default times_ of the counterparties considered in our model. In addition, we define the _default process_ \(H^{i}_{t}=\mathbbm{1}_{\{\tau^{i}\leq t\}}\) and the corresponding filtration \(\mathcal{H}^{i}\) generated by \(H_{t}^{i}\) for any \(t\in\mathbb{R}^{+}\). We are left to mention \(\mathcal{F}\), the (risk-free) _market filtration_ generated by a \(d\)-dimensional Brownian motion vector \(W\) under the real measure \(\mathbb{P}\). Finally, we recall that all the processes we consider, in particular \(H^{i}\), are _càdlàg semimartingales_, \(\mathcal{G}\)-adapted, and the \(\tau^{i}\) are \(\mathcal{G}\)-stopping times.
For convenience, let us define the first default time of the counterparties as \(\tau=\tau_{A}\wedge\tau_{B}\), which also represents the ending/extinction time of the underlying contract, with the corresponding indicator process \(H_{t}=\mathbbm{1}_{\{\tau\leq t\}}\). As concerns the underlying market model, it is assumed to be arbitrage-free, namely it admits a _spot martingale measure_ \(\mathbb{Q}\) (not necessarily unique) equivalent to \(\mathbb{P}\). A spot martingale measure is associated with the choice of the savings account \(B_{t}\) (so that \(B^{-1}\) is the discount factor) as numeraire which, as usual, is given by the \(\mathcal{F}_{t}\)-predictable process
\[B_{t}=\exp\int_{0}^{t}r_{s}ds,\:\:\forall\:t\in\mathbb{R}^{+}\] (1)
where the short-term rate \(r\) is assumed to follow an \(\mathcal{F}\)-progressively measurable stochastic process (whatever the choice of the term-structure model).
We then define the _Azéma supermartingale_ \(G_{t}=\mathbb{P}(\tau>t|\mathcal{F}_{t})\), with \(G_{0}=1\) and \(G_{t}>0\:\:\forall\:t\in\mathbb{R}^{+}\), as the _survival process_ of the default time \(\tau\) with respect to the filtration \(\mathbb{F}\). The process \(G\), being a bounded \(\mathcal{F}\)-supermartingale, admits a unique _Doob-Meyer decomposition_ \(G=\mu-\nu\), where \(\mu\) is the martingale part and \(\nu\) is a predictable increasing process. In particular, \(\nu\) is assumed to be absolutely continuous with respect to the _Lebesgue measure_, so that \(d\nu_{t}=\upsilon_{t}dt\) for some \(\mathcal{F}\)-progressively measurable, non-negative process \(\upsilon\). We can then define the default intensity \(\lambda\) as the \(\mathcal{F}\)-progressively measurable process \(\lambda_{t}=G_{t}^{-1}\upsilon_{t}\), so that \(dG_{t}=d\mu_{t}-\lambda_{t}G_{t}dt\), and the cumulative default intensity is defined as follows
\[\Lambda_{t}=\int_{0}^{t}G_{u}^{-1}d\nu_{u}=\int_{0}^{t}\lambda_{u}du,\] (2)
For convenience, we assume that the _immersion property_ holds in our framework, so that every càdlàg (square-integrable) \(\mathcal{F}\)-martingale remains a \(\mathcal{G}\)-martingale.
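As a small numerical illustration of (2), the sketch below integrates a given intensity path to obtain the cumulative intensity \(\Lambda_{t}\) and, under the additional simplifying assumption that the martingale part of \(G\) is trivial (so that \(G\) is decreasing), recovers the survival process as \(G_{t}=\exp(-\Lambda_{t})\); the particular intensity function used is an arbitrary example.

```python
import numpy as np

def cumulative_intensity(lam, T=5.0, n_steps=500):
    """Trapezoidal integration of Lambda_t = int_0^t lambda_u du on a time grid."""
    t = np.linspace(0.0, T, n_steps + 1)
    lam_vals = lam(t)
    # cumulative trapezoidal rule
    Lambda = np.concatenate(([0.0],
                             np.cumsum(0.5 * (lam_vals[1:] + lam_vals[:-1]) * np.diff(t))))
    return t, Lambda

# Illustrative deterministic intensity (an assumption of this sketch).
lam = lambda t: 0.02 + 0.01 * np.sqrt(t)

t, Lambda = cumulative_intensity(lam)
G = np.exp(-Lambda)   # survival process when G has no martingale part
print(f"5y cumulative intensity: {Lambda[-1]:.4f}, survival probability: {G[-1]:.4f}")
```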
In particular, we work with _pre-default value processes_: setting \(J:=1-H=\mathbbm{1}_{\{t<\tau\}}\), for any \(\mathcal{G}\)-adapted, respectively \(\mathcal{G}\)-predictable, process \(X\) over \([0,T]\) there exists a unique \(\mathcal{F}\)-adapted, respectively \(\mathcal{F}\)-predictable, process \(\tilde{X}\) over \([0,T]\), called the pre-default value process of \(X\), such that \(JX=J\tilde{X}\), respectively \(J_{-}X=J_{-}\tilde{X}\).
As regards counterparty objectives, \(\{A,B\}\) are both defaultable and are assumed to behave rationally with the same objective to minimize the overall costs related to counterparty risk - quantified through the BCVA - and those related to collateral and funding. The information flow is assumed symmetric.
### Main definitions: BCVA, contingent CSA and funding
Let us state the main definitions needed to model our claim with contingent CSA. Here too we only state the objects involved; for proofs and details we refer to the works already mentioned and the references therein.
**1) BCVA definition**. Following Bielecki et al. (2011) (proposition 2.9), the bilateral CVA process of a defaultable claim with bilateral counterparty risk \((X;\mathbf{A};Z;\tau)\) maturing in \(T\) satisfies the following relation⁶
[FOOTNOTE:6][ENDFOOTNOTE]
\[BCVA_{t} = S_{t}^{rf}-S_{t}\]
\[= CVA_{t}-DVA_{t}\]
\[= B_{t}\,\mathbb{E}_{\mathbb{Q}}\Big{[}\mathbbm{1}_{\{t<\tau=\tau_{B}\leq T\}}B_{\tau}^{-1}(1-R_{c}^{B})\big{(}S^{rf}_{\tau}\big{)}^{+}\big{|}\mathcal{G}_{t}\Big{]}\]
\[- B_{t}\,\mathbb{E}_{\mathbb{Q}}\Big{[}\mathbbm{1}_{\{t<\tau=\tau_{A}\leq T\}}B_{\tau}^{-1}(1-R_{c}^{A})\big{(}S^{rf}_{\tau}\big{)}^{-}\big{|}\mathcal{G}_{t}\Big{]}\] (4)
for every \(t\in[0,T]\), where \(B_{t}\) is the savings account defined in (1) (so \(B^{-1}\) the _discount factor_), \(R_{c}^{i}\), \(i\in\{A,B\}\), is the _counterparty recovery rate_ (process), and where the _clean price process_ \(S^{rf}_{t}\) is simply represented by the integral over time of the contract dividend flow under the martingale pricing measure \(\mathbb{Q}\), that is
\[S_{t}^{rf}=B_{t}\mathbb{E}_{\mathbb{Q}}\bigg{(}\int_{]t,T]}B_{u}^{-1}dD^{rf}_{ u}\big{|}\mathcal{F}_{t}\bigg{)}\;\;t\in[0,T]\] (5)
and \(D^{rf}_{t}\) is the clean dividend process of the default-free contract
\[D_{t}^{rf} = X(T)+\sum_{i\in\{A,B\}}\bigg{(}\int_{]t,T]}d\mathbf{A}^{i}_{u} \bigg{)}\;\;t\in[0,T]\] (6)
where \(X\) is the \(\mathcal{F}\)-adapted final payoff, \(\mathbf{A}\) the \(\mathcal{F}\)-adapted process representing the contract’s _promised dividends_ and \(\tau=\tau^{i}=\infty\). We are left to state the definitions of (bilateral) _risky dividend_ and _price process_ of a general defaultable claim:
\[D_{t} = X\mathbbm{1}_{\{T<\tau\}}+\sum_{i\in\{A,B\}}\bigg{(}\int_{]t,T]} (1-H_{u}^{i})d\mathbf{A}^{i}_{u}+\int_{]t,T]}Z_{u}dH_{u}^{i}\bigg{)}\;\;t\in[0 ,T]\] (7)
for \(i\in\{A,B\}\) and
\[NPV_{t}=S_{t}=B_{t}\mathbb{E}_{\mathbb{Q}}\bigg{(}\int_{]t,T]}B_{u}^{-1}dD_{u} \big{|}\mathcal{G}_{t}\bigg{)}\;\;t\in[0,T].\\\] (8)
where \(Z\) is the _recovery process_ that specifies the recovery payoff at default and \(H_{t}:=\mathbbm{1}_{\{\tau\leq t\}}\) the already defined _default process_.
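To fix ideas, here is a minimal Monte Carlo sketch of the clean price (5) and the risky price (8) at \(t=0\) for a claim with terminal payoff only (\(\mathbf{A}\equiv 0\)). The constant short rate, the constant first-to-default intensity, the lognormal terminal underlying and the recovery convention (a fraction \(R_{c}\) of the claim's clean value, approximated path-wise by the discounted terminal payoff) are all simplifying assumptions of the sketch.

```python
import numpy as np

def clean_and_risky_price(payoff, r=0.02, lam=0.03, R_c=0.4, T=5.0, n_paths=200_000, seed=1):
    """Monte Carlo estimate at t=0 of the clean price S^rf and the risky price S
    for a claim with terminal payoff only (promised dividends A = 0).

    Simplifying assumptions of this sketch: constant short rate r, constant first-to-default
    intensity lam (tau ~ Exp(lam), independent of the payoff), and recovery at default equal
    to a fraction R_c of the claim's clean value, approximated path-wise by the discounted
    terminal payoff.
    """
    rng = np.random.default_rng(seed)
    S_T = rng.lognormal(mean=0.0, sigma=0.25, size=n_paths)   # terminal underlying, illustrative
    X = payoff(S_T)                                           # terminal claim payoff
    tau = rng.exponential(scale=1.0 / lam, size=n_paths)      # first-to-default times
    disc_X = np.exp(-r * T) * X
    clean = disc_X                                            # no default risk
    risky = np.where(tau > T, disc_X, R_c * disc_X)           # haircut applied if default before T
    return clean.mean(), risky.mean()

S_rf0, S0 = clean_and_risky_price(payoff=lambda s: np.maximum(s - 1.0, 0.0))  # call-style payoff
print(f"clean {S_rf0:.4f}  risky {S0:.4f}  gap (unilateral CVA-type) {S_rf0 - S0:.4f}")
```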
**2) Collateral definition with contingent CSA.** In order to generalize collateralization in the presence of a contingent CSA, we recall that the collateral account/process \(Coll_{t}:[0,T]\rightarrow\mathbb{R}\) is a stochastic \(\mathcal{F}_{t}\)-adapted process defined as
\[Coll_{t}=\mathbbm{1}_{\{S^{rf}_{t}>\Gamma_{B}+MTA\}}(S^{rf}_{t}-\Gamma_{B})+ \mathbbm{1}_{\{S^{rf}_{t}<\Gamma_{A}-MTA\}}(S^{rf}_{t}-\Gamma_{A}),\] (9)
on the time set \(\{t<\tau\}\), and
\[Coll_{t}=\mathbbm{1}_{\{S^{rf}_{\tau^{-}}>\Gamma_{B}+MTA\}}(S^{rf}_{\tau^{-}}- \Gamma_{B})+\mathbbm{1}_{\{S^{rf}_{\tau^{-}}<\Gamma_{A}-MTA\}}(S^{rf}_{\tau^{- }}-\Gamma_{A}),\;\] (10)
on the set \(\{\tau\leq t<\tau+\Delta t\}\), thresholds \(\Gamma_{i}\), for \(i\in\{A,B\}\) and positive minimum transfer amount \(MTA\).
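The collateral rule (9) translates directly into a small helper, as in the sketch below; the thresholds and the minimum transfer amount used in the example are illustrative assumptions.

```python
def collateral(S_rf, Gamma_A, Gamma_B, MTA):
    """Collateral account of equation (9) on {t < tau}.

    Posts (S^rf - Gamma_B) when the clean exposure exceeds B's threshold plus the MTA,
    posts (S^rf - Gamma_A) when it falls below A's threshold minus the MTA, else zero.
    """
    if S_rf > Gamma_B + MTA:
        return S_rf - Gamma_B
    if S_rf < Gamma_A - MTA:
        return S_rf - Gamma_A
    return 0.0

# Illustrative thresholds and minimum transfer amount (assumptions of the sketch).
for exposure in (-3.0, -0.5, 0.5, 3.0):
    print(exposure, collateral(exposure, Gamma_A=-1.0, Gamma_B=1.0, MTA=0.25))
```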
The perfect collateralization case, say \(Coll^{Perf}_{t}\), can be shown to be always equal to the mark to market, namely to the (default free) price process \(S^{rf}_{t}\) of the underlying claim, that is formally
\[Coll_{t}^{Perf}=\mathbbm{1}_{\{S^{rf}_{t}>0\}}(S^{rf}_{t}-0)+\mathbbm{1}_{\{S_ {t}^{rf}<0\}}(S^{rf}_{t}-0)=S^{rf}_{t}\;\;\forall\:t\in[0,T],\;on\:\{t<\tau\}.\] (11)
and
\[Coll_{t}^{Perf}=S^{rf}_{\tau^{-}}\;\;\forall\:t\in[0,T],on\:\{\tau\leq t<\tau+ \Delta t\}.\] (12)
Let us remind that in presence of _perfect/full collateralization_ one can easily show that
\[BCVA_{t}^{Coll^{Perf}} = S_{t}^{rf}-S_{t}=0\Longrightarrow\]
\[S_{t} = S_{t}^{rf}\qquad\forall t\in[0,T]\] (13)
Generalizing, the contingent collateral \(Coll^{C}_{t}\) can be defined as the \(\mathcal{F}_{t}\)-adapted process defined for any time \(t\in[0,T]\), for every switching time \(\tau_{j}\in[0,T]\), \(j=1,\dots,M\), switching indicator \(z_{j}\) and default time \(\tau\) (defined above as \(\min\{\tau_{A},\tau_{B}\}\)), which is formally
\[Coll^{C}_{t}=S^{rf}_{t}\mathbbm{1}_{\{z_{j}=0\}}\mathbbm{1}_{\{\tau_{j}\leq t<\tau_{j+1}\}}+0\,\mathbbm{1}_{\{z_{j}=1\}}\mathbbm{1}_{\{\tau_{j}\leq t<\tau_{j+1}\}}\] (14)
on the set \(\{t<\tau\}\), and
\[Coll^{C}_{t}=S^{rf}_{\tau^{-}}\mathbbm{1}_{\{z_{j}=0\}}\mathbbm{1}_{\{\tau_{j} \leq t<\tau_{j+1}\}}+0\mathbbm{1}_{\{z_{j}=1\}}\mathbbm{1}_{\{\tau_{j}\leq t< \tau_{j+1}\}}\;\]
on the set \(\{\tau\leq t<\tau+\Delta t\}\).
**3) BCVA definition with contingent CSA.** Defining \(D_{t}^{C}\) and \(S_{t}^{C}\) as the dividend and price processes in the presence of the _contingent CSA_
\[D^{C}_{t} = D^{rf}_{t}\mathbbm{1}_{\{z_{j}=0\}}\mathbbm{1}_{\{\tau_{j}\leq t <\tau_{j+1}\}}+D_{t}\mathbbm{1}_{\{z_{j}=1\}}\mathbbm{1}_{\{\tau_{j}\leq t< \tau_{j+1}\}}\] (15)
\[S^{C}_{t} = S^{rf}_{t}\mathbbm{1}_{\{z_{j}=0\}}\mathbbm{1}_{\{\tau_{j}\leq t <\tau_{j+1}\}}+S_{t}\mathbbm{1}_{\{z_{j}=1\}}\mathbbm{1}_{\{\tau_{j}\leq t< \tau_{j+1}\}}\] (16)
for any time \(t\in[0,T\wedge\tau]\), switching times \(\tau_{j}\in[0,T\wedge\tau]\), \(j=1,\dots,M\) and switching indicator \(z_{j}\in\{0,1\}\), we define the bilateral CVA of a contract with contingent CSA of switching type as follows
\[BCVA^{C}_{t} = S^{rf}_{t}-S^{C}_{t}\] (17)
\[= S^{rf}_{t}-(S^{rf}_{t}\mathbbm{1}_{\{z_{j}=0\}}\mathbbm{1}_{\{ \tau_{j}\leq t<\tau_{j+1}\}}+S_{t}\mathbbm{1}_{\{z_{j}=1\}}\mathbbm{1}_{\{\tau _{j}\leq t<\tau_{j+1}\}})\]
\[= 0\mathbbm{1}_{\{z_{j}=0\}}\mathbbm{1}_{\{\tau_{j}\leq t<\tau_{j+ 1}\}}+BCVA_{t}\mathbbm{1}_{\{z_{j}=1\}}\mathbbm{1}_{\{\tau_{j}\leq t<\tau_{j+1 }\}}.\]
where the expression for \(BCVA_{t}\) is known from the former point.
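The regime-dependent definitions (14) and (17) reduce, at a given time, to indicator selections; a minimal sketch (with toy numbers) is the following.

```python
def contingent_bcva(bcva, z):
    """BCVA with contingent CSA, equation (17): equal to BCVA_t when the current
    switching indicator is z = 1 (no collateral) and 0 when z = 0 (full collateral)."""
    return bcva if z == 1 else 0.0

def contingent_collateral(S_rf, z):
    """Contingent collateral of (14) on {t < tau}: the full mark-to-market S^rf when
    z = 0, nothing when z = 1."""
    return S_rf if z == 0 else 0.0

# A toy sequence of exposures, BCVA values and regimes (illustrative numbers only).
for S_rf, bcva, z in [(1.2, 0.05, 1), (1.2, 0.05, 0), (-0.8, -0.03, 1)]:
    print(z, contingent_collateral(S_rf, z), contingent_bcva(bcva, z))
```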
**4) Funding definition**. As regards funding, in our setting we allow for differences in the funding rates between counterparties. In particular, we assume the existence of the funding assets \(B^{opp^{i}}_{t}\), \(B^{borr^{i}}_{t}\) and \(B^{rem^{i}}_{t}\). Under the assumptions of _segregation_ (no collateral _rehypothecation_), collateral made up of cash and BCVA not funded⁷, if counterparty \(i\in\{A,B\}\) has to post collateral in the margin account, she sustains a funding cost, applied by the external funder, represented by the _borrowing rate_ \(r^{borr^{i}}_{t}=r_{t}+s_{t}^{i}\), that is the risk-free rate plus a credit spread (usually different between the two parties). On the other side, the counterparty receives from the funder the remuneration on the collateral posted, which we define in the CSA as the risk-free rate plus some basis points, namely \(r_{t}^{rem^{i}}=r_{t}+bp_{t}^{i}\). Hence we assume the following dynamics for the funding assets (which can differ between counterparties)
[FOOTNOTE:7][ENDFOOTNOTE]
\[dB^{borr^{i}}_{t} = (r_{t}+s_{t}^{i})B^{borr^{i}}_{t}dt,\qquad i\;\in\{A,B\}\] (18)
\[dB^{rem^{i}}_{t} = (r_{t}+bp_{t}^{i})B^{rem^{i}}_{t}dt,\qquad i\;\in\{A,B\}\] (19)
Considering instead the counterparty that calls the collateral: as above, the collateral is remunerated at the rate of \(B^{rem}\) (the remuneration for the two parties can differ), but she cannot use or invest the collateral amount (which is segregated), so she sustains an opportunity cost, represented by the rate \(r_{t}^{opp^{i}}=r_{t}+\pi_{t}^{i}\), where \(\pi\) is a premium over the risk-free rate. Hence, we assume the existence of the following asset too
\[dB^{opp^{i}}_{t} = (r_{t}+\pi_{t}^{i})B^{opp^{i}}_{t}dt,\qquad i\;\in\{A,B\}\] (20)
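Since the funding assets (18)-(20) are deterministic given the rate paths, each equals the exponential of the integrated rate; the sketch below computes them by numerical integration, with flat illustrative curves for \(r\), \(s\), \(bp\) and \(\pi\) (all assumed numbers).

```python
import numpy as np

def funding_account(rate_fn, T=1.0, n_steps=365):
    """Value at T of an account with dB_t = rate(t) B_t dt, B_0 = 1,
    i.e. B_T = exp(int_0^T rate(u) du), integrated with the trapezoidal rule."""
    t = np.linspace(0.0, T, n_steps + 1)
    vals = rate_fn(t)
    integral = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t)))
    return float(np.exp(integral))

# Illustrative flat curves: risk-free r, borrowing spread s, remuneration bp, opportunity premium pi.
r, s, bp, pi = 0.02, 0.015, 0.001, 0.005
print("B^borr:", funding_account(lambda t: np.full_like(t, r + s)))
print("B^rem :", funding_account(lambda t: np.full_like(t, r + bp)))
print("B^opp :", funding_account(lambda t: np.full_like(t, r + pi)))
```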
To conclude the section, let us underline that, given the symmetrical nature of the processes (except for the funding ones), the following relations hold:
\[BCVA_{t}^{A} = -BCVA_{t}^{B}\qquad t\;\in[0,\tau\wedge T]\]
\[BCVA_{t}^{C^{A}} = -BCVA_{t}^{C^{B}}\qquad t\;\in[0,\tau\wedge T]\]
\[Coll_{t}^{C^{A}} = -Coll_{t}^{C^{B}}\qquad t\;\in[0,\tau\wedge T].\]
### Model dynamics, controls and cost functionals
In our contingent CSA model of multiple switching type (with finite horizon), both counterparties \(A,B\) are free to switch between zero and perfect collateralization at any time in \([0,T]\). Hence their control sets are made up of sequences of _switching times_ - say \(\tau_{j}\in\mathcal{T}\) - and _switching indicators_ \(z_{j}\in\mathcal{Z}\) with \(\mathcal{T}\subset[0,T]\), which we define formally as follows
\[\mathcal{C}^{A}=\big{\{}\mathcal{T}^{A},\mathcal{Z}^{A}\big{\}}=\big{\{}\tau_{ j}^{A},z_{j}^{A}\big{\}}_{j=1}^{M},\;\forall\tau_{j}^{A}\in[0,T],\;z_{j}^{A} \in\{0,1\}\] (21)
\[\mathcal{C}^{B}=\big{\{}\mathcal{T}^{B},\mathcal{Z}^{B}\big{\}}=\big{\{}\tau_{ j}^{B},z_{j}^{B}\big{\}}_{j=1}^{M},\;\forall\tau_{j}^{B}\in[0,T],\;z_{j}^{B} \in\{0,1\}\] (22)
with the last switching time \(\{\tau^{M}_{i}\leq T\}\) \((M<\infty)\). We recall that the \(\tau_{j}^{i}\) are, by definition of _stopping times_, \(\mathcal{F}_{t}\)-measurable random variables, while the \(z_{j}^{i}\) are \(\mathcal{F}_{\tau_{j}}\)-measurable switching indicators which, in our model, take for all \(j\in 1,\dots,M\) the values
\[\begin{cases}z_{j}=1\Rightarrow&\emph{zero collateral}\;(full\;CVA)\\ z_{j}=0\Rightarrow&\emph{full collateral}\;(null\;CVA)\\ \end{cases}\]
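A control sequence as in (21)-(22) can be represented by a small data structure holding the switching times and indicators, together with a lookup of the active regime at any time; the sketch below is one possible (and purely illustrative) encoding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SwitchingStrategy:
    """A control sequence u^i = {(tau_j, z_j)}_{j=1..M} as in (21)-(22):
    increasing switching times in [0, T] with switching indicators in {0, 1}."""
    times: List[float]       # switching times tau_j (assumed increasing)
    indicators: List[int]    # z_j in {0, 1}: 1 = zero collateral, 0 = full collateral
    z0: int = 1              # initial regime before the first switch

    def regime(self, t: float) -> int:
        """Active regime at time t (the indicator of the last switch at or before t)."""
        z = self.z0
        for tau_j, z_j in zip(self.times, self.indicators):
            if tau_j <= t:
                z = z_j
            else:
                break
        return z

# A toy strategy: start uncollateralised, switch to full collateral at t=1, back at t=2.5.
u_A = SwitchingStrategy(times=[1.0, 2.5], indicators=[0, 1])
print([u_A.regime(t) for t in (0.5, 1.5, 3.0)])   # -> [1, 0, 1]
```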
Clearly the controls also affect our model dynamics. As regards this point, we assume general Markovian diffusions for \((X;\lambda^{i})\), \(i\in\{A,B\}\), namely the interest rates and the default intensities of the counterparties. From the definitions of _contingent CSA_ and (bilateral) CVA given in the last section, we highlight that the switching controls enter and affect the dynamics of these processes. In fact, as we know, switching to full collateralization implies \(BCVA=0\), that is \(S_{t}=S^{rf}_{t}=Coll_{t}^{perf}\). This means no counterparty risk, so the default intensities' dynamics \(d\lambda_{t}^{i}\) won't be relevant; only \(dX\) will be considered while \(z=0\) (namely as long as collateralization is kept active). So, formally, we have:
\(if\:\big{\{}z_{j}=1\big{\}}\:and\:\{\tau_{j}\leq t<\tau_{j+1}\}\Rightarrow\;\)
\[D_{t}^{C} = D_{t}\Rightarrow\]
\[S_{t}^{C} = S_{t}\Rightarrow\]
\[BCVA_{t}^{C} = BCVA_{t}\forall\:t\in[0,T\wedge\tau]\]
so that the relevant dynamic to model the BCVA process in this regime is
\[dX_{t} = \mu(t,X_{t})X(t)dt+\sigma(t,X_{t})X(t)dW_{t}^{x};\quad\qquad X(0) =x_{0}\]
\[d\lambda_{t}^{A} = \gamma(t,\lambda_{t}^{A})\lambda^{A}(t)dt+\nu(t,\lambda_{t}^{A}) \lambda^{A}(t)dW_{t}^{\lambda^{A}};\qquad\lambda^{A}(0)=\lambda^{A}_{0}\]
\[d\lambda_{t}^{B} = \chi(t,\lambda_{t}^{B})\lambda^{B}(t)dt+\eta(t,\lambda_{t}^{B}) \lambda^{B}(t)dW_{t}^{\lambda^{B}};\qquad\lambda^{B}(0)=\lambda^{B}_{0}\]
\[d\langle X,\lambda_{t}^{A}\rangle_{t} = d\langle W_{t}^{x},W_{t}^{\lambda^{A}}\rangle_{t}=\rho_{X, \lambda^{A}}dt\]
\[d\langle X,\lambda_{t}^{B}\rangle_{t} = d\langle W_{t}^{x},W_{t}^{\lambda^{B}}\rangle_{t}=\rho_{X, \lambda^{B}}dt\]
\(if\:\big{\{}z_{j}=0\big{\}}\;and\;\{\tau_{j}\leq t<\tau_{j+1}\}\Rightarrow\)
\[D_{t}^{C} = D_{t}^{rf}\Rightarrow\]
\[S_{t}^{C} = S_{t}^{rf}=Coll_{t}^{Perf}\Rightarrow\]
\[BCVA_{t}^{C} = 0\forall\:t\in[0,T\wedge\tau]\]
so that the relevant dynamic to model the process in this regime will be just
\[dX_{t}=\mu(t,X_{t})X(t)dt+\sigma(t,X_{t})X(t)dW_{t}^{x};\quad X(0)=x_{0}.\]
Here, the drift and volatility coefficients \(\mu(t,x)\), \(\sigma(t,x)\), \(\gamma(t,x)\), \(\nu(t,x)\), \(\chi(t,x)\) and \(\eta(t,x)\) are all continuous, measurable, real-valued functions adapted to the relevant Brownian filtration. For convenience we ease the notation by writing our system dynamics in vector form:
\(d\mathcal{Y}(t):=\left[\begin{array}[]{c}dt\\ dX_{t}\\ d\lambda^{i}_{t}\\ dZ^{i}\\ \end{array}\right]\), \(\qquad\mathcal{Y}(0)=\left[\begin{array}[]{c}t=0\\ x_{0}\\ \lambda^{i}_{0}\\ Z_{0}^{i}=1\\ \end{array}\right]\)
for \(i\in\{A,B\}\).
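For the regime \(z_{j}=1\), the state dynamics above can be simulated with a simple Euler scheme using Cholesky-correlated Brownian increments; the constant coefficients and correlations in the sketch below are illustrative assumptions only.

```python
import numpy as np

def simulate_state(x0, lamA0, lamB0, T=1.0, n_steps=250, rho_xA=0.3, rho_xB=-0.2, seed=2):
    """Euler scheme for the z=1 regime: geometric-type diffusions for X, lambda^A, lambda^B
    driven by correlated Brownian motions (X-lambda correlations only, constant illustrative
    coefficients)."""
    mu, sigma = 0.03, 0.20        # drift/vol of X
    gamma, nu = 0.10, 0.40        # drift/vol of lambda^A
    chi, eta = 0.05, 0.30         # drift/vol of lambda^B
    corr = np.array([[1.0, rho_xA, rho_xB],
                     [rho_xA, 1.0, 0.0],
                     [rho_xB, 0.0, 1.0]])
    L = np.linalg.cholesky(corr)              # correlate the Brownian increments
    dt = T / n_steps
    X, lA, lB = np.empty(n_steps + 1), np.empty(n_steps + 1), np.empty(n_steps + 1)
    X[0], lA[0], lB[0] = x0, lamA0, lamB0
    rng = np.random.default_rng(seed)
    for k in range(n_steps):
        dW = L @ rng.normal(0.0, np.sqrt(dt), size=3)
        X[k + 1] = X[k] * (1 + mu * dt + sigma * dW[0])
        lA[k + 1] = lA[k] * (1 + gamma * dt + nu * dW[1])
        lB[k + 1] = lB[k] * (1 + chi * dt + eta * dW[2])
    return X, lA, lB

X, lA, lB = simulate_state(x0=100.0, lamA0=0.02, lamB0=0.03)
print(X[-1], lA[-1], lB[-1])
```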
As regards the formulation of the counterparties' cost functionals, we recall that both are assumed coherently⁸ counterparty risk averse, but in this case they can have different preference/cost functions in which they also need to take into account the optimal control strategy of the other party, namely its response function \(b^{-i}(u^{i})\), where \(u_{i}:=\mathcal{C}^{i}\) and \(i\in\{A,B\}\). We discuss this further in the next section on the game formulation. Here, let us be more explicit about the formulation of the counterparties' costs, for which we assume - for convenience - quadratic preferences for both, generalized to take into account the optimal control strategy of the other party over time \(t\in[0;\tau\wedge T]\), that is formally⁹:
[FOOTNOTE:8][ENDFOOTNOTE]
[FOOTNOTE:9][ENDFOOTNOTE]
**a) Running costs**:
\[F^{A}(\mathcal{Y}_{t},b^{B}(u^{A}),t)=\begin{cases}\big{[}(CVA^{A}(s)-DVA^{A}( s))-\delta^{B}(s,u^{*,A})\big{]}^{2}&if\{z_{j}^{i}=1\}\\ \Big{(}\big{(}\int_{u}^{T\wedge\tau^{A}_{j+1}}R^{A}(s)[NPV^{A}(v)]du-NPV^{A}(s )\big{)}-\delta^{B}(s,u^{*,A})\Big{)}^{2}&if\;\{z_{j}^{i}=0\}.\end{cases}\]
\(\forall\;s,\tau_{j}\in[t,T\wedge\tau]\) and for counterparty \(B\)
\[F^{B}(\mathcal{Y}_{t},b^{A}(u^{B}),t)=\begin{cases}\big{[}(CVA^{B}(s)-DVA^{B}( s))-\delta^{A}(s,u^{*,B})\big{]}^{2}&if\{z_{j}^{i}=1\}\\ \Big{(}\big{(}\int_{u}^{T\wedge\tau^{B}_{j+1}}R^{B}(s)[NPV^{B}(v)]du-NPV^{B}(s )\big{)}-\delta^{A}(s,u^{*,B})\Big{)}^{2}&if\;\{z_{j}^{i}=0\}.\end{cases}\]
\(\forall\;s,\tau_{j}\in[t,T\wedge\tau]\) and \(i\in\{A,B\}\). Here, all the terms of the running costs are known except the response function \(\delta^{i}(.)\), which is assumed non-negative, continuous and \(\mathcal{F}\)-adapted, and the funding factor term \(R^{i}(s)\), which we introduce to model the expected collateral and funding costs when \(z_{j}=0\), that is formally
\[R^{i}(t)=\begin{cases}-\exp-(r_{borr}^{i}-r_{rem}^{i})t&if\;z_{j}=0\;and\;NPV< 0\\ \exp-(r_{opp}^{i}-r_{rem}^{i})t&if\;z_{j}=0\;and\;NPV>0.\end{cases}\]
**b) Terminal costs**:
\[G^{A}(\mathcal{Y}_{t},b^{B}(u^{A}),t)=\begin{cases}(-NPV^{A}(T)-\delta^{B}(T,u ^{*,A}))^{2}\Rightarrow&if\;collateral\;is\;active\\ (0-\delta^{B}(T,u^{*,A}))^{2}\Rightarrow&\;no\;collateral\end{cases}\]
\[G^{B}(\mathcal{Y}_{t},b^{A}(u^{B}),t)=\begin{cases}(-NPV^{B}(T)-\delta^{A}(T,u ^{*,B}))^{2}\Rightarrow&if\;collateral\;is\;active\\ (0-\delta^{A}(T,u^{*,B}))^{2}\Rightarrow&no\;collateral.\end{cases}\]
**c) Instantaneous switching costs**:
\[l^{i}\big{(}\tau_{j}^{i},z_{j}^{i}\big{)}=\sum_{j\geq 1}^{M}e^{-r\tau_{j}^{i}} c_{z_{j}^{i}}(t)\mathbbm{1}_{\{\tau_{j}^{i}\wedge\tau_{j}^{-i}<T\}},\;\: \forall\:\:\tau_{j}^{i},z_{j}^{i}\in\{\mathcal{T}^{i},\mathcal{Z}^{i}\},\]
for \(i\in\{A,B\}\), where \(c^{i}(.)\) is the \(\mathcal{F}\)-predictable (deterministic for convenience) instantaneous cost function.
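The cost ingredients above can be prototyped as follows; the sketch collapses the integral of the funded NPV in the \(z_{j}=0\) branch to a one-step approximation, treats the response term \(\delta\) as a known scalar, and uses illustrative rates and fees throughout, so it is only a rough illustration of the structure, not the exact functionals.

```python
import numpy as np

def funding_factor(t, r_borr, r_rem, r_opp, npv):
    """Funding/collateral cost factor R^i(t) used when z_j = 0 (collateralised regime)."""
    if npv < 0:
        return -np.exp(-(r_borr - r_rem) * t)   # posting collateral: borrowing net of remuneration
    return np.exp(-(r_opp - r_rem) * t)         # receiving collateral: opportunity cost net of remuneration

def running_cost(z, bcva, npv, t, delta=0.0, r_borr=0.035, r_rem=0.021, r_opp=0.025):
    """Quadratic running cost in the spirit of section 2.3, with the response term delta a
    known scalar and the integral of funded NPV collapsed to a one-step approximation."""
    if z == 1:                                   # no collateral: pay the (squared) BCVA deviation
        return (bcva - delta) ** 2
    carry = funding_factor(t, r_borr, r_rem, r_opp, npv) * npv - npv
    return (carry - delta) ** 2                  # full collateral: pay collateral/funding carry

def switching_cost(switch_times, c=0.01, r=0.02, T=5.0):
    """Instantaneous switching costs l^i: discounted fixed fee c per switch before T."""
    return sum(np.exp(-r * tau) * c for tau in switch_times if tau < T)

print(running_cost(z=1, bcva=0.04, npv=1.0, t=1.0))
print(running_cost(z=0, bcva=0.04, npv=1.0, t=1.0))
print(switching_cost([1.0, 2.5]))
```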
### Game formulation and pure strategies definition
In this section we give a generalized formulation of our contingent CSA scheme. Allowing for the strategic interaction between the players - which are the counterparties of this theoretical contract - we are led to formulate our problem as a stochastic differential game, whose equilibrium analysis is central both for the existence of a solution and for the optimal design of our contingent scheme.
In order to formulate the game, we first recall that in our model for the contingent CSA scheme we assume no fixed times or other rules for switching, that is, each counterparty can switch optimally at any time until contract maturity \(T\) in order to minimize its objective functional. But the functionals, as set formally in the former section, are now generalized and not symmetrical between the parties: as already mentioned, both players are assumed to remain _risk averse_ to the variance of bilateral CVA, collateral and funding costs, but depending on the different parametrization of the functionals (shown below) and on the instantaneous switching costs, other than the difference in default intensities, the problem is naturally represented by a generalized _non-zero-sum Dynkin game_. It is non-zero-sum because the players' payoff functionals are not symmetrical, and generalized in the sense that the players' controls are not just simple _stopping times_ but sequences of random times that define the optimal times \(\tau_{j}\) to switch from one regime to the other (together with the switching indicator sequences \(z_{j}\)).
Therefore, given that the _right to switch_ is bilateral and we assume no other rules/constraints on the controls set by the contract, the other player's optimal strategy - and hence the strategic interaction with the other party - becomes central in defining one's own optimal switching strategy. Let us be more formal: building on the definitions of section 2.3, we define our model's SDG as a generalized Dynkin game of switching type as follows.
**Definition 2.4.1 (Dynkin game of switching type definition)**._Let us consider two players/counterparties \(\{A,B\}\) that have signed a general contract with a contingent CSA of switching type. Given the respective payoff functionals \(F^{i}(.)\) (or running reward), the terminal rewards \(G^{i}(.)\) and the instantaneous switching cost functions \(l^{i}(.)\) where \(i\in\{A,B\}\), under rationality assumption and non-cooperative strategic interaction, the players aim to minimize the following objective functional:_
\[J^{i}(y,u^{i},u^{-i}) = \inf_{u^{i}\in\{\mathcal{C}^{i}_{ad}\}}\mathbb{E}\Bigg{[}\sum_{j} \int_{t}^{\tau_{j}^{i}\wedge\tau_{j}^{-i}\wedge T}B_{s}\bigg{[}F^{i}(y_{s},u^{ i},b^{-i}(u^{i}))\bigg{]}ds\]
\[+ l^{i}\big{(}\tau_{j}^{i},z_{j}^{i}\big{)}+G^{i}(y_{T},b^{-i}(u^{i}))\bigg{|}\mathcal{F}_{t}\Bigg{]}\;\;for\:i\in\{A,B\}\] (23)
_where we mean for \(i=A\) then \(-i=B\) and viceversa, \(B_{t}\) is defined in (1), the system dynamic is defined in section 2.3 by \(dy_{t}:=d\mathcal{Y}_{t}\), the controls set are defined in (21)-(22) and we have set for notational convenience \(u_{i}:=\{\mathcal{T}^{i},\mathcal{Z}^{i}\}\) for \(i\in\{A,B\}\)._
Let us underline, from definition 2.4.1, that the payoff functions, whose specific formulation was stated in section 2.3, can differ between \(A\) and \(B\) in the following terms
\[\delta^{A}(.) \lesseqgtr \delta^{B}(.)\] (24)
\[R^{A}_{t}(.) \lesseqgtr R^{B}_{t}(.)\] (25)
\[c^{A}_{t}(.) \lesseqgtr c^{B}_{t}(.)\] (26)
where
a) \(R^{i}(.)\) is the funding-collateral cost factor defined in the former section;
b) \(\delta^{i}(.)\) is the running cost function threshold, which is generalized here by taking as arguments the optimal response and control strategies of the other player;
c) \(c^{i}_{t}(.)\) are the already known instantaneous costs from switching.
**Remarks 2.4.2.** The game as formulated above in (23) is fairly general; in addition, one could also introduce the possibility for the players to stop the game, by adding a stopping time (and the related reward/cost function) to the set of controls made up of switching times and indicators. From the financial point of view, this can be justified by an _early termination clause_ set in the contingent CSA defined by the parties. Anyway, given the problem's recursion, this would add further complications that we leave for future research.
Actually this game is already complicated by the fact that, differently from the (non-zero-sum) Dynkin game as formulated in section 1.3, here the players' control strategies also affect each other's payoffs. In fact, given that in our general formulation the players can switch optimally at any time over the life of the underlying contract, it is clear that - without setting any other _rules_ for the game - the decision of one player to switch to a certain regime imposes a different cost function \(F_{Z}(.)^{i}\) on the other player as well. So if \(A\) switches but for \(B\) the decision is not optimal, \(B\) is able to immediately switch back, taking into account the instantaneous switching costs¹⁰. In this sense, the relative difference between the players' payoffs (in the different regimes) and the strategic interaction between them over time become central in order to understand and analyze the problem's solution/equilibrium.
[FOOTNOTE:10][ENDFOOTNOTE]
We return to these points later; here it is important to mention that, in order to highlight this strategic dependence in the game - which is assumed to be played by _rational and non-cooperative_ players - we have enriched the running cost function \(F_{Z}(.)^{i}\) with a response function \(b^{-i}(u^{i})\), which can be understood mainly in two ways:
a) _as the ”classical” best response function to the other player's strategy, which implies the complete-information assumption in the game, that is, the players have the same information set about the system dynamics and are able to calculate (under the real probability measure \(\mathbb{P}\)) each other's payoff;_
b) _if the game information is not complete and there is a degree of uncertainty over the players' payoffs and their switching strategies, the function \(b^{-i}(u^{i})\) can be understood in generalized terms as a probability distribution assigned by a player to the optimal response of the other one._
We discuss the game information flows further below. Now, the main issue to tackle is to understand the conditions under which this generalized game (23) makes sense and will actually be played, which means that the contract will be signed by the counterparties. This leads to the definition of an equilibrium for this game and to the conditions under which its existence and uniqueness are ensured.
Before giving the formal definition of the game equilibrium, let us highlight the game pure strategies at a given time \(\{\tau_{j-1}^{i}<t\leq\tau_{j}^{i}\}\) under the assumption of _simultaneous moves_ by players.
**Definition 2.4.3 (Pure strategies of the game of switching type).**_For any given initial condition \({z_{0}^{A},z_{0}^{B}}\) and \(\forall\:z_{j}^{A}\in u^{A}\) and \(z_{j}^{B}\in u^{B}\) and \(\{\tau_{j-1}^{i}<t\leq\tau_{j}^{i}\}\), the pure strategies of our Dynkin game of switching type are defined as follows \(if\{z_{j-1}=0\}\:\Longrightarrow\)_
\[\{z_{j}^{A} = 0,\;z_{j}^{B}=0\}\:\Longrightarrow\:"no\:switch"\]
\[\{z_{j}^{A} = 0,\;z_{j}^{B}=1\}\:\Longrightarrow\:"switch\:to\:1"\]
\[\{z_{j}^{A} = 1,\;z_{j}^{B}=0\}\:\Longrightarrow\:"switch\:to\:1"\]
\[\{z_{j}^{A} = 1,\;z_{j}^{B}=1\}\:\Longrightarrow\:"switch\:to\:1".\]
_while if \(\{z_{j-1}=1\}\:\Longrightarrow\)_
\[\{z_{j}^{A} = 0,\;z_{j}^{B}=0\}\:\Longrightarrow\:"switch\:to\:0"\]
\[\{z_{j}^{A} = 0,\;z_{j}^{B}=1\}\:\Longrightarrow\:"switch\:to\:0"\]
\[\{z_{j}^{A} = 1,\;z_{j}^{B}=0\}\:\Longrightarrow\:"switch\:to\:0"\]
\[\{z_{j}^{A} = 1,\;z_{j}^{B}=1\}\:\Longrightarrow\:"no\:switch".\]
In the table below we represent the standard game form at a given decision time, with the possible (pure) strategies (namely the switching indicators) and the related random payoffs in parentheses.
\begin{tabular}{|c|c|c|}
\hline
\(A,B\) & \(Switch\) & \(No\:Switch\) \\
\hline
\(Switch\) & \(1,1\:(J^{A},J^{B})\) & \(1,0\:(J^{A},J^{B})\) \\
\hline
\(No\:Switch\) & \(0,1\:(J^{A},J^{B})\) & \(0,0\:(J^{A},J^{B})\) \\ \hline
\end{tabular}
where we note that the players’ strategies can be cast in these two categories:
1. on the main diagonal of the table we have _accomodation/peace type switching strategies_ played over time;
2. on the opposite diagonal of the table we have _fighting/war type switching strategies_ played over time.
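At a single decision time, and for given (hypothetical) cost numbers, the table above is an ordinary 2x2 game; the sketch below enumerates its pure-strategy equilibria by checking the unilateral-deviation conditions that will be formalized in definition 2.5.1.

```python
import itertools

def pure_nash_equilibria(J_A, J_B):
    """Pure-strategy Nash equilibria of a one-shot 2x2 switching game: each player picks
    an indicator in {0, 1}; costs J_A[a][b], J_B[a][b] are minimised."""
    equilibria = []
    for a, b in itertools.product((0, 1), repeat=2):
        best_a = all(J_A[a][b] <= J_A[a2][b] for a2 in (0, 1))   # A cannot improve unilaterally
        best_b = all(J_B[a][b] <= J_B[a][b2] for b2 in (0, 1))   # B cannot improve unilaterally
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

# Illustrative (hypothetical) cost tables: rows index A's indicator, columns index B's.
J_A = [[0.9, 1.2], [0.7, 1.0]]
J_B = [[0.8, 0.6], [1.1, 0.9]]
print(pure_nash_equilibria(J_A, J_B))   # -> [(1, 1)] for these numbers
```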
### Game equilibrium and stochastic representation through system of RBSDE
From a static point of view the NEP for the game of definition 2.4.1 can easily be found once the payoffs \(J^{i}\) are known. But the problem is that game configurations like these have to be played over time, taking into account as key factors:
1. _the payoff value that derives from switching at a given time;_
2. _the expected value from waiting until the next switching time;_
3. _the optimal responses, namely the other party's optimal switching strategy. This implies - by the symmetry of information flows - that each player knows how to calculate (under the real probability measure \(\mathbb{P}\)) points 1) and 2) relative to the other party._
The resulting equilibrium is an optimal sequence of switches over time for both players, which requires a (backward) dynamically recursive valuation. On a heuristic basis, we expect that if the relative difference (over time) in the players' payoff functionals - mainly due to different function parametrizations, default intensities \(\lambda^{i}\) or switching costs \(c_{Z}^{i}\) - remains low, it is more likely that the switching strategies on the main diagonal of the game \(\{1,1;0,0\}\) will be played (given that both players would also have similar best responses). This should ease the search for the equilibrium of the game, but it also makes it more likely to incur in _banal solutions_. Otherwise, one should observe a more complicated strategic behavior that needs a careful study, depending also on the type of equilibrium that we now try to define.
Indeed, given the characteristics of our game - a non-zero-sum game in which the agents are assumed rational and act in a non-cooperative way in order to minimize their objective functions, knowing that the other party will do the same - the equilibrium/solution for this type of game is the celebrated _Nash equilibrium point_ (NEP). Actually - as already shown - in our case the equilibrium is characterized by a sequence of optimal switches over time and, game (23) being a generalization of a _Dynkin game_, by similarity we can state the following definition of a _Nash equilibrium point_ for a _Dynkin game of switching type_.
**Definition 2.5.1 (NEP for Dynkin game of switching type).**_Let us define the switching control sets for the player \(\{A,B\}\) of the generalized Dynkin game (23) as follows_
\[u_{A}:=\big{\{}\tau_{j}^{A},z_{j}^{A}\big{\}}_{j=1}^{M},\;\;\forall\:\tau^{A}_ {j}\in[0,T],\:z^{A}_{j}\in\{0,1\};\]
\[u_{B}:=\big{\{}\tau_{j}^{B},z_{j}^{B}\big{\}}_{j=1}^{M},\;\;\forall\:\tau^{B}_ {j}\in[0,T],\:z^{B}_{j}\in\{0,1\}.\]
_A Nash equilibrium point for this game is given by the pair of sequences of switching times and indicators \(\{u^{*}_{A},u^{*}_{B}\}\) such that for any control sequences \(\{u_{A},u_{B}\}\) the following conditions are satisfied_
\[J^{A}(y;u^{*}_{A},u^{*}_{B})\leq J^{A}(y;u_{A},u^{*}_{B})\] (27)
_and_
\[J^{B}(y;u^{*}_{A},u^{*}_{B})\leq J^{B}(y;u_{A}^{*},u_{B})\] (28)
_(the signs will be reversed in the maximization case)._
A formal and rigorous proof of the existence (and uniqueness) of a NEP for our game that is neither trivial nor _banal_ - in the sense that it is never optimal for both parties to switch, or that the switching control set reduces to a single switching/stopping time - is the big issue here.
In order to approach its solution, as we know from the introduction, one has in general two ways: analytic or probabilistic, which are deeply interconnected (working in a Markovian framework). In particular, from the theory of BSDEs with reflection¹¹, we can state the next definition for the stochastic representation of our Dynkin game of switching type as a _system of interconnected (non-linear) reflected BSDE_. Let us denote first with
[FOOTNOTE:11][ENDFOOTNOTE]
\[\mathcal{M}^{p}=\{E[\sup_{t\leq T}|\nu_{t}|^{p}]<\infty\}\]
the set of progressively measurable processes \(\nu_{t}\) whose supremum is \(p\)-integrable, and with
\[\mathcal{K}^{p}=\{E[\int_{0}^{T}|\nu_{s}|^{p}ds]<\infty\}\]
the set of progressively measurable processes that are \(p\)-integrable in time. Hence we can state the following.
**Definition 2.5.2 (RBSDE representation for game of switching type).**_Let us define the vector triple \((Y^{i,Z},N^{i,Z},K^{i,Z})\) for \(i\in\{A,B\}\) and \(Z\in\{z,\zeta\}\) with \(Y^{i}\) and \(N^{i}\) assumed progressively measurable and adapted to the market filtration \((\mathcal{F}_{t}^{W})\) and \(K\) continuous and increasing. Then, given the standard Brownian motion vector \(W_{t}\), the terminal reward \(\xi^{i}\), the obstacles \(Y^{i,Z}_{t}-c^{i,Z}_{t}\) and the generator functions \(F^{i}_{Z}(.,Y^{-i})\) assumed progressively measurable, uniformly Lipschitz and interconnected between the players, the Dynkin game of switching type formulated in (23) has the following representation through system of interconnected non-linear reflected BSDE_
\[\begin{cases}Y^{A,z},Y^{A,\zeta}\in\mathcal{K}^{2};N^{A,z},N^{A,\zeta}\in \mathcal{M}^{2};\:\:K^{A,z},K^{A,\zeta}\in\mathcal{K}^{2},\:K\:non\:decreasing \:and\:K_{0}=0,\\ Y_{t}^{A,Z}=\xi^{A}+\int_{s}^{T}F^{A}_{Z}(y_{s},u^{A},N^{A}_{s};Y^{B}_{s})ds- \int_{s}^{T}N_{s}^{A,Z}dW_{s}+K^{A,Z}_{T}-K^{A,Z}_{s},\;\:t\leq s\leq T\wedge \tau,\;Z\in\{z,\zeta\}\\ Y_{t}^{A,z}\geq(Y_{t}^{A,\zeta}-c^{A,z}_{t});\;\:\int_{0}^{T}[Y_{t}^{A,z}-(Y_{ t}^{A,\zeta}-c^{A,z}_{t})]dK^{A,z}_{t}=0;\\ Y_{t}^{A,\zeta}\geq(Y_{t}^{A,z}-c^{A,\zeta}_{t});\;\;\int_{0}^{T}[Y_{t}^{A, \zeta}-(Y_{t}^{A,z}-c^{A,\zeta}_{t})]dK^{A,\zeta}_{t}=0;\\ \\ Y^{B,z},Y^{B,\zeta}\in\mathcal{K}^{2};N^{B,z},N^{B,\zeta}\in\mathcal{M}^{2};\: \:K^{B,z},K^{B,\zeta}\in\mathcal{K}^{2},\:K\:non\:decreasing\:and\:K_{0}=0,\\ Y_{t}^{B,Z}=\xi^{B}+\int_{s}^{T}F^{B}_{Z}(y_{s},u^{B},N^{B}_{s};Y^{A}_{s})ds- \int_{s}^{T}N_{s}^{B,Z}dW_{s}+K^{B,Z}_{T}-K^{B,Z}_{s},\;\:t\leq s\leq T\wedge \tau\;,Z\in\{z,\zeta\}\\ Y_{t}^{B,z}\geq(Y_{t}^{B,\zeta}-c^{B,z}_{t});\;\:\int_{0}^{T}[Y_{t}^{B,z}-(Y_{ t}^{B,\zeta}-c^{B,z}_{t})]dK^{B,z}_{t}=0;\\ Y_{t}^{B,\zeta}\geq(Y_{t}^{B,z}-c^{B,\zeta}_{t});\;\;\int_{0}^{T}[Y_{t}^{B, \zeta}-(Y_{t}^{B,z}-c^{B,\zeta}_{t})]dK^{B,\zeta}_{t}=0\\ \\ \end{cases}\]
From definition 2.5.2, it is evident that the system of RBSDE is non-standard, given the characteristics of the generator functions (which are the cost functions in our game): they are inter-dependent, as highlighted by the presence of the other player's value process \(Y^{i}_{t}\) inside \(F^{i}_{Z}(.)\) for \(i\in\{A,B\}\). This makes it hard to show the existence and uniqueness of the solution of the system, for the reasons that we highlight below. In particular, the solution of this system of RBSDE is made up of a two-dimensional vector of triples \((Y^{*,Z},N^{*,Z},K^{*,Z})\), where the dimension is given by the two switching regimes, while the optimal switching sequence is determined by the value process' crossings of the barriers, represented by the last two lines of the RBSDE system.
Therefore, the other central issue is to show that the system's vector solution \(Y^{*,Z}\) coincides with the players' value functions of the non-zero-sum game of switching type (23).
As far as we know, these issues have been tackled - in relation to switching problems - in the already mentioned work of Hamadene and Zhang (2010). They study general systems of m-dimensional BSDEs with so-called _oblique reflection_, which are RBSDEs with both generator and barrier interconnected, as in our case, showing existence and uniqueness of the solution; the optimal strategy in general does not exist, but an _approximating optimal strategy_ is constructed (through some technical estimates).
Let us briefly recall the main technical assumptions imposed in order to derive these results:
* a) _square integrability of both the generator function \(F^{i}(.)\) and the terminal reward \(\xi\), while the obstacle functions are continuous and bounded;_
* b) _(uniform) Lipschitz continuity of the generator function with respect to its arguments;_
* c) _both the generator and the obstacles are assumed to be increasing functions of the other player's utility/value process._
As also mentioned in that paper, condition c) implies, from a game point of view, that the players are _partners_, namely the impact of the other player's value process has a single, positive sign. This is not the case for our non-zero-sum game, in which the interaction allowed between the two players is antagonistic and more complicated.
Hence, as far as we know, the existence and uniqueness of the optimal switching strategy for our game as formulated in definition 2.5.2 is an open problem, whose solution needs further study. Probably a solution to the problem exists but it won't be unique; indeed, the classification of the solutions' behavior and of the conditions for their existence and uniqueness is an interesting and hard program to tackle, both analytically and numerically. Therefore, even if one simplified the problem in order to work under the same assumptions a)-c), which would ensure the existence and uniqueness of the solution, it would remain to verify that the solution of the system of RBSDE is the _Nash equilibrium point_ of the game (23), which is complicated by the fact that the optimal control strategy may not exist¹². Formally, one should prove the following theorem, which is also an open problem.
[FOOTNOTE:12][ENDFOOTNOTE]
**Theorem 2.5.3 (NEP and RBSDE system solution).**_Let us assume the existence and uniqueness of the solution for the system of definition 2.5.2, under the assumption a)- c). Then the system RBSDE value processes \(Y^{*}_{A}\), \(Y^{*}_{B}\) coincide with the player value functions of the game of switching type, that is_
\[Y^{*}_{A} = J^{A}(y,u^{*}_{A},u^{*}_{B})\]
\[Y^{*}_{B} = J^{B}(y,u^{*}_{B},u^{*}_{A})\]
_and are such that condition (27) and (28) are satisfied, which implies the existence and uniqueness of a Nash equilibrium point for the game (23)._
**Remarks 2.5.4.** As we already know, in the Markovian framework - thanks to the results of El Karoui et al. (1997) - the solution of the system of RBSDE is connected with the viscosity solution of a generalized system of non-linear PDEs with generators and obstacles that differ and are interconnected between the two players, which is even harder to study analytically. The main alternative is to approach the problem numerically, searching for the conditions under which one can find the equilibrium. A possibility is to apply the same technique - the _Snell envelope_ and _iterative optimal stopping_ technique of the work of Carmona and Ludkovski (2010)¹³ - adapted to study our stochastic game's solution. In particular, the algorithm needs to be generalized in order to introduce the players' strategic interaction and to compute the _Nash equilibrium point_ of the game.
Let us give here just a sketch of the numerical solution, founded on the _iterative optimal stopping approach_, which is well suited to study the solution of our highly nonlinear and recursive problem. Let us pick, for expositional convenience, two calculation times \(t_{1}\) and \(t_{2}\) and a final regime-switching condition (the program runs backward in time while the information grows forward). Thanks to the _dynamic programming principle_, both players need to evaluate at these discretized times
[FOOTNOTE:13][ENDFOOTNOTE]
1. the optimality of an immediate switch at \(t_{1}\) to the other regime (\(Z\in\{z,\zeta\}\)), taking into account the _best response function_ of the other player (over each switching time);
2. the optimality of _continuing_, namely waiting until the next switching time \(t_{2}\), considering also in this case the _best response_ of the other party.
Formally, this means to run the following program:
\[V^{l,A}(t_{1},\mathcal{Y}_{t_{1}},u^{A},b^{B}(u^{A})) = \min\bigg{(}F^{A,Z}(t_{1},\mathcal{Y}_{t_{1}},u^{A}_{t_{1}},b^{B}(u^{A}_{t_{1}}))\Delta t+\mathbb{E}\big{[}V^{l,A}(t_{2},\mathcal{Y}_{t_{2}},u^{A}_{t_{2}},b^{B}(u^{A}_{t_{2}}))|\mathcal{F}_{t_{1}}\big{]},\;SW^{A,Z}(t_{1},\mathcal{Y}_{t_{1}},u^{A}_{t_{1}},b^{B}(u^{A}_{t_{1}}))\bigg{)}\] (29)
\[\simeq \min\bigg{(}F^{A,Z}(t_{1},\mathcal{Y}_{t_{1}},u^{A}_{t_{1}},b^{B}(u^{A}_{t_{1}}))\Delta t+\mathbb{E}\big{[}V^{l,A}(t_{2},\mathcal{Y}_{t_{2}},u^{A}_{t_{2}},b^{B}(u^{A}_{t_{2}}))|\mathcal{F}_{t_{1}}\big{]},\;\{V^{l-1,A}(t_{1},\mathcal{Y}_{t_{1}},u^{A}_{t_{1}},b^{B}(u^{A}_{t_{1}}))-c_{t_{1}}^{Z}\}\bigg{)}\]
\[V^{l,B}(t_{1},\mathcal{Y}_{t_{1}},u^{B},b^{A}(u^{B})) = \min\bigg{(}F^{B,Z}(t_{1},\mathcal{Y}_{t_{1}},u^{B}_{t_{1}},b^{A}(u^{B}_{t_{1}}))\Delta t+\mathbb{E}\big{[}V^{l,B}(t_{2},\mathcal{Y}_{t_{2}},u^{B}_{t_{2}},b^{A}(u^{B}_{t_{2}}))|\mathcal{F}_{t_{1}}\big{]},\;SW^{B,Z}(t_{1},\mathcal{Y}_{t_{1}},u^{B}_{t_{1}},b^{A}(u^{B}_{t_{1}}))\bigg{)}\] (30)
\[\simeq \min\bigg{(}F^{B,Z}(t_{1},\mathcal{Y}_{t_{1}},u^{B}_{t_{1}},b^{A}(u^{B}_{t_{1}}))\Delta t+\mathbb{E}\big{[}V^{l,B}(t_{2},\mathcal{Y}_{t_{2}},u^{B}_{t_{2}},b^{A}(u^{B}_{t_{2}}))|\mathcal{F}_{t_{1}}\big{]},\;\{V^{l-1,B}(t_{1},\mathcal{Y}_{t_{1}},u^{B}_{t_{1}},b^{A}(u^{B}_{t_{1}}))-c_{t_{1}}^{Z}\}\bigg{)}\]
where \(SW^{i,Z}(.)\) is the so-called _intervention/switching operator_ that quantifies the value of switching regime and represents the obstacle in our RBSDE formulation, while \(l\in\{1,\dots,M\}\) denotes the number of switching times left.
By running the program (29)-(30) backward over time, one needs to keep track - over each switching time - of both players' switching strategies - optimal or not - and the relative payoffs, in order to calculate at \(t=0\) the players' value functions \(J^{i}(.)\) and their game strategies (using definition 2.4.3). Then, by checking conditions (27)-(28), the existence (and uniqueness) of the _Nash equilibrium point_ for the game (23) can be established. The existence of the equilibrium definitely needs a careful numerical analysis and algorithm implementation, which we leave for a future paper.
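As a concrete, heavily simplified sketch of the backward program (29)-(30), the code below implements a regression-based (Longstaff-Schwartz flavour) two-regime switching recursion for a single agent, i.e. with the strategic response term dropped as in the symmetric case discussed later; the state dynamics, regime cost functions, switching fee and absence of terminal cost are illustrative assumptions of the sketch.

```python
import numpy as np

def switching_value(F, c, r=0.02, T=1.0, n_steps=50, n_paths=20_000, x0=100.0,
                    mu=0.03, sigma=0.2, deg=4, seed=3):
    """Regression-based backward recursion for a two-regime switching problem: minimise
    expected discounted running costs F[z](x) plus switching fees c, for a single agent
    on a simulated GBM state (no terminal cost in this sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # simulate the state paths once
    X = np.empty((n_steps + 1, n_paths))
    X[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X[k + 1] = X[k] * np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dW)
    disc = np.exp(-r * dt)
    V = {z: np.zeros(n_paths) for z in (0, 1)}          # terminal values: zero
    for k in range(n_steps - 1, -1, -1):
        x = X[k]
        sd = x.std()
        cond = {}
        for z in (0, 1):
            if sd < 1e-12:
                # degenerate first step: the state is deterministic, so the conditional
                # expectation is the (discounted) average of next-step values
                cond[z] = np.full_like(x, disc * V[z].mean())
            else:
                xn = (x - x.mean()) / sd                # normalised regressor
                coeffs = np.polyfit(xn, disc * V[z], deg)
                cond[z] = np.polyval(coeffs, xn)        # E[V_{k+1} | X_k] by regression
        new_V = {}
        for z in (0, 1):
            stay = F[z](x) * dt + cond[z]               # continue in regime z
            switch = F[1 - z](x) * dt + cond[1 - z] + c # pay c and move to the other regime
            new_V[z] = np.minimum(stay, switch)         # the min plays the role of SW
        V = new_V
    return {z: V[z].mean() for z in (0, 1)}

# Illustrative regime cost functions: z=1 'no collateral' pays a CVA-like cost,
# z=0 'full collateral' pays a flat funding-like cost (pure assumptions of the sketch).
F = {1: lambda x: 0.0004 * np.maximum(x - 100.0, 0.0) ** 2 / 100.0,
     0: lambda x: 0.02 * np.ones_like(x)}
print(switching_value(F, c=0.005))
```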
### Game solution in a special case and further analysis
In this section we make some further reasoning on the game characteristics in order to possibly simplify our general formulation (23) and to search for a solution. In particular, we focus the analysis on the following three main points - already mentioned in the past section - that have impact on the equilibrium characterization and existence:
a) _information set between the players/counterparties_;
b) _rules of the game_;
c) _differences in the objective functionals of the players/counterparties_.
a) Firstly, a careful analysis of the game's information set is fundamental to characterize and understand the game itself and its equilibrium. In our game formulation (23) we have assumed symmetry in the information available to the players, which helps simplify the analysis, but in general one needs to specify what information is available to them at every stage of the game. Given that we have been working under the market filtration \((\mathcal{F}_{t})_{t\geq 0}\), under symmetry we get that both players know, \(\forall\:t\in[0,T]\), the values of the market variables and processes that enter the valuation problem, namely
\[\mathcal{F}_{t}^{A}=\mathcal{F}_{t}^{B}=\mathcal{F}_{t}\;\;\forall\:t\in[0,T]\]
So, both players are able to calculate the outcomes/payoffs of the game through time. This implies that the players know each other's cost functions, so that the game is said to be _information complete_¹⁴ and it is easier to solve for a NEP knowing the _best response functions_.
It is important to underline that the game is played simultaneously at the decision times, but it is dynamic and recursive because the optimal strategy played today depends not only on the initial condition (which is usually _common knowledge_) but also on the future decisions taken by both players. Clearly, this complicates the characteristics of the game, imposing a backward induction procedure in the search for an equilibrium point.
Of course, the assumption of knowing the counterparty's cost function is quite strong for our problem, in which the parties of the underlying contract may operate in completely different markets or industries, but it is not uncommon to observe _cooperative behavior_ between them. In particular, in _cooperative games_ the players aim to maximize or minimize the sum of their payoffs over time, namely
[FOOTNOTE:14][ENDFOOTNOTE]
\[J^{coop}(y,u^{*}) := \inf_{u^{*}\in\{\mathcal{C}_{ad}^{A}\cup\mathcal{C}_{ad}^{B}\}}\bigg{[}J^{A}(y,u^{A})+J^{B}(y,u^{B})\bigg{]}\]
\[:= \inf_{u^{*}\in\{\mathcal{C}_{ad}^{A}\cup\mathcal{C}_{ad}^{B}\}}\mathbb{E}\Bigg{[}\sum_{j}\int_{t}^{\tau_{j}^{i}\wedge\tau_{j}^{-i}\wedge T}B_{s}\Big{[}F^{A}(y_{s},u^{A},b^{B}(u^{A}))+F^{B}(y_{s},u^{B},b^{A}(u^{B}))\Big{]}ds\]
\[+ \big{(}l^{A}(\tau_{j}^{A},z_{j}^{A})+l^{B}(\tau_{j}^{B},z_{j}^{B})\big{)}+\big{(}G^{A}(y(T),b^{B}(u^{A}))+G^{B}(y(T),b^{A}(u^{B}))\big{)}\bigg{|}\mathcal{F}_{t}\Bigg{]}\;\;for\:j=1,\dots,M.\]
This type of equilibrium, depending on the type of game considered, is much more difficult to study in the stochastic framework, given the necessity of finding the conditions under which players cooperate over time and have no incentive ”to cheat” by playing a different (non-cooperative) strategy. This is an interesting further generalization of our game model that would be worth examining in greater depth.
b) The rules of the game are also important in order to simplify the search for the equilibrium. In our model both counterparties are able to switch optimally at any time over the contract life. Discretizing the time domain, we have been led to think of the game as played simultaneously through time over a switching time set that can be predefined in the contract or model specific. In terms of game theory, this means that at a given decision node of the game the players make their optimal choice based on the information available (which is _common knowledge_) at that node, and at the subsequent node they observe the outcome of the last interaction and update their strategy.
Another possibility that may help simplify things is to assume that - by contract specification - the counterparties can switch only at predefined times and that the two sets have empty intersection, namely
\[\big{\{}\tau^{A}_{j}\cap\tau^{B}_{j}\big{\}}_{j=1}^{M}=\varnothing.\]
This happens, for example, if the right to switch is set as ”sequential”. So, again in terms of game theory, the strategic interaction and the game become _sequential_: under the assumption of _incomplete information_, these types of games are generally solved via a backward induction procedure, and one can search for a weaker type of Nash equilibrium. Clearly, we recall that in our case this type of equilibrium needs to be studied in a stochastic framework, which remains a cumbersome and tough task both analytically and numerically. A sequential strategic interaction like this can also be obtained by setting time rules such as the so-called _grace periods_ within the contract CSA, namely a time delta \(\Delta t\) that the other party has to wait - after a switching time - before making its optimal switching decision.
c) The last point really relevant in our game analysis concerns the relative differences between counterparties objective functionals. In particular, recalling our model specifications, the main variables that have impact in this sense are:
* differences in the default intensities processes \(\lambda^{A}_{t}\), \(\lambda^{B}_{t}\);
* differences in the cost function thresholds \(\delta^{A}(.)\), \(\delta^{B}(.)\);
* differences in funding/opportunity costs \(R^{A}_{t}(.)\), \(R^{B}_{t}(.)\);
* differences in the (instantaneous) switching costs \(c^{z,A}_{t}(.)\), \(c^{z,B}_{t}(.)\).
To be more clear, let us focus on some specific case.
**1) Symmetric case**. Let us simplify things by considering the special case of our game (23) in which symmetry between the parties of the contract is assumed. In this special case we are able to show the existence of the solution of the game, and we also highlight the impact on the equilibrium of even a simple constant threshold \(\delta\) in the running cost functions. Under _symmetry_, it is easy to show that the game solution coincides with that of a stochastic control problem of switching type. In fact, solving the control problem from just one player's perspective is equivalent to a game played by symmetric players with objective functionals having the same parameters. In economic terms, the reason for considering a game between two _symmetric_ players can be justified if one thinks of two institutions with similar _business characteristics_ as well as credit worthiness, operating in the same country/region/market with the objective of optimally managing counterparty risk and the collateral and funding costs by signing a contingent CSA in which all the relevant parameters necessary to know each other's objective functional are defined.
Let us be more formal and consider a game played under these special symmetric conditions. It is not difficult to see that the game payoffs will be the same for both players: in fact, with \(\delta^{A}(.)=\delta^{B}(.)=0\), the squared BCVA and collateral cost functions are the same, and the instantaneous costs are also assumed equal. This implies that the best response functions will also be equal for both players, namely they play the same switching strategy, whether the game is played simultaneously or sequentially. So, on the basis of this chain of thought, and imposing the following technical conditions¹⁵
[FOOTNOTE:15][ENDFOOTNOTE]
* (Hp1) the stochastic factors that drive the dynamics of the system, \((X_{t})_{0\leq t\leq T}\) and \((\lambda_{t})_{0\leq t\leq T}\) (which we indicate with the vector \(\mathcal{Y}_{t}\) for brevity), are \(\mathbb{R}\)-valued processes adapted to the market filtration \(\mathbb{F}_{t}^{x,\lambda}=\sigma\{W_{s}^{x,\lambda},s\leq t\}_{t\in[0,T]}\), assumed right-continuous and complete;
* (Hp2) the cost functions \(F_{Z}^{i}(.)\in\mathcal{M}^{p}\) and \(G_{Z}^{i}(.)\in\mathcal{M}^{p}\), while the switching costs \(l^{i}_{Z}(.)\in\mathcal{K}^{p}\), being deterministic and continuous;
* (Hp3) the running cost functions need to satisfy the _linear growth condition_, and for the switching costs \(c_{Z}^{i}\) a technical condition of the form \(\min\{c^{i}_{z},c^{i}_{\zeta}\}\geq C\), for \(i\in\{A,B\}\), \(t\leq T\wedge\tau\), switching indicators \(\{z,\zeta\}\in Z\) and a real constant \(C>0\), is imposed in order to reduce the convenience of switching too many times;
the following result holds.
**Proposition 2.6.1 (NEP existence and uniqueness in the symmetric case).** _Assume symmetric conditions for our Dynkin game of switching type (23), taking relations (24)-(26) with equality and setting \(\delta^{A}(.)=\delta^{B}(.)=0\). Then, under the technical conditions (Hp1)-(Hp3), a Nash equilibrium for this game exists, is unique, and coincides with the value function of the following stochastic control problem¹⁶, that is_
[FOOTNOTE:16][ENDFOOTNOTE]
\[J(y,u) = \inf_{u\in\{\mathcal{C}_{ad}\}}\mathbb{E}\Bigg{[}\sum_{j}\int_{t} ^{\tau_{j}\wedge T}B_{s}\big{[}F_{Z}(y_{s},u)\big{]}ds+\sum_{j\geq 1}c_{z_{j}} (t)\mathbbm{1}_{\{\tau_{j}<T\}}+G(y_{T})\bigg{|}\mathcal{F}_{t}\Bigg{]}\] (31)
_where the model dynamics \(d\mathcal{Y}_{t}\) have been defined in section 2.3 and the following relation for the value functions holds_
\[J^{A}(y,u^{*}_{A};u^{*}_{B})=J^{B}(y,u^{*}_{A};u^{*}_{B})=V^{*}(y,u^{*}).\\\] (32)
_which indicates the irrelevance of the strategic interaction under symmetry._
_Proof._ The proof is easy given the above reasonings. In fact, under the _symmetry conditions_ and recalling the notation from the general game formulation (23) we have that the following relations hold:
\[F^{A}_{Z}(y_{s},u^{A},b^{B}(u^{A})) = F^{B}_{Z}(y_{s},u^{B},b^{A}(u^{B}))\]
\[l^{A}_{z_{j}^{A}}(t)\mathbbm{1}_{\{\tau_{j}^{A}\wedge\tau_{j}^{B }<T\}} = l^{B}_{z_{j}^{B}}(t)\mathbbm{1}_{\{\tau_{j}^{B}\wedge\tau_{j}^{A }<T\}}\]
\[G^{A}(y(T),b^{B}(u^{A})) = G^{B}(y(T),b^{A}(u^{B}))\]
and, the control sequence being optimal for both players, we can set \(\tau_{j}:=\tau^{A}_{j}=\tau^{B}_{j}\), which implies \(\mathbbm{1}_{\{\tau_{j}^{A}\wedge\tau_{j}^{B}<T\}}=\mathbbm{1}_{\{\tau_{j}<T\}}\), namely \(u^{*}_{A}=u^{*}_{B}\), which also implies the equality of the best response functions \(b^{A}(u^{B})=b^{B}(u^{A})=0\), given the assumptions on the thresholds \(\delta^{i}(.)\). This also means that the strategic interaction becomes irrelevant and the game solution reduces to that of an optimal switching control problem equivalent for both players, so that game (23) is reduced to problem (31), namely
\[J(y,u) := J^{A}(y,u^{A},u^{B})=J^{B}(y,u^{B},u^{A})\Longrightarrow\]
\[J(y,u) = \inf_{u\in\{\mathcal{C}_{ad}\}}\mathbb{E}\Bigg{[}\sum_{j}\int_{t} ^{\tau_{j}\wedge T}B_{s}\big{[}F_{Z}(y_{s},u)\big{]}ds+\sum_{j\geq 1}c_{z_{j}} (t)\mathbbm{1}_{\{\tau_{j}<T\}}+G(y_{T})\bigg{|}\mathcal{F}_{t}\Bigg{]}.\]
From the proof of existence and uniqueness of the value function \(V^{*}(y,u^{*})\) for this problem (for which we refer to Djehiche et al. (2008)), one derives the optimal sequence of switching times and indicators \(u^{*}=\{\mathcal{T}^{*},\mathcal{Z}^{*}\}\) that satisfies conditions (27) and (28) of NEP definition 2.5.1, the control strategy being optimal for both players (by symmetry). Indeed, given that the two problem representations are actually the same, the NEP exists and is unique - from the existence and uniqueness of the value \(V^{*}(y;u^{*})\) - and equation (32) holds, as we wanted to show. \(\square\)
2) **Case \(\delta^{A}=\delta^{B}\neq 0\)**. In general, with different function parametrizations between players and incomplete or asymmetric information, the equilibrium is much harder to find and different strategies have to be checked. To give an idea of this, let us consider just a slight modification of the symmetric case conditions, setting for example the cost function threshold \(\delta^{A}=\delta^{B}>0\), and keeping the information incomplete and the game play simultaneous. By the symmetry of BCVA and of the (running) _collateral/funding costs_, we know that a positive value of one term for \(A\) is negative for \(B\) and vice versa. So introducing the threshold creates different payoffs for the players, as we can easily see below
\[(BCVA^{A}-\delta)^{2}\gtrless(BCVA^{B}-\delta)^{2}\]
given that if \(BCVA^{A}_{t}>0\) then \(BCVA^{B}_{t}<0\) and vice versa¹⁷. So, even if they knew each other's objective functional, there would be some paths and periods in which the strategic behavior of the players is in conflict, say of _war type_, and others of _peace type_ (as per the pure strategy definition 2.4.3), making the analysis more complicated.
[FOOTNOTE:17][ENDFOOTNOTE]
3) **Game banal solution case.** Worth mentioning is the possibility that the game is not played, namely that it turns out never to be optimal to switch for either player. It is relevant to study the conditions under which this kind of solution behavior comes up, given that the scheme would then lose its _economic sense_. This _singular game solution_ can come up if we formulate our simultaneous game as a _zero-sum game_. This can happen by considering - for example - linear objective functionals with threshold \(\delta\approx 0\); in fact - by symmetry of the BCVA and of the funding cost function - a positive outcome for one player is negative for the counterpart¹⁸. Assuming instantaneous switching costs \(c^{z}>0\) for both players and - to simplify - that both know each other's cost functions, we get that this game will never be played. The reason is that, by the zero-sum structure of the game, the optimal strategy for one player is not optimal for the other, so every switch would be followed by the opposite switch at the next switching time, as in the sequence
[FOOTNOTE:18][ENDFOOTNOTE]
\[\big{\{}z_{1}=1,z_{2}=0,z_{3}=1,\dots,z_{M}=1\big{\}}\]
But by rationality, and taking into account the positive cost of switching, one can conclude that the game will never be played by a rational agent.
So, let us summarize this last logic chain of thoughts in the following proposition.
**Proposition 2.6.2 (Game banal solution in the zero-sum case)**. _Let us assume that game (23) be a zero-sum game with linear functionals set for both the counterparties. Assuming in addition the same funding costs for both players, \(\delta\approx 0\) and positive switching costs \(c^{z}>0\) (for both), then the optimal strategy is to never play this game (that is a banal solution of the game)._
We end the section by remarking on the importance of the points highlighted in the construction of some kind of equilibrium for a Dynkin game of switching type such as ours. Although mainly theoretical, the existence of the equilibrium and the derivation of the conditions under which a non-banal solution exists are relevant economically and in the contract design phase.
Hence, the main tasks to pursue in future research are a rigorous proof of the existence of the equilibrium for this type of game, and the definition of an efficient algorithm to check the model solutions.
## 3 Applications and further researches
Switching-type mechanisms like the one we have analyzed can find different applications in the wild world of finance. The basic underlying idea is to ensure _flexibility_ in agents' investment decisions over time, which is a usual objective in _real option theory_. Our problem has been framed mainly from a risk management point of view, but with the development of new techniques and algorithms the related pricing problem will also be tackled efficiently, and more financial contracts would find this type of contingent mechanism useful and convenient from an optimal risk management perspective. As regards a further possible application in risk management, it would be important to deepen the analysis of a switching-type collateralization from a _portfolio perspective_, taking for example the view of a _central clearing_ counterparty. In particular, it would be relevant to show, possibly analytically but mainly with numerical examples, the greater convenience of the switching/contingent solution with respect to a _non-contingent/standard collateral_ agreement like the _partial or full_ one, including clauses like _early termination_, _netting_ and others. This is a hard program, which needs a generalized model formulation in order to include all the CSA clauses and to deal with the high recursion that characterizes the problem.
From a pricing point of view, we recall the example - from the fixed income market - of particular bonds called _flippable_ or _switchable_, which are characterized by options to switch the coupon from a fixed to a floating rate. Clearly, in this case the valuation is easier given that these securities have a market and are not traded OTC, so one does not need to include counterparty risk, funding and CSA cash flows in the picture. Still, it would be interesting to delve into the valuation of an OTC contract in which the dividend flows are also subject to a contingent switching mechanism. In a similar vein, a problem that can be very interesting and difficult to tackle is the valuation of a flexi swap in the presence of a contingent collateralization like ours.
The main characteristics of a flexi-swap are:
a) the notional of the flexible swap at period \(n\) must lie (inclusively) between predefined bounds \(L_{n}\) and \(U_{n}\);
b) the notional of the flexible swap at period \(n\) must be less than or equal to the notional at the previous period \(n-1\);
c) the party paying fixed has the option at the start of each period \(n\) to choose the notional, subject to the two conditions above.
In other words, we deal with a swap with multiple embedded options that allow one party to change the notional under certain constraints defined in the contract. This kind of interest rate swap is usually used as a hedging instrument for other swaps whose notional is linked to loans, especially mortgages.¹⁹ The underlying idea is that the fixed-rate payer (the option holder) will amortize as much as allowed if interest rates are very low, and will amortize as little as allowed if interest rates are very high.
Given a payment term structure \(\{T_{n}\}^{N}_{n=0}\) and a set of coupons \(X_{n}\) (with unit notional) fixing at \(T_{n}\) and paying at \(T_{n+1}\) (\(n=0,1,\dots,N-1\)), the _flexi swap_ is a fixed-versus-floating swap where the fixed payer has to pay a net coupon \(X_{n}R_{n}\) at \(T_{n+1}\); the notional \(R_{0}\) is fixed upfront at inception and, for every \(T_{n}\), \(R_{n}\) can be amortized provided it respects the following constraints:
[FOOTNOTE:19][ENDFOOTNOTE]
1. deterministic constraints : \(R_{n}\in[g_{n}^{low},g_{n}^{high}]\);
2. local constraints, functions of the current notional: \(R_{n}\in[l_{n}^{low}(R_{n-1}),l_{n}^{high}(R_{n-1})]\);
3. market constraints (Libor or swap rates, denoted by \(X_{n}\)): \(R_{n}\in[m_{n}^{low}(X_{n}),m_{n}^{high}(X_{n})]\).
The valuation procedure for this type of swap involves a backward recursion keeping track of the notional at every payment date. But once the switching collateralization is also introduced, the valuation becomes an “_intricate puzzle_”, given the recursive relation between the optimal switching strategy and the price process of the claim, which in addition depends on the optimal notional choice over time. Simplifying modeling assumptions are needed to break the curse of recursion in a defaultable OTC contract like this.
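As a rough illustration of this backward recursion (deliberately ignoring rates dynamics, default risk and the switching collateralization, which the text identifies as the hard part), here is a deterministic Python sketch; the notional grid, the bound arrays and the function name are ours, not part of the contract specification above.

```python
def flexi_swap_value(coupon_value, notional_grid, lo, hi):
    """Backward recursion for a flexi swap on a discrete notional grid.

    coupon_value[n][k] : value to the fixed payer of the net coupon paid at
                         T_{n+1} when the period-n notional is notional_grid[k],
                         already discounted (a deterministic stand-in for a
                         full rates model).
    lo[n], hi[n]       : admissible notional bounds for period n, i.e. the
                         intersection of the deterministic, local and market
                         constraints listed above.
    The notional must also be non-increasing over time (condition b).
    """
    N, K = len(coupon_value), len(notional_grid)
    value = [0.0] * K  # continuation value, indexed by the previous notional
    for n in reversed(range(N)):
        new_value = [float("-inf")] * K
        for k_prev, r_prev in enumerate(notional_grid):
            for k, r in enumerate(notional_grid):
                if lo[n] <= r <= min(hi[n], r_prev):   # admissible choice
                    new_value[k_prev] = max(new_value[k_prev],
                                            coupon_value[n][k] + value[k])
        value = new_value
    return value  # indexed by the grid point chosen for the initial notional R_0
```

Introducing the contingent collateralization would entangle this recursion with the optimal switching strategy and the resulting price process, which is exactly the recursive difficulty discussed above.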
## 4 Conclusions
In this work, we have generalized the contingent CSA scheme defined in our preceding work to the bilateral case, allowing for strategic interaction between the counterparties of a defaultable (OTC) contract. The problem in this case has a natural formulation as a stochastic differential game - a generalized Dynkin game - of switching type, for which - as far as we know - no analytical solution for a _Nash equilibrium point_ is known.
We have shown, in particular, that the game solution is strictly related to that of a system of reflected BSDEs with interconnected barriers and generator functions. Only by imposing strong assumptions and simplifications are we able to prove the game solution, in the so-called _symmetric case_. Further research is needed, and directions are indicated at the end, in the field of stochastic games and RBSDEs; some interesting applications in finance are also highlighted in order to show the practical relevance of our mainly theoretical problem.
## References
* Bensoussan and Friedman (1977) Bensoussan, A., and A. Friedman, Nonzero-Sum Stochastic Differential Games With Stopping Times and Free Boundary Problems, Transactions of the American Mathematical Society, 1977.
* Bielecki et al. (2011) Bielecki, T. R., Cialenko, I. and Iyigunler, I., Counterparty Risk and the Impact of Collateralization in CDS Contracts. available at arxiv.org, 2011.
* Bielecki and Rutkowski (2004) Bielecki, T. R. and Rutkowski, M., Credit Risk: Modeling, Valuation and Hedging, Springer-Verlag, 2004.
* Brigo, Capponi et al. (2011) Brigo, D., Capponi, A., Pallavicini, A. and Papatheodorou, V., Collateral Margining in Arbitrage-Free Counterparty Valuation Adjustment including Re-Hypothecation and Netting, available at arxiv.org, pages 1-39, 2011.
* Brigo, Pallavicini et al. (2011) Brigo, D., Pallavicini, A. and Perini, D., Funding Valuation Adjustment: a consistent framework including CVA, DVA, collateral, netting rules and re-hypothecation, available at arxiv.org, 2011.
* Carmona and Ludkovski (2010) Carmona, R. and Ludkovski, M. (2010) Valuation of Energy Storage: An Optimal Switching Approach, Quantitative finance, 10(4), 359-374.
* Cesari et al. (2011) Cesari, G., Aquilina J., et al., Modelling pricing and hedging counterparty credit exposure, Springer Finance, Springer-Verlag, First edition. 2010.
* Cvitanic and Karatzas (1996) Cvitanic J. and Karatzas I., Backward stochastic differential equations with reflection and Dynkin games, The Annals of probability, 1996.
* Djehiche et al. (2008) Djehiche, B., Hamadene, S. and Popier, A., A Finite Horizon Optimal Multiple Switching Problem, SIAM Journal on Control and Optimization, 48(4), 2751-2770, 2008.
* Dynkin (1967) Dynkin, E. B., Game variant of a problem on optimal stopping. Soviet Math. Dokl, 10, pp 270-274. 1967.
* El Karoui and Hamadene (2003) El Karoui, N., and Hamadene S., BSDEs and Risk-Sensitive Control,Zero-Sum and Nonzero-Sum Game Problems of Stochastic Functional Differential Equations, Stochastic Processes and their Applications, 2003.
* El Karoui et al. (1997) El Karoui N., Kapoudjian, C., Pardoux, E., Peng, S. and Quenez, M. C., Reflected solutions of backward SDE’s and related obstacle problems for PDE’s, The Annals of probability, 1997.
* Fleming and Soner (2008) Fleming, W. H. and Soner, H. M., Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, 2008.
* Fleming and Souganidis (1989) Fleming, W. H. and Souganidis, P. E., On the existence of value functions of two-player, zero-sum stochastic differential games, Indiana Univ. Math. J., 1989.
* Hamadene (1998) Hamadene, S., Backward-forward SDEs and Stochastic Differential Games. Stochastic Processes and their Applications. 1998.
* Hamadene and Lepeltier (2000) Hamadene, S. and Lepeltier, J.-P., Reflected BSDEs and Mixed Game Problem, Stochastic Processes and their Applications, 2000.
* Hamadene and Zhang (2008) Hamadene, S. and Zhang, J., The Continuous Time Nonzero-sum Dynkin Game Problem and Application in Game Options, ArXiv, pages 1-16, 2008.
* Hamadene and Zhang (2010) Hamadene, S. and Zhang, J., Switching problem and related system of reflected backward SDEs, Stochastic Processes and their Applications, 2010.
* Isaacs (2000) Isaacs, R., Differential games, Dover Books on Mathematics. Revised edition, 1999.
* Kifer (2000) Kifer, Y., Game options, Finance and Stochastics. 2000.
* Mottola (2013) Mottola, G. (2013), _Switching type valuation and design problems in general OTC contracts with CVA, collateral and funding issue_. PhD Thesis, School of Economics, Sapienza University of Rome
* Nash (1950) Nash, J. F., Equilibrium Points in N-person Games. Proceedings of the National Academy of Sciences, 1950.
* von Neumann (1944) von Neumann, J. and Morgenstern, O., Theory of Games and Economic Behavior, Princeton University Press, 1944.
* Pham (2009) Pham, H., Continuous-time Stochastic Control and Optimization with Financial Applications, Springer Verlag. 2009.
* Yong and Zhu (1999) Yong, J. and Zhu, X. Y., Stochastic controls. Hamiltonian systems and HJB equations, Springer-Verlag. 1999
|
1407.1116 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 83951,
"num_imgs": 1,
"llama3_tokens_count": 31383
} | [
"content_image/1407.1116/x1.png"
] | # Why do simple algorithms for triangle enumeration work in the real world?
Jonathan W. Berry
Sandia National Laboratories, Albuquerque. {jberry,caphill}@sandia.gov. This manuscript has been authored by Sandia Corporation under Contract No. DE-AC04-94AL85000 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Luke A. Fostvedt
Iowa State University, Ames, Iowa 50011. {fostvedt,dnordman,agw}@iastate.edu
Daniel J. Nordman ¹
Cynthia A. Phillips ²
C. Seshadhri
Sandia National Laboratories, Livermore. scomand@sandia.gov
Alyson G. Wilson
North Carolina State University. {alyson_wilson@ncsu.edu}
[FOOTNOTE:2][ENDFOOTNOTE]
[FOOTNOTE:1][ENDFOOTNOTE]
###### Abstract
Listing all triangles is a fundamental graph operation. Triangles can have important interpretations in real-world graphs, especially social and other interaction networks. Despite the lack of provably efficient (linear, or slightly super-linear) worst-case algorithms for this problem, practitioners run simple, efficient heuristics to find all triangles in graphs with millions of vertices. How are these heuristics exploiting the structure of these special graphs to provide major speedups in running time?
We study one of the most prevalent algorithms used by practitioners. A trivial algorithm enumerates all paths of length \(2\), and checks if each such path is incident to a triangle. A good heuristic is to enumerate only those paths of length \(2\) where the middle vertex has the lowest degree. It is easily implemented and is empirically known to give remarkable speedups over the trivial algorithm.
We study the behavior of this algorithm over graphs with heavy-tailed degree distributions, a defining feature of real-world graphs. The erased configuration model (ECM) efficiently generates a graph with asymptotically (almost) any desired degree sequence. We show that the expected running time of this algorithm over the distribution of graphs created by the ECM is controlled by the \(\ell_{4/3}\)-norm of the degree sequence. Norms of the degree sequence are a measure of the heaviness of the tail, and it is precisely this feature that allows non-trivial speedups of simple triangle enumeration algorithms. As a corollary of our main theorem, we prove expected linear-time performance for degree sequences following a power law with exponent \(\alpha\geq 7/3\), and non-trivial speedup whenever \(\alpha\in(2,3)\).
## 1 Introduction
Finding triangles in graphs is a classic theoretical problem with numerous practical applications. The recent explosion of work on social networks has led to a great interest in fast algorithms to find triangles in graphs. The social sciences and physics communities often study triangles in real networks and use them to reason about underlying social processes [16, 29, 35, 9, 10, 19]. Much of the information about triangles in the last four papers is determined by a complete enumeration of all triangles in a (small) graph. Triangle enumeration is also a fundamental subroutine for other more complex algorithmic tasks [6, 18].
From a theoretical perspective, Itai and Rodeh [21] gave algorithms for triangle finding in \(O(n^{\omega})\) time (where \(n\) is the number of vertices and \(\omega\) is the matrix multiplication constant) using fast matrix multiplication. Vassilevska Williams and Williams [36] show deep connections between matrix multiplication and (edge-weighted) triangle enumeration. But much of this work is focused on dense graphs. Practitioners usually deal with massive sparse graphs with large variance in degrees, where sub-quadratic time algorithms can be trivially obtained, but are still too slow to run.
Practitioners enumerate triangles on massive graphs (with millions of vertices) using fairly simple heuristics, which are often easily parallelizable. This work is motivated by the following question: _can we theoretically explain why simple algorithms for triangle enumeration work in the real world?_
Consider a trivial algorithm. Take an undirected graph with \(n\) vertices, \(m\) edges, and degree sequence \(d_{1},d_{2},\ldots,d_{n}\) (so the degree of vertex \(v\) is \(d_{v}\)). Call a path of length \(2\) (\(P_{2}\)) _closed_ if it participates in a triangle and _open_ otherwise. Simply enumerate all \(P_{2}\)s and output the closed ones. The total running time is \(\Theta(\sum_{v}d^{2}_{v})\) (assume that checking if a \(P_{2}\) is closed can be done in constant time), since every \(P_{2}\) involves a pair of neighbors for the middle vertex. We will henceforth refer to this as _the_ trivial algorithm. A simple heuristic is to only enumerate paths where the middle vertex has the lowest degree among the \(3\) vertices in the path. We denote this algorithm by MinBucket; a short code sketch follows the listing below.
1. Create \(n\) empty buckets \(B_{1},B_{2},\ldots,B_{n}\).
2. For each edge \((u,v)\): if \(d_{u}\leq d_{v}\), place it in \(B_{u}\), otherwise place it in \(B_{v}\). Break ties consistently.
3. For each bucket \(B_{v}\): iterate over all \(P_{2}\)s formed by edges in \(B_{v}\) and output closed ones.
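For concreteness, here is a minimal sequential Python sketch of MinBucket; the function and variable names are ours, and the hash-set adjacency lookup is just one way to make the closure check constant time on average.

```python
from collections import defaultdict
from itertools import combinations

def min_bucket_triangles(edges):
    """Enumerate all triangles with the MinBucket heuristic.

    `edges` is a list of undirected edges (u, v) with u != v and no duplicates.
    Ties in degree are broken consistently by vertex label.
    """
    degree = defaultdict(int)
    adjacency = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adjacency[u].add(v)
        adjacency[v].add(u)

    # Step 2: place each edge in the bucket of its lower-degree endpoint.
    buckets = defaultdict(list)
    for u, v in edges:
        key = u if (degree[u], u) <= (degree[v], v) else v
        buckets[key].append((u, v))

    # Step 3: every pair of edges in bucket B_v forms a P_2 centred at v;
    # output the closed ones.
    triangles = []
    for v, bucket in buckets.items():
        for (a, b), (c, d) in combinations(bucket, 2):
            w = b if a == v else a
            x = d if c == v else c
            if x in adjacency[w]:      # is the P_2 closed?
                triangles.append((v, w, x))
    return triangles
```

With the consistent tie-break on (degree, label), each triangle is reported exactly once, namely from the bucket of its minimum vertex under that order.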
MinBucket is quite common in practice (sometimes taking the somewhat strange name _nodeIterator++_) and has clean parallel implementations with no load balancing issues [31, 15, 30]. For such simple algorithms, the total work pretty much determines the parallel runtime. For example, it would take \(n\) processors with perfect speed up running a \(\Theta(n^{2})\)-work algorithm to compete with a single processor running a \(\Theta(n)\)-work algorithm.
MinBucket is often the algorithm of choice for triangle enumeration because of its simplicity and because it beats the trivial algorithm by orders of magnitude, as shown in the previous citations. (A quick check shows at least 60 citations to [15], mostly involving papers that deal with massive scale graph algorithms.) The algorithm itself has been discovered and rediscovered in various forms over the past decades. The earliest reference the authors could find was from the mid-80s where Chiba and Nishizeki [14] devised a sequential version of the above algorithm. We provide a more detailed history later.
Nonetheless, MinBucket has a poor worst-case behavior. It would perform terribly on a high degree regular bipartite graph. If the input sparse graph (with high variance in degree) simply consisted of many such bipartite graphs of varying sizes, MinBucket would perform no better than its trivial cousin. Then why is it good in practice?
### Results
Since the seminal results of Barabási and Albert [3], Faloutsos et al [17], Broder et al [7], researchers have assumed that massive graphs obtained from the real world have _heavy-tailed degree distributions_ (often approximated as a power law). The average degree is thought to be a constant (or very slowly growing), but the variance is quite large. The usual approximation is to think of the number of vertices of degree \(d\) as decaying roughly as \(1/d^{\alpha}\) for some small constant \(\alpha\).
This seems to have connections with MinBucket. If edges tend to connect vertices of fairly disparate degrees (quite likely in a graph with large variance in degrees), MinBucket might provably give good running times. This is exactly what we set out to prove, for a natural distribution on heavy-tailed graphs.
Consider any list of positive integers \({\bf d}=(d_{1},d_{2},\ldots,d_{n})\), which we think of as a “desired” degree sequence. In other words, we wish to construct a graph on \(n\) vertices where vertex \(v\in[n]\) has degree \(d_{v}\). The _configuration model_ (CM) [4, 8, 26, 27] creates a random graph that almost achieves this. Imagine vertex \(v\) being incident to \(d_{v}\) “stubs”, which can be thought of as half-edges. We take a random perfect matching between the stubs, so pairs of stubs are matched to each other. Each such pair creates an edge, and we end up with a multigraph with the desired degree sequence. Usually, this is converted to a simple graph by removing parallel edges and self-loops[5]. We refer to this graph distribution as \(ECM({\bf d})\), for input degree sequence \({\bf d}\). This model has a fairly long history (which we relegate to a later section) and is a standard method to construct a graph with a desired degree sequence. It is closely connected to models given by Chung and Lu [12, 13] and Mihail and Papadimitriou [23], in the context of eigenvalues of graphs with a given degree sequence. These models simply connect vertices \(u\) and \(v\) independently with probability \(d_{u}d_{v}/2m\), similarly to the Erdős-Rényi construction.
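To make the construction concrete, below is a minimal Python sketch of the erased configuration model (our illustration, not the authors' code); shuffling the stub list and pairing consecutive entries is one way to draw a uniform random perfect matching on the stubs.

```python
import random

def erased_configuration_model(degrees, seed=None):
    """Sample a simple graph from ECM(d).

    `degrees` is the desired degree sequence d_1, ..., d_n (sum must be even).
    Returns a set of undirected edges after erasing self-loops and parallel
    edges, so realized degrees may fall slightly below `degrees`.
    """
    if sum(degrees) % 2:
        raise ValueError("the sum of the degrees must be even")
    rng = random.Random(seed)
    # one "stub" (half-edge) per unit of degree
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)   # pairing consecutive entries = uniform random matching
    edges = set()
    for i in range(0, len(stubs), 2):
        u, v = stubs[i], stubs[i + 1]
        if u != v:                               # erase self-loops
            edges.add((min(u, v), max(u, v)))    # the set erases parallel edges
    return edges
```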
Our main theorem gives a bound on the expected running time of MinBucket for \(ECM({\bf d})\). We set \(m=(\sum_{v}d_{v})/2\). We will henceforth assume that \(0<d_{1}\leq d_{2}\ldots\leq d_{n}\) and that \(d_{n}<\sqrt{m}/2\). This “truncation” is a standard assumption for analysis of the configuration model [26, 12, 23, 13, 27, 5]. We use \(\sum_{v}\) as a shorthand for \(\sum_{i=1}^{n}\), since it is a sum over all vertices. The run time bottleneck for MinBucket is in \(P_{2}\) enumeration, and checking whether a \(P_{2}\) is closed is often assumed to be a constant time operation. Henceforth, when we say “running time,” we mean the number of \(P_{2}\)s enumerated.
**Theorem 1.1**.: _Consider a degree sequence \({\bf d}=(d_{1},d_{2},\ldots,d_{n})\), where \(m=(\sum_{v}d_{v})/2\) and \(d_{n}<\sqrt{m}/2\). The expected (over \(ECM({\bf d})\)) number of \(P_{2}\)s enumerated by MinBucket is \(O(n+m^{-2}(\sum_{v}d_{v}^{4/3})^{3})\)._
(Our main theorem applies to Chung-Lu graphs as well. Details are given in Appendix C.) Before we actually make sense of this theorem, let us look at a corollary of this theorem. It has been repeatedly observed that degree sequences in real graphs have heavy tails, often approximated as a _power law_[3]. Power laws say something about the moments of the degree distribution (equivalently, norms of the degree sequence). Since it does not affect our main theorem or corollary, we choose a fairly loose definition of power law. This is a binned version of the usual definition, which states the number of vertices of degree \(d\) is proportional to \(n/d^{\alpha}\). (Even up to constants, this is never precisely true because there are many gaps in real degree sequences.)
**Definition 1.2**.: _A degree sequence \({\bf d}\) satisfies a power law of exponent \(\alpha>1\) if the following holds for all \(k\leq\log_{2}d_{n}-1\): for \(d=2^{k}\), the number of sequence terms in \([d,2d]\) is \(\Theta(n/d^{\alpha-1})\)._
The following shows an application of our theorem for common values of \(\alpha\). This bound is tight as we show in Section 5. (When \(\alpha\geq 3\), the trivial algorithm runs in linear time because \(\sum_{v}d^{2}_{v}=O(n)\).)
**Corollary 1.3**.: _Suppose a degree sequence \({\bf d}\) (with largest term \(<\sqrt{m}/2\)) satisfies a power law with exponent \(\alpha\in(2,3)\). Then the expected running time of MinBucket of \(ECM({\bf d})\) is asymptotically better than the trivial algorithm, and is linear when \(\alpha>7/3\)._
### Making sense of Thm. 1.1
First, as a sanity check, let us actually show that Thm. 1.1 beats the trivial bound, \(\sum_{v}d^{2}_{v}\). This is a direct application of Hölder’s inequality for conjugates \(p=3\) and \(q=3/2\), applied to \(\sum_{v}d_{v}^{2/3}\cdot d_{v}^{2/3}\):
\[\sum_{v}d_{v}^{4/3}=\sum_{v}d_{v}^{2/3}\cdot d_{v}^{2/3}\leq\Big{(}\sum_{v}d_{v}^{2}\Big{)}^{1/3}\Big{(}\sum_{v}d_{v}\Big{)}^{2/3}=\Big{(}\sum_{v}d_{v}^{2}\Big{)}^{1/3}(2m)^{2/3}.\]
Rearranging (cubing both sides and dividing by \(m^{2}\)), we get \(m^{-2}(\sum_{v}d_{v}^{4/3})^{3}=O(\sum_{v}d^{2}_{v})\), showing that our bound at least holds the promise of being non-trivial.
Consider the uniform distribution on the vertices. Assuming \(m>n\), we can write our running time bound as \(n(\hbox{\bf E}[d^{4/3}_{v}])^{3}\), as opposed to the trivial bound of \(\sum_{v}d^{2}_{v}=n\hbox{\bf E}[d^{2}_{v}]\). If the degree “distribution” (think of the random variable given by the degree of a uniform random vertex) has a small \(4/3\)-moment, the running time is small. This can happen even though the second moment is large, and this is where MinBucket beats the trivial algorithm. In other words, if the tail of the degree sequence is heavy but not too heavy, MinBucket will perform well.
And this is exactly what happens when \(\alpha>2\) for power law degree sequences. When \(\alpha>7/3\), the \(4/3\)-moment becomes constant and the running time is linear. (It is known that for ECM graphs over power law degree sequences with \(\alpha>7/3\), the clustering coefficient (ratio of triangles to \(P_{2}\)s) converges to zero [27].) We show in §5 that the running time bound achieved in the following corollary for power laws with \(\alpha>2\) is tight. When \(\alpha\leq 2\), MinBucket gets no asymptotic improvement over the trivial algorithm. For convenience, we will drop the big-Oh notation, and replace it by \(\lessdot\). So \(A\lessdot B\) means \(A=O(B)\).
Proof.: (of Cor. 1.3) First, let us understand the trivial bound. Remember that \(d_{n}\) is the maximum degree.
\[\sum_{v}d^{2}_{v}\lessdot\sum_{k=1}^{\log_{2}n-1}(n/2^{k(\alpha-1)})2^{2k}=n \sum_{k=1}^{\log_{2}n-1}2^{k(3-\alpha)}\lessdot n+nd^{3-\alpha}_{n}\]
We can argue that the expected number of wedges enumerated by the trivial algorithm is \(\Omega(\sum_{v}d^{2}_{v})\) (Claim 3.3). Now for the bound of Thm. 1.1.
\[m^{-2}(\sum_{v}d^{4/3}_{v})^{3}\lessdot n^{-2}\Big{(}\sum_{k=1}^{\log_{2}n-1}( n/2^{k(\alpha-1)})2^{4k/3}\Big{)}^{3}=n\Big{(}\sum_{k=1}^{\log_{2}n-1}2^{k(7/3 -\alpha)}\Big{)}^{3}\lessdot n+nd^{7-3\alpha}_{n}\]
Regardless of \(d_{n}\), if \(\alpha>7/3\), then the running time of MinBucket is linear. Whenever \(\alpha\in(2,3)\), \(7-3\alpha<3-\alpha\), and MinBucket is asymptotically faster than a trivial enumeration. ∎
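For readers who want to see the two bounds side by side, here is a small Python sketch (ours) that builds a crude binned power-law degree sequence in the spirit of Definition 1.2 and evaluates the trivial bound \(\sum_{v}d_{v}^{2}\) against the Thm. 1.1 bound \(n+m^{-2}(\sum_{v}d_{v}^{4/3})^{3}\); the construction and the chosen cap on the maximum degree are assumptions for illustration only, and the truncation \(d_{n}<\sqrt{m}/2\) is only enforced approximately.

```python
def power_law_degrees(n, alpha, d_max):
    """Roughly n / d^(alpha-1) vertices of degree d, for d = 1, 2, 4, ..., d_max
    (a crude instance of the binned power law of Definition 1.2)."""
    degrees = []
    d = 1
    while d <= d_max:
        degrees += [d] * max(1, round(n / d ** (alpha - 1)))
        d *= 2
    return degrees

def compare_bounds(degrees):
    """Return (trivial bound, Thm. 1.1 bound) for a given degree sequence."""
    n = len(degrees)
    m = sum(degrees) / 2.0
    trivial = sum(d ** 2 for d in degrees)
    thm11 = n + (sum(d ** (4.0 / 3.0) for d in degrees)) ** 3 / m ** 2
    return trivial, thm11

# For alpha = 2.5 the Thm. 1.1 bound should grow noticeably more slowly than
# the trivial bound as n increases (cf. Cor. 1.3).
for n in (10**4, 10**5, 10**6):
    degs = power_law_degrees(n, alpha=2.5, d_max=int(n ** 0.4))
    print(n, compare_bounds(degs))
```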
### Significance of Thm. 1.1
Thm. 1.1 connects the running time of a commonly used algorithm to the norms of the degree sequences, a well-studied property of real graphs. So this important property of heavy-tails in real graphs allows for the algorithmic benefit of MinBucket. We have discovered that for a fairly standard graph model inspired by real degree distributions, MinBucket is very efficient.
We think of this theorem as a proof of concept: theoretically showing that a common property of real world inputs allows for the efficient performance of a simple heuristic. Because of our distributional assumptions as well as bounds on \(\alpha\), we agree with the (skeptical) reader that this does not fully explain why MinBucket works in the real world¹. Nonetheless, we feel that this makes progress towards that, especially for a question that is quite hard to formalize. After all, there is hardly any consensus in the social networks community on what real graphs look like.
[FOOTNOTE:1][ENDFOOTNOTE]
But the notion that distinctive properties of real world graphs can be used to prove efficiency of simple algorithms is a useful way of thinking. This is one direction to follow for going beyond worst-case analysis. Our aim here is not to design better algorithms for triangle enumeration, but to give a theoretical argument for why current algorithms do well.
The proof is obtained (as expected) through various probabilistic arguments bounding the sizes of the different buckets. The erased configuration model, while easy to implement and clean to define, creates some niggling problems for analysis of algorithms. The edges are not truly independent of each other, and we have to take care of these weak dependencies.
Why the \(4/3\)-norm? Indeed, that is one of the most surprising features of this result (especially since the bound is tight for power laws of \(\alpha>2\)). As we bound the bucket sizes and make sense of the various expressions, the total running time is expressed as a sum of various degree terms. Using appropriate approximations, it tends to rearrange into norms of the degree sequence. Our proof goes over two sections. We give various probabilistic calculations for the degree behavior in §3, which set the stage for the run-time accounting. In §4, we start bounding bucket sizes and finally get to the \(4/3\)-moment. In §5, we show that the bounds achieved in the proof of Cor. 1.3 are tight. This is mostly a matter of using the tools of the previous sections. In §6, we give a tighter analysis that gives an explicit expression for strong upper bounds on running time and in §7 we experimentally show these more careful bounds closely approximate the expected runtime of ECM graphs, with runtime constants under \(1\) for graphs up to 80M nodes.
## 2 Related Work
The idea of using some sort of degree binning, orienting edges, or thresholding for finding and enumerating triangles has been used in many results. Chiba and Nishizeki [14] give bounds for a sequential version of MinBucket using the degeneracy of a graph. This does not give bounds for MinBucket, although their algorithm is similar in spirit. Alon, Yuster, and Zwick [2] find triangles in \(O(m^{1.41})\) using degree thresholding and matrix multiplication ideas from Itai and Rodeh [21]. Chrobak and Eppstein [11] use acyclic orientations for linear time triangle enumeration in planar graphs. Vassilevska Williams and Williams [36] show that fast algorithms for weighted triangle enumeration lead to remarkable consequences, like faster all-pairs shortest paths. In the work most closely related to ours, Latapy [22] discusses various triangle finding algorithms, and also focuses on power-law graphs. He shows the trivial bound of \(O(mn^{1/\alpha})\) when the power law exponent is \(\alpha\). Essentially, the maximum degree is \(n^{1/\alpha}\) and that directly gives a bound on the number of \(P_{2}\)s.
MinBucket has received attention from various experimental studies. Schank and Wagner [32] perform an experimental study of many algorithms, including a sequential version of MinBucket which they show to be quite efficient. Cohen [15] specifically describes MinBucket in the context of Map-Reduce. Suri and Vassilvitskii [30] do many experiments on real graphs in Map-Reduce and show major speedups (a few orders of magnitude) for MinBucket over the trivial enumeration. Tsourakakis [33] gives a good survey of various methods used in practice for triangle counting and estimation.
Explicit triangle enumerations have been used for various applications on large graphs. Fudos and Hoffman [18] use triangle enumeration for a graph-based approach for solving systems of geometric constraints. Berry et al [6] touch every triangle as part of their community detection algorithm for large graphs.
Configuration models for generating random graphs with given degree sequences have a long history. Bender and Canfield [4] study this model for counting graphs with a given degree sequence. Wormald [34] looks at the connectivity of these graphs. Molloy and Reed [24, 26] study various properties like the largest connected component of this graph distribution. Physicists studying complex networks have also paid attention to this model [28]. Britton, Deijfen, and Martin-Löf [5] show that the simple graph generated by the ECM asymptotically matches the desired degree sequence. Aiello, Chung, and Lu [1] give a model for power-law graphs, where each edge \((u,v)\) is independently inserted with probability \(d_{u}d_{v}/2m\). This was studied for more general degree sequences in subsequent work by Chung, Lu, and Vu [12, 13]. Mihail and Papadimitriou [23] independently discuss this model. Most of this work focused on eigenvalues and average distances in these graphs. Newman [27] gives an excellent survey of these models, their similarities, and applications.
## 3 Degree behavior of \(ECM({\bf d})\)
We fix a degree sequence \({\bf d}\) and focus on the distribution \(ECM({\bf d})\). All expectations and probabilities are over this distribution. Because of dependencies in the erased configuration model, we will need to formalize our arguments carefully. We first state a general lemma giving a one-sided tail bound for dependent random variables with special conditional properties. The proof is in the appendix.
**Lemma 3.1**.: _Let \(Y_{1},Y_{2},\ldots,Y_{k}\) be independent random variables, and \(X_{i}=f_{i}(Y_{1},Y_{2},\ldots,Y_{i})\) be \(0\)-\(1\) random variables. Let \(\alpha\in[0,1]\). Suppose \(\Pr[X_{1}]\geq\alpha\) and \(\Pr[X_{i}=1|Y_{1},Y_{2},\ldots,Y_{i-1}]\geq\alpha\) for all \(i\). Then, \(\Pr[\sum_{i=1}^{k}X_{i}<\alpha k\delta]<\exp(-\alpha k(1-\delta)^{2}/2)\) for any \(\delta\in(0,1)\)._
We now prove a tail bound on degrees of vertices; the probability the degree of vertex \(v\) deviates by a constant factor of \(d_{v}\) is \(\exp(-\Omega(d_{v}))\). Let \(\beta,\beta^{\prime},\delta,\delta^{\prime}\) denote sufficiently small constants.
Before we proceed with our tail bounds, we describe a process to construct the random matching of stubs. We are interested in a particular vertex \(v\). Order the stubs such that the \(d_{v}\)\(v\)-stubs are in the beginning; the remaining stubs are ordered arbitrarily. We start with the first stub, and match to a uniform random stub (other than itself). We then take the next unmatched stub, according to the order, and match to a uniform random unmatched stub. And so on and so forth. The final connections are clearly dependent, though the choice among unmatched stubs is done independently. This is formalized as follows. Let \(Y_{i}\) be an independent uniform random integer in \([1,2m-2(i-1)-1]\). This represents the choice at the \(i\)th step, since in the \(i\)th step, we have exactly \(2m-2(i-1)-1\) choices. Imagine that we first draw these independent \(Y_{i}\)’s. Then we deterministically construct the matching on the basis of these numbers. (So the first stub is connected to the \(Y_{1}\)st stub, the second unmatched stub is connected to the \(Y_{2}\)nd unmatched stub, etc.)
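A short Python sketch (ours) of this sequential view may help: the independent draws \(Y_{i}\) are made first, and the matching is then built deterministically from them.

```python
import random

def sequential_stub_matching(stubs, seed=None):
    """Build a uniform random perfect matching on `stubs` sequentially.

    At step i there are 2m - 2(i-1) unmatched stubs; the first of them (in the
    fixed order) is matched to one of the remaining 2m - 2(i-1) - 1 stubs,
    chosen by the independent uniform draw Y_i.
    """
    rng = random.Random(seed)
    unmatched = list(stubs)          # assumed to have even length
    matching = []
    while unmatched:
        first = unmatched.pop(0)
        y = rng.randint(1, len(unmatched))   # the independent draw Y_i
        partner = unmatched.pop(y - 1)
        matching.append((first, partner))
    return matching
```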
**Lemma 3.2**.: _Assume \(d_{n}<\sqrt{m}/2\). Let \(D_{v}\) be the random variable denoting the degree of \(v\) in the resulting graph. There exist sufficiently small constants \(\beta,\beta^{\prime}\in(0,1)\), such that \(\Pr[D_{v}<\beta^{\prime}d_{v}]<\exp(-\beta d_{v})\)._
Proof.: Suppose \(d_{v}>1\). We again order the stubs so that the \(d_{v}\)\(v\)-stubs are in the beginning. Let \(X_{j}\) be the indicator random variable for the \(j\)th matching forming a new edge with \(v\). Note that \(\sum_{j=1}^{\lfloor d_{v}/2\rfloor}X_{j}\leq D_{v}\). Observe that \(X_{j}\) is a function of \(Y_{1},Y_{2},\ldots,Y_{j}\). Consider any \(Y_{1},Y_{2},\ldots,Y_{j-1}\) and suppose the matchings created by these variables link to vertices \(v_{0}=v,v_{1},v_{2},\ldots,v_{j-1}\) (distinct) such that there are \(n_{j}\) links to vertex \(v_{j}\) such that \(\sum_{i=0}^{j-1}n_{i}=(j-1)\). Then, for \(j=1,\ldots,\lfloor d_{v}/2\rfloor\),
\[\hbox{\bf E}[X_{j}|Y_{1},Y_{2},\ldots,Y_{j-1}] \geq 1-\frac{(d_{v}-j-n_{0})+\sum_{1\leq i\leq j-1;n_{i}\neq 0}(d_{v_ {i}}-n_{i})}{2m-2(j-1)-1}\]
\[\geq 1-\frac{-2(j-1)-1+\sum_{i=0}^{j-1}d_{v_{i}}}{2m-2(j-1)-1}\]
\[\geq 1-\frac{\sum_{i=0}^{j-1}d_{v_{i}}}{2m-2d_{v}}.\]
Note that \(\sum_{i=0}^{j-1}d_{v_{i}}\leq(\sqrt{m}/2)^{2}=m/4\), by the bound on the maximum degree. We also get \(2m-2d_{v}>m\), so we bound \(\hbox{\bf E}[X_{j}|Y_{1},Y_{2},\ldots,Y_{j-1}]\geq 3/4\). By Lem. 3.1 (setting \(\delta=2/3\) and bounding \(\alpha k>d_{v}/4\)),
\[\Pr[D_{v}<d_{v}/8]\leq\Pr\left[\sum_{j=1}^{\lfloor d_{v}/2\rfloor}X_{j}<d_{v}/ 8\right]\leq\Pr\left[\sum_{j=1}^{\lfloor d_{v}/2\rfloor}X_{j}<\lfloor d_{v}/2 \rfloor/2\right]<\exp(-d_{v}(1/3)^{2}/8).\]
∎
This suffices to prove the trivial bound for the trivial algorithm.
**Claim 3.3**.: _The expected number of wedges enumerated by the trivial algorithm is \(\Omega(\sum_{v}d^{2}_{v})\)._
Proof.: The expected number of wedges enumerated is \(\Omega(\sum_{v}D^{2}_{v})\), where \(D_{v}\) is the actual degree of \(v\). Using Lem. 3.2, \(\hbox{\bf E}[D^{2}_{v}]=\Omega(d^{2}_{v})\). ∎
We will need the following basic claim about the joint probability of two edges.
**Claim 3.4**.: _Let \(v,w,w^{\prime}\) be three distinct vertices. The probability that edges \((v,w)\) and \((v,w^{\prime})\) are present in the final graph is at most \(d^{2}_{v}d_{w}d_{w^{\prime}}/m^{2}\)._
Proof.: Assume \(d_{v}>1\). Let \(C_{v,w}\) be the indicator random variable for edge \((v,w)\) being present (likewise define \(C_{v,w^{\prime}}\)). Label the stubs of each vertex as \(s_{1}^{v},\ldots,s_{d_{v}}^{v}\); \(s_{1}^{w},\ldots,s_{d_{w}}^{w}\); and \(s_{1}^{w^{\prime}},\ldots,s_{d_{w^{\prime}}}^{w^{\prime}}\). Let \(C_{s_{i}^{v},s_{j}^{w}}\) be the indicator random variable for edge being present between stubs \(s_{i}^{v}\) and \(s_{j}^{w}\) (likewise define \(C_{s_{i}^{v},s_{j}^{w^{\prime}}}\)). Then the event \(\{C_{v,w}C_{v,w^{\prime}}=1\}\) that edges \((v,w)\) and \((v,w^{\prime})\) are present is a subset of the event \(\cup_{1\leq i\neq j\leq d_{v}}\cup_{k=1}^{d_{w}}\cup_{\ell=1}^{d_{w^{\prime}}} \{C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1\}\). Hence,
\[\Pr[C_{v,w}C_{v,w^{\prime}}=1]\leq\sum_{1\leq i\neq j\leq d_{v}}\sum_{k=1}^{d_ {w}}\sum_{\ell=1}^{d_{w^{\prime}}}\Pr[C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{ \ell}^{w^{\prime}}}=1].\]
Fix \(1\leq i\neq j\leq d_{v}\), \(1\leq k\leq d_{w}\) and \(1\leq\ell\leq d_{w^{\prime}}\) and order stubs \(s_{i}^{v},s_{j}^{v}\) first in the ECM wiring. Then, \(\Pr[C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1]=\Pr[C_{s_{i} ^{v},s_{k}^{w}}=1]\Pr[C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1|C_{s_{i}^{v},s_{k} ^{w}}=1]\) where \(\Pr[C_{s_{i}^{v},s_{k}^{w}}=1]=[2m-1]^{-1}\) and \(\Pr[C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1|C_{s_{i}^{v},s_{k}^{w}}=1]=[2m-3]^{-1}\). Hence,
\[\Pr[C_{v,w}C_{v,w^{\prime}}=1]\leq d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/m^{2}\]
using \((2m-1)(2m-3)\geq m^{2}\) when \(m\geq 3\). ∎
## 4 Getting the \(4/3\) moment
We will use a series of claims to express the running time of MinBucket in a convenient form. For vertex \(v\), let \(X_{v}\) be the random variable denoting the number of edges in \(v\)’s bin. The expected running time is at most \(\hbox{\bf E}[\sum_{v}X_{v}(X_{v}-1)]\). This is because the number of wedges in each bin is \({X_{v}\choose 2}\leq X^{2}_{v}-X_{v}\).
We further break \(X_{v}\) into the sum \(\sum_{w}Y_{v,w}\), where \(Y_{v,w}\) is the indicator for edge \((v,w)\) being in \(v\)’s bin. As mentioned earlier, \(C_{v,w}\) is the indicator for edge \((v,w)\) being present. Note that \(Y_{v,w}\leq C_{v,w}\), since \((v,w)\) can only be in \(v\)’s bin if it actually appears as an edge.
We list out some bounds on expectations. Only the second one really uses the binning of MinBucket.
**Claim 4.1**.: _Consider vertices \(v,w,w^{\prime}\) (\(w\neq w^{\prime}\))._
* \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\leq d^{2}_{v}d_{w}d_{w^{\prime}}/m^{2}\)_._
* _There exist sufficiently small constants_ \(\delta,\delta^{\prime}\in(0,1)\) _such that: if_ \(d_{w}<\delta d_{v}\) _then_ \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\leq 2\exp(-\delta^{\prime}d_{v})d^{2}_{v}d_{w}d_{w^{\prime}}/m^{2}\)_._
Proof.: We use the trivial bound of \(Y_{v,w}Y_{v,w^{\prime}}\leq C_{v,w}C_{v,w^{\prime}}\). By Claim 3.4, \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\leq\hbox{\bf E}[C_{v,w}C_{v,w^{\prime}}] \leq d^{2}_{v}d_{w}d_{w^{\prime}}/m^{2}\).
Now for the interesting bound. The quantity \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\) is the probability that both \(Y_{v,w}\) and \(Y_{v,w^{\prime}}\) are \(1\). For this to happen, we definitely require both \((v,w)\) and \((v,w^{\prime})\) to be present as edges. Call this event \({\cal E}\). We also require (at the very least) the degree of \(v\) to be at most the degree of \(w\) (otherwise the edge \((v,w)\) will not be put in \(v\)’s bin.) Call this event \({\cal F}\). If \(D_{v},D_{w}\) denote the degrees of \(v\) and \(w\), note that \(D_{w}\leq d_{w}<\delta d_{v}\), implying event \({\cal F}\) is contained in the event \(\{D_{v}<\delta d_{v}\}\) when \(d_{w}<\delta d_{v}\). Hence, the event \(Y_{v,w}Y_{v,w^{\prime}}=1\) is contained in \({\cal E}\cap\{D_{v}<\delta d_{v}\}\). Assume \(d_{v}>2,d_{w}>0,d_{w^{\prime}}>0\) or else \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]=0\) trivially when \(\delta<1/2\).
As in the proof of Claim 3.4, let \(C_{s_{i}^{v},s_{j}^{w}}\) be the indicator random variable for edge being present between stubs \(s_{i}^{v}\) and \(s_{j}^{w}\) of vertices \(v,w\) (and analogously define \(C_{s_{i}^{v},s_{j}^{w^{\prime}}}\)). Then \({\cal E}\) is contained in \(\cup_{1\leq i\neq j\leq d_{v}}\cup_{k=1}^{d_{w}}\cup_{\ell=1}^{d_{w^{\prime}}} \{C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1\}\) so that
\[\Pr[Y_{v,w}Y_{v,w^{\prime}}=1] \leq \Pr[{\cal E},D_{v}<\delta d_{v}]\]
\[\leq \sum_{1\leq i\neq j\leq d_{v}}\sum_{k=1}^{d_{w}}\sum_{\ell=1}^{d_ {w^{\prime}}}\Pr[C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1, D_{v}<\delta d_{v}]\]
\[= \sum_{1\leq i\neq j\leq d_{v}}\sum_{k=1}^{d_{w}}\sum_{\ell=1}^{d_ {w^{\prime}}}\Pr[C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1] \Pr[D_{v}<\delta d_{v}|C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime }}}=1].\]
Fix values of \(i,j,k,\ell\) and order the stubs \(s_{i}^{v},s_{j}^{v}\) first in the ECM wiring. Then, \(\Pr[C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1]\leq m^{-2}\) as in the proof of Claim 3.4. Additionally, conditioned on \(C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1\), the remaining stubs form an ECM with respect to a new degree sequence formed by replacing \(2m,d_{v},d_{w},d_{w^{\prime}}\) in the original degree sequence by \(2\tilde{m}=2m-4,d_{v}-2,d_{w}-1,d_{w^{\prime}}-1\). Let \(\tilde{D}_{v}\) denote the degree of \(v\) in the final graph from the new degree sequence. Then, conditioned on \(C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{\ell}^{w^{\prime}}}=1\), \(D_{v}=2+\tilde{D}_{v}\), so that the conditional probability is bounded by
\[\Pr[D_{v}<\delta d_{v}|C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^{v},s_{ \ell}^{w^{\prime}}}=1] = \Pr[\tilde{D}_{v}<\delta d_{v}-2|C_{s_{i}^{v},s_{k}^{w}}C_{s_{j}^ {v},s_{\ell}^{w^{\prime}}}=1]\]
\[\leq \Pr[\tilde{D}_{v}<\delta(d_{v}-2)|C_{s_{i}^{v},s_{k}^{w}}C_{s_{j} ^{v},s_{\ell}^{w^{\prime}}}=1]\]
\[\leq 2\exp(-\delta^{\prime}d_{v})\]
since \(\delta<1\). That is, Lem. 3.2 applies to \(\tilde{D}_{v}\) with respect to the new degree sequence where \(v\) has degree \(d_{v}-2\) and each degree in this new sequence is less than \(\sqrt{\tilde{m}}/2\) by assumption. The bound \(\Pr[Y_{v,w}Y_{v,w^{\prime}}=1]\leq 2\exp(-\delta^{\prime}d_{v})d_{v}^{2}d_{w}d _{w^{\prime}}/m^{2}\) then follows. ∎
Armed with these facts, we can bound the expected number of \(P_{2}\)s contained in a single bucket.
**Lemma 4.2**.:
\[\hbox{\bf E}[X_{v}(X_{v}-1)]=O\Big{(}\exp(-\delta d_{v})d^{2}_{v}+\Big{(}m^{-2 }{d^{2}_{v}}\sum_{\begin{subarray}{c}w:\\ d_{w}\geq\delta d_{v}\end{subarray}}\sum_{\begin{subarray}{c}w\neq w^{\prime}: \\ d_{w^{\prime}}\geq\delta d_{v}\end{subarray}}d_{w}d_{w^{\prime}}\Big{)}\Big{)}\]
_._
Proof.: We will write out
\[X^{2}_{v}=\ (\sum_{w}Y_{v,w})^{2} = \sum_{w}Y^{2}_{v,w}+\sum_{w}\sum_{w^{\prime}\neq w}Y_{v,w}Y_{v,w^ {\prime}}\]
where \(\sum_{w}Y^{2}_{v,w}=\sum_{w}Y_{v,w}=X_{v}\) as each \(Y_{v,w}\) is a 0-1 variable. Hence,
\[\hbox{\bf E}[X_{v}(X_{v}-1)]=\sum_{w}\sum_{w^{\prime}\neq w}\hbox {\bf E}[Y_{v,w}Y_{v,w^{\prime}}] \leq \sum_{\begin{subarray}{c}w:\\ d_{w}\geq\delta d_{v}\end{subarray}}\sum_{\begin{subarray}{c}w\neq w^{\prime}: \\ d_{w^{\prime}}\geq\delta d_{v}\end{subarray}}\hbox{\bf E}[Y_{v,w}Y_{v,w^{ \prime}}]+\sum_{\begin{subarray}{c}w:\\ d_{w}<\delta d_{v}\end{subarray}}\sum_{w^{\prime}\neq w}\hbox{\bf E}[Y_{v,w}Y_ {v,w^{\prime}}]\]
\[+\sum_{\begin{subarray}{c}w^{\prime}:\\ d_{w^{\prime}}<\delta d_{v}\end{subarray}}\sum_{w\neq w^{\prime}}\hbox{\bf E}[ Y_{v,w}Y_{v,w^{\prime}}]\]
\[= \sum_{\begin{subarray}{c}w:\\ d_{w}\geq\delta d_{v}\end{subarray}}\sum_{\begin{subarray}{c}w\neq w^{\prime}: \\ d_{w^{\prime}}\geq\delta d_{v}\end{subarray}}\hbox{\bf E}[Y_{v,w}Y_{v,w^{ \prime}}]+2\sum_{\begin{subarray}{c}w:\\ d_{w}<\delta d_{v}\end{subarray}}\sum_{w^{\prime}\neq w}\hbox{\bf E}[Y_{v,w}Y_ {v,w^{\prime}}]\]
\[\leq \frac{d^{2}_{v}}{m^{2}}(\sum_{w:d_{w}\geq\delta d_{v}}d_{w})^{2}+ 2\sum_{\begin{subarray}{c}w:\\ d_{w}<\delta d_{v}\end{subarray}}\sum_{w^{\prime}\neq w}\hbox{\bf E}[Y_{v,w}Y_ {v,w^{\prime}}]\]
by splitting the sums into cases, \(d_{w}\geq\delta d_{v}\) and \(d_{w}<\delta d_{v}\), and using the trivial bound of Claim 4.1 for the first quantity. For the second quantity we satisfy the conditions to use the second part of Claim 4.1, since there \(d_{w}<\delta d_{v}\):
\[\sum_{\begin{subarray}{c}w:\\ d_{w}<\delta d_{v}\end{subarray}}\sum_{w^{\prime}\neq w}\hbox{\bf E}[Y_{v,w}Y_ {v,w^{\prime}}] \leq 2\sum_{\begin{subarray}{c}w:\\ d_{w}<\delta d_{v}\end{subarray}}\sum_{w^{\prime}\neq w}\exp(-\delta d_{v})d^{ 2}_{v}d_{w}d_{w^{\prime}}/m^{2}\]
\[\lessdot\exp(-\delta d_{v})d^{2}_{v},\]
using \(\sum_{w}d_{w}\leq\sum_{i=1}^{n}d_{i}=2m\) and \(\sum_{w^{\prime}}d_{w^{\prime}}\leq 2m\). ∎
With this bound for \(\hbox{\bf E}[X_{v}(X_{v}-1)]\), we are ready to prove Thm. 1.1.
**Theorem 4.3**.: \(\hbox{\bf E}[\sum_{v}X_{v}(X_{v}-1)]=O(n+m^{-2}(\sum_{i=1}^{n}d_{i}^{4/3})^{3})\)_._
Proof.: We use linearity of expectation and sum the bound in Lem. 4.2. Note that \(\exp(-\delta d_{v})d^{2}_{v}\) is a bounded function of \(d_{v}\) and is hence \(O(1)\). The double summation of Lem. 4.2 can be upper bounded by \((\sum_{w:d_{w}\geq\delta d_{v}}d_{w})^{2}\).
\[\hbox{\bf E}[\sum_{v}X_{v}(X_{v}-1)]\lessdot n+m^{-2}\sum_{v}d^{2}_{v}\big{(} \sum_{w:d_{w}\geq\delta d_{v}}d_{w}\big{)}^{2}=n+m^{-2}\sum_{v}\sum_{w:d_{w} \geq\delta d_{v}}\sum_{w^{\prime}:d_{w^{\prime}}\geq\delta d_{v}}d_{v}^{2}d_{w }d_{w^{\prime}}\]
This is the point where the \(4/3\)-moment appears. Since \(d_{w}\geq\delta d_{v}\) and \(d_{w^{\prime}}\geq\delta d_{v}\), \(d^{2/3}_{v}\leq\delta^{-2/3}d^{1/3}_{w}d^{1/3}_{w^{\prime}}\). Therefore, \(d_{v}^{2}d_{w}d_{w^{\prime}}=d^{4/3}_{v}d^{2/3}_{v}d_{w}d_{w^{\prime}}\leq\delta^{-2/3}(d_{v}d_{w}d_{w^{\prime}})^{4/3}\). Wrapping it up,
\[m^{-2}\sum_{v}\sum_{w:d_{w}\geq\delta d_{v}}\sum_{w^{\prime}:d_{ w^{\prime}}\geq\delta d_{v}}d_{v}^{2}d_{w}d_{w^{\prime}} \lessdot m^{-2}\sum_{v}\sum_{w:d_{w}\geq\delta d_{v}}\sum_{w^{\prime}:d_{ w^{\prime}}\geq\delta d_{v}}(d_{v}d_{w}d_{w^{\prime}})^{4/3}\]
\[\lessdot m^{-2}(\sum_{v}d_{v}^{4/3})^{3}.\]
∎
## 5 Proving tightness
We show that the bound achieved by Thm. 1.1 is tight for power laws with \(\alpha>2\). This shows that the bounds given in the proof of Cor. 1.3 are tight. The proof, as expected, goes by reversing most of the inequalities given earlier. For convenience, we will assume for the lower bound that \(d_{n}<\sqrt{m}/4\), instead of the \(\sqrt{m}/2\) used for the upper bound. This makes for cleaner technical arguments (we could just as well prove it for \(\sqrt{m}/2\), at the cost of more pain). Proofs are given in Appendix B.
**Claim 5.1**.: _Let \({\bf d}\) be a power law degree sequence with \(\alpha\in(2,7/3)\) with \(d_{n}<\sqrt{m}/4\). Then the expected number of \(P_{2}\)s enumerated by MinBucket over \(ECM({\bf d})\) is \(\Omega(nd^{7-3\alpha}_{n})\)._
## 6 Tighter bounds on the running time
Under a specific choice of degrees, we can pin down the running time of MinBucket up to lower order terms. Rather than starting with an arbitrary degree sequence, we draw the degree for each vertex independently at random from a reference degree distribution \({\cal D}\), given by pdf \(f\). Specifically, \(f(d)\) is the probability that a node draws degree value \(d\), for \(d\) an integer in \([0,\infty)\). After nodes draw degree values, the rest of the ECM construction proceeds as described in §1.1.
Formally, let \({\cal D}_{n}\) be the distribution with support \(\{1,2,\ldots,\lfloor\sqrt{n}/\log^{2}n\rfloor\}\), where the probability of \(d\) is proportional to \(f(d)\). Note that we do not allow a degree of \(0\) and cap the max degree at \(\sqrt{n}/\log^{2}n\) (instead of \(\sqrt{n}\)). These are mostly conveniences for a cleaner proof. We pick the degree sequence by taking \(n\) i.i.d. draws from \({\cal D}_{n}\). So, the degree sequence \({\bf d}\) is distributed according to the product \({\cal D}^{n}_{n}\). Then we generate an ECM with \({\bf d}\). For convenience, we denote \(1-1/\sum_{d\leq n}f(d)\) by \(\gamma_{n}\). Note that \(\gamma_{n}\to 0\), as \(n\rightarrow\infty\). The probability of \(d\) under \({\cal D}_{n}\) is \(f(d)(1-\gamma_{n})\). We use \(m=\sum_{v}d_{v}/2\) to denote the number of edges in the _multigraph_ and heavily use \(m\geq n/2\).
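A minimal Python sketch (ours) of this two-stage construction is given below; `pmf` stands for an assumed reference distribution \(f\), the natural logarithm is used in the cap \(\lfloor\sqrt{n}/\log^{2}n\rfloor\), and `erased_configuration_model` refers to the earlier illustrative sketch, not to code from the paper.

```python
import math
import random

def sample_degree_sequence(n, pmf, seed=None):
    """Draw n i.i.d. degrees from D_n: the reference pmf f restricted to
    {1, ..., floor(sqrt(n) / log^2 n)} and implicitly renormalized."""
    rng = random.Random(seed)
    cap = max(1, int(math.sqrt(n) / math.log(n) ** 2))
    support = list(range(1, cap + 1))
    weights = [pmf(d) for d in support]
    degrees = rng.choices(support, weights=weights, k=n)
    if sum(degrees) % 2:        # make the stub count even (a convenience
        degrees[0] += 1         # not discussed in the text)
    return degrees

# Example reference distribution (an assumption for illustration):
# a power law with f(d) proportional to d^(-2.5).
# degrees = sample_degree_sequence(10**6, lambda d: d ** -2.5, seed=1)
# graph = erased_configuration_model(degrees)   # sketch given earlier
```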
Our analysis assumes that when an edge joins two vertices of the same degree, the edge is placed in the buckets of both endpoints. Thus we slightly overcount the work for MinBucket. Let \(X_{i,n}\) be the size of the bucket for an arbitrary node \(i\) in a graph generated by ECM with \(n\) nodes. We wish to bound the expected triangle-searching work \(\hbox{\bf E}[\sum_{i=1}^{n}\binom{X_{i,n}}{2}]\) in an ECM graph, as the number of nodes \(n\rightarrow\infty\). We denote the \(r\)th moment, \(r>0\), of the reference degree distribution \(f\) by \(\hbox{\bf E}[d^{r}]=\sum_{t=1}^{\infty}t^{r}\cdot f(t)\). The main theorem is as follows.
**Theorem 6.1**.: _Fix any \(n\) and a degree distribution \({\cal D}\) such that \(\hbox{\bf E}[d]\) and \(\hbox{\bf E}[d^{4/3}]\) are finite. Then_
\[\lim_{n\rightarrow\infty}\frac{1}{n}\hbox{\bf E}\left[\sum_{i=1}^{n}{X_{i,n} \choose 2}\right]=\frac{1}{2(\hbox{\bf E}[d])^{2}}\sum_{t_{1}=1}^{\infty}\sum_ {t_{2}=t_{1}}^{\infty}\sum_{t_{3}=t_{1}}^{\infty}t_{1}(t_{1}-1)t_{2}t_{3}f(t_{ 1})f(t_{2})f(t_{3})\in(0,\infty).\]
Note that the expectation for the running time is over two “sources” of randomness, the degree sequence, and the actual configuration graph. Throughout this section, we use \(o(1)\) to denote a quantity that goes to zero as \(n\rightarrow\infty\). We use the shorthand \(A=B\pm C\) for \(A\in[B-C,B+C]\). This is often done with \(C=o(1)\).
The overall structure of the proof can be thought of as a three-stage process. First, we fix the degree sequence \(d_{1},d_{2},\ldots,d_{n}\) (and number of vertices \(n\)) and bound \(\sum_{i=1}^{n}{X_{i,n}\choose 2}\). This is analogous to what was done earlier in §4. We require a more careful analysis where we keep track of various constant factors, so that the final limit can be precisely stated. We then take expectations over the degree sequence. Finally, we take the limit \(n\rightarrow\infty\).
### Bin sizes for low degree vertices
We fix the degree sequence \({\bf d}=(d_{1},d_{2},\ldots,d_{n})\) and consider the probability space of \(ECM({\bf d})\). As before, we use \(D_{v}\) to denote the degree of \(v\) in the resulting simple graph. We denote the \(u\)-stubs (for all vertices \(u\)) by \(u_{1},u_{2},\ldots u_{d_{u}}\). Since \(n\) is fixed here, we drop the subscript \(n\) from \(X_{v,n}\).
The main lemma of this section is a tight bound on the wedge count in \(v\)’s bin, when \(d_{v}\) is small. This is a tighter analogue of Lem. 4.2.
**Lemma 6.2**.: _Suppose \(d_{v}\leq\log n\)._
\[\hbox{\bf E}[X_{v}(X_{v}-1)]=(1\pm o(1))d_{v}(d_{v}-1)\Big{(}\sum_{w\neq v:d_{ w}\geq d_{v}}\sum_{w^{\prime}\notin\{v,w\}:d_{w^{\prime}}\geq d_{v}}d_{w}d_{w^ {\prime}}\Big{)}/(2m)^{2}\pm o(1)\]
We first prove numerous intermediate claims about the following events. We use \(c\) to denote a large enough constant.
* \({\cal E}_{v}\): The event that \(D_{v}\neq d_{v}\).
* \({\cal F}_{v,b}\): Let \(b\) be a number. This is the event that \(D_{v}<b\).
* \({\cal U}_{u_{i},v_{j}}\): For stubs \(u_{i},v_{j}\), this is the event that the \(u_{i}\) is matched to \(v_{j}\) in the multigraph.
* \({\cal W}_{v,w,w^{\prime}}\): The event that wedge \(\{(v,w),(v,w^{\prime})\}\) is formed.
* \({\cal Z}_{v,w,w^{\prime}}\): The event that wedge \(\{(v,w),(v,w^{\prime})\}\) is in \(v\)’s bin.
Note that \({\cal W}_{v,w,w^{\prime}}=\bigcup_{i\neq j\leq d_{v},k\leq d_{w},\ell\leq d_{w ^{\prime}}}\Big{(}{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}} \Big{)}.\) It will be convenient to denote the index set \(\{(i,j,k,\ell)|i\neq j\leq d_{v},k\leq d_{w},\ell\leq d_{w^{\prime}}\}\) as \({\bf I}\).
We begin by getting handles on \({\cal E}_{v},{\cal F}_{v,b}\).
**Claim 6.3**.: _Fix distinct vertices \(v,w,w^{\prime}\), distinct stubs \(v_{i},v_{j},w_{k},w^{\prime}_{\ell}\), and an arbitrary vertex \(u\)._
* _For any_ \(u\) _such that_ \(d_{u}<c\log n\)_,_ \(Pr[{\cal E}_{u}|{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}]< (\log^{3}n)/\sqrt{m}\)_._
* _For any_ \(u\) _and_ \(b\) _such that_ \(d_{u}>b>\log n\)_, then_ \(Pr[{\cal F}_{u,b}|{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}} ]<(\log^{3}n)/\sqrt{m}\)_._
Proof.: Conditioning on the event \(E\equiv{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\) means that these stubs are matched, with \(2m-4\) stubs remaining in the multigraph; the event \(E\) itself creates no self-loops or multi-edges for vertex \(u\), even if \(u\in\{v,w,w^{\prime}\}\), as \(v,w,w^{\prime}\) are distinct. Let \(S\equiv\{u_{1},\ldots,u_{d_{u}}\}\setminus\{v_{i},w_{k},v_{j},w^{\prime}_{\ell}\}\). For any stubs \(u_{i},u_{j}\in S\) of vertex \(u\), \(i\neq j\), the event \(A_{u_{i},u_{j}}\equiv\)“\(u_{i},u_{j}\) pair” has probability \(P(A_{u_{i},u_{j}}|E)\leq(2m-5)^{-1}\), and the event \(B_{u_{i},u_{j},z}\equiv\)“\(u_{i},u_{j}\) pair to stubs of vertex \(z\neq u\)” has probability \(P(B_{u_{i},u_{j},z}|E)\leq d_{z}(d_{z}-1)/[(2m-5)(2m-7)]\). Then, \(Pr[{\cal E}_{u}|{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}]\) is at most
\[\sum_{\begin{subarray}{c}u_{i},u_{j}\in S\\ i<j\end{subarray}}\Big{(}\Pr[A_{u_{i},u_{j}}|E]+\sum_{z\neq u}\Pr[B_{u_{i},u_{j},z}|E]\Big{)},\]
which is bounded by \((c\log n)^{2}[2m-5]^{-1}+(c\log n)^{2}\sum_{z}d_{z}^{2}/[(2m-5)(2m-7)]<(\log n)^{3}/\sqrt{m}\), using \(\sum_{z}d_{z}^{2}\leq\sqrt{n}\cdot 2m\) and \(\sqrt{n/m}\leq\sqrt{2}\).
If \(d_{u}<c\log n\), the second part is simply a consequence of the first part. If \(d_{u}\geq c\log n\), we can apply Lem. 3.1 to bound the probability by \(1/\sqrt{m}\). ∎
Our next step is to try to remove the conditioning on \({\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\). We will need the _Boole-Bonferroni inequalities_ (Prop. C.2 in [25]).
**Theorem 6.4**.: _For any finite set of events \({\cal F}_{1},{\cal F}_{2},\ldots,{\cal F}_{r}\),_
\[\sum_{i\leq r}\Pr[{\cal F}_{i}]-\sum_{i<j\leq r}\Pr[{\cal F}_{i}\cap{\cal F}_{ j}]\leq\Pr[\bigcup_{i\leq r}{\cal F}_{i}]\leq\sum_{i\leq r}\Pr[{\cal F}_{i}]\]
We have a general claim about probabilities of events in conjunction with \({\cal W}_{v,w,w^{\prime}}\).
**Claim 6.5**.: _Fix an arbitrary event \({\cal C}\). Suppose for all \((i,j,k,\ell)\in{\bf I}\), \(\Pr[{\cal C}|{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}]\in[ B_{L},B_{U}]\). Then,_
\[\Pr[{\cal C}\cap{\cal W}_{v,w,w^{\prime}}]\leq B_{U}d_{v}(d_{v}-1 )d_{w}d_{w^{\prime}}/(2m-1)(2m-2)\]
\[\Pr[{\cal C}\cap{\cal W}_{v,w,w^{\prime}}]\geq(B_{L}-1/\log n)d_{ v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2m-1)(2m-2)\]
Proof.: Applying Thm. 6.4 to the events \({\cal C}\cap{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\),
\[\sum_{(i,j,k,\ell)\in{\bf I}}\Pr[{\cal C}\cap{\cal U}_{v_{i},w_{k }}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}]-\sum_{(i,j,k,\ell)\neq(\hat{i},\hat{ j},\hat{k},\hat{\ell})\in{\bf I}}\Pr[{\cal C}\cap{\cal U}_{v_{i},w_{k}}\cap{ \cal U}_{v_{j},w^{\prime}_{\ell}}\cap{\cal U}_{v_{\hat{i}},w_{\hat{k}}}\cap{ \cal U}_{v_{\hat{j}},w^{\prime}_{\hat{\ell}}}]\] (1)
\[\leq \Pr[{\cal C}\cap{\cal W}_{v,w,w^{\prime}}]\leq\sum_{(i,j,k,\ell) \in{\bf I}}\Pr[{\cal C}\cap{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime }_{\ell}}]\]
We bound each sum separately.
\[\sum_{(i,j,k,\ell)\in{\bf I}}\Pr[{\cal C}\cap{\cal U}_{v_{i},w_{k }}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}] = \sum_{(i,j,k,\ell)\in{\bf I}}\Pr[{\cal C}|{\cal U}_{v_{i},w_{k}} \cap{\cal U}_{v_{j},w^{\prime}_{\ell}}]\Pr[{\cal U}_{v_{i},w_{k}}\cap{\cal U}_ {v_{j},w^{\prime}_{\ell}}]\]
\[\leq B_{U}\sum_{(i,j,k,\ell)\in{\bf I}}\Pr[{\cal U}_{v_{i},w_{k}}\cap {\cal U}_{v_{j},w^{\prime}_{\ell}}]\]
\[= B_{U}|{\bf I}|/(2m-1)(2m-2)=B_{U}d_{v}(d_{v}-1)d_{w}d_{w^{\prime }}/(2m-1)(2m-2)\]
This completes the upper bound proof. By an identical argument, we get \(\sum_{(i,j,k,\ell)\in{\bf I}}\Pr[{\cal C}\cap{\cal U}_{v_{i},w_{k}}\cap{\cal U }_{v_{j},w^{\prime}_{\ell}}]\geq B_{L}d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2m-1) (2m-2)\). We deal with the double summation in the next claim. Applying this bound to (1) completes the proof. ∎
**Claim 6.6**.: \(\sum_{(i,j,k,\ell)\neq(\hat{i},\hat{j},\hat{k},\hat{\ell})\in{\bf I}}\Pr[{\cal C }\cap{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\cap{\cal U}_ {v_{\hat{i}},w_{\hat{k}}}\cap{\cal U}_{v_{\hat{j}},w^{\prime}_{\hat{\ell}}}] \leq(1/\log n)(d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/m^{2})\)_._
Proof.: We can simply upper bound \(\Pr[{\cal C}\cap{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}} \cap{\cal U}_{v_{\hat{i}},w_{\hat{k}}}\cap{\cal U}_{v_{\hat{j}},w^{\prime}_{ \hat{\ell}}}]\leq\Pr[{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{ \ell}}\cap{\cal U}_{v_{\hat{i}},w_{\hat{k}}}\cap{\cal U}_{v_{\hat{j}},w^{ \prime}_{\hat{\ell}}}]\).
Consider \((i,j,k,\ell)\neq(\hat{i},\hat{j},\hat{k},\hat{\ell})\). The corresponding event involves stub pairs \((v_{i},w_{k})\), \((v_{j},w^{\prime}_{\ell})\), \((v_{\hat{i}},w_{\hat{k}})\), and \((v_{\hat{j}},w^{\prime}_{\hat{\ell}})\). If all these pairs are distinct (even if the stubs are common), then \(\Pr[{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\cap{\cal U}_{ v_{\hat{i}},w_{\hat{k}}}\cap{\cal U}_{v_{\hat{j}},w^{\prime}_{\hat{\ell}}}] \leq 1/m^{4}\). The number of pairs of tuples \((i,j,k,\ell)\neq(\hat{i},\hat{j},\hat{k},\hat{\ell})\) is at most \(|{\bf I}|^{2}\).
Since \((i,j,k,\ell)\neq(\hat{i},\hat{j},\hat{k},\hat{\ell})\), at most one of the pairs can be the same. If (say) \((v_{i},w_{k})=(v_{\hat{i}},w_{\hat{k}})\), then \(\Pr[{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\cap{\cal U}_{ v_{\hat{i}},w_{\hat{k}}}\cap{\cal U}_{v_{\hat{j}},w^{\prime}_{\hat{\ell}}}] \leq 1/m^{3}\). The number of pairs of such \({\bf I}\)-tuples is at most \(|{\bf I}|(d_{v}d_{w}+d_{v}d_{w^{\prime}})\).
\[\sum_{(i,j,k,\ell),(\hat{i},\hat{j},\hat{k},\hat{\ell})\in{\bf I} }\Pr[{\cal U}_{v_{i},w_{k}}\cap{\cal U}_{v_{j},w^{\prime}_{\ell}}\cap{\cal U}_ {v_{\hat{i}},w_{\hat{k}}}\cap{\cal U}_{v_{\hat{j}},w^{\prime}_{\hat{\ell}}}]\]
\[\leq |{\bf I}|^{2}/m^{4}+|{\bf I}|(d_{v}d_{w}+d_{v}d_{w^{\prime}})/m^{ 3}=(d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/m^{2})(d_{v}(d_{v}-1)d_{w}d_{w^{\prime}} /m^{2}+d_{v}(d_{w}+d_{w^{\prime}})/m)\]
Since \(d_{v},d_{w},d_{w^{\prime}}\leq\sqrt{n}/\log^{2}n\) and \(m\geq n/2\), the final multiplier is at most \(1/\log n\). ∎
As a corollary of Claim 6.5, we can set \({\cal C}={\cal W}_{v,w,w^{\prime}}\) and \(B_{L}=B_{U}=1\) to get the following.
**Claim 6.7**.: \(\Pr[{\cal W}_{v,w,w^{\prime}}]=(1\pm 1/\log n)d_{v}(d_{v}-1)d_{w}d_{w^{\prime} }/(2m-1)(2m-2)\)__
Now for an important lemma that bounds the probabilities of \({\cal Z}_{v,w,w^{\prime}}\).
**Lemma 6.8**.: _Fix distinct vertices \(v,w,w^{\prime}\), such that \(d_{v}<c\log n\)._
* _If_ \(\min\{d_{w},d_{w^{\prime}}\}\geq d_{v}\)_,_ \(\Pr[{\cal Z}_{v,w,w^{\prime}}]=(1\pm o(1))d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2 m)^{2}\)_._
* _If_ \(\min\{d_{w},d_{w^{\prime}}\}<d_{v}\)_,_ \(\Pr[{\cal Z}_{v,w,w^{\prime}}]=O((1/m^{1/4})\cdot d_{v}(d_{v}-1)d_{w}d_{w^{ \prime}}/m^{2})\)_._
Proof.: Start with the first case of \(\min\{d_{w},d_{w^{\prime}}\}\geq d_{v}\). Since \({\cal Z}_{v,w,w^{\prime}}\subset{\cal W}_{v,w,w^{\prime}}\), \(\Pr[{\cal Z}_{v,w,w^{\prime}}]\leq\Pr[{\cal W}_{v,w,w^{\prime}}]\) and Claim 6.7 completes the upper bound.
Consider the event \({\cal W}_{v,w,w^{\prime}}\cap\overline{{\cal E}_{v}}\cap\overline{{\cal F}_{w, d_{v}}}\cap\overline{{\cal F}_{w^{\prime},d_{v}}}\). In this case, \(D_{v}=d_{v}\) and \(D_{w},D_{w^{\prime}}\geq d_{v}\), so \({\cal Z}_{v,w,w^{\prime}}\) contains this event.
\[\Pr[{\cal Z}_{v,w,w^{\prime}}] \geq \Pr[{\cal W}_{v,w,w^{\prime}}\cap\overline{{\cal E}_{v}}\cap \overline{{\cal F}_{w,d_{v}}}\cap\overline{{\cal F}_{w^{\prime},d_{v}}}]\]
\[= \Pr[{\cal W}_{v,w,w^{\prime}}\cap\overline{{\cal E}_{v}\cup{\cal F }_{w,d_{v}}\cup{\cal F}_{w^{\prime},d_{v}}}]\]
\[= \Pr[{\cal W}_{v,w,w^{\prime}}]-\Pr[{\cal W}_{v,w,w^{\prime}}\cap( {\cal E}_{v}\cup{\cal F}_{w,d_{v}}\cup{\cal F}_{w^{\prime},d_{v}})]\]
\[\geq \Pr[{\cal W}_{v,w,w^{\prime}}]-\Pr[{\cal W}_{v,w,w^{\prime}}\cap{ \cal E}_{v}]-\Pr[{\cal W}_{v,w,w^{\prime}}\cap{\cal F}_{w,d_{v}}]-\Pr[{\cal W} _{v,w,w^{\prime}}\cap{\cal F}_{w^{\prime},d_{v}}]\]
(The last inequality follows by a union bound.) We can bound \(\Pr[{\cal W}_{v,w,w^{\prime}}\cap{\cal E}_{v}]\) by applying the upper bound of Claim 6.5 with \(B_{U}=(\log^{3}n)/\sqrt{m}\) (as obtained from Claim 6.3). This gives an upper bound of \(((\log^{3}n)/\sqrt{m})d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2m)^{2}\)\(=o(1)\cdot d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2m)^{2}\). Identical arguments hold for \(\Pr[{\cal W}_{v,w,w^{\prime}}\cap{\cal F}_{w,d_{v}}]\) and \(\Pr[{\cal W}_{v,w,w^{\prime}}\cap{\cal F}_{w^{\prime},d_{v}}]\). Using the bound from Claim 6.7 for \(\Pr[{\cal W}_{v,w,w^{\prime}}]\), \(\Pr[{\cal Z}_{v,w,w^{\prime}}]\geq(1-o(1))d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2 m)^{2}\). This completes the first case.
Now, suppose \(\min\{d_{w},d_{w^{\prime}}\}<d_{v}\), and assume without loss of generality that \(d_{w}<d_{v}\) (the case \(d_{w^{\prime}}<d_{v}\) is symmetric). Note that \({\cal Z}_{v,w,w^{\prime}}\) is contained in \({\cal W}_{v,w,w^{\prime}}\cap({\cal E}_{v}\cup{\cal E}_{w})\). This is because when \(\overline{{\cal E}_{v}\cup{\cal E}_{w}}\) occurs, \(D_{v}=d_{v}>d_{w}=D_{w}\), so the wedge cannot be present in \(v\)’s bin. By the union bound, \(\Pr[{\cal Z}_{v,w,w^{\prime}}]\leq\Pr[{\cal W}_{v,w,w^{\prime}}\cap{\cal E}_{v}]+\Pr[{\cal W}_{v,w,w^{\prime}}\cap{\cal E}_{w}]\). Using the argument above, this is at most \(((2\log^{3}n)/\sqrt{m})d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/(2m)^{2}=O((1/m^{1/4})\cdot d_{v}(d_{v}-1)d_{w}d_{w^{\prime}}/m^{2})\). ∎
We are ready to prove the main lemma.
Proof.: (of Lem. 6.2) Note that \(\hbox{\bf E}[X_{v}(X_{v}-1)]=\hbox{\bf E}[\sum_{w\neq v}\sum_{w^{\prime}\notin \{v,w\}}\mathbb{I}({\cal Z}_{v,w,w^{\prime}})]\). By linearity of expectation, this sum is \(\sum_{w\neq v}\sum_{w^{\prime}\notin\{v,w\}}\Pr[{\cal Z}_{v,w,w^{\prime}}]\), which can be split as follows.
\[\sum_{w\neq v}\sum_{w^{\prime}\notin\{v,w\}}\Pr[{\cal Z}_{v,w,w^{ \prime}}] = \sum_{w\neq v:d_{w}\geq d_{v}}\sum_{w^{\prime}\notin\{v,w\}:d_{w^ {\prime}}\geq d_{v}}\Pr[{\cal Z}_{v,w,w^{\prime}}]+\sum_{w\neq v:d_{w}\geq d_{ v}}\sum_{w^{\prime}\notin\{v,w\}:d_{w^{\prime}}<d_{v}}\Pr[{\cal Z}_{v,w,w^{ \prime}}]\]
\[+\sum_{w\neq v:d_{w}<d_{v}}\sum_{w^{\prime}\notin\{v,w\}}\Pr[{ \cal Z}_{v,w,w^{\prime}}]\]
We deal with each of these summations using Lem. 6.8. In the first summation, \(\min\{d_{w},d_{w^{\prime}}\}\geq d_{v}\).
\[\sum_{w\neq v:d_{w}\geq d_{v}}\sum_{w^{\prime}\notin\{v,w\}:d_{w^{\prime}}\geq d _{v}}\Pr[{\cal Z}_{v,w,w^{\prime}}]=(1\pm o(1))d_{v}(d_{v}-1)\sum_{w\neq v:d_{ w}\geq d_{v}}\sum_{w^{\prime}\notin\{v,w\}:d_{w^{\prime}}\geq d_{v}}d_{w}d_{w^ {\prime}}/(2m)^{2}\]
In the second summation, \(\min\{d_{w},d_{w^{\prime}}\}<d_{v}\).
\[\sum_{w\neq v:d_{w}\geq d_{v}}\sum_{w^{\prime}\notin\{v,w\}:d_{w^{\prime}}<d_{ v}}\Pr[{\cal Z}_{v,w,w^{\prime}}]\lessdot d_{v}(d_{v}-1)/m^{2+1/4}\sum_{w}\sum _{w^{\prime}}d_{w}d_{w^{\prime}}\lessdot(\log n)^{2}/m^{1/4}=o(1)\]
(We use the fact that \(\sum_{w}d_{w}=2m\), and the bound \(d_{v}<c\log n\).) The third summation can be handled similarly, completing the proof. ∎
### Expectation over \({\cal D}_{n}\)
Our aim is to take the expectation of Lem. 6.2 over the degrees. We will distinguish between the sources of randomness by using \(\hbox{\bf E}_{{\bf d}}[\ldots]\) to denote expectations over \({\bf d}\sim{\cal D}^{n}_{n}\). We use \(\hbox{\bf E}_{G}[\ldots]\) for the expectation over the graph chosen from \(ECM({\bf d})\). Because all vertices are identically distributed, we just focus on the first vertex. The main lemma is a fairly precise expression for the expectation of Lem. 6.2. We denote the degree threshold \(\sqrt{n}/\log^{2}n\) for \({\cal D}_{n}\) by \(M(n)\).
**Lemma 6.9**.:
\[\hbox{\bf E}_{{\bf d}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)] = \big{(}(1\pm o(1))/\hbox{\bf E}[d_{2}]^{2}\big{)}\Big{(}\sum_{t_{1}\leq\log n}\sum_{t_{2}=t_{1}}^{M(n)}\sum_{t_{3}=t_{1}}^{M(n)}t_{1}(t_{1}-1)t_{2}t_{3}f(t_{1})f(t_{2})f(t_{3})\Big{)}\pm o(1)\] (1)
\[+O\Big{(}\sum_{t_{1}>\log n}^{M(n)}\sum_{t_{2}=\delta t_{1}}^{M(n )}\sum_{t_{3}=\delta t_{1}}^{M(n)}t^{2}_{1}t_{2}t_{3}f(t_{1})f(t_{2})f(t_{3}) \Big{)}\]
As a first step, we condition on the choice of \(d_{1}\) and choose the other degrees according to \({\cal D}_{n}\). We denote the conditional expectation over this distribution by \(\hbox{\bf E}_{{\bf d}_{-1}}\). We will need the following claim about the concentration of \(m\).
**Claim 6.10**.: _With probability \(>1-n^{-\log n}\), \(|m-\hbox{\bf E}_{{\bf d}_{-1}}[m]|\leq n/\log n\)._
Proof.: We have \(m=\sum_{v}d_{v}/2\) and \(\hbox{\bf E}_{{\bf d}_{-1}}[m]=d_{1}/2+\sum_{v\neq 1}\hbox{\bf E}_{\bf d}[d_{v}/2]\). Note that each \(d_{v}/2\) lies in an interval of length at most \(\sqrt{n}/\log^{2}n\), and they are all independent. By Hoeffding’s inequality [20], \(\Pr[|\sum_{v\neq 1}d_{v}/2-\hbox{\bf E}_{\bf d}[\sum_{v\neq 1}d_{v}/2]|\geq n/\log n]<2\exp(-2(n/\log n)^{2}/\sum_{v\neq 1}(\sqrt{n}/\log^{2}n)^{2})=2\exp(-2\log^{2}n)<n^{-\log n}\). ∎
As a step towards Lem. 6.9, we condition on \(d_{1}\). When \(d_{1}\) is small, we can use Lem. 6.2 of the previous section.
**Claim 6.11**.: _Suppose \(d_{1}\leq\log n\). Then_
\[\hbox{\bf E}_{{\bf d}_{-1}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)]=\big{(}(1\pm o(1))d_{1}(d_{1}-1)/\hbox{\bf E}[d_{2}]^{2}\big{)}\Big{(}\sum_{t_{2}=d_{1}}^{M(n)}\sum_{t_{3}=d_{1}}^{M(n)}t_{2}t_{3}f(t_{2})f(t_{3})\Big{)}\pm o(1),\]
_where the order bound \(o(1)\) applies uniformly over \(d_{1}\leq\log n\)._
Proof.: It is convenient to define random variable \(T_{w}\) where \(T_{w}=0\), if \(d_{w}<d_{1}\), and \(T_{w}=d_{w}\) if \(d_{w}\geq d_{1}\). We can rewrite the bound of Lem. 6.2 as
\[\hbox{\bf E}_{G}[X_{1}(X_{1}-1)]=(1\pm o(1))d_{1}(d_{1}-1)\Big{(}\sum_{w>1} \sum_{w^{\prime}\notin\{1,w\}}T_{w}T_{w^{\prime}}\Big{)}/(2m)^{2}\pm o(1)\]
It will be convenient to denote the double summation by \(A\). Now, we take expectations over the \({\bf d}_{-1}=(d_{2},\ldots,d_{n})\). There is a slight technical difficulty, since \(m\) is itself a random variable. We use some Bayes’ rule manipulations to handle this. Let \({\cal C}\) denote the event that \(m\in[(1-1/\log n)\hbox{\bf E}_{{\bf d}_{-1}}[m],(1+1/\log n)\hbox{\bf E}_{{\bf d }_{-1}}[m]]\).
\[\hbox{\bf E}_{{\bf d}_{-1}}[d_{1}(d_{1}-1)A/m^{2}]=\hbox{\bf E}_{{\bf d}_{-1}} [d_{1}(d_{1}-1)A/m^{2}|{\cal C}]\Pr[{\cal C}]+\hbox{\bf E}_{{\bf d}_{-1}}[d_{1 }(d_{1}-1)A/m^{2}|\overline{{\cal C}}]\Pr[\overline{{\cal C}}]\]
Since \(d_{1}(d_{1}-1)A/m^{2}\leq n^{4}\) and by Claim 6.10, \(\Pr[\overline{{\cal C}}]\leq n^{-\log n}\), the latter term is \(o(1)\). By definition of \({\cal C}\), \(\hbox{\bf E}_{{\bf d}_{-1}}[1/m^{2}|{\cal C}]=(1\pm o(1))/\hbox{\bf E}_{{\bf d }_{-1}}[m]^{2}\).
\[\hbox{\bf E}_{{\bf d}_{-1}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)]=(1\pm o (1))d_{1}(d_{1}-1)\hbox{\bf E}_{{\bf d}_{-1}}[A|{\cal C}]/(2\hbox{\bf E}_{{\bf d }_{-1}}[m])^{2}\pm o(1)\]
We defer the bound of \(\hbox{\bf E}_{{\bf d}_{-1}}[A|{\cal C}]\) to Claim 6.12. Let us first apply Claim 6.12 to prove the main lemma.
\[\hbox{\bf E}_{{\bf d}_{-1}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)]\]
\[= (1\pm o(1))d_{1}(d_{1}-1)(n/2\hbox{\bf E}_{{\bf d}_{-1}}[m])^{2} \sum_{t_{2}=d_{1}}^{M(n)}\sum_{t_{3}=d_{1}}^{M(n)}t_{2}t_{3}f(t_{2})f(t_{3}) \pm o(1)\pm d_{1}(d_{1}-1)/(2\hbox{\bf E}_{{\bf d}_{-1}}[m])^{2}\]
Since \(d_{1}<\sqrt{n}/\log n\) and \(\hbox{\bf E}_{{\bf d}_{-1}}[m]\geq n/2\), the final term is \(o(1)\). Note that \(2\hbox{\bf E}_{{\bf d}_{-1}}[m]=d_{1}+\sum_{v>1}\hbox{\bf E}_{{\bf d}_{-1}}[d_ {v}]=d_{1}+(n-1)\hbox{\bf E}[d_{2}]=(1\pm o(1))n\hbox{\bf E}[d_{2}]\). Plugging this bound in, the proof is completed. ∎
**Claim 6.12**.: \(\hbox{\bf E}_{{\bf d}_{-1}}[A|{\cal C}]=(1\pm o(1))n^{2}\sum_{t_{2}=d_{1}}^{M( n)}\sum_{t_{3}=d_{1}}^{M(n)}t_{2}t_{3}f(t_{2})f(t_{3})\pm o(1)\)_, where the order bound \(o(1)\) applies uniformly over \(d_{1}\leq\log n\)._
Proof.: By Bayes’ rule, \(\hbox{\bf E}_{{\bf d}_{-1}}[A|{\cal C}]=(\Pr[{\cal C}]^{-1})(\hbox{\bf E}_{{ \bf d}_{-1}}[A]-\hbox{\bf E}_{{\bf d}_{-1}}[A|\overline{{\cal C}}]\Pr[ \overline{{\cal C}}])\). Since \(A\leq n^{4}\) and \(\Pr[\overline{{\cal C}}]<n^{-\log n}\), \(\hbox{\bf E}_{{\bf d}_{-1}}[A|\overline{{\cal C}}]\Pr[\overline{{\cal C}}]=o(1)\). Therefore, \(\hbox{\bf E}_{{\bf d}_{-1}}[A|{\cal C}]=(1\pm o(1))\hbox{\bf E}_{{\bf d}_{-1}} [A]-o(1)\). Writing out \(A\),
\[\hbox{\bf E}_{{\bf d}_{-1}}\Big{[}\sum_{w>1}\sum_{w^{\prime} \notin\{1,w\}}T_{w}T_{w^{\prime}}\Big{]}=\sum_{w>1}\sum_{w^{\prime}\notin\{1,w \}}\hbox{\bf E}_{{\bf d}_{-1}}[T_{w}]\hbox{\bf E}_{{\bf d}_{-1}}[T_{w^{\prime}}]\]
Because degrees are drawn independently, \(\hbox{\bf E}_{{\bf d}_{-1}}[T_{w}T_{w^{\prime}}]=\hbox{\bf E}_{{\bf d}_{-1}}[T _{w}]\hbox{\bf E}_{{\bf d}_{-1}}[T_{w^{\prime}}]\).
\[\hbox{\bf E}_{{\bf d}_{-1}}[T_{w}]=\hbox{\bf E}_{{\bf d}_{-1}}[T_{w^{\prime}}] =(1-\gamma_{n})\sum_{t_{2}=d_{1}}^{M(n)}t_{2}f(t_{2})\]
Plugging this bound into the previous equation, we get
\[\hbox{\bf E}_{{\bf d}_{-1}}[A]=(n-1)(n-2)(1-\gamma_{n})^{2}\sum_{ t_{2}=d_{1}}^{M(n)}\sum_{t_{3}=d_{1}}^{M(n)}t_{2}t_{3}f(t_{2})f(t_{3})\]
We use the fact that \((n-1)(n-2)=(1\pm o(1))n^{2}\) and \(\gamma_{n}=o(1)\) to get the final proof. ∎
We require a bound for large \(d_{1}\). This can be directly obtained with the looser arguments of Lem. 4.2.
**Claim 6.13**.: _Suppose \(d_{1}>\log n\)._
\[\hbox{\bf E}_{{\bf d}_{-1}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)]=O\Big{(}d^{2}_{1} \sum_{t_{2}=\delta d_{1}}^{M(n)}\sum_{t_{3}=\delta d_{1}}^{M(n)}t_{2}t_{3}f(t_ {2})f(t_{3})\Big{)}+o(1),\]
_where the order bound \(o(1)\) applies uniformly over \(d_{1}>\log n\)._
Proof.: The first term in Lem. 4.2 is \(\exp(-\delta d_{1})d^{2}_{1}\), which is \(o(1)\) for \(d_{1}>\log n\). (The constant \(\delta\) comes from Lem. 4.2.) Redefine \(T_{w}\) to be \(d_{w}\) if \(d_{w}\geq\delta d_{1}\) and \(0\) otherwise. Most of the following calculations are similar to those in the proof above.
\[\hbox{\bf E}_{{\bf d}_{-1}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)] \lessdot \hbox{\bf E}_{{\bf d}_{-1}}[m^{-2}d_{1}(d_{1}-1)\sum_{w>1}\sum_{w ^{\prime}\notin\{1,w\}}T_{w}T_{w^{\prime}}]\pm o(1)\]
\[\lessdot n^{-2}d_{1}(d_{1}-1)\sum_{w>1}\sum_{w^{\prime}\notin\{1,w\}} \hbox{\bf E}_{{\bf d}_{-1}}[T_{w}]\hbox{\bf E}_{{\bf d}_{-1}}[T_{w^{\prime}}]\]
\[\lessdot d^{2}_{1}\sum_{t_{2}=\delta d_{1}}^{M(n)}\sum_{t_{3}=\delta d_{1 }}^{M(n)}t_{2}t_{3}f(t_{2})f(t_{3})\]
∎
Lem. 6.9 follows directly by applications of Claim 6.11 and Claim 6.13. We simply express \(\hbox{\bf E}_{{\bf d}}\hbox{\bf E}_{G}[X_{1}(X_{1}-1)]\) as \(\sum^{M(n)}_{t_{1}=1}f(t_{1})\hbox{\bf E}_{{\bf d}_{-1}}\hbox{\bf E}_{G}[X_{1} (X_{1}-1)|d_{1}=t_{1}]\). When \(t_{1}\leq\log n\), we apply Claim 6.11. Otherwise, we use Claim 6.13.
### Taking the limit
We prove the main theorem, restated for convenience.
**Theorem 6.14**.: _Fix a degree distribution \({\cal D}\) such that \(\hbox{\bf E}[d]\) and \(\hbox{\bf E}[d^{4/3}]\) are bounded. Then_
\[\lim_{n\rightarrow\infty}\frac{1}{n}\hbox{\bf E}\left[\sum_{i=1}^{n}{X_{i,n} \choose 2}\right]=\frac{1}{2(\hbox{\bf E}[d])^{2}}\sum_{t_{1}=1}^{\infty}\sum_ {t_{2}=t_{1}}^{\infty}\sum_{t_{3}=t_{1}}^{\infty}t_{1}(t_{1}-1)t_{2}t_{3}f(t_{ 1})f(t_{2})f(t_{3})\in(0,\infty).\]
Proof.: By linearity of expectation and the fact that all degrees are chosen identically, \(\frac{1}{n}\hbox{\bf E}\left[\sum_{i=1}^{n}{X_{i,n}\choose 2}\right]=\hbox{\bf E }[X_{1,n}(X_{1,n}-1)/2]\). So we only need the limit of the expression in Lem. 6.9. We first show the second summation is negligible.
\[\sum_{t_{1}>\log n}\sum_{t_{2}=\delta t_{1}}^{M(n)}\sum_{t_{3}=\delta t_{1}}^{M(n)}t^{2}_{1}t_{2}t_{3}f(t_{1})f(t_{2})f(t_{3})\]
\[\leq \delta^{-2/3}\sum_{t_{1}>\log n}\sum_{t_{2}=\delta t_{1}}^{M(n)}\sum_{t_{3}=\delta t_{1}}^{M(n)}t^{4/3}_{1}t^{4/3}_{2}t^{4/3}_{3}f(t_{1})f(t_{2})f(t_{3})\]
\[\leq \delta^{-2/3}\hbox{\bf E}[d^{4/3}]^{2}\sum_{d>\log n}d^{4/3}f(d)\]
Since \(\hbox{\bf E}[d^{4/3}]=\sum_{t=1}^{\infty}t^{4/3}f(t)\) is finite, \(\lim_{n\rightarrow\infty}\sum_{t>\log n}t^{4/3}f(t)=0\). For the first triple summation in Lem. 6.9, again, we can upper bound the term by \(O(\hbox{\bf E}[d^{4/3}]^{3})\). It is also nonnegative and monotonically increasing with \(n\), so by the monotone convergence theorem, it converges to the limit given in the theorem statement.
∎
## 7 Experimental Analysis
We experimentally show that the theoretical analysis of §6 does a reasonable job of capturing the expected performance of MinBucket on ECM graphs. Though real-world graphs likely have additional structure, this partially explains the good performance of MinBucket in practice.
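As a concrete illustration of the kind of experiment described next, the following Python sketch draws i.i.d. degrees from a truncated power law, builds an erased configuration model (ECM) graph, charges each edge to its lower-degree endpoint (our reading of the MinBucket bucketing rule), and reports the total work \(\sum_{v}{X_{v}\choose 2}\). The truncation rule, tie-breaking, and stub-pairing details are simplifying assumptions, not the exact code behind Figure 1.

```python
import random
from collections import Counter

def sample_degrees(n, alpha, rng):
    """Draw n i.i.d. degrees from f(d) ~ d^(-alpha), truncated at sqrt(n)."""
    dmax = int(n ** 0.5)
    support = range(1, dmax + 1)
    weights = [d ** (-alpha) for d in support]
    return rng.choices(support, weights=weights, k=n)

def ecm_graph(degrees, rng):
    """Erased configuration model: pair stubs uniformly, drop loops and multi-edges."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    edges = set()
    for i in range(0, len(stubs) - 1, 2):
        u, v = stubs[i], stubs[i + 1]
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return edges

def minbucket_work(edges):
    """Charge each edge to its lower-degree endpoint; work is the sum of C(X_v, 2)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    bucket = Counter()
    for u, v in edges:
        # Ties broken by vertex id -- an assumption; the analysis does not depend on the rule.
        owner = u if (deg[u], u) <= (deg[v], v) else v
        bucket[owner] += 1
    return sum(x * (x - 1) // 2 for x in bucket.values())

rng = random.Random(0)
n = 100_000
degrees = sample_degrees(n, alpha=2.4, rng=rng)
edges = ecm_graph(degrees, rng)
print("work per vertex:", minbucket_work(edges) / n)
```

Repeating this for several values of \(n\) and averaging over trials gives curves of the sort plotted in Figure 1.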
We generated ECM graphs of various sizes based on power-law degree distributions; the exponent \(\alpha=2.4\) guarantees a finite \(\frac{4}{3}\) moment for the degree distribution. Figure 1(a) shows the average value of \(\sum_{i=1}^{n}{X_{i,n}\choose 2}\), the total work over all buckets, computed over 10 Monte Carlo trials (i.e., taken as an approximation of \(\mathrm{E}[\sum_{i=1}^{n}{X_{i,n}\choose 2}]\)) for ECM graphs of various sizes up to \(n=80\) million. Degrees are truncated at \(\sqrt{n}\). The ECM graphs use power-law reference degree distributions for \(\alpha=2.3\), where MinBucket runs in superlinear time, and for \(\alpha=2.4\), where it runs in linear time. Figure 1(a) also shows the theoretical linear bound on the overall expected work \(\mathrm{E}[\sum_{i=1}^{n}{X_{i,n}\choose 2}]\) for a power-law degree distribution with \(\alpha=2.4\). The constant is at most:
\[\lim_{n\to\infty}\frac{1}{n}\mathrm{E}\left[\sum_{i=1}^{n}{X_{i,n}\choose 2} \right]\equiv C=\frac{1}{2(\mathrm{E}[D])^{2}}\sum_{d_{1}=0}^{\infty}\sum_{d_{ 2}=d_{1}}^{\infty}\sum_{d_{3}=d_{1}}^{\infty}d_{1}(d_{1}-1)d_{2}d_{3}f(d_{1})f (d_{2})f(d_{3})\approx 0.687935.\]
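The triple sum above can be approximated numerically by truncating it at a finite cutoff; the tail sums over \(d_{2}\) and \(d_{3}\) factor out, so the computation is linear in the cutoff. The sketch below assumes a pure Zipf law \(f(d)\propto d^{-\alpha}\) on \(d\geq 1\); the exact value 0.687935 quoted above depends on the precise reference degree distribution used in the experiments, which this simple choice may not reproduce.

```python
def work_constant(alpha, cutoff=10**6):
    """Approximate C = 1/(2 E[D]^2) * sum_{d1} d1 (d1 - 1) f(d1) * (sum_{d >= d1} d f(d))^2
    for f(d) proportional to d^(-alpha) on 1 <= d <= cutoff (a simple assumption)."""
    f = [d ** (-alpha) for d in range(1, cutoff + 1)]
    z = sum(f)                                  # normalisation constant
    f = [x / z for x in f]
    mean_deg = sum(d * f[d - 1] for d in range(1, cutoff + 1))
    # tail[d1] = sum_{d >= d1} d f(d), built from the right
    tail = [0.0] * (cutoff + 2)
    for d in range(cutoff, 0, -1):
        tail[d] = tail[d + 1] + d * f[d - 1]
    total = sum(d * (d - 1) * f[d - 1] * tail[d] ** 2 for d in range(2, cutoff + 1))
    return total / (2 * mean_deg ** 2)

# Convergence in the cutoff is slow because the summand decays like d^(-1.2) for alpha = 2.4.
print(work_constant(2.4))
```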
As \(n\rightarrow\infty\), we would anticipate from Thm. 6.1 that the value of \(\mathrm{E}[\sum_{i=1}^{n}{X_{i,n}\choose 2}]\) approximated by Monte Carlo trials should approach \(nC\), also shown in Figure 1(a). Figure 1(b) shows the ratio of work to the number of nodes \(n\). For power law distributions with \(\alpha=2\), this ratio is not a constant. But for \(\alpha=2.4\), the ratio levels off below \(1\).
<figure><img src="content_image/1407.1116/x1.png"><figcaption>Figure 1: Experimental results with n-node ECM graphs with degrees ≤n1/2 drawnfrom a power-law distribution with exponent α=2.3 or 2.4. The red solid lineshows n. The green dashed line shows a theoretical bound 0.687935n on theexpected number of pairs in buckets for an ECM graph with n vertices andexponent 2.4. The blue crosses shows the average value of pairs observed in 10generations of ECM graphs. The magenta line and blue boxes show the same forexponent 2.3. (b) Experimental values of the ratio of work to n for power lawexponents 2, 2.3, and 2.4.</figcaption></figure>
## Acknowledgments
This work was funded under the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program. We thank Ali Pinar for suggestions on improving the presentation.
## References
* [ACL01] W. Aiello, F. Chung, and L. Lu. A random graph model for power law graphs. _Experimental Mathematics_, 10:53–66, 2001.
* [AYZ97] N. Alon, R. Yuster, and U. Zwick. Finding and counting given length cycles. _Algorithmica_, 17:354–364, 1997.
* [BA99] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. _Science_, 286:509–512, October 1999.
* [BC78] E. A. Bender and E.R. Canfield. The asymptotic number of labeled graphs with given degree sequences. _Journal of Combinatorial Theory A_, 24:296–307, 1978.
* [BDML06] T. Britton, M. Deijfen, and A. Martin-Löf. Generating simple random graphs with prescribed degree distribution. _Journal of Statistical Physics_, 124(6), September 2006.
* [BHLP11] J. W. Berry, B. Hendrickson, R. A. LaViolette, and C. A. Phillips. Tolerating the Community Detection Resolution Limit with Edge Weighting. _Physical Review E_, 83(5), May 2011.
* [BKM\({}^{+}\)00] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener. Graph structure in the web. _Computer Networks_, 33:309–320, 2000.
* [Bol80] B. Bollobás. A probabilistic proof of an asymptotic formula for the number of labelled regular graphs. _European Journal on Combinatorics_, 1:311–316, 1980.
* [Bur04] R. S. Burt. Structural holes and good ideas. _American Journal of Sociology_, 110(2):349–399, 2004.
* [Bur07] R. S. Burt. Secondhand brokerage: Evidence on the importance of local structure for managers, bankers, and analysts. _Academy of Management Journal_, 50, 2007.
* [CE91] M. Chrobak and D. Eppstein. Planar orientations with low out-degree and compaction of adjacency matrices. _Theoretical Computer Science_, 86:243–266, 1991.
* [CL02] F. Chung and L. Lu. The average distances in random graphs with given expected degrees. _PNAS_, 99:15879–15882, 2002.
* [CLV03] F. Chung, L. Lu, and V. Vu. Eigenvalues of random power law graphs. _Annals of Combinatorics_, 7:21–33, 2003.
* [CN85] N. Chiba and T. Nishizeki. Arboricity and subgraph listing algorithms. _SIAM J. Comput._, 14:210–223, February 1985.
* [Coh09] J. Cohen. Graph twiddling in a MapReduce world. _Computing in Science & Engineering_, 11:29–41, 2009.
* [Col88] J. S. Coleman. Social capital in the creation of human capital. _American Journal of Sociology_, 94:S95–S120, 1988.
* [FFF99] M. Faloutsos, P. Faloutsos, and C. Faloutsos. On power-law relationships of the internet topology. In _Proceedings of SIGCOMM_, pages 251–262, 1999.
* [FH97] I. Fudos and C. M. Hoffmann. A graph-constructive approach to solving systems of geometric constraints. _ACM Transactions on Graphics_, 16(2):179–216, 1997.
* [FWVDC10] B. Foucault Welles, A. Van Devender, and N.Noshir Contractor. Is a friend a friend?: Investigating the structure of friendship networks in virtual worlds. In _CHI-EA’10_, pages 4027–4032, 2010.
* [Hoe63] W. Hoeffding. Probability inequalities for sums of bounded random variables. _Journal of the American Statistical Association_, 58:13–30, 1963.
* [IR78] A. Itai and M. Rodeh. Finding a minimum circuit in a graph. _SIAM Journal on Computing_, 7:413–423, 1978.
* [Lat08] M. Latapy. Main-memory triangle computations for very large (sparse (power-law)) graphs. _Theoretical Computer Science_, 407:458–473, 2008.
* [MP02] M. Mihail and C. Papadimitriou. On the eigenvalue power law. In _RANDOM_, pages 254–262, 2002.
* [MR95a] M. Molloy and B. Reed. A critical point for random graphs with a given degree sequence. _Random Structures and Algorithms_, 6:161–179, 1995.
* [MR95b] R. Motwani and P. Raghavan. _Randomized Algorithms_. Cambridge University Press, 1995.
* [MR98] M. Molloy and B. Reed. The size of the giant component of a random graph with a given degree sequence. _Combinatorics, Probability and Computing_, 7:295–305, 1998.
* [New03] M.E.J. Newman. The structure and function of complex networks. _SIAM Review_, 45:167–256, 2003.
* [NSW01] M. E. J. Newman, S. Strogatz, and D. Watts. Random graphs with arbitrary degree distributions and their applications. _Physical Review E_, 64:026118, 2001.
* [Por98] A. Portes. Social capital: Its origins and applications in modern sociology. _Annual Review of Sociology_, 24(1):1–24, 1998.
* [SV11] S. Suri and S. Vassilvitskii. Counting triangles and the curse of the last reducer. In _WWW’11_, pages 607–614, 2011.
* [SW05a] T. Schank and D. Wagner. Finding, counting and listing all triangles in large graphs, an experimental study. In _Experimental and Efficient Algorithms_, pages 606–609. Springer Berlin / Heidelberg, 2005.
* [SW05b] T. Schank and D. Wagner. Finding, counting, and listing all triangles in large graphs: an experimental study. _Workshop on Experimental and Efficient Algorithms (WEA)_, 2005.
* [Tso08] C. E. Tsourakakis. Fast counting of triangles in large real networks without counting: Algorithms and laws. In _ICDM_, pages 608–617, 2008.
* [Wor81] N. C. Wormald. The asymptotic connectivity of labelled regular graphs. _Journal of Combinatorial Theory B_, 31:156–167, 1981.
* [WS98] D. Watts and S. Strogatz. Collective dynamics of ‘small-world’ networks. _Nature_, 393:440–442, 1998.
* [WW10] V. Vassilevska Williams and R. Williams. Subcubic equivalences between path, matrix and triangle problems. In _Foundations of Computer Science (FOCS)_, pages 645–654, 2010.
## Appendix A Proof of Lem. 3.1
Proof.: Consider the sequence \(X^{\prime}_{1},X^{\prime}_{2},\ldots,X^{\prime}_{k}\) of i.i.d. Bernoulli random variables with \(\hbox{\bf E}[X^{\prime}_{i}]=\alpha\). We will shortly prove that for any \(t>0\), \(\Pr[\sum_{i=1}^{k}X_{i}<t]\leq\Pr[\sum_{i=1}^{k}X^{\prime}_{i}<t]\). Given this, we just apply a multiplicative Chernoff bound (Theorem 4.2 of [25]) for \(\sum_{i=1}^{k}X^{\prime}_{i}\) with \(\mu=\alpha k\). Hence, \(\Pr[\sum_{i=1}^{k}X_{i}<\alpha k\delta]<\exp(-\alpha k(1-\delta)^{2}/2)\).
For convenience, we show the contrapositive \(\Pr[\sum_{i=1}^{k}X_{i}\geq t]\geq\Pr[\sum_{i=1}^{k}X^{\prime}_{i}\geq t]\). This is proven by induction on \(k\). First, the base case. Since \(X_{1}\) and \(X^{\prime}_{1}\) are Bernoulli random variables, it suffices to show that \(\Pr[X_{1}=1]\geq\Pr[X^{\prime}_{1}=1]=\alpha\), which holds by assumption.
Now for the induction step. Assume for all \(t>0\) and some index \(j\), \(\Pr[\sum_{i=1}^{j}X_{i}\geq t]\geq\Pr[\sum_{i=1}^{j}X^{\prime}_{i}\geq t]\). We prove this for \(j+1\). Let \({\cal E}\) denote the event \(\sum_{i=1}^{j}X_{i}\geq t\), and \({\cal E}^{\prime}\) be the (disjoint) event \(\sum_{i=1}^{j}X_{i}\in[t-1,t)\). Let \(\mathbb{I}(A)\) denote the indicator function of event \(A\). Because \(X_{i}\) is a \(0\)-\(1\) random variable, we get
\[\Pr\left[\sum_{i=1}^{j+1}X_{i}\geq t\right]=\Pr[{\cal E}]+\Pr[{ \cal E}^{\prime}\wedge(X_{j+1}=1)] = \Pr[{\cal E}]+\hbox{\bf E}[\mathbb{I}({\cal E}^{\prime})\mathbb{I }(X_{j+1}=1)]\]
\[= \Pr[{\cal E}]+\hbox{\bf E}\{\hbox{\bf E}[\mathbb{I}({\cal E}^{\prime})\mathbb{I}(X_{j+1}=1)|Y_{1},\ldots,Y_{j}]\}\]
Observe that \(\sum_{i=1}^{j}X_{i}\) only depends on \(Y_{1},\ldots,Y_{j}\) so that \(\mathbb{I}({\cal E}^{\prime})\) is a constant in the conditional expectation
\[\hbox{\bf E}[\mathbb{I}({\cal E}^{\prime})\mathbb{I}(X_{j+1}=1)|Y _{1},\ldots,Y_{j}] = \mathbb{I}({\cal E}^{\prime})\hbox{\bf E}[\mathbb{I}(X_{j+1}=1)|Y _{1},\ldots,Y_{j}]\]
\[= \mathbb{I}({\cal E}^{\prime})\Pr[X_{j+1}=1|Y_{1},\ldots,Y_{j}]\]
\[\geq \mathbb{I}({\cal E}^{\prime})\alpha,\]
where \(\Pr[X_{j+1}=1|Y_{1},\ldots,Y_{j}]\geq\alpha\) by the lemma assumption.
Let us denote (for any \(s>0\)) \(\Pr[\sum_{i=1}^{j}X_{i}\geq s]\) by \(p_{s}\) and \(\Pr[\sum_{i=1}^{j}X^{\prime}_{i}\geq s]\) by \(p^{\prime}_{s}\). The above gives
\[\Pr\left[\sum_{i=1}^{j+1}X_{i}\geq t\right] \geq p_{t}+\alpha\hbox{\bf E}[\mathbb{I}({\cal E}^{\prime})]\]
\[= p_{t}+(p_{t-1}-p_{t})\alpha=p_{t-1}\alpha+p_{t}(1-\alpha)\]
\[\geq p^{\prime}_{t-1}\alpha+p^{\prime}_{t}(1-\alpha)\ \ \ \textrm{( using induction hypothesis and $\alpha\in[0,1]$)}\]
\[= p^{\prime}_{t}+(p^{\prime}_{t-1}-p^{\prime}_{t})\alpha\]
\[= \Pr\left[\sum_{i=1}^{j+1}X^{\prime}_{i}\geq t\right]\]
∎
## Appendix B Proofs of Tightness
We need a technical claim that gives a lower bound for the probabilities of edges falling in a bucket.
**Claim B.1**.: _Let \(d_{v}>3\). Consider vertices \(v,w,w^{\prime}\) (\(w\neq w^{\prime}\)) and let \(c\) be a sufficiently large constant. If \(\min(d_{w},d_{w^{\prime}})>cd_{v}\), then \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]=\Omega(d^{2}_{v}d_{w}d_{w^{\prime}}/m^{2})\)._
Proof.: The random variable \(Y_{v,w}Y_{v,w^{\prime}}\) is \(1\) if \((v,w)\), \((v,w^{\prime})\) are edges and the degrees of \(w\) and \(w^{\prime}\) are less than that of \(v\). As before, we will start the matching process by matching stubs of \(v\). We partition the stubs into two groups denoted by \(B_{w}\) and \(B_{w^{\prime}}\), and start by matching stubs in \(B_{w}\). We set \(|B_{w}|=\lfloor d_{v}/3\rfloor\). What is the probability that a stub in \(B_{w}\) connects with a \(w\)-stub? This is at least \(1-(1-d_{w}/2m)^{\lfloor d_{v}/3\rfloor}=\Omega(d_{v}d_{w}/m)\).
Condition on any matching of the stubs in \(B_{w}\). What is the probability that a stub in \(B_{w^{\prime}}\) matches with a \(w^{\prime}\)-stub? Since \(\min(|B_{w^{\prime}}|,d_{w^{\prime}})\geq 2|B_{w}|\), this probability is at least \(1-(1-d_{w^{\prime}}/4m)^{\lfloor d_{v}/3\rfloor}=\Omega(d_{v}d_{w^{\prime}}/m)\).
Now condition on any matching of the \(v\)-stubs. The number of unmatched stubs connected to \(w\) is at least \(d_{w}/2\) (similarly for \(w^{\prime}\)). The remaining stubs connect according to a standard configuration model. For the remaining degree sequence, the total number of stubs is \(2\tilde{m}=2m-2d_{v}\). For sufficiently large \(m\), \(d_{n}\leq\sqrt{m}/4\leq\sqrt{\tilde{m}}/2\). Hence, we can use Lem. 3.2 (and a union bound) to argue that the probability that the final degrees of \(w\) and \(w^{\prime}\) are at least \(d_{v}\) is \(\Omega(1)\). Multiplying all the bounds together, the probability \(Y_{v,w}Y_{v,w^{\prime}}=1\) is \(\Omega(d^{2}_{v}d_{w}d_{w^{\prime}}/m^{2})\). ∎
We prove Claim 5.1.
Proof.: Note that when \(\alpha>2\), we have \(m=O(n)\). We start with the arguments in the proof of Lem. 4.2. Applying Claim B.1 to a vertex \(v\) such that \(d_{v}>3\),
\[\hbox{\bf E}[X_{v}(X_{v}-1)]=\sum_{w}\sum_{w^{\prime}\neq w}\hbox {\bf E}[Y_{v,w}Y_{v,w^{\prime}}] \geq \sum_{\begin{subarray}{c}w:\\ d_{w}\geq cd_{v}\end{subarray}}\sum_{\begin{subarray}{c}w\neq w^{\prime}:\\ d_{w^{\prime}}\geq cd_{v}\end{subarray}}\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\]
\[\gg m^{-2}d^{2}_{v}\sum_{\begin{subarray}{c}w:\\ d_{w}\geq cd_{v}\end{subarray}}\sum_{\begin{subarray}{c}w\neq w^{\prime}:\\ d_{w^{\prime}}\geq cd_{v}\end{subarray}}d_{w}d_{w^{\prime}}\]
\[\geq m^{-2}d^{2}_{v}(\sum_{\begin{subarray}{c}w:\\ d_{w}\geq cd_{v}\end{subarray}}d_{w})^{2}-m^{-2}d^{2}_{v}\sum_{w}d^{2}_{w}\]
The latter part, summed over all \(v\) is at most
\[m^{-2}(\sum_{v}d^{2}_{v})^{2}\leq m^{-2}(\max_{v}d_{v}\sum_{v}d_{v})^{2}\lessdot m\]
Now we focus on the former part. Choose \(v\) so that \(cd_{v}\leq d_{n}/2\), and let \(2^{r}\) be the largest power of \(2\) greater than \(cd_{v}\). (Note that \(r\leq\log_{2}d_{n}-1\).) We bound \(\sum_{w:d_{w}\geq cd_{v}}d_{w}\geq\sum_{w:d_{w}\geq 2^{r}}d_{w}\gg\sum_{k=r}^{ \log_{2}d_{n}-1}2^{k}n/2^{k(\alpha-1)}\). This is \(\sum_{k=r}^{\log_{2}d_{n}-1}n/2^{k(\alpha-2)}\), which is convergent when \(\alpha>2\). Hence, it is at least \(\Omega(n2^{-r(\alpha-2)})=\Omega(nd^{-(\alpha-2)}_{v})\).
We sum over all appropriate \(v\).
\[\sum_{v:3<d_{v}\leq d_{n}/2c}m^{-2}d^{2}_{v}(\sum_{ \begin{subarray}{c}w:\\ d_{w}\geq cd_{v}\end{subarray}}d_{w})^{2} \gg (n/m)^{2}\sum_{v:3<d_{v}\leq d_{n}/2c}d^{2}_{v}d^{-(2\alpha-4)}_{v}\]
\[= (n/m)^{2}\sum_{v:3<d_{v}\leq d_{n}/2c}d^{6-2\alpha}_{v}\gg(n/m)^{2}\sum_{k=2}^{\lfloor\log_{2}d_{n}-\log_{2}(2c)\rfloor}n2^{k(7-3\alpha)}\]
When \(\alpha<7/3\), the sum is divergent. Noting that \(m=\Theta(n)\), we bound the sum by \(\Omega(nd^{7-3\alpha}_{n})\). Overall, we lower bound the running time of MinBucket by \(\sum_{v:3<d_{v}\leq d_{n}/2c}\hbox{\bf E}[X_{v}(X_{v}-1)]\), which is \(\Omega(nd^{7-3\alpha}_{n}-m)\). For \(\alpha<7/3\), this is \(\Omega(nd^{7-3\alpha}_{n})\), matching the upper bound in Cor. 1.3.
∎
## Appendix C The running time of MinBucket for Chung-Lu graphs
**Theorem C.1**.: _Consider a Chung-Lu graph distribution with \(n\) vertices over a degree distribution \(f_{1},f_{2},\ldots,f_{n}\). The expected running time of MinBucket is given by \(O(m+n(\sum_{v}{d_{v}}^{4/3})^{3})\)._
We remind the reader that the Chung-Lu (CL) model involves inserting edge \((i,j)\) with probability \(d_{i}d_{j}/2m\) for all unordered pairs \((i,j)\). We need to prove Claim 4.1 for the Chung-Lu model. Thm. C.1 will then follow directly using the arguments in §4.
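A minimal sketch of this edge-insertion process, assuming the probabilities are clamped at 1 (the text implicitly takes \(d_{i}d_{j}\leq 2m\)), is the naive \(O(n^{2})\) generator below; faster samplers exist, but this is enough to experiment with small inputs.

```python
import random

def chung_lu_graph(expected_degrees, rng=None):
    """Naive O(n^2) Chung-Lu sampler: edge (i, j) appears independently
    with probability min(1, d_i * d_j / (2m)), where 2m = sum of expected degrees."""
    rng = rng or random.Random(0)
    n = len(expected_degrees)
    two_m = sum(expected_degrees)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, expected_degrees[i] * expected_degrees[j] / two_m)
            if rng.random() < p:
                edges.append((i, j))
    return edges

# Example: a few hubs plus many low-degree vertices.
print(len(chung_lu_graph([10, 10, 5] + [2] * 50)))
```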
We first state Bernstein’s inequality.
**Theorem C.2**.: _[Bernstein’s inequality] Let \(X_{1},X_{2},\ldots,X_{k}\) be zero-mean independent random variables. Suppose \(|X_{i}|\leq M\) almost surely. Then for all positive \(t\),_
\[\Pr[\sum_{i=1}^{k}X_{i}>t]\leq\exp\Big{(}-\frac{t^{2}/2}{\sum_{i}\hbox{\bf E}[ X^{2}_{i}]+Mt/3}\Big{)}\]
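To see the inequality in action, the sketch below compares the Bernstein bound with the empirical upper tail of a centred Binomial sum, in the spirit of how it is applied to the centred indicator sums in Claims C.3 and C.4; the parameters are arbitrary illustrative choices.

```python
import math
import random

def bernstein_bound(t, variance_sum, M):
    """Right-hand side of Bernstein's inequality for Pr[sum X_i > t]."""
    return math.exp(-(t ** 2 / 2) / (variance_sum + M * t / 3))

rng = random.Random(1)
k, p, t, trials = 2000, 0.005, 10.0, 10_000   # arbitrary illustrative parameters
# Zero-mean variables X_i = B_i - p with |X_i| <= 1 and E[X_i^2] = p(1 - p).
exceed = sum(
    (sum(rng.random() < p for _ in range(k)) - k * p) > t
    for _ in range(trials)
)
print("empirical tail :", exceed / trials)
print("Bernstein bound:", bernstein_bound(t, k * p * (1 - p), 1.0))
```

The bound is valid but conservative: the empirical tail typically comes out an order of magnitude below it for these parameters.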
We now prove some tail bounds on the degrees of vertices. The basic form of these statements is that the probability that the degree of vertex \(v\) deviates from \(d_{v}\) by a constant factor is \(\exp(-\Omega(d_{v}))\). We state them in terms of conditional events for easier application later. We use \(\beta\) to denote a sufficiently small constant.
**Claim C.3**.: _Let \(d\geq 2\). Suppose \(v\) is a vertex such that \(d_{v}\leq d\) and let \(e,e^{\prime}\) be two pairs. Let \({\cal E}\) be the event that \(e,e^{\prime}\) are present, and \(D_{v}\) be the random variable denoting the degree of \(v\). For sufficiently small constant \(\beta\),_
\[\Pr[D_{v}>3d|{\cal E}]<\exp(-\beta d)\]
Proof.: All edges are inserted independently. So the occurrence of edge \(e^{\prime\prime}\neq e,e^{\prime}\) is completely independent of \({\cal E}\). Let \(\delta(v)\) be the set of all pairs involving \(v\) and \(\hat{\delta}(v)=\delta(v)\setminus\{e,e^{\prime}\}\). We express \(D_{v}=\sum_{h\in\delta(v)}C_{h}\), where \(C_{h}\) is the indicator random variable for edge \(h\) being present. Let \(\hat{D}_{v}=\sum_{h\in\hat{\delta}(v)}C_{h}\). Note that \(\hbox{\bf E}[\hat{D}_{v}]\leq\hbox{\bf E}[D_{v}]=d_{v}\leq d\). Set \(C^{\prime}_{h}=C_{h}-\hbox{\bf E}[C_{h}]\), so
\[\Pr[\hat{D}_{v}-\hbox{\bf E}[\hat{D}_{v}]>d]=\Pr[\sum_{h\in\hat{\delta}(v)}(C_ {h}-\hbox{\bf E}[C_{h}])>d]=\Pr[\sum_{h\in\hat{\delta}(v)}C^{\prime}_{h}>d]\]
We wish to apply Bernstein’s inequality to the \(C^{\prime}_{h}\) random variables. Observe that \(\hbox{\bf E}[C^{\prime}_{h}]=0\), and \(|C^{\prime}_{h}|\leq 1\). Setting \(\hbox{\bf E}[C_{h}]=\mu\), note that
\[\hbox{\bf E}[(C^{\prime}_{h})^{2}]=\hbox{\bf E}[(C_{h}-\mu)^{2}]=\hbox{\bf E}[C^{2}_{h}]-2\mu\hbox{\bf E}[C_{h}]+\mu^{2}=\hbox{\bf E}[C_{h}]-\mu^{2}\leq\hbox{\bf E}[C_{h}].\]
So \(\sum_{h\in\hat{\delta}(v)}\hbox{\bf E}[(C^{\prime}_{h})^{2}]\leq\sum_{h\in\hat{\delta}(v)}\hbox{\bf E}[C_{h}]=\hbox{\bf E}[\hat{D}_{v}]\leq d\). By Bernstein’s inequality (Thm. C.2),
\[\Pr[\hat{D}_{v}-\hbox{\bf E}[\hat{D}_{v}]>d]=\Pr[\sum_{h\in\hat{ \delta}(v)}C^{\prime}_{h}>d] \leq \exp\Big{(}-\frac{d^{2}/2}{\sum_{h\in\hat{\delta}(v)}\hbox{\bf E} [(C^{\prime}_{h})^{2}]+d/3}\Big{)}\]
\[\leq \exp\Big{(}-\frac{d^{2}/2}{d+d/3}\Big{)}=\exp(-3d/8)\]
None of these random variables depend on the event \({\cal E}\), so we get that \(\Pr[\hat{D}_{v}-\hbox{\bf E}[\hat{D}_{v}]>d\ |\ {\cal E}]\leq\exp(-3d/8)\). Suppose \(\hat{D}_{v}\leq\hbox{\bf E}[\hat{D}_{v}]+d\leq 2d\). We always have \(D_{v}\leq\hat{D}_{v}+2\) and hence \(D_{v}\leq 3d\) (using the bound that \(d\geq 2\)). Hence, \(\Pr[D_{v}>3d|{\cal E}]<\exp(-3d/8)\). We only require \(\beta<3/8\). ∎
**Claim C.4**.: _Suppose \(v\) is a vertex such that \(d_{v}\geq 4\) and let \(e,e^{\prime}\) be two pairs. Let \({\cal E}\) be the event that \(e,e^{\prime}\) are present, and \(D_{v}\) be the random variable denoting the degree of \(v\). For sufficiently small constant \(\beta\),_
\[\Pr[D_{v}<d_{v}/3|{\cal E}]<\exp(-\beta d_{v})\]
Proof.: This proof is almost identical to the previous one. Again, we express \(D_{v}=\sum_{h\in\delta(v)}C_{h}\), where \(C_{h}\) is the indicator random variable for edge \(h\) being present. Let \(\hat{D}_{v}=\sum_{h\in\hat{\delta}(v)}C_{h}\). We have \(\hat{D}_{v}\geq D_{v}-2\), so \(\hbox{\bf E}[\hat{D}_{v}]\geq\hbox{\bf E}[D_{v}]-2\geq d_{v}-d_{v}/2=d_{v}/2\) (using the bound \(d_{v}\geq 4\)). Applying a multiplicative Chernoff bound to \(\hat{D}_{v}\),
\[\Pr[\hat{D}_{v}<2\hbox{\bf E}[\hat{D}_{v}]/3]<\exp(-d_{v}/36)\]
Since \(\hat{D}_{v}\) is completely independent of \({\cal E}\), we can condition on \({\cal E}\) to get the same bound. Suppose \(D_{v}<d_{v}/3\). Since \(D_{v}\geq\hat{D}_{v}\) and \(d_{v}\leq 2\hbox{\bf E}[\hat{D}_{v}]\), we get \(\hat{D}_{v}<2\hbox{\bf E}[\hat{D}_{v}]/3\). So, conditioned on \({\cal E}\), the event \(D_{v}<d_{v}/3\) is contained in the event \(\hat{D}_{v}<2\hbox{\bf E}[\hat{D}_{v}]/3\), completing the proof. We require \(\beta<1/36\). ∎
Finally, we need a simple claim about the second moment of sums of independent random variables.
**Claim C.5**.: _Let \(X=\sum_{i}X_{i}\) be a sum of independent positive random variables with \(X_{i}=O(1)\) for all \(i\) and \(\hbox{\bf E}[X]=O(1)\). Then \(\hbox{\bf E}[X^{2}]=O(1)\)._
Proof.: By linearity of expectation,
\[\hbox{\bf E}\left[X^{2}\right]=\hbox{\bf E}\Big{[}\big{(}\sum_{i} X_{i}\big{)}^{2}\Big{]}=\sum_{i}\hbox{\bf E}\left[X_{i}^{2}\right]+2\sum_{i<j} \hbox{\bf E}[X_{i}]\hbox{\bf E}[X_{j}]\\ \leq\sum_{i}O\left(\hbox{\bf E}[{X_{i}}]\right)+\Big{(}\sum_{i} \hbox{\bf E}[X_{i}]\Big{)}^{2}=O(1).\]
∎
We prove the analogue of Claim 4.1.
**Claim C.6**.: _Consider vertices \(v,w,w^{\prime}\) (\(w\neq w^{\prime}\))._
* _If_ \(d_{v}\leq 4\)_, then_ \(\hbox{\bf E}[X^{2}_{v}]=O(1)\)_._
* \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\leq d^{2}_{v}d_{w}d_{w^{\prime}}/4m^{2}\)_._
* _If_ \(d_{w}\leq d_{v}/10\) _and_ \(d_{v}\geq 4\)_, then_ \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\leq 2\exp(-\beta d_{v})d^{2}_{v}d_{w}d_{ w^{\prime}}/4m^{2}\)_._
Proof.: Defining \(\hat{X}_{v}=\sum_{w}C_{v,w}\), we have \(X_{v}\leq\hat{X}_{v}\). Since these are all positive random variables, \(X^{2}_{v}\leq\hat{X}^{2}_{v}\). Applying Claim C.5, \(\hbox{\bf E}[\hat{X}^{2}_{v}]=O(1)\). That completes the first part.
For the second part, we use the trivial bound of \(Y_{v,w}Y_{v,w^{\prime}}\leq C_{v,w}C_{v,w^{\prime}}\). Taking expectations and using independence, \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\leq\hbox{\bf E}[C_{v,w}]\hbox{\bf E}[C_{v,w^{\prime}}]=d^{2}_{v}d_{w}d_{w^{\prime}}/4m^{2}\).
The third case is really the interesting one. The quantity \(\hbox{\bf E}[Y_{v,w}Y_{v,w^{\prime}}]\) is the probability that both \(Y_{v,w}\) and \(Y_{v,w^{\prime}}\) are \(1\). For this to happen, we definitely required both \((v,w)\) and \((v,w^{\prime})\) to be present as edges. Call this event \({\cal E}\). We also require (at the very least) the degree of \(v\) to be at most the degree of \(w\) (otherwise the edge \((v,w)\) will not be put in \(v\)’s bin.) Call this event \({\cal F}\). The event \(Y_{v,w}Y_{v,w^{\prime}}=1\) is contained in \({\cal E}\cap{\cal F}\). Using conditional probabilities, \(\Pr({\cal E}\cap{\cal F})=\Pr({\cal F}|{\cal E})\Pr({\cal E})\). Note that \(\Pr({\cal E})=d^{2}_{v}d_{w}d_{w^{\prime}}/4m^{2}\).
Let \(D_{v},D_{w}\) denote the degrees of \(v\) and \(w\). Let \({\cal F}_{v}\) denote the event \(D_{v}<d_{v}/3\) and \({\cal F}_{w}\) denote the event \(D_{w}>3d_{v}/10\). If neither of these events happens, then \(D_{w}\leq 3d_{v}/10<d_{v}/3\leq D_{v}\). So \({\cal F}\) cannot happen. Hence, \(({\cal F}|{\cal E})\) is contained in \(({\cal F}_{v}\cup{\cal F}_{w}|{\cal E})\). By the union bound, \(\Pr({\cal F}_{v}\cup{\cal F}_{w}|{\cal E})\leq\Pr({\cal F}_{v}|{\cal E})+\Pr({\cal F}_{w}|{\cal E})\). Applying Claim C.4 to the former and Claim C.3 to the latter, we bound \(\Pr({\cal F}|{\cal E})\leq 2\exp(-\beta d_{v})\). ∎
As mentioned earlier, we can now execute the arguments in §4 to prove Thm. C.1.
|
1403.0195 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 56881,
"num_imgs": 12,
"llama3_tokens_count": 14923
} | [
"content_image/1403.0195/x1.png",
"content_image/1403.0195/x2.png",
"content_image/1403.0195/x3.png",
"content_image/1403.0195/x4.png",
"content_image/1403.0195/x5.png",
"content_image/1403.0195/x6.png",
"content_image/1403.0195/x7.png",
"content_image/1403.0195/x8.png",
"content_image/1403.0195/x9.png",
"content_image/1403.0195/x10.png",
"content_image/1403.0195/x11.png",
"content_image/1403.0195/x12.png"
] | # Microwave spectroscopy of \(\Lambda\)-doublet transitions in the ground state of CH
S. Truppe
R. J. Hendricks
S. K. Tokunaga
E. A. Hinds
M. R. Tarbutt
m.tarbutt@imperial.ac.uk
Centre for Cold Matter, Blackett Laboratory, Imperial College London, Prince Consort Road, London, SW7 2AZ, United Kingdom.
###### Abstract
The \(\Lambda\)-doublet transitions in CH at 3.3 and \(0.7\,\mathrm{GHz}\) are unusually sensitive to variations in the fine-structure constant and the electron-to-proton mass ratio. We describe methods used to measure the frequencies of these transitions with Hz-level accuracy. We produce a pulsed supersonic beam of cold CH by photodissociation of CHBr\({}_{3}\), and we measure the microwave transition frequencies as the molecules propagate through a parallel-plate transmission line resonator. We use the molecules to map out the amplitude and phase of the standing wave field inside the transmission line. We investigate velocity-dependent frequency shifts, showing that they can be strongly suppressed through careful timing of the microwave pulses. We measure the Zeeman and Stark effects of the microwave transitions, and reduce systematic shifts due to magnetic and electric fields to below \(1\,\mathrm{Hz}\). We also investigate other sources of systematic uncertainty in the experiment.
keywords: Varying fundamental constants, Microwave spectroscopy, Methylidyne, Lambda-doubling, Supersonic beam
## 1 Introduction
Many extensions to the standard model of particle physics make the intriguing prediction that quantities we normally consider to be fundamental constants - such as the fine-structure constant \(\alpha\) or the electron-to-proton mass ratio \(\mu\) - may in fact vary with time, position or with the local density of matter [1]. These theories aim to unify gravity with the other forces, or to explain the nature of dark energy. The lowest \(\Lambda\)-doublet and millimetre-wave transitions in the CH molecule are particularly sensitive to variation of \(\alpha\) and \(\mu\)[2; 3]. Recently, we tested the hypothesis that these constants may differ between the high density environment of the Earth and the vastly lower density of the interstellar medium, by comparing microwave frequencies of CH observed in cold interstellar gas clouds in our own galaxy to those measured in the laboratory [4]. Using this method, we were able to constrain variations in these fundamental constants at the 0.1 parts-per-million level.
To measure the laboratory frequencies with Hz-level accuracy, we developed a source which produces short pulses of CH molecules at low temperature, and we developed a variation on the method of Ramsey spectroscopy [5]. In our method, the pulse of molecules propagates along a parallel-plate transmission line which supports a standing wave of the microwave field. In this geometry, the amplitude, polarization, and phase of the field are exceptionally well controlled. The microwaves can be pulsed on when the molecules are at any chosen position, and because the molecular pulse is short and the velocity spread is low, they can be used to map out the amplitude and phase of the field as a function of position [6]. Doppler shifts are cancelled because the field is (nearly) a standing wave, and any residual velocity-dependent shifts are easily measured by changing the beam velocity. Here, we describe the source of CH, the spectroscopic technique, and the methods we use to control systematic frequency shifts.
## 2 The CH molecule
The methylidyne radical, CH, plays an essential role in both chemistry and physics. Early investigations of optical emission spectra of CH helped to explain the spectra of diatomic molecules (see [7] and references therein). CH is a major participant in most combustion processes. It was one of the first molecules to be detected in the interstellar medium, in stellar atmospheres, and in comets, by means of optical absorption spectroscopy [8; 9]. It is an essential building block in the formation of complex carbon-chain molecules in interstellar gas clouds [10] and it is commonly used as a tracer for atomic carbon and molecular hydrogen [11]. In 1937 Dunham and Adams recorded the first optical spectrum of CH in the interstellar medium [12]. Over forty years later the frequencies of the two lowest-lying \(\Lambda\)-doublet transitions, at \(3.3\,\mathrm{GHz}\) and \(0.7\,\mathrm{GHz}\), were determined by radio-astronomy [13; 14; 15]. Laboratory measurements followed [16; 17], but the radio-astronomy measurements remained the most precise until our own recent measurements [4] using the methods described here.
<figure><img src="content_image/1403.0195/x1.png"><figcaption>Figure 1: Relevant energy levels and transitions between the X and A states ofCH. Each state labelled by J is split into two Λ-doublet levels of oppositeparity (±, also labelled by their e/f parity). Each of these is further splitby the hyperfine interaction into a pair of states with total angular momentumquantum numbers F=J±1/2. Dotted lines show the optical transitions used fordetecting molecules in the J=3/2 and J=1/2 states, labelled R11ff(3/2) andR22ff(1/2) respectively. The hyperfine components of the latter transition arelabelled as a, b and c. Transition frequencies are given in MHz unless statedotherwise.</figcaption></figure>
Figure 1 shows the low-lying energy level structure in the ground electronic state of CH, \(\text{X}^{2}\Pi(v=0)\), and the first electronically excited state, \(\text{A}^{2}\Delta(v=0)\), both described using the Hund’s case (b) coupling scheme. Here, the electronic orbital angular momentum \(\mathbf{L}\) and the rotational angular momentum \(\mathbf{R}\) are coupled to a resultant \(\mathbf{N}=\mathbf{L}+\mathbf{R}\). The spin angular momentum \(\mathbf{S}\) adds to \(\mathbf{N}\) to give the total electronic angular momentum \(\mathbf{J}=\mathbf{N}+\mathbf{S}\). Each \(J\)-level is split into a \(\Lambda\)-doublet which is composed of two closely-spaced states of opposite parity \(\left|p=\pm 1\right\rangle=\left|+|\Lambda|\right\rangle\pm(-1)^{J-S}\left|-| \Lambda|\right\rangle\), where \(\Lambda\) is the quantum number for the projection of \(\mathbf{L}\) onto the internuclear axis. The interaction with the \(I=1/2\) hydrogen nuclear spin splits each \(\Lambda\)-doublet component into a pair of hyperfine levels, labelled by the total angular momentum quantum number \(F=J\pm 1/2\). We use the short-hand notation \((J^{p},F)\) to label the energy levels of the X state.
## 3 Production and detection of CH
Figure 2 shows the apparatus. We produce a pulsed, supersonic beam of CH molecules by photo-dissociation of bromoform (CHBr\({}_{3}\)), following some of the methods described in [18; 19]. A carrier gas, with a backing pressure of \(4\,\mathrm{b}\mathrm{a}\mathrm{r}\), is bubbled through liquid bromoform (Sigma Aldrich, 96% purity, stabilized in ethanol) which is held at room temperature in a stainless steel container. The mixture expands through the \(1\,\mathrm{mm}\) orifice of a solenoid valve (General Valve Series 99) into a vacuum chamber at a repetition rate of \(10\,\mathrm{Hz}\). To dissociate the bromoform, we use pulses of light from an excimer laser, with an energy of \(220\,\mathrm{mJ}\), a wavelength of \(248\,\mathrm{nm}\), and a duration of \(20\,\mathrm{ns}\). This light propagates along the \(x\)-axis, and is focussed to a rectangular spot, \(1\,\mathrm{mm}\) high (along \(y\)) and \(4\,\mathrm{mm}\) wide (along \(z\)), in the region immediately beyond the nozzle of the valve. The excimer pulse sets our origin of time. Further details of the source are given in [20].
<figure><img src="content_image/1403.0195/x2.png"><figcaption>Figure 2: Sketch of the experiment to measure the Λ-doublet transitions of CH(not to scale). The molecular beam is produced via photodissociation ofbromoform. The beam passes through a skimmer and a state selector, and thentravels between the plates of a magnetically-shielded parallel-platetransmission line where the microwave transition is driven. Finally, themolecules are detected by laser induced fluorescence. For the measurement ofthe J=1/2 Λ-doublet transition the outer magnetic shield and the innermagnetic field coils were absent.</figcaption></figure>
The molecules pass through a skimmer with a \(2\,\mathrm{mm}\) diameter orifice, situated at \(z=78\,\mathrm{mm}\), then through the apparatus used to measure the microwave transition frequencies (described below), and are finally detected at \(z=D=780\,\mathrm{mm}\) by laser-induced fluorescence. The probe laser used for detection is a frequency-doubled continuous-wave titanium-sapphire laser, tuned to the \(A^{2}\Delta(v=0)\gets X^{2}\Pi(v=0)\) transition near \(430.15\,\mathrm{nm}\). This probe is linearly polarized along \(z\), propagates along \(x\), has a power of \(5\,\mathrm{mW}\), and is shaped to a rectangular cross-section, \(4\,\mathrm{mm}\) in the \(y\)-direction and \(1.4\,\mathrm{mm}\) in the \(z\) direction. The induced fluorescence is imaged onto a photomultiplier tube, and the signal is recorded with a time resolution of approximately \(5\,\mu\mathrm{s}\).
To vary the beam velocity, \(v_{0}\), we use He, Ne, Ar and Kr carrier gases. Figure 3(a) shows the time-of-flight profiles of CH molecules arriving at the detector when He and Ar are used. From the arrival times we measure the mean speeds to be \(v_{0}=1710\), 800, 570 and \(420\,\mathrm{m}\,\mathrm{s}^{-1}\) for He, Ne, Ar and Kr respectively. The duration of the pulse of CH produced in the source is determined by the \(4\,\mathrm{mm}\) width of the excimer beam and is always very short compared with the width of the time-of-flight profile measured at the detector. This width therefore measures the translational temperature of the CH beam. From the data in figure 3(a) we measure a CH translational temperature of \(2\,\mathrm{K}\) and \(0.4\,\mathrm{K}\) for He and Ar carrier gases respectively.
We selectively detect molecules in either the \((1/2^{-},F)\) or the \((3/2^{+},F)\) levels, by driving one of the two transitions labelled R\({}_{22ff}(1/2)\) and R\({}_{11ff}(3/2)\) in figure 1. Figure 3(b) shows the spectrum recorded as the probe laser is scanned over the three well resolved hyperfine components of the R\({}_{22ff}(1/2)\) transition. Because the detected molecules have a range of transverse speeds, the spectral lines are Doppler broadened to a full width at half maximum of \(23\,\mathrm{MHz}\). For the microwave spectroscopy, the laser frequency is locked to one of the hyperfine components of the relevant transition using an optical cavity which is itself locked to a stabilized He-Ne laser.
<figure><img src="content_image/1403.0195/x3.png"><figcaption>Figure 3: (a) Time of flight profiles of CH molecules using (separately) Heand Ar carrier gases. Points: data, line: Gaussian fits. (b) Spectrum showingthe three hyperfine components of the R22ff(1/2) line of theA2Δ(v=0,N=2)←X2Π(v=0,N=1) transition. The labels a, b, c correspond to thosein figure 1. Points: data. Line: Fit to a sum of three Gaussians.</figcaption></figure>
## 4 Microwave spectroscopy setup
At \(z=241\,\mathrm{mm}\), the molecules pass through the state-selector, which selectively populates one of the two parity eigenstates of a \(\Lambda\)-doublet. For the \(J=1/2\) measurements, the state selection is done by optically pumping molecules out of the relevant \((1/2^{-},F)\) state, using \(40\,\mathrm{mW}\) of laser light at the same frequency as the probe. This light is reflected almost back on itself to increase the interaction time with the molecules. The optical pumping removes about 70% of the initial population. Over 95% of the CH molecules produced in our source are in the ground \(J=1/2\) state, so for the \(J=3/2\) measurements the state selection is done by driving population into the \((3/2^{+},1)\) or \((3/2^{+},2)\) states using \(10\,\mu\mathrm{W}\) of radiation near \(533\,\mathrm{GHz}\). This millimetre-wave radiation is generated by an amplifier-multiplier chain that produces the 54th harmonic of a frequency synthesizer. The transfer efficiency is about 40%. A description of the measurement of this lowest millimetre-wave transition is given in reference [21].
Following the state-selector the molecules enter the transmission line resonator where we drive the \(\Lambda\)-doublet transition. The transmission line is formed by a pair of parallel copper plates of length \(L\) and width \(w\), separated vertically by a distance \(d\). It is fed from a semi-rigid, non-magnetic coaxial cable, with the inner conductor connected to one plate and the outer conductor connected to the other. The other end of the transmission line is open, and the wave reflects from this end to form a standing wave. The quality factor of this resonator is determined by the reflectivity of the open end and the transmission between the coaxial cable and the transmission line. We cut the plates to a length such that a resonance frequency of the transmission line matches the approximate \(\Lambda\)-doublet transition frequency. This length is approximately \(L=n\lambda/2\), where \(n\) is the number of electric field antinodes in the resonator, but this is insufficiently accurate because the position where the wave reflects from the ends is not well defined. Instead, we measure the spectrum of the resonator with a vector network analyzer and then reduce the length to obtain the desired resonance frequency. The lengths were approximately \(480\,\mathrm{mm}\) for the \(J=1/2\) measurements and \(440\,\mathrm{mm}\) for the \(J=3/2\) measurements. Figure 4 shows the spectrum of the resonator set up for measuring the \((1/2^{+},1)-(1/2^{-},0)\) transition. The free spectral range of the resonator is \(300\,\mathrm{MHz}\), and the full width at half maximum (FWHM) of the peaks is \(35\,\mathrm{MHz}\), corresponding to a round-trip loss of 50%.
<figure><img src="content_image/1403.0195/x4.png"><figcaption>Figure 4: Spectrum of the transmission line resonator. The plate length ischosen so that the resonator has a peak near the resonance frequency of themolecules. The width of the transmission line resonance is about 35MHz. Thedashed line indicates the approximate transition frequency of the(1/2+,1)−(1/2−,0) line at 3.264GHz.</figcaption></figure>
We aim to propagate only the TEM\({}_{00}\) mode so that the electric field is uniform between the plates and accurately polarized along \(y\). If the plate spacing is large enough, the transmission line can support the higher-order TE and TM modes that have components of the k-vector along \(y\), but these modes are cut off if the plate spacing is smaller than half the wavelength, \(d<\lambda/2\). The shortest wavelength we use in this experiment is \(90\,\mathrm{mm}\), whereas the plate spacing is only \(5\,\mathrm{mm}\), so these higher-order modes are very strongly attenuated. There can also be modes that have components of the k-vector along \(x\). These lesser-known modes zig-zag horizontally in the space between the two plates, reflecting from the open edges of the structure. Because they can radiate into the open space beyond the plates, they are referred to as leaky modes. The procedure for calculating the k-vectors of these modes is described in [22]. In an early version of the experiment, we used plates of width \(w=50\,\mathrm{mm}\) spaced by \(d=10\,\mathrm{mm}\). In this case, we find theoretically that there is a leaky mode which has a propagation wavelength of \(130\,\mathrm{mm}\) and a \(1/e\) attenuation distance of \(220\,\mathrm{mm}\). Using the characterization methods described below, we indeed observed a mode with this wavelength. To cut off the leaky mode, we reduced the width of the plates to \(w=30\,\mathrm{mm}\) and also reduced the plate spacing to \(d=5\,\mathrm{mm}\). This increases the leaky mode propagation wavelength to \(600\,\mathrm{mm}\) and reduces the attenuation distance to just \(20\,\mathrm{mm}\). Since the latter is far smaller than the length of the interaction region, the mode is very effectively eliminated from the experiment. The spectrum shown in figure 4 shows that there are no significant higher-order modes.
The microwave radiation is generated by a frequency synthesizer which is phase locked to a GPS-disciplined frequency reference to a fractional uncertainty better than \(10^{-13}\). The synthesizer is connected to the transmission line resonator through a fast, high isolation switch.
## 5 Characterizing the microwave field
The electric field inside the transmission line resonator can be described by
\[E=\frac{A}{2}(1+\Delta)\cos(k(z-z_{0})-\omega t)+\frac{A}{2}(1-\Delta)\cos(k(z -z_{0})+\omega t)\,,\] (1)
where \(k=2\pi/\lambda\) is the wave vector, \(A\) is the amplitude of the electric field, \(z_{0}\) is the position of an antinode of the standing wave, and \(\Delta\) is an amplitude imbalance to account for the fact that the wave is not perfectly reflected at the end of the transmission line. This equation can be rewritten as
\[E=A\sqrt{\cos^{2}\left(k(z-z_{0})\right)+\Delta^{2}\sin^{2}\left(k(z-z_{0}) \right)}\cos(\omega t-\phi(z))\,,\] (2)
where
\[\phi(z)=\tan^{-1}\left[\Delta\tan\left(k(z-z_{0})\right)\right]+\phi_{0}\,.\] (3)
Here, \(\phi_{0}=0\) when \(\cos(k(z-z_{0}))>0\) and \(\pi\) otherwise.
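As a quick numerical illustration of equations (1)–(3), the Python sketch below evaluates the field amplitude and phase along the transmission line for an assumed imbalance \(\Delta\); the frequency, imbalance and positions are illustrative choices, not measured values.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def field_profile(z, z0, freq_hz, amplitude, imbalance):
    """Amplitude and phase of the (nearly) standing wave of Eqs. (1)-(3)."""
    k = 2 * math.pi * freq_hz / C
    cosz = math.cos(k * (z - z0))
    sinz = math.sin(k * (z - z0))
    amp = amplitude * math.sqrt(cosz ** 2 + (imbalance * sinz) ** 2)
    phase = math.atan(imbalance * math.tan(k * (z - z0))) + (0.0 if cosz > 0 else math.pi)
    return amp, phase

# Illustrative parameters: 3.335 GHz drive, Delta = 0.005, antinode at z0 = 0.
for z_mm in range(0, 50, 5):
    amp, phase = field_profile(z_mm * 1e-3, 0.0, 3.335e9, 1.0, 0.005)
    print(f"z = {z_mm:2d} mm   amplitude = {amp:.3f}   phase = {phase:+.4f} rad")
```

For a small imbalance the amplitude nearly vanishes at the nodes while the phase stays close to 0 or \(\pi\), which is why the structure behaves almost as an ideal standing wave.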
For a single microwave pulse, the interaction of the molecules with the radiation transfers population from the initial to the final state with a probability of
\[P_{\text{1-pulse}}=\frac{\Omega^{2}}{\Omega^{2}+\delta^{2}}\sin^{2}\left(\frac {\sqrt{\Omega^{2}+\delta^{2}}\tau}{2}\right)\,,\] (4)
where, \(\Omega=d_{12}E/\hbar\) is the Rabi frequency, \(d_{12}\) is the transition dipole moment, \(\delta=\omega-\omega_{0}\) is the detuning of the microwave angular frequency \(\omega\) from the resonance angular frequency \(\omega_{0}\), and \(\tau\) is the interaction time. When \(\Omega\tau=\pi\) and \(\delta=0\) (\(\pi\)-pulse condition) the entire population is transferred from the initial to the final state.
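To make equation (4) concrete, the short sketch below evaluates the single-pulse transition probability for a resonant \(\pi\)-pulse and a few detunings; the pulse length is an arbitrary illustrative value.

```python
import math

def p_one_pulse(rabi, detuning, tau):
    """Transition probability of Eq. (4) for given Rabi frequency, detuning and pulse time."""
    w = math.sqrt(rabi ** 2 + detuning ** 2)
    return (rabi ** 2 / w ** 2) * math.sin(w * tau / 2) ** 2

tau = 326e-6                       # illustrative pulse length (s)
rabi = math.pi / tau               # pi-pulse condition: Omega * tau = pi
for detuning_hz in (0, 500, 1000, 2000):
    delta = 2 * math.pi * detuning_hz   # convert detuning from Hz to angular frequency
    print(detuning_hz, "Hz ->", round(p_one_pulse(rabi, delta, tau), 3))
```

On resonance the probability is 1, as expected for a \(\pi\)-pulse, and it falls off once \(\delta\) becomes comparable to \(\Omega\).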
To begin an experiment, we first pulse the microwaves on for just a short period, \(\tau=15\,\mu\mathrm{s}\), at the time when the molecular pulse is at the centre of the transmission line, near an antinode of the electric field. Scanning the microwave frequency over the resulting broad resonance gives a first estimate of the transition frequency. The frequency is then fixed at the transition frequency and the microwave power is scanned. We observe Rabi oscillations in the measured population, an example of which is shown in figure 5(a). Since \(\Omega\) is proportional to the electric field of the radiation, we plot the signal versus the square root of the microwave power, and then fit equation (4) to the data. This identifies the exact power needed to drive a \(\pi\)-pulse for this pulse duration and for this particular position of the molecules. Figure 5(b) shows the power needed for a \(\pi\)-pulse with the molecules centred on each of the antinodes. The variation along the transmission line is small.
<figure><img src="content_image/1403.0195/x5.png"><figcaption>Figure 5: (a) Rabi oscillations. The detuning is set to δ=0, and theinteraction time to τ=$15µs$. By scanning the microwave power we observeoscillations in the (1/2−,1) population. This allows us to find the rightpower for a π-pulse. We fit S=S0+Asin2(αx) (red solid line) to the data (bluedots), where x is proportional to the square root of the applied microwavepower, and S0, A and α are fitting parameters. (b) Power needed to drive aπ-pulse for each antinode of the standing wave. Solid and dashed lines are themean and standard deviation of the set.</figcaption></figure>
Next, we use the molecules to make a map of the microwave electric field amplitude inside the transmission line. We do this by measuring the population as a function of the time when a short, resonant microwave pulse is applied. Figure 6 shows an example of the data obtained. Here, we have set \(\delta=0\), \(\tau=15\,\mu\mathrm{s}\), and the microwave power such that \(\Omega\tau=\pi\) when the molecules are at the central antinode. When \(\delta=0\) the transition probability becomes \(P=\sin^{2}\left(\Omega\tau/2\right)\). The Rabi frequency \(\Omega\) is proportional to the electric field which varies along the transmission line according to equation (1). For this measurement, we take \(\Delta=0\) and so \(\Omega=\Omega_{\text{max}}\cos\left[2\pi\left(z-z_{0}\right)/\lambda\right]\). To account for the finite spread of the molecules we introduce an averaged Rabi frequency such that \(\Omega_{\text{max}}\tau=q\pi\) with \(q<1\). The line in figure 6 is a fit to the data using the expected model
\[S=S_{0}+A\sin^{2}\left(\frac{q\pi}{2}\cos\left[\frac{2\pi v(t-t_{0})}{\lambda}\right]\right)\,,\] (5)
where \(t=z/v\) and \(v=570\,\mathrm{m\,s^{-1}}\). The parameter \(q\), the offset \(S_{0}\), the amplitude \(A\), the initial time \(t_{0}\), and the wavelength \(\lambda\) are all fitting parameters. The fit yields \(\lambda=8.99\pm 0.01\,\mathrm{cm}\), in agreement with the expected wavelength at \(3.335\,\mathrm{GHz}\). This field map is essential for the Ramsey experiments described below, where we need to know exactly when the molecular pulse passes each antinode.
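A minimal sketch of the field-map model is given below; it assumes the form of equation (5) as reconstructed above, with \(\Omega_{\text{max}}\tau=q\pi\) and \(z=vt\), and uses illustrative parameter values rather than the fitted ones.

```python
import numpy as np

def field_map_signal(t, S0, A, q, t0, lam, v=570.0):
    """Fluorescence signal versus pulse time for a short resonant pulse, per the model of equation (5)."""
    omega_tau_over_2 = 0.5 * q * np.pi * np.cos(2 * np.pi * v * (t - t0) / lam)
    return S0 + A * np.sin(omega_tau_over_2)**2

# Illustrative values: the signal is largest when the molecules sit on an antinode
# (cosine = +/-1) and smallest when they sit on a node (cosine = 0).
t = np.linspace(0, 400e-6, 9)
print(field_map_signal(t, S0=0.1, A=1.0, q=0.9, t0=0.0, lam=0.0899))
```

In practice one would fit this model to data such as that in figure 6, for example with `scipy.optimize.curve_fit`, to extract \(\lambda\) and \(t_{0}\).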
<figure><img src="content_image/1403.0195/x6.png"><figcaption>Figure 6: Mapping the amplitude of the microwave field by recording thefluorescence as a function of the time when a resonant 15µs microwave pulse isapplied. The power corresponds to a π-pulse when the molecules are at thecentral antinode. Blue dots: data. Red line: fit using the model of equation(5).</figcaption></figure>
## 6 Frequency measurements
Next, we increase the interaction time, \(\tau\), and decrease the microwave power accordingly to maintain \(\Omega\tau=\pi\). Figure 7 shows the signal as a function of microwave frequency for the \((1/2^{+},1)-(1/2^{-},0)\) transition, when \(\tau=326\,\mu\mathrm{s}\). The timing is chosen so that the molecules pass through the central antinode half way through this interaction time. Because the field consists of two counter-propagating waves, parallel and anti-parallel to the molecular beam direction, we see two resonances separated by twice the Doppler shift \(\Delta\omega_{D}=\pm\omega_{0}\frac{v}{c}\), where \(v\) is the velocity of the molecules. To each resonance we fit the function \(S=S_{0}+\frac{\Omega^{2}}{\Omega^{2}+\delta^{2}}\sin^{2}\left(\frac{\sqrt{\Omega^{2}+\delta^{2}}\tau}{2}\right)\), with \(\tau=326\,\mu\mathrm{s}\), \(\Omega\tau=q\pi\), and \(\delta=\omega-\omega_{0}\). Here, \(S_{0}\), \(\omega_{0}\) and \(q\) are fitting parameters. The amplitudes of the two peaks are equal within their uncertainties of 3%. We find that the power needed for a \(\pi\)-pulse is the same for the two peaks to within 2%, showing that \(\Delta\) is less than 0.005 at this frequency. The mean of the two centre frequencies gives the Doppler-free resonance frequency, which we find to be \(3263793456\pm 17\,\mathrm{Hz}\) for the \((1/2^{+},1)-(1/2^{-},0)\) transition and \(3335479349\pm 7\,\mathrm{Hz}\) for the \((1/2^{+},1)-(1/2^{-},1)\) transition. It is well known that, when transitions are driven by a single pulse of radiation, splitting the line to such high accuracy can be vulnerable to systematic errors. Specifically, inhomogeneities in either the static field or the ac field can produce an asymmetric lineshape and a corresponding shift of the line centre. Remarkably, however, these resonance frequencies agree at the level of 7 Hz with the measurements below, using the Ramsey method. This agreement indicates that there is no such lineshape distortion at this level.
<figure><img src="content_image/1403.0195/x7.png"><figcaption>Figure 7: Single pulse measurement of the (1/2+,1)−(1/2−,0) transition withτ=326 µs. The red line is a fit using the model discussed in the text.</figcaption></figure>
For the most precise measurement of the resonance frequency we use Ramsey’s method of separated oscillatory fields [5]. This method is less sensitive to the field inhomogeneities noted above, and it reduces the resonance linewidth by approximately 40% compared with a single-pulse measurement of the same over-all duration. The pulse sequence consists of two short \(\pi/2\) pulses of duration \(\tau\) and angular frequency \(\omega\) separated by a period of free evolution \(T\). The first \(\pi/2\) pulse is applied when the molecular pulse is at antinode \(m_{1}\) and creates a superposition of the two \(\Lambda\)-doublet components. The coherence evolves freely for a time \(T\) at the transition angular frequency \(\omega_{0}\) and develops a phase difference \(\delta T\) relative to the microwave oscillator, where \(\delta=\omega-\omega_{0}\). A second \(\pi/2\) pulse completes the population transfer with a probability of
\[P_{\text{2-pulse}}\left(\delta\right)=\frac{4\pi^{2}\sin^{2}\left(\frac{X}{4}\right)}{X^{4}}\left[X\cos\left(\frac{X}{4}\right)\cos\left(\frac{\delta T+\beta}{2}\right)-2\delta\tau\sin\left(\frac{X}{4}\right)\sin\left(\frac{\delta T+\beta}{2}\right)\right]^{2}\,,\] (6)
where \(X=\sqrt{\pi^{2}+4\delta^{2}\tau^{2}}\) and \(\beta\) is any change in the phase of the microwave field between one pulse and the next (there is no such phase shift if the field is a perfect standing wave). We set \(\tau=15\,\mu\mathrm{s}\), and choose the free evolution time \(T=m\lambda/(2v_{0})-\tau\), where \(m\) is an integer, so that the molecules travel an integer number of half wavelengths between the start of one pulse and the start of the next, making \(\beta=0\) (modulo \(\pi\)) even for a travelling wave. Figure 8 shows data taken this way to measure the \((1/2^{-},1)-(1/2^{+},1)\) and \((3/2^{-},2)-(3/2^{+},1)\) transition frequencies. The figure shows data for several different free evolution times \(T\). The lines are fits using the model \(b+aP_{\text{2-pulse}}(\delta)\) with \(\beta=0\), and with \(\tau\) and \(T\) set to the values used in the experiment. This leaves only the offset \(b\), the amplitude \(a\) (negative if \(m\) is odd) and the resonance angular frequency \(\omega_{0}\) as fitting parameters. The frequencies measured for different values of \(m\) are all in agreement.
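The sketch below evaluates the two-pulse Ramsey lineshape of equation (6) and the choice of free evolution time \(T=m\lambda/(2v_{0})-\tau\); the numbers are illustrative.

```python
import numpy as np

def p_two_pulse(delta, tau, T, beta=0.0):
    """Ramsey transition probability for two pi/2 pulses, equation (6)."""
    X = np.sqrt(np.pi**2 + 4 * delta**2 * tau**2)
    pref = 4 * np.pi**2 * np.sin(X / 4)**2 / X**4
    bracket = (X * np.cos(X / 4) * np.cos((delta * T + beta) / 2)
               - 2 * delta * tau * np.sin(X / 4) * np.sin((delta * T + beta) / 2))
    return pref * bracket**2

tau, lam, v0, m = 15e-6, 0.0899, 570.0, 4
T = m * lam / (2 * v0) - tau                   # molecules travel m half-wavelengths between pulses
print(T)                                       # free evolution time in seconds (~300 us here)
print(p_two_pulse(0.0, tau, T))                # -> 1.0 on resonance with beta = 0
```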
<figure><img src="content_image/1403.0195/x8.png"><figcaption>Figure 8: Frequency measurements using the method of separated oscillatoryfields. Top: Population in the (1/2−,1) state as a function of the microwavefrequency for three different free evolution times, 458µs (green), 380µs(blue), 302µs (red). Bottom: Population in the (3/2+,1) state as a function ofthe microwave frequency for two different free evolution times, 650µs (green),330µs (blue). Points: data, Lines: fits using equation (6)</figcaption></figure>
## 7 Velocity-dependent frequency shifts
If \(\beta\) is not zero, there will be a systematic frequency shift, \(\delta f=\beta/\left(2\pi T\right)\). An obvious contribution to \(\beta\) comes from the amplitude imbalance between the co-propagating and counter-propagating waves, \(\beta=\phi(z_{2})-\phi(z_{1})\), where \(\phi(z)\) is given by equation (3) and \(z_{1,2}\) are the positions of the molecules at the start of the first and second pulses. As mentioned above, we aim to place \(z_{1}\) and \(z_{2}\) at antinodes of the standing wave. This can be achieved with high accuracy using the field map that we make for each frequency measurement, such as the one shown in figure 6, but there will always be some error, and there is a spread in the positions of the molecules. In fact, this spread in positions can be put to good use: it allows us to map out the position dependence of the phase. Consider a molecule whose arrival time at the detector is \(t_{d}\). It is at position \(z_{1}=vt_{0}\) at the time \(t_{0}\) when the first pulse starts, and at position \(z_{2}=v(t_{0}+\tau+T)\) when the second pulse starts, where \(v=D/t_{d}\) is its speed. For this molecule, the expected systematic shift is
\[\delta f=\frac{1}{2\pi T}\left\{\tan^{-1}\left[\Delta\tan\left(kv(t_{0}+\tau+T )-kz_{0}\right)\right]-\tan^{-1}\left[\Delta\tan\left(kvt_{0}-kz_{0}\right) \right]\right\}.\] (7)
We divide the time-of-flight profile into slices \(5\,\mu\mathrm{s}\) wide. For each slice we find the resonance frequency using the Ramsey method, and plot this against the arrival time. Figure 9 shows the data obtained this way for the \((1/2^{+},1)-(1/2^{-},1)\) transition. We see that the frequency shift is small for molecules near the central arrival time (\(\approx 1.35\) ms) because, when the pulses were applied, these molecules were close to the antinodes where the phase changes very slowly. The molecules that arrive later are moving more slowly, and they are closer to the nodes when the pulses are applied. Molecules arriving at \(1.4\,\mathrm{ms}\) are near the node when the second pulse is applied, and here we see a sudden change in the frequency shift. Those arriving even later, at about \(1.44\,\mathrm{ms}\), are at a node when the first pulse is applied, and we see another sudden change in the frequency shift. The line in figure 9 is a fit to the model \(f_{0}+\delta f\) where \(\delta f\) is given by equation (7) and \(f_{0}\) is the resonance frequency for molecules in the centre of the pulse. We fix \(t_{0}\), \(\tau\), \(T\) and \(D\) to the values used in the experiment, while \(f_{0}\), \(z_{0}\) and \(\Delta\) are fitting parameters. From the fit, we find \(\Delta=-0.079\pm 0.002\). This model agrees remarkably well with the data, showing that we have excellent control over position-dependent phases in the experiment. For frequency measurements, we use only those molecules that arrive during the period indicated by the shaded region in figure 9, where the frequency changes by less than 20 Hz. We note that for the \(J=3/2\) measurements, a plot similar to figure 9 showed no frequency dependence at the 20 Hz level over the entire range of arrival times.
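The following sketch evaluates the expected shift of equation (7) as a function of arrival time; all parameter values here are stand-ins chosen only to illustrate the behaviour near nodes and antinodes, not the fitted experimental values.

```python
import numpy as np

def systematic_shift(t_d, Delta, z0, t0, tau, T, D, lam):
    """Frequency shift of equation (7) for a molecule arriving at the detector at time t_d."""
    v = D / t_d                                 # speed inferred from the arrival time
    k = 2 * np.pi / lam
    phi1 = np.arctan(Delta * np.tan(k * v * t0 - k * z0))
    phi2 = np.arctan(Delta * np.tan(k * v * (t0 + tau + T) - k * z0))
    return (phi2 - phi1) / (2 * np.pi * T)

# Stand-in parameters: imbalance, antinode position, pulse timing, flight distance, wavelength.
t_d = np.linspace(1.30e-3, 1.45e-3, 7)
print(systematic_shift(t_d, Delta=-0.08, z0=0.0, t0=0.4e-3, tau=15e-6,
                       T=300e-6, D=0.78, lam=0.0899))
```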
<figure><img src="content_image/1403.0195/x9.png"><figcaption>Figure 9: Systematic frequency shift of the (1/2+,1)−(1/2−,1) transition as afunction of the arrival time of the molecules. For each 5µs interval of thetime-of-flight profile, the resonance frequency is measured by fittingequation (6) to Ramsey data. The red solid line is a fit to equation (7) wherethe fitting parameters are z0, Δ, and an overall frequency offset. All otherparameters are fixed to the values they have in the experiment. The shadedregions indicates the range of arrival times used for the frequencymeasurements.</figcaption></figure>
For molecules near the centre of the pulse, we can find a simple expression for the systematic frequency shift discussed above. Let \(z_{1}=z_{0}+\delta z_{0}\) and \(z_{2}=z_{1}+(1+\epsilon)m\lambda/2\), where \(z_{0}\) is now the position of any antinode, while \(\delta z_{0}/\lambda\ll 1\) and \(\epsilon\ll 1\) account for the imperfect timing of the microwave pulses due to the uncertainty in the velocity of the molecule. Expanding the trigonometric functions in equation (7) to first order in these small quantities, we find that the systematic frequency shift is
\[\delta f\simeq\epsilon\,\Delta\left(\frac{v}{\lambda}\right).\] (8)
This result shows that the frequency shift is independent of \(\delta z_{0}\) to first order and that the first-order Doppler shift (\(v/\lambda\)) is suppressed by the product of two small quantities: \(\Delta\), which is the imbalance factor, and \(\epsilon\), which is the fractional error in setting the interaction length to an integer number of half-wavelengths. We can set upper limits to \(\Delta\) in three ways: the finesse of the transmission line resonator (figure 4), the power needed to drive \(\pi\)-pulses for each of the resolved Doppler-shifted components (figure 7), and fitting to data such as that in figure 9. The first method gives \(\Delta<0.09\) but does not give a tight constraint because the finesse of the resonator is only partly determined by the reflection at the open end, the other part being the transmission at the input end. The other two methods give consistent results as follows: \(\Delta<0.08\) for the \(J=1/2\) components at 3335 and \(3349\,\mathrm{MHz}\), \(\Delta<0.005\) for the \(J=1/2\) component at \(3264\,\mathrm{MHz}\), and \(\Delta<0.08\) for all the \(J=3/2\) components. An upper limit to \(\epsilon\) comes from the maximum fractional uncertainty in determining the central speed of the pulse, which we estimate to be 0.03. In addition, molecules with different speeds have different values of \(\epsilon\). We select molecules from the time-of-flight profile with arrival times in the range \(t_{0}\pm\delta t\), where \(t_{0}\) is the most probable arrival time and \(\delta t\simeq 0.02t_{0}\), and so the spread in \(\epsilon\) values is \(\pm 0.02\). Thus, the maximum possible value of \(\epsilon\) in the experiment is 0.05. For the \(J=1/2\) measurement the upper limit to this systematic shift is \(0.04\,\mathrm{Hz/(m/s)}\). Note that for \(J=3/2\), the shift is about 5 times smaller for the same \(\epsilon\) and \(\Delta\), because of the longer wavelength.
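A quick numerical check of the bound implied by equation (8), using the upper limits quoted above (\(\Delta<0.08\), \(\epsilon\leq 0.05\), \(\lambda=90\,\mathrm{mm}\)):

```python
# Upper limit on the residual first-order Doppler shift per unit velocity, from
# delta_f ~ epsilon * Delta * (v / lambda), equation (8).
epsilon_max = 0.05       # fractional error in the interaction length
delta_max = 0.08         # amplitude imbalance for the J = 1/2 components
lam = 0.090              # wavelength in metres
shift_per_velocity = epsilon_max * delta_max / lam
print(f"{shift_per_velocity:.3f} Hz per (m/s)")   # ~0.04 Hz/(m/s), as quoted in the text
```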
There is also a second contribution to the velocity-dependent frequency shift which stems from the motion of the molecules during the two short \(\pi/2\) pulses. Consider first the interaction with a _travelling_ wave tuned to the resonant angular frequency \(\omega_{0}\). For a stationary molecule, the phase difference between the microwave oscillator and the oscillating dipole is \(\pi/2\), but for a moving molecule there is an additional contribution to this phase difference due to the Doppler shift \(\delta_{D}=2\pi v/\lambda\). To second order in \(\delta_{D}\), this phase difference is \([1-\tan(\Omega\tau/2)/(\Omega\tau)]\delta_{D}\tau\). Due to this phase shift, the population transfer is maximized by slightly detuning the microwave oscillator. To find the resulting systematic frequency shift, we include the Doppler shift in the expression for the Ramsey lineshape, expand this to second order in both the detuning and the phase shift, \(\beta\), between the two pulses, and then find the value of \(\beta\) that maximizes the population transfer. When \(T\gg\tau\) the frequency shift is
\[\delta f\simeq\frac{v}{\lambda}\,\frac{\tau}{T}\left[1-\frac{\tan\left(\Omega\tau/2\right)}{\Omega\tau/2}\right]\,.\] (9)
For our case, where \(\Omega\tau=\pi/2\), the bracketed quantity is \((1-4/\pi)\). We see that the Doppler shift is suppressed by the small quantity \(\tau/T\), which we may have expected since the microwave field is only applied for this fraction of the time. When there are two counter-propagating waves, we might expect the above expression to be further suppressed by the imbalance factor \(\Delta\), and our numerical modelling shows that this is indeed the case. With \(\Delta<0.08\) and typical values of \(\tau\) and \(T\), we find the shift to be less than \(0.01\,\mathrm{Hz/(m/s)}\).
There are other possible velocity-dependent frequency shifts in addition to those discussed above. For example, a position-dependence of the polarization can produce a frequency shift proportional to the velocity. To control these shifts, we measure the transition frequency for at least three different velocities. Figure 10(a) shows the measured frequency of the \((1/2^{+},1)-(1/2^{-},0)\) transition as a function of \(v_{0}\) for three different values of \(m_{1}\) (the antinode used for the first microwave pulse). For each data point in the figure, we average together at least three measurements with different values of \(m\), since we find no dependence on \(m\). We see that the measured frequency depends linearly on the velocity of the molecules and that the gradient \(df/dv_{0}\) differs for different values of \(m_{1}\). The largest gradient observed is \(0.05\pm 0.01\,\mathrm{Hz/(m/s)}\). After extrapolating to zero velocity, the measurements using various \(m_{1}\) are all in agreement, as shown in figure 10(b). We average together these zero-velocity results to obtain the final transition frequency, and we do this for all seven frequencies measured. For the \(J=3/2\) measurements, the largest velocity-dependence we observed was \(0.03\pm 0.01\,\mathrm{Hz/(m/s)}\), and we observed no dependence on \(m_{1}\).
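The extrapolation to zero velocity can be sketched as below; the data arrays are hypothetical stand-ins for measurements like those in figure 10, not the real values.

```python
import numpy as np

# Hypothetical measured frequency offsets (Hz, relative to a convenient reference) at four
# beam velocities for one value of m1; purely illustrative numbers.
v0 = np.array([510.0, 540.0, 570.0, 600.0])     # most probable velocities (m/s)
f = np.array([2.1, 3.4, 5.0, 6.4])              # measured frequency offsets (Hz)

# Straight-line fit f = intercept + gradient * v0; the intercept is the zero-velocity frequency.
gradient, intercept = np.polyfit(v0, f, 1)
print(f"gradient = {gradient:.3f} Hz/(m/s)")    # comparable to the 0.05 Hz/(m/s) quoted above
print(f"zero-velocity frequency offset = {intercept:.1f} Hz")
```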
<figure><img src="content_image/1403.0195/x10.png"><figcaption>Figure 10: (a) Resonance frequency of the (1/2+,1)−(1/2−,0) transition as afunction of the most probable velocity v0, for m1=3,4,5 (blue, red, green).(b) Extrapolated zero-velocity frequencies for the three values of m1. Theyagree within the uncertainty of the linear fits and we take the weighted mean(solid line). The 1-σ standard error of the weighted mean is shown by thedashed lines.</figcaption></figure>
## 8 Zeeman shifts
Next, we consider systematic frequency shifts due to magnetic fields in the interaction region. For small magnetic fields, the Zeeman splitting is linear and symmetric about the line centre. The amplitudes of the components are also symmetric if the microwave field is linearly polarized. Frequency shifts arise due to circular polarization components and/or higher-order Zeeman shifts. To minimize these shifts the interaction region is magnetically shielded (see figure 2).
In the \(J=1/2\) state the magnetic moments arising from the orbital and spin angular momenta are very nearly equal and opposite and the \(g\)-factor is of order \(10^{-3}\). For the \(J=1/2\) measurements, we used just a single-layer mu-metal shield to reduce the background magnetic field to acceptable levels. We applied fields as large as \(50\,\mu\mathrm{T}\) in each direction and observed no shift of the frequencies measured using the Ramsey method, at the \(1\,\mathrm{Hz}\) level. Since the residual magnetic field is far smaller than this, Zeeman shifts are negligible in the \(J=1/2\) measurements.
The g-factor is much larger in the \(J=3/2\) state (\(g=1.081\) for \(F=1\) and \(g=0.648\) for \(F=2\)) and so for the \(J=3/2\) measurements we improved the shielding by using a two-layer shield. Inside the shields we create homogeneous magnetic fields along \(x\) and \(y\) using Helmholtz coils and along \(z\) using a solenoid. These coils are calibrated with a fluxgate magnetometer. To measure the Zeeman splitting of the hyperfine components, we apply large enough magnetic fields to resolve the splitting of the spectral line obtained using a single microwave pulse of \(780\,\mu\mathrm{s}\) duration. Figure 11(a) shows the Zeeman shift of the \((3/2^{+},2)-(3/2^{-},2)\) hyperfine component as a function of the magnetic field applied in the \(y\)-direction, parallel to the polarization of the microwaves. In this case, only the \(\Delta M_{F}=0\) components are driven, and the shift is quadratic and negative. This shift is due to mixing of the \(M_{F}=0,\pm 1\) levels of \(F=2\) with those same components in \(F=1\). In the f state, \(F=1\) lies lower (see figure 1) and the mixing raises the energy of \(F=2\). The opposite is true for the e state, where the shift is also far smaller because of the larger hyperfine splitting. Therefore, the shift of this transition is negative and is determined mainly by the shift of the lower \(F=2\) level. We measure a quadratic Zeeman shift of \(-12.6\pm 1.1\) Hz/(\(\mu\)T)\({}^{2}\), which is consistent with our calculation. Figure 11(b) shows the Zeeman shift of the same hyperfine component as a function of the magnetic field applied in the \(x\)-direction. In this case, we drive \(\Delta M_{F}=\pm 1\) transitions and observe a linear Zeeman shift of \(9.34\pm 0.28\) Hz/nT, again consistent with our calculation. Here, the error is dominated by the uncertainty in the calibration of the magnetic field coils. We determine the residual magnetic field averaged over the interaction region by measuring the change in Zeeman shift upon reversal of the applied magnetic fields. We also look for any broadening of the single-pulse lineshape due to residual Zeeman splittings. Together, these set upper limits to the background magnetic field of 3, 56, and \(25\,\mathrm{nT}\) along \(x\), \(y\) and \(z\) respectively.
<figure><img src="content_image/1403.0195/x11.png"><figcaption>Figure 11: Zeeman shifts of the (3/2+,2)−(3/2−,2) transition, measured usingsingle microwave pulses of 780µs duration. (a) Quadratic Zeeman shift of theΔMF=0 transitions versus field applied along y. Line: fit to a quadraticmodel. (b) Linear Zeeman shift of the ΔMF=±1 transitions versus field along x.We plot the component that shifts to lower frequency, which is the ΔMF=+1(−1)component for negative (positive) values of the field. Line: fit to a linearmodel.</figcaption></figure>
In the frequency measurements made using the Ramsey method, we look for a shift in the resonance frequency as a function of applied magnetic fields. For the \((3/2^{+},2)-(3/2^{-},2)\) transition we measure a maximum gradient of \(-0.017\pm 0.013\) Hz/nT for fields applied along \(x\), \(-0.036\pm 0.005\) Hz/nT along \(y\) and \(-0.005\pm 0.003\) Hz/nT along \(z\). Using these, and the upper limits for background magnetic fields, we get upper limits for systematic frequency shifts of \(0.1\,\mathrm{Hz}\), \(2\,\mathrm{Hz}\) and \(0.2\,\mathrm{Hz}\) for residual fields along \(x\), \(y\) and \(z\) respectively. Adding these in quadrature gives a total systematic uncertainty due to uncontrolled magnetic fields of \(2\,\mathrm{Hz}\). We assume the same systematic uncertainty for the \((3/2^{+},1)-(3/2^{-},1)\) transition due to the similar Zeeman structure.
For the \((3/2^{+},1)-(3/2^{-},2)\) transition we could not rule out gradients as large as 0.1 Hz/nT along x, 0.2 Hz/nT along y and 0.05 Hz/nT along z. Multiplying these by the upper limits to the residual field gives a systematic uncertainty of \(11\,\mathrm{Hz}\). We assume the same uncertainty for the \((3/2^{+},2)-(3/2^{-},1)\) transition due to its similar Zeeman structure.
## 9 Stark shifts
Figure 12(a) shows the calculated Stark shift of the two \(J=1/2\)\(\Lambda\)-doublet levels in low electric fields. The two levels shift oppositely and quadratically. Using a dipole moment of 1.46 D [23] we calculate a shift of \(\mp 17.8\) Hz/(V/cm)\({}^{2}\). For \(J=1/2\) the shift has virtually no dependence on \(F\) or \(M_{F}\). To measure the Stark shift of the microwave transition we use single, long microwave pulses, apply a DC voltage to one of the plates of the transmission line via a biased tee, and record the transition frequency as a function of the electric field. Figure 12(b) shows our data for the \((1/2^{+},1)-(1/2^{-},1)\) transition. The line is a fit to a parabola, \(\delta f=a(E-E_{b})^{2}\), where \(E\) is the applied field and \(E_{b}\) the background electric field. This fit gives a Stark shift of \(33\pm 3\) Hz/(V/cm)\({}^{2}\), where the error is dominated by the systematic uncertainty in the plate spacing. The background field is consistent with zero, and from the uncertainty in this field we obtain an upper limit to uncontrolled Stark shifts of \(0.1\,\mathrm{Hz}\). For \(J=3/2\), the Stark shift depends on \(F\) and \(M_{F}\). Using the same procedure as before we obtain a systematic uncertainty of \(0.2\,\mathrm{Hz}\) for the \(J=3/2\) measurements.
<figure><img src="content_image/1403.0195/x12.png"><figcaption>Figure 12: (a) Calculated Stark shift of the Λ-doublet states for low electricfields. There is no dependence on F or MF. (b) Measured frequency shift of the(1/2+,1)−(1/2−,1) transition.</figcaption></figure>
## 10 Other systematic uncertainties
We test for systematic frequency shifts that depend on the microwave power by using shorter \(\pi/2\) pulses in a Ramsey experiment. We reduce the pulse length from 15 to \(4\,\mu\mathrm{s}\), increasing the power by a factor of 14, and do not find any frequency shift. We also do not find any dependence on the probe laser detuning.
Scanning the microwave frequency can change the power in the resonator and this can lead to a systematic frequency shift. Consider a worst-case model where the molecular signal depends linearly on the microwave power and the resonator is badly tuned so that the molecular frequency is half way down the side of a resonance, where the gradient is steepest. Suppose the molecular signal is a Gaussian with a standard deviation \(w\), and the transmission line resonances are Lorentzian with FWHM \(W\). We find that there is a systematic shift of \(|\Delta f|=2w^{2}/W\). In the experiment, typical values are \(w\simeq 1.5\) kHz and \(W\simeq 30\) MHz, leading to a shift of only \(0.15\,\mathrm{Hz}\). In reality, the shift is much smaller for two reasons. First, we tune the transmission line to be on resonance at the molecular frequency. Second, we choose the power that maximizes the molecular signal, i.e. a \(\pi\)-pulse in a single-pulse measurement or \(\pi/2\)-pulses in a Ramsey measurement, and so the signal has no first derivative with respect to power.
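The worst-case estimate quoted above amounts to the following arithmetic (values taken from the text):

```python
# Worst-case shift |df| = 2 w^2 / W for a molecular line of standard deviation w sitting on
# the steepest part of a transmission-line resonance of full width W.
w = 1.5e3        # molecular linewidth parameter (Hz)
W = 30e6         # FWHM of a transmission line resonance (Hz)
print(2 * w**2 / W)   # -> 0.15 Hz, as stated in the text
```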
Unwanted frequency sidebands may be produced, for example by the microwave oscillator or by the switching electronics, and these can lead to systematic shifts if there is an asymmetry in the amplitudes of the sidebands. We have measured the frequency spectrum and find no sidebands down to -40 dB within \(1\,\mathrm{MHz}\) of the oscillator frequency. For our parameters, we estimate that the systematic frequency shift will be largest for an asymmetric sideband with an offset of \(40\,\mathrm{kHz}\) [24]. Then, if the amplitude of this single sideband is -40 dB, the resulting frequency shift is only \(10\,\mu\mathrm{Hz}\).
Systematic frequency shifts due to blackbody radiation, the motional Stark effect, the second-order Doppler shift, and collisional shifts, are also all negligible at the current accuracy level.
## 11 Conclusions
Table 1 gives our final transition frequencies, reproduced from [4]. We add the statistical and systematic uncertainties in quadrature to give the total uncertainty.
Transition | Frequency (Hz)
---|---
(1/2+,1)−(1/2−,1) | 3335479356±3
(1/2+,0)−(1/2−,1) | 3349192556±3
(1/2+,1)−(1/2−,0) | 3263793447±3
(3/2+,2)−(3/2−,2) | 701677682±6
(3/2+,1)−(3/2−,1) | 724788315±16
(3/2+,1)−(3/2−,2) | 703978340±21
(3/2+,2)−(3/2−,1) | 722487624±16
Table 1: The measured Λ-doublet transition frequencies with their 1σ uncertainties.
Our method of microwave spectroscopy is exceptionally versatile and accurate. The method can be used for any molecule that can be produced in a pulsed supersonic beam, and the same apparatus can be used over a very wide frequency range, including low frequencies where a conventional microwave cavity would be too large. Ramsey spectroscopy is more commonly done using two separate cavities with their axes perpendicular to the molecular beam direction. The phase difference between the two cavities then needs to be accurately controlled, which can be difficult to achieve. In the transmission line resonator, control over the relative phase of the two pulses is straightforward because the field is supported by a single structure. With high-Q cavities it is necessary to scan the cavity in synchronism with the microwave frequency. By contrast, the transmission line resonances are broad enough that it is not necessary to tune the line as the frequency is scanned. We emphasize the importance of choosing the plate width and spacing so that higher-order modes, including the ‘leaky modes’, are strongly attenuated. With only the TEM mode able to propagate, the field is very well controlled. In our setup, molecular resonance frequencies can be measured either using a single, long microwave pulse, or using the Ramsey method of two short pulses separated in time. These pulses can be applied when the molecules are at any position along the transmission line and so the molecules can be used to map out the amplitude and phase of the microwave field, providing an exceptional degree of control. For this mapping method to work well it is important to produce short, cold pulses of molecules, and to use a detector with adequate time resolution. Our CH source produces pulses that are just a few millimetres in length and with a translational temperature as low as 400 mK.
Because the field is a standing wave, Doppler shifts are very strongly suppressed in the experiment. As described in section 7, the residual Doppler shift in the Ramsey measurements is proportional to two small factors, one being the imbalance in amplitude between the two counter-propagating waves, the other being the fractional error in setting the interaction length to an integer number of half-wavelengths. In this experiment, we observed velocity-dependent shifts at the level of \(0.05\,\mathrm{Hz}\mathrm{/}\mathrm{(}\mathrm{m}\mathrm{/}\mathrm{s}\mathrm{)}\) or less. We measured and eliminated these shifts by varying the beam velocity. The experiment reached a precision of \(3\,\mathrm{Hz}\), limited mainly by the statistical uncertainty in extrapolating to zero velocity. The velocity-dependent shifts could be reduced by improving the reflection at the end of the transmission line to reduce the amplitude imbalance between the counter-propagating waves, and by improving the way the microwaves are launched into the transmission line to eliminate field non-uniformities in this region. A Stark decelerator [25] could also be used to reduce the velocity of the beam by a factor of 10, giving both longer interaction times and improved velocity control [26].
Individual frequency measurements reached a statistical uncertainty of \(1\,\mathrm{Hz}\) within about 1 hour. This was partly limited by a background of scattered laser light that reaches the detector, and partly by shot-to-shot fluctuations of the source. The background could be reduced by improving the shape of the probe laser mode, and the source noise could be reduced by using a second laser-induced-fluorescence detector upstream of the experiment to record the number of molecules produced in each shot. The photon shot noise limit could then be reached, giving an uncertainty of \(1\,\mathrm{Hz}\) in just a few minutes of integration.
## Acknowledgements
We thank Ben Sauer, Jony Hudson and Heather Lewandowski for their help and advice. We are indebted to Jon Dyne, Steve Maine and Valerijus Gerulis for their expert technical assistance. This work was supported by the EPSRC and the Royal Society.
## References
* (1) J.-P. Uzan, The fundamental constants and their variation: observational and theoretical status, Reviews of Modern Physics 75 (2) (2003) 403–455. doi:10.1103/RevModPhys.75.403. URL http://link.aps.org/doi/10.1103/RevModPhys.75.403
* (2) M. G. Kozlov, \(\Lambda\)-doublet spectra of diatomic radicals and their dependence on fundamental constants, Physical Review A 80 (2) (2009) 1–10. doi:10.1103/PhysRevA.80.022118. URL http://link.aps.org/doi/10.1103/PhysRevA.80.022118
* (3) A. J. de Nijs, W. Ubachs, H. L. Bethlem, Sensitivity of rotational transitions in CH and CD to a possible variation of fundamental constants, Physical Review A 86 (3) (2012) 032501. doi:10.1103/PhysRevA.86.032501. URL http://link.aps.org/doi/10.1103/PhysRevA.86.032501
* (4) S. Truppe, R. J. Hendricks, S. K. Tokunaga, H. J. Lewandowski, M. G. Kozlov, C. Henkel, E. A. Hinds, M. R. Tarbutt, A search for varying fundamental constants using hertz-level frequency measurements of cold CH molecules., Nature communications 4 (2013) 2600. doi:10.1038/ncomms3600. URL http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3826645&tool=pmcentrez&rendertype=abstract
* (5) N. F. Ramsey, A molecular beam resonance method with separated oscillating fields, Physical Review 78 (6) (1950) 695–699. doi:10.1103/PhysRev.78.695. URL http://link.aps.org/doi/10.1103/PhysRev.78.695
* (6) J. J. Hudson, H. T. Ashworth, D. M. Kara, M. R. Tarbutt, B. E. Sauer, E. A. Hinds, Pulsed beams as field probes for precision measurement, Physical Review A 76 (2007) 033410.
* (7) G. Herzberg, J. W. C. Johns, New spectra of the CH molecule, The Astrophysical Journal 158 (1969) 399. doi:10.1086/150202. URL http://adsabs.harvard.edu/doi/10.1086/150202
* (8) G. Herzberg, Spectra of diatomic molecules, Van Nostrand, 1950.
* (9) J. M. Brown, A. Carrington, Rotational spectroscopy of diatomic molecules, The Press Syndicate of the University of Cambridge, Cambridge, 2003.
* (10) D. Gerlich, S. Horning, Experimental investigation of radiative association processes as related to interstellar chemistry, Chemical Reviews 92 (7) (1992) 1509–1539. doi:10.1021/cr00015a003. URL http://pubs.acs.org/doi/abs/10.1021/cr00015a003
* (11) R. J. Chastain, D. Cotten, L. Magnani, High-resolution CH observations of two translucent molecular clouds, The Astronomical Journal 139 (1) (2010) 267–278. doi:10.1088/0004-6256/139/1/267. URL http://stacks.iop.org/1538-3881/139/i=1/a=267?key=crossref.c48a3cb01528f23a5ebc3503fc821304
* (12) T. Dunham Jr., Interstellar Neutral Potassium and Neutral Calcium, Publications of the Astronomical Society of the Pacific 49 (1937) 26. doi:10.1086/124759. URL http://ucp.uchicago.edu/cgi-bin/resolve?id=doi:10.1086/124759
* (13) O. E. H. Rydbeck, J. Ellder, W. M. Irvine, Radio Detection of Interstellar CH, Nature 246 (5434) (1973) 466–468. doi:10.1038/246466a0. URL http://www.nature.com/doifinder/10.1038/246466a0
* (14) B. E. Turner, B. Zuckerman, Microwave detection of interstellar CH, The Astrophysical Journal 187 (1974) L59. doi:10.1086/181396. URL http://adsabs.harvard.edu/doi/10.1086/181396
* (15) L. M. Ziurys, B. E. Turner, Detection of interstellar rotationally excited CH, The Astrophysical Journal 292 (1985) L25. doi:10.1086/184466. URL http://adsabs.harvard.edu/doi/10.1086/184466
* (16) C. R. Brazier, The microwave spectrum of the CH free radical, The Journal of Chemical Physics 78 (3) (1983) 1608. doi:10.1063/1.444853. URL http://link.aip.org/link/?JCP/78/1608/1&Agg=doi
* (17) M. C. McCarthy, S. Mohamed, J. M. Brown, P. Thaddeus, Detection of low-frequency lambda-doublet transitions of the free \({}^{12}\)CH and \({}^{13}\)CH radicals., Proceedings of the National Academy of Sciences of the United States of America 103 (33) (2006) 12263–8. doi:10.1073/pnas.0601746103. URL http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1567868&tool=pmcentrez&rendertype=abstract
* (18) J. Lindner, Multi-photon dissociation of CHBr\({}_{3}\) at 248 and 193 nm: observation of the electronically excited CH(A\({}^{2}\Delta\)) product, Chemical Physics 238 (2) (1998) 329–341. doi:10.1016/S0301-0104(98)00303-6. URL http://linkinghub.elsevier.com/retrieve/pii/S0301010498003036
* (19) C. Romanzin, S. Boyé-Péronne, D. Gauyacq, Y. Bénilan, M.-C. Gazeau, S. Douin, CH radical production from 248 nm photolysis or discharge-jet dissociation of CHBr\({}_{3}\) probed by cavity ring-down absorption spectroscopy., The Journal of Chemical Physics 125 (11) (2006) 114312. doi:10.1063/1.2333456. URL http://www.ncbi.nlm.nih.gov/pubmed/16999479
* (20) S. Truppe, New Physics with Cold Molecules: Precise Microwave Spectroscopy of CH and the Development of a Microwave Trap, Phd thesis, Imperial College London (2013).
* (21) S. Truppe, R. J. Hendricks, E. A. Hinds, M. R. Tarbutt, Measurement of the lowest millimetre-wave transition frequency of the CH radical, arXiv:1309.3301v1.
* (22) A. M. Rushdi, R. C. Menendez, R. Mittra, S. W. Lee, Leaky modes in parallel-plate EMP simulators, IEEE Transactions on Electromagnetic Compatibility EMC-20 (3) (1978) 443–451.
* (23) D. H. Phelps, F. W. Dalby, Experimental determination of the electric dipole moment of the ground electronic state of CH, Physical Review Letters 16 (1) (1966) 3–4.
* (24) C. Audoin, M. Jardino, L. S. Cutler, R. F. Lacey, Frequency offset due to spectral impurities in cesium-beam frequency standards, IEEE Transactions on Instrumentation and Measurement IM-27 (4) (1978) 325–329.
* (25) H. Bethlem, G. Berden, G. Meijer, Decelerating Neutral Dipolar Molecules, Physical Review Letters 83 (8) (1999) 1558–1561. doi:10.1103/PhysRevLett.83.1558. URL http://link.aps.org/doi/10.1103/PhysRevLett.83.1558
* (26) E. R. Hudson, H. J. Lewandowski, B. C. Sawyer, J. Ye, Cold molecule spectroscopy for constraining the evolution of the fine structure constant, Physical Review Letters 96 (14) (2006) 1–4. doi:10.1103/PhysRevLett.96.143004. URL http://link.aps.org/doi/10.1103/PhysRevLett.96.143004
|
1710.06374 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 58521,
"num_imgs": 0,
"llama3_tokens_count": 21831
} | [] | # A Variation on Hölder-Brascamp-Lieb Inequalities
Kevin O’Neill
###### Abstract.
The Hölder-Brascamp-Lieb inequalities are a collection of multilinear inequalities generalizing a convolution inequality of Young and the Loomis-Whitney inequalities. The full range of exponents was classified in Bennett et al. [3]. In a setting similar to that of Ivanisvili and Volberg [11], we introduce a notion of size for these inequalities which generalizes \(L^{p}\) norms. Under this new setup, we then determine necessary and sufficient conditions for a generalized Hölder-Brascamp-Lieb type inequality to hold and establish sufficient conditions for extremizers to exist when the underlying linear maps match those of the convolution inequality of Young.
## 1. Introduction
In a dual form, Young’s convolution inequality on \(\mathbb{R}^{d}\) states that
\[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}f(y)g(x-y)h(x)dxdy\leq C_{p,q,r,d}|| f||_{p}||g||_{q}||h||_{r},\] (1)
where \(p,q,r\in[1,\infty]\), \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=2\) (interpreting \(1/\infty\) as 0) and \(C_{p,q,r,d}\) is the optimal constant.
It was established in [1], [12], and [5] that certain compatible triplets of Gaussians are the extremizers of (1), providing a sharp form of the inequality. Later [6] proved this by running the heat equation through time with \(f,g\), and \(h\) as initial data and showing that the left hand side is nondecreasing with time.
[3] provides the following generalization of Young’s inequality which also encompasses Hölder’s inequality and the Loomis-Whitney inequality. Let \(d,n,d_{j}\) be positive integers (\(1\leq j\leq n\)) and let \(L_{j}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{j}}\) be surjective linear maps. Then there exists \(C<\infty\) such that
\[\int_{\mathbb{R}^{d}}\prod_{j=1}^{n}f_{j}(L_{j}(x))dx\leq C\prod_{j=1}^{n}||f_ {j}||_{L^{p_{j}}(\mathbb{R}^{d_{j}})},\] (2)
for all \(f_{j}\in L^{p_{j}}(\mathbb{R}^{d_{j}})\) and with \(C\) depending only on \(d,n,d_{j}\), and \(L_{j}\), if and only if both
\[\sum_{j=1}^{n}\frac{d_{j}}{p_{j}}=d\] (3)
and
\[\dim(V)\leq\sum_{j=1}^{n}\frac{\dim(L_{j}V)}{p_{j}}\] (4)
for all subspaces \(V\subset\mathbb{R}^{d}\). The set of exponents \((1/p_{1},...,1/p_{n})\) satisfying both (3) and (4) is called the _Hölder-Brascamp-Lieb (HBL) polytope_. Thus, the HBL polytope is compact and convex with finitely many extreme points.
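As a concrete check, the sketch below verifies the scaling condition (3) and the subspace condition (4) for the maps of Young's inequality with \(k=1\) (so \(d=2\), \(d_{j}=1\)), testing a few one-dimensional subspaces; it is a numerical illustration only, not part of any proof.

```python
import numpy as np

# Young's inequality maps on R^2: L1(x, y) = y, L2(x, y) = x - y, L3(x, y) = x.
L = [np.array([[0.0, 1.0]]), np.array([[1.0, -1.0]]), np.array([[1.0, 0.0]])]
d, d_j = 2, [1, 1, 1]
s = [2.0 / 3, 2.0 / 3, 2.0 / 3]      # exponents s_j = 1/p_j with p_j = 3/2

# Condition (3): sum_j s_j d_j = d.
print(np.isclose(sum(sj * dj for sj, dj in zip(s, d_j)), d))

# Condition (4): dim(V) <= sum_j s_j dim(L_j V) for some test subspaces V
# (each spanned by the columns of the matrices below).
test_subspaces = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]]),
                  np.array([[1.0], [1.0]]), np.eye(2)]
for V in test_subspaces:
    dimV = np.linalg.matrix_rank(V)
    rhs = sum(sj * np.linalg.matrix_rank(Lj @ V) for sj, Lj in zip(s, L))
    print(dimV <= rhs + 1e-12)
```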
One may obtain (1) from (2) by setting \(d=2k,n=3,d_{j}=k\), and \(L_{1}(x,y)=y,L_{2}(x,y)=x-y,L_{3}(x,y)=x\), where \(\mathbb{R}^{2k}=\{(x,y):x,y\in\mathbb{R}^{k}\}\). [2] proved the existence of extremizers (in particular, certain tuples of Gaussians) by a generalization of the above heat equation method. (2) may be rewritten in the form
\[\int_{\mathbb{R}^{d}}\prod_{j=1}^{n}f_{j}(L_{j}(x))^{s_{j}}dx\leq C\prod_{j=1} ^{n}\left(\int_{\mathbb{R}^{d_{j}}}f_{j}\right)^{s_{j}},\] (5)
where \(s_{j}=1/p_{j}\) and \(f_{j}\geq 0\). (This is a nonrestricting assumption since \(|\int f|\leq\int|f|\).) In this paper, we will frequently use the notation \(s=(s_{1},...,s_{n})\). The above may be rewritten as
\[\int_{\mathbb{R}^{d}}B(f_{1}(L_{1}(x)),...,f_{n}(L_{n}(x)))dx\leq CB\left(\int \!f_{1},...,\int\!f_{n}\right),\] (6)
where \(B(y_{1},...,y_{n})=y_{1}^{s_{1}}\cdots y_{n}^{s_{n}}\). In this paper, we will say \(B:\mathbb{R}^{n}_{+}\rightarrow\mathbb{R}_{+}\) is a _Hölder-Brascamp-Lieb (HBL) function for \(\{L_{j}\}\)_ if (6) holds for all nonnegative \(f_{j}\in L^{1}(\mathbb{R}^{d_{j}})\) . Here \(\mathbb{R}_{+}=[0,\infty)\).
A similar question was explored in [11] in the case where the \(L_{j}\) maps are rank 1 (\(d_{j}\equiv 1\)). The authors found sufficient conditions on \(B\) for the left hand side of (6) to be bounded by the same expression where the \(f_{j}\) are replaced with certain Gaussians \(G_{j}\) with \(\int f_{j}=\int G_{j}\). A corollary of this result is that certain tuples of Gaussians are among the extremizers. The key condition was a concavity requirement on \(B\) which allowed the heat equation method from [6] to work. Their bounding term matches ours in the case where each of the \(L_{j}\) is the identity.
In this paper, we will remove the rank 1 restriction and provide necessary and sufficient conditions for a function \(B:\mathbb{R}_{+}^{n}\rightarrow\mathbb{R}_{+}\) to be an HBL function in the following theorem to be proven in Section 2. Part of the proof will involve the construction of a parallelepiped with certain dimensions through a dual linear programming problem as in [9].
By \(A\lesssim B\), we mean that there exists a \(0<C<\infty\) such that \(A\leq CB\) and by \(A\gtrsim B\), we mean there exists a \(0<C^{\prime}<\infty\) such that \(A\geq C^{\prime}B\). \(A\approx B\) means \(A\lesssim B\) and \(A\gtrsim B\).
**Theorem 1**.: _Let \(B:[0,\infty)^{n}\rightarrow[0,\infty)\) be nondecreasing in each coordinate and satisfy \(B(y_{1},...,y_{n})=0\) whenever any of the \(y_{j}\) are 0. Let \(d,d_{j},1\leq j\leq n\) be positive integers and \(L_{j}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d_{j}}\) surjective linear maps whose Hölder-Brascamp-Lieb polytope \(\mathcal{P}\) is nonempty. Then the following are equivalent:_
_1) \(B\) is an HBL function for \(\{L_{j}\}\)._
_2) For all \(0<\lambda_{j},y_{j}<\infty\),_
\[B(\lambda_{1}y_{1},...,\lambda_{n}y_{n})\lesssim\max_{s\in\mathcal{P}}\lambda_{1}^{s_{1}}\cdots\lambda_{n}^{s_{n}}B(y_{1},...,y_{n}).\] (7)
_3) For all \(0<\lambda_{j},y_{j}<\infty\),_
\[B(\lambda_{1}y_{1},...,\lambda_{n}y_{n})\gtrsim\min_{s\in\mathcal{P}}\lambda_{1}^{s_{1}}\cdots\lambda_{n}^{s_{n}}B(y_{1},...,y_{n}).\] (8)
Allowing for a change of underlying constant, each of the possible conclusions in the above theorem is invariant under multiplication of \(B\) by a bounded function with bounded inverse. Thus, the theorem still holds if we replace the hypothesis that \(B\) is nondecreasing in each coordinate with the weaker hypothesis that \(B\) is bounded above and below by a positive multiple of a function which is nondecreasing in each coordinate.
The remainder of the paper is dedicated to the question of extremizers, and we will transfer some previous results into this newer setup. In particular, we will focus on the choice of \(d,n,d_{j},L_{j}\) used in Young’s inequality to emphasize the differences in setting rather than prove statements in their most general form.
In Section 3, we will state and prove a rearrangement inequality that allows one to replace \(f_{j}\) with their symmetric decreasing rearrangements. The proof of this uses the classical technique found in [8], where it was shown that \(\int F(f(x),g(x))dx\leq\int F(f^{*}(x),g^{*}(x))dx\) for certain \(F\) satisfying a second-order condition.
In Section 4, we will show that for certain \(B\), near-extremizer triples of (6) must be localized in scale and that these scales must be close for each function in the triple. This result is similar to the one found in [7] for the setting of \(L^{p}\) norms and will be used in establishing precompactness. Section 5 will piece together these arguments to establish the existence of extremizers in certain cases of HBL functions, as stated in the following theorem.
For notation, let \(\vec{y}=(y_{1},...,y_{n})\) denote a vector in \(\mathbb{R}^{n}_{+}\) and let \(\Delta_{3}(B;a,b,c,d,e,f)\) denote the third order difference:
\[B(b,d,f)-B(a,d,f)-B(b,c,f)-B(b,d,e)+B(b,c,e)+B(a,d,e)+B(a,c,f)-B(a,c,e).\] (9)
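For a single power product \(B(a,b,c)=a^{1/p}b^{1/q}c^{1/r}\) of the type appearing in Theorem 2 below, the third-order difference (9) factors as a product of three nonnegative increments and is therefore nonnegative; the sketch below checks this numerically on random inputs (an illustration, not a proof).

```python
import numpy as np

def delta3(B, a, b, c, d, e, f):
    """Third-order difference of equation (9)."""
    return (B(b, d, f) - B(a, d, f) - B(b, c, f) - B(b, d, e)
            + B(b, c, e) + B(a, d, e) + B(a, c, f) - B(a, c, e))

# Example: B(a, b, c) = a^(1/p) b^(1/q) c^(1/r) with 1/p + 1/q + 1/r = 2.
B = lambda x, y, z: x**(2/3) * y**(2/3) * z**(2/3)

rng = np.random.default_rng(0)
for _ in range(1000):
    a, c, e = rng.uniform(0, 10, 3)
    b, d, f = a + rng.uniform(0, 10), c + rng.uniform(0, 10), e + rng.uniform(0, 10)
    assert delta3(B, a, b, c, d, e, f) >= -1e-12
print("Delta_3 >= 0 on all sampled inputs")
```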
**Theorem 2**.: _Let \(P_{i}(a,b,c)=a^{1/p_{i}}b^{1/q_{i}}c^{1/r_{i}}\), where \(p_{i},q_{i},r_{i}\in(1,\infty)\) and \(1/p_{i}+1/q_{i}+1/r_{i}=2\). Let \(B=\rho(P_{1},...,P_{n})\) where_
\[\rho(\lambda_{1}y_{1},...,\lambda_{n}y_{n})\leq C\max_{i}\lambda_{i}\rho(y_{1} ,...,y_{n})\]
_for all \(0<\lambda_{i},y_{i}<\infty\) and_
\[\rho(\vec{y_{1}})+\rho(\vec{y_{2}})\leq\rho(\vec{y_{1}}+\vec{y_{2}})\]
_for all \(\vec{y_{i}}\in\mathbb{R}^{n}_{+}\). Furthermore, suppose \(B\) is continuous with_
\[B(0,0,0)=B(x,0,0)=B(0,y,0)=B(0,0,z)=0,\]
_along with_
\[\Delta_{3}(B;a,b,c,d,e,f)\geq 0\]
_for all \(a\leq b,c\leq d,e\leq f\)._
_Let \(\alpha,\beta,\gamma>0\). Then, there exist \(f,g,h\) which maximize_
\[\iint B(f(y),g(x-y),h(x))dxdy\]
_under the constraint_ \(\int\!f=\alpha,\int\!g=\beta,\int\!h=\gamma\)_._
The setup of Theorem 2 includes the hypotheses of the rearrangement inequality from Section 3 as well as conditions which allow us to use some tools from the \(L^{p}\) norms setting while also extending the conclusion to other HBL functions.
Lastly, Section 6 will provide an example of an HBL function which leads to non-Gaussian extremizers. We will prove this to be the case by showing that no Gaussian is a critical point with regards to the Euler-Lagrange equations and referencing the existence of extremizers result from Section 5.
The author would like to thank his advisor, Michael Christ, for all his support during this project.
## 2. Necessary and Sufficient Conditions for HBL functions
The proofs of (8) \(\Rightarrow\) (7) \(\Rightarrow\) (6) are relatively straightforward so we will address those here before moving on to the more involved remaining implication.
Proof of (8) \(\Rightarrow\) (7) \(\Rightarrow\) (6).: Suppose (8) holds. Simultaneously replace each \(y_{j}\) in the given inequality with \(\lambda_{j}y_{j}\) and each \(\lambda_{j}\) with \(\lambda_{j}^{-1}\). Then (7) is obtained by dividing both sides by
\[\min_{s\in\mathcal{P}}\lambda_{1}^{-s_{1}}\cdots\lambda_{n}^{-s_{n}}\]
and then using the fact that the reciprocal of the minimum is the maximum of the reciprocals.
Now suppose (7) and consider nonnegative \(L^{1}\) functions \(f_{j}\). If any of the \(f_{j}\) has zero integral (hence is zero a.e.), then (6) holds trivially, so assume \(\int f_{j}>0\) for all \(j\). Letting \(g_{j}(x)=\frac{f_{j}}{\int f_{j}}\), we rewrite the left hand side of the desired integral inequality to obtain
\[\int_{\mathbb{R}^{d}}B(f_{1}\circ L_{1}(x),...,f_{n}\circ L_{n}(x))dx=\int_{ \mathbb{R}^{d}}B\left(g_{1}\circ L_{1}(x)\cdot\int f_{1},...,g_{n}\circ L_{n}( x)\cdot\int f_{n}\right)dx.\] (10)
By applying (7), we may bound (10) by a constant times
\[\int_{\mathbb{R}^{d}}\max_{s\in\mathcal{P}}\left(g_{1}\circ L_{1}(x)\right)^{s _{1}}\cdots\left(g_{n}\circ L_{n}(x)\right)^{s_{n}}dx\cdot B\left(\int\!f_{1}, ...,\int\!f_{n}\right).\]
Let us recall the fact that \(\mathcal{P}\) is a compact, convex polytope. If \(s,s^{\prime}\in\mathcal{P}\), then taking any point on the segment between \(s\) and \(s^{\prime}\) corresponds to taking a weighted geometric mean of \(\lambda_{1}^{s_{1}}\cdots\lambda_{n}^{s_{n}}\) and \(\lambda_{1}^{s^{\prime}_{1}}\cdots\lambda_{n}^{s^{\prime}_{n}}\). Thus, for any \(x\in\mathbb{R}^{d}\), the above maximum may be obtained at extreme points of \(\mathcal{P}\). We denote the set of extreme points of \(\mathcal{P}\) as \(\mathcal{P^{\prime}}\). Since all terms are nonnegative, we may bound the maximum by a summation over extreme points to obtain
\[\int_{\mathbb{R}^{d}}\max_{s\in\mathcal{P}}\left(g_{1}\circ L_{1}(x)\right)^{s _{1}}\cdots\left(g_{n}\circ L_{n}(x)\right)^{s_{n}}dx\leq\int_{\mathbb{R}^{d}} \sum_{s\in\mathcal{P^{\prime}}}\left(g_{1}\circ L_{1}(x)\right)^{s_{1}}\cdots \left(g_{n}\circ L_{n}(x)\right)^{s_{n}}dx.\] (11)
Next, we exchange the integral with the sum and bound each of the integral terms. Since each function \(g_{n}\) has integral equal to 1, we have
\[\int_{\mathbb{R}^{d}}\left(g_{1}\circ L_{1}(x)\right)^{s_{1}}\cdots\left(g_{n} \circ L_{n}(x)\right)^{s_{n}}dx\leq C_{s},\] (12)
where \(C_{s}\) is the optimal constant such that
\[\int_{\mathbb{R}^{d}}\prod_{j=1}^{n}f_{j}(L_{j}(x))dx\leq C_{s}\prod_{j=1}^{n} ||f_{j}||_{L^{p_{j}}(\mathbb{R}^{d_{j}})}.\]
Since \(\mathcal{P}\) has only finitely many extreme points, combining (10), (11), and (12) gives
\[\int_{\mathbb{R}^{d}}B(f_{1}\circ L_{1}(x),...,f_{n}\circ L_{n}(x))dx\leq\left(\sum_{s\in\mathcal{P^{\prime}}}C_{s}\right)B\left(\int\!f_{1},...,\int\!f_{n}\right)=CB\left(\int\!f_{1},...,\int\!f_{n}\right).\]
∎
The main goal of the remainder of the section will be to prove the following lemma.
**Lemma 3**.: _Let \(\lambda=(\lambda_{1},...,\lambda_{n})\) be such that the \(\log\lambda_{j}\) are nonnegative integers. Then, there exists a parallelepiped \(S\) such that_
\[|S|\approx\min_{(s_{1},...,s_{n})\in\mathcal{P}}\lambda_{1}^{s_{1}}\cdots\lambda_{n}^{s_{n}}\]
_and_
\[|L_{j}(S)|\leq\lambda_{j},\]
_where the proportionality constants are independent of_ \(\lambda\)_._
To see the usefulness of Lemma 3, let us demonstrate how it may be used to complete the proof of Theorem 1. The reduction to integer values of \(\log\lambda_{j}\) will be established in Lemma 6.
Proof of (6) \(\Rightarrow\) (8).: Given \(\lambda_{j}\) such that \(\log\lambda_{j}\) are nonnegative integers, let \(S\) be as in Lemma 3. Define \(f_{j}=y_{j}1_{L_{j}(S)}\). By plugging these \(f_{j}\) into (6), we obtain a left hand side equal to
\[|\cap_{j}L_{j}^{-1}(L_{j}(S))|B(y_{1},...,y_{n})\geq|S|B(y_{1},...,y_{n})=\min_{(s_{1},...,s_{n})\in\mathcal{P}}\lambda_{1}^{s_{1}}\cdots\lambda_{n}^{s_{n}}B(y_{1},...,y_{n})\]
and a right hand side equal to
\[B(|L_{1}(S)|y_{1},...,|L_{n}(S)|y_{n})\leq B(\lambda_{1}y_{1},...,\lambda_{n}y _{n}).\]
Combining the two inequalities gives (8).
∎
Now we begin the proof of Lemma 3. By taking logs of the minimum seen in (8), we reduce computing this term to a linear programming problem. Fixing \(\lambda=(\lambda_{1},...,\lambda_{n})\in\mathbb{R}^{n}_{+}\), we now define the _primal LPP_ as
\[\text{minimize }\log\lambda\cdot s=\sum_{j}s_{j}\log\lambda_{j}\text{ over }s \in\mathbb{R}^{n}_{+}\]
subject to
\[\sum_{j=1}^{n}s_{j}d_{j}=d\qquad\text{and}\qquad\dim(V)\leq\sum_{j=1}^{n}s_{j}\dim(L_{j}V)\quad\text{for all }V\in{\bf E}.\]
In the above, \(\bf{E}\) is a finite list of subspaces which are sufficient to determine the HBL polytope. By this, we mean that (4) for only subspaces in \(\bf{E}\) together with (3) is sufficient to describe \(\mathcal{P}\). Because of this fact, we may add a finite number of subspaces to \(\bf{E}\) without changing the optimum value of \(\log\lambda\cdot s\).
One may note that while we have included the restriction \(s_{j}\geq 0\), we have neglected to explicitly include the restriction \(s_{j}\leq 1\). However, this may be obtained from the existing inequalities and proper choice of subspace as follows. Subtract the restriction \(\dim V\leq\sum_{j}s_{j}\dim L_{j}(V)\) from \(d=\sum_{j}s_{j}d_{j}\) to obtain
\[(d-\dim V)\geq\sum_{j}s_{j}(d_{j}-\dim L_{j}(V))\]
for all subspaces \(V\subset\mathbb{R}^{d}\). Fix \(1\leq j_{0}\leq n\) and pick \(V=Ker(L_{j_{0}})\). By the Rank-Nullity theorem, the coefficient on \(s_{j_{0}}\) in the above is equal to \(d-\dim V\). Since all other \(s_{j}\) are already taken to be nonnegative, \(s_{j_{0}}\leq 1\). By taking \(\bf{E}\) to include all subspaces of the form \(Ker(L_{j})\), we may recover the bounds \(s_{j}\leq 1\).
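Once \({\bf E}\) is fixed, the primal LPP is a standard finite linear program and can be solved numerically. The sketch below does this for the Young-type maps on \(\mathbb{R}^{2}\) used earlier, taking \({\bf E}\) to consist of the three kernels \(Ker(L_{j})\); this choice of \({\bf E}\) and the values of \(\lambda\) are made here purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Young-type data on R^2: d = 2, d_j = 1, with E = {Ker(L_1), Ker(L_2), Ker(L_3)}.
# For V = Ker(L_j) one has dim(V) = 1, dim(L_i V) = 1 for i != j and 0 for i = j.
d = 2
d_j = np.array([1.0, 1.0, 1.0])
dim_V = np.array([1.0, 1.0, 1.0])
dim_LV = np.array([[0.0, 1.0, 1.0],     # row V = Ker(L_1): dim(L_1 V), dim(L_2 V), dim(L_3 V)
                   [1.0, 0.0, 1.0],
                   [1.0, 1.0, 0.0]])

lam = np.array([4.0, 2.0, 1.0])          # illustrative lambda_j >= 1
c = np.log(lam)                          # objective: minimize log(lambda) . s

# Constraints: dim(V) <= sum_j s_j dim(L_j V)  <=>  -dim_LV @ s <= -dim_V,
# together with the equality sum_j s_j d_j = d and 0 <= s_j <= 1.
res = linprog(c, A_ub=-dim_LV, b_ub=-dim_V, A_eq=d_j.reshape(1, -1), b_eq=[d],
              bounds=[(0, 1)] * 3)
print(res.x)                             # optimal exponents s
print(np.exp(res.fun))                   # min over P of lambda_1^s_1 ... lambda_n^s_n
```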
Next, we prove three technical lemmas to aid us in the analysis of this linear programming problem. The first is preliminary, the second allows us to deal with only nonnegative solutions and coefficients, and the third will aid us in showing that a certain algorithm terminates.
**Lemma 4**.: _If \(B:\mathbb{R}_{+}^{n}\rightarrow\mathbb{R}_{+}\) is an HBL function, then_
\[B(R^{d_{1}}y_{1},...,R^{d_{n}}y_{n})\approx R^{d}B(y_{1},...,y_{n})\]
_for all_ \(0<R,y_{j}<\infty\)_._
Proof.: Let \(0<R,y_{j}<\infty\) be arbitrary. Plug in the functions \(f_{j}=y_{j}1_{B_{R}(\mathbb{R}^{d_{j}})}\) to (6). The right hand side becomes \(B(R^{d_{1}}y_{1},...,R^{d_{n}}y_{n})\) while the left hand side scales like \(R^{d}\), giving us the inequality
\[R^{d}B(y_{1},...,y_{n})\lesssim B(R^{d_{1}}y_{1},...,R^{d_{n}}y_{n}).\]
Since the above holds for all \(0<R,y_{j}<\infty\), we may simultaneously replace \(R\) with \(1/R\) and \(y_{j}\) with \(R^{d_{j}}y_{j}\) to obtain the reverse inequality. ∎
**Lemma 5**.: _It suffices to establish (8) for \(\lambda_{j}\geq 1\). That is, if \(B:\mathbb{R}_{+}^{n}\rightarrow\mathbb{R}_{+}\) is an HBL function and (8) holds for \(\lambda_{j}\geq 1\) and \(0<y_{j}<\infty\), then it also holds for \(0<\lambda_{j},y_{j}<\infty\)._
Proof.: Let \(0<\lambda_{j},y_{j}<\infty\) be given. Choose \(R>0\) sufficiently large such that \(R^{d_{j}}\lambda_{j}>1\) for all \(j\). Then, by Lemma 4 and the fact that \(d=\sum_{j}s_{j}d_{j}\) for any \(s\in\mathcal{P}\),
\[R^{d}B(\lambda_{1}y_{1},...,\lambda_{n}y_{n}) \approx B(R^{d_{1}}\lambda_{1}y_{1},...,R^{d_{n}}\lambda_{n}y_{n})\]
\[\gtrsim\min_{s\in\mathcal{P}}(R^{d_{1}}\lambda_{1})^{s_{1}}\cdots(R^{d_{n}}\lambda_{n})^{s_{n}}B(y_{1},...,y_{n})\]
\[=R^{d}\min_{s\in\mathcal{P}}\lambda_{1}^{s_{1}}\cdots\lambda_{n}^ {s_{n}}B(y_{1},...,y_{n}).\]
Dividing both sides by \(R^{d}\) gives the desired result. ∎
**Lemma 6**.: _It suffices to establish (8) for \(\log\lambda_{j}\in\mathbb{N}\) for all \(j\)._
Proof.: Choose nonnegative integers \(m_{j}\) such that \(e^{m_{j}}\leq\lambda_{j}<e^{m_{j}+d_{j}}\). (We may take the \(m_{j}\geq 0\) by the previous lemma.) Since \(B\) is nondecreasing in each coordinate, we have
\[B(e^{m_{1}}y_{1},...,e^{m_{n}}y_{n})\leq B(\lambda_{1}y_{1},...,\lambda_{n}y_{ n})\leq B(e^{m_{1}+d_{1}}y_{1},...,e^{m_{n}+d_{n}}y_{n}).\]
By Lemma 4, these are uniformly comparable up to a constant multiple of \(e^{d}\). Similarly, for any \(s\in\mathcal{P}\) (in particular the minimum),
\[\Pi_{j}(e^{m_{j}})^{s_{j}}\leq\Pi_{j}\lambda_{j}^{s_{j}}<\Pi_{j}(e^{m_{j}+d_{j }})^{s_{j}}.\]
Again, these are all equivalent up to a constant multiple of \(e^{d}\) by the relation \(d=\sum_{j}s_{j}d_{j}\) for all \(s\in\mathcal{P}\). By hypothesis, we have
\[B(e^{m_{1}}y_{1},...,e^{m_{n}}y_{n})\gtrsim\min_{s\in\mathcal{P}}\Pi_{j}(e^{m_ {j}})^{s_{j}}B(y_{1},...,y_{n}).\]
By replacing the above terms with the corresponding ones involving \(\lambda_{j}\) and adjusting the constant of proportionality, (8) for \(\log\lambda_{j}\in\mathbb{N}\) extends to all \(\lambda_{j}>1\), and therefore all \(\lambda_{j}\). ∎
Let \(\dim({\bf E})=(\dim V)_{V\in{\bf E}}\). We define the _dual LPP_ as
\[\text{maximize }y\cdot\dim(\bf{E})\]
subject to
\[\sum_{V\in{\bf E}}y_{V}\dim(L_{j}V)\leq\log\lambda_{j}\quad\text{for each }1\leq j\leq n,\qquad y_{V}\geq 0.\]
The dual LPP relates to the primal LPP via the following basic theorem from linear programming. For a source, see an introductory textbook on linear programming, such as [10].
**Theorem 7** (Duality Theorem (special case)).: _Let \(A\) be an \(m\times n\) matrix, \(c,x\in\mathbb{R}^{n}\), and \(b,y\in\mathbb{R}^{m}\) for \(m,n\geq 1\). Suppose that \(A,b,c\) have all nonnegative entries and \(\{x:Ax\leq b,x\geq 0\}\) is nonempty and bounded. Then, the maximum value of \(c^{T}x\) subject to the constraints \(Ax\leq b,x\geq 0\) is equal to the minimum value of \(y^{T}b\) subject to the constraints \(y^{T}A\geq c^{T},y\geq 0\). Furthermore, there exist optimal vectors \(x,y\) for both problems._
By the above theorem, the optimal value of the dual LPP is equal to the optimal value of the primal LPP. In the remainder of this section, we will work with dual vectors \(y\) to construct a parallelepiped \(S\) whose volume is \(e^{y\cdot\dim(\bf{E})}\). By taking the optimal value of \(y\cdot\dim(\bf{E})\), we will show the volume of \(S\) is \(\min_{s\in\mathcal{P}}\lambda_{1}^{s_{1}}\cdots\lambda_{n}^{s_{n}}\). We may then translate \(S\) into functions \(f_{j}\) which we plug into (6) to obtain (8).
Since the remainder of this section will only involve the dual LPP with minimal reference to the primal LPP, we now make the following convention. Each dual vector \(y\) is of the form \((y_{V})_{V\in{\bf V}}\), where \({\bf V}\) is the set of all subspaces of \(\mathbb{R}^{d}\). If \({\bf W}\) is a collection of subspaces of \(\mathbb{R}^{d}\), then we say a dual vector \(y\) is _supported on_ **W** if \(y_{V}=0\) for all \(V\notin{\bf W}\). Each vector \(y\) that we consider will be supported on a finite list of subspaces; hence the expression \(y\cdot\dim({\bf V})\) will always be well-defined.
To begin, we will show that \(y\) may be taken to be supported on a _flag_, which we define to be a sequence of properly nested subspaces \(W_{1}\subsetneq W_{2}\subsetneq...\subsetneq W_{t}=\mathbb{R}^{d}\).
**Proposition 8**.: _Let \(y\) be an optimal dual vector of the dual LPP which is supported on \({\bf E}\). Then, there exists a dual vector \(y^{\prime}\) supported on a flag such that \(y\cdot\dim{\bf E}=y^{\prime}\cdot\dim\bf{V}\) and \(y^{\prime}\cdot\dim(L_{j}({\bf V}))\leq y\cdot\dim(L_{j}({\bf E}))\leq\log\lambda_{j}\). Furthermore, there exists a finite list of subspaces \({\bf E^{\prime}}\) independent of \(y\) such that \(y^{\prime}\) may be chosen to be supported on \({\bf E^{\prime}}\) for any optimal dual vector \(y\)._
Before proving the proposition, we remark that the finiteness of \({\bf E^{\prime}}\) is advantageous for the following reason. When we construct the parallelepiped \(S\), we would like the volumes of \(S\) and \(L_{j}(S)\) to be proportional to the \(\lambda_{j}\) in appropriate ways. However, the proportionality constants will depend on the arrangement of the subspaces. A priori, if one changes \(\lambda_{j}\), then one also changes the optimal dual vector, which changes which flag \(y^{\prime}\) is supported on. But, limiting the subspaces to a finite list ensures that a single constant will work as the \(\lambda_{j}\) vary. This is nontrivial, since the algorithm developed in [9] involves summing and intersecting subspaces. It is known [4] that a finite list of subspaces will not necessarily generate a finite list under those operations. We work around this difficulty by performing these operations in a particular order and applying the following lemma.
**Lemma 9**.: _Suppose \(V\subset\mathbb{R}^{d}\) is a subspace and \(W_{1}\subset...\subset W_{t}\) is a flag. Then \(\{V,W_{1},...,W_{t}\}\) generates only a finite list of subspaces under the operations of repeated summation and intersection._
Proof (sketch).: It suffices to list all such subspaces and show the list is closed under summation and intersection. We claim the complete list is \(\{V\}\cup\{W_{i},V+W_{i},V\cap W_{i}\}_{i=1}^{t}\cup\{W_{i}+(V\cap W_{j})\}_{i<j}\).
Beginning with \(\{V\}\cup\{W_{i},V+W_{i},V\cap W_{i}\}_{i=1}^{t}\), we note that most summations and intersections are already on this list since many subspaces are contained within one another and when \(S\subset T\), we have \(S+T=T\) and \(S\cap T=S\). The two cases which this does not cover are \(W_{i}+(V\cap W_{j})\) where \(i<j\) and \((V+W_{i})\cap W_{j}\) where \(i<j\). Since \(W_{i}\subset W_{j}\), these two are equal, and they give the last type of subspace on our list.
It remains to show that intersections and summations involving subspaces of the \(W_{i}+(V\cap W_{j})\) are still on our list. Adding two such subspaces, we find that
\[[W_{i_{1}}+(V\cap W_{j_{1}})]+[W_{i_{2}}+(V\cap W_{j_{2}})]=W_{\max(i_{1},i_{2 })}+(V\cap W_{\max(j_{1},j_{2})}),\]
which is of the same form.
Similarly, intersecting two such subspaces, we find that
\[[(W_{i_{1}}+V)\cap W_{j_{1}}]\cap[(W_{i_{2}}+V)\cap W_{j_{2}}]=(W_{\min(i_{1}, i_{2})}+V)\cap W_{\min(j_{1},j_{2})},\]
which is also of the same form. ∎
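The finiteness claim of Lemma 9 can also be checked experimentally. The sketch below (sympy assumed available; the subspace \(V\) and the coordinate flag in \(\mathbb{Q}^{4}\) are arbitrary illustration data) closes \(\{V,W_{1},W_{2},W_{3}\}\) under sums and intersections and verifies that everything produced lies in the list claimed in the proof.

```python
# Experimental check of Lemma 9 over Q^4: close {V, W_1, W_2, W_3} under + and ∩ and compare
# with the claimed list {V} ∪ {W_i, V+W_i, V∩W_i} ∪ {W_i + (V∩W_j) : i < j}.  Illustration only.
import itertools
import sympy as sp

d = 4

def canon(rows):
    """Canonical form of span(rows): tuple of the nonzero rref rows (exact arithmetic over Q)."""
    if not rows:
        return tuple()
    R, piv = sp.Matrix(rows).rref()
    return tuple(tuple(R.row(i)) for i in range(len(piv)))

def ssum(A, B):
    return canon(list(A) + list(B))

def comp(S):
    """Orthogonal complement with respect to the standard inner product."""
    if not S:
        return canon(sp.eye(d).tolist())
    return canon([list(v.T) for v in sp.Matrix(list(S)).nullspace()])

def sint(A, B):
    return comp(ssum(comp(A), comp(B)))      # V ∩ W = (V^⊥ + W^⊥)^⊥

V = canon([[1, 2, 0, 1], [0, 1, 1, 3]])                      # a 2-dimensional subspace
W = [canon([[1, 0, 0, 0]]),                                  # a flag W_1 ⊂ W_2 ⊂ W_3 = Q^4
     canon([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]),
     canon(sp.eye(d).tolist())]

subs = {V, *W}
changed = True
while changed:                                               # close under + and ∩
    changed = False
    for A, B in itertools.combinations(list(subs), 2):
        for S in (ssum(A, B), sint(A, B)):
            if S not in subs:
                subs.add(S)
                changed = True

claimed = {V} | set(W)
claimed |= {ssum(V, Wi) for Wi in W} | {sint(V, Wi) for Wi in W}
claimed |= {ssum(W[i], sint(V, W[j])) for i in range(3) for j in range(3) if i < j}

print(len(subs), "subspaces generated; contained in the claimed list:", subs <= claimed)
```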
To prove the proposition, we will use the following _basic algorithm (BA)_: Given a vector \(y\) which is not supported on a flag, find two subspaces \(V\) and \(W\) such that neither is contained in the other and \(y_{V}\geq y_{W}>0\). Set \(y^{\prime}_{V+W}=y_{V+W}+y_{W},y^{\prime}_{V\cap W}=y_{V\cap W}+y_{W},y^{\prime}_{W}=0,y^{\prime}_{V}=y_{V}-y_{W}\). Repeat this process until no such pair remains, i.e. until the vector is supported on a flag.
It was shown in [9] that the BA terminates provided the initial \(y\) has all nonnegative and rational coordinates. Furthermore, at each step \(y\cdot\dim(\bf{V})\) is preserved and \(y\cdot\dim(L_{j}({\bf V}))\) does not increase.
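For concreteness, here is a minimal sketch of one possible implementation of the BA, restricted purely for illustration to coordinate subspaces of \(\mathbb{R}^{d}\), encoded as sets of coordinate indices so that sum and intersection become union and intersection of index sets; the starting dual vector is arbitrary illustration data, and termination for general nonnegative rational inputs is the result from [9] quoted above.

```python
# A toy run of the basic algorithm (BA) on coordinate subspaces of R^4 (illustration only).
# A coordinate subspace is a frozenset of indices; dim = len, V+W = union, V∩W = intersection.
def ba(y):
    """One possible implementation of the BA on a dual vector y = {subspace: value}."""
    y = {V: w for V, w in y.items() if w > 0}
    while True:
        pair = next(((V, W) for V in y for W in y
                     if not V <= W and not W <= V and y[V] >= y[W] > 0), None)
        if pair is None:                        # support is a chain (flag): done
            return y
        V, W = pair
        t = y[W]
        for S in (V | W, V & W):                # y_{V+W} += y_W,  y_{V∩W} += y_W
            y[S] = y.get(S, 0) + t
        y[V] -= t                               # y_V -= y_W,  y_W = 0
        del y[W]
        y = {S: w for S, w in y.items() if w > 0}

E1, E2, E3 = frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})
y0 = {E1: 1, E2: 1, E3: 1}
y1 = ba(y0)

dot = lambda y: sum(w * len(S) for S, w in y.items())   # y · dim
print(sorted(map(sorted, y1)), dot(y0), dot(y1))        # nested supports; y·dim is preserved
```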
Proof of Proposition 8.: Write \({\bf E}=(E_{1},...,E_{k},\mathbb{R}^{d})\). Perform the BA on \(y\) but only with respect to the coordinates \(y_{E_{1}}\) and \(y_{E_{2}}\). This creates a flag \(W_{1,1}\subsetneq...\subsetneq W_{1,t_{1}}\) such that our modified \(y\) is supported on \(\{W_{1,1},...,W_{1,t_{1}},E_{3},...,E_{k},\mathbb{R}^{d}\}\).
Now, given a \(y\) supported on a flag \(W_{i,1}\subsetneq...\subsetneq W_{i,t_{i}}\) and the remaining original subspaces \(\{E_{i+2},...,E_{k}\}\), we perform the BA on \(y\) using only the subspaces \(\{W_{i,1},...,W_{i,t_{i}},E_{i+2}\}\). This converts \(y\) to a new dual vector supported on a flag \(W_{i+1,1}\subsetneq...\subsetneq W_{i+1,t_{i+1}}\) together with \(E_{i+3},...,E_{k}\).
Continue this process until the list of subspaces \(E_{i}\) is exhausted, resulting in a dual vector supported solely on a flag. While \(y_{\mathbb{R}^{d}}\) is excluded from modification, this does not prevent our final list from being a flag since every subspace is contained in \(\mathbb{R}^{d}\).
Since the \(\log\lambda_{j}\) are integers, we may take optimal \(y\) with all rational coordinates. In addition, each coordinate used in the BA is nonnegative as \(y_{\mathbb{R}^{d}}\) is excluded from such operations. Since this algorithm is solely the concatenation of the BA performed on particular collections of subspaces and the BA is known to terminate in such an instance, our algorithm terminates.
It remains to prove the claim that a finite number of subspaces are considered. Certainly in the case of a particular given \(y\) this is true as only finitely many subspaces are introduced in each of a finite number of steps. However, at each inductive step there are only finitely many subspaces which can be generated from the previous subspaces by Lemma 9. The total number of inductive steps is bounded by \(k-1\), so the total number of subspaces may be counted via a finite tree.
∎
Now we will begin the construction of particular functions which, when plugged into (6), will establish (8).
**Definition 10**.: Suppose a dual vector \(y\) is supported on an independent collection of subspaces \(Y_{1},...,Y_{t}\) whose direct sum is \(\mathbb{R}^{d}\). Define the parallelepiped
\[S_{y}=\left\{x\in\mathbb{R}^{d}|x=\sum_{i=1}^{t}\sum_{j=1}^{j_{i}}a_{i}^{j}v_{ i}^{j},0\leq a_{i}^{j}\leq e^{y_{Y_{i}}}\right\},\]
where \(\{v_{i}^{1},...,v_{i}^{j_{i}}\}\) is a (fixed) basis for \(Y_{i}\).
We cite the following two results from [9]. While they were proven in the context of Hölder-Brascamp-Lieb inequalities over the integers, the proofs for the results as stated here may be obtained by simply repeating the proofs from [9], but replacing \(\mathbb{Z}\) with \(\mathbb{R}\) and \(\mathbb{Z}^{d}\) with \(\mathbb{R}^{d}\). Similarly, the dependence on the subspaces \(Y_{i}\) may be deduced by simply following the proofs.
**Proposition 11**.: _Let \(y\) be a dual vector supported on linearly independent subspaces \(Y_{1},...,Y_{t}\) whose direct sum is \(\mathbb{R}^{d}\). Then,_
\[|S_{y}|\approx e^{y\cdot\dim({\bf V})},\]
_where the proportionality constant depends only on the_ \(Y_{i}\)_._
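A quick numerical illustration of Proposition 11 (numpy assumed; the bases and dual values below are arbitrary): scaling the chosen basis vectors of each \(Y_{i}\) by \(e^{y_{Y_{i}}}\) multiplies the volume by exactly \(e^{y\cdot\dim}\), so the ratio \(|S_{y}|/e^{y\cdot\dim({\bf V})}\) depends only on the fixed bases, as the proposition states.

```python
# |S_y| versus e^{y·dim} for Y1 = span{v1, v2}, Y2 = span{v3} in R^3 (illustration only).
import numpy as np

basis = np.array([[1., 0., 0.],     # rows v1, v2 (spanning Y1) and v3 (spanning Y2);
                  [0., 1., 0.],     # after transposition the columns of `basis` are v1, v2, v3
                  [1., 1., 2.]]).T

for y1, y2 in [(0.3, 1.1), (0.9, 0.2), (2.0, 2.0)]:
    scales = np.array([np.exp(y1), np.exp(y1), np.exp(y2)])   # e^{y_{Y_i}} per basis vector
    vol = abs(np.linalg.det(basis * scales))                  # |S_y|
    pred = np.exp(2 * y1 + 1 * y2)                            # e^{y·dim} = e^{2 y1 + y2}
    print(round(vol / pred, 6))                               # constant ratio |det(basis)| = 2
```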
**Lemma 12**.: _Let \(y\) be a dual vector supported on linearly independent subspaces \(Y_{1},...,Y_{t}\) whose direct sum is \(\mathbb{R}^{d}\). Let \(W_{i}=Y_{1}+...+Y_{i}\)._
_Let \(L:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\) be any linear map and set \(c_{i}=\dim(L(W_{i}))-\dim(L(W_{i-1}))\). Then_
\[|L(S_{y})|\lesssim e^{\sum y_{Y_{i}}c_{i}},\]
_where the proportionality constant depends only on_ \(L\) _and the_ \(Y_{i}\) _(or equivalently, the_ \(W_{i}\)_)._
Now fix \(y\) as the dual vector supported on a flag \(W_{1}\subsetneq...\subsetneq W_{t}\) as obtained from Proposition 8. Choose linearly independent subspaces \(Y_{i}\) of \(W_{i}\) such that \(Y_{1}+...+Y_{i}=W_{i}\) and define the dual vector \(y^{\prime}\) supported on \(\{Y_{1},...,Y_{t}\}\) by
\[y^{\prime}_{Y_{i}}=y_{W_{i}}+...+y_{W_{t}}.\] (13)
Proof of Lemma 3.: Fix a list of subspaces \(\bf{E}\) which are sufficient to determine the HBL polytope and include \(Ker(L_{j})\) and all the subspaces generated in Proposition 8.
Let \(y\) be an optimal dual vector from the dual LPP, modified by Proposition 8 to be supported on a flag. Define \(S=S_{y^{\prime}}\), where \(y^{\prime}\) is the dual vector obtained in (13). Then, by Proposition 11,
\[|S|\approx e^{y^{\prime}\cdot\dim(\bf{E})} =e^{\sum_{i}(y_{W_{i}}+...+y_{W_{t}})(\dim Y_{i})}\]
\[=e^{\sum_{i}y_{W_{i}}(\dim Y_{1}+...+\dim Y_{i})}=e^{\sum_{i}y_{W _{i}}\dim W_{i}}.\]
Since \(y\) was created from an optimal dual vector, the value of \(\sum_{i}y_{W_{i}}\dim W_{i}\) above is optimal and hence equal to the optimal value of \(s\cdot\log\lambda\) from the primal LPP, giving us the desired volume estimate.
Similarly, by Lemma 12,
\[|L_{j}(S)|\lesssim e^{\sum_{i}y^{\prime}_{Y_{i}}c_{i}} =e^{\sum_{i}(y_{W_{i}}+...+y_{W_{t}})c_{i}}\]
\[=e^{\sum_{i}y_{W_{i}}(c_{1}+...+c_{i})}=e^{\sum_{i}y_{W_{i}}\dim(L_{j}(W_{i}))}\leq e^{\log\lambda_{j}}=\lambda_{j},\]
where the last step follows from the constraints on dual vectors. We may obtain \(|L_{j}(S)|\leq\lambda_{j}\) in place of \(|L_{j}(S)|\lesssim\lambda_{j}\) by a uniform scaling of \(S\) with scaling parameter dependent only on the previous proportionality constants.
∎
## 3. Rearrangement Inequality
Given a function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\), let \(E_{f}(\lambda)=|\{x\in\mathbb{R}^{d}:f(x)\geq\lambda\}|\) denote its distribution function. If \(E_{f}(\lambda)<\infty\) for all \(\lambda>0\), then let \(f^{*}\) denote its symmetric decreasing rearrangement, that is, the unique lower semicontinuous function such that \(f^{*}\) is radially symmetric and nonincreasing with \(E_{f^{*}}=E_{f}\).
Given a function \(F:\mathbb{R}^{3}\rightarrow\mathbb{R}\), denote its third-order difference by
\[\Delta_{3}(F;a,b,c,d,e,f) =F(b,d,f)-F(a,d,f)-F(b,c,f)-F(b,d,e)\]
\[+F(b,c,e)+F(a,d,e)+F(a,c,f)-F(a,c,e).\]
**Theorem 13**.: _Let \(F:\mathbb{R}^{3}\rightarrow\mathbb{R}\) be continuous and satisfy_
\[F(0,0,0)=F(x,0,0)=F(0,y,0)=F(0,0,z)=0,\] (14)
_along with_
\[F(R):=\Delta_{3}(F;a,b,c,d,e,f)\geq 0\] (15)
_for all rectangles \(R=\{(x,y,z):a\leq x\leq b,c\leq y\leq d,e\leq z\leq f\}\)._
_Then, for any non-negative measurable functions \(f,g,h\) on \(\mathbb{R}^{d}\) with finite distribution functions,_
\[\iint F(f(s),g(t),h(s+t))dsdt\leq\iint F(f^{*}(s),g^{*}(t),h^{*}(s+t))dsdt.\] (16)
Condition (14) is simply there to ensure that the integrals in the following proof can be finite. If \(\mathbb{R}\) were replaced with a finite measure space, then this condition could be dropped.
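Before turning to the proof, here is a small discretized sanity check of (16) in dimension one (numpy assumed; the particular \(F\), the grid, and the off-center step functions are arbitrary illustration choices; this \(F\) satisfies (14) and has nonnegative third-order differences on the nonnegative values that occur, so the inequality should hold up to discretization error).

```python
# Discretized check of the rearrangement inequality (16) in one dimension (illustration only).
import numpy as np

N, L = 401, 4.0                        # odd number of grid points on [-L, L]
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

def sym_dec(v):
    """Discrete symmetric decreasing rearrangement: largest values placed nearest the origin."""
    out = np.empty_like(v)
    out[np.argsort(np.abs(x), kind="stable")] = np.sort(v)[::-1]
    return out

def F(a, b, c):                        # vanishes when two arguments are 0; Δ3 F >= 0 on [0,∞)^3
    return a * b * c + np.minimum(np.minimum(a, b), c)

def I(f, g, h):
    hpad = np.zeros(2 * N - 1)         # h(s+t) looked up on the extended grid, zero outside [-L, L]
    hpad[N // 2: N // 2 + N] = h
    idx = np.arange(N)
    H = hpad[idx[:, None] + idx[None, :]]
    return F(f[:, None], g[None, :], H).sum() * dx * dx

f = 1.0 * (np.abs(x - 1.5) < 0.8)      # off-center bumps, so rearrangement genuinely changes them
g = 2.0 * (np.abs(x + 2.0) < 0.5)
h = 1.5 * (np.abs(x - 0.7) < 1.0)

print(I(f, g, h), "<=", I(sym_dec(f), sym_dec(g), sym_dec(h)))
```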
Proof.: For this proof, we use the notation
\[I(f,g,h):=\iint F(f(s),g(t),h(s+t))dsdt.\]
By [13] (pp.64-68), we may extend \(F(R)\) from a measure on rectangles to a Borel measure on \(\mathbb{R}^{3}\), also denoted by \(F\), provided that \(F\) is additive. Here, \(F\) is additive if \(F(R_{1}\cup R_{2})=F(R_{1})+F(R_{2})\) for any nonoverlapping rectangles \(R_{1}\) and \(R_{2}\). For \(F(R_{1}\cup R_{2})\) to be defined, \(R_{1}\) and \(R_{2}\) must have an overlapping face; without loss of generality, assume this face is parallel to the \(yz\)-plane. Thus, \(R_{1}=\{(x,y,z):a_{0}\leq x\leq a_{1},c\leq y\leq d,e\leq z\leq f\}\) and \(R_{2}=\{(x,y,z):a_{1}\leq x\leq a_{2},c\leq y\leq d,e\leq z\leq f\}\). By definition of \(F(R)\),
\[F(R_{1})+F(R_{2}) =F(a_{1},d,f)-F(a_{0},d,f)-F(a_{1},c,f)-F(a_{1},d,e)\]
\[+F(a_{1},c,e)+F(a_{0},d,e)+F(a_{0},c,f)-F(a_{0},c,e)\]
\[+F(a_{2},d,f)-F(a_{1},d,f)-F(a_{2},c,f)-F(a_{2},d,e)\]
\[+F(a_{2},c,e)+F(a_{1},d,e)+F(a_{1},c,f)-F(a_{1},c,e)\]
\[=F(a_{2},d,f)-F(a_{0},d,f)-F(a_{2},c,f)-F(a_{2},d,e)\]
\[+F(a_{2},c,e)+F(a_{0},d,e)+F(a_{0},c,f)-F(a_{0},c,e)=F(R_{1}\cup R _{2}).\]
Let
\[R_{xyz}=\{(\alpha,\beta,\gamma):0\leq\alpha\leq x,0\leq\beta\leq y,0\leq\gamma \leq z\}\]
be a rectangle with characteristic function
\[\chi_{xyz}(\alpha,\beta,\gamma)=\Phi_{\alpha\beta\gamma}(x,y,z).\]
Then, by (14), we have
\[\int\chi_{xyz}(\alpha,\beta,\gamma)dF(\alpha,\beta,\gamma) =F(R_{xyz})\]
\[=F(x,y,z)-F(x,y,0)-F(x,0,z)-F(0,y,z).\]
Now we substitute \(x=f(s),y=g(t),z=h(s+t)\) and integrate both sides of the above to obtain
\[I(f,g,h) =\iint\left[\int\Phi_{\alpha\beta\gamma}(f(s),g(t),h(s+t))dF(\alpha,\beta,\gamma)\right]dsdt\]
\[+\iint F(f(s),g(t),0)dsdt+\iint F(f(s),0,h(s+t))dsdt+\iint F(0,g(t),h(s+t))dsdt.\]
The \(\iint F(f(s),g(t),0)dsdt\) term is invariant under symmetrization of \(f\) and \(g\) since they appear as functions of independent variables. The two following terms may be dealt with similarly after a change of variables, leaving us to show the desired inequality only for the term on the first line. By Fubini’s theorem,
\[\iint\left[\int\Phi_{\alpha\beta\gamma}(f(s),g(t),h(s+t))dF(\alpha,\beta, \gamma)\right]dsdt=\int J(f,g,h)dF(\alpha,\beta,\gamma).\]
where
\[J(f,g,h):=\iint\Phi_{\alpha\beta\gamma}(f(s),g(t),h(s+t))dsdt.\]
Therefore, using that \(F\) is a nonnegative measure, it suffices to show
\[J(f,g,h)\leq J(f^{*},g^{*},h^{*}).\] (17)
By the steps above, we have in fact shown (17) to be equivalent to (16). However, note that (17) is a statement independent of our choice of \(F\). In the case that \(F(x,y,z)=xyz\), (16) is the classical Riesz rearrangement inequality, which is already known to be true. Hence, by this chain of equivalences, we have proven our theorem for any \(F\).
∎
We conclude this section with the following remark. One may show by example that the third-order condition which is found as a hypothesis in the rearrangement inequality is necessary. To see this, suppose that there exist \(a_{1}\leq a_{2},b_{1}\leq b_{2},c_{1}\leq c_{2}\) such that \(F(R)<0\), where \(R=\{(x,y,z):a_{1}\leq x\leq a_{2},b_{1}\leq y\leq b_{2},c_{1}\leq z\leq c_{2}\}\).
Let \(\chi_{[s,t]}\) denote the indicator function of the interval \([s,t]\) and let \(f=a_{1}\chi_{[-5/2,5/2]}+(a_{2}-a_{1})\chi_{[1/2,3/2]}\), \(g=b_{1}\chi_{[-5/2,5/2]}+(b_{2}-b_{1})\chi_{[1/2,3/2]}\), and \(h=c_{1}\chi_{[-5,5]}+(c_{2}-c_{1})\chi_{[-1,1]}\). Denoting \(LHS=\iint F(f(s),g(t),h(s+t))dsdt\) and \(RHS=\iint F(f^{*}(s),g^{*}(t),h^{*}(s+t))dsdt\), one may compute
\[LHS=F(a_{2},b_{2},c_{1})+2[F(a_{2},b_{1},c_{2})+F(a_{2},b_{1},c_ {1})+F(a_{1},b_{2},c_{2})+F(a_{1},b_{2},c_{1})]+5F(a_{1},b_{1},c_{2})\\ +11F(a_{1},b_{1},c_{1})+F(a_{1},0,c_{2})+F(0,b_{1},c_{2})+5[F(a_{ 2},0,c_{1})+F(0,b_{2},c_{1})]+19[F(a_{1},0,c_{1})+F(0,b_{1},c_{1})]\]
and
\[RHS=F(a_{2},b_{2},c_{2})+F(a_{2},b_{1},c_{2})+3F(a_{2},b_{1},c_{ 1})+F(a_{1},b_{2},c_{2})+3F(a_{1},b_{2},c_{1})+6F(a_{1},b_{1},c_{2})\\ +10F(a_{1},b_{1},c_{1})+F(a_{1},0,c_{2})+F(0,b_{1},c_{2})+5[F(a_{ 2},0,c_{1})+F(0,b_{2},c_{1})]+19[F(a_{1},0,c_{1})+F(0,b_{1},c_{1})]\]
Thus, \(RHS-LHS=F(R)<0\).
## 4. The Scales Argument
Let \(f,g,h:\mathbb{R}^{d}\rightarrow\mathbb{R}\) and write \(f=\sum_{j\in\mathbb{Z}}2^{j}F_{j}\), where \(1_{\mathcal{F}_{j}}\leq|F_{j}|<2\cdot 1_{\mathcal{F}_{j}}\) and the \(\mathcal{F}_{j}\) are disjoint subsets of \(\mathbb{R}^{d}\). We may decompose \(g=\sum_{k\in\mathbb{Z}}2^{k}G_{k}\) and \(h=\sum_{l\in\mathbb{Z}}2^{l}H_{l}\) with associated sets \(\mathcal{G}_{k}\) and \(\mathcal{H}_{l}\), respectively.
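The decomposition into dyadic scales, the level-set measures \(|\mathcal{F}_{j}|\), and the set \(S\) used later in this section can be sketched numerically as follows (numpy assumed; the sample function and the threshold \(\eta\) are arbitrary illustration data).

```python
# Dyadic-scale decomposition of a sampled nonnegative function on R (illustration only).
import numpy as np

dx = 0.001
x = np.arange(-5.0, 5.0, dx)
f = np.exp(-x ** 2) + 3.0 * (np.abs(x - 2.0) < 0.2)   # a sample integrable function

j = np.floor(np.log2(f)).astype(int)                   # f(x) ∈ [2^j, 2^{j+1}) on the level set F_j
meas = {int(jj): float(np.sum(j == jj) * dx) for jj in np.unique(j)}   # |F_j| (approximate)

eta = 0.05
S = sorted(jj for jj, m in meas.items() if 2.0 ** jj * m > eta)

total = sum(2.0 ** jj * m for jj, m in meas.items())
print("S =", S, "  #S <=", f.sum() * dx / eta)         # Chebyshev-type bound |S| <= (∫f)/η
print("sum_j 2^j|F_j| =", round(total, 3), "  ∫f =", round(f.sum() * dx, 3))
```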
For this section we introduce the following notation. If \(B:\mathbb{R}_{+}^{3}\rightarrow\mathbb{R}_{+}\) is measurable, then
\[I_{B}(f,g,h):=\iint B(f(y),g(x-y),h(x))dxdy.\]
We note that \(I_{B}\) is a trilinear form and that (6) may be stated as \(I_{B}(f,g,h)\lesssim B(\int f,\int g,\int h)\).
**Proposition 14**.: _Let \(P_{i}(a,b,c)=a^{1/p_{i}}b^{1/q_{i}}c^{1/r_{i}}\), where \(p_{i},q_{i},r_{i}\in(1,\infty)\) and \(1/p_{i}+1/q_{i}+1/r_{i}=2\). Let \(B=\rho(P_{1},...,P_{n})\) where_
\[\rho(\lambda_{1}y_{1},...,\lambda_{n}y_{n})\leq C\max_{i}\lambda_{i}\rho(y_{1} ,...,y_{n})\] (18)
_and_
\[\rho(\vec{y_{1}})+\rho(\vec{y_{2}})\leq\rho(\vec{y_{1}}+\vec{y_{2}}).\] (19)
_Then there exist positive constants \(\delta_{0},c_{0},C_{0}\) and positive functions \(\theta,\Theta\) such that_
\[\lim_{t\rightarrow\infty}\theta(t)=0\hskip 36.135pt\lim_{\delta\to 0} \Theta(\delta)=0\]
_with the following properties. Let_ \(0<\delta\leq\delta_{0}\)_. Let_ \(f,g,h:\mathbb{R}^{d}\rightarrow[0,\infty)\) _be integrable functions with_ \(\int\!f=\alpha,\int\!g=\beta,\int\!h=\gamma\) _and_
\[I_{B}(f,g,h)\geq(1-\delta)AB(\alpha,\beta,\gamma),\]
_where_ \(A\) _is the optimal constant in the reverse inequality. Then there exist_ \(k,k^{\prime},k^{\prime\prime}\in\mathbb{Z}\) _such that_
\[2^{k}|\mathcal{F}_{k}|\geq c_{0}\]
\[\sum_{|j-k|\geq m}2^{j}|\mathcal{F}_{j}|\leq\theta(m)+\Theta(\delta)\]
_with the analogous properties for_ \(g\) _(with_ \(k^{\prime}\) _in place of_ \(k\)_) and_ \(h\) _(with_ \(k^{\prime\prime}\) _in place of_ \(k\)_). Lastly, we have_
\[|k-k^{\prime}|+|k-k^{\prime\prime}|\leq C_{0}.\]
_Remark 15_.: It is implicit in the statement of this theorem that \(B\) is an HBL function. This may be established by using (18) to prove (7). We also note that (18) is precisely the condition on \(\rho\) which lets (6) hold with \(B=\rho\) in the case that each of the \(L_{j}\) is the identity map.
Proof.: Let \(\eta>0\) be a small parameter and define \(S=\{j\in\mathbb{Z}:2^{j}|\mathcal{F}_{j}|>\eta\}\). Let \(\overline{f}=\sum_{j\in S}2^{j}F_{j}\). Note that \(|S|\leq C\eta^{-1}\) by Chebyshev’s inequality.
Fix \(1\leq i\leq n\) and write \(p=p_{i},q=q_{i},r=r_{i}\). Choose \(\tilde{p}>p,\tilde{q}>q,\tilde{r}>r\) with \(\frac{1}{\tilde{p}}+\frac{1}{\tilde{q}}+\frac{1}{\tilde{r}}=1\). Then, taking advantage of the disjointness of the \(\mathcal{F}_{j}\), we have
\[||f^{1/p}-\overline{f}^{1/p}||_{L^{p,\tilde{p}}}^{\tilde{p}} =||\sum_{j\notin S}2^{j/p}F_{j}^{1/p}||_{L^{p,\tilde{p}}}^{\tilde {p}}\]
\[\asymp\sum_{j\notin S}(2^{j/p}|\mathcal{F}_{j}|^{1/p})^{\tilde{p}}\]
\[\leq\max_{j\notin S}(2^{j/p}|\mathcal{F}_{j}|^{1/p})^{\tilde{p}-p }\sum_{j\notin S}(2^{j/p}|\mathcal{F}_{j}|^{1/p})^{p}\]
\[\leq C\eta^{\frac{\tilde{p}-p}{p}}||f^{1/p}-\overline{f}^{1/p}||_ {L^{p}}^{p}.\]
Now define \(S(\eta)=(\mathbb{Z}\setminus S)\times\mathbb{Z}\times\mathbb{Z}\). Taking advantage of the classical inequality
\[\langle f*g,h\rangle\leq C||f||_{L^{p,\tilde{p}}}||g||_{L^{q}}||h||_{L^{r}},\]
we see that
\[I_{P_{i}}(f-\overline{f},g,h) =\sum_{S(\eta)}2^{j/p_{i}+k/q_{i}+l/r_{i}}\langle F_{j}^{1/p_{i}} *G_{k}^{1/q_{i}},H_{l}^{1/r_{i}}\rangle\]
\[\leq C||f^{1/p}-\overline{f}^{1/p}||_{L^{p,\tilde{p}}}\]
\[\leq C\eta^{\gamma_{i}},\]
where \(\gamma_{i}=\frac{\tilde{p}_{i}-p_{i}}{p_{i}\tilde{p}_{i}}>0\).
By disjointness of the supports of \(\overline{f}\) and \(f-\overline{f}\),
\[I_{B}(f,g,h)=I_{B}(\overline{f},g,h)+I_{B}(f-\overline{f},g,h).\]
By Theorem 1,
\[\iint\rho(f_{1}(x,y),...,f_{n}(x,y))dxdy\leq C\rho\left(\int f_{1},...,\int f_ {n}\right).\]
Thus,
\[I_{B}(f-\overline{f},g,h) \leq\rho\left[I_{P_{1}}(f-\overline{f},g,h),...,I_{P_{n}}(f- \overline{f},g,h)\right]\]
\[\leq C\rho(C_{1}\eta^{\gamma_{1}},...,C_{n}\eta^{\gamma_{n}})\]
\[\leq C\eta^{\min\gamma_{i}}\]
and
\[I_{B}(f-\overline{f},g,h)\leq C\eta^{\gamma}\] (20)
for some fixed \(\gamma>0\).
As \(\eta\to 0\), the left hand side of (20) approaches 0, so \(I_{B}(\overline{f},g,h)\) accounts for almost all of \(I_{B}(f,g,h)\). Since \((f,g,h)\) is a near-maximizer of this integral, for \(\eta\) a sufficiently small fixed constant we must have \(\overline{f}\neq 0\) and \(S\neq\emptyset\). This establishes our first conclusion.
For our next conclusions, we will find an upper bound on the diameter of \(S\),
\[M=\max_{j,j^{\prime}\in S}|j-j^{\prime}|.\]
Let \(N\) be a large positive integer. Then there exist integers \(I^{\flat}<I^{\sharp}\) such that \(S\cap(-\infty,I^{\flat}]\neq\emptyset,S\cap[I^{\sharp},\infty)\neq\emptyset,S \cap(I^{\flat},I^{\sharp})=\emptyset,I^{\sharp}-I^{\flat}\geq M/(2N|S|)\), and, denoting \(f_{0}=\sum_{I^{\flat}<j<I^{\sharp}}2^{j}F_{j}\),
\[\int|f_{0}|\leq N^{-1}\int|f-\overline{f}|\leq CN^{-1}\eta^{c}.\]
Additionally, we may take \(I^{\sharp}-I^{\flat}\) to be divisible by 2. Now define
\[f^{\sharp}=\sum_{j\geq I^{\sharp}}2^{j}F_{j},\hskip 36.135ptf^{\flat}=\sum_{j \leq I^{\flat}}2^{j}F_{j}\]
so that \(f=f_{0}+f^{\sharp}+f^{\flat}\). Next, let \(I=(I^{\sharp}+I^{\flat})/2\) and define
\[g^{\sharp}=\sum_{k\geq I}2^{k}G_{k},\hskip 36.135pth^{\sharp}=\sum_{l\geq I}2^ {l}H_{l},\]
and \(g^{\flat}=g-g^{\sharp},h^{\flat}=h-h^{\sharp}\). We will shortly be analyzing the expression
\[\langle(f-f^{0})^{1/p}*g^{1/q},h^{1/r}\rangle=\langle(f^{\sharp}+f^{\flat})^{1 /p}*(g^{\sharp}+g^{\flat})^{1/q},(h^{\sharp}+h^{\flat})^{1/r}\rangle\] (21)
so let us first prove the following lemma.
**Lemma 16**.: _There exist constants \(c>0\) and \(C<\infty\) such that each of the mixed terms in the expansion of (21) is \(\leq C2^{-c\eta M/N}\)._
Note that while (21) involves nonlinear expressions, we may take a natural multilinear expansion of it since \(f^{\sharp}\) and \(f^{\flat}\) have disjoint supports, hence \((f^{\sharp}+f^{\flat})^{1/p}=(f^{\sharp})^{1/p}+(f^{\flat})^{1/p}\) and so on. To prove the above lemma, we will make use of the following result from [7].
**Lemma 17**.: _Let \(p,q,r\in(1,\infty)\) with \(1/p+1/q+1/r=2\). There exists \(\tau>0\) and \(C<\infty\) such that_
\[\langle 1_{\mathcal{F}}*1_{\mathcal{G}},1_{\mathcal{H}}\rangle\leq C\left[\min _{x,y\in\{|\mathcal{F}|,|\mathcal{G}|,|\mathcal{H}|\}}\frac{x}{y}\right]^{\tau }|\mathcal{F}|^{1/p}|\mathcal{G}|^{1/q}|\mathcal{H}|^{1/r}\] (22)
_for all measurable subsets \(\mathcal{F},\mathcal{G},\mathcal{H}\) of \(\mathbb{R}^{d}\) with finite measure._
Proof of Lemma 16.: Consider the mixed term \(\langle(f^{\sharp})^{1/p}*(g^{\flat})^{1/q},(h^{\sharp})^{1/r}\rangle\) and let \(\mathcal{S}\) be the set of multi-indices \((j,k,l)\) such that \(j\geq I^{\sharp}\) and \(k<I\). Let \(\epsilon>0\) and \(\mathcal{S}^{\dagger}\subset\mathcal{S}\) be the set of \((j,k,l)\) such that \(2^{j/p}|\mathcal{F}_{j}|^{1/p}\geq\epsilon,2^{k/q}|\mathcal{G}_{k}|^{1/q}\geq\epsilon\), and \(2^{l/r}|\mathcal{H}_{l}|^{1/r}\geq\epsilon\). Note that \(|\mathcal{S}^{\dagger}|\leq C\epsilon^{-3}\), a bound which may be obtained by the same reasoning as our bound on \(|S|\). By the same reasoning as in the proof of (20), we have
\[\sum_{\mathcal{S}\setminus\mathcal{S}^{\dagger}}2^{j/p+k/q+l/r}\langle 1_{\mathcal{F}_{j}}*1_{\mathcal{G}_{k}},1_{\mathcal{H}_{l}}\rangle\leq C\epsilon^{\gamma}.\] (23)
If \((j,k,l)\in\mathcal{S}^{\dagger}\), then \(2^{j/p}|\mathcal{F}_{j}|^{1/p}\leq C\) and \(2^{k/q}|\mathcal{G}_{k}|^{1/q}\leq C\). The fact that \((j,k,l)\in\mathcal{S}\) implies
\[j\geq I^{\sharp}\geq I+\frac{1}{4}M/N|S|\geq I+c\eta M/N,\]
so
\[|\mathcal{F}_{j}|\leq C2^{-j}\leq C2^{-I}2^{-(j-I)}\leq C2^{-I}2^{-c\eta M/N}.\]
Also, since \(k\leq I\), we have
\[|\mathcal{G}_{k}|\geq c2^{-I}\epsilon^{q}.\]
Therefore,
\[\frac{|\mathcal{F}_{j}|}{|\mathcal{G}_{k}|}\leq C\epsilon^{-q}2^{-c\eta M/N}\]
and (22) implies
\[\sum_{\mathcal{S}^{\dagger}}2^{j/p+k/q+l/r}\langle 1_{\mathcal{F}_{j}}*1_{ \mathcal{G}_{k}},1_{\mathcal{H}_{l}}\rangle\leq C\epsilon^{-C}2^{-c\eta M/N}.\] (24)
Combining (23) with (24) and choosing \(\epsilon\) small enough gives
\[\sum_{\mathcal{S}}2^{j/p+k/q+l/r}\langle 1_{\mathcal{F}_{j}}*1_{\mathcal{G}_{k }},1_{\mathcal{H}_{l}}\rangle\leq C2^{-c\eta M/N}.\]
This implies the lemma for both \(f^{\sharp},g^{\flat},h^{\sharp}\) and \(f^{\sharp},g^{\flat},h^{\flat}\). All other mixed terms may be dealt with similarly.
∎
We now observe a simple corollary to the above lemma:
\[I_{B}(f^{\sharp},g^{\flat},h^{\sharp}) =\iint\rho(P_{1}(f^{\sharp}(y),g^{\flat}(x-y),h^{\sharp}(x)),..., P_{n}(f^{\sharp}(y),g^{\flat}(x-y),h^{\sharp}(x)))dxdy\]
\[\leq C\rho(I_{P_{1}}(f^{\sharp},g^{\flat},h^{\sharp}),...,I_{P_{n }}(f^{\sharp},g^{\flat},h^{\sharp}))\]
\[\leq C\rho(C2^{-c\eta M/N},...,C2^{-c\eta M/N})\]
\[\leq C2^{-c\eta M/N}\]
This will allow us to deal with the mixed terms that show up in our particular case.
We are almost ready to complete the proof of Proposition 14, but we will need the following lemma, which deals with the power cases inside \(\rho\). It is proven in [7] in the form where \(f\in L^{p},g\in L^{q},h\in L^{r}\).
**Lemma 18**.: _Let \(P(y_{1},y_{2},y_{3})=y_{1}^{1/p}y_{2}^{1/q}y_{3}^{1/r}\), where \(1<p,q,r<\infty\). Let \(f^{\sharp},f^{\flat},g^{\sharp},g^{\flat},h^{\sharp},h^{\flat}\), and \(\eta\) be as before. Then, there exist constants \(c,\gamma>0\), depending only on \(p,q,r\) such that_
\[P\left(\int f^{\sharp},\int\!g^{\sharp},\int\!h^{\sharp}\right)+P \left(\int\!f^{\flat},\int\!g^{\flat},\int\!h^{\flat}\right)\leq(1-c \eta^{\gamma})P\left(\int f,\int\!g,\int\!h\right).\] (25)
Now, let \(A\) be the optimal constant such that \(\iint B(f(y),g(x-y),h(x))dxdy\leq AB(\int f,\int g,\int h)\). We apply Lemma 16 and the disjointness of supports for \(f^{\sharp},f^{\flat},f_{0}\) to observe that
\[I_{B}(f,g,h)\leq AB\left(\int f^{\sharp},\int\!g^{\sharp},\int\!h^{ \sharp}\right)+AB\left(\int f^{\flat},\int\!g^{\flat},\int\!h^{\flat }\right)+AB\left(\int f_{0},\int\!g,\int\!h\right)+C2^{-c\eta M/N}.\] (26)
We deal with the \(f_{0}\) term as follows:
\[B\left(\int f_{0},\int\!g,\int\!h\right) =\rho\left[P_{1}\left(\int f_{0},\int\!g,\int\!h\right),...,P_{n}\left(\int f_{0},\int\!g,\int\!h\right)\right]\]
\[\leq\rho((CN^{-1}\eta^{c})^{1/p_{1}}\beta^{1/q_{1}}\gamma^{1/r_{1}},...,(CN^{-1}\eta^{c})^{1/p_{n}}\beta^{1/q_{n}}\gamma^{1/r_{n}})\]
\[\leq CN^{-1}\eta^{c}.\]
Now we analyze the first two terms of (26). We begin by using the definition of \(B\) in terms of \(\rho\), along with (19), to combine the two terms into a single expression involving \(\rho\) and the sums appearing in Lemma 18.
Next, we apply Lemma 18, then use (18) before returning \(B\) to the expression:
\[B\left(\int f^{\sharp},\int\!g^{\sharp},\int\!h^{ \sharp}\right) +B\left(\int f^{\flat},\int\!g^{\flat},\int\!h^{\flat}\right)\]
\[\leq\rho\left[(1-c_{1}\eta^{\gamma_{1}})P_{1}\left(\int f ,\int\!g,\int\!h\right),...,(1-c_{n}\eta^{\gamma_{n}})P_{n}\left( \int f,\int\!g,\int\!h\right)\right]\]
\[\leq(1-c\eta^{\gamma})\rho\left[P_{1}\left(\int f,\int \!g,\int\!h\right),...,P_{n}\left(\int f,\int\!g,\int\!h\right)\right]\]
\[=(1-c\eta^{\gamma})B\left(\int f,\int\!g,\int\!h\right),\]
where \(\gamma=\min_{i}\gamma_{i}\) as before.
In summary, we now have:
\[A(1-\delta)B\left(\int f,\int\!g,\int\!h\right)\leq I_{B}(f,g,h)\leq A (1-c\eta^{\gamma})B\left(\int f,\int\!g,\int\!h\right)+CN^{-1}\eta^{ c}+C2^{-c\eta M/N},\]
the first inequality due to the fact that \((f,g,h)\) is a near-extremizing triplet. Thus,
\[2^{-c\eta M/N}\geq c\eta^{\gamma}-cN^{-1}\eta^{c}-C\delta\geq c\eta^{\gamma}- cN^{-1}-C\delta.\]
We now choose \(N\) to be the integer closest to a sufficiently small multiple of \(\eta^{-\gamma}\) so that
\[2^{-c\eta^{1+\gamma}M}\geq c\eta^{\gamma}-C\delta,\]
so if \(C_{0}\) is chosen large enough, then \(\eta\geq C_{0}\delta^{1/\gamma}\) implies \(M\leq C\eta^{-1-\gamma}\log(\eta^{-1})\). This completes the proof of the proposition for \(f\); the functions \(g\) and \(h\) may be handled similarly.
∎
**Corollary 19**.: _Let \(S\) be a compact subset of \((1,\infty)^{3}\) and let \(\{B_{k}\}_{k=1}^{\infty}\) be a sequence of functions satisfying the hypotheses of Proposition 14 such that the triples of exponents found in the \(P_{i}\) are each contained in \(S\) and such that \(\lim_{k\rightarrow\infty}B_{k}\) exists, where the limit is taken pointwise. Then the conclusions of Proposition 14 hold with \(B=\lim_{k\rightarrow\infty}B_{k}\)._
Proof.: All but one of the main steps in the proof of the main proposition involve bounding an integral of \(B\). These steps may be repeated with Fatou's lemma, as
\[\iint B(*)dxdy\leq\liminf_{k\rightarrow\infty}\iint B_{k}(*)dxdy,\]
where \(*\) represents any appropriate collection of functions and the arguments (either \((f,g,h)\), or \((f^{\flat},g^{\flat},h^{\flat})\), etc.). The one remaining step is completed using the containment of the power triples within a compact subset, which ensures that the \(\gamma_{k}\) and \(c_{k}\) stay bounded away from \(0\) and \(\infty\) in the limit:
\[AB\left(\int f^{\sharp},\!\int\!g^{\sharp},\!\int\!h^{\sharp}\right)+AB\left(\int f^{\flat},\!\int\!g^{\flat},\!\int\!h^{\flat}\right) =A\lim_{k\rightarrow\infty}\left[B_{k}\left(\int f^{\sharp},\!\int\!g^{\sharp},\!\int\!h^{\sharp}\right)+B_{k}\left(\int f^{\flat},\!\int\!g^{\flat},\!\int\!h^{\flat}\right)\right]\]
\[\leq A\liminf_{k\rightarrow\infty}(1-c_{k}\eta^{\gamma_{k}})B_{k} \left(\int f,\int\!g,\int\!h\right)\]
\[\leq A(1-c\eta^{\gamma})B\left(\int f,\int\!g,\int\!h \right),\]
where \(c_{k}\) and \(\gamma_{k}\) are the appropriate constants corresponding to \(B_{k}\). ∎
_Example 20_.: The main proposition applies to \(B(y_{1},y_{2},y_{3})=\int_{-1/6}^{1/6}y_{1}^{2/3-t/2}y_{2}^{2/3-t/2}y_{3}^{2/3 +t}dt\).
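A quick numerical check (scipy assumed; the evaluation points are arbitrary) that this \(B\) is homogeneous of degree 2, the property used for the dilation symmetry in the next section: the exponents in the integrand sum to 2 for every \(t\).

```python
# Degree-2 homogeneity of B(y1,y2,y3) = ∫_{-1/6}^{1/6} y1^{2/3-t/2} y2^{2/3-t/2} y3^{2/3+t} dt.
import numpy as np
from scipy.integrate import quad

def B(y1, y2, y3):
    val, _ = quad(lambda t: y1 ** (2/3 - t/2) * y2 ** (2/3 - t/2) * y3 ** (2/3 + t), -1/6, 1/6)
    return val

y = (1.3, 0.7, 2.1)
lam = 3.0
print(B(*(lam * np.array(y))), lam ** 2 * B(*y))   # the two values agree
```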
## 5. Existence of Extremizers
Following [7], we introduce the following definitions.
**Definition 21**.: Let \(\theta:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\) be continuous such that \(\lim_{\rho\rightarrow\infty}\theta(\rho)=0\). Then a function \(f\in L^{1}(\mathbb{R}^{d})\) is _normalized with norm \(\alpha\) with respect to \(\theta\)_ if \(\int f=\alpha\) and
\[\int_{|f(x)|>\rho}|f(x)|dx\leq\theta(\rho)\text{ for all }\rho<\infty\]
\[\int_{|f(x)|<\rho^{-1}}|f(x)|dx\leq\theta(\rho)\text{ for all }\rho<\infty.\]
If \(\eta>0\), then \(f\in L^{1}(\mathbb{R}^{d})\) is \(\eta\)_-normalized with respect to \(\theta\)_ if there exists a decomposition \(f=g+b\) where \(g\) is normalized with respect to \(\theta\) and \(||b||_{1}<\eta\).
Under the above definitions, our main proposition from Section 4 states that any extremizing sequence \(\{(f_{n},g_{n},h_{n})\}_{n=1}^{\infty}\) for \(\iint B(f(y),g(x-y),h(x))dxdy\) may be dilated such that all \(f_{n},g_{n},\) and \(h_{n}\) are \(\eta\)-normalized with their original norms and with respect to the same \(\theta\) with \(\eta\to 0\) as \(n\rightarrow\infty\). While this is trivial in the setting involving \(L^{p}\) norms, here we must reference Lemma 4, which says \(B(\lambda y_{1},\lambda y_{2},\lambda y_{3})=\lambda^{2}B(y_{1},y_{2},y_{3})\). Thus, we obtain the dilation symmetry
\[\iint B(\lambda^{d}f(\lambda y),\lambda^{d}g(\lambda(x-y)),\lambda^{d}h(\lambda x))dxdy=\iint B(f(y),g(x-y),h(x))dxdy.\]
One may now take each triple \((f_{n},g_{n},h_{n})\) to be at the same scale by application of the dilation symmetry.
We now begin our proof of Theorem 2.
Proof.: Let \(\{(f_{n},g_{n},h_{n})\}_{n=1}^{\infty}\) be an extremizing sequence satisfying \(\int f_{n}=\alpha,\int g_{n}=\beta,\int h_{n}=\gamma\) for all \(n\geq 1\). By Theorem 13 (and a suitable change of coordinate), we may replace \(f_{n},g_{n},h_{n}\) with \((f_{n}^{*},g_{n}^{*},h_{n}^{*})\) to obtain another extremizing sequence consisting of functions which are radially symmetric and nonincreasing.
By Proposition 14 and the dilation symmetry, we may replace the extremizing sequence with one which is \(\eta\)-normalized with respect to a continuous function \(\theta:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}\), where \(\eta\to 0\) as \(n\rightarrow\infty\). (The benefit here is that we may use the same \(\theta\) for all triples in our sequence.) In the sequel, \(\{(f_{n},g_{n},h_{n})\}_{n=1}^{\infty}\) will denote the new, normalized, symmetrized sequence. To complete the proof, it suffices to show that each of \(\{f_{n}\},\{g_{n}\},\{h_{n}\}\) are precompact.
Let \(\epsilon>0\). For any \(\rho<\infty\) and \(0<A<\infty\) we have
\[\int_{|t|\leq A}f_{n}(t)dt\leq c_{d}\rho A^{d}+\int_{f_{n}>\rho}f_{n}.\]
Since \(f_{n}\) is \(\eta\)-normalized with \(\eta\to 0\), there exist \(\rho\) and \(N\) large enough such that \(n>N\) implies
\[\int_{f_{n}>\rho}f_{n}<\epsilon/2.\]
By choosing \(A\) small enough, we have
\[\int_{|t|\leq A}f_{n}(t)dt<\epsilon\] (27)
for sufficiently large \(n\). Now let \(0<B<\infty\). Since \(f_{n}\) is symmetric decreasing with \(\int f_{n}=\alpha\), we have \(f_{n}(s)\leq c_{d}\alpha|s|^{-d}\), and in particular \(f_{n}(t)\leq c_{d}\alpha B^{-d}\) for \(|t|\geq B\). Hence
\[\int_{|t|\geq B}f_{n}(t)dt\leq\int_{f_{n}\leq c_{d}\alpha B^{-d}}f_{n}(t)dt\leq\theta(c_{d}^{-1}\alpha^{-1}B^{d})+o(1),\]
where \(o(1)\to 0\) as \(n\rightarrow\infty\). Since \(\theta(\rho)\to 0\) as \(\rho\rightarrow\infty\), we may take \(B\) large enough that
\[\int_{|t|\geq B}f_{n}(t)dt<\epsilon\] (28)
for sufficiently large \(n\). Fixing \(0<A<B<\infty\), we see that the restrictions of \(f_{n}\) to the annulus \(\{t\in\mathbb{R}^{d}:A\leq|t|\leq B\}\) are radially symmetric and nonincreasing with \(0\leq f_{n}(t)\leq c_{d}\alpha A^{-d}\), so they are precompact in \(L^{1}\) on that annulus. By (27) and (28), \(\{f_{n}\}\) is precompact in \(L^{1}(\mathbb{R}^{d})\). By the same reasoning, \(\{g_{n}\}\) and \(\{h_{n}\}\) are precompact in \(L^{1}(\mathbb{R}^{d})\) as well, which completes the proof.
∎
## 6. Non-Gaussian Extremizers
In the classical version of Young’s inequality, it is known that extremizers exist for the entire (possible) range of exponents and furthermore, those extremizers are always Gaussians. In [11], it is shown that for a certain class of functions \(B\), there exist maximizers of
\[\iint B(f(y),g(x-y),h(x))dxdy\]
and that these maximizers are always Gaussians. However, the following proposition shows that our expansion of the class of functions \(B\) breaks this pattern.
**Proposition 22**.: _Fix \(\alpha,\beta,\gamma>0\). There exists a \(B:\mathbb{R}^{3}\rightarrow\mathbb{R}\) satisfying the hypotheses of Theorem 2 such that under the constraints \(\int\!f=\alpha,\int\!g=\beta,\int\!h=\gamma\), there exist maximizers of_
\[\iint B(f(y),g(x-y),h(x))dxdy\]
_which are not all Gaussians._
The proof of this proposition is based on a simple use of Euler-Lagrange equations, though some aspects are modified to fit our particular setting. Extremizers exist due to results from previous sections, and extremizers must also be critical points of the functional \(\iint B(f(y),g(x-y),h(x))dxdy\). However, any critical point must satisfy the Euler-Lagrange equations and it will be clear that no collection of Gaussians does. Before going any further, let us define a _critical point_ as a triplet of \(L^{1}\) functions \((f,g,h)\) such that for any \(j\in C_{c}^{\infty}\) with \(\int j=0\),
\[\iint B(f(y)+tj(y),g(x-y),h(x))dxdy=\iint B(f(y),g(x-y),h(x))dxdy+o(|t|)\]
as \(t\to 0\), and that the analogous equation holds with perturbations of \(g\) and \(h\). The reason we add the restriction that \(\int j=0\) is so that \(\int(f+j)=\int f=\alpha\) and \(f+j\) satisfies the appropriate constraint. The condition that \(j\) is bounded with compact support is to ensure convergence of certain integrals which arise in the following proof.
Proof.: Let \(B(y_{1},y_{2},y_{3})=y_{1}^{1/p_{1}}y_{2}^{1/q_{1}}y_{3}^{1/r_{1}}+y_{1}^{1/p_{2}}y_{2}^{1/q_{2}}y_{3}^{1/r_{2}}\), where
\[\frac{1}{p_{i}}+\frac{1}{q_{i}}+\frac{1}{r_{i}}=2\]
and \(p_{i},q_{i},r_{i}\in(1,\infty)\) for \(i=1,2\), but \((p_{1},q_{1},r_{1})\neq(p_{2},q_{2},r_{2})\). Suppose, to the contrary, that there exist Gaussians \(f,g,h\) which are maximizers of \(\iint B(f(y),g(x-y),h(x))dxdy\). Then, \(f,g,h\) must also form a critical point. Taking the binomial expansion of \((f+tj)^{1/p_{1}}\), we find
\[\iint\!(f(y)\!+\!tj(y))^{1/p_{1}}g^{1/q_{1}}(x-y)h^{1/r_{1}}(x)dxdy \!=\!\iint f^{1/p_{1}}(y)g^{1/q_{1}}(x-y)h^{1/r_{1}}(x)dxdy\]
\[+\frac{t}{p_{1}}\iint f^{1/p_{1}-1}(y)j(y)g^{1/q_{1}}(x-y)h^{1/r_ {1}}(x)dxdy\]
\[+\!O\!\left(t^{2}\!\iint\!f^{1/p_{1}-2}(y)j^{2}(y)g^{1/q_{1}}(x-y )h^{1/r_{1}}(x)dxdy\!\right)\!.\]
The left hand side is well-defined since \(f\) is bounded below by a positive constant on the support of \(j\). Thus, we may take \(t\) small enough that \(f+tj>0\) everywhere. Furthermore, the integrals on the right hand side are convergent since \(j\) is bounded with compact support and \(1/f\) is bounded on the support of \(j\). In fact, \(f^{1/p_{1}-1}j\in L^{p}\) for all \(1\leq p\leq\infty\).
\[\iint f^{1/p_{1}-1}(y)j(y)g^{1/q_{1}}(x-y)h^{1/r_{1}}(x)dxdy+\iint f^{1/p_{2}- 1}(y)j(y)g^{1/q_{2}}(x-y)h^{1/r_{2}}(x)dxdy=0\]
for all bounded \(j\) with compact support with \(\int j=0\). This implies that
\[f^{1/p_{1}-1}(\tilde{g}^{1/q_{1}}*h^{1/r_{1}})+f^{1/p_{2}-1}(\tilde{g}^{1/q_{2 }}*h^{1/r_{2}})=C\]
for some constant \(C\), where \(\tilde{g}(x)=g(-x)\). There are now two cases. The first is that neither of the two summed terms is constant, in which case each is either a Gaussian or the inverse of a Gaussian and their sum cannot be constant. The second case is that each of the two terms is constant. However, since \((p_{1},q_{1},r_{1})\neq(p_{2},q_{2},r_{2})\), this is impossible to obtain with the same Gaussians for each term. Thus, Gaussians cannot be critical points (or maximizers) for \(\iint B(f(y),g(x-y),h(x))dxdy\) with the given constraints. ∎
## References
* [1] William Beckner. Inequalities in Fourier analysis. _Ann. of Math. (2)_, 102(1):159–182, 1975.
* [2] Jonathan Bennett, Anthony Carbery, Michael Christ, and Terence Tao. The Brascamp–Lieb inequalities: finiteness, structure and extremals. _GAFA_, 17:1343–1415, 2008.
* [3] Jonathan Bennett, Anthony Carbery, Michael Christ, and Terence Tao. Finite bounds for Hölder-Brascamp-Lieb multilinear inequalities. _Math. Res. Lett._, 17(4):647–666, 2010.
* [4] Garrett Birkhoff. _Lattice theory_. Third edition. American Mathematical Society Colloquium Publications, Vol. XXV. American Mathematical Society, Providence, R.I., 1967.
* [5] Herm Jan Brascamp and Elliott H. Lieb. Best constants in Young’s inequality, its converse, and its generalization to more than three functions. _Advances in Math._, 20(2):151–173, 1976.
* [6] E. A. Carlen, E. H. Lieb, and M. Loss. A sharp analog of Young’s inequality on \(S^{N}\) and related entropy inequalities. _J. Geom. Anal._, 14(3):487–520, 2004.
* [7] M. Christ. Near-extremizers of Young’s Inequality for R^d. _ArXiv e-prints_, December 2011.
* [8] J. A. Crowe, J. A. Zweibel, and P. C. Rosenbloom. Rearrangements of functions. _J. Funct. Anal._, 66(3):432–438, 1986.
* [9] James Demmel and Alex Rusciano. Parallelepipeds obtaining HBL lower bounds. Technical Report UCB/EECS-2016-162, EECS Department, University of California, Berkeley, Nov 2016.
* [10] Thomas S. Ferguson. Linear programming. https://www.math.ucla.edu/~tom/lp.pdf.
* [11] P. Ivanisvili and A. Volberg. Hessian of Bellman functions and uniqueness of the Brascamp-Lieb inequality. _J. Lond. Math. Soc. (2)_, 92(3):657–674, 2015.
* [12] E.H. Lieb. Gaussian Kernels have only Gaussian Maximizers. _Invent. Math._, 102:179–208, 1990.
* [13] Stanislaw Saks. _Theory of the Integral_. Second Revised Edition. Hafner Publishing Company, New York, 1937.
# Hilbert quasi-polynomial for order domains and application to coding theory

Carla Mascia, Giancarlo Rinaldo, Massimiliano Sala

Università degli Studi di Trento, Trento, Italy

carla.mascia@unitn.it, giancarlo.rinaldo@unitn.it, maxsalacodes@gmail.com

###### Abstract

We present an application of Hilbert quasi-polynomials to order domains, allowing the effective check of the second order-domain condition in a direct way. We also provide an improved algorithm for the computation of the related Hilbert quasi-polynomials. This allows one to identify order domain codes more easily.

Keywords: Groebner basis, Hilbert polynomial, Hilbert quasi-polynomial, order domain, order domain code.

Mathematics Subject Classification: Primary: 13P25, 11T71; Secondary: 12Y05.

(Communicated by Aim Sciences)
## 1 Introduction
Fundamental algebraic invariants of a standard-graded ring can be deduced from its Hilbert-Poincaré series and Hilbert polynomial. The Hilbert quasi-polynomial generalizes the Hilbert polynomial from the standard grading to more general weighted gradings, but until [16] no effective algorithms for its computation were known.
Apart from its natural ideal-theoretical application, we believe that more practical applications can arise from its use, e.g. for the study of order domains and the related codes. We consider order domains as pairs of a quotient ring \(R/I\) and a generalized weighted degree ordering which satisfy some conditions that depend largely on the weights of monomials under the Hilbert staircase of the ideal \(I\) of \(R\).
An important research area in coding theory is that of algebraic geometric codes, which are known to achieve near-optimal performance since the seminal Goppa paper ([7]). A class of algebraic geometric codes that have received a lot of recent attention is formed by the so-called order domain codes ([5],[8]). These codes are defined by evaluating a polynomial vector space at points of a variety which is defined starting from an order domain, and as such they form a subclass of the so-called affine variety codes ([12], [6]). In [19], Høholdt, van Lint and Pellikaan introduced order domains for the first time. They showed how to deal with one-point geometric Goppa codes in the language of order domain theory. In [4], Andersen and Geil give an improved bound on the minimum distance of one-point geometric Goppa codes, an improved construction of one-point geometric Goppa codes and a generalization of the bound and the improved construction to algebraic structures of higher transcendence degrees. To derive these results, they first consider the problems in the most general set-up described by Miura in [20] and by Miura and Matsumoto [21].
In this paper we present our application of Hilbert quasi-polynomials to order domains, which consists in verifying the order domain conditions in a direct way, once the quasi-polynomial has been computed for a related quotient ring. This computation is effective thanks to our algorithm, that improves on that of [16] and can be specialized to the order domain case.
The remainder of this paper is organized as follows:
* in Section 2 we provide some notation, preliminaries and known results on Hilbert functions, Hilbert quasi-polynomials, order domains and order domain codes;
* in Section 3 we present some results on Hilbert quasi-polynomials, which lead to improvements in their effective calculation, and our main results Corollary 1 and Algorithm 10, which allow one to decide effectively if the second order-domain condition is actually satisfied;
* in Section 4 we show some examples of application of our results. In particular, the most important family of affine-variety codes is that of codes coming from maximal curves, since these codes have large length. In this section we specialize our previous results to this family, showing some actual practical cases that can be solved easily;
* in Section 5 we draw our conclusions and point at possible future improvements.
## 2 Preliminaries
In this section we fix some notation and recall some known results. We denote by \(\mathbb{N}_{+}\) the set of positive integers, by \(\mathbb{K}\) a field, by \(R:=\mathbb{K}[x_{1},\dots,x_{n}]\) the polynomial ring in \(n\) variables over \(\mathbb{K}\), and by \(\mathcal{M}=\mathcal{M}(X)\) the set of all monomials in the variables \(x_{1},\dots,x_{n}\). We assign a weight \(w_{i}\in\mathbb{N}_{+}^{r}\) to each variable \(x_{i}\), i.e. if \(X^{\mathbf{\alpha}}=x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}}\in\mathcal{M}\)
\[\mathrm{w}(X^{\mathbf{\alpha}}):=\alpha_{1}w_{1}+\cdots+\alpha_{n}w_{n}\]
Let \(\prec_{\mathbb{N}_{+}^{r}}\) and \(\prec_{\mathcal{M}}\) be monomial orderings on \(\mathbb{N}_{+}^{r}\) and \(\mathcal{M}\) respectively, and let \(W:=[w_{1},\dots,w_{n}]\in(\mathbb{N}_{+}^{r})^{n}\) be the vector of the variable weights. The _generalized weighted degree ordering \(\prec_{W}\) defined from \(W\), \(\prec_{\mathbb{N}_{+}^{r}}\) and \(\prec_{\mathcal{M}}\)_ is the ordering given by \(X^{\mathbf{\alpha}}\prec_{W}X^{\mathbf{\beta}}\), with \(X^{\mathbf{\alpha}},X^{\mathbf{\beta}}\in\mathcal{M}\), if either
\[\mathrm{w}(X^{\mathbf{\alpha}})\prec_{\mathbb{N}_{+}^{r}}\mathrm{w}(X^{\mathbf{\beta}}),\quad\text{or}\quad\mathrm{w}(X^{\mathbf{\alpha}})=\mathrm{w}(X^{\mathbf{\beta}})\;\text{ and }X^{\mathbf{\alpha}}\prec_{\mathcal{M}}X^{\mathbf{\beta}}.\]
Given \(f\in R\) and \(\prec_{W}\), \(\mathrm{lm}(f)\) (resp. \(\mathrm{lt}(f)\)) stands for the leading monomial (resp. leading term) of \(f\) w.r.t \(\prec_{W}\). Let \(I\subseteq R\) be an ideal of \(R\), then we denote by \(\bar{I}:=\mathrm{in}_{\prec_{W}}(I)\) the _initial ideal_ of \(I\), which is the ideal generated by \(\{\mathrm{lt}(f)\mid f\in I\}\), and by \(\mathcal{N}_{\prec_{W}}(I)\) the _Hilbert staircase_ of \(I\), which is the set of all monomials that are not leading monomials of any polynomial in \(I\). In the remainder of this paper, we suppose \(w_{i}\in\mathbb{N}_{+}\), unless specified otherwise. When \(W=[1,\dots,1]\), the grading is called _standard_. The pair \((R,\prec_{W})\) stands for the polynomial ring with the generalized weighted degree ordering \(\prec_{W}\). A polynomial \(f\in(R,\prec_{W})\) is called \(W\)_-homogeneous_ if all nonzero terms of \(f\) have the same weight. An ideal \(I\subseteq(R,\prec_{W})\) is called \(W\)_-homogeneous_ if it is generated by a set of \(W\)-homogeneous polynomials.
### Order domains
In this section we are going to give the notion of order domain, which represents a relatively new tool in the study of algebraic geometric codes. We refer to [17] and [5].
Recall that a \(\mathbb{K}\)-algebra is a commutative ring with a unit that contains \(\mathbb{K}\) as a unitary subring. The standard example of a \(\mathbb{K}\)-algebra is \(R=\mathbb{K}[x_{1},\dots,x_{n}]\).
**Definition 1**.: Let \(\Gamma\) be a semigroup and \(\prec\) a well-ordering. An **order function** on a \(\mathbb{K}\)-algebra \(R\) is a surjective function
\[\rho:R\rightarrow\Gamma\cup\{-\infty\}\]
such that the following conditions hold
* (O.0) \(\rho(f)=-\infty\) if and only if \(f=0\)
* (O.1) \(\rho(af)=\rho(f)\) for all nonzero \(a\in\mathbb{K}\)
* (O.2) \(\rho(f+g)\leq\max\{\rho(f),\rho(g)\}\)
* (O.3) If \(\rho(f)<\rho(g)\) and \(h\neq 0\), then \(\rho(fh)<\rho(gh)\)
* (O.4) If \(f\) and \(g\) are nonzero and \(\rho(f)=\rho(g)\), then there exists a nonzero \(a\in\mathbb{K}\) such that \(\rho(f-ag)<\rho(g)\)
for all \(f,g,h\in R\).
**Definition 2**.: Let \(R\) be a \(\mathbb{K}\)-algebra, \((\Gamma,\prec)\) a well-ordered semigroup and \(\rho:R\rightarrow\Gamma\cup\{-\infty\}\) an order function. Then \((R,\rho,\Gamma)\) is called an **order structure** and \(R\) an **order domain** over \(\mathbb{K}\).
All order functions relevant in coding theory are actually weight functions.
**Definition 3**.: Let \(R\) be a \(\mathbb{K}\)-algebra. A **weight function** on \(R\) is an order function on \(R\) that satisfies furthermore
* (O.5) \(\rho(fg)=\rho(f)+\rho(g)\)
for all \(f,g\in R\). Here \(-\infty+n=-\infty\) for all \(n\in\mathbb{N}\).
Order domains and weight functions represent helpful tools to construct a large class of algebraic geometric codes. In fact, one can construct order domains and weight functions using Groebner basis theoretical methods alone, instead of their formal definition, as shown by the following theorem.
**Theorem 1**.: [17] _Let \(I\) be an ideal in \(R=\mathbb{K}[x_{1},\dots,x_{n}]\) and assume \(\mathcal{G}\) is a Groebner basis for \(I\) with respect to a generalized weighted degree ordering \(\prec_{W}\), with \(W\subseteq\mathbb{N}_{+}^{r}\). Suppose that_
* _(C1) any_ \(g\in\mathcal{G}\) _has exactly two monomials of highest weight in its support;_
* _(C2) no two monomials in the staircase_ \(\mathcal{N}_{\prec_{W}}(I)\) _of_ \(I\) _are of the same weight._
_Write \(\Gamma=\{\mathrm{w}(M)\mid M\in\mathcal{N}_{\prec_{W}}(I)\}\subseteq\mathbb{N} _{+}^{r}\). For \(f\in\mathbb{K}[x_{1},\dots,x_{n}]/I\), denote by \(F\) the unique remainder of any polynomial in \(f\) after division with \(\mathcal{G}\). Then \(R/I\) is an order domain with a weight function \(\rho:R/I\rightarrow\Gamma\cup\{-\infty\}\) defined by \(\rho(0)=-\infty\) and_
\(\rho(f)=\max_{\prec_{\mathbb{N}_{+}^{r}}}\{\mathrm{w}(M)\mid M\in\mathrm{Supp} (F)\}\) _for \(f\neq 0\)._
If \(I\) and \(\prec_{W}\) satisfy the hypotheses of Theorem 1, we call the pair \((R/I,\prec_{W})\) an order domain.
**Example 2**.: Let \(q\) be a prime power and consider the so-called Hermitian polynomial \(x^{q+1}-y^{q}-y\) and let \(I\) be the ideal \(I=(x^{q+1}-y^{q}-y)\subseteq\mathbb{F}_{q^{2}}[x,y]\), where \(\mathbb{F}_{q^{2}}\) stands for a field with \(q^{2}\) elements. We consider \(\prec_{W}\) given by \(w(x)=q\), \(w(y)=q+1\) and \(x\prec_{lex}y\). Then \(\mathcal{G}=\{x^{q+1}-y^{q}-y\}\) is a Groebner basis for \(I\), and it is not difficult to verify that \(I\) and \(\prec_{W}\) satisfy the order domain conditions \((C1)\) and \((C2)\), hence \((R/I,\prec_{W})\) is an order domain.
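The two conditions can also be verified mechanically. The following sketch (plain Python; the choice \(q=3\) and the finite exponent range are illustration data, and it uses that the tie between the two top-weight monomials is broken in favour of \(y^{q}\), so that the staircase consists of the monomials \(x^{a}y^{b}\) with \(b<q\)) checks (C1) directly and (C2) on a finite box of exponents.

```python
# Checking the order-domain conditions for the Hermitian example (illustration, q = 3).
q = 3
wx, wy = q, q + 1                                # w(x) = q, w(y) = q + 1

# (C1): x^{q+1} - y^q - y has exactly two monomials of highest weight in its support.
weights = {"x^(q+1)": (q + 1) * wx, "y^q": q * wy, "y": wy}
top = max(weights.values())
print("top-weight monomials:", [m for m, w in weights.items() if w == top])

# (C2): no two staircase monomials x^a y^b (with b < q) have the same weight -- finite check.
seen, ok = {}, True
for a in range(100):
    for b in range(q):
        w = a * wx + b * wy
        if w in seen:
            ok = False
        seen[w] = (a, b)
print("(C2) holds on the tested exponent range:", ok)
```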
One main advantage of Theorem 1 is that it makes it very easy to construct order domains of higher transcendence degree.
### Affine-variety codes and order domain codes
In this section, we present a class of codes, called affine-variety codes and defined in [12], obtained by evaluating functions in the coordinate ring of an affine variety on all the \(\mathbb{K}\)-rational points, i.e. points whose coordinates are in \(\mathbb{K}\), of the variety. Let \(I\) be any ideal in \(R:=\mathbb{K}[x_{1},\dots,x_{n}]\), where \(\mathbb{K}:=\mathbb{F}_{q}\) is the field with \(q\) elements. Put
\[I_{q}:=I+(x_{1}^{q}-x_{1},x_{2}^{q}-x_{2},\dots,x_{n}^{q}-x_{n})\]
The points of the affine variety \(\mathcal{V}(I_{q})\) defined by \(I_{q}\) are the \(\mathbb{F}_{q}\)-rational points of the affine variety defined by \(I\). Let \(\mathcal{V}(I_{q})=\{P_{1},P_{2},\dots,P_{N}\}\). Since \(I_{q}\) contains the polynomials \(x_{i}^{q}-x_{i}\) for all \(i=1,\dots,n\), it is a 0-dimensional and radical ideal. It follows that the coordinate ring
\[R_{q}:=R/I_{q}\]
of \(\mathcal{V}(I_{q})\) is an Artinian ring of length \(N\) and that there is an isomorphism \(\mathrm{ev}\) of \(\mathbb{F}_{q}\)-vector spaces
\[\mathrm{ev}:R_{q}\ \rightarrow\left(\mathbb{F}_{q}\right)^{N}\qquad\bar{f}\ \mapsto(f(P_{1}),\dots,f(P_{N}))\]
where \(f\) is a representative in \(R\) of the residue class \(\bar{f}\). Let \(L\subseteq R_{q}\) be an \(\mathbb{F}_{q}\)-vector subspace of \(R_{q}\) of dimension \(k\). The image \(\mathrm{ev}(L)\) of \(L\) under the evaluation map \(\mathrm{ev}\) is called the **affine-variety code** and we denote it by \(C(I,L)\). The dual code \(C^{\perp}(I,L)\) is the orthogonal complement of \(C(I,L)\) with respect to the usual inner product on \(\mathbb{F}_{q}^{N}\).
**Theorem 3**.: _[_12_]_ _Every linear code may be represented as an affine-variety code._
Let \(\mathcal{N}_{\prec_{W}}(I)\) be the Hilbert staircase of \(I\), and let \(B=\{b_{1},\dots,b_{k}\}\) be a basis of \(L\) as an \(\mathbb{F}_{q}\)-vector space. If \(\mathrm{Supp}(b_{i})\subseteq\mathcal{N}_{\prec_{W}}(I_{q})\), for all \(i=1,\dots,k\), and \(\mathrm{lm}(b_{1})\prec_{W}\mathrm{lm}(b_{2})\prec_{W}\cdots\prec_{W}\mathrm{lm}(b_{k})\), then the basis \(B\) is called _well-behaving_ and we define \(\mathcal{L}(L)=\{\mathrm{lm}(b_{1}),\dots,\mathrm{lm}(b_{k})\}\).
Let \(\mathcal{G}\) be a Groebner basis for \(I_{q}\). An ordered pair of monomials \((m_{1},m_{2})\), with \(m_{1},m_{2}\) in \(\mathcal{N}_{\prec_{W}}(I_{q})\), is said to be _one-way-well-behaving (OWB)_ if for any \(f\) such that \(\mathrm{lm}(f)=m_{1}\) and \(\mathrm{Supp}(f)\subseteq\mathcal{N}_{\prec_{W}}(I_{q})\), we have
\[\mathrm{lm}(fm_{2}\;\mathrm{rem}\;\mathcal{G})=\mathrm{lm}(m_{1}m_{2}\;\mathrm{rem}\;\mathcal{G}),\]
where the notation "\(f\;\mathrm{rem}\;\mathcal{G}\)" stands for the remainder of \(f\) modulo \(\mathcal{G}\).
**Theorem 4**.: _For any monomial ordering \(\prec\), the minimum distance of \(C(I,L)\) is at least_
\[\min_{p\in\mathcal{L}(L)}\mid\{s\in\mathcal{N}_{\prec}(I_{q})\ \mid\ \exists h \in\mathcal{N}_{\prec}(I_{q})\ \text{s.t}\ (p,h)\ \text{is OWB},\ \mathrm{lm}( ph\;{\mathrm{rem}}\;\mathcal{G})=s\}\mid\]
_and the minimum distance of \(C(I,L)^{\perp}\) is at least_
\[\min_{h\in\mathcal{N}_{\prec}(I_{q})\setminus\mathcal{L}(L)}\mid\{p\in\mathcal{N}_{\prec}(I_{q})\ \mid\ \exists s\in\mathcal{N}_{\prec}(I_{q})\ \text{s.t}\ (p,s)\ \text{is OWB},\ \mathrm{lm}(ps\;{\mathrm{rem}}\;\mathcal{G})=h\}\mid.\]
We now define order domain codes, so that we can translate the results on their minimum distance given in Theorem 4 into the language of semigroups.
**Definition 4**.: Let \((R/I,\prec_{W})\) be an order domain and \(L\subseteq R_{q}\), the affine-variety code \(C(I,L)\) is called an **order domain code**.
**Example 5**.: Let \(I=(x^{q+1}+y^{q}+y)\subseteq\mathbb{F}_{q^{2}}[x,y]\) and \(\prec_{W}\) be as in Example 2, with \(q=2\). \((R/I,\prec_{W})\) is an order domain and then the code from the curve \(x^{3}-y^{2}-y\) over \(\mathbb{F}_{4}\) is an order domain code.
We observe that this code is called Hermitian code since it is obtained by evaluating at the points of the Hermitian curve.
Observe that other algebraic geometric codes, such as norm-trace codes, Reed-Solomon codes and Hyperbolic codes, can also be put into a form satisfying the order domain conditions ([8]).
**Theorem 6**.: _Let \((R/I,\prec_{W})\) be an order domain and \(L\subseteq R_{q}\). The minimum distance of \(C(I,L)\) is at least_
\[\min_{\alpha\in\mathcal{L}(L)}\sigma(w(\alpha))\]
_and the minimum distance of \(C(I,L)^{\perp}\) is at least_
\[\min\{\mu(\mathrm{w}(h))\mid h\in\mathcal{N}_{\prec}(I_{q})\setminus\mathcal{L }(L)\}\geq\min\{\mu(\lambda)\mid\lambda\in\Gamma\setminus\mathrm{w}(\mathcal{L }(L))\}\]
_and so it is at least_
\[\min\{\mu(\lambda)\mid\lambda\in\Gamma\setminus\mathrm{w}(\mathcal{L}(L))\}\] (1)
_where \(\Gamma:=\mathrm{w}(\mathcal{N}_{\prec}(I))\) is the semigroup of the variable weights, \(\mu(\lambda):=\mid\{\alpha\in\Gamma\mid\exists\beta\in\Gamma\;\text{s.t.}\; \alpha+\beta=\lambda\}\mid\), for \(\lambda\in\Gamma\), and \(\sigma(\alpha):=\mid\{\lambda\in\mathrm{w}(\mathcal{N}(I_{q}))\mid\exists\beta \in\Gamma\;\text{s.t.}\;\alpha+\beta=\lambda\}\mid\), for \(\alpha\in\mathrm{w}(\mathcal{N}(I_{q}))\)._
One of the advantages of the order domain approach to algebraic geometric codes is the bound on the distance (1) provided in the previous theorem, since this is a bound that can be easily computed from the knowledge of the semigroup.
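For instance, in the Hermitian example the relevant semigroup is \(\Gamma=\langle q,q+1\rangle\), and the quantities \(\mu(\lambda)\) entering the bound (1) can be tabulated directly from \(\Gamma\); a minimal sketch follows (plain Python, with \(q=3\) and a finite horizon as illustration data).

```python
# Tabulating μ(λ) = #{α ∈ Γ : λ - α ∈ Γ} for Γ = <q, q+1> (illustration only, q = 3).
q, horizon = 3, 60
gens = (q, q + 1)
Gamma = {a * gens[0] + b * gens[1]
         for a in range(horizon) for b in range(horizon)
         if a * gens[0] + b * gens[1] <= horizon}      # Γ ∩ [0, horizon], including 0

def mu(lam):
    return sum(1 for alpha in Gamma if (lam - alpha) in Gamma)

for lam in sorted(Gamma)[:10]:
    print(lam, mu(lam))      # the bound (1) is the minimum of μ over Γ \ w(L(L))
```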
### Introduction to Hilbert quasi-polynomials
In the following we refer to [3] for standard notation.
Let \(I\) be a \(W\)-homogeneous ideal of \((R,\prec_{W})\). The component of \(R/I\) of degree \(k\in\mathbb{N}\) is given by
\[(R/I)_{k}:=\{f\in R/I\mid\mathrm{w}(m)=k\ \ \forall m\in\mathrm{Supp}(f)\}\quad\]
The **Hilbert function**\(H_{R/I}^{W}:\mathbb{N}\rightarrow\mathbb{N}\) of \((R/I,\prec_{W})\) is defined by
\[H_{R/I}^{W}(k):=\dim_{\mathbb{K}}((R/I)_{k})\]
and the Hilbert-Poincaré series of \((R/I,\prec_{W})\) is given by
\[\mathrm{HP}_{R/I}^{W}(t):=\sum_{k\in\mathbb{N}}H_{R/I}^{W}(k)t^{k}\in\mathbb{N }\llbracket t\rrbracket\]
When the grading given by \(W\) is clear from the context, we denote the Hilbert function and the Hilbert-Poincaré series of \((R/I,\prec_{W})\) by \(H_{R/I}\) and \(\mathrm{HP}_{R/I}\), respectively.
By the Hilbert-Serre theorem, the Hilbert-Poincaré series of \((R/I,\prec_{W})\) is a rational function, which is
\[\mathrm{HP}_{R/I}(t)=\frac{h(t)}{\prod_{i=1}^{n}(1-t^{w_{i}})}\in\mathbb{Z} \llbracket t\rrbracket\]
We recall that a function \(f\colon\mathbb{N}\to\mathbb{N}\) is a _quasi-polynomial of period s_ if there exists a set of \(s\) polynomials \(\{p_{0},\dots,p_{s-1}\}\) in \(\mathbb{Q}[x]\) such that \(f(n)=p_{i}(n)\) when \(n\equiv i\bmod s\). Let \(d:=\mathrm{lcm}(w_{1},\dots,w_{n})\) and let \((R/I,\prec_{W})\) be as above. We now refer to [2] for Hilbert quasi-polynomial theory. There exists a unique quasi-polynomial \(P_{R/I}^{W}:=\{P_{0},\dots,P_{d-1}\}\) of period \(d\) such that \(H_{R/I}(k)=P_{R/I}^{W}(k)\) for all \(k\) sufficiently large (that we denote with \(k\gg 0\)), i.e.
\[H_{R/I}(k)=P_{i}(k)\qquad\forall i\equiv k\mod d\quad\text{and}\quad\forall k\gg 0\]
\(P_{R/I}^{W}\) is called the _Hilbert quasi-polynomial associated to_\((R/I,\prec_{W})\). Observe that if \(d=1\), then \(P_{R/I}^{W}\) is a polynomial, and it is simply the _Hilbert polynomial_. We underline that the Hilbert quasi-polynomial does not depend on the chosen monomial ordering \(\prec_{\mathcal{M}}\), but only on the variable weights. It consists of \(d\) polynomials, which are not necessarily distinct. The minimum integer \(k_{0}\in\mathbb{N}\) such that \(H_{R/I}(k)=P_{R/I}^{W}(k)\;\forall\ k\geq k_{0}\) is called _generalized regularity index_ and we denote it by \(\mathrm{ri}^{W}(R/I)\). All the polynomials of the Hilbert quasi-polynomial \(P_{R/I}^{W}\) have rational coefficients and the same degree \(r\leq n-1\), where the equality holds if and only if \(I=(0)\). In this latter case, the leading coefficient \(a_{n-1}\) is the same for all \(P_{i}\), with \(i=0,\dots,d-1\), and \(a_{n-1}=\frac{1}{(n-1)!\prod_{i=1}^{n}w_{i}}\).
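A minimal toy illustration (plain Python; the ring \(\mathbb{K}[x,y]\) with \(W=(1,2)\) and \(I=(0)\) is an arbitrary example): here \(d=\mathrm{lcm}(1,2)=2\), the Hilbert function counts monomials \(x^{a}y^{b}\) with \(a+2b=k\), and the two polynomials of the quasi-polynomial are \(P_{0}(k)=k/2+1\) and \(P_{1}(k)=(k+1)/2\), both with leading coefficient \(1/((n-1)!\,w_{1}w_{2})=1/2\), consistent with the formula above.

```python
# Hilbert quasi-polynomial of K[x, y] with weights W = (1, 2) and I = (0)  (toy illustration).
def H(k):                                    # direct count of monomials x^a y^b with a + 2b = k
    return sum(1 for a in range(k + 1) for b in range(k + 1) if a + 2 * b == k)

P = [lambda k: k / 2 + 1,                    # P_0, used when k ≡ 0 (mod 2)
     lambda k: (k + 1) / 2]                  # P_1, used when k ≡ 1 (mod 2)

for k in range(10):
    assert H(k) == P[k % 2](k)
    print(k, H(k), P[k % 2](k))
```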
## 3 Computational improvements for Hilbert quasi-polynomials, with an application to order domains
In this section we present an algorithm for an effective computation of Hilbert quasi-polynomials and we show how to exploit them for checking order domain conditions.
We have improved the Singular procedures shown in [16] to compute the Hilbert quasi-polynomial for rings \(\mathbb{K}[x_{1},\ldots,x_{n}]/I\).
Before showing the algorithm and our improvement, we give some preliminary results.
**Theorem 7**.: [13] _Let \(\alpha_{1},\dots,\alpha_{d}\) be a fixed sequence of complex numbers, \(d\geq 1\) and \(\alpha_{d}\neq 0\). The following conditions on a function \(f:\mathbb{N}\rightarrow\mathbb{C}\) are equivalent:_
* \[\sum_{n\geq 0}f(n)x^{n}=\frac{P(x)}{Q(x)},\] _where_ \(Q(x)=1+\alpha_{1}x+\cdots+\alpha_{d}x^{d}\) _and_ \(P(x)\) _is a polynomial in_ \(x\) _of degree less than_ \(d\)_._
* _For all_ \(n\geq 0\)_,_ \[f(n+d)+\alpha_{1}f(n+d-1)+\cdots+\alpha_{d}f(n)=0.\] (2)
* _For all_ \(n\geq 0\)_,_ \[f(n)=\sum_{i=1}^{k}P_{i}(n)\gamma_{i}^{n},\] (3) _where_ \(1+\alpha_{1}x+\cdots+\alpha_{d}x^{d}=\prod_{i=1}^{k}(1-\gamma_{i}x)^{d_{i}}\)_, the_ \(\gamma_{i}\) _’s are distinct, and_ \(P_{i}(n)\) _is a polynomial in_ \(n\) _of degree less than_ \(d_{i}\)_._
**Proposition 1**.: [13] _Let \(f:\mathbb{N}\rightarrow\mathbb{C}\) and suppose that_
\[\sum_{n\geq 0}f(n)x^{n}=\frac{P(x)}{Q(x)}\]
_where \(P,Q\in\mathbb{C}[x]\). Then there is a unique finite set \(E_{f}\subset\mathbb{N}\) and a unique function \(f_{1}:E_{f}\rightarrow\mathbb{C}^{*}=\mathbb{C}\setminus\{0\}\) such that the function \(g:\mathbb{N}\rightarrow\mathbb{C}\) defined by_
\[g(n)=\begin{cases}f(n)&\text{ if }n\not\in E_{f},\\ f(n)+f_{1}(n)&\text{ if }n\in E_{f}\end{cases}\]
_satisfies \(\sum_{n\geq 0}g(n)x^{n}=R(x)/Q(x)\), where \(R\in\mathbb{C}[x]\) and \(\deg R<\deg Q\). Moreover, assuming \(E_{f}\neq\emptyset\) (i.e. \(\deg P\geq\deg Q\)), define \(\mathrm{m}(f)=\max\{i:i\in E_{f}\}\). Then:_
* \(\mathrm{m}(f)=\deg P-\deg Q\)_,_
* \(\mathrm{m}(f)\) _is the largest integer_ \(n\) _for which (_2_) fails to hold,_
* _writing_ \(Q(x)=\prod_{i=1}^{k}(1-\gamma_{i}x)^{d_{i}}\) _as above, there are unique polynomials_ \(P_{1},\dots,P_{k}\) _for which (_3_) holds for_ \(n\) _sufficiently large. Then_ \(\mathrm{m}(f)\) _is the largest integer_ \(n\) _for which (_3_) fails._
Thanks to these two results, we are able to compute the generalized regularity index of \(R/I\) once the Hilbert-Poincaré series of \(R/I\) has been computed.
**Proposition 2**.: _Let \((R/I,\prec_{W})\) be as usual and let \(\mathrm{HP}_{R/I}^{W}(t)=\frac{h(t)}{g(t)}\). Then the generalized regularity index of \(R/I\) is given by_
\[\mathrm{ri}^{W}(R/I)=\max\{0,\deg h(t)-\deg g(t)+1\}\]
Proof.: By Theorem 7 and Proposition 1, and since
\[\sum_{k\geq 0}H_{R/I}(k)t^{k}=\frac{h(t)}{g(t)}\]
with \(g(t)=\prod_{i=1}^{n}(1-t^{w_{i}})=\prod_{j=0}^{d-1}(1-\zeta^{j}t)^{\alpha_{j}}\), where \(\zeta\) is a primitive \(d\)th root of unity and \(\sum_{i=1}^{n}w_{i}=\sum_{j=0}^{d-1}\alpha_{j}=\deg g(t)\), we obtain that, for all \(k\geq k_{0}\) with
\[k_{0}=\begin{cases}0&\text{if }\deg h(t)<\deg g(t)\\ \deg h(t)-\deg g(t)+1&\text{if }\deg h(t)\geq\deg g(t)\end{cases}\quad,\]
the Hilbert function can be written as
\[H_{R/I}(k)=\sum_{i=0}^{d-1}S_{i}(k)\zeta^{ik}\]
where \(S_{i}(k)\) is a polynomial in \(k\) of degree less than \(\alpha_{i}\). Then, by uniqueness of the Hilbert quasi-polynomial of period \(d\), we deduce that the \(i\)th polynomial of \(P_{R/I}\) is given by \(P_{i}(t):=\sum_{j=0}^{d-1}S_{j}(t)\zeta^{ij}\) and that \(H_{R/I}(k)=P_{i}(k)\) for \(k\geq k_{0}\text{ and }k\equiv i\mod d\). ∎
**Remark 1**.: Since \(\mathrm{HP}^{W}_{R}(t)=\frac{1}{\prod_{i=1}^{n}(1-t^{w_{i}})}\), the generalized regularity index of \(R\) is 0.
Thanks to the following two results we can recover \(H^{W}_{R/I}(k)\) from \(H^{W}_{R}(k)\) and \(P_{R/I}^{W^{\prime}}\) from \(P_{R}^{W}\), where \(W\) is obtained from \(W^{\prime}\) by dividing each \(w_{i}\in W^{\prime}\) by \(\gcd(W^{\prime})\).
**Proposition 3**.: _Let \(\bar{I}\) be the initial ideal of \(I\) and \(\mathrm{HP}^{W}_{R/\bar{I}}(t)=\frac{h(t)}{g(t)}\) the Hilbert-Poincaré series of \(R/\bar{I}\), with \(h(t)=\sum_{i=0}^{s}h_{i}t^{i}\). Then_
\[H^{W}_{R/\bar{I}}(k)=\sum_{i=0}^{s}h_{i}H^{W}_{R}(k-i)\]
_for all \(k\geq 0\), with \(H^{W}_{R}(k)=0\) for all \(k<0\)._
Proof.: Since
\[\mathrm{HP}_{R}(t)=\sum_{k\geq 0}H_{R}(k)t^{k}=\frac{1}{\prod_{i=1}^{n}(1-t^{w _{i}})}\]
we have
\[\mathrm{HP}_{R/\bar{I}}(t)=\sum_{k\geq 0}H_{R/\bar{I}}(k)t^{k}=\frac{h(t)}{ \prod_{i=1}^{n}(1-t^{w_{i}})}=\left(\sum_{k\geq 0}H_{R}(k)t^{k}\right)\left( \sum_{j=0}^{s}h_{j}t^{j}\right)=\sum_{k\geq 0}\left(\sum_{j=0}^{s}h_{j}H_{R}(k -j)\right)t^{k}\]
where \(H_{R}(k)=0\) for all \(k<0\). Therefore, \(H_{R/\bar{I}}(k)=\sum_{j=0}^{s}h_{j}H_{R}(k-j)\), for all \(k\geq 0\). ∎
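As a quick illustration of Proposition 3, the following minimal Python sketch (function name is ours) recovers the values \(H^{W}_{R/\bar{I}}(k)\) from the values \(H^{W}_{R}(k)\) and the numerator coefficients \(h_{i}\); we use the series of Example 9 below, \(\mathrm{HP}_{R/\bar{I}}(t)=\frac{1-t^{6}}{(1-t^{2})(1-t^{3})}\), with \(W=[2,3]\).

```python
def hilbert_quotient(H_R, h):
    """H_{R/I-bar}(k) for k = 0, ..., len(H_R)-1, via Proposition 3:
    H_{R/I-bar}(k) = sum_i h_i * H_R(k - i), with H_R(k) = 0 for k < 0."""
    H = lambda k: H_R[k] if k >= 0 else 0
    return [sum(h_i * H(k - i) for i, h_i in enumerate(h)) for k in range(len(H_R))]


# Numerator h(t) = 1 - t^6, weights W = [2, 3]
H_R = [1, 0, 1, 1, 1, 1, 2, 1, 2, 2]                  # H_R(0), ..., H_R(9)
print(hilbert_quotient(H_R, [1, 0, 0, 0, 0, 0, -1]))  # [1, 0, 1, 1, 1, 1, 1, 1, 1, 1]
```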
**Lemma 1**.: _([16]) Let \(W^{\prime}:=a\cdot W=[w_{1}^{\prime},\dots,w_{n}^{\prime}]\) for some \(a\in\mathbb{N}_{+}\) and let \(\mathrm{HP}^{W}_{R/I}(t)=\frac{\sum_{j=0}^{s}h_{j}t^{j}}{\prod_{i =1}^{n}(1-t^{w_{i}})}\). Then it holds:_
1. \(P_{R/I}^{W}(k)=\sum_{j=0}^{s}h_{j}P_{R}^{W}(k-j)\) _for all_ \(k\geq\mathrm{ri}^{W}(R/I)\)__
2. \(P_{R}^{W^{\prime}}=\{P_{0}^{\prime},\dots,P_{ad-1}^{\prime}\}\) _is such that_ \[P_{i}^{\prime}(x)=\begin{cases}0&\text{ if }a\nmid i\\ P_{\frac{i}{a}}\left(\frac{x}{a}\right)&\text{ if }a\mid i\end{cases}\]
### Algorithm for computing Hilbert quasi-polynomials
Let \((R/I,\prec_{W})\) be as usual; we wish to compute its Hilbert quasi-polynomial \(P_{R/I}^{W}:=\{P_{0},\dots,P_{d-1}\}\). Since we know some degree bounds for Hilbert quasi-polynomials, we can compute them by means of interpolation.
First of all, let us consider \(I=(0)\) and \(W\) such that the \(w_{i}\)’s have no common factor. Each \(P_{j}\) has degree equal to \(n-1\), so, given \(j=0,\dots,d-1\), we want to calculate \(P_{j}(x):=a_{0}+a_{1}x+\dots+a_{n-1}x^{n-1}\) such that
\[P_{j}(k)=H_{R}(k)\qquad\forall k\geq 0\quad\text{and}\quad k\equiv j\mod d\]
Therefore, let us consider the first \(n\) positive integers \(x_{r}\) such that \(P_{j}(x_{r})=H_{R}(x_{r})\)
\[x_{r}:=j+rd,\quad\text{for }r=0,\dots,n-1\]
By construction, the polynomial \(P_{j}(x)\) interpolates the points \((x_{r},H_{R}(x_{r}))\).
Since we know the leading coefficient \(a_{n-1}\), we can reduce the number of interpolation points \(x_{r}\), and we get a system of linear equations in the coefficients \(a_{i}\), with \(i=0,\dots,n-2\). The system in matrix-vector form reads
\[\begin{bmatrix}1&x_{0}&\ldots&x_{0}^{n-2}\\ 1&x_{1}&\ldots&x_{1}^{n-2}\\ \vdots&\vdots&\vdots&\vdots\\ 1&x_{n-2}&\ldots&x_{n-2}^{n-2}\end{bmatrix}\begin{bmatrix}a_{0}\\ a_{1}\\ \vdots\\ a_{n-2}\end{bmatrix}=\begin{bmatrix}H_{R}(x_{0})-a_{n-1}x_{0}^{n-1}\\ H_{R}(x_{1})-a_{n-1}x_{1}^{n-1}\\ \vdots\\ H_{R}(x_{n-2})-a_{n-1}x_{n-2}^{n-1}\end{bmatrix}\] (4)
We observe that for computing \(P_{j}\) the algorithm requires the computation of \(n-1\) values of the Hilbert function, the construction of a Vandermonde matrix of dimension \(n-1\) and its inversion. We have not yet shown how to calculate \(H_{R}(x_{r})\), for \(r=0,\dots,n-2\).
Let \(k\in\mathbb{N}\). The problem of calculating \(H_{R}(k)\) is equivalent to the problem of determining the number of partitions of an integer into elements of a finite set \(S:=\{w_{1},\dots,w_{n}\}\), that is, the number of solutions in non-negative integers, \(\alpha_{1},\dots,\alpha_{n}\), of the equation
\[\alpha_{1}w_{1}+\cdots+\alpha_{n}w_{n}=k\]
This problem was solved in [9], [10] and the solution is the coefficient of \(x^{k}\) in the following power series
\[\frac{1}{(1-x^{w_{1}})\cdots(1-x^{w_{n}})}\] (5)
We are going to give an efficient method for getting the coefficient of \(x^{k}\) in the power series expansion of Equation (5). We refer to [11] for an in-depth analysis on the power series expansion of a rational function. Let
\[g(x)=\prod_{i=1}^{k}(1-\lambda_{i}x)^{\alpha_{i}}\qquad\text{and}\qquad f(x)= \prod_{i=k+1}^{l}(1-\lambda_{i}x)^{\alpha_{i}}\]
be polynomials in \(\mathbb{C}[x]\), where \(\alpha_{1},\ldots,\alpha_{l}\) are non-negative integers, all \(\lambda_{i}\)’s are distinct and non-zero and the degree of \(f(x)\) is less than that of \(g(x)\).
**Lemma 2**.: _Let_
\[\frac{f(x)}{g(x)}=\sum_{n\geq 0}b(n)x^{n}\]
_be the power series expansion of \(f(x)/g(x)\). Then,_
\[b(n)n=\sum_{r=1}^{n}\left(\sum_{i=1}^{k}\alpha_{i}\lambda_{i}^{r}-\sum_{i=k+1} ^{l}\alpha_{i}\lambda_{i}^{r}\right)b(n-r)\]
Let \(\zeta:=\zeta_{d}\) be a primitive \(d\)th root of unity. Since
\[\prod_{i=1}^{n}(1-x^{w_{i}})=\prod_{j\in T_{n}}(1-\zeta^{j}x)^{n}\prod_{j\in T _{n-1}}(1-\zeta^{j}x)^{n-1}\cdots\prod_{j\in T_{1}}(1-\zeta^{j}x)\]
for some pairwise disjoint subsets \(T_{1},\dots,T_{n}\subseteq\{0,\dots,d-1\}\), we can apply Lemma 2 with \(g(x):=\prod_{i=1}^{n}(1-x^{w_{i}})\) and \(f(x)=1\). Observing that any \(j\) in \(\{0,\ldots,d-1\}\) appears in exactly one of the \(T_{i}\)’s, say \(T_{\iota}\), and that \(\zeta^{j}\) appears \(\iota\) times in the product, we obtain the following recursive formula for computing \(H_{R}(k)\)
\[H_{R}(k)=\frac{1}{k}\sum_{r=1}^{k}\left[\sum_{i=1}^{n}i\left(\sum_{j\in T_{i}} \zeta^{jr}\right)\right]H_{R}(k-r)\] (6)
It follows that if we know \(H_{R}(i)\) for all \(i=0,\dots,k-1\), we can easily compute \(H_{R}(k)\) by means of (6).
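A minimal Python sketch of this recursion follows (function name is ours). A short computation, grouping the roots of unity by the factor \(1-x^{w_{i}}\) they come from, shows that the bracketed sum in (6) equals \(\sum_{i:\,w_{i}\mid r}w_{i}\), so no complex arithmetic is needed.

```python
def hilbert_values(weights, k_max):
    """H_R(0), ..., H_R(k_max) for R = K[x_1, ..., x_n] graded by `weights`,
    computed with the recursion (6).  For a given r, the bracketed sum of
    roots of unity equals sum(w for w in weights if r % w == 0)."""
    H = [1]                                   # H_R(0) = 1
    for k in range(1, k_max + 1):
        s = sum(sum(w for w in weights if r % w == 0) * H[k - r]
                for r in range(1, k + 1))
        H.append(s // k)                      # the sum is always a multiple of k
    return H


print(hilbert_values([2, 3], 9))    # [1, 0, 1, 1, 1, 1, 2, 1, 2, 2]
print(hilbert_values([7, 16], 23))  # 1 at k in {0, 7, 14, 16, 21, 23}, 0 elsewhere
```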
Given an equation \(\alpha_{1}w_{1}+\dots+\alpha_{n}w_{n}=k\), the number of non-negative integer solutions \(\alpha_{1},\dots,\alpha_{n}\) could also be counted by brute force. However, for a given \(k\in\mathbb{N}\), computing \(H_{R}^{W}(k)\) by brute force needs \(O(k^{n})\) operations, whereas the procedure we have implemented has a quadratic cost in \(k\): it needs \(O(nk^{2})\) operations ([16]).
Up to now we have shown how to calculate \(P_{R}^{W}\). For computing a Hilbert quasi-polynomial \(P_{R/I}^{W}=\{P_{0},\dots,P_{d-1}\}\), for any vector \(W\) and any homogeneous ideal \(I\) of \(R\), the procedure first computes \(P_{R}^{W^{\prime}}\), where \(W^{\prime}\) is obtained by dividing \(W\) by \(\gcd(w_{1},\dots,w_{n})\), and then produces \(P_{R/I}^{W}\) from it, using the relations between \(P_{R}^{W^{\prime}}\), \(P_{R}^{W}\) and \(P_{R/I}^{W}\) shown in Lemma 1.
We reduced the complexity of the algorithm presented in [16] by exploiting formulas for the second and third coefficients of the Hilbert quasi-polynomial of \(R\), whenever possible, and by computing the values of the Hilbert function once, instead of \(d\) times. In particular, these formulas, which are defined in [16], allow us to reduce the dimension of the linear system (4) that must be solved in order to compute the Hilbert quasi-polynomial \(P_{R}^{W}\). We compared experimentally the speed of the two algorithms for computing \(H_{R}^{W}\), the one in [16] and ours, with \(R=\mathbb{K}[x_{1},\dots,x_{n}]\), where \(n\in\{2,\dots,5\}\), and \(W\) a random vector of weights \(w_{i}\in[1,\dots,12]\). The following table summarizes the results obtained using an Intel(R) Core(TM) i3-6100 CPU @ 3.70 GHz processor and 8 GB of RAM.
| \(n\) | \(W\) | Old algorithm time (ms) | New algorithm time (ms) |
|---|---|---|---|
| 2 | \([1,3]\) | 20 | 0 |
| 3 | \([2,5,12]\) | 3300 | 580 |
| 4 | \([1,4,5,8]\) | 1080 | 140 |
| 5 | \([2,2,6,9,12]\) | 5340 | 230 |
For \(n\geq 6\), we have to choose suitably small weights \(w_{i}\), otherwise we run into software limitations of Singular. For example, for \(n=6\) we set \(W=[1,1,1,2,2,9]\): the old algorithm encounters some _Int-overflow errors_ and computes 4 of the 18 polynomials of \(H_{R}^{W}\) incorrectly, in 1050 ms, whereas our algorithm computes all 18 polynomials correctly in 100 ms.
### Hilbert quasi-polynomials for order domain codes
To test if a pair \((R/I,\prec_{W})\) satisfies the order domain condition \((C1)\), it suffices to compute a Groebner basis of \(I\) w.r.t. \(\prec_{W}\) and, for each polynomial in the basis, to check the two monomials of highest weight. For condition \((C2)\), on the other hand, we would need to study the ideal \(I\) or the semigroup \(\mathcal{N}_{\prec_{W}}(I)\). We give an alternative and efficient way to establish whether condition \((C2)\) is satisfied by \(I\) and \(\prec_{W}\).
**Theorem 8**.: _Let \((R/I,\prec_{W})\) be as usual. The following are equivalent:_
* \(I\) _and_ \(\prec_{W}\) _satisfy the order domain condition_ \((C2)\)_;_
* \(H_{R/\bar{I}}(k)\in\{0,1\}\) _for all_ \(0\leq k<\mathrm{ri}^{W}(R/\bar{I})\) _and each_ \(P_{i}\) _in_ \(P_{R/\bar{I}}^{W}\) _is the constant polynomial 0 or 1._
Proof.: Since \(\mathcal{N}_{\prec_{W}}(I)=\{\mathrm{lm}(f)\mid f\in R/\bar{I}\}\), we have \(H_{R/\bar{I}}(k)=\mid\{m\in\mathcal{N}_{\prec_{W}}(I)\mid\mathrm{w}(m)=k\}\mid\). We observe that condition \((C2)\) can be equivalently formulated as requiring that for any possible weight there is at most one monomial in \(\mathcal{N}_{\prec_{W}}(I)\) with that weight. In terms of the Hilbert function, this condition is then equivalent to
\[H_{R/\bar{I}}(k)\in\{0,1\}\mbox{ for all }k\geq 0.\]
Recalling that \(H_{R/\bar{I}}\) will be eventually equal to the quasi-polynomial \(P_{R/\bar{I}}^{W}=\{P_{0},\dots,P_{d-1}\}\), our assertion follows. ∎
**Remark 2**.: The only requirement that each \(P_{i}\) in \(P_{R/\bar{I}}^{W}\) is the constant polynomial 0 or 1 does not imply \((i)\), as we now show. Let \(R=\mathbb{K}[x_{1},x_{2}]\) with standard grading, and \(I=(x_{1}^{2},x_{1}x_{2})\subseteq R\). The Hilbert polynomial is the constant polynomial 1 but \(H_{R/I}(1)=2\), then \(I\) does not satisfy \((C2)\). That is because \(H_{R/\bar{I}}(k)=P_{R/\bar{I}}^{W}(k)\) for all \(k\geq\mathrm{ri}^{W}(R/\bar{I})\), but \(P_{R/\bar{I}}^{W}\) does not give any information for \(k<\mathrm{ri}^{W}(R/\bar{I})\).
**Corollary 1**.: _Let \((R/I,\prec_{W})\) be as usual, \(\bar{I}=\mathrm{in}(I)\) and \(\mathcal{G}\) a Groebner basis for \(I\). If_
* _any_ \(g\in\mathcal{G}\) _has exactly two monomials of highest weight in its support, and_
* \(H_{R/\bar{I}}(k)\in\{0,1\}\) _for all_ \(0\leq k<\mathrm{ri}^{W}(R/\bar{I})\) _and each_ \(P_{i}\) _in_ \(P_{R/\bar{I}}^{W}\) _is the constant polynomial 0 or 1._
_then \((R/I,\prec_{W})\) is an order domain._
**Example 9**.: Let \((R/I,\prec_{W})\) be as in Example 5. The Hilbert-Poincaré series of \((R/\bar{I},\prec_{W})\) is given by \(\mathrm{HP}_{R/\bar{I}}(t)=\frac{1-t^{6}}{(1-t^{2})(1-t^{3})}\), so \(\mathrm{ri}^{W}(R/\bar{I})=2\). The Hilbert quasi-polynomial is \(P_{R/\bar{I}}=\{P_{0},\dots,P_{5}\}\), with \(P_{i}=1\) for all \(i=0,\dots,5\). Since \(H_{R/\bar{I}}(k)=P_{R/\bar{I}}(k)\) for all \(k\geq 2\), we only have to check \(H_{R/\bar{I}}(1)\), which is obviously equal to 0, since \(1\) is a gap in the semigroup \(\Gamma=\langle 2,3\rangle\).
Now we are ready to describe the following
**Algorithm 10** (Check Order Domain).:
* Input: \((R/I,\prec_{W})\).
* Output: IsOrderDomain \(\in\{\)True, False\(\}\).
1. IsOrderDomain:= False;
2. Compute a Groebner basis \(\mathcal{G}\) for \(I\);
3. If \(\mathrm{w}(\mathrm{lt}(g))=\mathrm{w}(\mathrm{lt}(g-\mathrm{lt}(g)))\) for all \(g\in\mathcal{G}\), then:
    1. Let \(k_{1}:=\max\{\mathrm{ri}^{W}(R/\bar{I}),d(n-1)\}\);
    2. Compute \(H_{R}(k)\) for all \(1\leq k<k_{1}\);
    3. If \(H_{R/\bar{I}}(k)\in\{0,1\}\) for all \(1\leq k<k_{1}\) and each \(P_{i}\in P_{R/\bar{I}}\) is \(0\) or \(1\), then set IsOrderDomain := True.
4. Return IsOrderDomain.
We recall the results necessary to the description of the algorithm.
* Line 3. We check the condition \((C1)\) of Corollary 1. If it is satisfied we test the second one.
* Line 3a-3b. We compute the first \(k_{1}\) values of the Hilbert function of the polynomial ring \(R\) using Equation (6), where \(k_{1}\) is the maximum between \(d(n-1)\), that is the number of interpolation points, and the regularity index, \(\mathrm{ri}^{W}(R/\bar{I})\), which is known thanks to Proposition 2, where \(\deg h(t)\) is computed by Singular.
* Line 3c. Thanks to Proposition 3 we are able to check whether \(H_{R/\bar{I}}(k)\in\{0,1\}\) for the values \(0\leq k<\mathrm{ri}^{W}(R/\bar{I})\). If this test does not fail, we compute, using the algorithm shown in Section 3.1, the quasi-polynomial of \(R/\bar{I}\), completing the test of the second condition described in Corollary 1.
We would like to point out an interesting aspect of our approach. For our computation of Hilbert quasi-polynomials it is essential to work over a field of characteristic 0. In our applications, e.g. to coding theory, the order domain may be defined over a field of positive characteristic. However, the actual field matters only for the computation of the Groebner basis of \(I\). Once the Groebner basis has been obtained, the monomials under the Hilbert staircase are the same for any field, and so we can treat the leading monomials of the obtained Groebner basis as if they were over a field of characteristic zero.
## 4 Applications to codes from maximal curves
A maximal curve over a finite field \(\mathbb{F}_{q}\) is a projective geometrically irreducible non-singular algebraic curve defined over \(\mathbb{F}_{q}\) whose number of \(\mathbb{F}_{q}\)-rational points attains the Hasse-Weil upper bound
\[q+1+2g\sqrt{q},\]
where \(g\) is the genus of the curve. Maximal curves, especially those having large genus with respect to \(q\), are known to be very useful in coding theory. In this section, we show some examples of affine-variety codes constructed over maximal curves, which are also order domains.
**Example 11**.: Let \(\chi\subseteq\mathbb{P}^{3}\) be an \(\mathbb{F}_{49}\)-maximal curve of genus \(g=7\) ([1]) whose affine plane curve is
\[y^{16}=x(x+1)^{6}\]
Let \(I=(y^{16}-x(x+1)^{6})\subseteq\mathbb{F}_{49}[x,y]\) and let \(\prec_{W}\) be given by \(\mathrm{w}(x)=16,\mathrm{w}(y)=7\) and \(x\prec_{lex}y\). It is easy to verify that \(I\) and \(\prec_{W}\) satisfy the order domain condition \((C1)\). Let us check condition \((C2)\). Obviously, \(\bar{I}=(y^{16})\). With our algorithm we have computed the Hilbert quasi-polynomial \(P_{R/\bar{I}}=\{P_{0},\dots,P_{111}\}\), and each \(P_{i}\) is equal to 1. With Singular we have computed \(\mathrm{HP}_{R/\bar{I}}(t)=\frac{1-t^{112}}{(1-t^{7})(1-t^{16})}\) and so \(\mathrm{ri}^{W}(R/\bar{I})=90\), which means that we are left with computing \(H_{R/\bar{I}}(k)\), with \(0<k<90\). With our recursive algorithm we can easily see that all obtained values are in \(\{0,1\}\). Then we can conclude that \((R/I,\prec_{W})\) is an order domain.
In the previous example we could have avoided computing the Hilbert function for values less than 90, thanks to the following proposition.
**Proposition 4**.: _Let \(I\subseteq\mathbb{K}[x_{1},\dots,x_{n}]\) be an ideal such that its initial ideal \(\bar{I}\subseteq\mathbb{K}[x_{1},\dots,x_{n-1}]\), up to a reordering of variables. If each polynomial in the Hilbert quasi-polynomial of \(R/\bar{I}\) is 0 or 1, then \(H_{R/\bar{I}}(k)\in\{0,1\}\) for all \(k\geq 0\)._
Proof.: Suppose by contradiction \(H_{R/\bar{I}}(\tilde{k})\not\in\{0,1\}\) for some \(\tilde{k}\geq 0\), then there exist two distinct monomials \(m_{1},m_{2}\in R/\bar{I}\) with the same weight \(\tilde{k}\). For all \(\alpha_{n}\geq 0\) also the monomials \(m_{1}x_{n}^{\alpha_{n}},m_{2}x_{n}^{\alpha_{n}}\in R/\bar{I}\), and they are distinct with the same weight \(\tilde{k}+\alpha_{n}w_{n}\). In particular, it holds also for \(\tilde{k}+\alpha_{n}w_{n}\geq\mathrm{ri}^{W}(R/\bar{I})\), but this contradicts our hypothesis on the Hilbert quasi-polynomial of \(R/\bar{I}\). ∎
**Remark 3**.: If \(I\) and \(\prec_{W}\) satisfy the hypothesis of Proposition 4, then \((R/I,\prec_{W})\) is an order domain.
**Example 12**.: Let \(q>2\) be a prime power. The GK-curve, introduced by Giulietti and Korchmaros in [14], is the curve \(\mathcal{C}_{3}\subseteq\mathbb{P}^{3}\) defined over \(\mathbb{F}_{q^{6}}\) by the affine equations
\[v^{q+1}=u^{q}+u\quad\text{ and }\quad w^{\frac{q^{3}+1}{q+1}}=vh(u)\]
where \(h(u)=(u^{q}+u)^{q-1}-1\). This curve is maximal over \(\mathbb{F}_{q^{6}}\) and it is so far the only known example of a maximal curve which cannot be dominated by the Hermitian curve. It turns out in [15] that \(\mathcal{C}_{3}\) can also be defined by the equations
\[v^{q+1}=u^{q}+u\quad\text{ and }\quad w^{\frac{q^{3}+1}{q+1}}=v^{q^{2}}-v\]
We investigate whether \(\mathcal{C}_{3}\), equipped with a suitable generalized weighted degree ordering \(\prec_{W}\), defines an order domain. Let \(q=3\), and \(I=(v^{4}-u^{3}-u,w^{7}-v^{9}+v)\subseteq\mathbb{F}_{3^{6}}[u,v,w]\) with \(u\prec_{lex}v\prec_{lex}w\). Obviously, \(\mathcal{G}=\{v^{4}-u^{3}-u,w^{7}-v^{9}+v\}\) is a Groebner basis for \(I\) and \(\bar{I}=(v^{4},w^{7})\). In order to satisfy condition \((C1)\), we set \(W=[28,21,27]\). Note that, up to a constant factor, \(W\) is unique. Since \(\mathrm{lcm}(28,21,27)=756\), we have computed the Hilbert quasi-polynomial \(P_{R/\bar{I}}=\{P_{0},\dots,P_{755}\}\) of \((R/\bar{I},\prec_{W})\). Since each \(P_{i}\) turns out to be \(1\), thanks to Remark 3 above we can conclude that \((R/I,\prec_{W})\) is an order domain.
Let us show a last example for which the condition \((C2)\) does not hold.
**Example 13**.: Let \(\mathcal{R}(\ell)\) be the _Ree curve_ defined by
\[\mathcal{R}(\ell):\begin{cases}y^{\ell}-y=x^{\ell_{0}}(x^{\ell}-x)\\ z^{\ell}-z=x^{\ell_{0}}(y^{\ell}-y)\end{cases}\quad\text{with }\ell_{0}=3^{r}, \;r\geq 0,\;\ell=3\ell_{0}^{2}.\]
\(\mathcal{R}(\ell)\) is \(\mathbb{F}_{q}\)-maximal for \(q=\ell^{6}\). Consider \(r=0\) and \(\ell_{0}=1\); then \(I=(x^{4}-x^{2}-y^{3}+y,xy^{3}-xy-z^{3}+z)\subseteq R=\mathbb{F}_{3^{6}}[x,y,z]\) with \(z\prec_{lex}y\prec_{lex}x\). The set \(\mathcal{G}=\{x^{4}-x^{2}-y^{3}+y,xy^{3}-xy-z^{3}+z\}\) is a Groebner basis for \(I\) and \(\bar{I}=(x^{4},xy^{3})\). In order to satisfy condition \((C1)\), we choose the generalized weighted degree ordering \(\prec_{W}\) defined by \(W=[3,4,5]\). The computation of the Hilbert quasi-polynomial for \((R/\bar{I},\prec_{W})\) gives 60 distinct polynomials of degree 1, which means that \((R/I,\prec_{W})\) is not an order domain.
## 5 Conclusions and further comments
Algorithm 10 allows one to decide effectively whether a quotient ring can be seen as an order domain w.r.t. a generalized weighted degree ordering. It can be applied to coding theory, where it can indeed handle some interesting examples, as we have shown; as such, we believe it can become a convenient tool for coding theorists working with algebraic geometric codes.
We see at least two paths to follow in order to increase the impact of our approach, as we elaborate below.
An advantage of order domain codes (with respect to more traditional codes over curves) is that we can build them on higher-dimensional varieties, e.g. the surface in Example 51 of [4]. The higher dimension requires generalized weighted degree orderings \(\prec_{W}\), with \(W\in(\mathbb{N}_{+}^{r})^{n}\) and \(r\geq 2\), which cannot be tackled by our present theory and algorithms. Therefore, we believe that this extension is natural and worth studying.
In addition, our algorithm is well-suited when a specific variety has already been chosen to build the code on. Often, however, we know an infinite family of varieties that would be ideal for building codes on, e.g. known families of maximal curves. In this situation we would need to adapt our computational approach into a more theoretical one, using the core ideas of our algorithms to generate general proofs.
## Acknowledgements
These results are included in the first author’s PhD thesis and so she would like to thank her supervisors: the other two authors. The authors would like to thank the referee for valuable suggestions.
## References
* [1] S. Fanali, M. Giulietti and I. Platoni, _On maximal curves over finite fields of small order_, Adv. Math. Commun., **6**, (2012), 107–120.
* [2] W. Vasconcelos, _Computational methods in commutative algebra and algebraic geometry_, Springer Science & Business Media, **2**, (2004)
* [3] M. Kreuzer and L. Robbiano, _Computational commutative algebra 2_, Springer Science & Business Media, **2**, (2005).
* [4] H. E. Andersen and O. Geil, _Evaluation Codes from Order Domain Theory_, Finite Fields Appl., **14**, (2008), 92–123.
* [5] O. Geil and R. Pellikaan, _On the Structure of Order Domains_, Finite Fields Appl., **8**, (2002), 369–396.
* [6] C. Marcolla, E. Orsini and M. Sala, _Improved decoding of affine-variety codes_, Journal of Pure and Applied Algebra, **216.7**, (2012), 1533–1565.
* [7] V. D. Goppa, _Codes associated with divisors_, Problem of Inform. Trans., **13**, (1977), 22–26.
* [8] O. Geil, _Evaluation codes from an affine-variety codes perspective_, Advances in algebraic geometry codes, Ser. Coding Theory Cryptol, **5**, (2008), 153–180.
* [9] J. J. Sylvester, _On subvariants, ie semi-invariants to binary quantics of an unlimited order_, American Journal of Mathematics, **5.1**, (1882), 79–136.
* [10] J. W. L. Glaisher, _Formulae for partitions into given elements, derived from Sylvester’s theorem_, Quart. J. Math, **40**, (1909), 275–348.
* [11] D. V. Lee, _On the power-series expansion of a rational function_, Acta Arithmetica, **62.3**, (1992), 229–255.
* [12] J. Fitzgerald and R. F. Lax, _Decoding affine variety codes using Gröbner bases_, Des. Codes Cryptogr., **2**, (1998), 147–158.
* [13] R. Stanley, _Combinatorics and commutative algebra_, Springer Science & Business Media, **41**, (2007).
* [14] M. Giulietti and G. Korchmáros, _A new family of maximal curves over a finite field_, Mathematische Annalen, **343.1**, (2009), 229–245.
* [15] A. Garcia, C. Güneri and H. Stichtenoth, _A generalization of the Giulietti–Korchmáros maximal curve_, Advances in Geometry, **10.3**, (2010), 427–434.
* [16] M. Caboara and C. Mascia, _A partial characterization of Hilbert quasi-polynomials in the non-standard case_, arXiv:math/1607.05468, (2016).
* [17] O. Geil, _Algebraic geometry codes from order domains_. In M. Sala, T. Mora, L. Perret, S. Sakata and C. Traverso, _Groebner Bases, Coding, and Cryptography_, RISC Book Series, Springer, (2009), 121–141.
* [18] R. Matsumoto, _Miura’s Generalization of One-Point AG codes is Equivalent to Høholdt, van Lint and Pellikaan’s generalization_, IEICE Trans. Fund., **E82-A.10**, (1999), 2007–2010.
* [19] T. Høholdt, J. van Lint and R. Pellikaan, _Algebraic Geometry of Codes_. In _Handbook of Coding Theory_, V. S. Pless and W.C. Huffman, (1998), 871–961.
* [20] S. Miura, _Linear Codes on Affine Algebraic Varieties_, IEICE Trans. Fundamentals, (1996).
* [21] R. Matsumoto and S. Miura, _On the Feng-Rao bound for the L-construction of algebraic geometry codes_, IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences, **83.5**, (2000), 923–926.
Received October 2016; revised 16 November 2016.
|
1710.05654 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 68123,
"num_imgs": 15,
"llama3_tokens_count": 18766
} | [
"content_image/1710.05654/x1.png",
"content_image/1710.05654/x3.png",
"content_image/1710.05654/x6.png",
"content_image/1710.05654/x7.png",
"content_image/1710.05654/x8.png",
"content_image/1710.05654/x11.png",
"content_image/1710.05654/x13.png",
"content_image/1710.05654/x17.png",
"content_image/1710.05654/x21.png",
"content_image/1710.05654/x24.png",
"content_image/1710.05654/x27.png",
"content_image/1710.05654/x29.png",
"content_image/1710.05654/x31.png",
"content_image/1710.05654/x35.png",
"content_image/1710.05654/x37.png"
] | # Large Scale Graph Learning
from Smooth Signals
Vassilis Kalofolias
Signal Processing Laboratory 2, EPFL
Station 11, 1015 Lausanne, Switzerland
v.kalofolias@gmail.com
&Nathanaël Perraudin
Swiss Data Science Center, ETH Zürich
Universitätstrasse 25, 8006 Zürich, Switzerland
nperraud@ethz.ch
###### Abstract
Graphs are a prevalent tool in data science, as they model the inherent structure of data. Graphs have been used successfully in unsupervised and semi-supervised learning. Typically they are constructed either by connecting nearest samples, or by learning them from data, solving an optimization problem. While graph learning does achieve a better quality, it also comes with a higher computational cost. In particular, the previous state-of-the-art model cost is \(\mathcal{O}\left(n^{2}\right)\) for \(n\) samples. In this paper, we show how to scale it, obtaining an approximation with leading cost of \(\mathcal{O}\left(n\log(n)\right)\), with quality that approaches the exact graph learning model. Our algorithm uses known approximate nearest neighbor techniques to reduce the number of variables, and automatically selects the correct parameters of the model, requiring a single intuitive input: the desired edge density.
## 1 Introduction
Graphs are an invaluable tool in data science, as they can capture complex structures inherent in seemingly irregular high-dimensional data. While classical applications of graphs include data embedding, manifold learning, clustering and semi-supervised learning (Zhu et al., 2003; Belkin et al., 2006; Von Luxburg, 2007), they were later used for regularizing various machine learning models, for example for classification, sparse coding, matrix completion, or PCA (Zhang et al., 2006; Zheng et al., 2011; Kalofolias et al., 2014; Shahid et al., 2016).
More recently, graphs drew the attention of the deep learning community. While convolutional neural networks (CNNs) were highly successful for learning image representations, it was not obvious how to generalize them to irregular high-dimensional domains, where standard convolution is not applicable. Graphs bridge the gap between irregular data and CNNs through the generalization of convolutions on graphs (Defferrard et al., 2016; Kipf & Welling, 2016; Monti et al., 2016; Li et al., 2015). While clearly graph quality is important in such applications (Henaff et al., 2015; Defferrard et al., 2016), the question of how to optimally construct a graph remains an open problem.
The first applications mostly used weighted \(k\)-nearest neighbors graphs (\(k\)-NN) (Zhu et al., 2003; Belkin et al., 2006; Von Luxburg, 2007), but the last few years more sophisticated methods of _learning_ graphs from data were proposed. Today, _graph learning_, or _network inference_, has become an important problem itself (Wang & Zhang, 2008; Daitch et al., 2009; Jebara et al., 2009; Lake & Tenenbaum, 2010; Hu et al., 2013; Dong et al., 2015; Kalofolias, 2016).
However, graph learning is computationally too costly for large-scale applications that need graphs between millions of samples. Current state-of-the-art models for learning weighted undirected graphs (Dong et al., 2015; Kalofolias, 2016) cost \(\mathcal{O}\left(n^{2}\right)\) per iteration for \(n\) nodes, while previous solutions are even more expensive. Furthermore, they need parameter tuning to control sparsity, which adds an extra burden, making them prohibitive for applications with more than a few thousand nodes.
Large-scale applications can only resort to approximate nearest neighbors (A-NN), e.g. (Dong et al., 2011; Muja & Lowe, 2014; Malkov & Yashunin, 2016), which run with a cost of \(\mathcal{O}\left(n\log(n)\right)\). This is low compared to computing even a simple \(k\)-NN graph, as the pairwise distance matrix between all samples alone needs \(\mathcal{O}\left(n^{2}\right)\) computations. However, the quality of A-NN graphs is worse than that of \(k\)-NN graphs, which in turn is not as good as that of graphs learned from data.
_In this paper, we propose the first scalable graph learning method, with the same leading cost as A-NN, and with quality that approaches state-of-the-art graph learning._ Our method leverages A-NN graphs to effectively reduce the number of variables, and the state-of-the-art graph learning model by Kalofolias (2016) in order to achieve the best of both worlds: low cost and good quality. In Figure 1 we illustrate the advantage of our solution compared to the current state-of-the-art. Note that while the standard model costs the same regardless of the graph density (average node degree) \(k\), our solution benefits from the desired graph sparsity to reduce computation.
<figure><img src="content_image/1710.05654/x1.png"><figcaption>Figure 1: Time comparison of different ways to compute a graph. Left: Graphbetween 10,000 most frequent English words using a word2vec representation.Right: Graph between 1,000,000 nodes from 68 features (US Census 1990).Scalable algorithms benefit from a small average node degree k.</figcaption></figure>
One of our key contributions is to provide _a method to automatically select the parameters of the model_ by Kalofolias (2016) given a desired graph sparsity level. Like in \(k\)-NN, the user can choose the number of neighbors \(k\), _without performing grid search_ over two parameters. Using our scheme, we can learn a 1-million-nodes graph with a desired sparsity level on a desktop computer in \(16\) minutes, with a simple Matlab implementation.
## 2 Graph Learning from Smooth Signals
A widely used assumption for data residing on graphs is that values change smoothly across adjacent nodes. The smoothness of a set of vectors \(x_{1},\dots,x_{n}\in\mathbb{R}^{d}\) on a given weighted undirected graph is usually quantified by the _Dirichlet energy_ (Belkin & Niyogi, 2001)
\[\frac{1}{2}\sum_{i,j}W_{ij}\|x_{i}-x_{j}\|^{2}=\operatorname{tr}\left({X^{\top }LX}\right),\] (1)
where \(W_{ij}\in\mathbb{R}_{+}\) denotes the weight of the edge between nodes \(i\) and \(j\), \(L=D-W\) is the graph Laplacian, and \(D_{ii}=\sum_{j}W_{ij}\) is the diagonal weighted degree matrix.
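As a quick numerical sanity check of identity (1) (a self-contained sketch with synthetic data; not part of the original experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
X = rng.standard_normal((n, d))

W = rng.random((n, n))                 # random valid adjacency matrix:
W = np.triu(W, 1)                      # non-negative, symmetric, zero diagonal
W = W + W.T

L = np.diag(W.sum(axis=1)) - W         # combinatorial Laplacian L = D - W

lhs = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                for i in range(n) for j in range(n))
print(np.allclose(lhs, np.trace(X.T @ L @ X)))   # True
```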
Regularization using the Dirichlet energy has been used extensively in machine learning, to enhance for example image processing (Elmoataz et al., 2008; Zheng et al., 2011), non-negative matrix factorization (Cai et al., 2011; Benzi et al., 2016), matrix completion (Kalofolias et al., 2014), or principal component analysis (PCA) (Jiang et al., 2013; Shahid et al., 2015, 2016).
In these methods the Dirichlet energy is minimized w.r.t. matrix \(X\) given the Laplacian \(L\). On the other hand, we can learn a graph under the assumption that \(X\) is smooth on it, by minimizing the same energy w.r.t. \(L\), when \(X\) is given.
The first works for graph learning focused on learning the weights of a fixed k-nearest neighbor pattern (Wang & Zhang, 2008), learning a binary pattern (Jebara et al., 2009), or the whole adjacency matrix (Daitch et al., 2009). A more recent family of models is based on minimizing the _Dirichlet energy_ on a graph (Lake & Tenenbaum, 2010; Hu et al., 2013; Dong et al., 2015; Kalofolias, 2016). In the latter, Kalofolias (2016) proposed a unified model for learning a graph from smooth signals, that reads as follows:
\[\min_{W\in{\cal W}}\|W\circ Z\|_{1,1}+f(W).\] (2)
Here, \(Z_{ij}=\|x_{i}-x_{j}\|^{2}\), \(\circ\) denotes the Hadamard product, and the first term is equal to \(\operatorname{tr}\left({X^{\top}LX}\right)\). The optimization is over the set \({\cal W}\) of valid adjacency matrices (non-negative, symmetric, with zero diagonal).
The role of matrix function \(f(W)\) is to prevent \(W\) from obtaining a trivial zero value, control sparsity, and impose further structure, depending on the data and the application. Kalofolias obtained state-of-the-art results using
\[f(W)=-\alpha\mathbf{1}^{\top}\log(W\mathbf{1})+\frac{\beta}{2}\| W\|_{F}^{2},\] (3)
where \(\mathbf{1}=[1,\dots 1]^{\top}\). We will call this the _log model_. The previous state of the art was proposed by Hu et al. (2013) and by Dong et al. (2015), using
\[f(W)=\alpha\|W\mathbf{1}\|^{2}+\alpha\|W\|_{F}^{2}+\mathbbm{1} \left\{{\|W\|_{1,1}=n}\right\},\] (4)
where \(\mathbbm{1}\left\{{\text{condition}}\right\}=0\) if condition holds, \(\infty\) otherwise. In the sequel we call this the \(\ell_{2}\) _model_. Since \(W\mathbf{1}\) is the node degrees’ vector, the _log_ model (3) prevents the formation of disconnected nodes due to the logarithmic barrier, while the \(\ell_{2}\) model (4) controls sparsity by penalizing large degrees due to the first term. The choice therefore depends on the data and application in question.
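For concreteness, the full objective of (2) with the log prior (3) can be evaluated as follows (a minimal sketch; the function name is ours, \(W\) is a dense NumPy array, and all node degrees are assumed strictly positive):

```python
import numpy as np

def log_model_objective(W, Z, alpha, beta):
    """||W o Z||_{1,1} + f(W) for the log prior (3).
    W: symmetric, non-negative, zero-diagonal adjacency matrix;
    Z: matrix of squared pairwise distances Z_ij = ||x_i - x_j||^2."""
    degrees = W.sum(axis=1)                 # W 1, assumed > 0 entrywise
    smoothness = np.sum(W * Z)              # ||W o Z||_{1,1}, since W, Z >= 0
    return smoothness - alpha * np.sum(np.log(degrees)) + 0.5 * beta * np.sum(W ** 2)
```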
## 3 Constrained edge pattern
In traditional graph learning, all \(\binom{n}{2}\) possible edges between \(n\) nodes are considered, which results in a cost of at least \(\mathcal{O}\left(n^{2}\right)\) computations per iteration. Often, however, we need graphs with a roughly fixed number of edges per node, as in \(k\)-NN graphs. It is then natural to ask whether the cost of graph learning can be reduced to reflect the final desired graph sparsity.
In fact, the original problem (2) for the log model (3) can be solved efficiently when a constrained set \(\mathcal{E}^{\text{allowed}}\subseteq\{(i,j):i<j\}\) of allowed edges is known a priori. In that case, it suffices to solve the modified problem
\[\operatorname*{minimize}_{W\in{\widetilde{\cal W}}}\leavevmode\nobreak\ \|W \circ Z\|_{1,1}\leavevmode\nobreak\ -\leavevmode\nobreak\ \alpha\mathbf{1}^{ \top}\log(W\mathbf{1})\leavevmode\nobreak\ +\leavevmode\nobreak\ \frac{\beta}{ 2}\|W\|_{F}^{2},\] (5)
where we optimize over the constrained set \(\widetilde{\cal W}\) of adjacency matrices supported on \(\mathcal{E}^{\text{allowed}}\). Following Kalofolias (2016), we can rewrite the problem as
\[\operatorname*{minimize}_{\widetilde{w}}f_{1}(\widetilde{w})+f_{2}(K\widetilde {w})+f_{3}(\widetilde{w}),\] (6)
with
\[f_{1}(\widetilde{w})=\mathbbm{1}\left\{{\tilde{w}\geq 0}\right\} +2\tilde{w}^{\top}\tilde{z},\]
\[f_{2}(v)=-\alpha\mathbf{1}^{\top}\log(v),\]
\[f_{3}(\widetilde{w})=\beta\|\widetilde{w}\|^{2},\text{ with } \zeta=2\beta,\]
where \(\zeta\) is the Lipschitz constant of \(f_{3}\). Note that we gather all free parameters of the adjacency matrix \(\widetilde{W}\in{\widetilde{\cal W}_{m}}\) in a vector \(\widetilde{w}\in{\widetilde{\cal W}_{v}}\) of size only \(|\mathcal{E}^{\text{allowed}}|\), that is, the number of allowed edges, each counted only once. Accordingly, in \(\tilde{z}=z(\mathcal{E}^{\text{allowed}})\) we only keep the corresponding pairwise distances from matrix \(Z\). The linear operator \(K=\widetilde{S}=S(:,\mathcal{E}^{\text{allowed}})\) is also modified, keeping only the columns corresponding to the edges in \(\mathcal{E}^{\text{allowed}}\).
In this form, the problem can be solved by the primal-dual techniques of Komodakis & Pesquet (2015). The cost of the dual step, operating on the dual variable \(v\) (the degree vector), remains \(\mathcal{O}\left(n\right)\). However, the cost of the primal step, as well as the cost of applying the modified operator \(\widetilde{S}\) to exchange between the primal and dual spaces, is \(\mathcal{O}\left(|\mathcal{E}^{\text{allowed}}|\right)\) instead of the \(\mathcal{O}\left(n^{2}\right)\) of Algorithm 1 of Kalofolias (2016), reducing the overall complexity.
In some cases, a pattern of allowed edges \(\mathcal{E}^{\text{allowed}}\) can be induced by constraints of the model, for example sensor networks only assume connections between geographically nearby sensors. In most applications, however, a constrained set is not known beforehand, and we need to approximate the edge support of the final learned graph in order to reduce the number of variables. To this end, we propose using approximate nearest neighbors graphs to obtain a good approximation. While computing a \(k\)-NN graph needs \(\mathcal{O}\left(n^{2}d\right)\) computations, _approximate nearest neighbors_ (A-NN) algorithms (Muja & Lowe, 2009; Dong et al., 2011; Muja & Lowe, 2012, 2014; Malkov & Yashunin, 2016) offer a good compromise between accuracy and speed. Specifically, A-NN methods scale gracefully with the number of nodes \(n\), the fastest ones having an overall complexity of \(\mathcal{O}\left(n\log(n)d\right)\) for \(d\)-dimensional data.
When approximating the support of the final edges of a graph, we prefer false positives to false negatives. We thus start with an initial support whose cardinality is larger than that of the desired final graph, and let the weight-learning step automatically select which edges to set to zero. We select a set \(\mathcal{E}^{\text{allowed}}\) with cardinality \(|\mathcal{E}^{\text{allowed}}|=\mathcal{O}\left(nkr\right)\), where \(k\) is the desired number of neighbors per node and \(r\) a small multiplicative factor. By setting the sparsity parameters correctly, the graph learning step will only keep the final \(\mathcal{O}\left(nk\right)\) edges, setting the less important or wrong edges to zero. The bigger the factor \(r\), the more freedom the learning algorithm has to select the right edges. If, at the end of the graph learning, many nodes still have all of their allowed edges set to non-zero values, this is an indication that we should have provided a more generous set of allowed edges.
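A minimal sketch of this support-selection step (function name and symmetrization choice are ours; we use exact nearest neighbors from scikit-learn as a stand-in for the A-NN routine that would be used at large scale):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def allowed_edges(X, k, r=3):
    """Candidate edge set E_allowed: each node proposes its r*k nearest
    neighbors, and the pattern is symmetrized.  Returns index arrays
    (rows, cols) with rows < cols, each candidate edge listed once."""
    n = X.shape[0]
    nn = NearestNeighbors(n_neighbors=r * k + 1).fit(X)
    _, idx = nn.kneighbors(X)                    # first neighbor is the node itself
    rows = np.repeat(np.arange(n), r * k)
    cols = idx[:, 1:].ravel()
    lo, hi = np.minimum(rows, cols), np.maximum(rows, cols)
    edges = np.unique(np.stack([lo, hi], axis=1), axis=0)
    return edges[:, 0], edges[:, 1]
```

The edge weights are then learned only on these \(\mathcal{O}(nkr)\) candidate edges.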
### Overall theoretical complexity
The cost of learning a \(kr\)-A-NN graph is \(\mathcal{O}\left(n\log(n)d\right)\) for \(n\) nodes and data in \(\mathbb{R}^{d}\), while additionally learning the edge weights costs \(\mathcal{O}\left(krn\right)\) per iteration. The overall complexity is therefore \(\mathcal{O}\left(n\log(n)d\right)+\mathcal{O}\left(nkrI\right)\) for \(I\) iterations. For large \(n\), the dominating cost is asymptotically the one of computing the A-NN and not the cost of learning the weights on the reduced set.
## 4 Automatic parameter selection
A major problem of models (3) and (4) is the choice of meaningful parameters \(\alpha,\beta\), as grid search increases computation significantly. We show how this burden can be completely avoided for model (3). First, we show that sparsity depends effectively on a single parameter, and then we propose a method to set it automatically for any \(k\). Our method is based on predicting the number of edges of any node for a given parameter value, if we relax the symmetricity constraint of \(W\).
### Reduction to a single optimization parameter
In (Kalofolias, 2016, Proposition 2), it is argued that model (3) effectively has one parameter changing the shape of the edges, the other changing the magnitude. We reformulate this claim as follows:
**Proposition 1**.: _Let \(W^{*}(Z,\alpha,\beta)\) denote the solution of model (3) for input distances \(Z\) and parameters \(\alpha,\beta>0\). Then the same solution can be obtained with fixed parameters \(\alpha=1\) and \(\beta=1\), by multiplying the input distances by \(\theta=\frac{1}{\sqrt{\alpha\beta}}\) and the resulting edges by \(\delta=\sqrt{\frac{\alpha}{\beta}}\):_
\[W^{*}\left(Z,\alpha,\beta\right)=\sqrt{\frac{\alpha}{\beta}}W^{*}\left(\frac{1 }{\sqrt{\alpha\beta}}Z,1,1\right)=\delta W^{*}\left(\theta Z,1,1\right).\] (7)
**Proof 1**.: _Apply (Kalofolias, 2016, Prop. 2), with \(\gamma=\sqrt{\frac{\alpha}{\beta}}\) and divide all operands by \(\sqrt{\alpha\beta}\)._
Proposition 1 shows that the parameter spaces \((\alpha,\beta)\) and \((\theta,\delta)\) are equivalent. While the first one is convenient to define (3), the second one makes the sparsity analysis and the application of the model simpler. In words, all graphs that can be learned by model (3) can be equivalently computed by multiplying the initial distances \(Z\) by \(\theta=\frac{1}{\sqrt{\alpha\beta}}\), using them to learn a graph with fixed parameters \(\alpha=\beta=1\), and multiplying all resulting edges by the same constant \(\delta=\sqrt{\frac{\alpha}{\beta}}\).
This property allows independent tuning of sparsity and scale of the solution. For some applications the scale of the graph is less important, and multiplying all edges by the same constant does not change its functionality. In other cases, we want to explicitly normalize the graph to a specific size, for example setting \(\|W\|_{1,1}=n\) as in (Hu et al., 2013), or making sure that \(\lambda_{\text{max}}=1\). Nevertheless, for applications where scale shall be set automatically, we provide the following Theorem that characterizes the connection between \(\delta\) and the scale of the weights.
**Theorem 1**.: _All edges of the solution \(W^{*}(Z,\alpha,\beta)=\delta W^{*}(\theta Z,1,1)\) with \(\theta=\frac{1}{\sqrt{\alpha\beta}}\), \(\delta=\sqrt{\frac{\alpha}{\beta}}\) of model (3) are upper bounded by \(\delta\):_
\[W^{*}(Z,\alpha,\beta)\leq\sqrt{\frac{\alpha}{\beta}}=\delta.\] (8)
**Proof 2**.: _See supplementary material A._
In practice, when we search for relatively sparse graphs, the largest edges will have weights equal or very close to \(\delta\).
### Setting the remaining regularization parameter
The last step for automatizing parameter selection is to find a relation between \(\theta\) and the desired sparsity (the average number of neighbors per node). We first analyze the sparsity level with respect to \(\theta\) for each node independently. Once the independent problems are well characterized, we propose an empirical solution to obtain a global value of \(\theta\) providing approximately the desired sparsity level.
#### 4.2.1 Sparsity analysis for one node
In order to analyze the sparsity of the graphs obtained by model (3), we take one step back and drop the symmetricity constraint. The problem becomes separable and we can focus on a single node. Keeping only one column \(w\) of matrix \(W\), we arrive at the simpler optimization problem
\[\min_{w\in\mathbb{R}^{n}_{+}}\leavevmode\nobreak\ \leavevmode \nobreak\ \leavevmode\nobreak\ \theta w^{\top}z-\log(w^{\top}\mathbf{1})+\frac {1}{2}\|w\|_{2}^{2}.\] (9)
The above problem also has only one parameter \(\theta\) that controls sparsity, so that larger values of \(\theta\) yield sparser solutions \(w^{*}\). Furthermore, _it enjoys an analytic solution if we sort the elements of \(z\)_, as we prove with the next theorem.
**Theorem 2**.: _Suppose that the input vector \(z\) is sorted in ascending order. Then the solution of problem (9) has the form_
\[w^{*} =\max\left(0,\lambda^{*}-\theta z\right)=[\lambda^{*}-\theta z_{ \mathcal{I}};\mathbf{0}],\] (10)
_with_
\[\lambda^{*}=\frac{\theta b_{k}+\sqrt{\theta^{2}b_{k}^{2}+4k}}{2k}.\]
_The set \(\mathcal{I}=\{1,\dots,k\}\) corresponds to the indices of the \(k\) smallest distances \(z_{i}\) and \(b_{k}\) is the cumulative sum of the smallest \(k\) distances in \(z\), \(b_{k}=\sum_{i=1}^{k}z_{i}\)._
We provide the proof of Theorem 2 after presenting certain intermediate results. In order to solve Problem (9) we first introduce a slack variable \(l\) for the inequality constraint, so that the KKT optimality conditions are
\[\theta z-\frac{1}{w^{\top}\mathbf{1}}+w-l=0,\] (11)
\[w\geq 0,\] (12)
\[l\geq 0,\] (13)
\[l_{i}w_{i}=0,\forall i.\] (14)
The optimum of \(w\) can be revealed by introducing the term \(\lambda^{*}=\frac{1}{w^{*\top}\mathbf{1}}\) and rewriting (11) as
\[w^{*}=\lambda^{*}-\theta z+l.\] (15)
Then, we split the elements of \(w\) in two sets, \(\mathcal{A}\) and \(\mathcal{I}\) according to the activity of the inequality constraint (12), so that \(w_{\mathcal{I}}>\mathbf{0}\) (inactive) and \(w_{\mathcal{A}}=\mathbf{0}\) (active). Note that at the minimum, the elements of \(w\) will also be sorted in a descending order so that \(w^{*}=[w^{*}_{\mathcal{I}};\mathbf{0}]\), according to Theorem 2. We first need a condition for an element of \(w^{*}\) to be positive, as expressed in the following lemma:
**Lemma 1**.: _An element \(w^{*}_{i}\) of the solution \(w^{*}\) of problem (9) is in the active set \(\mathcal{A}\) if and only if it corresponds to an element of \(z_{i}\) for which \(\theta z_{i}\geq\lambda^{*}\)._
**Proof 3**.: \(\left(\Rightarrow\right)\)_: If \(w_{i}\) is in the active set we have \(w_{i}=0\) and \(l_{i}\geq 0\), therefore from (15) we have \(\theta z_{i}-\lambda^{*}\geq 0\). \(\left(\Leftarrow\right)\): Suppose that there exists \(i\in\mathcal{I}\) for which \(\theta z_{i}\geq\lambda^{*}\). The constraint being inactive means that \(w^{*}_{i}>0\). From (14) we have that \(l_{i}=0\) and (15) gives \(w^{*}_{i}=\lambda^{*}-\theta z_{i}\leq 0\), a contradiction._
We are now ready to proceed to the proof of Theorem 2.
**Proof 4** (Theorem 2).: _As elements of \(\theta z\) are sorted in an ascending order, the elements of \(\lambda^{*}-\theta z\) will be in a descending order. Furthermore, we know from Lemma 1 that all positive \(w^{*}_{i}\) will correspond to \(\theta z_{i}<\lambda^{*}\). Then, supposing that \(|\mathcal{I}|=k\) we have the following ordering:_
\[-\theta z_{1}\geq\dots\geq-\theta z_{k}>- \lambda^{*}\geq-\theta z_{k+1}\geq\dots\geq-\theta z_{n}\Rightarrow\]
\[\lambda^{*}-\theta z_{1}\geq\dots\geq\lambda^{*}-\theta z_{k}> 0\geq\lambda^{*}-\theta z_{k+1}\geq\dots\geq\lambda^{*}-\theta z _{n}.\]
_In words, the vector \(\lambda^{*}-\theta z\) will have sorted elements so that the first \(k\) are positive and the rest are non-positive. Furthermore, we know that the elements of \(l\) in the optimal have to be \(0\) for all inactive variables \(w^{*}_{\mathcal{I}}\), therefore \(w^{*}_{\mathcal{I}}=\lambda^{*}-\theta z_{\mathcal{I}}\). The remaining elements of \(w\) will be \(0\) by definition of the active set:_
\[w^{*}=[\underbrace{\lambda^{*}-\theta z_{1},\cdots,\lambda^{*}- \theta z_{k}}_{w^{*}_{\mathcal{I}}},\underbrace{0,\cdots,0}_{w^{*}_{\mathcal{A }}}].\]
_What remains is to find an expression to compute \(\lambda^{*}\) for any given \(z\). Keeping \(z\) ordered in ascending order, let the cumulative sum of \(z_{i}\) be \(b_{k}=\sum_{i=1}^{k}z_{i}.\) Then, from the definition of \(\lambda^{*}=\frac{1}{w^{*\top}\mathbf{1}}\) and using the structure of \(w^{*}\) we have_
\[\frac{1}{\lambda^{*}}=\sum_{i=1}^{k}\left(\lambda^{*}-\theta z_{i}\right)=k\lambda^{*}-\theta b_{k}\quad\Longleftrightarrow\quad k(\lambda^{*})^{2}-\theta b_{k}\lambda^{*}-1=0,\] (16)
_which has only one positive solution,_
\[\lambda^{*}=\frac{\theta b_{k}+\sqrt{\theta^{2}b_{k}^{2}+4k}}{2k}.\] (17)
#### 4.2.2 Parameter selection for the non-symmetric case
While Theorem 2 gives the form of the solution for a known \(k\), the latter cannot be known a priori, as it is also a function of \(z\). For this, we propose Algorithm 1 that solves this problem, simultaneously finding \(k\) and \(\lambda^{*}\) in \(\mathcal{O}\left(k\right)\) iterations. This algorithm will be needed for automatically setting the parameters for the symmetric case of graph learning.
As \(k\) is the number of non-zero edges per node, we can assume it to be small, like for \(k\)-NN. It is thus cheap to incrementally try all values of \(k\) from \(k=1\) until we find the correct one, as Algorithm 1 does. Once we try a value that satisfies \(\lambda\in\left(\theta z_{k},\theta z_{k+1}\right]\), all KKT conditions hold and we have found the solution to our problem. A similar algorithm was proposed in (Duchi et al., 2008) for projecting vectors on the probability simplex, that could be used for a similar analysis for the \(\ell_{2}\)-degree constraints model (4).
Most interestingly, using the form of the solution given by Theorem 2 we can solve the reverse problem: _If we know the distances vector \(z\) and we want a solution \(w^{*}\) with exactly \(k\) non-zero elements, what should the parameter \(\theta\) be?_ The following theorem answers this question, giving intervals for \(\theta\) as a function of \(k\), \(z\) and its cumulative sum \(b\).
**Theorem 3**.: _If \(\theta\in\left(\frac{1}{\sqrt{kz_{k+1}^{2}-b_{k}z_{k+1}}},\frac{1}{\sqrt{kz_{k }^{2}-b_{k}z_{k}}}\right]\), then the solution of eq. (9) has exactly \(k\) non-zeros._
**Proof 5**.: _See supplementary material B._
The idea of Theorem 3 is illustrated in the left part of Figure 2. For this figure we have used the distances between one image of MNIST and 999 other images. For any given sparsity level \(k\), the interval of valid values of \(\theta\) can be read off directly from the pairwise distances.
```
1: Input:\(z\in\mathbb{R}^{n}_{*+}\) in ascending order, \(\theta\in\mathbb{R}_{*+}\)
2: \(b_{0}\gets 0\){Initialize cumulative sum}
3: for \(i=1,\dots,n\) do
4: \(b_{i}\gets b_{i-1}+z_{i}\){Cumulative sum of z}
5: \(\lambda_{i}\leftarrow\frac{\sqrt{\theta^{2}b_{i}^{2}+4i}+\theta b_{i}}{2i}\)
6: if \(\lambda_{i}\leq\theta z_{i}\) then
7: \(k\gets i-1\)
8: \(\lambda^{*}\leftarrow\lambda_{k}\)
9: \(w^{*}\leftarrow\max\{0,\lambda^{*}-\theta z\}\){\(k\)-sparse output}
10: break
11: end if
12: end for
```
**Algorithm 1** Solver of the one-node problem, (9).
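A NumPy sketch equivalent to Algorithm 1 (the function name is ours; we vectorize the scan and, as an edge case the listing leaves implicit, obtain \(k=n\) when the stopping test never triggers):

```python
import numpy as np

def one_node_solution(z, theta):
    """Solve the one-node problem (9).  z: squared distances sorted in
    ascending order (strictly positive), theta > 0.  Returns (w, k, lambda*)."""
    z = np.asarray(z, dtype=float)
    idx = np.arange(1, len(z) + 1)
    b = np.cumsum(z)                                          # b_i = z_1 + ... + z_i
    lam = (theta * b + np.sqrt(theta ** 2 * b ** 2 + 4 * idx)) / (2 * idx)
    # By Lemma 1, index i is active iff theta*z_i < lambda*; the test below
    # fails for the first time at i = k + 1 (the stopping criterion of
    # Algorithm 1), so k is simply the number of passing indices.
    k = int(np.count_nonzero(lam > theta * z))                # k >= 1 always
    lam_star = lam[k - 1]
    w = np.maximum(0.0, lam_star - theta * z)                 # k-sparse, eq. (10)
    return w, k, lam_star


# Example: theta = 1, z = [1, 2, 10]  ->  k = 1, lambda* = (1 + sqrt(5)) / 2
print(one_node_solution([1.0, 2.0, 10.0], theta=1.0))
```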
<figure><img src="content_image/1710.05654/x3.png"><figcaption>Figure 2: Theoretical bounds of θ for a given sparsity level on 1000 imagesfrom MNIST. Left: Solving (9) for only one column of Z. Theorem 3 applies andfor each k gives the bounds of θ (blue). Middle: Solving (3) for the wholepairwise distance matrix Z of the same dataset. The bounds of (18) (bluedashed line) are used to approximate the sparsity of the solution. The redline is the measured sparsity of the learned graphs from model (3). Right:Same for USPS dataset.</figcaption></figure>
#### 4.2.3 Parameter selection for the symmetric case
In order to approximate the parameter \(\theta\) that gives the desired sparsity of \(W\), we use the above analysis for each row or column separately, omitting the symmetricity constraint. Then, using the arithmetic mean of the bounds of \(\theta\) we obtain a good approximation of the behaviour of the full symmetric problem. In other words, to obtain a graph with approximately \(k\) edges per node, we propose to use the following intervals:
\[\theta_{k}\in\left(\frac{1}{n}\sum_{j=1}^{n}\theta^{\textit{lower}}_{k,j},\frac{1}{n}\sum_{j=1}^{n}\theta^{\textit{upper}}_{k,j}\right]=\Bigg{(}\sum_{j=1}^{n}\frac{1}{n\sqrt{k\hat{Z}_{k+1,j}^{2}-B_{k,j}\hat{Z}_{k+1,j}}},\sum_{j=1}^{n}\frac{1}{n\sqrt{k\hat{Z}_{k,j}^{2}-B_{k,j}\hat{Z}_{k,j}}}\Bigg{]},\] (18)
where \(\hat{Z}\) is obtained by sorting each column of \(Z\) in increasing order, and \(B_{k,j}=\sum_{i=1}^{k}\hat{Z}_{i,j}\). The above expression is the arithmetic mean over all minimum and maximum values of \(\theta_{k,j}\) that would give a \(k\)-sparse result \(W_{:,j}\) if we were to solve problem (9) for each of the columns separately, according to Theorem 3. Even though this approach does not take into account the symmetricity constraint, it gives surprisingly good results in the vast majority of cases. For the final value of \(\theta\) for a given \(k\), we use the middle of this interval on a logarithmic scale, that is, the _geometric mean_ of the upper and lower limits.
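A minimal NumPy sketch of this selection rule (the function name and the dropping of the zero self-distances are our choices; it assumes the full matrix \(Z\) fits in memory, as in the small-scale illustration of Figure 2):

```python
import numpy as np

def estimate_theta(Z, k):
    """Approximate theta yielding about k edges per node, following eq. (18).
    Z: (n, n) squared pairwise distances with zero diagonal; 2 <= k <= n - 2."""
    Z_hat = np.sort(Z, axis=0)[1:, :]              # drop the zero self-distance
    B = np.cumsum(Z_hat, axis=0)                   # B[k-1, j] = sum of k smallest
    z_k, z_k1, b_k = Z_hat[k - 1, :], Z_hat[k, :], B[k - 1, :]
    theta_low = 1.0 / np.sqrt(k * z_k1 ** 2 - b_k * z_k1)
    theta_up = 1.0 / np.sqrt(k * z_k ** 2 - b_k * z_k)
    # arithmetic mean of the per-column bounds, then geometric mean of the two
    return float(np.sqrt(np.mean(theta_low) * np.mean(theta_up)))


# toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Z = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
print(estimate_theta(Z, k=5))
```

If the realized edge density misses the target, this value still provides a good starting point for a quick local search over \(\theta\).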
## 5 Experiments
In our experiments we wish to answer questions regarding (1) the approximation quality of our model, (2) the quality of our automatic parameter selection, (3) the benefit of learning versus A-NN for large-scale applications and (4) the scalability of the model. As this contribution is the first to scale a graph learning algorithm to a large number of nodes, it is difficult to compare with other methods without trying to scale them first. While this paper focuses on the scaled log model, we also scaled the \(\ell_{2}\)-model and the "hard" and "soft" models proposed by (Daitch et al., 2009). We refer the reader to the supplementary material C for further details, and to D for details about the datasets we use.
### Approximation quality of large scale model
<figure><img src="content_image/1710.05654/x6.png"><figcaption>Figure 3: Approximation error between our large scale log model and the exactlog model by Kalofolias (2016).</figcaption></figure>
When computing \(\mathcal{E}^{\text{allowed}}\), we use an A-NN graph from the publicly available FLANN library¹ that implements the work of Muja & Lowe (2014). To learn a graph with on average \(k\) neighbors per node (\(k\)-NN), we first compute an \(rk\)-A-NN graph and use its edges as \(\mathcal{E}^{\text{allowed}}\). The graph is then learned on this subset. The size of \(\mathcal{E}^{\text{allowed}}\) does not only affect the time needed to learn the graph, but also its quality. A too restrictive choice might prevent the final graph from learning useful edges. In Figure 3, we study the effect of this restriction on the final result. The vertical axis is the relative \(\ell_{1}\) error between our approximate log model and the actual log model by Kalofolias (2016) when learning a graph between 1000 images of MNIST, averaged over 10 runs. Note that the result will depend on the A-NN algorithm used, while a comparison between different types of A-NN is beyond the scope of this paper.
[FOOTNOTE:1][ENDFOOTNOTE]
### Effectiveness of automatic parameter selection
<figure><img src="content_image/1710.05654/x7.png"><figcaption>Figure 4: Effectiveness of θ bounds eq. (18). Requested versus obtaineddegree, "spherical" data (262,000 nodes).</figcaption></figure>
As we saw in the middle and right plots of Figure 2, the approximate bounds of \(\theta\) (red line) given by eq. (18) are very effective at predicting sparsity. The same experiment, repeated for more datasets, is reported in the supplementary material D.3. In Figure 4 we see this also at _large scale_, between 262K nodes from our "spherical" dataset. Note that in the rare cases where the actual sparsity falls outside the predicted bounds, we already have a good starting point for finding a suitable \(\theta\). Note also that small fluctuations in the density are tolerated; for example, in \(k\)-NN or A-NN graphs we always obtain results with slightly more than \(nk\) edges due to the fact that \(W\) is symmetric. Finally, as we see in Figure 12, the bounds are very _robust to outliers, as well as duplicates_ in the data. The curious reader can find details in Section D.3.1 of the supplementary material.
<figure><img src="content_image/1710.05654/x8.png"><figcaption>Figure 5: Connectivity across classes of MNIST. The graph is normalized sothat ∥W∥1,1=1. We measure the percentage of the total weight for connectedpairs of each label. The last columns correspond to the total of the wrongedges, between images of different labels. Left: A-NN graph. Middle: ℓ2 model(4) neglects digits with larger distance. Right: log model (5) does notneglect to connect any cluster even for very sparse graphs of 5 edges pernode.</figcaption></figure>
### Edge quality
We first compare scalable models of graphs between \(60,000\) images of MNIST. MNIST has a relatively uniform sampling between numbers of different labels, **except for the digit “1”, which is more densely sampled**. That is, the average intra-class distance between images of the digit “1” is smaller than for other digits (supplementary material D.2). As we show below, this affects the results of graph learning.
In Figure 5, we plot the histograms of connectivity between images of each label. We normalize all graphs to \(\|W\|_{1,1}=1\). The last bar is the connectivity wasted on wrong edges (pairs of different labels). The A-NN graph does not take into account the different sampling rates of the different digits and has many wrong edges. The \(\ell_{2}\) model (4) assigns most of its connectivity to the label “1”, which has the smallest intra-label image distance, and neglects digits with larger distance (“2”, “8”). This effect becomes smaller, but is still visible, when denser graphs (30 edges per node, yellow bars) are sought. The log model does not suffer from this problem even for sparse graphs of degree \(5\) and gives consistent connectivities for all digits. The scaled versions of the models of (Daitch et al., 2009) perform worse than our large scale log model, but often better than A-NN or \(\ell_{2}\), as plotted in Figure 14 of the supplementary material.
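The histograms of Figure 5 only require the learned weight matrix and the image labels. A minimal sketch of this bookkeeping (illustrative, with a dense \(W\) for brevity) is:

```python
import numpy as np

def label_connectivity(W, labels):
    """Fraction of total edge weight inside each class and on wrong edges,
    after normalizing so that ||W||_{1,1} = 1 (as in Figure 5)."""
    W = np.asarray(W, dtype=float)
    W = W / np.abs(W).sum()
    labels = np.asarray(labels)
    per_class = {c: 0.0 for c in np.unique(labels)}
    wrong = 0.0
    for i, j in zip(*np.nonzero(W)):
        if labels[i] == labels[j]:
            per_class[labels[i]] += W[i, j]
        else:
            wrong += W[i, j]
    return per_class, wrong
```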
Figure 6 (left) summarizes the number of wrong edges with respect to \(k\). The \(\ell_{2}\) and log models, which minimize \(\mathrm{tr}(X^{\top}LX)=\|Z\circ W\|_{1,1}\), have a large advantage over the models of Daitch et al. (2009), which minimize \(\|LX\|_{F}^{2}\), a quadratic function of \(W\). The former induces sparsity of \(W\), while the latter favors many small edge weights. This explains the existence of many wrong edges for the "hard" and "soft" models, and also creates problems in controlling sparsity. Note that A-NN is more accurate than the "hard" and "soft" graphs. It seems that \(\ell_{2}\) is more accurate than the log model, but as shown in Figure 5, the majority of its edges connect the digit "1" without sufficiently connecting digits such as "2", "3" or "8".
<figure><img src="content_image/1710.05654/x11.png"><figcaption>Figure 6: Left: Edge accuracy of large scale models for MNIST. Right: Digitclassification error with 1% labels. Dashed lines represent nodes incomponents without known labels (non-classifiable).</figcaption></figure>
### Semi-supervised learning
On the same graphs, we perform label propagation (Zhu et al., 2003) on MNIST, with \(1\%\) known labels. The results are plotted in Figure 6 (right). Note that in label propagation, knowledge is not shared between disconnected components, making it impossible to classify nodes in components without known labels. The dashed lines of Figure 6 represent the number of nodes in such components, which occur much less frequently in the log graph. Again, the log model performs best. To have a fair comparison with A-NN, we weighted its edges with the standard exponential decay \(W_{i,j}=\exp(-Z_{i,j}/\sigma^{2})\) and each time chose the best performing \(\sigma\). Note that it is very difficult for the "soft" and "hard" models to control sparsity (the "hard" model has no parameter at all). Hence they are less adaptable to a particular problem and in this particular case perform poorly.
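For completeness, the harmonic-function formulation of Zhu et al. (2003) reduces to one linear solve per class. The dense sketch below is illustrative only (our graphs are sparse and much larger); it assumes every connected component of unlabeled nodes touches at least one labeled node, otherwise the system is singular, which is exactly the non-classifiable case discussed above.

```python
import numpy as np

def harmonic_label_propagation(W, labeled_idx, labeled_y):
    """Propagate labels labeled_y (given at indices labeled_idx) over the graph
    with symmetric weight matrix W (Zhu et al., 2003); dense illustrative sketch."""
    n = W.shape[0]
    classes = np.unique(labeled_y)
    L = np.diag(W.sum(axis=1)) - W                         # combinatorial Laplacian
    u = np.setdiff1d(np.arange(n), labeled_idx)            # unlabeled nodes
    Y = (np.asarray(labeled_y)[:, None] == classes[None, :]).astype(float)  # one-hot
    # harmonic solution: L_uu f_u = -L_ul Y
    F_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ Y)
    pred = np.empty(n, dtype=classes.dtype)
    pred[labeled_idx] = labeled_y
    pred[u] = classes[F_u.argmax(axis=1)]
    return pred
```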
Given the performance of the A-NN graph, one might wonder why one should pay the additional cost of learning a graph for only a small improvement in classification. Note, however, that the additional cost is not significant. Asymptotically, the cost of learning an A-NN graph is \(\mathcal{O}\left(n\log(n)d\right)\) for a graph of \(n\) nodes and data with dimensionality \(d\), while additionally learning the edge weights costs only \(\mathcal{O}\left(kn\right)\). The asymptotic complexity is thus dominated by the cost of computing an A-NN, not by the cost of learning the weights. For the relatively small size of the problem we solve here, this corresponds to \(20\) seconds for the A-NN graph (using compiled C code) and an additional \(45\) seconds for our graph learning (using a Matlab-only implementation). With a full C implementation of our algorithm, the second number would be significantly smaller. Furthermore, for really large scale problems the asymptotic complexity shows that the bottleneck is not the graph learning but the A-NN.
### Manifold recovery quality
Graphs can be used to recover a low dimensional manifold from data. We evaluate the performance of our large scale model for this application on three datasets: "_spherical data_" (\(n=262,144\)) and "_spherical data small_" (\(n=4,096\)) from a known 2-D manifold, and "_word2vec_" (\(n=10,000\)).
#### 5.5.1 Large spherical data
Using 1920 signals from a known spherical manifold (Perraudin et al., 2018) we learn graphs of \(262,144\) nodes and recover a 2D manifold using the first \(2\) non-trivial eigenvectors of their Laplacians. The data is sampled on a \(512\times 512\) grid on the sphere, so we use graphs of degree close to \(k=4\). We try to recover it using scalable models: \(\ell_{2}\) (\(k=4.70\)), log (\(k=4.73\)) and A-NN (\(k=4.31\)).
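The recovery step itself is standard Laplacian eigenmapping. A small sketch follows (assuming a connected graph stored as a scipy sparse matrix; the eigensolver choice is an illustrative assumption, not a statement about the implementation used for Figure 7):

```python
import numpy as np
from scipy.sparse import csr_matrix, csgraph
from scipy.sparse.linalg import eigsh

def recover_2d_manifold(W):
    """Embed the nodes of the graph W with the first 2 non-trivial
    eigenvectors of its combinatorial Laplacian."""
    L = csgraph.laplacian(csr_matrix(W), normed=False).astype(float)
    # three smallest eigenpairs; the constant eigenvector (eigenvalue ~0) is dropped.
    # For very large graphs a shift-invert or LOBPCG solver would be preferable.
    vals, vecs = eigsh(L, k=3, which='SM')
    order = np.argsort(vals)
    return vecs[:, order[1:3]]            # one 2-D coordinate per node
```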
While the A-NN graph performs very poorly, the representations recovered by both the \(\ell_{2}\) and log graphs are almost perfectly square (Figure 7). However, as shown in Figure 8, the log graph gives the best representation. We plot subgraphs of the two models containing only the nodes that correspond to 2D grids in the middle or the corner _of the original manifold_. While the \(\ell_{2}\) graph does not connect all central nodes (green), the log graph is closer to a 2D grid. Furthermore, while the \(\ell_{2}\) model had \(46\) disconnected nodes that we discarded before computing eigenvectors, the log graph was connected.
<figure><img src="content_image/1710.05654/x13.png"><figcaption>Figure 7: Spherical data, ground truth and recovered manifolds. Up left: Theground truth manifold is on the sphere. We have colored the nodes thatcorrespond to the middle of the 2-D grid and the lower corner so that we trackwhere they are mapped in the recovered manifolds. In Figure 8 we keep only thesubgraphs of the green or blue nodes. Up, right: Recovered by A-NN, k=4.31.Down, left: Recovered by the ℓ2 model, k=4.70. The middle region is mixedwith nodes outside the very center. The corners are much more dense, the blueregion is barely visible on the bottom. Note that 46 nodes were disconnectedso they are not mapped at all. Down, right: Recovered by the log model,k=4.73. The middle region is much better mapped. The corners are still verydense, we have to zoom-in for the blue region (Figure 8).</figcaption></figure>
#### 5.5.2 Small spherical data
We then learn graphs from the “small spherical data” so as to be able to compute graph diameters. This operation has a complexity above \(O(n^{2})\) and is not feasible for large graphs. Manifold-like graphs are known to have a larger diameter than small-world graphs (Watts & Strogatz, 1998) of the same average degree.
In Figure 9 (left and middle) we plot the diameter of the scalable graph models. The data has \(4096\) nodes organized as a \(64\times 64\) grid, so a ground-truth \(4\)-NN graph has exactly diameter \(127=2\cdot 64-1\), and a ground-truth \(8\)-NN graph diameter \(63\). The learned graphs, given enough information (1920 signals), reach the ideal diameter around degrees 4 or 8, with a phase transition from diameter 127 to diameter 63 between degrees \(6\) and \(7\). When the information is not enough to recover the exact manifold (using only 40 signals, middle plot of Figure 9), the learned graphs have a higher diameter than the A-NN graph.
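Measuring the hop diameter only needs an all-pairs breadth-first search, which is what limits this experiment to the small dataset. A sketch (illustrative; it counts edges along the longest shortest path, so conventions that count vertices instead will differ by one):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def hop_diameter(W):
    """Hop diameter of the graph with (symmetric) weight matrix W.
    All-pairs BFS, hence roughly O(n^2) memory and time: small graphs only."""
    D = shortest_path(csr_matrix(W), unweighted=True, directed=False)
    return int(D[np.isfinite(D)].max())    # infinite entries = disconnected pairs
```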
<figure><img src="content_image/1710.05654/x17.png"><figcaption>Figure 8: Detail from the manifolds recovered by ℓ2 and log models from"spherical data" (262,144 nodes, 1920 signals). Corner (blue) and middle(green) parts of the manifold. Left: ℓ2 model, k=4.70. Right: log model,k=4.73. See Figure 7 for the big picture.</figcaption></figure>
<figure><img src="content_image/1710.05654/x21.png"><figcaption>Figure 9: Graph diameter measures manifold recovery quality. Left: smallspherical data: 4096 nodes, 1920 signals. Middle: Same data, 40 signals.Right: word2vec: 10,000 nodes, 300 features.</figcaption></figure>
#### 5.5.3 Graph between words using word2vec representations
Finally we learn a graph between \(10,000\) words using \(300\) word2vec features, like the ones used for graph convolutional neural networks (Defferrard et al., 2016). In this application, connectivity of all nodes is important, and \(\ell_{2}\) graphs failed, always giving many disconnected nodes.
In Figure 9 we compare the diameter of our large scale log graphs against A-NN and the \(k\)-NN graphs used by Defferrard et al. (2016). The large scale log graph has a significantly larger diameter than both the \(k\)-NN and A-NN graphs. This indicates that our learned graph is closer to a manifold-like structure, unlike the other two types, which are closer to a small-world graph. Manifold-like graphs better reveal the structure of the data, while small-world graphs are associated with randomness (Watts & Strogatz, 1998).
In Figure 10, we plot the nodes within two hops of the word "use" in the three types of graphs with degree around \(k=5\). We see that the NN graphs span a larger part of the entire graph with just \(2\) hops, the A-NN graph being closer to a small-world graph. While the \(k\)-NN graph does better in terms of quality, it is still significantly worse than our large scale log graph, which is actually cheaper to compute. Additionally, graph learning seems to assign more meaningful weights, as we show in Table 1 of Section D.4 of the supplementary material.
<figure><img src="content_image/1710.05654/x24.png"><figcaption>Figure 10: A 2-hop sub-graph of the word ”use”. Left: A-NN (k=5.4). Center:k-NN graph (k=5.0). Right: Large scale log (k=5.7) being manifold-like onlyreaches relevant terms.</figcaption></figure>
### Computation time
In Figure 1 we compare our scaled model to other graph learning models in terms of time as a function of \(n\). For all iterative algorithms, except A-NN, we perform a maximum of \(500\) iterations. The scaled \(\ell_{2}\) model needs the same time per iteration as the scaled log model, but typically needs more iterations (more than \(5,000\) compared to \(500\)) until convergence. As we fix the number of iterations, the two timing curves overlap and we only plot the log model.
In Figure 1 (left) we learn graphs between words, using \(300\) features (_word2vec_). The cost is almost linear for our method, but quadratic for the original log model of Kalofolias (2016). The "hard" model corresponds to a close implementation of the original work of Daitch et al. (2009); the most expensive part of the computation comes from the quadratic program solver. For the "soft" model, we used the forward-backward scheme of Beck & Teboulle (2009); the cost is governed by evaluating the KKT conditions to find the optimal support (\(\mathcal{O}(n^{2})\)), which is faster than their original algorithm.
We also compare to our scaled versions of the "hard" and "soft" models, where we fix the support in the same way as explained in Section 3 (cf. supplementary material C). The bottleneck of the computation comes from the term \(\|LX\|^{2}_{F}\), which requires \(\mathcal{O}\left(knd\right)\) operations as opposed to the \(\mathcal{O}\left(kn\right)\) of our \(\operatorname{tr}\left({X^{\top}LX}\right)\). To reduce this cost, we randomly projected the original \(d=300\) input features to a subspace of size \(d=20\). While the cost per iteration of the "soft" method is higher than that of the "hard" method, the latter typically needs more iterations to reach convergence.
In Figure 1 (right), we show the scalability of our model to graphs of up to 1**M samples**² of the _US census_ dataset. Setting \(k=5\), our model needed \(\mathbf{16}\) **minutes** to perform \(500\) iterations of graph learning on a desktop computer running Matlab. While this is a proof of scalability, for really large data one would have to consider implementation details such as memory management and the use of a faster programming language (like C++). Note that the A-NN implementation we used was compiled C code and thus much faster. In Figure 15 we illustrate the linear scalability of our model w.r.t. the size of the set of allowed edges.
[FOOTNOTE:2][ENDFOOTNOTE]
## 6 Conclusions
We propose the first scalable solution to learn a weighted undirected graph from data, based on A-NN and the current state-of-the-art graph learning model. While it costs roughly as much as A-NN, it achieves quality very close to the state of the art. Its ability to scale is based on reducing the number of variables used for learning, while our automatic parameter selection eliminates the need for a grid search in order to achieve a desired graph sparsity. We assess its quality and scalability with an extensive set of experiments on many real datasets. The new large scale graphs perform best in various machine learning tasks: they give better manifolds, are better for semi-supervised learning, and select the right amount of edges without allowing disconnected nodes. Learning a graph of 1 million nodes takes only 16 minutes using our simple Matlab implementation on a desktop computer.
### Thanks
The authors would like to especially thank Pierre Vandergheynst for his helpful comments during the preparation of this work at LTS2. We also thank the ETHZ Cosmology Research Group for allowing us to use their "Spherical convergence maps dataset" in the manifold recovery experiment.
## References
* Beck & Teboulle (2009) Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. _SIAM Journal on Imaging Sciences_, 2(1):183–202, 2009.
* Belkin & Niyogi (2001) Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In _NIPS_, volume 14, pp. 585–591, 2001.
* Belkin et al. (2006) Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. _The Journal of Machine Learning Research_, 7:2399–2434, 2006.
* Benzi et al. (2016) Kirell Benzi, Vassilis Kalofolias, Xavier Bresson, and Pierre Vandergheynst. Song recommendation with non-negative matrix factorization and graph total variation. In _2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pp. 2439–2443. IEEE, 2016.
* Cai et al. (2011) Deng Cai, Xiaofei He, Jiawei Han, and Thomas S Huang. Graph regularized nonnegative matrix factorization for data representation. _Pattern Analysis and Machine Intelligence, IEEE Transactions on_, 33(8):1548–1560, 2011.
* Daitch et al. (2009) Samuel I Daitch, Jonathan A Kelner, and Daniel A Spielman. Fitting a graph to vector data. In _Proceedings of the 26th Annual International Conference on Machine Learning_, pp. 201–208. ACM, 2009.
* Defferrard et al. (2016) Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In _Advances in Neural Information Processing Systems_, pp. 3837–3845, 2016.
* Dong et al. (2011) Wei Dong, Charikar Moses, and Kai Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In _Proceedings of the 20th international conference on World wide web_, pp. 577–586. ACM, 2011.
* Dong et al. (2015) Xiaowen Dong, Dorina Thanou, Pascal Frossard, and Pierre Vandergheynst. Learning laplacian matrix in smooth graph signal representations. _arXiv preprint arXiv:1406.7842v2_, 2015.
* Duchi et al. (2008) John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l 1-ball for learning in high dimensions. In _Proceedings of the 25th international conference on Machine learning_, pp. 272–279. ACM, 2008.
* Elmoataz et al. (2008) Abderrahim Elmoataz, Olivier Lezoray, and Sébastien Bougleux. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing. _IEEE transactions on Image Processing_, 17(7):1047–1060, 2008.
* Henaff et al. (2015) Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. _arXiv preprint arXiv:1506.05163_, 2015.
* Hu et al. (2013) Chenhui Hu, Lin Cheng, Jorge Sepulcre, Georges El Fakhri, Yue M Lu, and Quanzheng Li. A graph theoretical regression model for brain connectivity learning of alzheimer’s disease. In _2013 IEEE 10th International Symposium on Biomedical Imaging_, pp. 616–619. IEEE, 2013.
* Jebara et al. (2009) Tony Jebara, Jun Wang, and Shih-Fu Chang. Graph construction and b-matching for semi-supervised learning. In _Proceedings of the 26th Annual International Conference on Machine Learning_, pp. 441–448. ACM, 2009.
* Jiang et al. (2013) Bo Jiang, Chibiao Ding, Bio Luo, and Jin Tang. Graph-laplacian pca: Closed-form solution and robustness. In _Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on_, pp. 3492–3498. IEEE, 2013.
* Kalofolias (2016) Vassilis Kalofolias. How to learn a graph from smooth signals. In _The 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016)_. Journal of Machine Learning Research (JMLR), 2016.
* Kalofolias et al. (2014) Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, and Pierre Vandergheynst. Matrix completion on graphs. _arXiv preprint arXiv:1408.1717_, 2014.
* Kipf & Welling (2016) Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_, 2016.
* Komodakis & Pesquet (2015) Nikos Komodakis and Jean-Christophe Pesquet. Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems. _IEEE Signal Processing Magazine_, 32(6):31–54, 2015.
* Lake & Tenenbaum (2010) Brenden Lake and Joshua Tenenbaum. Discovering structure by learning sparse graph. In _Proceedings of the 33rd Annual Cognitive Science Conference_. Citeseer, 2010.
* Li et al. (2015) Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. _arXiv preprint arXiv:1511.05493_, 2015.
* Malkov & Yashunin (2016) Yu A Malkov and DA Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. _arXiv preprint arXiv:1603.09320_, 2016.
* Monti et al. (2016) Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. _arXiv preprint arXiv:1611.08402_, 2016.
* Muja & Lowe (2009) Marius Muja and David G Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. _VISAPP (1)_, 2(331-340):2, 2009.
* Muja & Lowe (2012) Marius Muja and David G Lowe. Fast matching of binary features. In _Computer and Robot Vision (CRV), 2012 Ninth Conference on_, pp. 404–410. IEEE, 2012.
* Muja & Lowe (2014) Marius Muja and David G Lowe. Scalable nearest neighbor algorithms for high dimensional data. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 36(11):2227–2240, 2014.
* Perraudin et al. (2014) Nathanaël Perraudin, Johan Paratte, David Shuman, Vassilis Kalofolias, Pierre Vandergheynst, and David K. Hammond. GSPBOX: A toolbox for signal processing on graphs. _ArXiv e-prints_, August 2014.
* Perraudin et al. (2018) Nathanaël Perraudin, Michaël Defferrard, Tomasz Kacprzak, and Raphael Sgier. Deepsphere: Efficient spherical convolutional neural network with healpix sampling for cosmological applications. _arXiv preprint arXiv:1810.12186_, 2018.
* Sgier et al. (2018) Raphael Sgier, Tomasz Kacprzak, Nathanaël Perraudin, and Michaël Defferrard. Spherical convergence maps dataset, July 2018. URL https://doi.org/10.5281/zenodo.1303272.
* Shahid et al. (2015) Nauman Shahid, Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, and Pierre Vandergheynst. Robust principal component analysis on graphs. In _Proceedings of the IEEE International Conference on Computer Vision_, pp. 2812–2820, 2015.
* Shahid et al. (2016) Nauman Shahid, Nathanael Perraudin, Vassilis Kalofolias, Gilles Puy, and Pierre Vandergheynst. Fast robust pca on graphs. _IEEE Journal of Selected Topics in Signal Processing_, 10(4):740–756, 2016.
* Tütüncü et al. (2003) Reha H Tütüncü, Kim-Chuan Toh, and Michael J Todd. Solving semidefinite-quadratic-linear programs using sdpt3. _Mathematical programming_, 95(2):189–217, 2003.
* Von Luxburg (2007) Ulrike Von Luxburg. A tutorial on spectral clustering. _Statistics and computing_, 17(4):395–416, 2007.
* Wang & Zhang (2008) Fei Wang and Changshui Zhang. Label propagation through linear neighborhoods. _Knowledge and Data Engineering, IEEE Transactions on_, 20(1):55–67, 2008.
* Watts & Strogatz (1998) Duncan J Watts and Steven H Strogatz. Collective dynamics of ‘small-world’ networks. _Nature_, 393(6684):440–442, 1998.
* Zhang et al. (2006) Tong Zhang, Alexandrin Popescul, and Byron Dom. Linear prediction models with graph regularization for web-page categorization. In _Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining_, pp. 821–826. ACM, 2006.
* Zheng et al. (2011) Miao Zheng, Jiajun Bu, Chun Chen, Can Wang, Lijun Zhang, Guang Qiu, and Deng Cai. Graph regularized sparse coding for image representation. _Image Processing, IEEE Transactions on_, 20(5):1327–1336, 2011.
* Zhu et al. (2003) Xiaojin Zhu, Zoubin Ghahramani, John Lafferty, et al. Semi-supervised learning using gaussian fields and harmonic functions. In _ICML_, volume 3, pp. 912–919, 2003.
# Large Scale Graph Learning from Smooth Signals: Supplementary Material
## Appendix A Proof of Theorem 1
**Proof 6**.: _For any edge \(i,j\) the optimality conditions are:_
\[2\theta Z_{i,j}+2W^{*}_{i,j}-\frac{1}{\sum_{k\neq i}W^{*}_{i,k}}-\frac{1}{\sum _{k\neq j}W^{*}_{k,j}}=0.\]
_Let us select any \(i,j\) such that \(W_{i,j}>0\). Define \(\eta_{i,j}\) so that_
\[2\eta_{i,j}=\frac{2}{W^{*}_{i,j}}-\frac{1}{\sum_{k\neq i}W^{*}_{i,k}}-\frac{1} {\sum_{k\neq j}W^{*}_{k,j}},\] (19)
_and note that \(\eta_{i,j}\geq 0\), because all elements of \(W^{*}\) are non-negative and \(\frac{1}{W^{*}_{i,j}}\geq\frac{1}{W^{*}_{i,j}+\sum_{k\neq i,j}W^{*}_{i,k}}\). Then we can rewrite the optimality conditions so that they can be solved analytically as a function of the unknown \(\eta_{i,j}\):_
\[2\theta Z_{i,j}+2W^{*}_{i,j}-\frac{2}{W^{*}_{i,j}}+2\eta_{i,j}=0\Rightarrow\]
\[W^{*}_{i,j}=\frac{\sqrt{\left(\theta Z_{i,j}+\eta_{i,j}\right)^{2}+4}-\left( \theta Z_{i,j}+\eta_{i,j}\right)}{2}\leq 1,\] (20)
_since \(\sqrt{4+x^{2}}-x\leq 2,\forall x\geq 0\). This inequality holds for any edge \(i,j\) of the learned graph such that \(W_{i,j}^{*}>0\), therefore we can write_
\[\max_{i,j}W_{i,j}^{*}(\theta Z,1,1)\leq 1.\] (21)
_Hence using proposition 1, we conclude the proof._
Note that for sparsely connected regions, \(\eta_{i,j}\) becomes smaller, and therefore the edges of the graph come closer to the upper limit. We can deduce from equation (20) that the limit is actually reached only in the case of duplicate nodes, i.e., when \(Z_{i,j}=0\), and only if the rest of the nodes are sufficiently far away so that the edge \(i,j\) is the only edge of both nodes (so that \(\eta_{i,j}=0\)).
## Appendix B Proof of Theorem 3
**Proof 7**.: _From the proof of Theorem 2, we know that \(\|w^{*}\|_{0}=k\) if and only if \(\lambda^{*}\in\left[\theta z_{k},\theta z_{k+1}\right)\). We can rewrite this condition as_
\[\theta z_{k} \leq\frac{\theta b_{k}+\sqrt{\theta^{2}b_{k}^{2}+4k}}{2k}<\theta z _{k+1}\Leftrightarrow\]
\[2k\theta z_{k}-\theta b_{k} \leq\sqrt{\theta^{2}b_{k}^{2}+4k}<2k\theta z_{k+1}-\theta b_{k}\Leftrightarrow\]
\[4k^{2}\theta^{2}z_{k}^{2}-4k\theta^{2}b_{k}z_{k} \leq 4k<4k^{2}\theta^{2}z_{k+1}^{2}-4k\theta^{2}b_{k}z_{k+1}\Leftrightarrow\]
\[\theta^{2}(kz_{k}^{2}-b_{k}z_{k}) \leq 1<\theta^{2}(kz_{k+1}^{2}-b_{k}z_{k+1}).\]
_As \(\theta\) is constrained to be positive, the only values that satisfy the above inequalities are the ones proposed in the theorem._
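Since \(\theta\) is positive, the last pair of inequalities can be solved explicitly. Assuming the first \(k+1\) sorted distances are not all equal (so that both denominators below are positive), the admissible interval for a single column reads

\[\frac{1}{\sqrt{kz_{k+1}^{2}-b_{k}z_{k+1}}}<\theta\leq\frac{1}{\sqrt{kz_{k}^{2}-b_{k}z_{k}}},\]

and these are the per-column limits whose means enter the bounds of eq. (18).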
## Appendix C Scaling algorithm of (Daitch et al., 2009)
Daitch et al. (2009) propose two convex optimization problems in order to learn a graph from smooth signals. The prior used is \(\|LX\|_{F}^{2}\), which is a quadratic function of the weights \(W\). For the first problem, corresponding to the "hard" model, each node is constrained to have a degree of at least \(1\):
\[\operatorname*{minimize}_{w}\|Mw\|_{2}^{2}\hskip 28.452756pt\text{s.t. }w\geq \mathbf{0},Sw\geq\mathbf{1},\] (22)
where \(M\) is a linear operator such that \(\|LX\|_{F}^{2}=\|Mw\|_{2}^{2}\). We refer to the original paper for the construction of \(M\). The second problem, corresponding to the "soft" model, is a relaxed version of the "hard" one, where every degree below \(1\) is penalized quadratically. It reads:
\[\operatorname*{minimize}_{w}\|Mw\|_{2}^{2}+\mu\|\max(\mathbf{1}-Sw,0)\|_{2}^{2 }\hskip 28.452756pt\text{s.t. }w\geq\mathbf{0},\] (23)
where \(\mu\) is a parameter weighting the importance of the constraint. In comparison to the log and \(\ell_{2}\) models, where changing the regularization parameters has an important effect on the final average node degree, we note that \(\mu\) has little effect on it. In their original paper, Daitch et al. solve their optimization problems using SDPT3 (Tütüncü et al., 2003), which has a complexity significantly higher than \(\mathcal{O}\left(v^{2}\right)\) for \(v\) variables. So even when the number of edges scales with the number of nodes, i.e. \(v=\mathcal{O}\left(n\right)\), the complexity is not better than \(\mathcal{O}\left(n^{2}\right)\). To mitigate this issue, one of the important contributions of (Daitch et al., 2009) is to solve the problem on a subset \(v\) of edges. Nevertheless, instead of keeping the support fixed (as we do), they check the KKT conditions of the dual variables at the end of the optimization scheme to assess whether some edges should be added to the support. If so, the optimization is run again with the new edges. The process is repeated until all KKT conditions are satisfied. This technique allows for finding the global solution of larger problems (particularly if SDPT3 is used). We note, however, that the search for the next relevant support again costs \(O(n^{2})\), since they have to compute the KKT conditions for all possible edges.
We started by implementing the original "hard" and "soft" algorithms and obtained speeds comparable to the results reported in the original paper. Unfortunately, we were not able to run these implementations for more than a few thousand nodes. Hence, in order to compare with our algorithm, we had to cope with their scalability issues first. Essentially, we derived new algorithms through two modifications. First, instead of SDPT3 we used FISTA (Beck & Teboulle, 2009) and a forward-backward-based primal-dual scheme (Komodakis & Pesquet, 2015, Algorithm 2) for the soft and hard graph optimization, respectively. Second, we removed the support optimization using the KKT conditions and used the same A-NN support as our algorithm. After these modifications, our implementations scaled to approximately \(100,000\) nodes. While the theoretical complexity is about the same as for our optimization problem, the running times are significantly longer than for our method, because the term \(\|LX\|_{F}^{2}=\|Mw\|_{2}^{2}\) requires \(d\) times more computation than \(\operatorname{tr}\left({X^{\top}LX}\right)=\|Z\circ W\|_{1,1}\), where \(d\) is the number of signals. In order to be able to compare to these algorithms in Figure 1, we randomly projected the original \(d=300\) input features to a subspace of size \(d=20\) (this does not significantly change the computation time of our model). While the cost per iteration of the "soft" method is higher than that of the "hard" method, the latter typically needs more iterations to reach convergence. The reason is that the "soft" solver uses FISTA, whose global rate of convergence is proven to be significantly better than that of the forward-backward-based primal-dual scheme we used for the "hard" model (Beck & Teboulle, 2009; Komodakis & Pesquet, 2015).
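For concreteness, a minimal dense-matrix sketch of an accelerated projected-gradient (FISTA-style) solver for the "soft" problem (23) is given below. The dense operators, the Lipschitz estimate and the stopping rule are simple illustrative choices; no claim is made that this reproduces our exact implementation.

```python
import numpy as np

def soft_model_fista(M, S, mu=1.0, n_iter=500, tol=1e-6):
    """Accelerated projected gradient for problem (23):
         minimize ||M w||_2^2 + mu * ||max(1 - S w, 0)||_2^2   s.t.  w >= 0.
    M, S are dense arrays here; in practice they are sparse and restricted to
    the A-NN support."""
    L = 2 * np.linalg.norm(M, 2) ** 2 + 2 * mu * np.linalg.norm(S, 2) ** 2
    step = 1.0 / L                                    # gradient step from the Lipschitz constant
    w = np.zeros(M.shape[1])
    y, t = w.copy(), 1.0
    for _ in range(n_iter):
        hinge = np.maximum(1.0 - S @ y, 0.0)          # violated degree constraints
        grad = 2 * M.T @ (M @ y) - 2 * mu * S.T @ hinge
        w_new = np.maximum(y - step * grad, 0.0)      # projection onto w >= 0
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = w_new + (t - 1) / t_new * (w_new - w)     # FISTA momentum
        if np.linalg.norm(w_new - w) <= tol * max(np.linalg.norm(w), 1e-12):
            w = w_new
            break
        w, t = w_new, t_new
    return w
```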
## Appendix D Experiments
### Datasets
* **MNIST:** We use the 60000 images of the MNIST training dataset.
* **US Census 1990:** Dataset available at the UCI machine learning repository, consists of approximately 2.5 million samples of 68 features. https://archive.ics.uci.edu/ml/datasets/US+Census+Data+(1990)
* **word2vec:** The \(10,000\) most used words in English (https://research.googleblog.com/2006/08/all-our-n-gram-are-belong-to-you.html), represented using the Google word2vec features (https://code.google.com/archive/p/word2vec/).
* **Spherical data:** Simulated cosmological mass maps (i.e., the amount of mass in the universe observed from Earth in every direction) from Perraudin et al. (2018); Sgier et al. (2018). The dataset consists of two sets of maps that were created using the standard cosmological model with two sets of cosmological parameters. This data resides on the sphere, which can be considered as its underlying manifold. We used the slightly smoothed maps (Gaussian kernel of radius 3 arcmin), as proposed in the original paper. While the original data consists of \(40\) maps of size \(12\cdot 1024^{2}\), we work only on a sub-part of the sphere. This allows us to build \(1920\) signals on a sub-part of the sphere (a \(512\times 512\) grid), i.e., a graph between \(262,144\) nodes. The actual manifold with the correct coordinates is plotted in Figure 7 (up left). https://zenodo.org/record/1303272
* **Small spherical data:** A subset of the "Spherical data" containing \(4096\) nodes (a \(64\times 64\) grid) and the same \(1920\) signals.
* **USPS:** Handwritten digits from \(0\) to \(9\). https://ieeexplore.ieee.org/document/291440
### MNIST irregular intra-class distances
Figure 11 illustrates one irregularity of the MNIST dataset. One could expect the average distance between two digits of the same class (intra-class distance) to be more or less independent of the class. Nevertheless, for the MNIST dataset, the intra-class distance between images of the digit "1" is significantly smaller than that of the other digits. For this reason, the \(\ell_{2}\) model connects the digits "1" significantly more than the others.
<figure><img src="content_image/1710.05654/x27.png"><figcaption>Figure 11: Label frequency (left) and average squared distribution (right) ofMNIST train data (60000 nodes). The distances between digits “1” aresignificantly smaller than distances between other digits.</figcaption></figure>
### Approximation accuracy and robustness of parameter \(\theta\)
We already saw in Figure 2 (middle) that for the MNIST dataset eq. (18) predicts very well the sparsity of the final graph for any choice of \(\theta\). This is further illustrated on the USPS and ATT faces datasets in Figure 13. Note that in the rare cases where the actual sparsity is outside the predicted bounds, we already have a good starting point for finding a good \(\theta\). For example, in the COIL dataset, if we want a graph with \(15\) edges per node we will set \(\theta=1.2\), obtaining instead a graph with \(12\) edges per node. This kind of fluctuation is usually tolerated; even in \(k\)-NN we always obtain graphs with more than \(nk\) edges due to the fact that \(W\) is symmetric.
#### d.3.1 Robustness to outliers and duplicates
We test the robustness of the bounds on \(\theta\) by repeating the experiment of Figure 2 (middle) after replacing \(10\%\) of the images with outliers or duplicates. For the outlier experiment, we contaminated \(10\%\) of the images with Gaussian noise from \(\cal{N}(0,1)\). Note that this is a very large amount of noise, since the initial images have intensities in the range \([0,1]\). The result, plotted in Figure 12 (left), is almost identical to the result without outliers (Figure 2, middle).
While this might seem surprising, note that the bounds on \(\theta\) given by eq. (18) really depend on the smallest distances in \(Z\), rather than the largest ones: \(\hat{Z}\) is sorted, and so is \(B\). Adding outliers introduces additional node distances that are larger than usual, and these never enter the first \(k\) rows of the matrices \(\hat{Z}\) and \(B\) for the columns that correspond to non-outlier nodes. _Therefore, eq. (18) is very robust to outliers_.
To complete the experiment, instead of adding noise, we replaced \(10\%\) of the images with copies of other images already in the dataset. In this case, \(20\%\) of the images occur in _pairs of duplicates_, with essentially zero distance to each other in the matrix \(Z\). As we see in Figure 12 (right), while the intervals of \(\theta\) change, they do so _following closely the actual measured sparsity_.
<figure><img src="content_image/1710.05654/x29.png"><figcaption>Figure 12: Robustness of the theoretical bounds of θ in the existence ofoutliers or duplicate nodes. Same dataset as the one used for Figure 2. Evenfor extreme cases in terms of distance distribution, the bounds give a goodapproximation. Left: Results when we add Gaussian noise from N(0,1) to 10% ofthe images before calculating Z. Note that the noise added is significantgiven that the initial pixel values are in [0,1]. Right: We replaced 10% ofthe images with duplicates of other images already in the dataset.</figcaption></figure>
<figure><img src="content_image/1710.05654/x31.png"><figcaption>Figure 13: Predicted and measured sparsity for different choices of θ. Notethat θ is plotted in logarithmic scale and decreasing. Up left: 400 ATT faceimages. Up right: 1440 object images from the COIL dataset. Down left: Graphbetween 1000 samples from a multivariate uniform distribution. Down right:Graph between 1000 samples from a multivariate Gaussian distribution.</figcaption></figure>
### Connectivity example of the graph of words
In Table 1, we look in more detail at the graph constructed from the word2vec features. We present the connectivity for the words "glucose" and "academy". Looking at different words, we observe that the learned graph is able to associate meaningful edge weights to the different words according to the confidence of their similarity.
Word | k-NN | A-NN | Learned
---|---|---|---
glucose | 0.1226 insulin | 0.0800 insulin | 0.5742 insulin
 | 0.0233 protein | 0.0337 protein | 0.0395 calcium
 | 0.0210 oxygen | 0.0306 oxygen | 0.0151 metabolism
 | 0.0148 hormone | 0.0295 cholesterol | 0.0131 cholesterol
 | | 0.0263 calcium |
 | | 0.0225 hormone |
academy | 0.0996 training | 0.0901 young | 0.3549 training
 | 0.0953 school | 0.0863 department | 0.2323 institute
 | 0.0918 institute | 0.0841 bizrate | 0.1329 school
 | | | 0.0135 camp
 | | | 0.0008 vocational
Table 1: Weight comparison between k-NN, A-NN and learned graphs (each column lists the strongest neighbors of the word, with their weights). The weights assigned by graph learning correspond much better to the relevance of the terms.
### MNIST connectivity for (Daitch et al., 2009) methods
In Figure 14, we plot the connectivity across the different digits of MNIST for the "hard" and "soft" models. As the degree is constant over the nodes, the "hard" model performs similarly to A-NN (see Figure 5). It seems that, due to the hard constraint, the "hard" model forces many edges with small weights. On the other hand, in terms of connectivity, the "soft" model seems to lie between the log and the \(\ell_{2}\) models.
<figure><img src="content_image/1710.05654/x35.png"><figcaption>Figure 14: Connectivity across different classes of MNIST (60000 nodes). Thegraph is normalized so that ∥W∥1,1=1. We measure the percentage of the totalweight for connected pairs of each label. The last columns correspond to thetotal of the wrong edges, between images of different labels. Left: (Daitch etal., 2009) hard model. As the degree is constant over the nodes, the hardmodel is close the A-NN. Right: (Daitch et al., 2009) soft model. In terms ofconnextivity, the soft model seems to be between the log and the ℓ2 model.Note that while it favors connections between "1"s, this effect becomes worsewith higher density. Note also that these algorithms fail to give reasonablegraphs for densities outside a small range, making it very difficult tocontrol sparsity.</figcaption></figure>
### MNIST Computational time with respect to k
The cost of learning a graph with a subset of allowed edges \(\mathcal{E}^{\text{allowed}}\) is linear in the size of this set, as illustrated in Figure 15. For this experiment, we use the MNIST dataset. To learn a graph with approximately \(10\) edges per node, we needed \(20\) seconds to compute \(\mathcal{E}^{\text{allowed}}\) and \(20\) seconds to learn the final graph of \(60000\) nodes (around 250 iterations). Note that the time necessary to search for the nearest neighbors is of the same order of magnitude as that of the learning process.
<figure><img src="content_image/1710.05654/x37.png"><figcaption>Figure 15: Time needed for learning a graph of 60000 nodes (MNIST images)using the large-scale version of (3). Our algorithm converged after 250 to 450iterations with a tolerance of 1e−4. The time needed is linear to the numberof variables, that is linear to the average degree of the graph.</figcaption></figure>
|
1301.3432 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 38440,
"num_imgs": 6,
"llama3_tokens_count": 11163
} | [
"content_image/1301.3432/x1.png",
"content_image/1301.3432/x2.png",
"content_image/1301.3432/x3.png",
"content_image/1301.3432/x4.png",
"content_image/1301.3432/x8.png",
"content_image/1301.3432/x9.png"
] | # Pressure exerted by a grafted polymer on the limiting line of a semi-infinite square lattice
Iwan Jensen
ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, Department of Mathematics and Statistics, The University of Melbourne, VIC 3010, Australia
Wellington G. Dantas
Departamento de Ciências Exatas, EEIMVR, Universidade Federal Fluminense, 27.255-125 - Volta Redonda - RJ, Brasil
Carlos M. Marques
Institut Charles Sadron, Université de Strasbourg, CNRS-UPR 22, 23 rue du Loess, 67034 Strasbourg, France
Jürgen F. Stilck
Instituto de Física and National Institute of Science and Technology for Complex Systems, Universidade Federal Fluminense, Av. Litorânea s/n, Boa Viagem, 24.210-340 - Niterói - RJ, Brasil
February 27, 2024
###### Abstract
Using exact enumerations of self-avoiding walks (SAWs) we compute the inhomogeneous pressure exerted by a two-dimensional end-grafted polymer on the grafting line which limits a semi-infinite square lattice. The results for SAWs show that the asymptotic decay of the pressure as a function of the distance to the grafting point follows a power-law with an exponent similar to that of gaussian chains and is, in this sense, independent of excluded volume effects.
pacs: 05.50.+q,36.20.Ey
## I Introduction
Imaging and manipulating matter at sub-micron length scales has been the cornerstone of the development of nano-sciences [1]. In Soft Matter systems, including those of biological relevance, the cohesive energies are only barely larger than the thermal energy \(k_{B}T\), so forces as small as a pico-Newton exerted over a nanometer length scale can be significant enough to induce structural changes. Examples can be found in the stretching of DNA molecules by optical traps [2], in the behavior of colloidal solutions under external fields [3], and in the deformations of self-assembled bilayers [4], to name just a few. Thus, in Soft Matter, when one exerts a localized force over a small area, precise control of the acting force requires not only a prescribed value of the total applied force but, more importantly, a precise pressure distribution in the contact area.
The microscopic nature of pressure has been understood since the seminal work of Bernoulli two and a half centuries ago: in a container, momentum is transferred by collisions from the moving particles to the walls [5]. When the particle concentration is homogeneous so is the pressure. Strategies for localizing the pressure over a nanometer area thus requires the generation of strong concentration inhomogeneities, at equivalently small scales. Bickel _et al._[6; 7] and Breidnich _et al._[8] have recently realized that such inhomogeneities are intrinsic to entropic systems of connected particles such as polymer chains, and have computed the inhomogeneous pressure associated with end-grafted polymer chains within available analytical theories for ideal chains. Their results show that the polymer produces a local field of pressure on the grafting surface, with the interaction being strong at the anchoring point and vanishing far enough from it. Scaling arguments were also put forward in [7] to discuss the more relevant case of real polymer chains, where excluded volume interactions between the different monomers need to be taken into account. These arguments suggest that the functional variation of pressure with distance from the grafting point should be the same in chains with or without excluded volume interactions, albeit with different prefactors.
In this paper we compute the inhomogeneous pressure applied to a wall by an end-grafted polymer with excluded volume interactions, modeled by self-avoiding walks (SAWs) on the square lattice. In Fig. 1 we illustrate our model with a wall located at \(x=0\). The wall is neutral, in the sense that the statistical weight of a monomer placed on the wall is equal to the weight of a monomer in the bulk. The length of a step of the walk is equal to the lattice constant \(a\), and we use this as the length unit. The model is athermal, that is, all allowed configurations of a SAW have the same energy.
<figure><img src="content_image/1301.3432/x1.png"><figcaption>Figure 1: A SAW grafted at the origin x=y=0 to a wall placed on the y axis.If the vertex on the wall at (0,1) is not excluded, the only possibility forthe next step would be towards this vertex. If this vertex is excluded, theSAW will end at the final point (1,1).</figcaption></figure>
The canonical partition function of walks with \(n\) steps (\(Z_{n}\)) is equal to the number of SAWs starting at the origin and restricted to the half-plane \(x\geq 0\), called \(c_{n}^{(1)}\) in [9]. The Helmholtz free energy is given by \(F_{n}=-k_{B}T\ln c_{n}^{(1)}\). We can estimate the pressure exerted by the SAW at a point \((0,r)\) on the wall by excluding this vertex from the lattice. The excluded vertex is represented as a hatched square in Fig. 1 at \(r=1\). The pressure \(P_{n}(r)\) exerted at this point is then related to the change in the free energy when the vertex is excluded, \(P_{n}a^{2}=-\Delta F_{n}\). If we call \(c_{n}^{(1)}(r)\) the number of \(n\) step SAWs with the vertex at \((0,r)\) excluded, the dimensionless reduced pressure may be written as
\[p_{n}(r)=\frac{P_{n}(r)a^{2}}{k_{B}T}=-\ln\frac{c_{n}^{(1)}(r)}{c_{n}^{(1)}}.\] (1)
Of course we are interested in the thermodynamic limit \(p(r)=\lim_{n\to\infty}p_{n}(r)\), so the enumeration data must be extrapolated to the infinite length limit. It is worth noting that the density of monomers at the vertex \((0,r)\) is given by \(\rho(r)=1-\lim_{n\to\infty}c_{n}^{(1)}(r)/c_{n}^{(1)}\), so that
\[p(r)=-\ln[1-\rho(r)].\] (2)
The exact enumerations allow us to obtain precise estimates of the pressure exerted by SAWs at small distances from the grafting point, and we find, rather surprisingly, that the asymptotic form of this pressure is well reproduced even for these small values of \(r\). In section II we give some details of the computational enumeration procedure. In section III the enumeration data are analyzed and estimates for the pressure as a function of the distance to the grafting point are presented. Final discussions and conclusions may be found in section IV.
## II Exact enumerations
The algorithm we use to enumerate SAWs on the square lattice builds on the pioneering work of Enting [10] who enumerated square lattice self-avoiding polygons using the finite lattice method. More specifically our algorithm is based in large part on the one devised by Conway, Enting and Guttmann [11] for the enumeration of SAWs. The details of our algorithm can be found in [12]. Below we shall only briefly outline the basics of the algorithm and describe the changes made for the particular problem studied in this work.
The first terms in the series for the SAW generating function can be calculated using transfer matrix techniques to count the number of SAWs in rectangles \(W\) vertices wide and \(L\) vertices long. Any SAW spanning such a rectangle has length at least \(W+L-2\). By adding the contributions from all rectangles of width \(W\leq N+1\) and length \(W\leq L\leq N-W+1\), the number of SAWs is obtained correctly up to length \(N\).
The generating function for rectangles with fixed width \(W\) is calculated using transfer matrix (TM) techniques. The most efficient implementation of the TM algorithm generally involves bisecting the finite lattice with a boundary (this is just a line in the case of rectangles) and moving the boundary in such a way as to build up the lattice vertex by vertex, as illustrated in Fig. 2. If we draw a SAW and then cut it by a line, we observe that the partial SAW to the left of this line consists of a number of loops connecting two edges (we shall refer to these as loop ends) in the intersection, and of pieces which are connected to only one edge (we call these free ends). The other end of a free piece is either the start-point or the end-point of the SAW, so there are at most two free ends.
Each end of a loop is assigned one of two labels depending on whether it is the lower end or the upper end of a loop. Each configuration along the boundary line can thus be represented by a set of edge states \(\{\sigma_{i}\}\), where
\[\sigma_{i}=\left\{\begin{array}{rl}0&\;\;\;\mbox{empty edge},\\ 1&\;\;\;\mbox{lower loop-end},\\ 2&\;\;\;\mbox{upper loop-end},\\ 3&\;\;\;\mbox{free end}.\end{array}\right.\] (3)
If we read from the bottom to the top, the configuration or signature \(S\) along the intersection of the partial SAW in Fig. 2 is \(S=\{031212120\}\). Since crossings are not permitted, this encoding uniquely describes which loop ends are connected.
<figure><img src="content_image/1301.3432/x2.png"><figcaption>Figure 2: A snapshot of the boundary line (dashed line) during the transfermatrix (TM) calculation on a strip of width 7 with r=3. The filled circleindicates the grafted start-point of the SAW and the shaded box the excludedvertex. SAWs are enumerated by successive moves of the kink in the boundaryline, as exemplified by the position given by the dotted line, so that onevertex and two edges at a time are added to the strip. To the left of theboundary line we have drawn an example of a partially completed SAW.</figcaption></figure>
The sum over all contributing graphs is calculated as the boundary is moved through the lattice. For each configuration of occupied or empty edges along the intersection we maintain a generating function \(G_{S}\) for partial walks with signature \(S\). In exact enumeration studies such as this, \(G_{S}\) is a truncated polynomial \(G_{S}(x)\), where \(x\) is conjugate to the number of steps. In a TM update, each source signature \(S\) (before the boundary is moved) gives rise to a few new target signatures \(S^{\prime}\) (after the move of the boundary line) and \(m=0,\,1\) or 2 new edges are inserted, leading to the update \(G_{S^{\prime}}(x)=G_{S^{\prime}}(x)+x^{m}G_{S}(x)\). Once a signature \(S\) has been processed it can be discarded. The calculations were done using integer arithmetic modulo several prime numbers, with the full integer coefficients reconstructed at the end using the Chinese remainder theorem.
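The reconstruction step is standard. As a small illustration (not the code used here, and with arbitrarily chosen small primes rather than the actual ones), the coefficient \(c_{10}^{(1)}=16225\) of Table 1 can be recovered from its residues as follows.

```python
from math import prod

def crt(residues, primes):
    """Chinese remainder theorem: the unique x with 0 <= x < prod(primes)
    such that x = r_i (mod p_i) for every pair (r_i, p_i)."""
    M = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        Mi = M // p
        x += r * Mi * pow(Mi, -1, p)      # pow(Mi, -1, p): modular inverse (Python >= 3.8)
    return x % M

primes = [10007, 10009, 10037]            # illustrative primes only
residues = [16225 % p for p in primes]    # what the modular TM runs would produce
assert crt(residues, primes) == 16225     # c_10^{(1)} from Table 1
```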
Some changes to the algorithm described in [12] are required in order to enumerate the restricted SAWs we study here. Grafting the SAW to the wall can be achieved by forcing the SAW to have a free end (the start-point) on the top side of the rectangle. In enumerations of unrestricted SAWs one can use symmetry to restrict the TM calculations to rectangles with \(W\leq N/2+1\) and \(L\geq W\) by counting contributions for rectangles with \(L>W\) twice. The grafting of the start-point to the wall breaks this symmetry and we have to consider all rectangles with \(W\leq N+1\). Clearly the number of configurations one must consider grows with \(W\). Hence one wants to minimize the length of the boundary line. To achieve this, the TM calculation on the set of rectangles is broken into two sub-sets with \(L\geq W\) and \(L<W\), respectively. The calculations for the sub-set with \(L\geq W\) are done as outlined above. In the calculations for the sub-set with \(L<W\) the boundary line is chosen to be horizontal (rather than vertical) so that it cuts across at most \(L+1\) edges. Alternatively, one may view the calculation for the second sub-set as a TM algorithm for SAWs with their start-point on the left-most border of the rectangle.
Exclusion of the vertex at distance \(r\) from the starting point of the SAW is achieved by blocking this vertex so the walk can’t visit the vertex. The actual calculation can be done in at least two ways. One can simply specify the position of the starting point (and \(r\)) on the upper/left border and sum over all possible positions. This means doing calculations for a given width \(W\) many times; once for each position of the starting point of the SAW. Alternatively one can introduce ‘memory’ into the TM algorithm. Specifically once we have created a configuration which inserts the first free end we ‘remember’ that it did so. We can flag that the free end has been inserted by adding a ghost edge to the configuration initially in state 0. Once the first free end is inserted the state of the ghost edge is changed to \(1\). In the next sweep the state of the ghost edge is incremented by 1. When the state of the ghost edge has reached the value \(r\) the vertex on the top border is blocked. The problem with the first approach is that we need to do many calculations for any given rectangle. The problem with the second approach is that we need to keep \(r+1\) copies of most TM configurations thus using substantially more memory. The choice will be a matter of whether the major computational bottle-neck is CPU time or memory. For this study we used the first approach.
In more detail, the TM algorithm for the case \(L\geq W\) works as follows. A SAW has two free ends, and in the TM algorithm the first free end is forced to be at the top, at a distance \(k\) from the left border (this is the starting point of the SAW). We then add a further \(r-1\) columns; in the next column the top vertex is forced to be empty. After this, further columns are added up to a maximum length of \(L_{m}=N-W+1\). This calculation is then repeated for \(k=0\) to \(L_{m}\), thus enumerating all possible SAWs spanning rectangles of width exactly \(W\) and length \(L\geq W\). A similar calculation is then done with the SAW grafted to the left border and in each case repeated for all \(W\leq N/2\).
The calculation above enumerates almost all possible SAWs. However, we have missed those SAWs with two free ends in the top border where the end-point precedes the starting-point, that is, where there is a free end in the top border at a distance \(>r\) prior to the excluded vertex. We need to count such SAWs separately. The required changes to the algorithm are quite straightforward and will not be detailed here.
We calculated the number of SAWs up to length \(n=59\) for the unrestricted case and for an excluded vertex with \(r=1,\,2,\,3,\,4,\,5,\,10\), 20. In each case the calculation was performed in parallel using up to 8 processors, a maximum of some 16GB of memory and using a total of under 2000 CPU hours (see [12] for details of the parallel algorithm). We needed 3 primes to represent each series correctly and the calculations for all the primes were done in a single run.
## III Analysis and results
In Tables 1 and 2, we have listed the results for the enumerations of self-avoiding walks without additional restrictions, \(c_{n}^{(1)}\), and walks which are not allowed to occupy the vertex \((0,1)\) of the wall, \(c_{n}^{(1)}(1)\). If we calculate the pressures directly, we notice a parity effect, as seen in the results presented in Fig. 3. This effect is related to an unphysical singularity in the generating function of the counts \(c_{n}^{(1)}\), \(G(x)=\sum_{n=0}^{\infty}c_{n}^{(1)}x^{n}\). Besides the physical singularity at \(x=x_{c}=1/\mu\), where \(\mu\) is the connective constant, there is another singularity at \(x=-1/\mu\)[9]. This point will be discussed in more detail below, and more precise estimates for the pressures at several distances from the grafting point will be provided.
<figure><img src="content_image/1301.3432/x3.png"><figcaption>Figure 3: Pressure pn(r) for r=1, calculated with the enumeration data forc(1)n and c(1)n(1) using expression (1).</figcaption></figure>
n | \(c_{n}^{(1)}\) | n | \(c_{n}^{(1)}\) | n | \(c_{n}^{(1)}\)
---|---|---|---|---|---
1 | 3 | 21 | 681552747 | 41 | 176707555110156095
2 | 7 | 22 | 1793492411 | 42 | 465629874801142259
3 | 19 | 23 | 4725856129 | 43 | 1227318029107006037
4 | 49 | 24 | 12439233695 | 44 | 3234212894649555857
5 | 131 | 25 | 32778031159 | 45 | 8525055738741918835
6 | 339 | 26 | 86295460555 | 46 | 22466322857670716727
7 | 899 | 27 | 227399388019 | 47 | 59220537922987286933
8 | 2345 | 28 | 598784536563 | 48 | 156073168859898607113
9 | 6199 | 29 | 1577923781445 | 49 | 411414632591966686887
10 | 16225 | 30 | 4155578176581 | 50 | 1084313600069268939547
11 | 42811 | 31 | 10951205039221 | 51 | 2858360190045390998925
12 | 112285 | 32 | 28844438356929 | 52 | 7533725151809823220637
13 | 296051 | 33 | 76016486583763 | 53 | 19860118923927104821817
14 | 777411 | 34 | 200242023748929 | 54 | 52346889766180530489735
15 | 2049025 | 35 | 527735162655901 | 55 | 137997896899080793506959
16 | 5384855 | 36 | 1390287671021273 | 56 | 363744527134008049572583
17 | 14190509 | 37 | 3664208598233159 | 57 | 958930393586321187515995
18 | 37313977 | 38 | 9653950752700371 | 58 | 2527696511232818406275131
19 | 98324565 | 39 | 25444550692827111 | 59 | 6663833305674862002802763
20 | 258654441 | 40 | 67042749110884297 | |
Table 1: Number of walks in the half-plane, \(c_{n}^{(1)}\).
n | \(c_{n}^{(1)}(1)\) | n | \(c_{n}^{(1)}(1)\) | n | \(c_{n}^{(1)}(1)\)
---|---|---|---|---|---
1 | 2 | 21 | 484553893 | 41 | 125845983216200025
2 | 5 | 22 | 1277403184 | 42 | 331741159147128245
3 | 13 | 23 | 3361118347 | 43 | 874112388226242422
4 | 35 | 24 | 8860136085 | 44 | 2304278197456842952
5 | 91 | 25 | 23319106552 | 45 | 6071977423574762560
6 | 242 | 26 | 61468398004 | 46 | 16006835327039914244
7 | 630 | 27 | 161814936995 | 47 | 42181825940070651834
8 | 1672 | 28 | 426530787110 | 48 | 111200914189945767681
9 | 4369 | 29 | 1123043680259 | 49 | 293056004233059019257
10 | 11558 | 30 | 2960232320818 | 50 | 772575890795109134325
11 | 30275 | 31 | 7795418415398 | 51 | 2036121996024316003415
12 | 79967 | 32 | 20548006324647 | 52 | 5367866589569286706072
13 | 209779 | 33 | 54117914172220 | 53 | 14147607361624429924807
14 | 553634 | 34 | 142651034798697 | 54 | 37298221266819312654286
15 | 1453801 | 35 | 375747632401071 | 55 | 98307470253293931954939
16 | 3834878 | 36 | 990456507011029 | 56 | 259178303320281122974230
17 | 10077384 | 37 | 2609158017850105 | 57 | 683144867659867533730505
18 | 26574366 | 38 | 6877742334133961 | 58 | 1801074652042354959971779
19 | 69870615 | 39 | 18119629209950641 | 59 | 4747450605648675761162683
20 | 184216886 | 40 | 47764129557587369 | |
Table 2: Number of restricted walks in the half-plane, \(c_{n}^{(1)}(1)\).
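As a quick numerical check of eq. (1), the finite-\(n\) reduced pressure at \(r=1\) can be read off directly from the two tables; the snippet below simply evaluates eq. (1) for the largest available length, \(n=59\) (an illustration only, not part of the analysis in the next subsection).

```python
from math import log

c59_unrestricted = 6663833305674862002802763   # c_59^{(1)} from Table 1
c59_excluded = 4747450605648675761162683       # c_59^{(1)}(1) from Table 2

p59 = -log(c59_excluded / c59_unrestricted)    # eq. (1) with r = 1, n = 59
print(p59)   # ~ 0.339; a finite-n value, cf. the parity effect shown in Fig. 3
```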
### Critical points and exponents
The critical behaviour of a polymer grafted to a surface is well established [13]. It has been proved that the connective constant of grafted walks equals that of unrestricted walks [14]. The associated generating function has a dominant singularity at \(x=x_{c}=1/\mu\)
\[G(x)=\sum_{n}c_{n}^{(1)}x^{n}\sim A(1-\mu x)^{-\gamma_{1}},\] (4)
where \(\gamma_{1}=61/64\) is a known [15; 16] critical exponent. Besides the physical singularity there is another singularity at \(x=x_{-}=-x_{c}\)[17; 9].
We have analysed the series using differential approximants [18]. We calculate many individual approximants and obtain estimates for the critical points and exponents by an averaging procedure described in chapter 8 of reference [19]. Here and elsewhere, uncertainties on estimates from differential approximants were obtained from the spread among the various approximants, as detailed in [19]. The results for unrestricted grafted SAWs are listed in Table 3 under \(r=0\). We also list estimates for the cases \(r=1,\,2,\,5\) and 10. From these estimates it is clear that all the series have the same critical behaviour, that is, a dominant singularity at \(x=x_{c}\) with exponent \(-\gamma_{1}=-61/64\) and a non-physical singularity at \(x=x_{-}=-x_{c}\) with a critical exponent consistent with the exact value \(\gamma_{-}=3/2\).
The critical behaviour can be established more rigorously from a simple combinatorial argument. The number of walks \(c_{n}^{(1)}(r)\) with the point at \((0,r)\) excluded is clearly less than the number of unrestricted walks \(c_{n}^{(1)}\). On the other hand, if we attach a single vertical step to the grafting point of an unrestricted walk, we get a walk which does not touch the surface at all, and hence these walks are a subset of those counted by \(c_{n}^{(1)}(r)\). This establishes the inequality
\[c_{n-1}^{(1)}\leq c_{n}^{(1)}(r)\leq c_{n}^{(1)},\] (5)
and hence shows that, up to amplitudes, the asymptotic behaviours of these sequences are identical.
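For instance, Tables 1 and 2 give \(c_{9}^{(1)}=6199\leq c_{10}^{(1)}(1)=11558\leq c_{10}^{(1)}=16225\), illustrating (5) for \(n=10\) and \(r=1\).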
r | L | xc | γ | x− | γ−
---|---|---|---|---|---
0 | 0 | 0.379052260(64) | 0.953097(70) | -0.3790526(38) | 1.5002(19)
0 | 4 | 0.379052241(20) | 0.953072(17) | -0.3790492(30) | 1.5023(13)
0 | 8 | 0.379052243(14) | 0.953071(15) | -0.3790498(21) | 1.5016(12)
1 | 0 | 0.3790522582(30) | 0.9530884(24) | -0.3790425(97) | 1.5074(74)
1 | 4 | 0.3790522575(38) | 0.9530879(30) | -0.379030(26) | 1.523(29)
1 | 8 | 0.379052257(11) | 0.953090(14) | -0.379058(16) | 1.4988(69)
2 | 0 | 0.379052292(16) | 0.953123(13) | -0.3790511(33) | 1.5011(24)
2 | 4 | 0.379052276(12) | 0.9531115(97) | -0.3790478(89) | 1.5036(60)
2 | 8 | 0.379052306(26) | 0.953135(20) | -0.379057(21) | 1.498(20)
5 | 0 | 0.37905218(21) | 0.95304(17) | -0.379114(61) | 1.457(37)
5 | 4 | 0.37905225(31) | 0.95313(24) | -0.379099(40) | 1.467(29)
5 | 8 | 0.37905226(29) | 0.95313(25) | -0.379074(31) | 1.482(20)
10 | 0 | 0.3790483(12) | 0.9494(12) | -0.379230(55) | 1.369(32)
10 | 4 | 0.3790493(40) | 0.9503(32) | -0.379237(29) | 1.369(14)
10 | 8 | 0.3790508(22) | 0.9514(14) | -0.379246(91) | 1.365(54)
Table 3: Estimates of the critical points and exponents for SAWs with an excluded vertex a distance r from the origin (r=0 is the unrestricted case). The estimates were obtained from third order approximants with L being the degree of the inhomogeneous polynomial.
### Pressure
Having established the critical behaviour of the series we can now turn to the determination of the pressure exerted by the polymer on the surface. Since all the series have the same dominant critical behaviour it follows from (1) that the pressure is given by the ratio of the critical amplitudes.
One way of estimating the amplitudes is by a direct fit to an assumed asymptotic form. Here we assume that the asymptotic behaviour of our series is similar to that of unrestricted SAW [20]. The asymptotic analysis of [20] was very thorough and clearly established that the leading non-analytic correction-to-scaling exponent has the value 3/2 (there are also analytic, _i.e_., integer valued corrections to scaling). We repeated some of the steps in this analysis with the same result for the leading non-analytic correction-to-scaling exponent. Naturally there may be further non-analytic correction-to-scaling exponents with values \(>3/2\), but these would be impossible to detect numerically with any degree of certainty. So here we assume that the physical singularity has a leading correction-to-scaling exponent of 1 followed by further half-integer corrections while we assume only integer corrections at the non-physical singularity. We thus fit the coefficients to the asymptotic form
\[c_{n}^{(1)}(r)=\mu^{n}\left[n^{\gamma_{1}-1}\left(A(r)+\sum_{j=2}a_{j}(r)/n^{j /2}\right)+(-1)^{n}n^{-\gamma_{-}-1}\sum_{k=0}b_{k}(r)/n^{k}\right].\] (6)
In the fits we use the extremely accurate estimate \(\mu=2.63815853035(2)\) obtained from an analysis of the series for self-avoiding polygons on the square lattice [21] and the conjectured exact values \(\gamma_{1}=61/64\) and \(\gamma_{-}=3/2\). That is we take a sub-sequence of terms \(\{c_{n}^{(1)}(r),c_{n-1}^{(1)}(r),\ldots,c_{n-2m-1}^{(1)}(r)\}\), plug into the formula above taking \(m\) terms from both the \(a_{j}\) and \(b_{k}\) sums, and solve the \(2m\) linear equations to obtain estimates for the amplitudes.
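The following minimal Python sketch, which is only a rough cross-check and not the multi-term fit described above, divides a few tabulated counts from Table 1 by the leading factor \(\mu^{n}n^{\gamma_{1}-1}\) of (6), using the constants quoted above.

```python
# Rough cross-check of the amplitude fits (not the full multi-term solve
# described in the text): divide tabulated counts c_n^(1) of Table 1 by the
# leading asymptotic factor mu^n * n^(gamma_1 - 1) of Eq. (6).
mu = 2.63815853035            # connective constant from Ref. [21]
gamma1 = 61.0 / 64.0          # conjectured exact exponent

c1 = {                        # selected entries of Table 1, c_n^(1)
    40: 67042749110884297,
    50: 1084313600069268939547,
    59: 6663833305674862002802763,
}

for n, cn in sorted(c1.items()):
    A_n = cn / (mu ** n * n ** (gamma1 - 1.0))
    # increases slowly towards the quoted A = 1.124705(5); the remaining gap
    # is consistent with the 1/n and n^(-3/2) correction terms of Eq. (6)
    print(n, round(A_n, 4))
```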
<figure><img src="content_image/1301.3432/x4.png"><figcaption>Figure 4: Estimates for the leading amplitudes obtained by fitting to theasymptotic form (6) plotted against 1/n while truncating the asymptoticexpansion after 4 to 7 terms.</figcaption></figure>
It is then advantageous to plot estimates for the leading amplitude \(A(r)\) against \(1/n\) for several values of \(m\) as done in Fig. 4. The behaviour of the estimates for the leading amplitudes shown in this figure supports that (6) is a very good approximation to the true asymptotic form. In particular note that the slope becomes very flat as \(n\) is increased and decreases as the number of terms \(m\) included in the fit is increased. From these plots we estimate \(A=1.124705(5)\), \(A(1)=0.801625(5)\), \(A(2)=0.97564(2)\) and \(A(5)=1.09325(10)\), where the uncertainty is a conservative value chosen to include most of the extrapolations from Fig. 4.
The amplitude ratios \(A(r)/A\), and hence the pressure, can also be estimated by direct extrapolation of the relevant quotient sequence, using a method due to Owczarek et al. [22]: Given a sequence \(\{a_{n}\}\) defined for \(n\geq 1\), assumed to converge to a limit \(a_{\infty}\) with corrections of the form \(a_{n}\sim a_{\infty}(1+b/n+\ldots)\), we first construct a new sequence \(\{p_{n}\}\) defined by \(p_{n}=\prod_{m=1}^{n}a_{m}\). We then analyse the corresponding generating function
\[P(x)=\sum p_{n}x^{n}\sim(1-a_{\infty}x)^{-(1+b)}.\]
Estimates for \(a_{\infty}\) and the parameter \(b\) can then be obtained from differential approximants, that is \(a_{\infty}\) is just the reciprocal of the first singularity on the positive real axis of \(P(x)\). In our case we study the sequence of ratios \(a_{n}(r)=c_{n}^{(1)}(r)/c_{n}^{(1)}\), which has the required asymptotic form. Using the same type of differential approximant method outlined above we find that \(A/A(1)=1.4030218(5)\), which is entirely consistent with the estimate \(A/A(1)=1.403030(15)\) obtained using the amplitude estimates from the direct fitting procedure.
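A much cruder extrapolation than the differential-approximant analysis, but one that can be carried out directly on the tabulated data, already comes close: the sketch below takes the ratios \(a_{n}=c_{n}^{(1)}(1)/c_{n}^{(1)}\) at two even values of \(n\) (to damp the oscillating correction coming from \(x_{-}\)) and removes the leading \(1/n\) correction.

```python
# Crude check of the quotient-sequence analysis: if a_n ~ a_inf (1 + b/n),
# then (n2*a_n2 - n1*a_n1) / (n2 - n1) removes the 1/n term exactly.
# Counts taken from Tables 1 and 2 at n = 50 and n = 58 (even n only,
# to suppress the (-1)^n correction).
c_unrestricted = {50: 1084313600069268939547, 58: 2527696511232818406275131}
c_excluded_r1  = {50: 772575890795109134325, 58: 1801074652042354959971779}

a = {n: c_excluded_r1[n] / c_unrestricted[n] for n in (50, 58)}
a_inf = (58 * a[58] - 50 * a[50]) / (58 - 50)

print(round(a_inf, 6))   # close to A(1)/A = 1/1.4030218 ~ 0.712747
```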
Next we compare these results for the pressure with the ones for gaussian chains as expressed in equation (4) in [6]. That expression is for polymers in a three-dimensional half-space confined by a two-dimensional wall, and corresponds to finite values of the radius of gyration. If the expression is generalized to the \(d\)-dimensional case and restricted to the limit of infinite chains, where the radius of gyration diverges, the result is:
\[p_{G}(r)=\frac{P_{G}(r)a^{d}}{k_{B}T}=\frac{\Gamma(d/2)}{\pi^{d/2}}\frac{1}{(r ^{2}+1)^{d/2}},\] (7)
where we recall that \(r\) is dimensionless, measured in units of the lattice constant \(a\). In table 4 we have listed the estimated pressures for SAWs and the pressures obtained for Gaussian chains in \(d=2\), on the semi-infinite square lattice. In Fig. 5 we have plotted the pressure for polymers modelled as SAWs and as Gaussian chains. In this figure the dashed line represents a decay in pressure with the same asymptotic form, \(\propto 1/(r^{2}+1)\), as the Gaussian chain but normalised so the curve passes through the SAWs data point for \(r=10\). Quite clearly the SAWs data is well represented by this form even for small distances \(r>2\). For \(r=20\) the SAWs data was indistinguishable from zero pressure.
r | p(r) (SAWs) | p(r) (Gaussian)
---|---|---
1 | 0.33863 | 0.15915
2 | 0.14218 | 0.06366
3 | 0.07334 | 0.03183
4 | 0.04347 | 0.01872
5 | 0.02844 | 0.01224
10 | 0.00735 | 0.00315
Table 4: Pressure at a distance r from the grafting point for SAWs and Gaussian chains.
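The entries of Table 4 can be reproduced from the amplitude estimates quoted above. The sketch below assumes, consistently with the tabulated SAW values, that the pressure of (1) reduces to \(p(r)=\ln[A/A(r)]\), and uses (7) with \(d=2\) for the Gaussian chain.

```python
import math

# SAW column: p(r) = ln[A / A(r)] with the amplitudes estimated from Fig. 4
# (an assumption consistent with Table 4, since Eq. (1) is not shown here).
# Gaussian column: Eq. (7) with d = 2, i.e. p_G(r) = 1 / (pi (r^2 + 1)).
A = 1.124705                                   # unrestricted amplitude
A_r = {1: 0.801625, 2: 0.97564, 5: 1.09325}    # A(r) from the direct fits

for r, Ar in A_r.items():
    p_saw = math.log(A / Ar)                   # agrees with Table 4 within
    p_gauss = 1.0 / (math.pi * (r * r + 1))    # the quoted amplitude errors
    print(r, round(p_saw, 5), round(p_gauss, 5))
```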
<figure><img src="content_image/1301.3432/x8.png"><figcaption>Figure 5: (a) The pressure p(r) exerted by a polymer on a surface at adistance r from the grafting point. Data are for polymers modelled as SAWs orGaussian chains. The dashed line are a 1/r2 fit. (b) Both data have the same1/r2 scaling form, even for values close to r=2. The dashed lines are guidelines with slope equal to −2.</figcaption></figure>
## IV Final discussions and conclusion
Since our model is athermal and discrete, it is not really possible to compare our results with those obtained for the gaussian chain. However, as was already mentioned by Bickel _et al._ [7], the excluded volume interactions should not change the scaling form of the pressure. Fig. 5(b) clearly shows a \(1/r^{2}\) decay of the pressure, even for small distances. According to Bickel _et al._[7], this similarity is due to the fact that the pressure and the monomer concentration in the vicinity of the wall are linearly related. On the other hand, it seems that the concentration is not affected by the molecular details or by the differences between chain models. In our case, despite the fact that \(\rho(r)\) and \(p(r)\) are related by a logarithmic relation, as shown in expression (2), we have for \(r\gg 1\) a small concentration leading to a linear relation between those quantities. Actually, even for \(r\sim 2\), we can observe a linear dependence, as shown in Fig. 6.
<figure><img src="content_image/1301.3432/x9.png"><figcaption>Figure 6: Relation between pressure and concentration of monomers near to thewall at a distance r from the grafting point. For r>1, a linear relationshipis observed.</figcaption></figure>
Since the grafted chain is in mechanical equilibrium, the force \({\cal F}\) applied to the walk at the grafting point, which is in the negative \(x\) direction in Fig. 1, should be equal to the sum of the forces applied by the wall at other contact points, which are in the positive \(x\) direction. Thus, the dimensionless force is given by:
\[f=\frac{{\cal F}a}{k_{B}T}=2\sum_{r=1}^{\infty}p(r).\] (8)
For gaussian chains, integrating equation (7), we find \(f_{G}=1\). For SAWs, we may estimate the force summing the results for \(r=1,2,\ldots,5\) and obtaining the remaining contributions (\(r=6,7,\ldots,\infty\)) using the asymptotic result \(p(r)\approx A_{p}/(r^{2}+1)\) where \(A_{p}\approx 0.74235\) was estimated using the result of \(p(r)\) for \(r=10\). The result of this calculation is \(f_{SAW}\approx 1.533\), larger than the one for gaussian chains. As mentioned above, it does not seem straightforward to compare the two models, since a gaussian chain is a mass-spring model and therefore it is, unlike SAWs, not athermal. We may also mention that if \(p(r)\) for SAWs is extended to real values of \(r\) using a numerical interpolation procedure and the data for gaussian chains are rescaled so the areas below both curves are the same, the difference between the curves is quite small, the maximum being close to the origin and of order \(10^{-3}\). Due to the limited precision of the estimates for SAWs and to the expected small dependency of the results on the interpolation procedure we will not present these results here, but we found that in general the rescaled results for the pressure of gaussian chains are larger than the pressures for SAWs at small values of \(r\), but the inverse situation is found for larger distances. This net effect may be understood if we recall that the pressure is a monotonically growing function of the local density at the wall (Eq. (2)) and that the effect of the excluded volume interactions should be a slower decay of this density with the distance from the grafting point, as compared to approximations where this interaction is neglected.
It is of some interest to obtain the total force applied to the chain at the grafting point for ideal chains, modeled by random walks on the semi-infinite square lattice. This force may be calculated considering the shift of the grafting point by one lattice unit in the positive \(x\) direction in Fig. 1. The change in free energy under this operation will be proportional to the force. This calculations should lead to the same result of the ones above, where the force was obtained summing over the pressures at all other sites of the wall besides the origin, since the total force applied on the chain has to vanish.
Let us start by briefly reviewing the calculation of the number of random walks on a half-plane of the square lattice. If we call \(c_{n}(\vec{\rho})\) the number of \(n\)-steps random walks on a square lattice starting at the origin and ending at the point \(\vec{\rho}=x{\bf i}+y{\bf j}\), the number of RWs on the half-plane \(x\geq 0\) may be calculated by placing an absorbing wall at \(x=-1\), so that any walk reaching the wall is annihilated. This may be accomplished by using an image walker, starting at the reflection point of the origin with respect to the wall and ending at \(\vec{\rho}\). We will place the starting point of the random walk at \((s,0)\), where \(s=0\) corresponds to walks starting at the origin. In this case the image walker starting point will be at \(\vec{\rho}_{0}=-(s+2){\bf i}\), with distances measured in units of the lattice constant \(a\). The number of walks confined to the \(x\geq 0\) half plane is given by [23]
\[c^{(1)}_{n}(\vec{\rho},s)=c_{n}(\vec{\rho})-c_{n}(\vec{\rho}+(2+s){\bf i}).\] (9)
Since we are interested in the large \(n\) limit, we may use the gaussian approximation for the number of walks
\[c_{n}(\vec{\rho})=\frac{4^{n}}{n\pi}\exp\left(-\frac{|\vec{\rho}|^{2}}{n} \right).\] (10)
For the half-plane we get
\[c_{n}^{(1)}(\vec{\rho},s)=\frac{4^{n}}{n\pi}\left[\exp\left(-\frac{|\vec{\rho} |^{2}}{n}\right)-\exp\left(-\frac{|\vec{\rho}+(2+s){\bf i}|^{2}}{n}\right)\right]\] (11)
To obtain the total number of walks, we integrate this expression over the final point \(\vec{\rho}\)
\[c_{n}^{(1)}(s)=\int_{0}^{\infty}dx\int_{-\infty}^{\infty}dy\;c_{n}^{(1)}(\vec{ \rho},s).\] (12)
The result is
\[c_{n}^{(1)}(s)=\frac{4^{n}}{\sqrt{\pi}}\int_{-s/\sqrt{n}}^{(2+s)/\sqrt{n}}e^{- x^{2}}dx,\] (13)
for \(n\gg s\), we have the asymptotic behavior
\[c_{n}^{(1)}(s)=4^{n}\frac{2(s+1)}{\sqrt{n\pi}},\] (14)
which has the expected scaling form (4), with exponent \(\gamma=1/2\) and amplitude \(A=2(s+1)/\sqrt{\pi}\). The change in free energy between the cases with \(s=0\) and \(s=1\) is therefore given by \(-k_{B}T\ln 2\), so that the force applied to the polymer by the wall at the grafting point will be \(f_{RW}=\ln 2\approx 0.6931\), which is lower than the forces obtained for gaussian chains and estimated for SAWs.
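A small exact-enumeration check of this result is straightforward: since the \(y\)-steps are unconstrained, only the \(x\)-coordinate of the walk needs to be tracked. The sketch below counts \(n\)-step walks confined to \(x\geq 0\) for \(s=0\) and \(s=1\) and shows the free-energy difference approaching \(\ln 2\); it is an illustrative check, not part of the original analysis.

```python
import math

def count_halfplane_walks(n_steps, s):
    """Exactly count n-step random walks on the square lattice confined to
    x >= 0, starting at (s, 0).  Only the x-coordinate matters: each step is
    either +x, -x (if it stays in the half-plane), or one of the two y-steps
    which leave x unchanged."""
    x_max = s + n_steps + 1
    counts = [0] * (x_max + 2)
    counts[s] = 1
    for _ in range(n_steps):
        new = [0] * (x_max + 2)
        for x, c in enumerate(counts):
            if c == 0:
                continue
            new[x + 1] += c          # step in +x
            if x >= 1:
                new[x - 1] += c      # step in -x, still in x >= 0
            new[x] += 2 * c          # steps in +y or -y
        counts = new
    return sum(counts)

for n in (10, 100, 400):
    f_n = math.log(count_halfplane_walks(n, 1) / count_halfplane_walks(n, 0))
    print(n, round(f_n, 4))          # approaches ln 2 ~ 0.6931 as n grows
```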
It should be mentioned that for SAWs the sum of the pressures corresponding to two distances \(p(r_{i})+p(r_{j})\) is always smaller (for finite \(|r_{i}-r_{j}|\)) than \(-\Delta F(r_{i},r_{j})/(k_{B}T)\), where \(\Delta F(r_{i},r_{j})\) is the change in free energy when both cells, at \(r_{i}\) and \(r_{j}\) are excluded. In other words, an effective attractive interaction exists between the two excluded cells, so that the free energy decreases as the cells approach each other. This effect is due to walks in the unrestricted case which visit both excluded cells, and are therefore not counted in either \(c_{n}^{(1)}(r_{1})\) or \(c_{n}^{(1)}(r_{2})\). The total force \(f_{SAW}^{\prime}\), resulting from the simultaneous exclusion of all cells besides the one at the grafting point r = 0, must thus be smaller than the force \(f_{SAW}\) defined in equation (8). It is easy to find, since the number of SAWs with \(n\) steps \(d_{n}^{(1)}\) in this case is given by \(d_{n}^{(1)}=1+c_{n-1}^{(1)}\), that for a given value of \(n\) the force at the grafting point will be \(f^{\prime}_{n,SAW}=-\ln(d_{n}^{(1)}/c_{n}^{(1)})\). For large \(n\), we get \(f^{\prime}_{SAW}=\ln\mu\approx 0.9701\), smaller than \(f_{SAW}=1.533\), as expected.
Finally, we should also stress that although the pressure applied by the SAWs and by the gaussian chains display a similar power-law behavior, other possible walks on the lattice might lead to different results. Recently the pressure exerted by directed walks starting at the origin on the limiting line of a semi-infinite square lattice was obtained [24]. In the limit of large directed walks the asymptotic decay of the pressure with the distance to the grafting point also follows a power law, albeit with an exponent smaller than the one obtained here for SAWs and gaussian chains.
###### Acknowledgements.
We would like to thank Neal Madras for useful comments. The computations for this work were supported by an award under the Merit Allocation Scheme on the NCI National Facility at the Australian National University. We also made use of the computational facilities of the Victorian Partnership for Advanced Computing. IJ was supported under the Australian Research Council’s Discovery Projects funding scheme by the grants DP0770705 and DP1201593. JFS acknowledges financial support from the Brazilian agency CNPq.
## References
* (1) B. W. Ninham and P. Lo Nostro, _Molecular Forces and Self Assembly: in Colloid, Nano Sciences and Biology_, Cambridge University Press (2010).
* (2) S. B. Smith. Y. J. Cui and C. Bustamante, Science **271**, 795 (1996).
* (3) P. M. Chaikin and T. C. Lubansky, _Principles of Condensed Matter_, Cambridge University Press (2000).
* (4) S. Safran, _Statistical Thermodynamics of Surfaces, Interfaces and Membranes_, Westview Press (1994).
* (5) R. C. Tolman, _The Principles of Statistical Mechanics_, Dover Publications (1979).
* (6) T. Bickel, C. Marques and C. Jeppesen, Phys. Rev. E **62**, 1124 (2000).
* (7) T. Bickel, C. Jeppesen and C. M. Marques, Eur. Phys. J. E **4**, 33 (2001).
* (8) M. Breidenich _et al._, Eur. Phys. Lett. **49**, 431 (2000).
* (9) M. N. Barber, A. J. Guttmann, K. M. Middlemiss, G. M. Torrie and S. G. Whittington, J. Phys. A **11**, 1833 (1978).
* (10) I. G. Enting, J. Phys. A **13**, 3713 (1980).
* (11) A. R. Conway, I. G. Enting and A. J. Guttmann, J. Phys. A **26**, 1519 (1993).
* (12) I. Jensen, J. Phys. A **37**, 5503 (2004).
* (13) K. De’Bell and T. Lookman, Rev. Mod. Phys. **65**, 87 (1993).
* (14) S. G. Whittington, J. Chem. Phys. **63**, 779 (1975).
* (15) B. Duplantier and H. Saleur, Phys. Rev. Lett. **57**, 3179 (1986).
* (16) B. Duplantier, J. Stat. Phys. **54**, 581 (1989).
* (17) A.J. Guttmann and S. G. Whittington, J. Phys. A **11**, 721 (1978).
* (18) A. J. Guttmann in _Phase Transitions and Critical Phenomena_, vol. 13, Academic Press (1989).
* (19) A. J. Guttmann ed., _Polygons, Polyominoes and Polycubes_ vol. 775 of _Lecture Notes in Physics_, Springer (2009).
* (20) S. Caracciolo, A. J. Guttmann, I. Jensen, A. Pelissetto, A. N. Rogers, and A. D. Sokal, J. Stat. Phys. **120**, 1037-1100 (2005).
* (21) N. Clisby and I. Jensen, J. Phys. A **45**, 055208 (2012).
* (22) A. L. Owczarek, T. Prellberg, D. Bennett-Wood D and A. J. Guttmann, J. Phys. A **27**, L919 (1994).
* (23) J. Rudnick and G. Gaspari, _Elements of the Random Walk_, Cambridge University Press (2004).
* (24) E. J. J. van Rensburg and T. Prellberg, arXiv:1210.2761 (2012).
|
1812.07415 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 35749,
"num_imgs": 1,
"llama3_tokens_count": 13371
} | [
"content_image/1812.07415/MidCurveSwptImplCorr1y1y1y.png"
] | # Change of Measure in Midcurve Pricing
K.E. Feldman
###### Abstract
We derive measure change formulae required to price midcurve swaptions in the forward swap annuity measure with stochastic annuities’ ratios. We construct the corresponding linear and exponential terminal swap rate pricing models and show how they capture the midcurve swaption correlation skew.
## Introduction
An interest rate swap is a financial instrument with a triangle property. The combined value of two swaps \(S_{t_{1}t_{2}}\) and \(S_{t_{2}t_{3}}\), between times \(t_{1}\) and \(t_{2}\) and between times \(t_{2}\) and \(t_{3}\), is equal to the value of the swap \(S_{t_{1}t_{3}}\) between times \(t_{1}\) and \(t_{3}\) (we assume that all three swaps have the same fixed leg strike). Equivalently, we may say that the swap \(S_{t_{2}t_{3}}\) is the difference between a long swap \(S_{t_{1}t_{3}}\) and a short swap \(S_{t_{1}t_{2}}\). To express views on future swap rates, the interest rate market actively trades options on swaps, i.e. swaptions. Swaptions are non-linear products. The triangle property of the swaps generalises to swaptions once convexity is included. A portfolio of a vanilla swaption on the short swap \(S_{t_{1}t_{2}}\) and an option (midcurve swaption) on the swap \(S_{t_{2}t_{3}}\) is more expensive than the long swaption on \(S_{t_{1}t_{3}}\) (when the strikes are the same and the exercise time of all of the swaptions is the same, \(t_{1}\), the start of the short and long swaps). If, in the Black-Scholes world, we also assume that the ratios of the long and short annuities to the annuity of the midcurve swap are deterministic, then from the swap triangle one can derive a useful relationship between the volatilities of all three swap rates. The challenge comes when we look into the relations between the volatility smiles (skews) of those rates. In this paper, we discuss a modelling approach for pricing midcurve swaptions that allows one to take into account the stochasticity of the swaps’ annuities and to generate the pronounced correlation skews which are typically observed in the midcurve swaption market.
A midcurve swaption is an efficient way to trade correlations between the short and long swap rates. Others also used this product to trade on the difference between levels in the short and long term implied volatilities [1]. Being the simplest product on forward volatility, midcurve swaptions can be used for the calibration of the mean reversion parameters in the one factor short rate models [2].
The rich structure of the interest rate market offers two approaches to modelling the price of a midcurve swaption. The product can be viewed dynamically and be priced by modelling the time evolution of the underlying swap rate, or it can be viewed statically and its price can be derived from prices of closely related products - the long and the short swaptions traded in the market.
We shall be looking at the static way of pricing the midcurve swaption using a generalisation of the triangle property of the swaps to the case of the swaptions. A midcurve swaption can be priced as an option on a weighted basket of the short and long swap rates with the same fixing date. The weights coefficients are functions of the swap annuities ratios. The industry standard is to freeze these ratios to be constants. Taking correlation as an input parameter, the weighted basket can be priced by pairing the short and long swap rate distributions via a copula.
Some use more advanced models to account for stochasticity of the annuity ratios. The approach that has been adopted by the larger banks is first to move both the short and the long swaps rates distributions to the same terminal (discount bond) measure, and then to approximate each of the short and long annuities by deterministic functions of the corresponding (short or long) swap rates. This is an extension of the idea [3] where the authors developed a model that directly links constant maturity swap to volatilities of swaptions of all relevant tenors. Note that midcurve swaptions are not in the scope of [3]. This product is liquidly traded in the US Dollar market where the settlement style is physical. Thus, the natural pricing measure for this product is the annuity measure.
While allowing a better risk management of the midcurve correlation skew, the terminal measure approach suffers from an inconsistency. In this paper, we show that once the stochastic form of the annuity ratio is fixed, the measure change is no longer free. We derive explicit formulae for the measure change in terms of the functional forms of the annuity ratios. Another deficiency of the terminal measure approach is that the ratios of annuities can become negative. The exponential terminal swap rate model developed in this paper is free of this problem by construction.
We analyse in detail the measure change formulae in the case where the annuity ratio is a linear or an exponential function of the short and the long swap rates. The price of a midcurve swaption is often parameterised by its implied correlation as a function of the strike. Even if we use a model that captures the implied volatility smiles of the long and short swap rates well, the implied correlation is still not a constant function of the strike. The terminal swap rate models with stochastic annuities developed in this paper give a handle to match the implied correlation skew.
The effect studied in this paper is applicable in conjunction with any smile model. In particular, it is present in the flat volatility world. We provide numerical results on how our methodology captures the midcurve correlation skew in the case when the underlying swap rates are modelled as standard normal variables with a flat smile (i.e. constant across all strikes’ volatilities).
## 1 Product valuation
A midcurve receiver swaption \(W_{rec}=W_{rec}(S_{rec},T_{e(x)piry})\) on a swap \(S_{rec}(T_{(s)tart},T_{(e)nd},K)\) with a fixed leg rate \(K\) gives the holder an option to enter into a receiver swap \(S_{rec}\) at expiry time \(T_{x}\), where the swap starts on \(T_{s}\), ends on \(T_{e}\) and the holder receives the fixed rate \(K\) accrued on a notional \(N\) over all periods in the schedule formed by a sequence of dates: \(T^{fix}_{1},\quad\dots,\quad T^{fix}_{n}=T_{e}\), with \(n\) payment dates in the fixed leg schedule. In return the holder pays floating rate payments on the sequence of dates from the floating rate schedule: \(T^{fl}_{1},\quad\dots,\quad T^{fl}_{m}=T_{e}\). We will use short notations for the time intervals between two consecutive payments on each of the swap legs: \(\tau^{fix}_{i}=T^{fix}_{i}-T^{fix}_{i-1}\), \(i=1,\dots n\), \(\tau^{fl}_{j}=T^{fl}_{j}-T^{fl}_{j-1}\), \(j=1,\dots m\), \(T^{fl}_{0}=T^{fix}_{0}=T_{s}\).
In order to price a swaption, one uses the swap fixed leg annuity \(A(t)\) (\(t\leq T_{s}\)) as a numeraire:
\[A(t)=A(t,T_{s},T_{e})=\sum^{n}_{i=1}\tau^{fix}_{i}D(t,T^{fix}_{i}),\] (1)
where \(D(t,T)\) is the relevant discount bond from \(t\) to \(T\). We write the swap as:
\[\ \ \ \ \ \ \ \ \ \ \ S_{rec} =NA(t)(K-R(t))=NA(t,T_{s},T_{e})(K-R(t,T_{s},T_{e})),\] (2)
where \(R(t)=R(t,T_{s},T_{e})\) is the forward \(T_{s}\)-to-\(T_{e}\)-swap rate as seen at \(t\). Consequently, in order to price the swaption \(W_{rec}\), we can model the distribution for the \(R(T_{x},T_{s},T_{e})\) in the annuity measure and calculate the value of the swaption as:
\[W_{rec}(t) = A(t)\mathbb{E^{A}}[W(T_{x})/A(T_{x})]\] (3)
\[= A(t,T_{s},T_{e})N\mathbb{E^{A}}[[K-R(T_{x},T_{s},T_{e})]^{+}],\]
where the superscript in \(\mathbb{E^{A}}\) denotes the annuity measure with numeraire \(A(t)\).
The distributions for \(R(t_{0},t_{1},t_{2})\) can be implied from the swaption market whenever \(t_{0}=t_{1}\). We are primarily interested in the distributions of the following two stochastic variables:
\[R_{s}=R(T_{x},T_{x},T_{s}),\quad R_{e}=R(T_{x},T_{x},T_{e}),\] (4)
where \(R(T_{x},T_{x},T_{s})\) and \(R(T_{x},T_{x},T_{e})\) are the swap rates of the corresponding ”short” and ”long” swaps. Following [4] and using (3) the probability density function, \(\rm PDF\), for the distribution of the swap rate \(R(T_{x},T_{x},T_{J})\) with \(J=s\) or \(e\) in the corresponding annuity measure is given by
\[{\rm PDF}^{J}_{R_{J}}(r)=\frac{1}{A(t_{0},T_{x},T_{J})\cdot N}\frac{\partial^{ 2}W_{rec}(t_{0})}{\partial K^{2}}|_{K=r},\] (5)
where the derivative is taken with respect to the strike \(K\) of the swaption
\[W_{rec}(S_{rec}(T_{x},T_{J},K),T_{x}).\] (6)
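As a simple illustration of (5), assume for the moment that the swap rate is normally distributed in its annuity measure (a Bachelier model with the flat normal volatilities used later in the numerical section); the sketch below then recovers the normal density from a finite-difference second strike derivative of the annuity- and notional-normalised receiver price. The model choice and the parameter values are assumptions made only for this check.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def receiver_undiscounted(K, F, sigma, T):
    """E^A[(K - R)^+] for R ~ N(F, sigma^2 T): the receiver swaption value
    per unit annuity and notional (Bachelier model, assumed here only for
    illustration)."""
    s = sigma * math.sqrt(T)
    d = (K - F) / s
    return (K - F) * norm_cdf(d) + s * norm_pdf(d)

# Finite-difference version of Eq. (5): the second strike derivative of the
# normalised receiver price approximates the PDF of the swap rate.
F, sigma, T = 0.02631, 0.0060, 1.0      # short rate and 60 bps normal vol
h = 1e-5
for K in (0.020, 0.02631, 0.032):
    pdf_fd = (receiver_undiscounted(K + h, F, sigma, T)
              - 2.0 * receiver_undiscounted(K, F, sigma, T)
              + receiver_undiscounted(K - h, F, sigma, T)) / (h * h)
    pdf_exact = norm_pdf((K - F) / (sigma * math.sqrt(T))) / (sigma * math.sqrt(T))
    print(K, round(pdf_fd, 4), round(pdf_exact, 4))
```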
Within the pricing approach provided by (3), the distributions for \(R_{s}\), \(R_{e}\) are specified in the corresponding annuity measures: \(A(t,T_{x},T_{s})\), \(A(t,T_{x},T_{e})\). The swap rate \(R(T_{x},T_{s},T_{e})\), that we are interested in, can be expressed as
\[R(T_{x},T_{s},T_{e}) =w_{1}\cdot R_{e}-w_{2}\cdot R_{s},\] (7)
where
\[w_{1}=\frac{A(T_{x},T_{x},T_{e})}{A(T_{x},T_{s},T_{e})}\quad w_{2}=\frac{A(T_{ x},T_{x},T_{s})}{A(T_{x},T_{s},T_{e})}.\] (8)
Therefore, the stochastic variable \(R(T_{x},T_{s},T_{e})\) representing the underlying swap rate is a weighted difference of the stochastic variables representing the long and the short swap rates with stochastic coefficients. Note that the three stochastic variables
\[A_{s}=A(T_{x},T_{x},T_{s}),\quad A_{e}=A(T_{x},T_{x},T_{e}),\quad and\quad A_{ u}=A(T_{x},T_{s},T_{e})\] (9)
are related via
\[A(T_{x},T_{s},T_{e})=A(T_{x},T_{x},T_{e})-A(T_{x},T_{x},T_{s}).\] (10)
We are going to model the distribution of the swap rate \(R(T_{x},T_{s},T_{e})\) in terms of the distributions of \(R(T_{x},T_{x},T_{e})\), \(R(T_{x},T_{x},T_{s})\) and their correlation. In order to do this we need to relate three annuity measures corresponding to \(A(t,T_{x},T_{s})\), \(A(t,T_{x},T_{e})\) and \(A(t,T_{s},T_{e})\).
The Radon-Nikodym derivative for the measure change between \(A(t,T_{x},T_{J})\) measure, \(J=s,e\), and \(A(t,T_{s},T_{e})\) measure can be reconstructed using the following identity:
\[\ \ \ \ \ \ \ \ V(t_{0}) = A(t_{0},T_{x},T_{J}){\mathbb{E}^{A(T_{x},T_{J})}}\left[\frac{V(t )}{A(t,T_{x},T_{J})}\right]\] (11)
\[= A(t_{0},T_{s},T_{e}){\mathbb{E}^{A(T_{s},T_{e})}}\left[\frac{V(t )}{A(t,T_{s},T_{e})}\right],\]
where \(V(t)\) is the price of a traded security (which is a stochastic variable at any future time). Under the standard assumptions on attainable claims and measure changes, equation (11) implies that for any stochastic process \(X_{t}\), which is a function of the swap rate \(R(T_{x},T_{x},T_{J})\),
\[{\mathbb{E}^{A(T_{s},T_{e})}}\left[X_{t}\right]={\mathbb{E}^{A(T_{x},T_{J})}} \left[X_{t}\frac{A(t_{0},T_{x},T_{J})}{A(t,T_{x},T_{J})}\cdot\frac{A(t,T_{s},T _{e})}{A(t_{0},T_{s},T_{e})}\right].\] (12)
The quantity
\[G_{J,T_{x}}=\frac{A(T_{x},T_{s},T_{e})}{A(T_{x},T_{x},T_{J})}\] (13)
is itself a stochastic variable. We shall assume that it has a joint distribution with the swap rates \(R(T_{x},T_{x},T_{s})\) and \(R(T_{x},T_{x},T_{e})\)
\[{\rm PDF}^{J}_{G_{J},R_{s},R_{e}}(g_{J},x,y),\] (14)
where the variable \(g_{J}\) is used to indicate a stochastic value for \(G_{J,T_{x}}\), the variable \(x\) is used to indicate a stochastic value for \(R(T_{x},T_{x},T_{s})\), and the variable \(y\) is used to indicate a stochastic value for \(R(T_{x},T_{x},T_{e})\).
**Lemma 1**.: _The measure change formulae for the marginals \(\phi_{s}(x)\) and \(\phi_{e}(y)\) of the joint distribution of the short and the long swap rates in the measure (\(u\)) associated with the underlying swap annuity \(A_{u}\) of the midcurve swaption are:_
\[\phi_{s}(x) := {\rm PDF}^{u}_{R_{s}}(x)=\int^{+\infty}_{-\infty}\int^{+\infty}_{ -\infty}{\rm PDF}^{u}_{R_{u},R_{s},R_{e}}(z,x,y)dzdy=\] (15)
\[= {\rm PDF}^{s}_{R_{s}}(x)\frac{A(t_{0},T_{x},T_{s})}{A(t_{0},T_{s} ,T_{e})}{\mathbb{E}^{A(T_{x},T_{s})}\left[G_{s,T_{x}}|R(T_{x},T_{x},T_{s})=x \right]}\]
\[\phi_{e}(y) := {\rm PDF}^{u}_{R_{e}}(y)=\int^{+\infty}_{-\infty}\int^{+\infty}_{ -\infty}{\rm PDF}^{u}_{R_{u},R_{s},R_{e}}(z,x,y)dzdx=\] (16)
\[= {\rm PDF}^{e}_{R_{e}}(y)\frac{A(t_{0},T_{x},T_{e})}{A(t_{0},T_{s} ,T_{e})}{\mathbb{E}^{A(T_{x},T_{e})}\left[G_{e,T_{x}}|R(T_{x},T_{x},T_{e})=y \right]}\]
Thus, given the PDFs of \(R_{e}\) and \(R_{s}\) in their natural (annuities) measures, we can derive the PDFs of \(R_{e}\) and \(R_{s}\) in the common \(A(T_{s},T_{e})\)-measure as soon as we can evaluate \({\mathbb{E}^{A(T_{x},T_{e})}\left[G_{e,T_{x}}|R(T_{x},T_{x},T_{e})=y\right]}\), and \({\mathbb{E}^{A(T_{x},T_{s})}\left[G_{s,T_{x}}|R(T_{x},T_{x},T_{s})=x\right]}\).
To evaluate the payoff of the midcurve swaption we will make an assumption that the stochastic variables \(G_{J,T_{x}}\), \(J=e,s\) from (13) are deterministic functions of the swap rates \(R(T_{x},T_{x},T_{s})\) and \(R(T_{x},T_{x},T_{e})\). The integral formula for the payoff is:
\[\frac{W_{rec}(t_{0})}{A(t_{0},T_{s},T_{e})\cdot N}={\mathbb{E}^{A (T_{s},T_{e})}}\left[[K-R(T_{x},T_{s},T_{e})]^{+}\right]=\]
\[={\mathbb{E}^{A(T_{s},T_{e})}}\left[{\mathbb{E}^{A(T_{s},T_{e})}} \left[[K-R(t,T_{s},T_{e})]^{+}|R_{s}=x,R_{e}=y\right]\right]=\]
\[={\mathbb{E}^{A(T_{s},T_{e})}}\left[{\mathbb{E}^{A(T_{s},T_{e})}} \left[[K-\frac{A(T_{x},T_{x},T_{e})}{A(T_{x},T_{s},T_{e})}R_{e}+\frac{A(T_{x}, T_{x},T_{s})}{A(T_{x},T_{s},T_{e})}R_{s}]^{+}|R_{s}=x,R_{e}=y\right]\right]\]
\[={\mathbb{E}^{A(T_{s},T_{e})}}\left[{\mathbb{E}^{A(T_{s},T_{e})}} \left[[K-w_{1}(y,x)y+w_{2}(y,x)x]^{+}|R_{s}=x,R_{e}=y\right]\right]=\]
\[=\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}{\mathbb{E}^{A(T _{s},T_{e})}}\left[[K-w_{1}(y,x)y+w_{2}(y,x)x]^{+}|R_{s}=x,R_{e}=y\right]\times\]
\[\times{\rm PDF}^{u}_{R_{u},R_{s},R_{e}}(z(x,y),x,y)dxdy=\]
\[=\int^{+\infty}_{-\infty}\int^{+\infty}_{-\infty}\left[K-w_{1}(y, x)y+w_{2}(y,x)x\right]^{+}{\rm PDF}^{u}_{R_{s},R_{e}}(x,y)dxdy,\] (17)
where we omitted the symbol \(z(x,y)\) in the last formula because it is fully determined by \(x\) and \(y\) due to our assumption on \(G_{J,T_{x}}\), \(J=e,s\).
In order to use (17) we need to specify the weight functions \(w_{1}(y,x)\) and \(w_{2}(y,x)\) as well as the full joint distribution \({\rm PDF}^{u}_{R_{s},R_{e}}(x,y)\). The latter can be constructed using the copula technique applied to distributions \(\phi_{s}(x)\) from (15) and \(\phi_{e}(y)\) from (16). A popular choice is the Gaussian copula:
\[{\rm GC_{Join}}(x,y)=\phi(u,v,\rho)\frac{du}{dx}\frac{dv}{dy},\] (18)
where \(\phi(u,v,\rho)\) is the joint normal PDF of two univariate normal variables with correlation \(\rho\) and
\[\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ u=\Phi^{-1}\left[cdf_{s}(x) \right], v=\Phi^{-1}\left[cdf_{e}(y)\right],\] (19)
where \(\Phi\) is the CDF of a univariate normal variable and \(cdf_{s}(x)\), \(cdf_{e}(y)\) are the CDFs corresponding to pdfs \(\phi_{s}(x)\) from (15) and \(\phi_{e}(y)\) from (16).
## 2 First order approximations
The Radon-Nikodym derivative for measure change in (12) and the payoff in (17) depend only on the ratio of the annuities \(A(t,T_{x},T_{s})\) and \(A(t,T_{x},T_{e})\). Therefore, to use the copula valuation by means of (17) it is sufficient to model dynamics of the ratio of annuities. A convenient way for modelling dynamics of the ratio of annuities is provided by the Terminal Swap Rate Model methodology. It covers the zero-th and the first order approximations for the ratio. We discuss the corresponding approximations below.
**Deterministic Annuity Ratio:** Assume that the conditional expectations in (15-16) are independent from the respective variables \(x\) and \(y\) (we may think of \(G_{J,T_{x}}\), \(J=e,s\) from (13), for example, as being deterministic). Then in (15) \(\phi_{s}(x)\equiv{\rm PDF}^{s}_{R_{s}}(x)\) and in (16) \(\phi_{e}(y)\equiv{\rm PDF}^{e}_{R_{e}}(y)\), i.e. no change of the measure is needed and
\[w_{1}(y,x)=G_{e,T_{x}}^{-1}=\frac{A(t_{0},T_{x},T_{e})}{A(t_{0},T_{s},T_{e})}, \quad{}w_{2}(y,x)=G_{s,T_{x}}^{-1}=\frac{A(t_{0},T_{x},T_{s})}{A(t_{0},T_{s},T _{e})}.\] (20)
This is exactly the constant annuity ratio assumption used in [5] for pricing midcurve swaptions by means of the Gaussian copula.
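A minimal Monte Carlo sketch of this constant-ratio pricing is given below. The rates, volatilities, correlation and strike are those quoted in the numerical section and in Figure 1; the discount factors (and hence the annuities) are placeholders, and for normal marginals the Gaussian copula pairing reduces to drawing correlated normal variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- inputs: rates/vols/correlation/strike as quoted later; discount
# --- factors are hypothetical placeholders used only for illustration -----
F_s, vol_s = 0.02631, 0.0060        # short 1y rate, 60 bps normal vol
F_e, vol_e = 0.022347, 0.006418     # long 2y rate, 64.18 bps normal vol
rho, T_x = 0.80, 1.0                # rate correlation, expiry in years
D = {2: 0.96, 3: 0.93}              # hypothetical discount factors D(0, T)

# annual fixed legs => annuities are sums of discount factors, cf. Eq. (1)
A_s = D[2]                          # A(t0, Tx, Ts)
A_e = D[2] + D[3]                   # A(t0, Tx, Te)
A_u = A_e - A_s                     # A(t0, Ts, Te), cf. Eq. (10)
w1, w2 = A_e / A_u, A_s / A_u       # frozen weights of Eq. (20)

# --- Gaussian copula with normal marginals = correlated normal draws ------
n_paths = 500_000
z1 = rng.standard_normal(n_paths)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
R_s = F_s + vol_s * np.sqrt(T_x) * z1
R_e = F_e + vol_e * np.sqrt(T_x) * z2

K = 0.018294                        # strike; the ATM forward quoted in Fig. 1
payoff = np.maximum(K - w1 * R_e + w2 * R_s, 0.0)
price = A_u * payoff.mean()         # per unit notional, cf. Eq. (17)
print(price)
```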
**Linear Approximation:** We can approximate linearly the weights in (17) as
\[w_{1}(y,x)=G_{e,T_{x}}^{-1}=\]
\[=\frac{A(t_{0},T_{x},T_{e})}{A(t_{0},T_{s},T_{e})}(1+\mu_{e}(y- \hat{R}(t_{0},T_{x},T_{e}))+\mu_{s}(x-\hat{R}(t_{0},T_{x},T_{s}))),\] (21)
and
\[w_{2}(y,x)=G_{s,T_{x}}^{-1}=\]
\[=\frac{A(t_{0},T_{x},T_{s})}{A(t_{0},T_{s},T_{e})}(1+\nu_{e}(y- \hat{R}(t_{0},T_{x},T_{e}))+\nu_{s}(x-\hat{R}(t_{0},T_{x},T_{s}))),\] (22)
where \(\hat{R}\) is used to underline that a measure change is needed to evaluate the corresponding quantity, so that
\[\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hat{R}(t_{0},T_{x},T_{e}) = {\mathbb{E}^{A(T_{s},T_{e})}}\left[R(t,T_{x},T_{e})\right],\]
\[\hat{R}(t_{0},T_{x},T_{s}) = {\mathbb{E}^{A(T_{s},T_{e})}}\left[R(t,T_{x},T_{s})\right],\] (23)
and both \(w_{1}(y,x)\), \(w_{2}(y,x)\) are \(A(T_{s},T_{e})\)-martingales.
Equating coefficients under \(x\) and \(y\) in \(w_{1}(y,x)-w_{2}(y,x)=1\), we see that the four coefficients of linear expansion in (21) and (22) are actually spanned by two parameters \(\sigma_{e}\) and \(\sigma_{s}\) as
\[\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mu_{s}=\frac{A(t_{0},T_{s},T_{e})} {A(t_{0},T_{x},T_{e})}\sigma_{s}, \mu_{e}=\frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{e})}\sigma_{ e},\]
\[\nu_{s}=\frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{s})}\sigma_{ s}, \nu_{e}=\frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{s})}\sigma_{ e}.\] (24)
We shall approximate linearly \(G_{e,T_{x}}=w_{1}(y,x)^{-1}\) and \(G_{s,T_{x}}=w_{2}(y,x)^{-1}\):
\[G_{e,T_{x}} \approx \frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{e})}(1-\mu_{e}(y-R(t _{0},T_{x},T_{e}))-\mu_{s}(x-\tilde{R}(t_{0},T_{x},T_{s}))),\] (25)
\[\tilde{R}(t_{0},T_{x},T_{s}) = {\mathbb{E}^{A(T_{x},T_{e})}}\left[R(t,T_{x},T_{s})\right],\] (26)
and
\[G_{s,T_{x}} \approx \frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{s})}(1-\nu_{e}(y- \tilde{R}(t_{0},T_{x},T_{e}))-\nu_{s}(x-R(t_{0},T_{x},T_{s}))),\] (27)
\[\tilde{R}(t_{0},T_{x},T_{e}) = {\mathbb{E}^{A(T_{x},T_{s})}}\left[R(t,T_{x},T_{e})\right],\] (28)
so that \(G_{e,T_{x}}\) is \(A(T_{x},T_{e})\)-martingale and \(G_{s,T_{x}}\) is \(A(T_{x},T_{s})\)-martingale.
Using Equations (25)-(28) we derive the following Lemma:
**Lemma 2**.: _Under an assumption that the long and the short swap rates are approximately Gaussian the marginals of the joint distribution of the long and the short swap rates in \(A(t,T_{s},T_{e})\)-measure are_
\[\ \ \ \ \ \ \ \ \ \phi_{s}(x) \approx {\rm PDF}^{s}_{R_{s}}(x)\left(1-\left(\nu_{s}+\nu_{e}\rho\frac{ \Sigma_{e}}{\Sigma_{s}}\right)(x-R(t_{0},T_{x},T_{s}))\right),\]
\[\phi_{e}(y) \approx {\rm PDF}^{e}_{R_{e}}(y)\left(1-\left(\mu_{e}+\mu_{s}\rho\frac{ \Sigma_{s}}{\Sigma_{e}}\right)(y-R(t_{0},T_{x},T_{e}))\right).\]
_where \(\Sigma_{e},\Sigma_{s}\) are the volatilities of the long and the short swap rates:_
\[\Sigma^{2}_{e}={\mathbb{E}^{A(T_{x},T_{e})}}\left[\left(y-R(t_{0},T_{x},T_{e}) \right)^{2}\right],\quad{}\Sigma^{2}_{s}={\mathbb{E}^{A(T_{x},T_{s})}}\left[ \left(x-R(t_{0},T_{x},T_{s})\right)^{2}\right].\]
**Proof:** Under an assumption that the long and the short swap rates are approximately Gaussian we can project \(y\) on to \(x\) as:
\[{\mathbb{E}^{A(T_{x},T_{s})}}[y|x] = {\mathbb{E}^{A(T_{x},T_{s})}}\left[R(T_{x},T_{x},T_{e})|R(T_{x},T _{x},T_{s})=x\right]\] (30)
\[= E^{A(T_{x},T_{s})}\left[R(T_{x},T_{x},T_{e})\right]+\rho\frac{ \Sigma_{e}}{\Sigma_{s}}(x-R(t_{0},T_{x},T_{s})).\]
We can evaluate
\[{\mathbb{E}^{A(T_{x},T_{s})}\left[G_{s,T_{x}}|R(T_{x},T_{x},T_{s} )=x\right]} = \frac{A(t,T_{s},T_{e})}{A(t,T_{x},T_{s})}\left(1-\left(\nu_{s}+ \nu_{e}\rho\frac{\Sigma_{e}}{\Sigma_{s}}\right)(x-R(t,T_{x},T_{s}))\right),\]
which leads to the expression for the first marginal. Similarly we derive the expression for the second marginal.
Integrating \(R(t,T_{x},T_{e})\) with respect to \(\phi_{e}(y)\) and \(R(t,T_{x},T_{s})\) with respect to \(\phi_{s}(y)\) we derive
**Lemma 3**.: _If the long and the short rates are approximately Gaussian then the linear approximation for the weights \(w_{1}(y,x)\) and \(w_{2}(y,x)\) leads to:_
\[\hat{R}(t,T_{x},T_{e}) = {\mathbb{E}^{A(T_{s},T_{e})}\left[R(T_{x},T_{x},T_{e})\right]}=R( t_{0},T_{x},T_{e})-(\mu_{e}\Sigma_{e}+\mu_{s}\rho\Sigma_{s})\Sigma_{e},\]
\[\hat{R}(t,T_{x},T_{s}) = {\mathbb{E}^{A(T_{s},T_{e})}\left[R(T_{x},T_{x},T_{s})\right]}=R( t_{0},T_{x},T_{s})-(\nu_{s}\Sigma_{s}+\nu_{e}\rho\Sigma_{e})\Sigma_{s}.\] (31)
In practice, it may be convenient to calculate \(\tilde{R}(t_{0},T_{x},T_{s})\) from (26) and \(\tilde{R}(t_{0},T_{x},T_{e})\) from (28). This can be done by matching \(w_{1}(y,x)G_{e,T_{x}}=1\) and \(w_{2}(y,x)G_{s,T_{x}}=1\) in expectation relative to the \(A(T_{s},T_{e})\)-measure.
**Lemma 4**.: _If the long and the short rates are approximately Gaussian then the linear approximation for the weights \(w_{1}(y,x)\) and \(w_{2}(y,x)\) leads to:_
\[\tilde{R}(t,T_{x},T_{e}) = {\mathbb{E}^{A(T_{x},T_{e})}\left[R(T_{x},T_{x},T_{e})\right]}\]
\[= R(t_{0},T_{x},T_{e})-(\mu_{e}\Sigma_{e}+\mu_{s}\rho\Sigma_{s}) \Sigma_{e}+(\nu_{e}\Sigma_{e}+\nu_{s}\rho\Sigma_{s})\Sigma_{e},\]
\[\tilde{R}(t,T_{x},T_{s}) = {\mathbb{E}^{A(T_{x},T_{s})}\left[R(T_{x},T_{x},T_{e})\right]}\] (32)
\[= R(t_{0},T_{x},T_{s})-(\nu_{s}\Sigma_{s}+\nu_{e}\rho\Sigma_{e}) \Sigma_{s}+(\mu_{s}\Sigma_{s}+\mu_{e}\rho\Sigma_{e})\Sigma_{s}.\]
Thus, in order to price a midcurve swaption we just need two extra parameters \(\sigma_{e}\) and \(\sigma_{s}\) from (24). Together with the swap rates distributions \({\rm PDF}^{s}_{R_{s}}(x)\), \({\rm PDF}^{e}_{R_{e}}(y)\) and the correlation between them, \(\sigma_{e}\) and \(\sigma_{s}\) uniquely determine the midcurve swaption price in the Gaussian copula model via (17),(21),(22), (1) and (31).
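For reference, the bookkeeping of the linear model can be collected in one place. The sketch below is a direct transcription of (24), (31) and (21)-(22); here \(\Sigma_{s}\) and \(\Sigma_{e}\) denote the terminal standard deviations of the short and the long rates, as in Lemma 2, and the trailing usage line reuses the placeholder annuities of the earlier Monte Carlo sketch together with \(\sigma_{e}=2\), \(\sigma_{s}=-1\).

```python
def linear_tsr_weights(A_s, A_e, A_u, F_s, F_e, Sig_s, Sig_e, rho,
                       sigma_s, sigma_e):
    """Direct transcription of Eqs. (24), (31) and (21)-(22).
    F_s, F_e are today's forward short/long swap rates; Sig_s, Sig_e are
    their terminal standard deviations (normal vol times sqrt of expiry).
    Returns the adjusted means and a function w(y, x) -> (w1, w2)."""
    # expansion coefficients, Eq. (24)
    mu_s, mu_e = (A_u / A_e) * sigma_s, (A_u / A_e) * sigma_e
    nu_s, nu_e = (A_u / A_s) * sigma_s, (A_u / A_s) * sigma_e
    # convexity-adjusted means in the A(Ts, Te) measure, Eq. (31)
    R_e_hat = F_e - (mu_e * Sig_e + mu_s * rho * Sig_s) * Sig_e
    R_s_hat = F_s - (nu_s * Sig_s + nu_e * rho * Sig_e) * Sig_s

    def w(y, x):
        # linearised weights of Eqs. (21)-(22)
        w1 = (A_e / A_u) * (1.0 + mu_e * (y - R_e_hat) + mu_s * (x - R_s_hat))
        w2 = (A_s / A_u) * (1.0 + nu_e * (y - R_e_hat) + nu_s * (x - R_s_hat))
        return w1, w2

    return R_s_hat, R_e_hat, w

# example call with the placeholder annuities of the earlier sketch:
# linear_tsr_weights(0.96, 1.89, 0.93, 0.02631, 0.022347,
#                    0.0060, 0.006418, 0.80, -1.0, 2.0)
```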
**Log Linear Approximation:** The first order approximation does not immediately prevent weight coefficients \(w_{1}(y,x)\) and \(w_{2}(y,x)\) from going negative. This can be addressed by an exponential approximation for \(w_{2}(y,x)\):
\[w_{2}(y,x)^{-1}=G_{s,T_{x}}=\alpha_{s}e^{-\nu_{e}\left(y-\tilde{R}(t,T_{x},T_{ e})\right)-\nu_{s}\left(x-R(t,T_{x},T_{s})\right))},\] (33)
where
\[\tilde{R}(t,T_{x},T_{e}) = {\mathbb{E}^{A(T_{x},T_{s})}}\left[R(T_{x},T_{x},T_{e})\right]\] (34)
is used to underline that \(G_{s,T_{x}}\) and \(R(t,T_{x},T_{s})\) are \(A(t,T_{x},T_{s})\)-martingales. The relation \(w_{1}(y,x)-w_{2}(y,x)=1\) allows us to recover \(w_{1}(y,x)\) from \(w_{2}(y,x)\). We can evaluate the coefficient \(\alpha_{s}\) if we assume that the long and the short rates are approximately Gaussian. Let us project \(y\) onto \(x\) as:
\[{\mathbb{E}^{A(T_{x},T_{s})}}[y|x] = {\mathbb{E}^{A(T_{x},T_{s})}}\left[R(T_{x},T_{x},T_{e})|R(T_{x},T _{x},T_{s})=x\right]\] (35)
\[= E^{A(T_{x},T_{s})}\left[R(T_{x},T_{x},T_{e})\right]+\rho\frac{ \Sigma_{e}}{\Sigma_{s}}(x-R(t_{0},T_{x},T_{s})),\]
so that
\[y-\tilde{R}(t,T_{x},T_{e}) = \sqrt{1-\rho^{2}}\Sigma_{e}z+{\mathbb{E}^{A(T_{x},T_{s})}}[y-R(t, T_{x},T_{e})|x]\] (36)
\[= \sqrt{1-\rho^{2}}\Sigma_{e}z+\rho\frac{\Sigma_{e}}{\Sigma_{s}}(x- R(t,T_{x},T_{s})).\]
with \(z\sim N(0,1)\). With this assumption we have
\[E^{A(T_{x},T_{s})}\left[G_{s,T_{x}}\right] = \alpha_{s}e^{\nu^{2}_{e}(1-\rho^{2})\Sigma^{2}_{e}/2}e^{\left(( \nu_{s}\Sigma_{s}+\nu_{e}\rho\Sigma_{e})^{2}/2\right)}\]
\[= \alpha_{s}e^{\left(\nu^{2}_{e}\Sigma^{2}_{e}+2\rho\nu_{e}\nu_{s} \Sigma_{e}\Sigma_{s}+\nu^{2}_{s}\Sigma^{2}_{s}\right)/2}\]
\[= \frac{A(t,T_{s},T_{e})}{A(t,T_{x},T_{s})}.\]
\[\alpha_{s} = \frac{A(t,T_{s},T_{e})}{A(t,T_{x},T_{s})}e^{-\left(\nu^{2}_{e} \Sigma^{2}_{e}+2\rho\nu_{e}\nu_{s}\Sigma_{e}\Sigma_{s}+\nu^{2}_{s}\Sigma^{2}_{ s}\right)/2}.\] (37)
We can now evaluate
\[{\mathbb{E}^{A(T_{x},T_{s})}\left[G_{s,T_{x}}|R(T_{x},T_{x},T_{s} )=x\right]} = \alpha_{s}{\mathbb{E}^{A(T_{x},T_{s})}}\left[e^{-\nu_{e}\sqrt{1- \rho^{2}}\Sigma_{e}z}\right]e^{-\left(\nu_{s}+\nu_{e}\rho\frac{\Sigma_{e}}{ \Sigma_{s}}\right)(x-R(t_{0},T_{x},T_{s}))}\]
\[= \frac{A(t,T_{s},T_{e})}{A(t,T_{x},T_{s})}e^{-\frac{\left(\rho\nu_ {e}\Sigma_{e}+\nu_{s}\Sigma_{s}\right)^{2}}{2}}e^{-\left(\nu_{s}+\nu_{e}\rho \frac{\Sigma_{e}}{\Sigma_{s}}\right)(x-R(t,T_{x},T_{s}))}\]
This leads to the next result:
**Lemma 5**.: _If the long and the short rates are approximately Gaussian then the log linear approximation for the weights \(w_{2}(y,x)\)_
\[\phi_{s}(x) \approx {\rm PDF}^{s}_{R_{s}}(x)e^{-\frac{\left(\rho\nu_{e}\Sigma_{e}+\nu _{s}\Sigma_{s}\right)^{2}}{2}}e^{-\left(\nu_{s}+\nu_{e}\rho\frac{\Sigma_{e}}{ \Sigma_{s}}\right)(x-R(t_{0},T_{x},T_{s}))},\]
\[\hat{R}(t,T_{x},T_{s}) = {\mathbb{E}^{A(T_{s},T_{e})}\left[R(T_{x},T_{x},T_{s})\right]}=R( t_{0},T_{x},T_{s})-(\nu_{s}\Sigma_{s}+\nu_{e}\rho\Sigma_{e})\Sigma_{s}.\] (39)
Using the relation \(w_{1}(y,x)-w_{2}(y,x)=1\) we can reconstruct \(G_{e,T_{x}}\) by numerical integration. To get less precise but more tractable formulae, instead, we evaluate \(G_{e,T_{x}}\) as an exponential martingale:
\[w_{1}(y,x)^{-1}=G_{e,T_{x}}=\alpha_{e}e^{-\mu_{e}\left(y-R(t,T_{x},T_{e})\right)-\mu_{s}\left(x-\tilde{R}(t,T_{x},T_{s})\right)},\]
where
\[\tilde{R}(t,T_{x},T_{s}) = {\mathbb{E}^{A(T_{x},T_{e})}}\left[R(T_{x},T_{x},T_{s})\right].\] (40)
Similarly to (37) we obtain
\[\alpha_{e} = \frac{A(t,T_{s},T_{e})}{A(t,T_{x},T_{e})}e^{-\left(\mu^{2}_{e} \Sigma^{2}_{e}+2\rho\mu_{e}\mu_{s}\Sigma_{e}\Sigma_{s}+\mu^{2}_{s}\Sigma^{2}_{ s}\right)/2},\] (41)
\[{\mathbb{E}^{A(T_{x},T_{e})}\left[G_{e,T_{x}}|R(T_{x},T_{x},T_{e} )=y\right]} = \alpha_{e}{\mathbb{E}^{A(T_{x},T_{e})}}\left[e^{-\mu_{s}\sqrt{1- \rho^{2}}\Sigma_{s}z}\right]e^{-\left(\mu_{e}+\mu_{s}\rho\frac{\Sigma_{s}}{ \Sigma_{e}}\right)(y-R(t_{0},T_{x},T_{e}))}\]
\[= \frac{A(t,T_{s},T_{e})}{A(t,T_{x},T_{e})}e^{-\frac{\left(\rho\mu_ {s}\Sigma_{s}+\mu_{e}\Sigma_{e}\right)^{2}}{2}}e^{-\left(\mu_{e}+\mu_{s}\rho \frac{\Sigma_{s}}{\Sigma_{e}}\right)(y-R(t,T_{x},T_{e}))}.\]
Similar to Lemma 5 we derive:
**Lemma 6**.: _If the long and the short rates are approximately Gaussian then the exponential approximation for the weight \(w_{1}(y,x)\) leads to:_
\[\phi_{e}(y) \approx {\rm PDF}^{e}_{R_{e}}(y)e^{-\frac{\left(\rho\mu_{s}\Sigma_{s}+\mu _{e}\Sigma_{e}\right)^{2}}{2}}e^{-\left(\mu_{e}+\mu_{s}\rho\frac{\Sigma_{s}}{ \Sigma_{e}}\right)(y-R(t_{0},T_{x},T_{e}))},\]
\[\hat{R}(t,T_{x},T_{e}) = {\mathbb{E}^{A(T_{s},T_{e})}\left[R(T_{x},T_{x},T_{e})\right]}=R( t_{0},T_{x},T_{e})-(\mu_{e}\Sigma_{e}+\mu_{s}\rho\Sigma_{s})\Sigma_{e}.\] (43)
In the log linear approximation for the weights we can explicitly evaluate \(\tilde{R}(t,T_{x},T_{s})\) from (40) and \(\tilde{R}(t,T_{x},T_{e})\) from (34) by observing that \(w_{1}(y,x)\) and \(w_{2}(y,x)\) are \(A(t,T_{s},T_{e})\)-martingales. We find, similarly to the linear case:
\[\tilde{R}(t,T_{x},T_{s}) = R(t_{0},T_{x},T_{s})-(\nu_{s}\Sigma_{s}+\nu_{e}\rho\Sigma_{e}) \Sigma_{s}+(\mu_{s}\Sigma_{s}+\mu_{e}\rho\Sigma_{e})\Sigma_{s},\]
\[w_{1}(y,x) = \frac{A(t,T_{x},T_{e})}{A(t,T_{s},T_{e})}e^{-\left(\mu^{2}_{e} \Sigma^{2}_{e}+2\rho\mu_{e}\mu_{s}\Sigma_{e}\Sigma_{s}+\mu^{2}_{s}\Sigma^{2}_{ s}\right)/2}\times\] (44)
\[\times e^{\mu_{e}\left(y-\hat{R}(t,T_{x},T_{e})\right)+\mu_{s}\left(x- \hat{R}(t,T_{x},T_{s})\right)},\]
and
\[\tilde{R}(t,T_{x},T_{e}) = R(t_{0},T_{x},T_{e})-(\mu_{e}\Sigma_{e}+\mu_{s}\rho\Sigma_{s}) \Sigma_{e}+(\nu_{e}\Sigma_{e}+\nu_{s}\rho\Sigma_{s})\Sigma_{e}.\]
\[w_{2}(y,x) = \frac{A(t,T_{x},T_{s})}{A(t,T_{s},T_{e})}e^{-\left(\nu^{2}_{e} \Sigma^{2}_{e}+2\rho\nu_{e}\nu_{s}\Sigma_{e}\Sigma_{s}+\nu^{2}_{s}\Sigma^{2}_{ s}\right)/2}\times\] (45)
\[\times e^{\nu_{e}\left(y-\hat{R}(t,T_{x},T_{e})\right)+\nu_{s}\left(x- \hat{R}(t,T_{x},T_{s})\right)}.\]
With these exponential approximations for both \(G_{s,T_{x}}\) and \(G_{e,T_{x}}\) we shall choose parameters \(\nu_{e}\), \(\nu_{s}\), \(\mu_{e}\), and \(\mu_{s}\) to minimise
\[{\mathbb{E}^{A(T_{s},T_{e})}\left[\left(w_{1}(y,x)-w_{2}(y,x) \right)^{2}\right]}\]
in \(A(t,T_{s},T_{e})\)-measure. Expanding up to the second order in volatilities \(\Sigma_{e}\), \(\Sigma_{s}\) we see that as soon as parameters \(\mu_{e}\), \(\mu_{s}\), \(\nu_{e}\) and \(\nu_{s}\) are related as in (24) via:
\[\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mu_{s}=\frac{A(t_{0},T_{s},T_{e})} {A(t_{0},T_{x},T_{e})}\sigma_{s}, \mu_{e}=\frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{e})}\sigma_{ e},\]
\[\nu_{s}=\frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{s})}\sigma_{ s}, \nu_{e}=\frac{A(t_{0},T_{s},T_{e})}{A(t_{0},T_{x},T_{s})}\sigma_{ e},\] (47)
the variance of \(w_{1}(y,x)-w_{2}(y,x)\) is zero up to the second order in \(\Sigma_{e}\), \(\Sigma_{s}\), i.e.:
\[w_{1}(y,x)-w_{2}(y,x)\approx 1+\overline{o}(\Sigma^{2}_{e},\Sigma^{2}_{s}).\] (48)
Again as in the linear case, in order to price a midcurve swaption we just need two extra parameters \(\sigma_{e}\) and \(\sigma_{s}\) from (47). Together with the swap rates distributions \({\rm PDF}^{s}_{R_{s}}(x)\) and \({\rm PDF}^{e}_{R_{e}}(y)\), the correlation between them, \(\sigma_{e}\) and \(\sigma_{s}\) uniquely determine the midcurve swaption price in the Gaussian copula model via (17),(44), (45), from Lemma 5 and from Lemma 6.
## 3 Estimating parameters \(\sigma_{e}\) and \(\sigma_{s}\) and some numerical results.
Parameters \(\sigma_{e}\) and \(\sigma_{s}\) introduced in (24) for the linear approximation and in (47) for the log linear approximation, are related to the covariances between swap annuities’ ratios and the swap rates via
\[Cov^{A(T_{x},T_{e})}\langle\frac{A(T_{x},T_{s},T_{e})}{A(T_{x},T _{x},T_{e})},R(T_{x},T_{x},T_{e})\rangle = -\frac{A(T_{x},T_{s},T_{e})^{2}}{A(T_{x},T_{x},T_{e})^{2}}\left( \sigma_{e}\Sigma^{2}_{e}+\sigma_{s}\rho\Sigma_{e}\Sigma_{s}\right),\]
\[Cov^{A(T_{x},T_{s})}\langle\frac{A(T_{x},T_{s},T_{e})}{A(T_{x},T _{x},T_{s})},R(T_{x},T_{x},T_{s})\rangle = -\frac{A(t_{0},T_{s},T_{e})^{2}}{A(t_{0},T_{x},T_{s})^{2}}\left( \sigma_{e}\rho\Sigma_{e}\Sigma_{s}+\sigma_{s}\Sigma^{2}_{s}\right).\] (49)
The covariances on the left hand side of (49) can be estimated either from the historical data or by using methods suggested in [3]. In [3], the authors used the expansions for annuities in terms of swap rates for modeling CMS claims dependence on the volatilities and the correlations of the swap rates beyond the smile of the underlying CMS rate. The methods [3] can be immediately adapted to our case. Using the non-linear annuity mapping from [3] with a single stochastic driver, first we link all of the swap rates \(R(t,T_{x},T_{i})\), \(i=1,\dots e\), which fix at \(T_{x}\) to the stochastic driver \(Y\sim N(0,1)\) behind the long swap rate \(R(T_{x},T_{x},T_{e})\):
\[1+\tau_{i}R_{i}(T_{x},T_{x},T_{i})\approx(1+\tau_{i}R_{i})e^{\mu_{i}+\nu_{i}Y} ,\quad R_{i}={\mathbb{E}^{A(T_{x},T_{i})}}\left[R(t,T_{x},T_{i})\right],\] (50)
with \(\nu_{i}\) given by the variance \(\Sigma^{2}_{i}\) of the swap rate \(R(T_{x},T_{x},T_{i})\) and its correlation \(\rho_{e,i}\) with the long swap rate \(R(T_{x},T_{x},T_{e})\) via
\[\nu_{i}\approx\frac{\tau_{i}}{1+\tau_{i}R_{i}}\rho_{e,i}\Sigma_{i},\] (51)
and \(\mu_{j}\) determined (using the \(T_{x}\)-forward measure) recursively from
\[A_{j} = {\mathbb{E}^{T_{x}}}\left[A(T_{x},T_{x},T_{j})\right]\] (52)
\[= \sum^{j}_{i=1}\tau_{i}exp\left(-\sum^{j}_{k=i}\mu_{k}+\frac{1}{2} \left(\sum^{j}_{k=i}\nu_{k}\right)^{2}\right)\prod^{j}_{k=i}\frac{1}{1+\tau_{k }R_{k}}.\]
Within this parameterisation the annuities \(A(T_{x},T_{x},T_{e})\) and \(A(T_{x},T_{x},T_{s})\) are given by [3]:
\[A(T_{x},T_{x},T_{e}) = \sum^{e}_{i=1}\tau_{i}exp\left(-\sum^{e}_{k=i}\left(\mu_{k}+\nu_{ k}Y\right)\right)\prod^{e}_{k=i}\frac{1}{1+\tau_{k}R_{k}},\]
\[A(T_{x},T_{x},T_{s}) = \sum^{s}_{i=1}\tau_{i}exp\left(-\sum^{s}_{k=i}\left(\mu_{k}+\nu_{ k}Y\right)\right)\prod^{s}_{k=i}\frac{1}{1+\tau_{k}R_{k}}.\] (53)
Linearising the ratio \((A(T_{x},T_{x},T_{e})-A(T_{x},T_{x},T_{s}))/A(T_{x},T_{x},T_{e})\) with respect to \(Y\), we estimate the left hand side of the first equation in (49). Repeating the same procedure for the short rate \(R(T_{x},T_{x},T_{s})\) and the corresponding driver \(X\sim N(0,1)\), we estimate the left hand side of the second equation in (49).
We conducted a numerical experiment by Monte Carlo integration of the two dimensional Gaussian copula to estimate the implied correlation of the midcurve swaption. To simplify the calculation of the coefficients \(\sigma_{e}\) and \(\sigma_{s}\) along the lines of [3] we study the case of the \(1y\to 1y1y\) midcurve, where the swaption expires in one year, and the holder has the right to enter into a one year swap starting one year after the expiry (equivalently, starting two years from now). We assume that all of the swap rates’ payment frequencies are annual, i.e. all of the accrual fractions \(\tau_{i}=1\). We set the short 1y swap rate at \(2.631\%\) with a normal volatility of 60.00 bps and the long 2y rate at \(2.2347\%\) with a normal volatility of 64.18 bps. We choose the correlation between the long and the short swap rates to be \(80\%\). The calculation along the lines of [3], as described at the beginning of this section, is easier for the \(1y\to 1y1y\) midcurve with annually paid coupons than in the generic case, and we can estimate indicative levels for the parameters \(\sigma_{e}\) and \(\sigma_{s}\) using analytical expressions for the annuities in terms of just the long and the short swap rates. We found \(\sigma_{e}\) to be close to 2.0 and \(\sigma_{s}\) to be close to -1.0.
<figure><img src="content_image/1812.07415/MidCurveSwptImplCorr1y1y1y.png"><figcaption>Figure 1: Midcurve 1y→1y1y, forward swap (ATM) rate 1.8294%, the short swaprate normal vol 60 bps, the long swap rate normal vol 64.18 bps, flat volsmile, the swap rates correlation 80% and σe=2, σs=−1.</figcaption></figure>
Under the assumption of no volatility smile, we implied the correlation by strike from the prices of midcurve swaptions evaluated by Monte Carlo integration. From Figure 1, we see that the stochastic annuity ratio assumption introduces a skew into the midcurve implied correlation even in the case of flat implied volatilities of the long and the short swap rates.
## Conclusion
We developed a consistent model for midcurve swaption pricing which explicitly accounts for stochasticity of the ratios of the annuities. It gives a handle on the correlation skew which is typically risk managed via the correlation-by-strike. The latter approach is not arbitrage free.
Our paper shares a common idea with [3], and, thus, depending on the size of the book the model presented here can be used for trading a small number of midcurve products and understanding their correlation risk in terms of linear regression coefficients \(\sigma_{e}\) and \(\sigma_{s}\), or the model can be used for risk managing large books of swaptions and CMS products via full projection of the correlation risk on all of the swap rates’ volatilities and all of their pairwise correlations.
**Email address:** kostyafeldman@gmail.com.
## References
* [1] Caron, J.A., McGraw, W.J. and Stipanov, J.D. 2009. System and method for calculating a volatility carry metric. Morgan Stanley Research, http://patents.justia.com/patent/7958036.
* [2] Andersen, B.G. and Piterbarg, V.V. 2012. _Interest Rate Modeling. Volume 2: Term Structure Models._ Atlantic Financial Press.
* [3] Cedervall, S. and Piterbarg, V. 2012. CMS: covering all bases. _Risk_ March, 64-69.
* [4] Breeden, D. and Litzenberger, R. 1978. Prices of State Contingent Claims Implicit in Options Prices. _Journal of Business_ 51, 621-651.
* [5] Andersen, B.G. and Piterbarg, V.V. 2012. _Interest Rate Modeling. Volume 3: Products and Risk Management._ Atlantic Financial Press.
|
1508.01650 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 23392,
"num_imgs": 8,
"llama3_tokens_count": 7483
} | [
"content_image/1508.01650/x1.png",
"content_image/1508.01650/x2.png",
"content_image/1508.01650/x3.png",
"content_image/1508.01650/x4.png",
"content_image/1508.01650/x5.png",
"content_image/1508.01650/x6.png",
"content_image/1508.01650/x7.png",
"content_image/1508.01650/x8.png"
] | # Experimental Conditions for Determination of the Neutrino Mass Hierarchy With Reactor Antineutrinos
M. Y. Pac
pac@dsu.kr
Dongshin University, Naju, 58245, Korea
###### Abstract
This article reports the optimized experimental requirements to determine neutrino mass hierarchy using electron antineutrinos (\(\bar{\nu}_{e}\)) generated in a nuclear reactor. The features of the neutrino mass hierarchy can be extracted from the \(|\Delta m^{2}_{31}|\) and \(|\Delta m^{2}_{32}|\) oscillations by applying the Fourier sine and cosine transform to the \(L/E\) spectrum. To determine the neutrino mass hierarchy above 90% probability, the requirements on the energy resolution as a function of the baseline are studied at \(\sin^{2}2\theta_{13}=0.1\). If the energy resolution of the neutrino detector is less than \(0.04/\sqrt{E_{\nu}}\) and the determination probability obtained from Bayes’ theorem is above 90%, the detector needs to be located around 48–53 km from the reactor(s) to measure the energy spectrum of \(\bar{\nu}_{e}\). These results will be helpful for setting up an experiment to determine the neutrino mass hierarchy, which is an important problem in neutrino physics.
**PACS numbers**: 13.15.+g, 14.60.Pq, 14.60.Lm, 29.85.+c
**Keywords**: neutrino mass hierarchy, reactor experiment, Fourier transform
## I Introduction
Since the measurement of the large \(\sin^{2}2\theta_{13}\) at RENO, Daya Bay, and Double Chooz, the precise determination of the neutrino mass hierarchy, i.e. the sign of \(\Delta m^{2}_{32}\), has become a focus of neutrino physics [1; 2; 3]. It had been believed that the neutrino mass hierarchy could be determined through long-baseline experiments, mainly using accelerator neutrino beams. Recently, the capability of a reactor neutrino experiment at an intermediate baseline to distinguish between the normal and inverted hierarchies was reported.
For an intermediate-baseline neutrino experiment, many approaches have been proposed; they can be categorized into the \(\chi^{2}\) analysis methods, which are discussed in Refs. [4; 5; 6; 7; 8], and the Fourier-transform methods [5; 9; 10]. The \(\chi^{2}\) analysis methods based on the newly adopted Bayesian approach utilize all the available information from experiments, and it is straightforward to incorporate the uncertainties in order to evaluate the sensitivity, providing results that are robust and complementary to the Fourier-transform methods [11]. Although the \(\chi^{2}\) analysis methods are attractive and interesting, the Fourier-transform methods are more intuitive. The prominent merit of the Fourier-transform methods is that the mass hierarchy can be extracted without precise knowledge of the reactor antineutrino spectrum, the absolute value of the large \(|\Delta m^{2}_{31}|\), or the energy scale of the detector. The Fourier-transform methods were introduced to enhance and visualize the structures of the mass hierarchy in the frequency spectrum, as first discussed in Ref. [12].
In principle, the mass hierarchy can be determined through precise measurements of \(|\Delta m^{2}_{31}|\) and \(|\Delta m^{2}_{32}|\). As \(|\Delta m^{2}_{21}|\) is very small, only \(\sim 3~{}\%\) of \(|\Delta m^{2}_{31}|\), we have to measure \(|\Delta m^{2}_{31}|\) and \(|\Delta m^{2}_{32}|\) with a precision much better than \(3~{}\%\). However, \(|\Delta m^{2}_{31}|\) and \(|\Delta m^{2}_{32}|\) have so far been measured with a precision of only about \(3\%\) or worse[13].
Intermediate-baseline reactor neutrino experiments have been explored on the basis of precise measurements of distortions of the energy spectrum, for which the matter effect is negligible. Learned _et al._ proposed a new method to distinguish the normal and inverted hierarchies through a Fourier transform of the \(L/E\) spectrum of reactor neutrinos[12]. They pointed out that the Fourier power spectrum has a small but non-negligible shoulder next to the main peak, and that its relative position could be used to extract the mass hierarchy when a non-zero \(\theta_{13}\) is considered.
In this paper, we analyze the sensitivity of medium-baseline reactor antineutrino experiments to the neutrino mass hierarchy for a baseline range of 30–60 km and overall energy resolution, \(\delta E/\sqrt{E_{\nu}}\), in the range of 0 to \(0.08/\sqrt{E_{\nu}}\) with the Fourier-transform method. The optimal baseline length is estimated based on the expected probability of determination.
## II Detection of reactor antineutrino
In a nuclear reactor, antineutrinos are mainly produced via the \(\beta\)-decay of the fission products of the four radioactive isotopes \({}^{235}U\), \({}^{238}U\), \({}^{239}Pu\), and \({}^{241}Pu\) in the fuel. The antineutrino flux at energy \(E_{\nu}\) (in MeV) from a reactor with thermal power \(P_{th}\) (in \(\rm GW_{th}\)) is given as
\[\frac{dN}{dE_{\nu}}=\frac{P_{th}}{\sum_{k}f_{k}\epsilon_{k}}\phi(E_{\nu}) \times 6.24\times 10^{21},\] (1)
where \(f_{k}\) and \(\epsilon_{k}\) are the relative fission contribution and the energy released per fission of isotope \(k\), respectively. Further, \(\phi(E_{\nu})\) is the number of antineutrinos produced per fission and is obtained as follows[14; 16]:
\[\phi(E_{\nu}) = f_{{}^{235}U}e^{0.870-0.160E_{\nu}-0.091E^{2}_{\nu}}\] (2)
\[+ f_{{}^{239}Pu}e^{0.896-0.239E_{\nu}-0.0981E^{2}_{\nu}}\]
\[+ f_{{}^{238}U}e^{0.976-0.162E_{\nu}-0.0790E^{2}_{\nu}}\]
\[+ f_{{}^{241}Pu}e^{0.793-0.080E_{\nu}-0.1085E^{2}_{\nu}}.\]
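As a rough numerical illustration of Eqs. (1) and (2), the sketch below evaluates the unoscillated flux; the fission fractions, energies per fission, and reactor power used here are illustrative assumptions, not values adopted in this analysis.
```python
import numpy as np

# Illustrative fission fractions f_k and energies per fission eps_k (MeV);
# these numbers are assumed for the example only.
FRACTIONS = {"U235": 0.58, "U238": 0.07, "Pu239": 0.30, "Pu241": 0.05}
ENERGY_PER_FISSION = {"U235": 201.7, "U238": 205.0, "Pu239": 210.0, "Pu241": 212.4}

# Exponential-of-polynomial coefficients of Eq. (2)
COEFF = {
    "U235":  (0.870, -0.160, -0.091),
    "Pu239": (0.896, -0.239, -0.0981),
    "U238":  (0.976, -0.162, -0.0790),
    "Pu241": (0.793, -0.080, -0.1085),
}

def antinu_per_fission(E):
    """phi(E_nu) of Eq. (2): antineutrinos per fission per MeV."""
    return sum(FRACTIONS[k] * np.exp(a0 + a1 * E + a2 * E**2)
               for k, (a0, a1, a2) in COEFF.items())

def flux(E, P_th_GW):
    """dN/dE_nu of Eq. (1): antineutrinos emitted per MeV per second."""
    mean_release = sum(FRACTIONS[k] * ENERGY_PER_FISSION[k] for k in FRACTIONS)
    return P_th_GW / mean_release * antinu_per_fission(E) * 6.24e21

E = np.linspace(1.8, 8.0, 100)       # from the IBD threshold up to ~8 MeV
print(flux(E, P_th_GW=17.3)[:3])     # e.g. a 17.3 GW_th reactor complex (assumed)
```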
The antineutrino flux is modulated by neutrino oscillation. The antineutrino survival probability \(P_{ee}\) is expressed as
\[P_{ee}=1 - \cos^{4}(\theta_{13})\sin^{2}(2\theta_{12})\sin^{2}(\Delta_{21})\] (3)
\[- \cos^{2}(\theta_{12})\sin^{2}(2\theta_{13})\sin^{2}(\Delta_{31})\]
\[- \sin^{2}(\theta_{12})\sin^{2}(2\theta_{13})\sin^{2}(\Delta_{32}).\]
The oscillation phase \(\Delta_{ij}\) is defined as
\[\Delta_{ij}\equiv\frac{\Delta m^{2}_{ij}L}{4E_{\nu}},~{}~{}(\Delta m^{2}_{ij} \equiv m^{2}_{i}-m^{2}_{j})\] (4)
with a baseline \(L\). As \(\Delta_{31}\) and \(\Delta_{32}\) appear simultaneously in Eq. (3), the effect of the mass hierarchy on \(P_{ee}\) is not readily apparent. By using the relation between the squared mass differences,
\[\Delta m^{2}_{12}+\Delta m^{2}_{23}+\Delta m^{2}_{31}=0,\] (5)
we rearrange Eq. (3) to eliminate the \(\Delta_{32}\) term as follows:
\[P_{ee} = 1-\cos^{4}(\theta_{13})\sin^{2}(2\theta_{12})\sin^{2}(\Delta_{21})\] (6)
\[- \sin^{2}(2\theta_{13})\sin^{2}(\Delta_{31})\]
\[- \sin^{2}(\theta_{12})\sin^{2}(2\theta_{13})\sin^{2}(\Delta_{21}) \cos(2|\Delta_{31}|)\]
\[\pm \frac{\sin^{2}(\theta_{12})}{2}\sin^{2}(2\theta_{13})\sin(2\Delta _{21})\sin(2|\Delta_{31}|).\]
The plus (minus) sign in the fifth term on the right-hand side of Eq. (6) corresponds to the normal (inverted) mass hierarchy or NH (IH) in short.
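A minimal sketch of Eq. (6), assuming the oscillation parameters of Table 1; the conversion factor 1.267 (for \(\Delta m^{2}\) in eV\({}^{2}\), \(L\) in m, and \(E\) in MeV) is the standard convention, and the baseline/energy point in the print-out is an arbitrary example.
```python
import numpy as np

# Oscillation parameters as in Table 1
DM2_21, DM2_31 = 7.50e-5, 2.32e-3          # eV^2
S2_2T12, S2_2T13 = 0.857, 0.1
T12 = 0.5 * np.arcsin(np.sqrt(S2_2T12))
T13 = 0.5 * np.arcsin(np.sqrt(S2_2T13))

def Delta(dm2, L_km, E_MeV):
    """Oscillation phase of Eq. (4); the 1.267 assumes dm2 in eV^2, L in m, E in MeV."""
    return 1.267 * dm2 * (L_km * 1.0e3) / E_MeV

def P_ee(L_km, E_MeV, hierarchy="NH"):
    """Survival probability of Eq. (6); the +/- of the last term selects NH/IH."""
    d21, d31 = Delta(DM2_21, L_km, E_MeV), Delta(DM2_31, L_km, E_MeV)
    sign = 1.0 if hierarchy == "NH" else -1.0
    p = 1.0
    p -= np.cos(T13)**4 * S2_2T12 * np.sin(d21)**2
    p -= S2_2T13 * np.sin(d31)**2
    p -= np.sin(T12)**2 * S2_2T13 * np.sin(d21)**2 * np.cos(2.0 * abs(d31))
    p += sign * 0.5 * np.sin(T12)**2 * S2_2T13 * np.sin(2.0 * d21) * np.sin(2.0 * abs(d31))
    return p

print(P_ee(50.0, 4.0, "NH"), P_ee(50.0, 4.0, "IH"))   # arbitrary example point
```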
In ongoing reactor experiments, we assume that protons will be used as targets to detect electron antineutrinos via the inverse-beta-decay (IBD), which produces a neutron and positron. The antineutrino distribution observed with a detector having \(N_{p}\) free protons can be expressed for an exposure time \(T\) as follows:
\[\frac{dN^{osc}}{dE_{\nu}}=\frac{N_{P}T}{4\pi L^{2}}\frac{dN}{dE_{\nu}}P_{ee}(L ,E_{\nu})\sigma_{IBD}(E_{\nu}),\] (7)
where \(\sigma_{IBD}\) is the cross section of the IBD process and \(L\) is the baseline length.
We use the distribution of the expected antineutrino events from the above expression. For the IBD cross section, we use the following expression from Vogel and Beacom’s work[15]:
\[\sigma_{IBD}=0.0952(E_{e}p_{e})\times 10^{-42}{\rm cm^{2}}.\] (8)
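Combining Eqs. (1), (6), (7), and (8) gives the unsmeared oscillated event spectrum. The sketch below assumes leading-order IBD kinematics (\(E_{e}\approx E_{\nu}-1.293\) MeV), reuses the `flux` and `P_ee` helpers sketched above, and uses placeholder values for the number of target protons, exposure time, and thermal power.
```python
import numpy as np

M_E, DELTA_NP = 0.511, 1.293     # MeV; zeroth-order IBD kinematics (assumption)

def sigma_ibd(E_nu):
    """IBD cross section of Eq. (8) in cm^2, with E_e = E_nu - (m_n - m_p)."""
    E_e = E_nu - DELTA_NP
    p_e = np.sqrt(np.maximum(E_e**2 - M_E**2, 0.0))
    return 0.0952e-42 * E_e * p_e

def oscillated_spectrum(E_nu, L_km, N_p, T_sec, P_th_GW, hierarchy="NH"):
    """dN^osc/dE_nu of Eq. (7): expected events per MeV after exposure time T."""
    L_cm = L_km * 1.0e5
    return (N_p * T_sec / (4.0 * np.pi * L_cm**2)
            * flux(E_nu, P_th_GW)              # Eq. (1), helper sketched above
            * P_ee(L_km, E_nu, hierarchy)      # Eq. (6), helper sketched above
            * sigma_ibd(E_nu))

E = np.linspace(1.8, 8.0, 200)
spec_nh = oscillated_spectrum(E, L_km=50.0, N_p=1.5e33, T_sec=3.15e7,
                              P_th_GW=17.3, hierarchy="NH")   # placeholder exposure
```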
In order to study the sensitivity to the mass hierarchy, we use the Fourier-transform method together with Monte Carlo simulations to compare the simulated IBD energy spectrum with the expected spectrum in both the NH and IH cases.
Taking into account the detector response, the reactor electron antineutrino \(\bar{\nu}_{e}\)\(L/E\) spectrum becomes
\[\frac{dN^{osc}}{dE^{obs}_{\nu}}=\int{dE_{\nu}\frac{dN^{osc}}{dE_{\nu}}R(E_{\nu },E^{obs}_{\nu},\delta E_{\nu})},\] (9)
where \(E_{\nu}\) is the actual \(\bar{\nu}_{e}\) energy, \(E^{obs}_{\nu}\) is the observed \(\bar{\nu}_{e}\) energy with the detector response, \(\delta E_{\nu}\) is the energy resolution, and \(R(E,E^{\prime})\) describes the detector response function including effects such as the energy resolution and energy scale. In this study, we take the normalized Gaussian function as the response function:
\[R(E^{obs}_{\nu},E_{\nu},\delta E_{\nu})=\frac{1}{\sqrt{2\pi}\delta E_{\nu}}exp \left\{{-\frac{(E^{obs}_{\nu}-E_{\nu})^{2}}{2{\delta E_{\nu}}^{2}}}\right\}.\] (10)
As the neutrino energy is usually measured using scintillators, the energy is typically proportional to the number of photoelectrons, and the error is dominated by the photoelectron statistics. Therefore, the neutrino energy resolution is proportional to \(1/\sqrt{{E}_{\nu}}\). In general, the detector energy resolution is parameterized into two parts:
\[\frac{\delta E_{\nu}}{E_{\nu}}=\sqrt{\frac{a^{2}}{E_{\nu}}+b^{2}}.\] (11)
The first term represents the uncertainty from statistical fluctuation, and the second term originates from the systematic uncertainty. In this study, \(b=0\) is assumed for simplicity.
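The smearing of Eqs. (9)–(11) with \(b=0\) amounts to a Gaussian convolution of the binned spectrum; a minimal sketch follows, where the binning and the toy input spectrum are placeholders standing in for \(dN^{osc}/dE_{\nu}\).
```python
import numpy as np

def smear_spectrum(E_true, spectrum, a=0.04, b=0.0):
    """Convolve a binned spectrum with the Gaussian response of Eq. (10).

    The resolution follows Eq. (11): sigma_E = E*sqrt(a**2/E + b**2),
    i.e. sigma_E = a*sqrt(E) for b = 0.
    """
    E_obs = E_true
    smeared = np.zeros_like(spectrum)
    dE = E_true[1] - E_true[0]
    for E, n in zip(E_true, spectrum):
        sigma = E * np.sqrt(a**2 / E + b**2)
        gauss = np.exp(-0.5 * ((E_obs - E) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
        smeared += n * gauss * dE          # discretized version of Eq. (9)
    return smeared

E = np.arange(1.8, 8.0, 0.01)
toy = np.exp(-(E - 3.5) ** 2)              # toy spectrum, not a physical one
print(smear_spectrum(E, toy, a=0.04)[:3])
```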
## III Extraction of the mass hierarchy
Before the measurement of the surprisingly large \(\sin^{2}2\theta_{13}\), it had been known that at the oscillation maximum of \(\Delta_{12}\), which corresponds to a baseline of approximately 58 km, the sensitivity to the mass hierarchy is maximized at \(\sin^{2}2\theta_{13}\sim 0.02\). As \(\sin^{2}2\theta_{13}\) is no longer small, the sensitivity to mass hierarchy needs to be explored as a function of the baseline, \(L\), and the detector energy resolution, \(\delta E_{\nu}/E_{\nu}\).
In this study, each Monte Carlo experiment generates a set of 500,000 \(\bar{\nu}_{e}\) events by sampling \({dN^{osc}}/{dE^{obs}_{\nu}}\) with the input parameters \(L\) and \(\delta E_{\nu}/E_{\nu}\). The default oscillation parameters are taken from Refs. [1; 2; 13] and listed in Table 1, together with the explored ranges of baseline and energy resolution.
Δm221 [eV2] | Δm231 [eV2] | sin22θ12 | sin22θ13
---|---|---|---
7.50×10−5 | 2.32×10−3 | 0.857 | 0.1
L [km] | a | b
---|---|---
30 ≤ L ≤ 60 | 0 ≤ a ≤ 0.08 | 0
Table 1: Default values of neutrino oscillation parameters and the explored ranges of other input parameters.
A total of 72,000 experiment samples are independently generated for every 2 km in the baseline and every 0.01 step of the energy resolution, \(\delta E/\sqrt{E_{\nu}}\). Figure 1 shows the \(\bar{\nu}_{e}\)\(L/E\) spectra at 50-km baseline with the energy resolution varying from 0, which corresponds to an ideal detector, to \(0.08/\sqrt{E_{\nu}}\). As the mass-squared differences appear as frequencies in \(L/E_{\nu}\), as indicated by Eq. (6), a Fourier transform of \(N(L/E_{\nu})\) enhances the sensitivity to the mass hierarchy. The frequency spectrum can be obtained using the following Fourier sine transform (FST) and Fourier cosine transform (FCT):
\[FST(\omega)=\int^{t_{max}}_{t_{min}}N(t)\sin(\omega t)dt,\] (12)
\[FCT(\omega)=\int^{t_{max}}_{t_{min}}N(t)\cos(\omega t)dt,\]
where \(\omega=2.54\Delta m^{2}_{ij}\) is the frequency and \(t=L/E_{\nu}\) is the variable in \(L/E_{\nu}\) space, varying from \(t_{min}=L/E_{max}\) to \(t_{max}=L/E_{min}\). Once a finite energy resolution is introduced, the phase difference over \(L/E_{\nu}\) is significantly smeared out.
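The transforms of Eq. (12) can be evaluated directly on a binned \(L/E\) spectrum. The sketch below assumes \(t=L/E_{\nu}\) in m/MeV so that the stated convention \(\omega=2.54\,\delta m^{2}\) applies, and uses a toy modulated spectrum and an illustrative \(\delta m^{2}\) scan range around \(|\Delta m^{2}_{31}|\).
```python
import numpy as np

def fourier_spectra(t, N_t, dm2_grid):
    """FST and FCT of Eq. (12) on a binned L/E spectrum.

    t        : uniformly spaced L/E_nu grid (assumed in m/MeV so that omega = 2.54*dm2)
    N_t      : event rate N(t) in each bin
    dm2_grid : trial values of delta m^2 in eV^2
    """
    dt = t[1] - t[0]
    fst = np.array([np.sum(N_t * np.sin(2.54 * dm2 * t)) * dt for dm2 in dm2_grid])
    fct = np.array([np.sum(N_t * np.cos(2.54 * dm2 * t)) * dt for dm2 in dm2_grid])
    return fst, fct

# Toy example for a 50-km baseline: t from L/E_max to L/E_min
t = np.linspace(50.0e3 / 8.0, 50.0e3 / 1.8, 2000)
N_t = 1.0 + 0.1 * np.cos(2.54 * 2.32e-3 * t)        # toy modulation, not real data
dm2_grid = np.linspace(2.0e-3, 2.8e-3, 41)          # illustrative scan around |Dm2_31|
fst, fct = fourier_spectra(t, N_t, dm2_grid)
print(dm2_grid[np.argmax(fct)])                     # should lie near 2.32e-3
```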
<figure><img src="content_image/1508.01650/x1.png"><figcaption>Figure 1: L/Eν spectra at 50-km baseline for normal hierarchy (solid line) andinverted hierarchy (dotted line) with different detector energy resolutions.</figcaption></figure>
Figures 2 and 3 show the FST and FCT spectra obtained through the Monte Carlo simulation for \(\delta m^{2}\) from 0.002 to 0.028, varied in steps of \(2\times 10^{-5}\), at different energy resolutions. The impact of the energy resolution is evident: noisy peaks and valleys fluctuate more strongly as the energy resolution worsens. The main peak and valley remain distinctive and can be used to determine the neutrino mass hierarchy as long as \(\delta E_{\nu}/E_{\nu}\lesssim 0.05/\sqrt{E_{\nu}}\).
<figure><img src="content_image/1508.01650/x2.png"><figcaption>Figure 2: Fourier sine transformed (FST) reactor ¯νe event rate from 50-kmbaseline in arbitrary units for sin22θ13=0.1, normal hierarchy (solid line),and inverted hierarchy (dotted line) at different energy resolutions.</figcaption></figure>
<figure><img src="content_image/1508.01650/x3.png"><figcaption>Figure 3: Fourier cosine transformed (FCT) reactor ¯νe event rate from 50-kmbaseline in arbitrary units for sin22θ13=0.1, normal hierarchy (solid line),and inverted hierarchy (dotted line) at different energy resolutions.</figcaption></figure>
We introduce parameters \(PV_{FST}\) and \(PV_{FCT}\) to quantify the features of FST and FCT spectra:
\[PV_{FST}=\frac{A_{p}-|A_{v}|}{A_{p}+|A_{v}|}\frac{\delta m^{2}_{v}-\delta m^{2 }_{p}}{|\delta m^{2}_{v}-\delta m^{2}_{p}|}\] (13)
and
\[PV_{FCT}=\frac{A_{p}-|A_{v}|}{A_{p}+|A_{v}|}\frac{\delta m^{2}_{v}-\delta m^{2 }_{p}}{|\delta m^{2}_{v}-\delta m^{2}_{p}|},\] (14)
where \(A_{p}\) and \(A_{v}\) are the amplitudes of the peak and valley, respectively, and \(\delta m^{2}_{p}\) and \(\delta m^{2}_{v}\) are the values of \(\delta m^{2}\) at the peak and valley positions, respectively. Figure 4 shows the distributions of \(PV_{FST}\) and \(PV_{FCT}\) for 500 experiments at different energy resolutions. Two clusters of points, represented by the red open circles (bottom right) and blue solid circles (top left) in the plane of (\(PV_{FST}\), \(PV_{FCT}-PV_{FST}\)) and corresponding to the NH and IH cases, respectively, occupy clearly separated regions when \(\delta E_{\nu}/E_{\nu}\lesssim 0.05/\sqrt{E_{\nu}}\). The upper and lower parts of the scatter plot correspond to IH and NH, respectively. The distinctive features of the NH and IH cases become smeared out as the energy resolution worsens, as shown in Fig. 5.
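Extracting the peak/valley parameters of Eqs. (13) and (14) from a transformed spectrum can be sketched as follows; taking the global maximum and minimum as the main peak and valley is a simplification of the feature selection.
```python
import numpy as np

def peak_valley(dm2_grid, spectrum):
    """PV parameter of Eqs. (13)/(14) for an FST or FCT spectrum."""
    i_p, i_v = np.argmax(spectrum), np.argmin(spectrum)
    A_p, A_v = spectrum[i_p], spectrum[i_v]
    dm2_p, dm2_v = dm2_grid[i_p], dm2_grid[i_v]
    amplitude = (A_p - abs(A_v)) / (A_p + abs(A_v))
    ordering = np.sign(dm2_v - dm2_p)     # +1 if the valley lies above the peak
    return amplitude * ordering

# e.g. with the fst, fct arrays from the sketch above:
# PV_FST, PV_FCT = peak_valley(dm2_grid, fst), peak_valley(dm2_grid, fct)
```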
At large values of \(\sin^{2}2\theta_{13}\), the uncertainty of \(|\Delta m^{2}_{31}|\) has little effect on the FST and FCT spectra. This is because \(\sin^{2}2\theta_{13}\) controls the narrow modulation in the \(L/E\) spectrum more strongly than \(\sin(2|\Delta_{31}|)\) does in the last term of Eq. (6). The effect of the uncertainty of \(\Delta m^{2}_{31}\) is shown in Fig. 6.
<figure><img src="content_image/1508.01650/x4.png"><figcaption>Figure 4: PVFCT−PVFST vs. PVFST scatter plots obtained from 50-km baseline fornormal hierarchy (red open circle) and inverted hierarchy (blue solid circle)at different energy resolutions. In the case of an energy resolution ofδE/E≤0.03/√E, we recognize that points from normal hierarchy (bottom right)and points from inverted hierarchy (top left) are well isolated from eachother.</figcaption></figure>
<figure><img src="content_image/1508.01650/x5.png"><figcaption>Figure 5: PVFCT−PVFST distributions obtained from 50-km baseline for normalhierarchy (red solid line) and inverted hierarchy (blue dotted line) atdifferent energy resolutions. In the case of an energy resolution ofδE/E≥0.05/√E, we could not distinguish normal hierarchy from invertedhierarchy.</figcaption></figure>
<figure><img src="content_image/1508.01650/x6.png"><figcaption>Figure 6: The effect of the uncertainty of |Δm231| on the Fourier spectra at50-km baseline. Varying Δm231 over its uncertainty range has not significanteffect on the characteristic features of Fourier sine and cosine spectra. Here|Δm231|=2.32+0.12−0.09×10−3 is consideredpdg .</figcaption></figure>
Now we consider a method to discriminate between the normal and inverted hierarchies using the information gathered from an experiment. If the analysis suggested in this paper is performed, the experiment will be placed on the plane of \(PV_{FST}\) versus \(PV_{FCT}-PV_{FST}\) as shown in Fig. 4, falling in either the NH or the IH region. Can we then assess quantitatively whether the neutrino mass hierarchy is normal or inverted from that point? The relevant quantity is the probability of the hierarchy being NH (IH) given that an experiment happens to be placed in the NH (IH) region: we name it the success probability, \(P_{NH(IH)}\). This probability is simply calculated using classical Bayes’ theorem. For example, for NH, Bayes’ theorem says,
\[P(NH|x)=\frac{P(x|NH)P(NH)}{P(x)},\] (15)
where \(P(NH|x)\) is the probability of the hierarchy being NH given that an experiment is found in the NH region, \(P(x|NH)\) is the probability of being found in the NH region given that the hierarchy is NH, \(P(NH)\) is the prior probability of NH, and \(P(x)\) is the probability of being found in the NH region.
The probability that an experiment will be found in its own region can only be estimated from a large number of experiments, which is why many simulated experiments are needed. Following classical Bayes’ theorem, we have
\[P_{NH(IH)}=\frac{N^{NH(IH)}_{success}}{N^{NH(IH)}_{total}},\] (16)
where \(N^{NH(IH)}_{success}\) is the number of NH (IH) experiments found in the NH (IH) region and \(N^{NH(IH)}_{total}\) is the total number of experiments found in the NH (IH) region. In this approach, a 50% probability implies a null result.
Figure 7 shows the \(P_{NH}\) and \(P_{IH}\) values obtained from the simulated event samples over the baselines at different energy resolutions. The numerical values of these probabilities obtained from the MC samples are listed in Table 2. In the case of \(\delta E/E\leq 0.03/\sqrt{E}\), \(P_{NH}\) is greater than 95% for baselines of 38–56 \({\rm km}\). Similarly, \(P_{IH}\) is greater than 95% for baselines of 32–52 \({\rm km}\) when \(\delta E/E\leq 0.04/\sqrt{E}\). As we do not know a priori which hierarchy is realized, we introduce a new probability that quantifies whether an experiment is found in its correct region, provided the energy resolution is sufficient.
The probability that an experiment will be found in its correct region, namely the determination probability, \(P_{D}\), is expressed as
\[P_{D}\equiv P_{NH}\cdot P_{IH}.\] (17)
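The bookkeeping of Eqs. (16) and (17) is then straightforward; the counts below are placeholders standing in for the classification of the simulated experiments in the \((PV_{FST},PV_{FCT}-PV_{FST})\) plane.
```python
def determination_probability(n_true_nh_in_nh, n_all_in_nh, n_true_ih_in_ih, n_all_in_ih):
    """P_NH, P_IH of Eq. (16) and P_D of Eq. (17).

    n_true_nh_in_nh : NH experiments found in the NH region
    n_all_in_nh     : all experiments found in the NH region
    (and analogously for the IH region)
    """
    p_nh = n_true_nh_in_nh / n_all_in_nh
    p_ih = n_true_ih_in_ih / n_all_in_ih
    return p_nh, p_ih, p_nh * p_ih

# Placeholder counts for 500 NH and 500 IH pseudo-experiments
print(determination_probability(452, 471, 457, 480))
```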
<figure><img src="content_image/1508.01650/x7.png"><figcaption>Figure 7: PNH and PIH from Eq. (16) as a function of baseline at differentenergy resolutions.</figcaption></figure>
<figure><img src="content_image/1508.01650/x8.png"><figcaption>Figure 8: Determination probabilities, PD, from Eq. (17) as a function ofbaseline at different energy resolutions.</figcaption></figure>
As shown in Fig. 8, \(P_{D}\) has a value of \(\geq 99\%\) when \(\delta E/E\leq 0.03/\sqrt{E}\) with a baseline of 40–52 \({\rm km}\). As the energy resolution worsens, \(P_{D}\) rapidly decreases. When \(\delta E/E=0.04/\sqrt{E}\), the baseline is 48–53 \({\rm km}\) at \(P_{D}\geq 90\%\) as shown in Table II.
baseline [km] | a | PNH | σPNH | PIH | σPIH | PD | σPD
---|---|---|---|---|---|---|---
30 | | 0.776 | 0.024 | 0.991 | 0.006 | 0.769 | 0.025
32 | | 0.857 | 0.020 | 0.996 | 0.004 | 0.853 | 0.021
34 | | 0.867 | 0.020 | 1 | 0 | 0.867 | 0.020
36 | | 0.938 | 0.014 | 1 | 0 | 0.938 | 0.014
38 | | 0.987 | 0.007 | 1 | 0 | 0.987 | 0.007
40 | | 0.997 | 0.003 | 1 | 0 | 0.997 | 0.003
42 | 0.03 | 1 | 0 | 1 | 0 | 1 | 0
44 | | 0.997 | 0.003 | 1 | 0 | 0.997 | 0.003
46 | | 1 | 0 | 1 | 0 | 1 | 0
48 | | 1 | 0 | 1 | 0 | 1 | 0
50 | | 1 | 0 | 0.998 | 0.002 | 0.998 | 0.002
52 | | 0.997 | 0.003 | 0.997 | 0.003 | 0.993 | 0.005
54 | | 1 | 0 | 0.935 | 0.014 | 0.935 | 0.014
55 | | 0.991 | 0.005 | 0.907 | 0.017 | 0.900 | 0.018
56 | | 0.984 | 0.007 | 0.846 | 0.021 | 0.832 | 0.022
30 | | 0.607 | 0.0282 | 0.948 | 0.013 | 0.576 | 0.031
32 | | 0.670 | 0.027 | 0.975 | 0.009 | 0.653 | 0.029
34 | | 0.642 | 0.028 | 0.978 | 0.008 | 0.629 | 0.029
36 | | 0.690 | 0.027 | 0.988 | 0.006 | 0.682 | 0.027
38 | | 0.724 | 0.026 | 0.995 | 0.004 | 0.720 | 0.026
40 | | 0.756 | 0.025 | 1.000 | 0.000 | 0.756 | 0.025
42 | 0.04 | 0.867 | 0.020 | 0.996 | 0.004 | 0.863 | 0.020
44 | | 0.847 | 0.021 | 0.996 | 0.004 | 0.844 | 0.021
46 | | 0.861 | 0.020 | 0.989 | 0.006 | 0.851 | 0.021
48 | | 0.905 | 0.017 | 0.989 | 0.006 | 0.896 | 0.018
50 | | 0.913 | 0.013 | 0.985 | 0.005 | 0.899 | 0.014
52 | | 0.938 | 0.014 | 0.959 | 0.011 | 0.900 | 0.018
54 | | 0.891 | 0.018 | 0.854 | 0.020 | 0.761 | 0.028
55 | | 0.900 | 0.018 | 0.811 | 0.023 | 0.722 | 0.029
56 | | 0.861 | 0.020 | 0.816 | 0.022 | 0.702 | 0.030
Table 2: Numerical values of PNH, PIH, and PD for 0.03/√E and 0.04/√E. Errors are calculated from the binomial distribution.
## IV Discussion
We have studied the experimental requirements to determine the neutrino mass hierarchy using the Fourier sine and cosine transforms of the reactor neutrino \(L/E\) spectrum at \(\sin^{2}2\theta_{13}=0.1\). The parameters \(PV_{FST}\) and \(PV_{FCT}\) were defined to extract the features of the Fourier sine and cosine spectra, and the mass hierarchy could be obtained from the determination probability, \(P_{D}\), based on Bayes’ theorem.
Since varying \(|\Delta m^{2}_{31}|\) over its uncertainty has little effect on the FST and FCT spectra at \(\sin^{2}2\theta_{13}=0.1\), \(P_{D}\) is less dependent on the uncertainty of \(|\Delta m^{2}_{31}|\) than on the value of \(\sin^{2}2\theta_{13}\).
As defined in Eqs. (16) and (17), the probability \(P_{D}\) is closely related to the distinct features of each mass hierarchy. Each value of \(P_{D}\) indicates the probability that an experiment will be found inside its correct \(NH\) or \(IH\) region in the plane of \(PV_{FST}\) and \(PV_{FCT}-PV_{FST}\). These distinct features of the two mass hierarchies suggest that the analysis method described in this paper, a simple and straightforward approach, can be used to determine the neutrino mass hierarchy through the determination probability \(P_{D}\) based on the Fourier sine and cosine transforms of the \(L/E\) spectrum.
## V Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF-2013R1A1A2011108) and also supported, in part, by the NRF grant No. 2009-0083526.
## References
* (1) J. K. Ahn _et al.,_ (RENO Collaboration) Phys. Rev. Lett. **108**, 191802 (2012).
* (2) F. P. An _et al.,_ (Daya Bay Collaboration) Phys. Rev. Lett. **108**, 171803 (2012).
* (3) M. Apollonio _et al._, Phys. Rev. **D61**, 012001 (2000).
* (4) P. Ghoshal and S. T. Petcov, JHEP **1103**, 058 (2011), arXiv:1011.1646.
* (5) X. Qian, D. A. Dwyer, R. D. McKeown, P. Vogel, W. Wang and C. Zhang, arXiv:1208.1551 (2012).
* (6) P. Ghoshal and S. T. Petcov, arXiv:1208.6473 (2012).
* (7) Y.-F. Li, J. Cao, Y. Wang and L. Zhan, arXiv:1303.6733 (2013).
* (8) Yoshitaro Takaesu, arXiv:1304.5306 (2013).
* (9) E. Ciuffoli, J. Evslin and X. Zhang, arXiv:1208.1991 (2012).
* (10) E. Ciuffoli, J. Evslin and X. Zhang, arXiv:1209.2227 (2012).
* (11) X. Qian, A. Tan, W. Wang, J. J. Ling, R. D. McKeown and C. Zhang, arXiv:1210.3651 (2012).
* (12) J. Learned, S. T. Dye, S. Pakvasa and R. C. Svoboda, Phys. Rev. **D78**, 071302 (2008).
* (13) J. Beringer _et al.,_ Phys. Rev. **D86**, 010001 (2012).
* (14) P. Vogel and E. Engel, Phys. Rev. **D39**, 3378 (1989).
* (15) P. Vogel and J. F. Beacom, Phys. Rev. **D60**, 053003 (1999).
* (16) P. Huber and T. Schwetz, Phys. Rev. **D70**, 053011 (2004).
|
1303.5514 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 18584,
"num_imgs": 0,
"llama3_tokens_count": 6518
} | [] | # Superradiance and statistical entropy of hairy black hole in three dimensions
Myungseok Eune
younms@sogang.ac.kr
Research Institute for Basic Science, Sogang University, Seoul, 121-742, Republic of Korea
Yongwan Gim
yongwan89@sogang.ac.kr
Department of Physics, Sogang University, Seoul 121-742, Republic of Korea
Wontae Kim
wtkim@sogang.ac.kr
Research Institute for Basic Science, Sogang University, Seoul, 121-742, Republic of Korea
Department of Physics, Sogang University, Seoul 121-742, Republic of Korea
Center for Quantum Spacetime, Sogang University, Seoul 121-742, Republic of Korea
February 22, 2024
###### Abstract
We calculate the statistical entropy of a rotating hairy black hole by taking into account superradiant modes in the brick wall method. The UV cutoff is independent of the gravitational hair, which gives the well-defined area law of the entropy. It can be shown that the angular momentum and the energy of the matter field depend on the gravitational hair. For vanishing gravitational hair, it turns out that the energy of the matter field is related to both the black hole mass and the black hole angular momentum, whereas the angular momentum of the matter field is directly proportional to the angular momentum of the black hole.
**Keywords**: Black Hole, Thermodynamics, Modified Gravity
**PACS numbers**: 04.70.Dy, 04.50.Kd
## I Introduction
Three-dimensional topological massive gravity Deser _et al._ (1, 2) has attracted much attention because it has a rich structure even though it is a lower-dimensional theory of gravity. Recently, new massive gravity Bergshoeff _et al._ (3, 4) has also been studied intensively; in particular, it admits a new type of rotating black hole solution apart from the rotating Banados-Teitelboim-Zanelli (BTZ) black hole Banados _et al._ (1992). The new rotating black hole has three hairs: two of them are the mass and the angular momentum, and the other corresponds to the gravitational hair Oliva _et al._ (2009); Giribet _et al._ (2009); Kwon _et al._ (2011); Perez _et al._ (2011).
On the other hand, the statistical origin of the entropy for black holes has been studied in terms of the brick wall method ’t Hooft (1985). Subsequently, there have been extensive applications of the brick wall method to various black holes Mann _et al._ (1992); Ghosh and Mitra (1994); 13. In connection with the rotating hairy black hole, there exists a superradiant mode, so the brick wall method is nontrivial and should be treated carefully Ho and Kang (1998). Moreover, a thin-layer method has been introduced Liu and Zhao (2001); Zhou and Liu (2004), in which local thermal equilibrium is assumed and the divergent term due to a box of infinite size no longer appears. For a thin layer near the horizon, this method is valid if the proper thickness is chosen so as to maintain the local equilibrium state, since the degrees of freedom are dominant near the horizon.
In this paper, we calculate the entropy of the rotating hairy black hole Oliva _et al._ (2009) in new massive gravity Bergshoeff _et al._ (3, 4) by using the brick wall method ’t Hooft (1985) in the thin-layer approximation Liu and Zhao (2001); Zhou and Liu (2004), where the angular speed of a particle near the horizon can be approximately fixed to a constant. In particular, we show how to take the superradiant modes into account in the entropy calculation of the rotating hairy black hole, which has complicated metric components; the entropy of the hairy black hole in connection with the superradiant modes in the rotating case deserves to be studied. In Sec. II, the new rotating hairy black hole and its thermodynamic quantities are introduced; additionally, the explicit forms of the horizons and the radius of the ergosphere are written down analytically. In Sec. III, we calculate the thermodynamic quantities such as the entropy, the angular momentum, and the energy by considering the superradiant and nonsuperradiant modes simultaneously. A summary is given in Sec. IV.
## II Rotating Hairy Black Hole
We consider the rotating black hole described by Oliva _et al._ (2009); Giribet _et al._ (2009); Kwon _et al._ (2011); Perez _et al._ (2011)
\[ds^{2}=-N^{2}fdt^{2}+\frac{dr^{2}}{f}+r^{2}(d\phi-\Omega_{0}dt)^ {2},\] (1)
where \(N(r)=1+b\ell^{2}(1-\eta)/(4\Sigma)\), \(f(r)=(\Sigma^{2}/r^{2})\left[\Sigma^{2}/\ell^{2}+b(1+\eta)\Sigma/2+b^{2}\ell^{ 2}(1-\eta)^{2}/16-\mu\eta\right]\), \(\Omega_{0}(r)=a(\mu-b\Sigma)/(2r^{2})\), and \(\Sigma(r)^{2}=r^{2}-\mu\ell^{2}(1-\eta)/2-b^{2}\ell^{4}(1-\eta)^{2}/16\). \(a\), \(b\), and \(\mu\) are integration constants and \(\Lambda\) and \(\eta\) have been defined as \(\Lambda=-1/(2\ell^{2})\) and \(\eta\equiv\sqrt{1-a^{2}/\ell^{2}}\). For the limit of \(b=0\), the metric functions reduce to \(N=1\), \(f=-\mu+r^{2}/\ell^{2}+\mu^{2}a^{2}/(4r^{2})\), and \(\Omega_{0}=\mu a/(2r^{2})\), and Eq. (1) simply describes the rotating BTZ black hole Banados _et al._ (1992). Note that the range of the rotating parameter \(a\) is given by \(-\ell\leq a\leq\ell\). Then the Arnowitt-Deser-Misner (ADM) mass and the ADM angular momentum can be obtained as Kwon _et al._ (2011)
\[\mathcal{M}=\frac{\mu}{4G}+\frac{b^{2}\ell^{2}}{16G},\qquad \mathcal{J}=\mathcal{M}a,\] (2)
respectively. \(G\) is a Newton constant in three dimensions, which is set to \(G=1\) for convenience. The entropy has been also obtained as \(\mathcal{S}=\pi\ell\sqrt{2\mathcal{M}(1+\eta)}\) Oliva _et al._ (2009); Giribet _et al._ (2009); Perez _et al._ (2011). Now, the condition of \(f(r)=0\) gives
\[r_{\pm} =\ell\sqrt{2(1+\eta)}\left(\sqrt{\mathcal{M}}\pm\frac{|b|\ell}{4} \sqrt{\eta}\right),\] (3)
\[r_{0} =\ell\sqrt{2(1-\eta)}\left[\mathcal{M}-\frac{b^{2}\ell^{2}}{32}(1 +\eta)\right]^{1/2},\] (4)
which yields the horizon \(r_{+}\). For \(b=0\), Eq. (1) describes the rotating BTZ black hole, which has the two horizons \(r_{\pm}=\ell\sqrt{\mu(1+\eta)/2}\) and \(r_{0}=\ell\sqrt{\mu(1-\eta)/2}\). In particular, for \(a=0\), the black hole has the two horizons \(r_{\pm}=2\ell(\sqrt{\mathcal{M}}\pm|b|\ell/4)\) with \(r_{0}=0\). Next, the Hawking temperature of the black hole can be obtained from the surface gravity,
\[T_{\rm H} =\frac{1}{4\pi}N(r_{+})f^{\prime}(r_{+})\]
\[=\frac{\eta}{\pi\ell}\sqrt{\frac{2\mathcal{M}}{1+\eta}}.\] (5)
Consequently, the first law of thermodynamics \(d\mathcal{M}=T_{\rm H}d\mathcal{S}+\Omega_{H}d\mathcal{J}\) is satisfied.
The maximum \(\Omega_{+}\) and the minimum \(\Omega_{-}\) of the angular velocity for a particle are given by
\[\Omega_{\pm}(r) =\Omega_{0}\pm\frac{N}{r}\sqrt{f}.\] (6)
Note that the angular velocity of a particle on the event horizon becomes
\[\Omega_{H}=\Omega_{0}(r_{+})=\frac{1}{\ell}\sqrt{\frac{1-\eta}{1+ \eta}},\] (7)
and the radius \(r_{e}\) of the ergosphere is explicitly written as
\[r_{e} =2\ell\sqrt{\mathcal{M}+\frac{b^{2}\ell^{2}}{16}+\frac{|b|\ell}{4 }\sqrt{2(1+\eta)}\left[\mathcal{M}+\frac{b^{2}\ell^{2}}{32}(1-\eta)\right]^{1/ 2}},\] (8)
which can be calculated from \(\Omega_{-}(r_{e})=0\).
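A short numerical sketch of Eqs. (2)–(8): it evaluates the ADM charges, horizons, Hawking temperature, horizon angular velocity, and ergosphere radius for an arbitrary choice of \((\mu,a,b)\) with \(G=1\); the parameter values below are placeholders.
```python
import numpy as np

def hairy_bh_quantities(mu, a, b, ell=1.0):
    """Geometric and thermodynamic quantities of the rotating hairy black hole
    (G = 1), following Eqs. (2)-(8); requires |a| <= ell."""
    eta = np.sqrt(1.0 - a**2 / ell**2)
    M = mu / 4.0 + b**2 * ell**2 / 16.0                       # ADM mass, Eq. (2)
    J = M * a                                                 # ADM angular momentum
    r_plus = ell * np.sqrt(2.0 * (1.0 + eta)) * (np.sqrt(M) + abs(b) * ell / 4.0 * np.sqrt(eta))
    r_minus = ell * np.sqrt(2.0 * (1.0 + eta)) * (np.sqrt(M) - abs(b) * ell / 4.0 * np.sqrt(eta))
    r_0 = ell * np.sqrt(2.0 * (1.0 - eta)) * np.sqrt(M - b**2 * ell**2 / 32.0 * (1.0 + eta))
    T_H = eta / (np.pi * ell) * np.sqrt(2.0 * M / (1.0 + eta))          # Eq. (5)
    Omega_H = np.sqrt((1.0 - eta) / (1.0 + eta)) / ell                  # Eq. (7)
    r_e = 2.0 * ell * np.sqrt(M + b**2 * ell**2 / 16.0
                              + abs(b) * ell / 4.0 * np.sqrt(2.0 * (1.0 + eta))
                              * np.sqrt(M + b**2 * ell**2 / 32.0 * (1.0 - eta)))  # Eq. (8)
    return dict(M=M, J=J, r_plus=r_plus, r_minus=r_minus, r_0=r_0,
                T_H=T_H, Omega_H=Omega_H, r_e=r_e)

print(hairy_bh_quantities(mu=1.0, a=0.3, b=0.2))   # arbitrary example values
```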
## III Statistical entropy
Now, we consider a scalar field in a thin layer between \(r_{+}+h\) and \(r_{+}+h+\delta\) with \(h\ll r_{+}\) and \(\delta\ll r_{+}\), where \(h\) is a cutoff parameter and \(\delta\) is a small constant related to the thickness of the thin layer. It satisfies the massless Klein-Gordon equation, \(\square\Phi(t,r,\phi)=0\). Assuming \(\Phi(t,r,\phi)=\Psi_{\omega m}(r)~{}e^{-i\omega t+im\phi}\), we obtain \(rN\partial_{r}(rNf\partial_{r}\Psi_{\omega m})+r^{2}N^{2}f^{2}k^{2}\Psi_{\omega m}=0\), where \(k(r;\omega,m)=N^{-1}f^{-1}\sqrt{(\omega-\Omega_{+}m)(\omega-\Omega_{-}m)}\). In the WKB approximation with \(\Psi\sim e^{iS(r)}\), \(k\) is the radial momentum defined by \(k=\partial S/\partial r\). Therefore, the number of states with energy less than \(\omega\) for given angular momentum \(m\) is given by
\[n(\omega,m)=\frac{1}{\pi}\int^{r_{h}+h+\delta}_{r_{h}+h}dr``k"(r ;\omega,m),\] (9)
where \(``k"(r;\omega,m)=k(r;\omega,m)\) if \(k^{2}>0\) and \(``k"(r;\omega,m)=0\) if \(k^{2}<0\). The free energy of a rotating black hole should be written as Ho and Kang (1998)
\[F=F_{\mathrm{NS}}+F_{\mathrm{SR}},\] (10)
where
\[\beta F_{\rm NS} =\sum_{\lambda\notin\mathrm{SR}}\int d\omega g(\omega,m)\ln[1-e^{ -\beta(\omega-m\Omega_{H})}],\] (11)
\[\beta F_{\rm SR} =\sum_{\lambda\in\mathrm{SR}}\int d\omega g(\omega,m)\ln[1-e^{ \beta(\omega-m\Omega_{H})}],\] (12)
where the “NS” and “SR” denote the nonsuperradiant mode with \(\omega-m\Omega_{H}>0\) and superradiant mode with \(\omega-m\Omega_{H}<0\), respectively, \(\lambda\) is the set of \((\omega,m)\), and the density of the number of states is given by \(g(\omega,m)=dn/d\omega\) for the NS mode and \(g(\omega,m)=-dn/d\omega\) for the SR mode. Substituting Eq. (9) into Eqs. (11) and (12), we obtain
\[\beta F_{\mathrm{NS}}= -\frac{\beta}{\pi}\int dr\sum_{m}\int d\omega\frac{``k"(r;\omega, m)}{e^{\beta(\omega-\Omega_{H}m)}-1}\]
\[+\frac{1}{\pi}\int dr\sum_{m}\left.``k"(r;\omega,m)\ln[1-e^{- \beta(\omega-\Omega_{H}m)}]\right|^{\omega_{\rm max}(m)}_{\omega_{\rm min}(m)},\] (13)
\[\beta F_{\mathrm{SR}}= -\frac{\beta}{\pi}\int dr\sum_{m}\int d\omega\frac{``k"(r;\omega, m)}{e^{-\beta(\omega-\Omega_{H}m)}-1}\]
\[-\frac{1}{\pi}\int dr\sum_{m}\left.``k"(r;\omega,m)\ln[1-e^{\beta (\omega-\Omega_{H}m)}]\right|^{\omega_{\rm max}(m)}_{\omega_{\rm min}(m)},\] (14)
where \(\omega_{\rm max}(m)\) and \(\omega_{\rm min}(m)\) denote the maximum and the minimum of \(\omega\) for a given \(m\) in each mode, respectively. For convenience, Eq. (13) can be rewritten as
\[F_{\rm NS}\equiv F_{\rm NS}^{(m>0)}+F_{\rm NS}^{(m<0)},\] (15)
where
\[\beta F_{\mathrm{NS}}^{(m>0)} =-\frac{\beta}{\pi}\int^{r_{+}+h+\delta}_{r_{+}+h}\frac{dr}{Nf} \int^{\infty}_{0}dm\int^{\infty}_{\Omega_{+}m}d\omega\frac{\sqrt{(\omega- \Omega_{+}m)(\omega-\Omega_{-}m)}}{e^{\beta(\omega-\Omega_{H}m)}-1},\] (16)
\[\beta F_{\mathrm{NS}}^{(m<0)} =-\frac{\beta}{\pi}\int^{r_{+}+h+\delta}_{r_{+}+h}\frac{dr}{Nf} \int^{0}_{-\infty}dm\int^{\infty}_{0}d\omega\frac{\sqrt{(\omega-\Omega_{+}m)( \omega-\Omega_{-}m)}}{e^{\beta(\omega-\Omega_{H}m)}-1}\]
\[\quad-\frac{1}{\pi}\int^{r_{+}+h+\delta}_{r_{+}+h}\frac{dr}{Nf} \int^{0}_{-\infty}dm\sqrt{\Omega_{+}\Omega_{-}m^{2}}\ln\left(1-e^{\beta\Omega_ {H}m}\right).\] (17)
From Eq. (14), the free energy of the SR mode is written as
\[\beta F_{\mathrm{SR}}= -\frac{\beta}{\pi}\int^{r_{+}+h+\delta}_{r_{+}+h}\frac{dr}{Nf} \int^{\infty}_{0}dm\int^{\Omega_{-}m}_{0}d\omega\frac{\sqrt{(\omega-\Omega_{+} m)(\omega-\Omega_{-}m)}}{e^{-\beta(\omega-\Omega_{H}m)}-1}\]
\[+\frac{1}{\pi}\int^{r_{+}+h+\delta}_{r_{+}+h}\frac{dr}{Nf}\int^{ \infty}_{0}dm\sqrt{\Omega_{+}\Omega_{-}m^{2}}\ln\left(1-e^{-\beta\Omega_{H}m} \right).\] (18)
Then, the total free energy which consists of the nonsuperradiant and superradiant modes (10) becomes
\[F =-\frac{\zeta(3)}{4\beta^{3}}\int^{r_{+}+h+\delta}_{r_{+}+h}\frac {dr}{Nf}\frac{(\Omega_{+}-\Omega_{-})^{2}}{(\Omega_{+}-\Omega_{H})^{3/2}( \Omega_{H}-\Omega_{-})^{3/2}},\] (19)
which leads to
\[F =-\frac{\zeta(3)}{\beta^{3}}\frac{2r_{+}}{N(r_{+})^{2}f^{\prime}( r_{+})^{3/2}}\left(\frac{1}{\sqrt{h}}-\frac{1}{\sqrt{h+\delta}}\right),\] (20)
in the leading order of the cutoff and the thickness. Note that the second term of the free energy for the \(m<0\) mode in Eq. (17) and the second term of the free energy for the superradiant mode in Eq. (18) cancel out. Thus the entropy can be simplified as
\[S =\left.\beta^{2}\frac{\partial F}{\partial\beta}\right|_{\beta= \beta_{\rm H}}\]
\[=\frac{3\zeta(3)}{8\pi^{2}}r_{+}\sqrt{f^{\prime}(r_{+})}\left( \frac{1}{\sqrt{h}}-\frac{1}{\sqrt{h+\delta}}\right),\] (21)
where \(\beta_{\rm H}\) is defined as the inverse of the Hawking temperature \(T_{\rm H}\). The proper lengths for the UV cutoff parameter and the thickness are defined by \(\bar{h}\equiv\int^{r_{+}+h}_{r_{+}}dr\sqrt{g_{rr}}\simeq 2\sqrt{h}/\sqrt{f^{ \prime}(r_{+})}\) and \(\bar{\delta}\equiv\int^{r_{+}+h+\delta}_{r_{+}+h}dr\sqrt{g_{rr}}\simeq 2(\sqrt {h+\delta}-\sqrt{h})/\sqrt{f^{\prime}(r_{+})}\). Then, the entropy is written as \(S=3\zeta(3)r_{+}\bar{\delta}/[4\pi^{2}\bar{h}(\bar{\delta}+\bar{h})]\). Recovering dimensions, the entropy becomes
\[S=\frac{c^{3}A}{4G\hbar}\frac{3\zeta(3)\ell_{\rm P}\bar{\delta}} {2\pi^{3}\bar{h}(\bar{h}+\bar{\delta})},\] (22)
where \(A\equiv 2\pi r_{+}\) and \(\ell_{\rm P}\equiv\hbar G/c^{3}\) are the area of the event horizon and the three-dimensional Planck length, respectively. If the cutoff is chosen as \(\bar{h}(\bar{h}+\bar{\delta})/\bar{\delta}=[3\zeta(3)/(2\pi^{3})]\ell_{\rm P}\), the entropy (22) agrees with the Bekenstein-Hawking entropy \(S_{\rm BH}=c^{3}A/(4G\hbar)\).
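A quick numerical consistency check of Eqs. (21) and (22) in units \(\hbar=c=G=1\) (so \(\ell_{\rm P}=1\)): choosing proper cutoffs that satisfy \(\bar{h}(\bar{h}+\bar{\delta})/\bar{\delta}=3\zeta(3)/(2\pi^{3})\) should reproduce \(S=A/4=\pi r_{+}/2\) regardless of \(f^{\prime}(r_{+})\); the horizon data below are arbitrary.
```python
import numpy as np

ZETA3 = 1.2020569031595943        # Riemann zeta(3)

def brick_wall_entropy(r_plus, fprime, h, delta):
    """Entropy of Eq. (21) for coordinate cutoff h and layer thickness delta."""
    return (3.0 * ZETA3 / (8.0 * np.pi**2) * r_plus * np.sqrt(fprime)
            * (1.0 / np.sqrt(h) - 1.0 / np.sqrt(h + delta)))

r_plus, fprime = 3.0, 2.0         # arbitrary horizon radius and f'(r_+)

# Proper cutoff from the condition hbar*(hbar + dbar)/dbar = 3*zeta(3)/(2*pi^3)
dbar = 0.05
c0 = 3.0 * ZETA3 / (2.0 * np.pi**3)
hbar = 0.5 * (-dbar + np.sqrt(dbar**2 + 4.0 * c0 * dbar))

# Convert proper lengths to coordinate cutoffs via
# hbar = 2*sqrt(h)/sqrt(f'(r_+)),  hbar + dbar = 2*sqrt(h + delta)/sqrt(f'(r_+))
h = (hbar * np.sqrt(fprime) / 2.0) ** 2
delta = ((hbar + dbar) * np.sqrt(fprime) / 2.0) ** 2 - h

print(brick_wall_entropy(r_plus, fprime, h, delta), np.pi * r_plus / 2.0)  # both = A/4
```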
Finally, let us calculate angular momentum of matter, which becomes
\[J =\left.-\frac{\partial F}{\partial\Omega_{\mathrm{H}}}\right|_{ \beta=\beta_{\mathrm{H}}}\]
\[=\frac{a}{2}\left(\sqrt{\mathcal{M}}+\frac{|b|\ell}{4}\sqrt{\eta} \right)\left[\sqrt{\mathcal{M}}+\frac{\ell}{8}(b+|b|)\left(\sqrt{\eta}+\frac{1 }{\sqrt{\eta}}\right)\right],\] (23)
and the internal energy of the system is written as
\[E =F_{\mathrm{H}}+\beta^{-1}_{\mathrm{H}}S+\Omega_{\mathrm{H}}J\]
\[=\frac{1}{6}\left(\sqrt{\mathcal{M}}+\frac{|b|\ell}{4}\sqrt{\eta} \right)\left[(3+\eta)\sqrt{\mathcal{M}}+\frac{3\ell}{8\sqrt{\eta}}(b+|b|)(1- \eta^{2})\right].\] (24)
Note that the angular momentum (23) and the energy (24) of matter have well-defined limits and they are compatible with the results in Ref. Ho and Kang (1998) for \(b=0\).
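The closed forms of Eqs. (23) and (24) are easy to evaluate and to check against the \(b=0\) limits quoted in the summary (\(J=\mathcal{J}/2\) and \(E=\mathcal{M}/2+\sqrt{\mathcal{M}^{2}-\mathcal{J}^{2}/\ell^{2}}/6\)); the parameter values below are arbitrary.
```python
import numpy as np

def matter_J_E(mu, a, b, ell=1.0):
    """Angular momentum J, Eq. (23), and energy E, Eq. (24), of the matter field (G = 1)."""
    eta = np.sqrt(1.0 - a**2 / ell**2)
    M = mu / 4.0 + b**2 * ell**2 / 16.0              # ADM mass
    sM, bp = np.sqrt(M), b + abs(b)                  # bp vanishes for b <= 0
    J = 0.5 * a * (sM + abs(b) * ell / 4.0 * np.sqrt(eta)) \
        * (sM + ell / 8.0 * bp * (np.sqrt(eta) + 1.0 / np.sqrt(eta)))
    E = (sM + abs(b) * ell / 4.0 * np.sqrt(eta)) / 6.0 \
        * ((3.0 + eta) * sM + 3.0 * ell / (8.0 * np.sqrt(eta)) * bp * (1.0 - eta**2))
    return M, M * a, J, E                            # ADM M, ADM J, matter J, matter E

# Check of the b = 0 limits (ell = 1)
M, J_adm, J, E = matter_J_E(mu=2.0, a=0.4, b=0.0)
print(J, J_adm / 2.0)                                # should coincide
print(E, M / 2.0 + np.sqrt(M**2 - J_adm**2) / 6.0)   # should coincide
```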
On the other hand, it would be interesting to note that a partition function from free energy (20) can be compared with the result for the partition function of the corresponding two-dimensional conformal field theory (CFT) on the boundary of three-dimensional anti-de Sitter (AdS) spacetime Hawking _et al._ (1999). For this purpose, we write down the free energy (20) by using Eqs. (5) and (7) along with the proper lengths \(\bar{h}\) and \(\bar{\delta}\) as
\[F =-\frac{\zeta(3)}{2\pi(\beta_{H}/\ell)^{2}[1-(\ell\Omega_{H})^{2} ]}\frac{\bar{\delta}}{\bar{h}(\bar{\delta}+\bar{h})},\] (25)
where we restricted to the case of \(b=0\) and identified \(\beta\) with \(\beta_{\rm H}\). If one chooses the cutoff as \(\bar{h}(\bar{\delta}+\bar{h})/\bar{\delta}=[3\zeta(3)/\pi^{3}]\ell_{\rm P}\), then the free energy (25) is simplified as
\[F=-\frac{\pi^{2}}{6(\beta_{H}/\ell)^{2}[1-(\ell\Omega_{H})^{2}] \ell_{\rm P}}.\] (26)
In order to write down the free energy in terms of dimensionless quantities, we rescale the free energy, the inverse Hawking temperature, and the angular velocity at the horizon by \(\ell_{\rm P}F\to F\), \(\beta_{\rm H}/\ell\to\beta\), and \(\ell\Omega_{\rm H}\to\Omega\), respectively. Then, the free energy becomes \(F=-\pi^{2}/[6\beta^{2}(1-\Omega^{2})]\). Since the relation between the partition function \(Z\) and the free energy is given by \(\beta F=-\ln Z\), we can obtain
\[\ln Z=\frac{\pi^{2}}{6\beta(1-\Omega^{2})},\] (27)
which agrees with the result given in Ref. Hawking _et al._ (1999). However, it may depend on the cutoff within our brick wall formulation so that the coefficient can be adjusted. As a result, the degrees of freedom near the horizon can be described by the boundary degrees of freedom. In fact, the bulk degrees of freedom can be read off from the boundary degrees of freedom from the AdS/CFT while the bulk degrees of freedom can be also described by the degrees of freedom near the horizon based on the brick wall formalism. Combining these two notions, the boundary degrees of freedom at both ends can be connected.
## IV Summary
In the course of the calculation, the second term of the free energy for the \(m<0\) mode in Eq. (17) and the second term of the free energy for the superradiant mode in Eq. (18) canceled out. From the simplified resulting free energy we obtained the statistical entropy satisfying the area law by determining a UV cutoff that is independent of the hairs of the black hole, and we additionally derived the angular momentum and the energy of the matter field.
The energy \(E\) is always positive, and it depends on the mass of the black hole, the angular momentum of the black hole, and the gravitational hair \(b\). In the limit \(b=0\), these quantities reduce to \(J=\frac{1}{2}\mathcal{J}\) and \(E=\frac{1}{2}\mathcal{M}+\frac{1}{6}\sqrt{\mathcal{M}^{2}-\mathcal{J}^{2}/\ell^{2}}\). This means that the angular momentum of the matter field is directly proportional to that of the black hole, while the energy is related to the mass and the angular momentum of the black hole simultaneously.
###### Acknowledgements.
This work was supported by the Sogang University Research Grant 201310022 (2013).
## References
* Deser _et al._ (1982a)S. Deser, R. Jackiw, and S. Templeton, Ann. Phys. (N.Y.) **140**, 372 (1982a).
* Deser _et al._ (1982b)S. Deser, R. Jackiw, and S. Templeton, Phys. Rev. Lett. **48**, 975 (1982b).
* Bergshoeff _et al._ (2009a)E. A. Bergshoeff, O. Hohm, and P. K. Townsend, Phys. Rev. Lett. **102**, 201301 (2009a), arXiv:0901.1766 [hep-th] .
* Bergshoeff _et al._ (2009b)E. A. Bergshoeff, O. Hohm, and P. K. Townsend, Phys. Rev. D **79**, 124042 (2009b), arXiv:0905.1259 [hep-th] .
* Banados _et al._ (1992)M. Banados, C. Teitelboim, and J. Zanelli, Phys. Rev. Lett. **69**, 1849 (1992), hep-th/9204099 .
* Oliva _et al._ (2009)J. Oliva, D. Tempo, and R. Troncoso, J. High Energy Phys. **07**, 011 (2009), arXiv:0905.1545 [hep-th] .
* Giribet _et al._ (2009)G. Giribet, J. Oliva, D. Tempo, and R. Troncoso, Phys. Rev. D **80**, 124046 (2009), arXiv:0909.2564 [hep-th] .
* Kwon _et al._ (2011)Y. Kwon, S. Nam, J.-D. Park, and S.-H. Yi, J. High Energy Phys. **11**, 029 (2011), arXiv:1106.4609 [hep-th] .
* Perez _et al._ (2011)A. Perez, D. Tempo, and R. Troncoso, J. High Energy Phys. **07**, 093 (2011), arXiv:1106.4849 [hep-th] .
* ’t Hooft (1985)G. ’t Hooft, Nucl. Phys. B **256**, 727 (1985).
* Mann _et al._ (1992)R. B. Mann, L. Tarasov, and A. Zelnikov, Class. Quant. Grav. **9**, 1487 (1992).
* Ghosh and Mitra (1994)A. Ghosh and P. Mitra, Phys. Rev. Lett. **73**, 2521 (1994), hep-th/9406210 .
* (13)B. S. Kay and L. Ortiz, arXiv:1111.6429 [hep-th] .
* Ho and Kang (1998)J.-w. Ho and G. Kang, Phys. Lett. B **445**, 27 (1998), gr-qc/9806118 .
* Liu and Zhao (2001)W.-B. Liu and Z. Zhao, Chin. Phys. Lett. **18**, 310 (2001).
* Zhou and Liu (2004)Z.-A. Zhou and W.-B. Liu, Int. J. Mod. Phys. A **19**, 3005 (2004).
* Hawking _et al._ (1999)S. Hawking, C. Hunter, and M. Taylor, Phys. Rev. D **59**, 064005 (1999), hep-th/9811056 .
|
1511.06691 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 75190,
"num_imgs": 8,
"llama3_tokens_count": 23141
} | [
"content_image/1511.06691/x1.png",
"content_image/1511.06691/x2.png",
"content_image/1511.06691/x3.png",
"content_image/1511.06691/x4.png",
"content_image/1511.06691/x5.png",
"content_image/1511.06691/x6.png",
"content_image/1511.06691/x8.png",
"content_image/1511.06691/x9.png"
] | # Rectification of position data of Scotland in Ptolemy’s _Geographike Hyphegesis_
Christian Marx
C. Marx, Gropiusstraße 6, 13357 Berlin, Germany; e-mail: ch.marx@gmx.net.
###### Abstract
**Abstract:** The ancient geographic coordinates given for places of Great Britain in Ptolemy’s _Geographike Hyphegesis_ are investigated by means of geodetic methods. The turning of Scotland to the east is modelled by a three-dimensional rotation. On the basis of different data sets of control points, the parameters of the rotation are estimated by means of methods of adjustment theory. Furthermore, a geodetic-statistical analysis method is applied to Scotland, by which groups of places of homogenous distortions and modern counterparts of the ancient places are determined. Based on the results of the investigations, answers are given for questions concerning Ptolemaic positions unsolved so far.
**Keywords:** Ancient geography, Klaudios Ptolemaios, _Geographike Hyphegesis_, _Albion_, Scotland, England, _Thule_, Rectification
## 1 Introduction
The oldest comprehensive description of Great Britain that has been handed down can be found in Book II of the _Geographike Hyphegesis_ (GH) by Klaudios Ptolemaios (Ptolemy, ca. 100–178). That description of Great Britain, which he called _Albion_, is part of a location catalogue (GH Books II–VII), wherein positions of about 6300 places of the whole _Oikoumene_ (the inhabited world known to the Greeks and Romans) are given by means of geographic coordinates in Ptolemy’s geographic reference system, which differs from the modern system by its zero meridian at the ’Blest islands’ (GH IV.6.34).
For different fields of research, the modern counterparts of unknown places of the GH have been of interest, as well as the accuracy of the Ptolemaic coordinates and their origination. Ptolemy’s description often shows considerable differences to the actual situation. In particular, Ptolemaic Scotland, the part of Great Britain north of Hadrian’s Wall, is turned to the east. If the Ptolemaic positions are not rough, conjectural positions but locality determinations based on accurate data sources (such as military measurements), the Ptolemaic coordinates can be expected to be systematically distorted. The determination of systematic errors provides a rectification and the possibility of identifying unknown Ptolemaic places.
The first data on Great Britain possibly originate from the Greek Pytheas of Massalia, who circumnavigated it in ca. 330 B.C. and traveled to the legendary _Thule_. A major source of the GH was the works of the Greek geographer Marinos of Tyre (ca. 70–130), which are dealt with in detail in Book I of the GH. Presumably, military sources were available to Marinos (cf. [32]). Geographic information arose with the Roman conquest of Great Britain in the first century. Roman sources were surely available to Ptolemy, which is affirmed by the occurrence of Latin place names in the GH and by the high accuracy of the Ptolemaic data determined by Marx [14], Kleineberg et al. [11], and Marx and Kleineberg [17]. Agricola, the Roman governor of the province _Britannia_, conquered parts of Scotland. These regions, however, were given up later; the province _Britannia_ was bordered by Hadrian’s Wall. Two known seafarings surely yielded geographic information on Scotland: the circumnavigation of Scotland by the geographer Demetrius in 81–83 mentioned by Plutarch (cf. [10]) and the circumnavigation of Great Britain by Agricola’s fleet in 84 mentioned by Tacitus.
Ptolemy’s places of _Albion_ have been the subject of a multitude of investigations so far; Strang [27, 29] gives an overview. Tierney [32] discusses the works of Ptolemy’s predecessors with regard to the influence on Ptolemy’s _Albion_. Thomas [31] identifies the Ptolemaic places in Scotland by a comparison of the Ptolemaic distances with the true distances from place to place. Richmond [25] corrects the turning of Scotland by means of a rotation by 90°, performed by an exchange of longitude and latitude. The essential work by Rivet and Smith [26] deals with the history and literary sources concerning ancient places in Great Britain; the turning of Scotland is explained by a rotation by ca. 50°. Strang [27, 29] gives a comprehensive analysis of the distortions of _Albion_ on the basis of mappings of modern and Ptolemaic positions, which is discussed in Section 3. He describes the distortions by rotations, scaling errors and shifts. However, the results of the investigations of Kleineberg et al. [11, p. 35 ff.] on England and the present investigations on Scotland show that the presence of more than one rotation is doubtful.
The objective of an analysis of the distortions of Ptolemy’s Scotland should be to explain them realistically and as simply as possible. In Section 4 the turning of Scotland is described by a three-dimensional (3D) rotation, which appears to be a satisfactory modeling of the turning of Scotland. The pivot point and the rotation angle are determined by methods of adjustment theory. Further distortions of the Ptolemaic positions are determined along with their identifications by means of a geodetic-statistical analysis method, which is described introductorily in Section 2. In Section 3 results of the analysis of Ptolemaic England are given, which are of importance for the investigation of Scotland.
## 2 Analysis method
Because of the unreliability and inaccuracy of the ancient coordinates of the GH, further information must be consulted for the identification of the ancient places in addition to a computational analysis of the coordinates, e.g. historical information, archaeological sites, and toponymy. According to this, Ptolemy’s data for Europe (GH Books II, III) have been investigated interdisciplinarily, whereby identifications of the Ptolemaic places have been affirmed and newly found and the errors and accuracy of the coordinates have been determined (see [14], [11], [17]). The underlying analysis method is described in detail by Marx [15] and is therefore dealt with only briefly in the following.
Investigations of regions with a multitude of known Ptolemaic places (e.g. _Italia_ in GH III.1, see [17, p. 10 ff.]) revealed that the places subdivide into groups with systematic distortions; these are scaling errors and shifts. One exception among the investigated Ptolemaic places of Europe is Ptolemaic Scotland, where additionally a rotation can be found. This case is introduced only in Section 4.3. Furthermore, the Ptolemaic coordinates have random components, which originate from errors and inaccuracies in the data used by Ptolemy (measurement data, information from travel reports, maps) and from Ptolemy’s determination of geographic coordinates based on his data sources (on the sources see [30, p. 16 ff.], [11, p. 5 ff.]).
The aim of the geodetic-statistical analysis of the Ptolemaic coordinates is the determination of groups of places of homogenous distortion (transformation units). The analysis method is based on adjustment theory and statistical hypothesis testing. The observation equations of the applied Gauß-Markov model (see e.g. [22, p. 117 ff.]) are
\[\Lambda_{i}+v_{\Lambda\,i} =m_{\lambda}\,\lambda_{i}+\Lambda_{0k}\] (1)
\[\Phi_{i}+v_{\Phi\,i} =m_{\phi}\,\phi_{i}+\Phi_{0k}\;,\]
where the Ptolemaic longitude \(\Lambda_{i}\) and latitude \(\Phi_{i}\) of a place with index \(i\) are observations, the modern longitude \(\lambda_{i}\) and latitude \(\phi_{i}\) are constants, the scale parameters \(m_{\lambda}\) and \(m_{\phi}\) are unknowns or constants (see below), the shift parameters \(\Lambda_{0k}\) and \(\Phi_{0k}\) of a group of places with group index \(k\) are unknowns, and \(v_{\Lambda\,i}\) and \(v_{\Phi\,i}\) are residuals (corrections) taking into account random errors. Model (1) describes a transformation of the modern into the ancient coordinates of a place. The transformation parameters \(m_{\lambda}\), \(m_{\phi}\), \(\Lambda_{0}\), \(\Phi_{0}\) contain local and global effects.
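For a single transformation unit, the adjustment of model (1) reduces to two weighted least-squares line fits; the sketch below illustrates this with synthetic data, whereas the actual analysis additionally treats the scales iteratively across several groups and applies the statistical tests described below.
```python
import numpy as np

def adjust_transformation(Lambda, Phi, lam, phi, sigma_L, sigma_P):
    """Weighted least-squares estimate of m_lambda, Lambda_0, m_phi, Phi_0 in model (1).

    Lambda, Phi : Ptolemaic longitudes/latitudes (observations), degrees
    lam, phi    : modern longitudes/latitudes (treated as constants), degrees
    sigma_L/P   : standard deviations assigned to the Ptolemaic coordinates
    """
    A_L = np.column_stack([lam, np.ones_like(lam)])
    W_L = np.diag(1.0 / np.asarray(sigma_L) ** 2)
    m_lambda, Lambda_0 = np.linalg.solve(A_L.T @ W_L @ A_L, A_L.T @ W_L @ Lambda)
    A_P = np.column_stack([phi, np.ones_like(phi)])
    W_P = np.diag(1.0 / np.asarray(sigma_P) ** 2)
    m_phi, Phi_0 = np.linalg.solve(A_P.T @ W_P @ A_P, A_P.T @ W_P @ Phi)
    return m_lambda, Lambda_0, m_phi, Phi_0

# Synthetic example: five places generated with scales 1.40/1.35 and shifts 20.5/-15.2
rng = np.random.default_rng(0)
lam = np.array([-3.0, -2.0, -1.0, 0.0, 1.0])
phi = np.array([50.0, 51.0, 52.0, 53.0, 54.0])
Lambda = 1.40 * lam + 20.5 + rng.normal(0.0, 0.08, 5)
Phi = 1.35 * phi - 15.2 + rng.normal(0.0, 0.08, 5)
print(adjust_transformation(Lambda, Phi, lam, phi, [0.08] * 5, [0.08] * 5))
```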
The scale parameters \(m_{\lambda}\), \(m_{\phi}\) are assumed to be spaciously valid. Scalings may originate from Ptolemy’s overestimation of the longitudinal dimension of the _Oikoumene_ as well as from differences between ancient measurement units, which were unintentionally not considered. Owing to interactions of different influences, \(m_{\lambda}\) and \(m_{\phi}\) are possibly not entirely identical (see the example of _Peloponnesus_ in GH III.16 given in [17, p. 125 f.]). Inconsistencies of the ancient coordinates and disadvantageous geometries of groups of places of homogenous distortions can adulterate the adjustment of transformation parameters such that the results are unrealistic. Thus, for \(m_{\lambda}\) and \(m_{\phi}\) approximate values are determined, which are used as constants in several steps of the analysis method and are iteratively improved.
The parameter \(\Lambda_{0k}\) contains the difference between the Ptolemaic and the modern zero meridian. The computed \(\Lambda_{0k}\) and \(\Phi_{0k}\) are in general no real shifts. In order to illustrate the actual shifts, relative shifts of transformation units with respect to a chosen transformation unit are determined by means of
\[\Delta\Lambda_{0k} =\Lambda_{0k}-\Lambda_{0\mathrm{R}}\] (2)
\[\Delta\Phi_{0k} =\Phi_{0k}-\Phi_{0\mathrm{R}}\;.\]
\(\Lambda_{0\mathrm{R}}\) and \(\Phi_{0\mathrm{R}}\) are the adjusted parameters of the transformation unit which is taken as a reference.
In the Greek manuscripts of the GH the coordinates are listed by way of Milesian numerals; they are given in degree and fractions of degree. The smallest resolution occurring is \(\frac{1}{12}^{\circ}=5^{\prime}\). Marx [13] shows that the actual resolution is partly lower and gives a method for the estimation of the occurring resolutions. According to the resolution, the standard deviations \(\sigma_{\Lambda\,i}\) and \(\sigma_{\Phi\,i}\) of the \(\Lambda_{i}\) and \(\Phi_{i}\) are chosen in the stochastic part of the adjustment model; for the most accurate coordinates, ca. 5\({}^{\prime}\) is assumed. Correlations between coordinates are not considered because there is no information about dependencies. Coordinate values not explicable by the distortion model (1) are regarded as grossly erroneous; often they can be explained by a scribal error in the manuscripts.
In essence, the analysis method is a multi-stage combinatorial search for transformation units. In a combinatorial search for consistent subsets of data, different combinations of the observations are tested for whether they satisfy specific conditions, e.g. a statistical test (for other applications of such a strategy see [19], [12]). According to model (1), the quantities to be combined are the Ptolemaic coordinates \(\Lambda_{i}\) and \(\Phi_{i}\); however, the combinatorial search is extended to the modern positions (\(\lambda_{i}\), \(\phi_{i}\)) because more than one (uncertain) identification can be given for a place. Moreover, the problem is complicated by differences between ancient coordinate values in the manuscripts. The manuscripts are presumably based on the two recensions \(\Omega\) and \(\Xi\); \(\Xi\) is only represented by the manuscript Codex Vaticanus Graecus 191 (X). An edition of both recensions is published by Stückelberger and Graßhoff [30], which has been used for the investigations. By means of the analysis method an inconsistent input-variant of a coordinate is replaced by a consistent variant, if available.
For an area under investigation, the procedure of the analysis method is the following:
1. Initial solution: analysis of the resolution of the ancient coordinate values, determination of approximate values for the transformation parameters, generation of initial subsets of places with similar distortions by means of a visualisation of residuals (cf. Fig. 1).
2. Combinatorial search: search for transformation units in the initial subsets; multiple identifications per place are possible; statistical tests: overall model test of the adjustment model, individual test for gross errors in the coordinates.
3. Forward-strategy: search for the best possible mergings of unassigned places with nearby transformation units; multiple identifications and ancient coordinate variants per place are possible; statistical tests: individual test, final overall model test; geometric tests: distance test, point-in-polygon test concerning the convex hull of a transformation unit.
4. Verification of the scales: test of the suppositional scales introduced in step 1 for validity by an adjustment; statistical test: t-test.
5. Merging of transformation units: combinatorial search for possible mergings of neighbouring transformation units; statistical test: analysis of variance; geometric tests: distance, overlap by means of point-in-polygon test.
6. Postprocessing: if present, test of topographically implausible assignments of places to transformation units for possible rearrangements; statistical tests: overall model test, individual test.
(On the applied tests see e.g. [22, pp. 66, 150, 171 ff., 356], [9, pp. 189, 193], [2, p. 28].)
Based on the determined transformation units, presumable modern coordinates \(\bar{\lambda}_{i}\), \(\bar{\phi}_{i}\) can be computed for unidentified places. This rectifying transformation is
\[\bar{\lambda}_{i} =m_{\Lambda}\,\Lambda_{i}+\lambda_{0k}\] (3)
\[\bar{\phi}_{i} =m_{\Phi}\,\Phi_{i}+\phi_{0k}\;,\]
where \(m_{\Lambda}\), \(m_{\Phi}\), \(\lambda_{0}\), and \(\phi_{0}\) are parameters derived from an inversion of the transformation (1):
\[m_{\Lambda} =1/{m_{\lambda}}\;,\;\;m_{\Phi}=1/{m_{\phi}}\] (4)
\[\lambda_{0k} =-\Lambda_{0k}/{m_{\lambda}}\;,\;\;\phi_{0k}=-\Phi_{0k}/{m_{\phi} }\,.\] (5)
The analysis method and the determination of new identifications are applied repeatedly.
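The rectifying transformation of Eqs. (3)–(5) is a direct inversion; in the sketch below the input values (a scale of 1.35 and the Al14 shift parameters quoted in Section 3, applied to an arbitrary Ptolemaic position) serve purely as illustration.
```python
def rectify(Lambda, Phi, m_lambda, m_phi, Lambda_0k, Phi_0k):
    """Presumable modern coordinates from Ptolemaic ones, Eqs. (3)-(5), in degrees."""
    m_Lambda, m_Phi = 1.0 / m_lambda, 1.0 / m_phi          # Eq. (4)
    lambda_0k = -Lambda_0k / m_lambda                      # Eq. (5)
    phi_0k = -Phi_0k / m_phi
    return m_Lambda * Lambda + lambda_0k, m_Phi * Phi + phi_0k

# Illustrative call only; the scale and shifts are rounded values from Section 3
print(rectify(Lambda=20.0, Phi=54.0, m_lambda=1.35, m_phi=1.35,
              Lambda_0k=20.533, Phi_0k=-15.217))
```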
## 3 England
Initially, the analysis of the distortions of Ptolemaic England (GH II.3) by Strang [27, 29] is discussed. From the south to the north of England, Strang identifies five regions having differing rotations (absolute value of the rotation angles \(\leq\)20°) and a common pivot point at Long Melford. The procedure of his determination of these regions is the following. The latitudinal scale of Ptolemy’s positions is assumed to be 62.5 Roman miles (R.mi.) per 1° (actually \(1^{\circ}\mathrel{\widehat{=}}111\,\mathrm{km}=75\,\mathrm{R.mi.}\)). The longitudinal scale is determined on the basis of selected places by a comparison of Ptolemaic longitudinal distances with those in a modern map. The result is 41.67 R.mi./°. Based on the assumed scales, the Ptolemaic positions are plotted and superimposed on a modern map with coincidence at _Londinium_/London. For selected places, the residual vector between the Ptolemaic position and the respective position on the modern map is drawn. It is expected that the orthogonal bisectors of the residual vectors meet in the pivot point. Strang [27, Fig. 6] presents a map with residual vectors and orthogonal bisectors for five places: _Itunae aestuarium_/mouth of the Eden (No. 17 in Kleineberg et al. [11], which is the reference for the numbering in the following), _Ganganorum promontorium_/Braich-y-Pwll (No. 23), _Tamarus fluvius_/mouth of the Tamar (No. 35), _Vedra fluvius_/mouth of the Wear (No. 56), _Maridunum_/Carmarthen (No. 108).
The method described was reapplied in order to get an insight into its reliability. In doing so, formula (3) was applied for a centring with respect to London (\(\Lambda=20^{\circ}\), \(\Phi=54^{\circ}\)) and for a scaling. \(m_{\Phi}\) is \(62.5\,\mathrm{R.mi.}/75\,\mathrm{R.mi.}\approx 0.83333\). The longitudinal scale of 41.67 R.mi.\(/\)° is assumed to be valid here for the south of England, i.e. for \(\phi=50^{\circ}\), so that \(m_{\Lambda}=41.67\,\mathrm{R.mi.}/(75\,\mathrm{R.mi.}\cos 50^{\circ})\approx 0.86436\). The residual vectors were computed for 50 known places consistently identified by Rivet and Smith [26], Strang [28], and Kleineberg et al. [11] (Nos. 17, 20, 21, 23, 25–27, 29, 31, 33, 35–37, 41, 56, 58, 59, 61, 64, 65, 88–91, 94–97, 99–102, 104–119, 123, 135). For the Ptolemaic coordinates the values of the \(\Omega\)-recension and of the X-manuscript were used separately with the exception of No. 108, for which \(\Phi=55^{\circ}\) given by Nobbe [23] was used. As a result, the directions of the residual vectors and their orthogonal bisectors strongly depend on the parameters used. Fig. 1 shows the residual vectors based on \(\Omega\). The vectors of Nos. 17, 23, 35, 56, and 108 are in acceptable agreement with those shown by Strang [27]. However, obviously the orthogonal bisectors do not meet in the alleged pivot point at Long Melford in general, not even if the parameters used are modified. Accordingly, the results of Strang [27] are doubtful. Finally, the three places _Eboracum Legio VI Victrix_/York (No. 94), _Isurium_/Aldborough (No. 91), and _Caturactonium_/Catterick (No. 89) are considered as examples. They are actually located towards the north-west but in the GH towards the north (cf. Fig. 1), which indicates a rotation. However, the places have \(\Lambda=20^{\circ}\) so that they are roughly positioned rather than rotated.
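One plausible reading of the recomputation just described is sketched below: the Ptolemaic positions are scaled about the Ptolemaic position of _Londinium_ and anchored at modern London, after which a residual vector is formed for each place. The modern coordinates of London and the example place are rough placeholder values, not catalogue data.
```python
import numpy as np

M_PHI = 62.5 / 75.0                                    # latitudinal scale
M_LAMBDA = 41.67 / (75.0 * np.cos(np.radians(50.0)))   # longitudinal scale at 50 deg

LON_PTOL = (20.0, 54.0)       # Ptolemaic Londinium (Lambda, Phi)
LON_MOD = (-0.13, 51.51)      # approximate modern London (lambda, phi), placeholder

def residual_vector(Lambda, Phi, lam, phi):
    """Plotted position (after centring and scaling) minus the modern position."""
    x = M_LAMBDA * (Lambda - LON_PTOL[0]) + LON_MOD[0]
    y = M_PHI * (Phi - LON_PTOL[1]) + LON_MOD[1]
    return x - lam, y - phi

# Purely fictitious place used to exercise the function
print(residual_vector(18.0, 56.0, -1.5, 53.0))
```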
A new investigation of the Ptolemaic places of _Albion_ was carried out interdisciplinarily, whereby the analysis method described in Section 2 was applied; the results are given in Kleineberg et al. [11, p. 35 ff.]. The distortions of the Ptolemaic places in England could be described satisfactorily by shifts of groups of places and longitudinal and latitudinal scalings. The scale factors \(m_{\lambda}=1.35\) and \(m_{\phi}=1.30\) given by Kleineberg et al. [11, p. 203 f.] for _Albion_ are based on a deficient model for the distortions of Scotland; however, a recalculation only for England yields similar results: \(m_{\lambda}=1.379\pm 0.037\), \(m_{\phi}=1.363\pm 0.044\). Accordingly, an identical scale factor of ca. 1.35 can be assumed for longitude and latitude. Based on these values, a reapplication of the analysis method resulted in no significant changes.
Nine transformation units Al8–16 were determined for England. Table 1 gives their relative shifts with respect to the central transformation unit Al14 (\(\Lambda_{0\mathrm{R}}=20^\circ 32'\), \(\Phi_{0\mathrm{R}}=-15^\circ 13'\)) based on formula (2). The relative shifts are shown in Fig. 3 (_Mona insula_/Man from _Hibernia_, GH II.2, is assigned to Al10). For this plot in the modern reference system the relative shifts were scale-corrected by \(\Delta\Lambda_{0k}/m_{\lambda}\) and \(\Delta\Phi_{0k}/m_{\phi}\). The latitudinal relative shifts are similar and amount to at most ca. \(\tfrac{1}{2}^\circ\); the longitudinal relative shifts are nearly twice as large (in km). Hence, the shape of Ptolemaic England is more accurate in latitude than in longitude.
## 4 Scotland
The places of Ptolemaic Scotland and nearby islands are given in Table 2 (_ae._ = _aestuarium_/estuary; _fl._ = _fluvius_/river, in Scotland all positions of rivers refer to river mouths; _pr._ = _promontorium_/cape or foreland). Ptolemy divides his description of _Albion_ into the ’north side’ (Scotland Nos. 1–11), the ’west side’ (Scotland Nos. 12–17), the ’south side’, the ’east and south side’ (Scotland Nos. 42–55), ’towns’ (Scotland Nos. 66–86), and ’islands’ (Scotland Nos. 125–132). Among the latter, five points (Nos. 128–132) describe the position and shape of _Thule insula_. The place _Alauna_ (No. 79) does not occur in the \(\Omega\)-recension but in the X-manuscript.
Fig. 2 shows the Ptolemaic positions of _Albion_. Obviously, the northern part is turned to the east. It begins at Hadrian’s Wall located between Solway Firth and Newcastle upon Tyne, and, accordingly, almost corresponds to modern Scotland.
### Objective of the turning of Scotland
Jones and Keillar [10] assume an arithmetical origin for the turning of Ptolemaic Scotland, because even Ireland (_Hibernia_ in GH II.2), which was not a part of the Roman Empire, was described accurately by Ptolemy, and because, for example, the island _Ebuda_/Inner Hebrides (Islay according to [11, p. 31]) near Scotland is located relatively correctly in relation to the north of Ireland. The high accuracy of the Ptolemaic positions of _Hibernia_ is confirmed by an analysis according to the methods described in Section 2, see Kleineberg et al. [11, p. 24 ff.].
The reason for an intentional turning of Scotland by Ptolemy is surely the position of _Thule_. Ptolemy gives the latitude 63° for the centre of _Thule_. In GH I.7.1 Ptolemy says that Marinos located _Thule_ at 63° latitude (cf. [30, p. 69]), so that Ptolemy obviously adopted this latitude from Marinos (also suggested by e.g. [6, pp. 73, 77]). In antiquity _Thule_ constituted the northern limit of the _Oikoumene_ (cf. [30, p. 69, note 33]), so that Ptolemy was obliged to arrange _Albion_ south of _Thule_. Because of too large latitudes in the southern part of Great Britain (e.g. London: \(\phi=51^\circ 30'\), \(\Phi=54^\circ\)) and the latitudinal scaling (\(m_{\phi}>1\)), the problem arose that there was not enough space for the northern part of _Albion_ south of _Thule_. A way out was the turning of the northern part of _Albion_ into the free space in the east. (Dilke [5] assumes that a bending to the east is due to the traditional triangular shape of _Albion_ given by Eratosthenes; see also Tierney [32] in this regard.) A rotation in particular was suitable because, properly performed, it does not change the distances between the places of the rotated part. Hadrian’s Wall, the northern border of the Roman province _Britannia_, lent itself as the limit of the turning.
Ptolemy’s _Thule_ must be distinguished from Pytheas’ _Thule_, which presumably corresponds to the region of Trondheim in Norway (e.g. Hennig [8, p. 168]). Ptolemy’s _Thule_ is to be equated with the Shetland Islands (cf. Rivet and Smith [26, p. 146], Dilke [6, pp. 83, 136]). A reason for this is that according to Tacitus’ _Agricola_ 10 the Romans named an archipelago _Thule_ which came within the range of vision during their circumnavigation of Great Britain, and this archipelago was surely the Shetland Islands. That information was certainly known to and used by Ptolemy. Furthermore, in his _Mathematike Syntaxis_ (MS) II.6 Ptolemy assigns _Thule_ to the parallel at \(\Phi=63^\circ\) and Scythian people further north at \(\Phi=64^\circ 30'\). If Ptolemy referred to Pytheas’ _Thule_, the northern limit of the _Oikoumene_, he would not locate people north of it.
The _Orcades insulae_ (No. 127), in the south of _Thule_, are identified as the Orkney Islands (Rivet and Smith [26, p. 433 f.]). The latitudinal distance of 1° between the southern point of _Thule_ (62°40\({}^{\prime}\)) and the _Orcades insulae_ (61°40\({}^{\prime}\)) is in good agreement with the actual distance of 50\({}^{\prime}\) (from the southern tip of Shetland at \(\lambda=-1^\circ 20'\), \(\phi=59^\circ 50'\) to the centre of Orkney at \(\lambda=-3^\circ 00'\), \(\phi=59^\circ 00'\)), even if a scaling is assumed (e.g. factor 1.35: \(1^\circ/1.35\approx 44'\)). Likewise, the longitudinal distance of 1°40\({}^{\prime}\) between the easternmost point of _Thule_ (31°40\({}^{\prime}\)) and the _Orcades insulae_ (30°) coincides with the actual distance of 1°40\({}^{\prime}\) (points as above). Since the relative position of _Thule_ and the _Orcades insulae_ is correct, the _Orcades insulae_ were obviously not turned to the east together with Scotland, so that their latitude was a further limit for the latitudinal dimension of _Albion_.
As can be seen in Fig. 2, the rotation of Scotland amounts to about 90° (with respect to the actual situation). Richmond [25] assumes a rotation around _Vedra fl._/Wear (No. 56) by 90° and performs a rotational correction of the Ptolemaic positions by means of an exchange of \(\Lambda\) and \(\Phi\). Strang [27] also presumes a rotation around _Vedra fl._ and determines a rotation angle of 20° for the north of England and of 70° for Scotland with respect to the north of England, i.e. 90° in total. Furthermore, he determines an additional rotation of places in the south of Scotland. Rivet and Smith [26, p. 114] assume a rotation around _Itunae aestuarium_/Eden (No. 17) by ca. 50°. The reason for this rotation angle is the cape _Epidium pr._/Mull of Kintyre (No. 6), which appears in the description of _Hibernia_ as the island _Epidium_; the assumed rotation makes both places coincide. Equating the two places, however, is not compelling. Kleineberg et al. [11, p. 32] identify the island _Epidium_ as Arran. Furthermore, the rotation-corrected position of _Epidium pr._ does not need to coincide with the Ptolemaic position of _Epidium_ if Ptolemy adjusted the positions of _Hibernia_ to the previously rotated positions of Scotland.
### Adjustment model for the turning of Scotland
Supposing an intentional rotation for Ptolemaic Scotland, an accurate way of accomplishing it would have been a 3D rotation of points on the earth surface, performed by means of a rotation around an axis through the point of origin and a given pivot point. In the MS Ptolemy treats problems of spherical astronomy, among them, in MS VIII.5, the conversion of ecliptic longitude and latitude into right ascension and declination (see [24, p. 97 ff.]). This conversion can be achieved by a rotation of the ecliptic or equatorial reference system, respectively, around the first coordinate axis through the vernal equinox by the angle of the obliquity of the ecliptic. A similar problem is a 3D rotation of points around an arbitrary axis, which can be solved by at least three single rotations around coordinate axes. Accordingly, it is conceivable that Ptolemy was able to perform a 3D rotation around an axis (pivot point) by means of a decomposition into single rotations around coordinate axes and that he applied the procedure to the places of Scotland or to some selected places. This gave reason to model the turning of Scotland by a 3D rotation. But even if Ptolemy proceeded in another way, e.g. by rotating mapped points computationally or graphically in the plane, a 3D rotation is a good approximation for the unknown original procedure.
The rotation applied to Scotland is an anti-clockwise rotation around an axis through the origin of the Ptolemaic coordinate system and the pivot point at \(\Lambda_{\mathrm{P}}\), \(\Phi_{\mathrm{P}}\) by an angle \(\alpha\). The direction of the axis is given by the unit vector
\[\mathbf{p}=\begin{pmatrix}p_{1}\\ p_{2}\\ p_{3}\end{pmatrix}=\begin{pmatrix}\cos\Lambda_{\mathrm{P}}\cos\Phi_{\mathrm{P} }\\ \sin\Lambda_{\mathrm{P}}\cos\Phi_{\mathrm{P}}\\ \sin\Phi_{\mathrm{P}}\end{pmatrix}\;.\] (6)
The geometric transformation of the 3D position vector \(\mathbf{x}_{i}\) of a place into its rotated position vector \(\mathbf{X}_{i}\) by means of the mentioned rotation is
\[\mathbf{X}_{i}=\mathbf{R}(\Lambda_{\mathrm{P}},\Phi_{\mathrm{P}},\alpha)\; \mathbf{x}_{i}\;,\] (7)
where \(\mathbf{R}\) is a rotation matrix. \(\mathbf{R}\) can be based on different compositions of single rotations around coordinate axes. A description based on the elements of \(\mathbf{p}\) is (cf. Bronstein et al. [4, p. 301]):
\[\mathbf{R}=\begin{pmatrix}p_{1}^{2}(1-\cos\alpha)+\cos\alpha&p_{1}p_{2}(1-\cos \alpha)-p_{3}\sin\alpha&p_{1}p_{3}(1-\cos\alpha)+p_{2}\sin\alpha\\ p_{1}p_{2}(1-\cos\alpha)+p_{3}\sin\alpha&p_{2}^{2}(1-\cos\alpha)+\cos\alpha&p_ {2}p_{3}(1-\cos\alpha)-p_{1}\sin\alpha\\ p_{1}p_{3}(1-\cos\alpha)-p_{2}\sin\alpha&p_{2}p_{3}(1-\cos\alpha)+p_{1}\sin \alpha&p_{3}^{2}(1-\cos\alpha)+\cos\alpha\end{pmatrix}\;.\] (8)
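As a sketch of how equations (6)–(8) combine, the following Python snippet builds the axis-angle rotation matrix from a pivot point and a rotation angle. The parameter values in the sanity check are those discussed later in Section 4.3 and serve only as an illustration.

```python
import numpy as np

def pivot_axis(Lambda_P, Phi_P):
    """Unit vector of the rotation axis through the pivot point, eq. (6)."""
    LP, PP = np.radians([Lambda_P, Phi_P])
    return np.array([np.cos(LP) * np.cos(PP),
                     np.sin(LP) * np.cos(PP),
                     np.sin(PP)])

def rotation_matrix(Lambda_P, Phi_P, alpha):
    """Rotation matrix R(Lambda_P, Phi_P, alpha) of eq. (8) (axis-angle /
    Rodrigues form); alpha in degrees, anti-clockwise about the axis."""
    p1, p2, p3 = pivot_axis(Lambda_P, Phi_P)
    a = np.radians(alpha)
    c, s = np.cos(a), np.sin(a)
    return np.array([
        [p1*p1*(1-c) + c,    p1*p2*(1-c) - p3*s, p1*p3*(1-c) + p2*s],
        [p1*p2*(1-c) + p3*s, p2*p2*(1-c) + c,    p2*p3*(1-c) - p1*s],
        [p1*p3*(1-c) - p2*s, p2*p3*(1-c) + p1*s, p3*p3*(1-c) + c   ],
    ])

# Sanity check with illustrative parameter values:
R = rotation_matrix(18.0, 58.5, -80.0)
assert np.allclose(R @ R.T, np.eye(3))            # R is orthogonal
assert np.allclose(R @ pivot_axis(18.0, 58.5),    # the rotation axis is invariant
                   pivot_axis(18.0, 58.5))
```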
The objective is to estimate the unknowns \(\Lambda_{\mathrm{P}}\), \(\Phi_{\mathrm{P}}\), \(\alpha\) by means of a least-squares adjustment on the basis of control points. The observations of the adjustment model are the Ptolemaic coordinates \(\Lambda_{i}\) and \(\Phi_{i}\), which are composed to the observation vector
\[\mathbf{l}=(\ldots\Lambda_{i}\;\Phi_{i}\ldots)^{\top}\;.\] (9)
Using the unit sphere, the position vector \(\mathbf{X}_{i}\) expressed by \(\Lambda_{i}\) and \(\Phi_{i}\) is
\[\begin{pmatrix}X_{i1}\\ X_{i2}\\ X_{i3}\end{pmatrix} =\begin{pmatrix}\cos(\Lambda_{i}+v_{\Lambda\,i})\cos(\Phi_{i}+v_{ \Phi\,i})\\ \sin(\Lambda_{i}+v_{\Lambda\,i})\cos(\Phi_{i}+v_{\Phi\,i})\\ \sin(\Phi_{i}+v_{\Phi\,i})\end{pmatrix}\;,\] (10)
wherein the corrections \(v_{\Lambda\,i}\), \(v_{\Phi\,i}\) for the observations are introduced. The position vector \(\mathbf{x}_{i}\) is expressed by means of the presumable ancient longitude and latitude before the rotation. To them the distortion model (1) is applied so that longitude and latitude are replaced by \(m_{\lambda}\lambda_{i}+\Lambda_{0}\) and \(m_{\phi}\phi_{i}+\Phi_{0}\):
\[\begin{pmatrix}x_{i1}\\ x_{i2}\\ x_{i3}\end{pmatrix} =\begin{pmatrix}\cos(m_{\lambda}\lambda_{i}+\Lambda_{0})\cos(m_{ \phi}\phi_{i}+\Phi_{0})\\ \sin(m_{\lambda}\lambda_{i}+\Lambda_{0})\cos(m_{\phi}\phi_{i}+\Phi_{0})\\ \sin(m_{\phi}\phi_{i}+\Phi_{0})\end{pmatrix}\;.\] (11)
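A small Python sketch of the functional model in equations (7), (10) and (11): a position vector on the unit sphere is built from the modern coordinates via the distortion model and then rotated, giving the Ptolemaic longitude and latitude predicted by the model. The rotation matrix is passed in as an argument (e.g. built as in the previous snippet); all numerical inputs are left to the user.

```python
import numpy as np

def unit_vector(lon_deg, lat_deg):
    """3D position vector on the unit sphere, cf. eqs (10) and (11)."""
    lon, lat = np.radians([lon_deg, lat_deg])
    return np.array([np.cos(lon) * np.cos(lat),
                     np.sin(lon) * np.cos(lat),
                     np.sin(lat)])

def x_vector(lam, phi, m_lambda, m_phi, Lambda0, Phi0):
    """Pre-rotation position vector x_i built from the modern coordinates
    via the distortion model (1), cf. eq. (11)."""
    return unit_vector(m_lambda * lam + Lambda0, m_phi * phi + Phi0)

def predicted_ptolemaic(lam, phi, R, m_lambda, m_phi, Lambda0, Phi0):
    """Rotated (Ptolemaic) longitude/latitude implied by eq. (7)."""
    X = R @ x_vector(lam, phi, m_lambda, m_phi, Lambda0, Phi0)
    Lam = np.degrees(np.arctan2(X[1], X[0]))
    Phi = np.degrees(np.arcsin(np.clip(X[2], -1.0, 1.0)))
    return Lam, Phi
```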
The (unknown) differences in the shifts of groups of places must be neglected and average shift parameters \(\Lambda_{0}\) and \(\Phi_{0}\) must be used. They are additional unknowns of the adjustment model so that the vector of unknowns becomes:
\[\mathbf{u}=(\Lambda_{\mathrm{P}}\;\Phi_{\mathrm{P}}\;\alpha\;\Lambda_{0}\;\Phi _{0})^{\top}\;.\] (12)
The modern coordinates \(\lambda_{i}\), \(\phi_{i}\) are constants in the model as well as the scale factors \(m_{\lambda}\), \(m_{\phi}\), which are set at postulated values (resulting from the investigation of Ptolemaic England).
A rearrangement of formula (7) yields the condition equations
\[\mathbf{f}_{i} =\mathbf{R}(\Lambda_{\mathrm{P}},\Phi_{\mathrm{P}},\alpha)\; \mathbf{x}_{i}(\Lambda_{0},\Phi_{0})-\mathbf{X}_{i}(\Lambda_{i},\Phi_{i})= \mathbf{0}\] (13)
for \(i=1(1)n\), where \(n\) is the number of control points. Because of the dependencies of the three components of \(\mathbf{f}_{i}=(f_{i1}\;f_{i2}\;f_{i3})^{\top}\), two of them are sufficient for the system of condition equations of the adjustment. The two equations
\[\psi_{i1} =f_{i2}(\Lambda_{i},\Phi_{i},\Lambda_{\mathrm{P}},\Phi_{\mathrm{P }},\alpha,\Lambda_{0},\Phi_{0})=0\] (14)
\[\psi_{i2} =f_{i3}(\Phi_{i},\Lambda_{\mathrm{P}},\Phi_{\mathrm{P}},\alpha, \Lambda_{0},\Phi_{0})=0\]
are chosen so that \(\psi_{i1}\) and \(\psi_{i2}\) are composed for each control point.
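A sketch of the condition equations (13)–(14) for one control point, again in Python. The observation corrections are assumed to be already added to the Ptolemaic coordinates handed to the function, and the builder of the rotation matrix is passed in as a callable; all of this is a schematic illustration rather than the exact implementation used in the text.

```python
import numpy as np

def psi(obs, unknowns, lam, phi, m_lambda, m_phi, rotation_matrix):
    """Condition equations (14) for one control point: the second and third
    components of f_i = R x_i - X_i (eq. 13) must vanish.

    obs       : (Lambda_i + v_Lambda, Phi_i + v_Phi), corrected Ptolemaic coords [deg]
    unknowns  : (Lambda_P, Phi_P, alpha, Lambda_0, Phi_0) [deg]
    lam, phi  : modern coordinates (constants of the model) [deg]
    rotation_matrix : callable building R from (Lambda_P, Phi_P, alpha)
    """
    Lam, Phi = np.radians(obs)
    Lambda_P, Phi_P, alpha, Lambda0, Phi0 = unknowns

    # X_i from the corrected Ptolemaic coordinates, eq. (10)
    X = np.array([np.cos(Lam) * np.cos(Phi),
                  np.sin(Lam) * np.cos(Phi),
                  np.sin(Phi)])

    # x_i from the modern coordinates and the distortion model, eq. (11)
    lon, lat = np.radians([m_lambda * lam + Lambda0, m_phi * phi + Phi0])
    x = np.array([np.cos(lon) * np.cos(lat),
                  np.sin(lon) * np.cos(lat),
                  np.sin(lat)])

    f = rotation_matrix(Lambda_P, Phi_P, alpha) @ x - X
    return f[1:]            # (psi_i1, psi_i2) = (f_i2, f_i3)
```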
Equations (14) lead to an adjustment with nonlinear condition equations containing unknowns (see e.g. [20]). The condition equations must be linearised and the unknowns must be determined iteratively. The linearisation is based on approximate values \(\bar{\mathbf{l}}^{0}=\mathbf{l}+\mathbf{v}^{0}\) and \(\mathbf{u}^{0}\):
\[\mathbf{B}(\bar{\mathbf{l}}-\bar{\mathbf{l}}^{0})+\mathbf{A}( \mathbf{u}-\mathbf{u}^{0})+\psi(\bar{\mathbf{l}}^{0},\mathbf{u}^{0}) =\mathbf{0}\;,\] (15)
where \(\bar{\mathbf{l}}\) are the adjusted observations and \(\mathbf{\psi}=(\ldots\psi_{i1}\;\psi_{i2}\ldots)^{\top}\). The matrices \(\mathbf{B}\) and \(\mathbf{A}\) contain the partial derivatives of \(\mathbf{\psi}_{i}\) with respect to the observations and unknowns, respectively, which are computed on the basis of the approximate values. By means of \(\bar{\mathbf{l}}-\bar{\mathbf{l}}^{0}=\bar{\mathbf{l}}-\mathbf{l}-\mathbf{v}^{ 0}=\mathbf{v}-\mathbf{v}^{0}\), the common condition equations
\[\mathbf{B}\mathbf{v}+\mathbf{A}(\mathbf{u}-\mathbf{u}^{0})- \mathbf{w}=\mathbf{0}\] (16)
\[\mathbf{w}=\mathbf{B}\mathbf{v}^{0}-\mathbf{\psi}(\bar{\mathbf{l} }^{0},\mathbf{u}^{0})\] (17)
of a linear Gauß-Helmert model are obtained, where \(\mathbf{w}\) is the vector of misclosures.
The minimisation of the objective function \(\mathbf{v}^{\top}\mathbf{P}\mathbf{v}\) with the side conditions (16) is carried out as usual by means of the Lagrange multipliers (see e.g. [22, p. 156 ff.]). \(\mathbf{P}\) is the weight matrix of the stochastic part
\[\mathbf{C}_{l}=\sigma_{0}^{2}\mathbf{P}^{-1}\] (18)
of the adjustment model, where \(\mathbf{C}_{l}\) is the covariance matrix of the observations and \(\sigma_{0}^{2}\) is the variance of unit weight. Possible correlations are not considered because they are unknown. Since (unknown) relative shifts of places cannot be taken into account in the functional model (14), possible shifts are modelled by larger standard deviations in \(\mathbf{C}_{l}\).
The unknowns are in part highly correlated (a change in \(\Lambda_{\mathrm{P}}\), \(\Phi_{\mathrm{P}}\) is compensated by \(\Lambda_{0}\), \(\Phi_{0}\), \(\alpha\)), which leads to large uncertainties of the adjusted unknowns. A way out is an ‘adjustment with stochastic advance information’ (see e.g. [22, p. 240 f.], [1, p. 167 ff.]), in which additional pseudo-observations are introduced for the unknowns. Accordingly, the observations \(l_{\Lambda_{0}}\) and \(l_{\Phi_{0}}\) are added for \(\Lambda_{0}\) and \(\Phi_{0}\), because advance information on them is available from the investigation of England. In addition to the condition equations (14), the equations
\[l_{\Lambda_{0}}+v_{\Lambda_{0}}-\Lambda_{0} =0\] (19)
\[l_{\Phi_{0}}+v_{\Phi_{0}}-\Phi_{0} =0\]
appear.
In a further adjustment, \(\alpha\) is not treated as an unknown but is fixed at a constant value.
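The iteration outlined in equations (15)–(19) can be condensed into a small generic solver. The following Python sketch is one possible implementation, with numerical Jacobians instead of analytical ones and with pseudo-observations handled simply by appending them to the observation vector and to the condition function; it is a schematic illustration, not the computation actually used for Table 3.

```python
import numpy as np

def gauss_helmert(psi, l, u0, P, jac_eps=1e-6, tol=1e-10, max_iter=50):
    """Iterative Gauss-Helmert adjustment (condition equations with unknowns).

    psi : callable(l_bar, u) -> 1D array of condition values (should vanish)
    l   : observation vector; u0: approximate unknowns
    P   : weight matrix of the observations, cf. eq. (18)
    """
    l = np.asarray(l, float)
    u = np.asarray(u0, float)
    v = np.zeros_like(l)
    Q = np.linalg.inv(P)                        # cofactor matrix of the observations

    def num_jac(f, x):                          # forward-difference Jacobian
        f0 = np.atleast_1d(f(x))
        J = np.zeros((f0.size, x.size))
        for k in range(x.size):
            dx = np.zeros_like(x)
            dx[k] = jac_eps
            J[:, k] = (np.atleast_1d(f(x + dx)) - f0) / jac_eps
        return J

    for _ in range(max_iter):
        l_bar = l + v
        B = num_jac(lambda lb: psi(lb, u), l_bar)         # d(psi)/d(observations)
        A = num_jac(lambda uu: psi(l_bar, uu), u)         # d(psi)/d(unknowns)
        w = B @ v - np.atleast_1d(psi(l_bar, u))          # misclosures, eq. (17)
        Wc = np.linalg.inv(B @ Q @ B.T)
        du = np.linalg.solve(A.T @ Wc @ A, A.T @ Wc @ w)  # update of the unknowns
        k = Wc @ (w - A @ du)                             # Lagrange multipliers
        v = Q @ B.T @ k                                   # corrections of the observations
        u = u + du
        if np.max(np.abs(du)) < tol:
            break

    r = np.atleast_1d(psi(l + v, u)).size - u.size        # redundancy
    s0 = np.sqrt(v @ P @ v / r) if r > 0 else np.nan      # std. dev. of unit weight
    return u, v, s0
```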
### The rotation of Scotland
The adjustment described in the last section was applied to: 1) the identifications for the Ptolemaic places given by Rivet and Smith [26, p. 237 ff.] in conjunction with the ancient coordinates a) of \(\Omega\) and b) of X; 2) the identifications given by Strang [28] in conjunction with the ancient coordinates a) of \(\Omega\) and b) of X; 3) the identifications and ancient coordinate variants determined by the present work (cf. Section 4.5).
Firstly, the computations 1) and 2) are considered. In the case of 1) no identifications are given for the places Nos. 8, 73, 83. Table 3 gives the a priori standard deviations \(\sigma_{\Lambda i}\) and \(\sigma_{\Phi i}\) of the observations \(\Lambda_{i}\) and \(\Phi_{i}\) and the standard deviation of unit weight \(s_{0}\) from the adjustment; its a priori value is \(\sigma_{0}=1\). \(\sigma_{\Lambda i}\) and \(\sigma_{\Phi i}\) were chosen such that the overall model test (significance level 5%) showed no errors, i.e. \(s_{0}\approx\sigma_{0}\). For the constants \(m_{\lambda}\) and \(m_{\phi}\) the value 1.35 was adopted from England (see Section 3). For the observations \(l_{\Lambda_{0}}\) and \(l_{\Phi_{0}}\) of the unknowns \(\Lambda_{0}\) and \(\Phi_{0}\), the results \(\Lambda_{0\,8}=21^\circ 15'\), \(\Phi_{0\,8}=-15^\circ 28'\) of the adjustment of the northernmost transformation unit Al8 in England (cf. Fig. 3) are first guidelines. However, preliminary investigations of Scotland showed that the majority of the Scottish places are shifted by ca. 1° further towards the north on average (see Section 4.5), so that \(l_{\Phi_{0}}=-15^\circ 28'+1^\circ=-14^\circ 28'\) was used. The standard deviations used are \(\sigma_{\Lambda_{0}}=30'\) and \(\sigma_{\Phi_{0}}=15'\).
The results of the adjustment are given in Table 3. Fig. 4 shows the adjusted pivot points P\({}_{1}\) and P\({}_{2}\) for the computations 1.b) and 2.b), respectively, and the places in the vicinity (X-coordinates). The geographic coordinates were regarded as two-dimensional coordinates, and the confidence ellipses of P\({}_{1}\) and P\({}_{2}\) were computed from the cofactor matrix of the unknowns and \(s_{0}\) as usual in the adjustment of geodetic networks (e.g. [22, p. 258]). Fig. 4 shows the ellipses based on a probability of 80%.
Taking into account the uncertainty of the results, the estimated rotation parameters are \(\Lambda_{\mathrm{P}}\approx 18^\circ\), \(\Phi_{\mathrm{P}}\approx 58^\circ 30'\), and \(\alpha\approx-80^\circ\) in each case. The rotation parameters in conjunction with \(\Lambda_{0}\) and \(\Phi_{0}\) apply to a transformation between the Ptolemaic and modern coordinates. Taken alone, the rotation parameters are also approximations for the transformation between the Ptolemaic and the unrotated ancient coordinates, provided that Scotland was oriented correctly before its rotation. In the following, possible original rotation parameters are discussed.
For the pivot point, firstly places of the GH are taken into account which are located near the adjustment result. Assuming that _Itunae ae._/River Eden (No. 17; cf. [31], [26, p. 380]) is rotated, its rotation-corrected position is used as a criterion. For the rotation angle, 80°–90° are presumed according to the result of the adjustment. The results for the considered places (cf. Fig. 4) are: pivot point _Trimontium_ (No. 71): _Itunae ae._ (No. 17) is too far east; pivot point _Vinovium_ (No. 88): _Itunae ae._ (No. 17) is too far west; pivot point _Epiacum_ (No. 87): _Itunae ae._ (No. 17) is too far south with respect to _Moricambe ae._/Morecambe Bay (No. 18; [11, p. 43]) and _Epiacum_/Wreay (No. 87; [11, p. 51]); pivot point _Itunae ae._ (No. 17): _Itunae ae._ is too far east with respect to _Moricambe ae._ (No. 18) and _Epiacum_ (No. 87); pivot point _Moricambe ae._ (No. 18): _Itunae ae._ (No. 17) is positioned without contradiction. A somewhat better result than by means of _Moricambe ae._ (No. 18) is achieved by the point P\({}_{\mathrm{P}}=(\Lambda=18^\circ\), \(\Phi=58^\circ 30'\)), which corresponds to the result of the adjustment. By means of this point, _Itunae ae._ (No. 17) is positioned correctly north of _Moricambe ae._ (No. 18). Moreover, the round coordinate values of P\({}_{\mathrm{P}}\) argue for this pivot point because they are easy to handle and possibly eased the procedure of the rotation (e.g. a calculation). Nonetheless, no certain conclusion on the exact position of the pivot point can be drawn here owing to the inaccuracy of the ancient coordinates; due to the arguments for P\({}_{\mathrm{P}}\), however, this point is assumed to be the pivot point in the following.
A rotation angle of 90° is less probable because the correction of this rotation leads to a northern direction of the east coast between _Boderia aestuarium_/Firth of Forth (No. 54; [11, p. 47]) and _Taezalorum pr._/Kinnairds Head (No. 50; [11, p. 46]), in disagreement with the actual northeastern direction. For this coast a more accurate direction can be expected, because the southern part of the east coast between _Vedra fl._/Wear (No. 57; [11, p. 46]) and _Boderia aestuarium_ has a correct northwestern direction. Hence, a rotation angle \(<90^\circ\) is probable. As in the case of the pivot point, a round figure, namely 80°, comes into consideration, which is in agreement with the adjustment result. Probably, Ptolemy chose a rotation by which _Dumna insula_ (No. 126) can be positioned south of the latitude of the unrotated _Orcades insulae_ (No. 127), cf. Fig. 2 and Section 4.1. Moreover, an angle of 80° in conjunction with P\({}_{\mathrm{P}}\) is probable because the correction of this rotation positions the four places _Verubium_ (No. 43), _Ila fl._ (No. 44), _Ripa alta_ (No. 45), and _Varar ae._ (No. 47; not \(\Omega\)-coordinates, only X) of the (actual) east coast almost exactly at 16°30\({}^{\prime}\) longitude, cf. Table 2, column \(\Lambda^{*}\) and Fig. 6. It is likely that the northeastern direction of this northernmost section of the east coast was not known to Ptolemy and that he positioned the places northwards along the same meridian. For the mentioned reasons, the angle \(\alpha_{\mathrm{P}}=-80^\circ\) is assumed to be the rotation angle in the following.
For computation 3) the average shift parameters of Scotland were newly determined, see Section 4.5. The resulting parameters and standard deviations were used for the observations \(l_{\Lambda_{0}}\) and \(l_{\Phi_{0}}\). \(\alpha\) was held fixed at \(\alpha=\alpha_{\mathrm{P}}\). The estimated pivot point P\({}_{3}\) of computation 3) is located near P\({}_{\mathrm{P}}\) (cf. Table 3). Its confidence ellipse is significantly smaller than those of computations 1) and 2) and contains no place of the GH but only P\({}_{\mathrm{P}}\) (cf. Fig. 4). Accordingly, this subsequent adjustment confirms the pivot point P\({}_{\mathrm{P}}\) and the angle \(\alpha_{\mathrm{P}}\).
Fig. 5 shows the places of _Albion_ (\(\Omega\)-recension), wherein the places of Scotland are rotation-corrected by means of
\[\begin{pmatrix}\Lambda_{i}^{*}\\ \Phi_{i}^{*}\end{pmatrix}=\mathbf{R}(\mathrm{P}_{\mathrm{P}},\alpha_{\mathrm{P }})^{-1}\begin{pmatrix}\Lambda_{i}\\ \Phi_{i}\end{pmatrix}\;,\] (20)
with the inverse matrix of \(\mathbf{R}\) (formula (8)). The Scottish part is in good agreement with the actual shape of Scotland. The latitudinal distance from _Cantium pr._/South Foreland (No. 41; [11, p. 45]) at the south coast of Great Britain to _Tarvedum sive Orcas pr._ (No. 11) at the north coast is \(65^\circ-54^\circ=11^\circ\), whereas the true latitudinal distance is ca. \(7^\circ 30'\). Taking into account the average relative latitudinal shift of Scotland of ca. 1° (see Section 4.5) and the scale factor \(m_{\phi}=1.35\), the ancient distance becomes \((11^\circ-1^\circ)/1.35\approx 7^\circ 24'\), which differs by only 6\({}^{\prime}\) from the actual distance.
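Formula (20) is written as a shorthand on the coordinate pair; in practice the inverse rotation acts on the 3D position vector on the unit sphere, and the corrected longitude and latitude are read off afterwards. A minimal Python sketch of this rotation correction, assuming a rotation matrix built as in eq. (8):

```python
import numpy as np

def rotation_correct(Lam_deg, Phi_deg, R):
    """Rotation-corrected coordinates (Lambda*, Phi*) in the sense of eq. (20):
    the inverse rotation R^{-1} = R^T (R is orthogonal) is applied to the 3D
    position vector, and longitude/latitude are recovered from the result."""
    Lam, Phi = np.radians([Lam_deg, Phi_deg])
    X = np.array([np.cos(Lam) * np.cos(Phi),
                  np.sin(Lam) * np.cos(Phi),
                  np.sin(Phi)])
    x = R.T @ X
    Lam_star = np.degrees(np.arctan2(x[1], x[0]))
    Phi_star = np.degrees(np.arcsin(np.clip(x[2], -1.0, 1.0)))
    return Lam_star, Phi_star
```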
The correction of Ptolemy’s rotation yields an overlap of a few Scottish places with places from Ptolemy’s description of _Hibernia_. That does not contradict the assumed rotation if Ptolemy compiled his description of _Hibernia_ after the rotation of Scotland.
In Book VIII of the GH positions of the so-called _poleis episemoi_ (noteworthy cities) are given. Among the places in Ptolemaic Scotland and the northern islands, _Pinnata castra_ (No. 83), _Dumna insula_ (No. 126), and _Thule_ (centre, No. 132) are listed in Book VIII. The positions are expressed there by means of the time difference \(A\) (in hours) from the location to _Alexandria_ and the length of the longest day \(M\) (in hours) at the location. The coordinates in Book VIII were presumably determined from the coordinates in the location catalogue; the \(M\)-data probably originate from a linear interpolation of a compilation of parallels with specific \(M\) in MS II.6, see Marx [16]. The \(A\) of Nos. 83 and 126 are probably based on the longitude of _Alexandria_ \(\Lambda_{\mathrm{A}}=60^\circ\) (given in GH VIII.15.10), whereas \(A\) of No. 132 is better explicable by \(\Lambda_{\mathrm{A}}=60^\circ 30'\) (GH IV.5.9). \(\Phi\) (only \(\Omega\)) and \(M\) of No. 126 correspond to those of the 27th parallel in MS II.6, and \(\Phi\) and \(M\) of No. 132 correspond to those of the 29th parallel. No. 83 is situated between the 25th and 26th parallel. Its \(M=18^{\mathrm{h}}30^{\mathrm{m}}\) is possibly the result of a linear interpolation which yields \(\approx 18^{\mathrm{h}}27^{\mathrm{m}}\), so that \(M\) of No. 83 is a rounded value. \(\Lambda\) and \(\Phi\) of the three considered places are in good agreement with the coordinates of Book VIII. That does not hold true, however, for the rotation-corrected coordinates \(\Lambda^{*}\) and \(\Phi^{*}\) (Table 2) so that, obviously, the coordinates of Book VIII originate from Ptolemy’s rotated coordinates of the location catalogue.
The 23rd parallel in MS II.6 is at 56° and ‘goes through the middle of Great Brittania’ (Toomer [33, p. 88]), Ptolemy’s name for _Albion_ in the MS. Latitude 56° is in agreement with the latitudinal dimension of _Albion_ in the GH (cf. Fig. 2) but not with its rotation-corrected dimension (Fig. 5). Accordingly, even though the MS was written before the GH (Dilke [6, p. 212, note 30]), the rotation of Scotland was presumably performed before the preparation of MS II.6.
### Islands, waters and points at the coast
The islands near Scotland given by Ptolemy are located only in the north and northeast of Ptolemaic Scotland (cf. Fig. 2). _Dumna insula_ (No. 126) and _Orcades insulae_ (No. 127) are located at \(\Lambda=30^\circ\), the centre of _Thule_ at \(\Lambda=30^\circ 20'\). On Ptolemy’s localisation of the _Orcades insulae_ and _Thule_ see Section 4.1. _Dumna insula_ was most likely rotated together with Scotland and can be identified as Lewis (see Rivet and Smith [26, p. 342] and Section 4.5) in the east of Scotland.
The shape of _Albion_ gives rise to the question of whether the longitudinal dimension of Ptolemaic Scotland was adjusted to a predetermined longitude of _Thule_ or whether the longitude of _Thule_ was determined by the longitudinal dimension of Ptolemaic Scotland. The latter is certainly the case because the longitudinal dimension of Ptolemaic Scotland corresponds to its latitudinal dimension before the rotation, which is, apart from further systematic errors, correct (cf. Section 4.3). Accordingly, accurate data sources can be assumed, by which the longitudinal dimension of the rotated Ptolemaic Scotland was determined. Presumably, Ptolemy simply equated the longitude of the _Orcades insulae_ with that of _Dumna insula_ so that also the longitude of _Thule_ was given by the relative position of the _Orcades insulae_ and _Thule_ (cf. Section 4.1).
_Scitis insula_ is usually identified as Skye (e.g. Watson [34], Rivet and Smith [26, p. 452]). Its odd Ptolemaic position is explicable by an error in the original latitude before the rotation, see Section 4.5.
Fig. 6 shows the rotation-corrected Scottish points and waters at the coast (formula (20), coordinates see Table 2), which are connected by straight lines indicating the shape of Ptolemaic Scotland. For comparison, the actual coast is shown in Fig. 6 together with the known and assumed modern counterparts (cf. Table 2). Some important identifications are considered in the following.
_Tarvedum sive Orcas pr._ (No. 11) can be expected to be that part of Great Britain which was located nearest to the _Orcades insulae_ (cf. Rivet and Smith [26, p. 115]). Therefore, it is usually identified as Dunnet Head at the north coast (e.g. Rivet and Smith [26, p. 422], Strang [28]). This identification, however, contradicts the rotation of Scotland. In Ptolemy’s description, _Tarvedum sive Orcas pr._ is the last, easternmost place of the north side, so that it should correspond to the northwestern corner of Scotland, i.e. Cape Wrath (also suggested by Bradley [3, p. 7]). It is unlikely that this characteristic place was unknown to Ptolemy because it was surely of importance for ancient navigation. Possibly, the name _Orcas pr._ was defined by Ptolemy himself because the point is that corner of _Albion_ which is nearest to the position of the _Orcades insulae_ in his description of _Albion_. Further evidence for the identification as Cape Wrath is provided by the consistency of the Ptolemaic coordinates of _Tarvedum sive Orcas pr._ with those of other places in the northwest of Scotland, see Section 4.5.
Since _Tarvedum sive Orcas pr._ is equated with Cape Wrath, the three next places of the actual north and east coast _Virvedrum pr._ (No. 42), _Verubium pr._ (No. 43), and _Ila fl._ (No. 44) can be identified as Dunnet Head, Duncansby Head (also suggested by Bradley [3, p. 8]), and River Wick, as it is indicated by the actual shape of the coast.
From the rotation of Scotland it follows that _Nabarus fl._ (No. 10) is located at the west coast of Scotland. However, _Nabarus fl._ is usually identified as the River Naver at the Scottish north coast because of a presumable relation between the ancient and modern name (cf. Rivet and Smith [26, p. 422]). An explanation for this seeming disagreement is given in the following section.
_Novantarum chersonesus et pr._ (No. 1) is usually identified as Mull of Galloway (e.g. Rivet and Smith [26, p. 426 f.]). Since, however, it is the first place in Ptolemy’s description of the north side of _Albion_, which is confirmed by its coordinates, it is possibly rather the northwestern tip (Corsewall Point) of Rhinns of Galloway than the southeastern tip (Mull of Galloway).
### Application of the analysis method
The analysis method described in Section 2 was applied to the rotation-corrected Ptolemaic coordinates \(\Lambda^{*}_{i}\) and \(\Phi^{*}_{i}\) (formula (20)). In distortion model (1) \(\Lambda\) and \(\Phi\) had to be replaced by \(\Lambda^{*}\) and \(\Phi^{*}\). Before the rotation all considered ancient coordinate variants of a place were combined so that all possible ancient point variants were generated (e.g. in the case of two variants for \(\Lambda\) and \(\Phi\), four point variants are possible). The \(\Omega\)-coordinates and differing variants from X, Müller [18], and Nobbe [23] were used. In addition to the identifications mentioned in Section 4.4, those of Hazlitt [7], Müller [18], Rivet and Smith [26], Thomas [31], and Watson [34] were taken into consideration (for a compilation see Kleineberg et al. [11, p. 42 ff.]).
The a priori standard deviations \(\sigma_{\Lambda^{*}i}\) and \(\sigma_{\Phi^{*}i}\) of \(\Lambda^{*}_{i}\) and \(\Phi^{*}_{i}\) were chosen on the basis of those resulting from the adjustment of the English places. The smallest among them are 12\({}^{\prime}\) for \(\Lambda\) and 9\({}^{\prime}\) for \(\Phi\). Converting 12\({}^{\prime}\) from the mean latitude of England to the mean latitude of Scotland yields \(13.4^{\prime}\), so that \(\sigma_{\Lambda^{*}i}=13^{\prime}\), \(\sigma_{\Phi^{*}i}=9^{\prime}\) were applied. In the cases of seemingly rough coordinate values, larger values were used: \(\sigma_{\Lambda^{*}i}=18^{\prime}\) for \(\Lambda^{*}\) of No. 13 (\(\Phi=61^\circ\)), No. 15 (\(\Phi=60^\circ\)), No. 71 (\(\Phi=59^\circ\)); \(\sigma_{\Phi^{*}i}=12^{\prime}\) for \(\Phi^{*}\) of No. 14 (\(\Lambda=19^\circ\)), No. 71 (\(\Lambda=19^\circ\)).
The identifications determined by the analysis method and their transformation units are given in Table 2. For a few places no identification and/or transformation unit are given because either the modern or the ancient coordinates turned out to be inconsistent. Additionally, Fig. 8 shows a modern map of the Ptolemaic places including the transformation units. Column ’SI’ of Table 2 contains the sources of the identifications; they are: ’B’: Bradley [3], ’H’: Hazlitt [7], ’M’: Müller [18], ’R’: Rivet and Smith [26], ’T’: Thomas [31]. Column ’SC’ gives the sources of the determined ancient coordinate variants; in addition to \(\Omega\) and X there is only ’M’ for Müller [18] in the case of No. 73 (\(\Lambda=21^\circ 20'\)). In columns ’\(\Delta\lambda\)’ and ’\(\Delta\phi\)’ the residuals
\[\Delta\lambda_{i} =\bar{\lambda}_{i}-\lambda_{i}\] (21)
\[\Delta\phi_{i} =\bar{\phi}_{i}-\phi_{i}\]
after the rectifying transformation (3) are given (based on \(\Lambda^{*}_{i}\), \(\Phi^{*}_{i}\)).
Table 4 gives the relative shifts with respect to transformation unit Al7 (formula (2)) with \(\Lambda_{0\,7}=21^\circ 12'\) and \(\Phi_{0\,7}=-15^\circ 27'\). Fig. 7 shows the scale-corrected relative shifts \(\Delta\Lambda_{0k}/m_{\lambda}\) and \(\Delta\Phi_{0k}/m_{\phi}\) in the modern reference system. The relative shifts are smaller than 2°. The latitudinal shifts with respect to Al7 are systematically northwards, on average ca. 1°. Al7 is significantly shifted with respect to the neighbouring Al8 in Ptolemaic England only in longitude (\(\Lambda_{0\,8}=22^\circ 15'\) and \(\Phi_{0\,8}=-15^\circ 28'\)).
In Table 4 the a posteriori standard deviations of the ancient coordinates resulting from the adjustment of the transformation units are given. They are scale-corrected by means of
\[s_{\lambda^{*}i} =s_{\Lambda^{*}i}/m_{\lambda}\] (22)
\[s_{\phi^{*}i} =s_{\Phi^{*}i}/m_{\phi}\;;\]
the few coordinates with larger a priori standard deviations (see above) were not involved. Neglecting possibly underestimated values, the uncertainty is about 9–20 km and corresponds to that in England.
On the basis of the 45 places assigned to transformation units, the scale factors were adjusted (using model (1)). The results are \(m_{\lambda}=1.38\pm 0.06\) and \(m_{\phi}=1.32\pm 0.08\) based on the a priori standard deviations and \(m_{\lambda}=1.34\pm 0.04\) and \(m_{\phi}=1.38\pm 0.07\) based on the a posteriori standard deviations, which do not differ significantly from the postulated value 1.35.
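The scale-factor estimate of model (1) for one coordinate component reduces to a weighted linear fit of ancient against modern coordinates. The following Python sketch shows such a fit for a single group of places; it ignores the handling of separate transformation units and is only meant to illustrate how values such as \(m_{\lambda}=1.38\pm 0.06\) arise, with the input data left to the user.

```python
import numpy as np

def fit_scale_and_shift(modern, ancient, weights=None):
    """Weighted least-squares fit of  ancient = m * modern + shift
    (distortion model (1), one coordinate component at a time).
    Returns the estimates (m, shift) and their standard deviations."""
    x = np.asarray(modern, float)
    y = np.asarray(ancient, float)
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    A = np.column_stack([x, np.ones_like(x)])    # design matrix
    N = A.T @ (w[:, None] * A)                   # normal-equation matrix
    u = np.linalg.solve(N, A.T @ (w * y))        # (m, shift)
    v = A @ u - y                                # residuals
    s0sq = (v @ (w * v)) / (len(x) - 2)          # variance of unit weight
    cov = s0sq * np.linalg.inv(N)
    return u, np.sqrt(np.diag(cov))
```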
Following the rating given by Kleineberg et al. [11, p. 37 ff.], at least twelve places can be considered to be surely identified: Nos. 15, 16, 17, 47, 48, 49, 51, 54, 71, 72, 80, 127. Four of them, Nos. 47, 48, 49, 51, are in transformation unit Al3 in the northern part of Scotland. The estimated uncertainties in Al3 are \(s_{\lambda^{*}i}=9^{\prime}\) and \(s_{\phi^{*}i}=5^{\prime}\) (formula (22)) so that even this northern part turned out to be accurately determined.
_Nabarus fl._ (No. 10) is consistent in transformation unit Al2 with its identification River Naver at the north coast. According to the rotation of Scotland, however, it should be at the west coast. This discrepancy is explicable by the different shifts of the northwestern places in Al1 and the northeastern places in Al2 (rotation-corrected situation, cf. Fig. 7). Possibly, Ptolemy had different data sources for these two regions, or his source already contained the significant relative shift of both regions. Viewed from Al1, the places of Al2 are shifted in a west-southwestward direction. That locates _Nabarus fl._ at a longitude coinciding with the west coast (cf. Fig. 6). Accordingly, Ptolemy assumed that _Nabarus fl._ is located at the west coast, which yielded the position at the north coast in his rotated Scotland.
The assignment of _Ripa Alta_/Tarbat Ness (No. 45) to Al1 is somewhat questionable because of its distant location at the east coast. The assignment has been kept because the residuals in Al1 are very small and the identification is inconsistent in the nearest transformation unit Al3. Possibly, the accurate position of _Ripa Alta_ with respect to the west coast originates from a common data source arisen from a circumnavigation. Alternatively, _Ripa Alta_ could be identified as Ord of Caithness (according to Thomas [31]), which is consistent in Al3.
The odd Ptolemaic position of _Skitis insula_ (No. 125) is explicable by means of an identification as Skye in conjunction with an error in the presumed original latitude of 66° (\(\approx\Phi^{*}\)) before the rotation. Thereby, it is consistent in transformation unit Al1 with its rotation-corrected longitude \(\Lambda^{*}=13^\circ 42'\) (possibly originally \(13^\circ 40'\)). If, for example, the point \(\lambda=-6^\circ 18'\), \(\phi=57^\circ 42'\) at the north coast of Skye is chosen and the latitude is changed to 64°, an assignment to Al1 and adjustment (model (1) with \(m_{\lambda}=m_{\phi}=1.35\)) indicate no error and the transformation (3) yields the residuals \(\Delta\lambda=-18'\), \(\Delta\phi=0'\). If Point of Sleat at \(\lambda=-6^\circ 01'\), \(\phi=57^\circ 01'\) at the south coast of Skye is chosen and the latitude is changed to \(63^\circ\), no error is indicated and the residuals are \(\Delta\lambda=-4'\), \(\Delta\phi=3'\).
For the estimation of the pivot point (Section 4.3) on the basis of the results of the present analysis, the average shift parameters of Scotland were of interest. They were determined by means of model (1), wherein the shifted groups of places were not taken into account. The result based on the 45 places with assignment to a transformation unit is: \(\Lambda_{0}=21^\circ 30'\pm 5'\), \(\Phi_{0}=-14^\circ 28'\pm 4'\).
## 5 Summary and conclusion
The turning of Ptolemy’s places north of Hadrian’s Wall (Ptolemaic Scotland) to the east was modelled by a 3D rotation, which turned out to describe the turning satisfactorily. The pivot point and the rotation angle \(\alpha\) of a transformation between the Ptolemaic coordinates (\(\Lambda_{i}\), \(\Phi_{i}\)) and modern coordinates (\(\lambda_{i}\), \(\phi_{i}\)) were determined by means of adjustment theory on the basis of different data sets for the identifications of the Ptolemaic places. From the results, conclusions about the original rotation were derived. The presumable pivot point was at (near) \(\Lambda_{\mathrm{P}}=18^\circ\), \(\Phi_{\mathrm{P}}=58^\circ 30'\); the presumable rotation angle was ca. 80°.
Based on the resulting parameters, rotation-corrected Ptolemaic coordinates \(\Lambda^{*}_{i}\) and \(\Phi^{*}_{i}\) were computed. The remaining differences between the \(\Lambda^{*}_{i}\), \(\Phi^{*}_{i}\) and \(\lambda_{i}\), \(\phi_{i}\) were modelled by scaling errors and shifts. Groups of places with homogeneous shifts (transformation units), in conjunction with best fitting modern counterparts of the Ptolemaic places, were determined by means of a geodetic-statistical analysis method. Based on the results, scale factors \(m_{\lambda}\), \(m_{\phi}\) of a transformation between the \(\lambda_{i}\), \(\phi_{i}\) and \(\Lambda^{*}_{i}\), \(\Phi^{*}_{i}\) were determined by means of an adjustment. \(m_{\lambda}\) and \(m_{\phi}\) are ca. 1.35, in agreement with the result for the places south of Hadrian’s Wall. The factor \(>1\) is caused by Ptolemy’s underestimation of the circumference of the earth. The transformation units have relative shifts \(<\)2° and coordinate accuracies of about 10–20 km, which shows that Ptolemy had an extensive knowledge of Scotland, possibly owing to Roman military sources.
For the places of Great Britain there was not enough space in Ptolemy’s description of the _Oikoumene_, owing to the scaling error and the preset latitude of _Thule_. Probably Ptolemy determined the geographic coordinates of the Scottish places, or of some selected places, first and then rotated them in order to satisfy the latitudinal limit given by _Thule_. From the position of the _Orcades insulae_/Orkney north of Ptolemaic Scotland it can be deduced that this island group was not rotated.
Ptolemy’s _Thule_ must be distinguished from Pytheas’ _Thule_ and is to be equated with Shetland. Reasons for this are the report of the sighting of _Thule_ by the Romans in Tacitus’ _Agricola_, the localisation of people north of _Thule_ in MS II.6, and the agreement of the relative position of _Thule_ and the _Orcades insulae_ with that of Shetland and Orkney.
Ptolemy’s procedure in the determination of the positions of Scotland was presumably: determining the latitude of the _Orcades insulae_ in the south of _Thule_; choosing a pivot point and a rotation angle such that the rotated _Dumna insula_ is located south of the _Orcades insulae_; determining the positions of Ptolemaic Scotland based on a rotation; determining the longitudes of the _Orcades insulae_ and _Thule_ such that they are located north of the eastern end of the rotated Scotland.
Further main results concerning questions about Ptolemaic positions are the following. The generally accepted equation of _Nabarus fl._ with the River Naver seems to contradict the rotation of Scotland, since both the Ptolemaic and the actual position lie at the north coast. That is explicable by two mutually shifted regions in the north of Scotland, which led Ptolemy to assign the river to the wrong coast. The characteristic, northwesternmost point of Scotland, Cape Wrath, is usually not equated with one of the Ptolemaic places. It is, however, not missing from Ptolemy’s description; it can be equated with _Tarvedum sive Orcas pr._, from which a position near the _Orcades insulae_ can be expected. That is not fulfilled by the real situation, but it is by Ptolemy’s description of Scotland.
The identification of the Ptolemaic places was based on the ancient coordinates, the topographic situation, and the preliminary work of other authors. An evaluation by other scientific disciplines is desirable.
## Acknowledgement
I thank Andreas Kleineberg for his collaboration on the compilation of the identifications of the Scottish Ptolemaic places to be found in the literature and for his information about the derivation of the place name _Tarvedum sive Orcas promontorium_.
## References
* Baumann [1993] Baumann, E., 1993. Vermessungskunde Band 2. Dümmler, Bonn.
* Bill [1996] Bill, R., 1996. Grundlagen der Geo-Informationssysteme Band 2. Wichmann, Heidelberg.
* Bradley [1884] Bradley, H., 1884. Remarks on Ptolemy’s Geography of the British Isles. Nichols and Sons, Westminster.
* Bronstein et al. [2008] Bronstein, I. N., Semendjajew, K. A., Musiol, G. and Mühlig, H., 2008. Taschenbuch der Mathematik. Verlag Harri Deutsch, Frankfurt am Main.
* Dilke [1984] Dilke, O. A. W., 1984. Geographical Perceptions of the North in Pomponius Mela and Ptolemy. Arctic, 37: 347–351.
* Dilke [1985] Dilke, O. A. W., 1985. Greek and Roman Maps. Thames and Hudson, London.
* Hazlitt [1851] Hazlitt, W., 1851. The Classical Gazetteer. A dictionary of Ancient Sites. reprint 1995, Senate, London.
* Hennig [1944] Hennig, R., 1944. Terrae Incognitae. 2nd edition, E. J. Brill, Leiden.
* Jäger et al. [2005] Jäger, R., Müller, T., Saler, H. and Schwäble, R., 2005. Klassische und robuste Ausgleichungsverfahren. Wichmann, Heidelberg.
* Jones and Keillar [1996] Jones, B. and Keillar, I., 1996. Marinus, Ptolemy and the Turning of Scotland. Britannia, 27: 43–49.
* Kleineberg et al. [2012] Kleineberg, A., Marx, C. and Lelgemann, D., 2012. Europa in der Geographie des Ptolemaios. Die Entschlüsselung des “Atlas der Oikumene”: Zwischen Orkney, Gibraltar und den Dinariden. Wissenschaftliche Buchgesellschaft, Darmstadt.
* Koch [2007] Koch, K. R., 2007. Outlier Detection in Observations Including Leverage Points by Monte Carlo Simulations. Allgemeine Vermessungsnachrichten, 114: 330–336.
* Marx [2011a] Marx, C., 2011. On the precision of Ptolemy’s geographic coordinates in his Geographike Hyphegesis. History of Geo- and Space Sciences, 2: 29–37.
* Marx [2011b] Marx, C., 2011. Geodätische Entzerrung der ptolemäischen Koordinatenangaben. In: Nüsse et al.: Germania magna – Ein neuer Blick auf eine alte Karte. Germania, 89 (in print).
* Marx [2012a] Marx, C., 2012a. Rectification of the ancient geographic coordinates in Ptolemy’s Geographike Hyphegesis. History of Geo- and Space Sciences, 3: 99–112.
* Marx [2012b] Marx, C., 2012b. Investigations of the coordinates in Ptolemy’s Geographike Hyphegesis Book 8. Archive for History of Exact Sciences, 66: 531–555.
* Marx and Kleineberg [2012] Marx, C. and Kleineberg, A., 2012. Die Geographie des Ptolemaios. Geographike Hyphegesis Buch 3: Europa zwischen Newa, Don und Mittelmeer. epubli GmbH, Berlin.
* Müller [1883–1901] Müller, C., 1883–1901. Claudii Ptolemaei Geographia. 2 vols., Paris.
* Neitzel [2005] Neitzel, F., 2005. Die Methode der maximalen Untergruppe (MSS) und ihre Anwendung in der Kongruenzuntersuchung geodätischer Netze. zfv Zeitschrift für Geodäsie, Geoinformation und Landmanagement, 130: 82–91.
* Neitzel [2010] Neitzel, F., 2010. Generalization of total least-squares on example of unweighted and weighted 2D similarity transformation. Journal of Geodesy, 84: 751–762.
* Neugebauer [1975] Neugebauer, O., 1975. A history of ancient mathematical astronomy. Springer, Berlin.
* Niemeier [2002] Niemeier, W., 2002. Ausgleichungsrechnung. Walter de Gruyter, Berlin.
* Nobbe [1843–1845] Nobbe, K. F. A. (ed.), 1843–45. Claudii Ptolemaei Geographia. 3 vols., reprint 1966, Georg Olms Verlagsbuchhandlung, Hildesheim.
* Pedersen [2011] Pedersen, O., 2011. A Survey of the Almagest. With Annotation and New Commentary by Alexander Jones. Springer, New York.
* Richmond [1922] Richmond, I. A., 1922. Ptolemaic Scotland. Proceedings of the Society of Antiquaries of Scotland, 56: 288–301.
* Rivet and Smith [1979] Rivet, A. L. F. and Smith, C., 1979. The Place-Names of Roman Britain. Princeton University Press, Princeton.
* Strang [1997] Strang, A., 1997. Explaining Ptolemy’s Roman Britain. Britannia, 28: 1–30.
* Strang [1998] Strang, A., 1998a. Recreating a possible Flavian map. Proceedings of the Society of Antiquaries of Scotland, 128: 425–440.
* Strang [1998] Strang, A., 1998b. The Analysis of Ptolemy’s Geography. The Cartographic Journal, 35: 27–47.
* Stückelberger and Graßhoff [2006] Stückelberger, A. and Graßhoff, G. (eds.), 2006. Klaudios Ptolemaios Handbuch der Geographie. 2 vols., Schwabe Verlag, Basel.
* Thomas [1875] Thomas, F. W. L., 1875. Analysis of the Ptolemaic Geography of Scotland. With two maps. Proceedings of the Society of Antiquaries of Scotland, 11: 198–225.
* Tierney [1959] Tierney, J. J., 1959. Ptolemy’s map of Scotland. The Journal of Hellenic Studies, 79: 132–148.
* Toomer [1984] Toomer, G. J., 1984. Ptolemy’s Almagest. reprint 1998, Princeton University Press, Princeton.
* Watson [1926] Watson, W. J., 1926. The History of the Celtic Place-Names of Scotland. W. Blackwood & Sons, Edinburgh.
TU | n | ΔΛ0k | ΔΦ0k
---|---|---|---
Al8 | 6 | 1°43′ | −0°16′
Al9 | 6 | 0°41′ | −0°36′
Al10 | 18 | 0°45′ | −0°05′
Al11 | 4 | −0°50′ | −0°33′
Al12 | 3 | −1°42′ | −0°25′
Al13 | 9 | −0°48′ | 0°10′
Al14 | 10 | — | —
Al15 | 13 | 0°09′ | −0°32′
Al16 | 4 | 1°17′ | −0°30′
Table 1: Relative shifts of the transformation units (TU) in England with respect to Al14
No. | Ancient name | SC | Λ∗ | Φ∗ | Identification | SI | λ | ϕ | Δλ | Δϕ | TU
---|---|---|---|---|---|---|---|---|---|---|---
| | Λ,Φ | [°,′] | [°,′] | | | [°,′] | [°,′] | [′] | [′] |
1 | Novantarum chersonesus et pr. | Ω,Ω | 12,08 | 60,20 | Corsewall Point (?) | — | −5,10 | 55,00 | — | — | —
2 | Rerigonius sinus | Ω,Ω | 13,46 | 60,03 | Loch Ryan | R | −5,05 | 55,02 | 1 | 12 | Al6
3 | Vindogara sinus | Ω,Ω | 14,30 | 60,26 | Irvine Bay | R | −4,40 | 55,34 | 8 | −3 | Al6
4 | Clota ae. | Ω,Ω | 16,16 | 60,49 | R. Clyde | R | −4,31 | 55,56 | 5 | −8 | Al4
5 | Lemannonius sinus | Ω,Ω | 15,42 | 61,43 | Loch Fyne | T | −4,56 | 56,16 | 4 | 12 | Al4
6 | Epidium pr. | Ω,Ω | 14,16 | 61,15 | Mull of Kintyre | R | −5,45 | 55,17 | — | — | —
7 | Longus fl. | Ω,Ω | 14,18 | 61,45 | Loch Linnhe | R | −5,38 | 56,29 | −16 | 0 | Al4
8 | Itys fl. | Ω,Ω | 14,19 | 63,13 | Loch Alsh | T | −5,36 | 57,16 | 7 | −8 | Al1
9 | Volsas sinus | Ω,Ω | 14,37 | 64,12 | Loch Broom | T | −5,08 | 57,53 | −7 | −1 | Al1
10 | Nabarus fl. | Ω,Ω | 14,34 | 64,41 | R. Naver | R | −4,14 | 58,32 | −7 | −7 | Al2
11 | Tarvedum sive Orcas pr. | Ω,Ω | 15,03 | 65,22 | Cape Wrath | B | −5,00 | 58,38 | 4 | 6 | Al1
12 | Novantarum chersonesus | Ω,Ω | 12,08 | 60,20 | Rhinns of Galloway | R | −5,10 | 55,00 | — | — | —
13 | Abravannus fl. | Ω,Ω | 13,21 | 59,29 | Water of Luce | R | −4,49 | 54,52 | −34 | −2 | Al6
14 | Iena ae. | Ω,X | 14,38 | 59,16 | R. Cree | R | −4,24 | 54,54 | −2 | −14 | Al6
15 | Deva fl. | Ω,Ω | 15,09 | 58,44 | R. Dee | R | −4,04 | 54,50 | −25 | 7 | Al7
16 | Novius fl. | Ω,Ω | 16,09 | 58,50 | R. Nith | R | −3,35 | 55,00 | −10 | 1 | Al7
17 | Itunae ae. | Ω,Ω | 17,37 | 58,48 | R. Eden | R | −3,04 | 54,57 | 24 | 3 | Al7
42 | Virvedrum pr. | Ω,Ω | 15,40 | 65,13 | Dunnet Head | — | −3,22 | 58,40 | −10 | 8 | Al2
43 | Verubium pr. | Ω,Ω | 16,30 | 64,59 | Duncansby Head | B | −3,01 | 58,39 | 6 | −1 | Al2
44 | Ila fl. | Ω,Ω | 16,31 | 64,44 | R. Wick | — | −3,05 | 58,26 | 10 | 0 | Al2
45 | Ripa alta | Ω,Ω | 16,32 | 64,13 | Tarbat Ness | R | −3,47 | 57,52 | −3 | 0 | Al1
46 | Loxa fl. | X,X | 17,17 | 63,28 | R. Lossie | R | −3,17 | 57,43 | −9 | 4 | Al3
47 | Varar ae. | X,X | 16,32 | 63,13 | Beauly Firth | R | −4,14 | 57,30 | 14 | 6 | Al3
48 | Tuesis ae. | Ω,Ω | 18,01 | 63,12 | R. Spey | R | −3,06 | 57,40 | 12 | −5 | Al3
49 | Celnius fl. | Ω,Ω | 18,34 | 63,11 | R. Deveron | R | −2,31 | 57,40 | 2 | −5 | Al3
50 | Taezalorum pr. | Ω,Ω | 19,09 | 63,26 | Kinnairds Head | R | −2,00 | 57,42 | −3 | 4 | Al3
51 | Deva fl. | Ω,Ω | 19,03 | 62,39 | R. Dee (at Aberdeen) | R | −2,05 | 57,08 | −3 | 2 | Al3
52 | Tava ae. | Ω,Ω | 18,57 | 62,08 | R. South Esk | T | −2,27 | 56,42 | 0 | 4 | Al4
53 | Tina fl. | Ω,Ω | 18,51 | 61,36 | Firth of Tay | T | −2,47 | 56,27 | 16 | −4 | Al4
54 | Boderia ae. | Ω,X | 17,39 | 60,53 | Firth of Forth | R | −2,39 | 56,08 | 3 | −5 | Al5
55 | Alaunus fl. | Ω,Ω | 18,34 | 60,24 | R. Tweed | T | −1,59 | 55,46 | 3 | −5 | Al5
66 | Lucopibia | Ω,Ω | 14,38 | 59,16 | Wigtown | T | −4,27 | 54,52 | 1 | −12 | Al6
67 | Rerigonium | Ω,Ω | 14,05 | 59,52 | Stranraer | R | −5,01 | 54,54 | 10 | 13 | Al6
68 | Carbantorigum | Ω,Ω | 16,34 | 59,08 | — | — | — | — | — | — | —
69 | Uxellum | Ω,Ω | 16,30 | 58,53 | Roman fort at Ward Law | R | −3,32 | 54,59 | 2 | 5 | Al7
70 | Corda | Ω,Ω | 16,02 | 59,41 | Roman fort at Castledykes | R | −3,43 | 55,41 | −7 | −1 | Al7
71 | Trimontium | Ω,Ω | 17,12 | 59,06 | Newstead | R | −2,42 | 55,36 | −16 | −23 | Al7
72 | Colania | Ω,Ω | 17,05 | 59,53 | — | — | — | — | — | — | —
73 | Vandogara | M,Ω | 15,31 | 60,23 | Ayr | T | −4,37 | 55,28 | −23 | 1 | Al4
74 | Coria | Ω,Ω | 16,52 | 60,24 | — | — | — | — | — | — | —
75 | Alauna | Ω,Ω | 17,00 | 61,02 | Stirling | T | −3,56 | 56,07 | 2 | −9 | Al4
76 | Lindum | X,Ω | 16,41 | 61,11 | Drumquhassle | R | −4,27 | 56,04 | 19 | 0 | Al4
77 | Victoria | Ω,Ω | 17,46 | 61,24 | Kinross | H | −3,25 | 56,12 | 5 | 2 | Al4
78 | Curia | X,Ω | 17,25 | 59,51 | Borthwick Castle | M | −3,00 | 55,50 | 12 | −2 | Al7
79 | Alauna | X,X | 18,19 | 60,46 | — | — | — | — | — | — | —
80 | Bremenium | Ω,Ω | 17,59 | 60,05 | High Rochester | R | −2,16 | 55,17 | −6 | 10 | Al5
81 | Banatia | Ω,Ω | 16,45 | 61,41 | Roman fort at Dalginross | R | −3,59 | 56,22 | −6 | 5 | Al4
82 | Tamia | Ω,Ω | 17,10 | 62,11 | — | — | — | — | — | — | —
83 | Pinnata castra | Ω,Ω | 17,17 | 63,20 | Burghead | L | −3,29 | 57,42 | 3 | −1 | Al3
84 | Tuesis | Ω,Ω | 17,38 | 63,04 | Roman camp at Bellie | R | −3,06 | 57,37 | −5 | −7 | Al3
85 | Orrea | Ω,Ω | 18,20 | 61,38 | near Monifieth | R | −2,49 | 56,29 | −6 | −5 | Al4
86 | Devana | Ω,Ω | 18,31 | 62,48 | Roman camp at Kintore | R | −2,21 | 57,14 | −10 | 3 | Al3
125 | Scitis insula | Ω,Ω | 13,42 | 65,58 | Skye | R | −6,18 | 57,42 | — | — | —
126 | Dumna insula | Ω,X | 12,37 | 64,38 | Lewis | R | −6,45 | 58,07 | 0 | 4 | Al1
127 | Orcades insulae | Ω,Ω | 11,51 | 64,36 | Orkney | R | −2,59 | 59,00 | — | — | —
Table 2: Identifications of the places in Ptolemaic Scotland
Parameter | 1a) Rivet & Smith, Ω | 1b) Rivet & Smith, X | 2a) Strang, Ω | 2b) Strang, X | 3) Table 2
---|---|---|---|---|---
σΛ∗/σΦ∗ | 1.2°/0.6° | 1.0°/0.5° | 1.2°/0.6° | 1.0°/0.5° | 0.8°/0.4°
n | 47 | 47 | 50 | 50 | 45
ΛP | 17°45′±0°25′ | 17°47′±0°26′ | 17°55′±0°25′ | 17°58′±0°23′ | 17°54′±0°07′
ΦP | 58°36′±0°14′ | 58°34′±0°15′ | 58°31′±0°15′ | 58°36′±0°13′ | 58°35′±0°04′
α | −79°53′±2°56′ | −80°34′±2°38′ | −81°23′±2°57′ | −82°48′±2°24′ | −80° (const.)
Λ0 | 21°16′±0°30′ | 21°16′±0°32′ | 21°16′±0°31′ | 21°16′±0°30′ | 21°35′±0°05′
Φ0 | −14°28′±0°15′ | −14°28′±0°16′ | −14°27′±0°15′ | −14°27′±0°15′ | −14°28′±0°04′
s0 | 0.99 | 1.07 | 1.03 | 1.00 | 1.03
Table 3: Results of the adjustment of the rotation of Ptolemaic Scotland
TU | n | ΔΛ0k | ΔΦ0k | sλ∗ | sϕ∗
---|---|---|---|---|---
Al1 | 5 | 0°31′ | 1°33′ | 6′ (6 km) | 5′ (9 km)
Al2 | 4 | −0°46′ | 1°17′ | 10′ (10 km) | 6′ (11 km)
Al3 | 9 | 0°43′ | 0°54′ | 9′ (9 km) | 5′ (9 km)
Al4 | 11 | 1°03′ | 0°56′ | 12′ (12 km) | 6′ (11 km)
Al5 | 3 | −0°02′ | 0°41′ | 5′ (5 km) | 9′ (17 km)
Al6 | 6 | −0°35′ | 0°55′ | 13′ (14 km) | 11′ (20 km)
Al7 | 7 | — | — | 15′ (16 km) | 8′ (15 km)
Table 4: Relative shifts of the transformation units (TU) in Scotland with respect to Al7 and scale-corrected a posteriori standard deviations
[Figure 1: Residual vectors between the transformed Ptolemaic positions (circle) and the actual positions (arrowhead) in England; the transformation is based on formula (3) with mΛ = 0.86436, mΦ = 0.83333 and a centring at London]
[Figure 2: Places of Ptolemaic England (square) and Scotland (circle) based on the Ω-coordinates]
[Figure 3: Transformation units in England and their relative shifts with respect to Al14]
[Figure 4: Assumed pivot point PP, estimated pivot points P1, P2, P3 (triangle), their confidence ellipses, places of Ptolemaic England (square) and Scotland (circle) based on the X-coordinates (differences to Ω in Nos. 19, 20, 57, 68, 72, 78, 95)]
[Figure 5: Places of Ptolemaic England (square) and Scotland (circle) based on the Ω-coordinates; the Scottish places are rotation-corrected by a rotation around PP = (Λ = 18°, Φ = 58°30′) (triangle) by −αP = 80°]
[Figure 6: Rotation-corrected Scottish points and waters at the coast (cf. Table 2) together with the actual coast and the known and assumed modern counterparts]
[Figure 7: Transformation units in Scotland and their relative shifts with respect to Al7]
[Figure 8: Map of Ptolemaic Scotland (star: no transformation unit)]
# Complex dynamics of evaporation-driven convection in liquid layers
F. Chauvet, S. Dehaeck, P. Colinet
TIPs (Transfers, Interfaces and Processes),
Université Libre de Bruxelles, Belgium
###### Abstract
The spontaneous convective patterns induced by evaporation of a pure liquid layer are studied experimentally. A volatile liquid layer placed in a cylindrical container is left free to evaporate into air at rest under ambient conditions. The liquid/gas interface of the evaporating liquid layer is visualized using an infrared (IR) camera. The phenomenology of the observed convective patterns is qualitatively analysed, showing in particular that the latter can be quite complex especially at moderate liquid thicknesses. Attention is also paid to the influence of the container diameter on the observed patterns sequence.
##
During the evaporation of a pure liquid layer into dry air, the liquid/gas interface is cooled by the energy consumed in the phase change. The resulting temperature difference across the liquid layer can generate surface-tension-driven convection and/or buoyancy-driven convection in the liquid, depending on the layer thickness. Stable and homogeneous convective patterns have been observed in evaporating liquid layers without heating in [1], for sufficiently small layer thicknesses. Here, the focus is on the complex convective motions which appear when the layer thickness exceeds roughly 0.5 mm in our experimental set-up.
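To make the thickness dependence concrete, a rough sketch comparing the two driving mechanisms can be written down: the Marangoni number scales linearly with the layer thickness while the Rayleigh number scales with its cube. The property values below are illustrative placeholders only, not measured HFE-7100 data.

```python
# Rough sketch of why the driving mechanism changes with layer thickness:
# Ma ~ h while Ra ~ h^3. All property values are illustrative placeholders,
# NOT measured HFE-7100 data; only the scalings matter here.
sigma_T = 1.0e-4   # |d(sigma)/dT|, N/(m K)          (assumed)
beta    = 1.5e-3   # thermal expansion coeff., 1/K   (assumed)
rho     = 1.5e3    # liquid density, kg/m^3          (assumed)
mu      = 6.0e-4   # dynamic viscosity, Pa s         (assumed)
kappa   = 4.0e-8   # thermal diffusivity, m^2/s      (assumed)
nu      = mu / rho
g, dT   = 9.81, 1.0   # gravity; temperature drop across the layer, K (assumed)

for h_mm in (0.2, 0.5, 1.0, 2.0, 3.0):
    h = h_mm * 1e-3
    Ma = sigma_T * dT * h / (mu * kappa)        # surface-tension driving
    Ra = g * beta * dT * h**3 / (nu * kappa)    # buoyancy driving
    print(f"h = {h_mm:3.1f} mm   Ma = {Ma:9.3e}   Ra = {Ra:9.3e}   Ra/Ma = {Ra/Ma:8.3e}")
```

With such illustrative numbers the ratio Ra/Ma, which grows as h², becomes of order unity for millimetric layers, which is at least qualitatively consistent with the richer dynamics reported above 0.5 mm.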
In practice, a small amount of volatile liquid (HFE-7100 from 3M) is initially poured into a cylindrical container to form the liquid layer. The container is made of a plexiglas cylinder glued by silicone to an aluminium block. The height of the cylinder is 2 cm and two inner diameters have been tested, namely 59 mm and 34 mm. The liquid volume injected initially is adjusted so as to start with a 3 mm liquid layer thickness. Then the liquid is just left free to evaporate into air at rest under ambient conditions. In these conditions, the evaporation process is limited by diffusion of vapor into air and the evaporation rate stays almost constant until the layer is too thin and begins to dewet the aluminium. This has been verified by liquid weight measurements as a function of time, using a precision balance. This behavior can be simply explained by the fact that the diffusive resistance for vapor mass transfer (\(\propto\) container height) does not depend significantly on the variation of the small layer thickness. As a consequence, the layer thickness decreases almost linearly with time.
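The linear-thinning argument can be sketched numerically; the snippet below is an illustrative estimate with placeholder values for the vapor diffusivity, the saturated vapor concentration, the liquid density and the dewetting threshold, not measured HFE-7100 properties.

```python
# Sketch of the linear-thinning argument: in the diffusion-limited regime the
# evaporation flux is set by the fixed diffusive resistance of the gas column,
# so dh/dt is constant and h(t) decreases linearly. All values are placeholders.
D_vap   = 6.0e-6    # vapor diffusivity in air, m^2/s            (assumed)
c_sat   = 1.0       # saturated vapor mass concentration, kg/m^3 (assumed)
rho_liq = 1.5e3     # liquid density, kg/m^3                     (assumed)
H_gas   = 2.0e-2    # gas column height above the liquid, m (container height)

flux  = D_vap * c_sat / H_gas    # evaporated mass per unit area and time, kg/(m^2 s)
dh_dt = flux / rho_liq           # thinning rate, m/s (independent of the layer thickness h)

h0, h_dewet = 3.0e-3, 0.3e-3     # initial thickness and an assumed dewetting threshold, m
t_linear = (h0 - h_dewet) / dh_dt
print(f"thinning rate ~ {dh_dt*3.6e6:.2f} mm/h, "
      f"so a 3 mm layer reaches 0.3 mm after ~{t_linear/3600:.1f} h")
```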
As convection in the liquid is necessarily associated with temperature variations within the liquid (and in particular at the interface), an IR camera is used here to follow the time evolution of the temperature distribution, representative of the pattern organization. The IR camera is a 16-bit focal plane array camera (Thermosensorik, InSb 640 SM) with a spectral band sensitivity ranging from 1.0 to 4.8 \(\mu\)m and a 640x512 pixels\({}^{2}\) sensor cooled to 77 K by a Stirling engine. As the liquid layer is semi-transparent in the spectral band of the IR camera, specific work would be required to convert the recorded IR signal into temperature. Here, the IR camera is simply used as a visualization tool to analyze the spatio-temporal dynamics of convective motions in the liquid layer.
Evaporation experiments have been performed for the two container inner diameters, 59 mm and 34 mm. In both cases convection appears immediately after the injection of the liquid. In the 59 mm diameter container, the fluid motion is complex, but we can identify some individual large-scale structures which are quite stable. As the layer thickness decreases because of the evaporation, these large structures become more and more organized, their number increases and their size decreases. Progressively, the pattern consists of stable convective cells which split into smaller ones. During this period, the contrast between cells decreases on the recorded IR images until there is no visible convection. Finally, the layer thickness continues to decrease without convective motion until the dewetting sequence starts.
In the 34 mm diameter container, the phenomenology is different. No individual large-scale structures are observed, unlike in the 59 mm diameter container. The pattern is globally axisymmetric with a cold point at the center of the container. After a while, convective cells are suddenly created at the container center and ejected towards the container wall. Contrary to all other pattern transitions seen here, this transition can be particularly well identified. After this violent event, the pattern resembles that observed in the 59 mm diameter container, with more and more organized structures displaying similar pattern sequences.
It can be concluded that, for the layer thicknesses investigated here, the chaotic convective patterns depend on the container diameter when the convective structures are not small compared to the container diameter (strong spatial confinement). At smaller depths, and hence smaller pattern wavelengths, the dynamics is much less affected by the container walls (weak spatial confinement). In the former case, a particular symmetry-breaking transition has been identified, which will be the subject of further investigations.
## Acknowledgments
Supported by the Marie Curie MULTIFLOW Network, by ESA & BELSPO PRODEX projects, and by FRS - FNRS
## References
[1] H. Mancini, D. Maza, Pattern formation without heating in an evaporative convection experiment, Europhys. Lett., 66 (6), 812 -818, 2004
|
1807.10732 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 38999,
"num_imgs": 11,
"llama3_tokens_count": 10207
} | [
"content_image/1807.10732/x1.png",
"content_image/1807.10732/x2.png",
"content_image/1807.10732/x3.png",
"content_image/1807.10732/x4.png",
"content_image/1807.10732/x5.png",
"content_image/1807.10732/x6.png",
"content_image/1807.10732/x7.png",
"content_image/1807.10732/x8.png",
"content_image/1807.10732/x9.png",
"content_image/1807.10732/x10.png",
"content_image/1807.10732/x11.png"
] | # The Polarization Behavior of Relativistic Synchrotron Jets
A. L. Peirson¹ & Roger W. Romani¹
¹ Dept. of Physics and Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Stanford, CA 94305
[FOOTNOTE:1][ENDFOOTNOTE]
###### Abstract
We describe a geometric model for synchrotron radiation from blazar jets, involving multiple emission zones with turbulent magnetic fields and a transient core with a helical B field. Including the effects of jet divergence, particle cooling and the Relativistic PA rotation (RPAR) to the observer frame, we find polarization behavior consistent with recent data from monitoring campaigns. We predict that under some circumstances multi-\(\pi\) rotation phases should exhibit relativistically-induced steps in rate \(d{\rm PA}/dt\) and modulation in polarization \(\Pi\) that can be helpful in pinning down the jet \(\Gamma\) and \(\theta_{\rm obs}\). Also, RPAR enhances waveband differences that will be particularly interesting for comparing radio, optical and, soon, X-ray PA and \(\Pi\) variations.
## 1. Introduction
Blazars are active galactic nuclei whose powerful relativistic jets point at small angle \(\theta_{\rm obs}\) to the Earth line-of-sight (Urry & Padovani, 1995), so that the Doppler-boosted jet emission dominates the observed spectral energy distribution (SED). This SED is characterized by a low energy peak caused by synchrotron radiation from energetic electrons, and a high energy peak generally attributed to Inverse Compton scattering of photons by these same electrons (Maraschi et al., 1992). The seed photons can either be from the synchrotron emission (SSC) or from an external source such as the accretion disk or broad line region (EC). The sources are further subdivided by the frequency of the \(\nu F_{\nu}\) synchrotron peak (Abdo et al., 2010), with \({\rm log}\,\nu_{\rm sy}<14\) labeled LBL (Low peak BL Lacs, and most Flat spectrum Radio Quasars FSRQ) and \({\rm log}\,\nu_{\rm sy}>15\) called HBL (High peak BL Lacs). Here the frequency is in Hz, and IBL represent the intermediate case. We have yet to determine how the jets are energized and launched with bulk Lorentz factor \(\Gamma\), but an attractive origin is the Blandford & Znajek (1977) process, so that the jet axis may be associated with the spin axis of the central black hole and the angular momentum axis of the surrounding accretion disk. The jet \(e^{+}/e^{-}\) obtain an energy distribution extending to \(\gamma_{\rm max}\sim 10^{4}\) or higher, often attributed to shock acceleration. Radiation from these particles spiraling in the embedded magnetic field \(B\) can be used to constrain the geometry and energetics of the emission zone and, by inference, the jet accelerator.
In studying jet geometry, polarization can be particularly useful. Radio VLBI studies have long shown that the pc-scale jet can be substantially polarized. Recently, much effort has been spent on measuring the optical polarization properties of blazars, since this probes even smaller scales, closer to the acceleration zone. This polarization is often quite variable, offering new dynamical information on the jet structure (e.g. Blinov et al., 2015; Lynch et al., 2018). In the near future we also hope to measure the X-ray polarization of a significant population of blazars with _IXPE_ (Weisskopf et al., 2016) and similar new facilities.
### Blazar EVPA Variability
The polarization fraction \(\Pi\) and electric vector position angle \(\theta_{\rm EVPA}\) of blazar emission have long been known to exhibit stochastic variability. Indeed optical polarization variability is a defining property of the BL Lac class. Recent monitoring campaigns have revealed new polarization patterns. The typical behavior is a stochastic variation about \(\Pi\sim 0.05-0.15\) fluctuating with \(\Pi/\sigma_{\Pi}\) and \(\sigma_{\theta_{\rm EVPA}}\sim 1\). In addition, periods of relatively steady rotation of the EVPA, sometimes extending many \(\times\pi\), can occur lasting weeks or months (Blinov et al., 2015), after which the EVPA returns to the stochastic phase. These may be associated with flares in the total intensity (Blinov et al., 2016), but this is not always the case.
A few other trends have been noted. Blinov et al. (2016) indicate that \(\Pi\) is on average smaller in the rotating phases. However there are many examples where \(\Pi\) increases during rotation; these seem more common for the long \(\Delta{\rm PA}>\pi\) rotations (I. Liodakis, priv. comm.). Also \(\sigma_{\Pi}\) of a given source appears to be similar in the stochastic and rotating phases. Thus, while a systematic B structure should be present to drive the large angle EVPA swings, an underlying stochastic process must continue. Further, there is a tendency for the mean EVPA during the stochastic phase to correlate on the sky with the projected jet axis (Jorstad et al., 2006). Radio and optical polarization behavior can be similar (D’arcangelo et al., 2009), with \(\Pi\) often higher in the optical. There appears to be an association between GeV flares and rotation phases (Blinov et al., 2018). Also rotations appear to be recurrent in some blazars and absent in others. However none of these trends is universal. An example of a source making a transition from stochastic to rotation phase and back is shown in Figure 1.
<figure><img src="content_image/1807.10732/x1.png"><figcaption>Figure 1.— The polarization fraction and EVPA in the R-band against the timein days during an observed rotation by RoboPol Blinov et al. (2016) of the LBLblazar J1751+0939. Filled points mark their identification of a ‘rotating’epoch.</figcaption></figure>
Attempts to model such behavior have taken a variety of forms. In Hughes et al. (1989) a set of multiple shocks in the jet was used to reproduce the stochastic fluctuations, while in Marscher (2014) multizone turbulence in a conical standing shock was posited to generate the stochastically variable emission. Such stochastic variation can induce epochs of relatively constant PA sweep, but it was shown (Blinov et al., 2016) that the incidence (and persistence over many \(\times\pi\)) of the observed sweeps are inconsistent with purely stochastic models. Accordingly, models for the rotating phase generally invoke helical structures in the jet. For example Zhang et al. (2015) invoke a helical field energized by a standing shock. Such fields are suggested by Faraday rotation gradients transverse to the local jet directions in several nearby blazars e.g. PKS 0745+241, PKS 0820+225, Mrk 501, 3C 371 (Gabuzda et al., 2004) and could quite naturally be attributed to field symmetries imposed at the jet base by the Blandford-Znajek process (Blandford & Znajek, 1977). Alternatively, Lyutikov & Kravchenko (2017) assume that the jet itself takes on a helical form, e.g. due to precession, with an aligned embedded field. A third picture (Nalewajko, 2017) posits a helical kink propagating along a conical jet with an embedded toroidal B field. Each of these pictures can accommodate smooth multicycle rotations.
We explore here a heuristic model that incorporates the main features above, in an attempt to reproduce the range of observed optical polarization phenomena and to predict new correlations to be tested with multiwavelength observations. We start (§2) with a description of the important, but under-appreciated, effect of relativistic boosting on the observed polarization. We then describe (§3) a toy geometry with multiple zones transitioning between random and helical magnetic patterns and propagating downstream in a conical jet. We then couple this with a radiation model that follows the cooling and synchrotron radiation of the \(e^{+}/e^{-}\) (§4) and comment on the patterns and multiwavelength correlations of the resulting polarization signal. §5 applies this picture to the bright HBL Mrk 501, and we conclude with general predictions for future extensions and comparisons with the data.
## 2. Relativistic PA Rotation
<figure><img src="content_image/1807.10732/x2.png"><figcaption>Figure 2.— EVPA in the observers frame on the plane of the sky as a functionof jet bulk gamma Γ for a helical B-field with a 45∘ pitch angle. Thedirection of the jet is 4∘ off our line of sight in the ^x direction on theplane of the sky, corresponding to 0∘ on the plot. The B-field is sampledevery ϕB=20∘ from 0−360∘. The black lines on the y-axis denote the observedEVPAs without RPAR (i.e. Γ=1). The solid cyan and blue lines mark ϕB=0∘,120∘respectively.</figcaption></figure>
<figure><img src="content_image/1807.10732/x3.png"><figcaption>Figure 3.— Arrows denoting the observed EVPA on the plane of the sky for thejet described in Figure 2 at the 3 different Γ cuts. The black arrows show theinitial EVPA absent RPAR (i.e. for stationary Γ=1), the orange arrows show theRPAR-shifted EVPAs. The projected jet axis is along the green arrow. Arrowsare plotted as single headed for clarity, although observations will span 180∘(i.e. -90 to +90 in Figure 2).</figcaption></figure>
In most cases models assume that the observed polarization direction is that of the EVPA in the jet fluid frame \(\theta_{\rm jf}\). However, Lyutikov et al. (2003), following Blandford & Koenigl (1979), show that the relativistic transformations can induce significant rotation, with the direction and magnitude of the observed angle depending on \(\theta_{\rm jf}\), \(\Gamma_{\rm bulk}\), \(\theta_{\rm obs}\) and \(\underline{\hat{v}}\) (the direction of the jet). We refer to this process as Relativistic PA rotation (RPAR) in the following discussion. Taking \(\mathbf{\hat{e}^{\prime}=\hat{n}^{\prime}\times\hat{B}^{\prime}}\) as the EVPA in the jet frame, they show that
\[\mathbf{\hat{e}}=\frac{\mathbf{n\times q^{\prime}}}{\sqrt[]{q^{\prime 2}-( \mathbf{n\cdot q^{\prime}})^{2}}},\] (1)
\[\mathbf{q^{\prime}}=\mathbf{\hat{B}^{\prime}+n\times(v\times\hat{B}^{\prime})} -\frac{\Gamma}{1+\Gamma}(\mathbf{\hat{B}^{\prime}\cdot v})\mathbf{v}.\] (2)
Here prime quantities are measured in the jet frame, \(\mathbf{n}\) is the vector to the observer, \(\mathbf{\hat{B}}\) is the magnetic field vector and \(\mathbf{v}\) is the jet bulk velocity vector.
For radiation emitted in a jet zone containing a helical B-field, the field in the jet frame is
\[\mathbf{\hat{B}^{\prime}}=\frac{{\rm sin}(\phi_{B})\mathbf{\hat{x}^{\prime}}+{ \rm cos}(\phi_{B})\mathbf{\hat{y}^{\prime}}+{\rm tan}(\Psi_{B})\mathbf{\hat{z} ^{\prime}}}{\sqrt{1+{\rm tan}^{2}(\Psi_{B})}}\] (3)
where \(\hat{z}^{\prime}\) is parallel to the jet axis. \(\phi_{B}\) represents the phase angle along the helix and \(\Psi_{B}\) is the pitch angle. We can see the effects of RPAR on the observed EVPAs for different \(\Gamma_{\rm bulk}\) regimes in Figures 2 and 3. The jet shown here is pointed 4\({}^{\circ}\) off our line of sight in the \(\hat{x}\) direction on the plane of the sky. The field is pitched at \(\Psi_{B}=45^{\circ}\) to the jet axis. This pitch angle affects the magnitude of the RPAR effect (as in Qian & Zhang, 2004). RPAR effects are reduced for smaller pitch angles and increased for larger ones; we thus choose \(\Psi_{B}=45^{\circ}\) for illustrative purposes. As Figure 3 shows, there are three characteristic regimes for the RPAR effects. In the ‘low \(\Gamma\)’ case, RPAR induces a counter-clockwise shift of the EVPA for half a cycle and clockwise for the other half (with the phase controlled by the component of the helical B along the jet axis). This case is represented by the blue dotted line of Figure 2 and the first row of Figure 3. The net effect is that, for a smoothly increasing \(\phi_{B}\) (an EVPA rotating smoothly in the fluid frame), the lab EVPA rotates quickly for half a cycle and slows in the second half, centered perpendicular to the projected jet axis. Thus an intrinsic smooth rotation is seen as a set of ‘stair-steps’ spanning 360\({}^{\circ}\).
For somewhat faster jets, the ‘high \(\Gamma\)’ case, the RPAR bias (here toward +y) dominates so the EVPA is driven increasingly transverse to the jet. In this regime the observed rotation can switch between clockwise and counter-clockwise rotations for the same field geometry. This is the green dotted line of Figure 2 and the second row of Figure 3. For very large jet bulk motion, the ‘extreme \(\Gamma\)’ case, the observed EVPA is actually reflected across the jet axis from the initial vector. One can understand these behaviors by picturing a relativistic electron emitting synchrotron radiation in the fluid frame within a uniform B-field at arbitrary pitch angle, where the radiation is intermittently directed towards the observer. The electron will emit beamed radiation polarized mostly perpendicular to the B-field. As this emission is boosted to the lab frame, slightly off axis, the radiation initially directed to the side is boosted toward the observer, rotating the PA. For extreme \(\Gamma\), the radiation initially at large angle to the line of sight, is boosted toward the observer and begins to dominate the received signal. The EVPA flips across the projected jet axis. Of course, the bulk of the radiation is boosted into an angle \(1/\Gamma\) (red dot-dashed line). Thus to see the ‘extreme \(\Gamma\)’ case one needs to observe so far off-axis that the observer sees only a tiny portion of the observable jet power. Thus virtually all observers will be in the first two cases, with typical jet alignment resulting in the ‘Low \(\Gamma\)’ case and only jets viewed well off-axis reaching the ‘High \(\Gamma\)’ regime. Typical HBL parameters inferred from VLBI observations and SED fits have \(\theta_{\rm obs}=1^{\circ}-5^{\circ}\), generally in the ’Low \(\Gamma\)’ case. Thus ‘stair-steps’, regular slope variations in extended rotation phases, will often be present, although the amplitude is quite sensitive to field pitch angle and jet parameters.
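The transformation of Eqs. (1)-(2) is compact enough to sketch numerically. The snippet below is an illustrative implementation, not the authors' code: it assumes \(\mathbf{v}\) is given in units of \(c\), measures the EVPA from the projected jet axis, and simply samples the helical field of Eq. (3) at several \(\phi_{B}\) for a few bulk Lorentz factors, mirroring the regimes summarized in Figures 2 and 3.

```python
# Illustrative implementation (not the authors' code) of the RPAR transformation,
# Eqs. (1)-(2), applied to the helical field of Eq. (3). v is the bulk velocity in
# units of c and n is the unit vector toward the observer.
import numpy as np

def rpar_evpa(B_jet, n, v, Gamma):
    """Observed E-vector of Eq. (1), using q' of Eq. (2)."""
    Bh = B_jet / np.linalg.norm(B_jet)
    q = Bh + np.cross(n, np.cross(v, Bh)) - (Gamma / (1.0 + Gamma)) * np.dot(Bh, v) * v
    return np.cross(n, q) / np.sqrt(np.dot(q, q) - np.dot(n, q) ** 2)

def helical_B(phi_B, psi_B):
    """Unit helical field of Eq. (3); z' is taken along the jet axis."""
    B = np.array([np.sin(phi_B), np.cos(phi_B), np.tan(psi_B)])
    return B / np.sqrt(1.0 + np.tan(psi_B) ** 2)

theta_obs = np.radians(4.0)                                     # viewing angle of Figure 2
n     = np.array([np.sin(theta_obs), 0.0, np.cos(theta_obs)])   # line of sight
x_sky = np.array([np.cos(theta_obs), 0.0, -np.sin(theta_obs)])  # sky axis along the projected jet
y_sky = np.cross(n, x_sky)
psi_B = np.radians(45.0)                                        # pitch angle of Figure 2

for Gamma in (1.0, 5.0, 15.0, 50.0):   # unboosted, low, high and extreme bulk Lorentz factors
    v = np.sqrt(1.0 - 1.0 / Gamma**2) * np.array([0.0, 0.0, 1.0])   # jet along z
    evpa = []
    for phi in np.radians(np.arange(0.0, 360.0, 45.0)):
        e = rpar_evpa(helical_B(phi, psi_B), n, v, Gamma)
        evpa.append(np.degrees(np.arctan2(np.dot(e, y_sky), np.dot(e, x_sky))) % 180.0)
    print(f"Gamma = {Gamma:5.1f}: EVPA(phi_B) = {np.round(evpa, 1)}")
```

Scanning \(\phi_{B}\) at low \(\Gamma\) gives the uneven, stair-step advance of the observed EVPA described above, while at larger \(\Gamma\) the angles are driven increasingly transverse to the projected jet axis.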
In Figure 2 one can see that for small \(\Gamma\) the observed EVPA is generally driven toward the jet axis by RPAR. Although for the particular pitch angle \(\Psi_{\rm B}\) shown there is a range of ‘Large \(\Gamma\)’ vectors that are driven perpendicular to B, when one averages over all magnetic inclinations the former effect dominates. At large \(\Gamma\) the forward boosting drives toward the jet axis as well. Thus we see in Figure 4 that a random set of initial EVPA vectors is driven to the jet axis, except for a range of \(\Gamma\) around \(1/\theta_{\rm obs}\), which would be seldom observed. As noted in the introduction, a statistical bias for such alignment is indeed observed (Jorstad et al., 2006).
<figure><img src="content_image/1807.10732/x4.png"><figcaption>Figure 4.— EVPA bias along jet axis against Γbulk for a set of 127 randomB-field zones in the jet frame. The bias is calculated as the difference insquared EVPA components parallel (^x) and perpendicular (^y) to the jet axis.The bias is averaged over 500 random realizations. θobs=4.0∘.</figcaption></figure>
## 3. Geometrical Jet Model
The modest, and fluctuating, blazar polarization in the stochastic phases suggests that many zones contribute to the radiation seen with uncorrelated, and varying magnetic field orientation. We thus follow many authors (e.g. Marscher (2014)’s TEMZ model) in assuming a multizone emission region. We attribute this to shock-induced turbulence. While a typical spectral index indicates a saturated polarization level \(\Pi_{\rm max}\sim 0.7\), we more commonly observe \(\Pi\approx\)10%, indicating \(N\sim(\Pi_{\rm max}/\Pi)^{2}\sim 50\) uncorrelated zones contribute to the observed emission. At any one time one receives radiation from an angle \(\sim 1/\Gamma\) about the line of sight. Thus if the jet opening angle \(\theta_{\rm op}\) is \(<1/\Gamma\), then this is the full number of zones seen at a given radius; wider jets will have \(\sim N\) zones in the observed subset (see Figure 5). With large \(\Gamma\) and small viewing angle we in general expect radiation from a single excitation radius (spanning \(N\) zones across the jet) to dominate the received radiation at any one time. The detected spectrum will represent a time averaged output of this excited radiating zone as it travels downstream in near-synchrony with its early emission.
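The \(N\sim(\Pi_{\rm max}/\Pi)^{2}\) estimate can be checked with a few lines of Stokes bookkeeping; the sketch below (not the authors' code) assumes equal-intensity zones with independent, uniformly distributed EVPAs.

```python
# Quick check (illustrative, not the authors' code) of the sqrt(N) depolarization
# estimate: sum N equal-intensity zones with independent, uniformly random EVPAs
# and compare the net polarization fraction with Pi_max/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
pi_max = 0.7                       # saturated single-zone polarization fraction

for N in (10, 50, 200):
    pis = []
    for _ in range(2000):
        chi = rng.uniform(0.0, np.pi, N)          # random zone EVPAs
        Q = np.sum(pi_max * np.cos(2.0 * chi))    # Stokes Q, taking I = 1 per zone
        U = np.sum(pi_max * np.sin(2.0 * chi))    # Stokes U
        pis.append(np.hypot(Q, U) / N)
    print(f"N = {N:4d}:  <Pi> = {np.mean(pis):.3f}   Pi_max/sqrt(N) = {pi_max/np.sqrt(N):.3f}")
```

For \(N=50\) this gives a net \(\Pi\) of roughly 10%, consistent with the estimate in the text.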
However, as noted, the prevalence of persistent rotating phases is inconsistent with a simple superposition of random polarization vectors Blinov et al. (2016) and the presence of rotations \(>\pi\) demands a deterministic underlying structure. Again with high \(\Gamma\) and small \(\theta_{\rm obs}\) we observe only one jet radial zone at a given time. Thus if there is an underlying helical field passing through the excitation zone (e.g. a standing shock) then only one portion of the helix will be visible to the observer at any time and the polarization will mark the orientation of its field. To implement this, we assume that a fraction of the jet zones are imprinted with this coherent, helically varying, field (shaded zones in Figure 5). Given the modest \(\Pi\) during the rotating phase and the similarity of \(\sigma_{\Pi}\) to that of the stochastic phase, the helical coherent field occupies only a small fraction of the observed jet zones. To illustrate strong rotational signals we use \(\sim 1/6\) here, although a smaller fraction may be coherent in actual jets.
<figure><img src="content_image/1807.10732/x5.png"><figcaption>Figure 5.— Map of zones in a jet section Marscher (2014). Each zone has aB-field direction, with random initially isotropic orientation during thestochastic phase. During a rotating phase the central zones (shaded) turnhelical. The computations typically use 127 zones (6 concentric rings); fewerare shown here for clarity.</figcaption></figure>
<figure><img src="content_image/1807.10732/x6.png"><figcaption>Figure 6.— Schematic showing the shape of the ‘conical’ jet model with some ofthe important parameters labelled, Potter & Cotter (2013). A ‘section’ of thejet is a dx slice.</figcaption></figure>
We expect the jet to be imperfectly collimated, so that its cross section will increase as it propagates. This has two principal effects. First, each zone of the jet has a different \(\theta_{\rm obs}\). If one views within \(\theta_{\rm op}\) then values from 0 to \(\sim{\rm min({1/\Gamma},\theta_{\rm op})}\) contribute. This averages out some of the RPAR effects. Second, the spreading jet affects the coherent field geometry. Imagine a helical field injected into such a conical jet. Assuming uniform constant \(\Gamma\), the distance and time between rotations remain fixed. However, the radius of the spiral grows. Hence the pitch angle flattens and the field becomes more nearly transverse. This leads to evolution in the PA behavior as the jet propagates.
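The flattening follows from the geometry stated above: with the axial length of one field turn fixed and the zone radius growing with the conical expansion, the pitch angle of Eq. (3) decreases downstream. The sketch below uses arbitrary illustrative numbers and assumes the half-opening angle sets the radial growth.

```python
# Geometric sketch of the pitch-angle flattening: with constant Gamma the axial
# length of one field turn is fixed, while the helix radius follows the conical
# expansion, so tan(Psi_B) = L_turn / (2 pi r) decreases downstream. Numbers are
# arbitrary illustrative choices.
import numpy as np

theta_op = np.radians(9.5)          # opening angle used in the simulated realizations
r0       = 1.0                      # zone radius where the helix is injected (arbitrary units)
L_turn   = 2.0 * np.pi * r0         # chosen so the initial pitch angle is 45 degrees

for x in (0.0, 2.0, 5.0, 10.0, 30.0):            # distance along the jet, same units as r0
    r   = r0 + x * np.tan(theta_op / 2.0)        # conical growth of the zone radius
    psi = np.degrees(np.arctan(L_turn / (2.0 * np.pi * r)))
    print(f"x = {x:5.1f}   r = {r:5.2f}   pitch angle Psi_B = {psi:5.1f} deg")
```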
<figure><img src="content_image/1807.10732/x7.png"><figcaption>Figure 7.— EVPA (blue) and polarization fraction (red) for stochastic androtating phases in a jet with projection on the plane of the sky along the xaxis. We are viewing at θobs=4.0∘ with Γ=5 and θop=9.5∘. RPAR has beenapplied. This low Γ means the vast majority of zones in the jet contribute.The black line shows the same helical rotation without RPAR or turbulence. Therotation phase spans 770∘ with starting phase ϕ=3π/2.</figcaption></figure>
We now show these geometrical effects in two simulated realizations. Both have low \(\Gamma=5\) and opening angle \(\theta_{op}=9.5^{\circ}<1/\Gamma\), and we show a 770\({}^{\circ}\) rotation bracketed by stochastic phases. RPAR effects and beaming geometry can introduce an especially interesting behavior in extended rotating phases. Especially in the low \(\Gamma\) regime, even a smoothly varying helical field produces a non-uniform sweep of the observed PA (Figure 3, top line). For extended rotation events we can expect to see this rate modulation as a series of rotation rate steps. Figure 7 shows a simulation of this behavior. This is most prominent when the effective \(\theta\) to our line of sight of the helical zones within the \(1/\Gamma\) solid angle corresponds with the appropriate \(\Gamma\) given by Figures 2 and 3. Possible examples of such stepped sweeps are seen in PKS 1510-089 (Marscher et al., 2010) and S5 0716+71 (Larionov et al., 2013).
Of course, in this picture the coherent fields in a portion of the \(N\) zones during the rotating phase induce higher polarization. This is not universally seen. In as far as rotations are associated with flare events, turbulence associated with the flare power injection could increase the effective \(N\), lowering \(\Pi\). Also increased activity during flares may drive higher energy emission closer to the standing shock, where the initially isotropic zones dominate (see below), also lowering \(\Pi\). Regardless, as long as the fraction of zones participating in the rotation is not large, the polarization fluctuations \(\sigma_{\Pi}\) will remain substantial and closely resemble those in the non-rotating phase (Blinov et al., 2016).
<figure><img src="content_image/1807.10732/x8.png"><figcaption>Figure 8.— As for Figure 7 for the rotational phases of a conical jet viewedat θobs=1.5∘. This represents a 770∘ rotation with starting phase ϕ=160∘.Stepping is less prominent.</figcaption></figure>
Next we show the behavior for a conical jet for the same \(\Gamma\) and \(\theta_{op}\) but a lower \(\theta_{\rm obs}\) in Figure 8. \(\Pi\) is larger and fluctuations are smaller since the helical zones now dominate the central position in our line of sight. The PA sweep is steady in the rotation phase now with only small stepping behavior present. A reduction in the effective \(\theta\) of the helical zones to our line of sight means that for the same \(\Gamma\) we move to the left on Figure 2, bringing us to lower RPAR regime. The stair-stepping effect is much weaker here. Indeed the lack of measurable steps during the rotation phase of J1751+0939 (Figure 1) places some limits on \(\theta_{\rm obs}\) and \(\Gamma\). This source has a VLBI estimated \(\beta_{\perp}=7.9\pm 0.8\)(Lister et al., 2013) and we find that \(\theta_{\rm obs}<2^{\circ}\) and \(\Gamma>11\) when pitch angle \(\Psi_{\rm B}\) is near \(0^{\circ}\). Steeper pitch angles place more stringent constraints, e.g. \(\Psi_{\rm B}=25^{\circ}\) implies \(\theta_{\rm obs}\leq 0.8^{\circ}\) and \(\Gamma\geq 16\).
## 4. Radiation Model
Our radiation model is inspired by Potter & Cotter (2013) and uses the basic jet setup shown in Figure 6. We take the jet to be composed of an electron-positron plasma (hereafter electron). Under the assumption of equipartition between electron energy and magnetic field energy, the radius and electron population are initialized at the base of the jet from the input parameters: the length of the jet \(L_{0}\), the total jet power \(W_{j}\), the bulk Lorentz factor \(\Gamma\), the magnetic field strength at the base \(B_{0}\), the maximum initial electron energy \(E_{max}\), the electron power law index \(\alpha\), the jet observation angle \(\theta_{\rm obs}\) and the jet opening angle \(\theta_{op}\). Furthermore, it is assumed that the electrons and magnetic field are homogeneously distributed throughout each jet section and that the magnetic field energy is conserved.
The initial electron population is assumed to have a power law energy distribution with an exponential cutoff (Bregman, 1985) and is represented by a discrete set of energy bins.
\[N(E_{e})=AE_{e}^{-\alpha}e^{\frac{-E_{e}}{E_{max}}}.\] (4)
Each jet zone is divided into sections of length \(dx\) (see Fig. 6). In a particular section, the synchrotron emission from each electron energy bin is calculated using the B field in that section. The synchrotron losses then determine the evolution in the electron energy bin populations for the next section. Furthermore, since we are viewing down the jet, the self-absorption opacities for each jet section are calculated and applied to the spectrum - these depend on \(\theta_{\rm obs}\) and synchrotron photon energy. This is a quiescent jet model, without flux variability in time; only the B-field directions in the zones change for the purpose of polarization modeling.
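The bookkeeping in this paragraph can be sketched in a few lines. The snippet below is not the paper's code: it uses the Lorentz factor as the energy variable, treats the bins as Lagrangian (the bin energies cool while the occupancies of Eq. (4) stay attached to them), and assumes an illustrative 40 G field and fluid-frame residence time per section, with the standard synchrotron loss rate \(d\gamma/dt=-k\gamma^{2}\).

```python
# Sketch (illustrative parameters, not the paper's fit values) of the binned
# electron population of Eq. (4), expressed in Lorentz factor, and its
# section-by-section synchrotron cooling with d(gamma)/dt = -k gamma^2.
import numpy as np

SIGMA_T = 6.652e-29          # Thomson cross section, m^2
M_E, C  = 9.109e-31, 2.998e8 # electron mass, speed of light (SI)
MU_0    = 4.0e-7 * np.pi

alpha, gamma_max = 1.95, 5.0e4
gamma = np.logspace(1.0, np.log10(gamma_max), 40)        # Lagrangian energy bins
N_bin = gamma**(-alpha) * np.exp(-gamma / gamma_max)     # Eq. (4), arbitrary normalization;
                                                         # weights the emissivity of each bin
B   = 4.0e-3                                             # 40 G section field (illustrative)
U_B = B**2 / (2.0 * MU_0)                                # magnetic energy density
k   = (4.0 / 3.0) * SIGMA_T * C * U_B / (M_E * C**2)     # cooling coefficient, 1/s
dt_section = 1.0                                         # fluid-frame time per dx slice, s (assumed)

for section in range(1, 4):
    gamma = gamma / (1.0 + k * gamma * dt_section)       # exact solution of d(gamma)/dt = -k gamma^2
    mean_gamma = np.sum(N_bin * gamma) / np.sum(N_bin)
    print(f"after section {section}: cutoff bin at gamma = {gamma.max():.2e}, "
          f"N-weighted mean gamma = {mean_gamma:.2e}")
```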
To find the polarization in the conical jet, we calculate the electrons’ synchrotron power per unit frequency perpendicular and parallel to the B-field projection on the plane of the sky for each zone in each section. These are (Rybicki & Lightman, 1979):
\[P_{\perp}(\omega)=\frac{\sqrt{3}q^{3}B{\rm sin}\alpha}{4\pi mc^{2}}\big{[}F(x) +G(x)\big{]},\] (5)
\[P_{\parallel}(\omega)=\frac{\sqrt{3}q^{3}B{\rm sin}\alpha}{4\pi mc^{2}}\big{[} F(x)-G(x)\big{]}.\] (6)
Here \(F(x)\) and \(G(x)\) are the standard synchrotron functions, \(F(x)=x\int_{x}^{\infty}K_{5/3}(\xi)\,d\xi\) and \(G(x)=xK_{2/3}(x)\), defined in terms of modified Bessel functions (e.g. Longair, 2011). This radiation is then subject to RPAR and the intensity is Doppler boosted by \(\delta^{4}\). We next sum, using Stokes’ parameters, the contribution of all zones and sections to obtain the total powers in a coordinate system aligned with the projected jet axis on the plane of the sky. These then provide the polarization fraction:
\[\Pi(\omega)=\frac{P_{\perp}(\omega)-P_{\parallel}(\omega)}{P_{\perp}(\omega)+P _{\parallel}(\omega)}.\] (7)
The projected net EVPA (relative to the jet axis) can also be referenced to an absolute angle.
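The per-zone powers of Eqs. (5)-(6) and the Stokes summation described above can be sketched as follows (illustrative code, not the authors'; requires scipy). The per-zone EVPA after RPAR and the dimensionless frequency of each zone are treated here as given inputs, and the common prefactor of Eqs. (5)-(6) cancels in the ratio of Eq. (7).

```python
# Sketch of the Stokes summation over zones and sections (illustrative, not the
# authors' code). The per-zone EVPA chi (after RPAR) and dimensionless frequency
# x are treated as given inputs; common prefactors cancel in the ratio of Eq. (7).
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def F(x):
    """Synchrotron function F(x) = x * integral_x^inf K_{5/3}(t) dt."""
    return x * quad(lambda t: kv(5.0 / 3.0, t), x, np.inf)[0]

def G(x):
    """Synchrotron function G(x) = x * K_{2/3}(x)."""
    return x * kv(2.0 / 3.0, x)

def net_polarization(x_zones, chi_zones, weights):
    """Net polarization fraction and EVPA (from the projected jet axis) of all zones."""
    I = Q = U = 0.0
    for x, chi, w in zip(x_zones, chi_zones, weights):
        p_perp, p_par = F(x) + G(x), F(x) - G(x)      # Eqs. (5)-(6), up to a common prefactor
        I += w * (p_perp + p_par)
        Q += w * (p_perp - p_par) * np.cos(2.0 * chi)
        U += w * (p_perp - p_par) * np.sin(2.0 * chi)
    return np.hypot(Q, U) / I, 0.5 * np.degrees(np.arctan2(U, Q))

# three illustrative zones: two nearly aligned with the jet axis, one orthogonal
pi_net, evpa_net = net_polarization(
    x_zones=[0.3, 0.3, 0.3],
    chi_zones=np.radians([10.0, 20.0, 100.0]),
    weights=[1.0, 1.0, 1.0],           # e.g. delta^4-boosted intensities in the full model
)
print(f"net polarization fraction = {pi_net:.2f}, net EVPA = {evpa_net:.1f} deg")
```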
Note that in this model the electron population evolves (cools) along the jet. For a conical jet we assume that the field also evolves, becoming increasingly transverse as the jet expands. This means that we expect measurable differences in \(\Pi\) and PA as a function of observed frequency. Since the high energy electrons cool most quickly, for appropriate parameters their radiation may come only from the base of the jet. Lower energies come from a large range of jet radii and thus, for a conical jet, a more transverse field structure. Thus we expect an energy-dependent shift in the observed polarization properties. Note also that, for a given number of emission zones \(N\), the conical jet model will have a higher polarization than the simple cylindrical jet case. This is both because the jet divergence means that the received radiation is dominated by zones close to the line of sight and because down-stream fields are increasingly transverse and hence, even for the stochastic case, increasingly coherent. In particular, at low (radio) energies one expects EVPA increasingly aligned with the projected jet axis (Figure 9). This is indeed observed.
The change in the effective B field pitch angle as one moves along the jet also introduces energy-dependent shifts in the EVPA at a given phase. From the underlying geometry, with small \(\theta_{\rm obs}\) and conical jets, the effects are relatively subtle, since with the jet viewed nearly end-on, even dramatic pitch angle changes make modest change to the projection on the sky. However, RPAR effects make the observed PA sensitive to the full polarization vector and greatly enhance the sensitivity to the magnetic field inclination, even for nearly aligned jets.
Interestingly, this can introduce a rotation-phase dependent modulation of amplitude \({\bar{A}}\) in the degree of polarization \(\Pi\), which is especially strong for the low energy synchrotron emission from larger distances in the spread jet (Figure 10). It will be interesting to see if this pattern can be recovered from monitoring observations at high (core dominated) radio frequencies.
<figure><img src="content_image/1807.10732/x9.png"><figcaption>Figure 9.— Energy variation for EVPA and Π during a non-rotating (stochastic)phase. Colors denote low (Radio, red), mid-range (Optical, green) and nearcut-off (e.g. X-ray, blue) synchrotron bands. This is for a jet withθobs=1.5∘, Γ=5 and θop=9.5∘. Both panels use the random seed for the generatedB-fields. The top panel shows a RPAR affected jet: substantial energydependence is seen, with low energy (radio) PA better aligned with the parentjet. Low energy fluctuations are also somewhat smaller. In the lower panelRPAR effects are ignored and the behavior is essentially achromatic with loweroverall polarization fraction.</figcaption></figure>
<figure><img src="content_image/1807.10732/x10.png"><figcaption>Figure 10.— Energy dependence during a rotating phase (the 770∘ rotation shownin Figure 8), as for Figure 9. Strong sinusoidal variation of the radiopolarization fraction occurs as the rotating helical zone EVPA aligns paralleland perpendicular to the jet axis. Since the random zones’ EVPA areincreasingly aligned with this axis as one progresses along the jet (i.e.observes at lower frequency; see top panel, Figure 9) the total Π has asinusoidal modulation. The strength of the modulation depends on the ratio ofhelical zones to random zones.</figcaption></figure>
Fig | ¯ΠR | ¯σR | ¯AR | ¯ΠX | ¯σX | ¯AX
---|---|---|---|---|---|---
9 | 0.15 | 0.051 | −−− | 0.09 | 0.045 | −−−
10 | 0.32 | 0.031 | 0.11 | 0.36 | 0.029 | 0.016
Table 1: Radio and X-ray polarization fractions for Figures 9 (top panel) and
10. For the Figure 9 row ¯σ describes the fluctuations about the mean ¯Π,
while for Figure 10 it represents the uncertainty in the amplitude ¯A of the
best-fit sinusoid.
By taking the electron population to be homogeneously distributed across the jet zones, our model assumes the main source of energy dependence in polarization to originate from the B-field change along the jet linked with RPAR. However, having higher energy electrons relegated to fewer zones as in Marscher & Jorstad (2010) or Angelakis et al. (2016) has been invoked to explain the blazar sequence polarization trends described in the studies of Itoh et al. (2016) and Angelakis et al. (2016) which suggest LBLs have higher polarization fraction (up to 40%) and variability than HBLs (up to 10%) in the optical. Indeed if the X-ray polarization fraction is observed to be much greater than the optical in HBL sources, one could plausibly extend this model by having X-ray synchrotron emission from fewer zones as above.
In this paper we treat only polarization of the synchrotron emission, so this model describes up to the peak \(\nu_{\rm sy}\). This is in the X-ray band for HBL, but typically in the IR/optical band for other blazars. Treatment of Inverse-Compton regime polarization is covered in Zhang et al. (2016). In general IC components should have a lower polarization fraction, including when SSC dominates. We expect this will decrease the observed \(\Pi\) in our model for a given number of radiation zones \(N\). When external photon fields dominate, the observed \(\Pi\) will be even smaller.
## 5. Application to Mrk 501
One motivation for this study is the new prospect of measuring blazar X-ray polarization with _IXPE_(Weisskopf et al., 2016) or other upcoming missions. To date only a handful of X-ray polarization measurements have been made (OSO-8, PoGo+, X-Calibur) mostly of the Crab Nebula with no blazar measurements as of yet (Kislat et al., 2018). However new facilities should provide a number of good polarization measurements, making an evaluation of energy dependence timely. For many LBLs, thermal emission is important in the optical band and synchrotron emission does not dominate. Also, the radio emission is often dominated by larger scale jet flux far from the acceleration zone. So intraband comparisons of HBLs are especially interesting since the optical and even the X-ray can come from the synchrotron peak, allowing multiband comparisons to probe the RPAR and jet geometry effects described above.
We thus illustrate the various polarization phenomena with simulations of an HBL source, Mrk 501. This blazar displays substantial optical PA variability, including EVPA rotation, and can be well measured by IXPE in a few days’ exposure. Mrk 501’s Doppler factor \(\delta\) has been estimated as \(\sim 6-22\), leaving much leeway in choosing the \(\Gamma\), \(\theta_{\rm obs}\) and \(\theta_{\rm op}\). As shown in §2, different \(\Gamma\) lead to different RPAR behaviors, so we have modeled two values consistent with the allowed \(\delta\) range, adjusting jet parameters to fit the overall SED (bottom panel of Figure 11). Table 2 shows the selected fit parameters.
<figure><img src="content_image/1807.10732/x11.png"><figcaption>Figure 11.— A fit to Mrk 501’s SED (bottom panel, approximatelycontemporaneous fluxes drawn from the ASDC compilation for 2010;<https://tools.asdc.asi.it/SED/>) using the radiative model. The top twopanels show the energy dependence of Π and EVPA averaged over 200 iterationsfor the stochastic phase, with solid line for a low δ solution and a dashedline for high δ. The observed EVPA is expected to fluctuate about these meanvalues during stochastic phases. The jet projection on the plane of the sky isθsky=−40∘, as observed for Mrk 501.</figcaption></figure>
δ | B0[G] | γmax | α | θop[∘] | θobs[∘] | Γ
---|---|---|---|---|---|---
6.6 | 40 | 4.9×10^4 | 1.95 | 13.0 | 4.0 | 7.5
17.4 | 30 | 6.3×10^4 | 1.95 | 3.9 | 3.3 | 17.5
Table 2: Parameters used for the Mrk 501 fits of Figure 11. The top and bottom
rows are for the solid and dashed fits respectively. Both fits use Wj=2×10^37 W,
Lj=5×10^20 m and γmin=10.
Both sets of parameters fit the SED reasonably well while producing different polarization behavior. The solid line set provides a higher average polarization fraction across all bands as slightly fewer B-field zones lie within its \(1/\Gamma\) range (Figure 5). Also, for this model, rotating phases (not shown) will produce strongly ‘stepped modulation’, due to the proximity of the helical zones to the line of sight. In contrast, the dashed line set does not produce EVPA rotations since the helical zones are located on the periphery of its more restricted \(1/\Gamma\) range. Note that in this picture, we can predict that low \(\Gamma\) jets are, in general, more likely to produce rotation events. Finally, the difference in the low energy polarization behavior provides additional observables that are sensitive to \(\theta_{\rm obs}\) and \(\theta_{\rm op}\).
## 6. Conclusions
We have explored a simple geometrical model of a conical blazar jet with multiple emission zones across the interior. By introducing a coherently rotating helical field in a subset of these emission zones and by noting that the jet magnetic fields can become increasingly transverse as the jet expands, we have been able to mimic a variety of observed blazar PA behavior. Our study treats the often neglected effect of the jet boost on the observed EVPA (RPAR) which provides interesting effects on the observed PA behavior in some regimes. Finally, by computing with a simple emission model in this jet geometry we have seen that the energy dependence of some polarization observables can also display useful dependence on the model parameters.
Although many of our ingredients have been considered in past studies, by combining these into a single model, we have been able to reproduce a large fraction of the observed EVPA phenomena. Moreover, the model makes interesting predictions for the (modest) energy-dependence of EVPA across the synchrotron peak of the blazar emission. With the possibility of X-ray polarization measurements in the near future, the model allows for some useful comparison with observed data sets. Novel dependence of the EVPA observables on \(\Gamma\) and \(\theta_{\rm obs}\) offer new ways of constraining these important parameters.
Of course extensions are needed: for most blazars IC is relevant in X-ray emission, so we should add simulation of this more weakly polarized emission to the model. But the interesting patterns in \(\Pi\) and PA introduced by the jet expansion and the relativistic boost provide a range of observables that can be sought in extensive polarization monitoring programs. These patterns can be useful in constraining jet geometry and, eventually, in guiding detailed RMHD modeling that will follow the shocks and acceleration giving rise to the energetic electron populations responsible for the observed (polarized) blazar emission.
We thank I. Liodakis for discussions on the state of optical polarization measurements. This work was supported in part by grant NNM17AA26C.
## References
* Abdo et al. (2010) Abdo, A. A., et al. 2010, ApJ, 716, 30
* Angelakis et al. (2016) Angelakis, E., et al. 2016, MNRAS, 463, 3365, arXiv: 1609.00640
* Blandford & Koenigl (1979) Blandford, R. D., & Koenigl, A. 1979, ApJ, 232, 34
* Blandford & Znajek (1977) Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433
* Blinov et al. (2018) Blinov, D., et al. 2018, MNRAS, 474, 1296
* Blinov et al. (2015) Blinov, D., et al. 2015, MNRAS, 453, 1669
* Blinov et al. (2016) Blinov, D., et al. 2016, MNRAS, 457, 2252
* Bregman (1985) Bregman, J. N. 1985, ApJ, 288, 32
* D’arcangelo et al. (2009) D’arcangelo, F. D., et al. 2009, ApJ, 697, 985
* Gabuzda et al. (2004) Gabuzda, D. C., Murray, ., & Cronin, P. 2004, MNRAS, 351, L89
* Hughes et al. (1989) Hughes, P. A., Aller, H. D., & Aller, M. F. 1989, ApJ, 341, 68
* Itoh et al. (2016) Itoh, R., et al. 2016, ApJ, 833, 77, arXiv: 1610.04313
* Jorstad et al. (2006) Jorstad, S., et al. 2006, ChJP, 6, 247
* Kislat et al. (2018) Kislat, F., Abarr, Q., Beheshtipour, B., Geronimo, G. D., Dowkontt, P., Tang, J., & Krawczynski, H. 2018, JATIS, 4, 011004
* Larionov et al. (2013) Larionov, V. M., et al. 2013, ApJ, 768, 40
* Lister et al. (2013) Lister, M. L., et al. 2013, AJ, 146, 120
* Longair (2011) Longair, M. S. 2011, High Energy Astrophysics (Cambridge University Press)
* Lynch et al. (2018) Lynch, R. S., et al. 2018, ApJ, 859, 93
* Lyutikov & Kravchenko (2017) Lyutikov, M., & Kravchenko, E. 2017, MNRAS, 467, 3876, arXiv: 1702.02354
* Lyutikov et al. (2003) Lyutikov, M., Pariev, V. I., & Blandford, R. 2003, ApJ, 597, 998, arXiv: astro-ph/0305410
* Maraschi et al. (1992) Maraschi, L., Ghisellini, G., & Celotti, A. 1992, ApJ, 397, L5
* Marscher (2014) Marscher, A. P. 2014, ApJ, 780, 87
* Marscher & Jorstad (2010) Marscher, A. P., & Jorstad, S. G. 2010, arXiv:1005.5551 [astro-ph]
* Marscher et al. (2010) Marscher, A. P., et al. 2010, ApJ, 710, L126
* Nalewajko (2017) Nalewajko, K. 2017, Galaxies, 5, 64, arXiv: 1711.00899
* Potter & Cotter (2013) Potter, W. J., & Cotter, G. 2013, Ph.D. thesis, Oxford University, UK
* Qian & Zhang (2004) Qian, S.-J., & Zhang, X.-Z. 2004, Chin. J. Astron. Astrophys., 4, 37
* Rybicki & Lightman (1979) Rybicki, G. B., & Lightman, A. P. 1979, Radiative Processes in Astrophysics (John Wiley & Sons)
* Urry & Padovani (1995) Urry, C. M., & Padovani, P. 1995, Publ Astron Soc Pac, 107, 803, arXiv: astro-ph/9506063
* Weisskopf et al. (2016) Weisskopf, M. C., et al. 2016, Results Phys, 6, 1179
* Zhang et al. (2015) Zhang, H., Chen, X., Böttcher, M., Guo, F., & Li, H. 2015, ApJ, 804, 58
* Zhang et al. (2016) Zhang, H., Deng, W., Li, H., & Böttcher, M. 2016, ApJ, 817, 63
|
0808.4147 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 9427,
"num_imgs": 0,
"llama3_tokens_count": 2081
} | [] | Introduction
Nature distinguishes between two fundamental types of particles: fermions and bosons, depending on the spin of the particle. Particles with half-integer spin are called fermions and obey Fermi-Dirac statistics. As a result of Fermi-Dirac statistics, in a system of indistinguishable particles, at most one particle can occupy a given quantum state. Particles with integer spin are called bosons and obey Bose-Einstein statistics. Any single-particle eigenstate of a physical system can be occupied by an arbitrary number of bosons.
While all of the basic building blocks of an atom are fermions, an atom as a whole has bosonic or fermionic character depending on its total angular momentum. For a gas of atoms confined in an external potential, the quantum statistical properties of the atoms become important at ultralow temperatures where the thermal deBroglie wavelength of the constituents is on the order of the interparticle separation. For bosonic atoms, this leads to the onset of Bose-Einstein condensation as observed for the first time in 1995 in a gas of \({}^{87}\)Rb at JILA Anderson1995a, in \({}^{23}\)Na at MIT Davis1995a and for \({}^{7}\)Li at Rice University Bradley1995a in the special case of attractive interactions. For fermionic atoms, the onset of degeneracy is less spectacular due to the absence of a phase transition. In an ultracold spin-polarized fermionic gas, the appearance of a macroscopic Fermi sea has first been demonstrated at JILA in 1999 DeMarco1999b.
While the pioneering work on quantum degenerate gases revealed important quantum phenomena such as interference, superfluidity and nonlinear atom optics, recent years have seen spectacular progress in the realization of novel strongly correlated systems with ultracold quantum matter. Strong correlations are observed either when the interactions between the constituents become very strong (e. g. at Feshbach resonances) or when strong confinement imposes stringent boundary conditions (e. g. in the periodic potential of an optical lattice).
Atomic systems offer a high degree of control of both the external confinement and the interactions between the constituents. The latter has become possible with the advent of Feshbach resonances Courteille1998a, Inouye1998a which allow control of \(s\)-wave and even higher order scattering between atoms by means of external fields. Feshbach resonances have been the key to a series of ground breaking experiments. For two-component fermionic gases, they have allowed the exploration of the BCS-BEC crossover and demonstrated that fermions and bosons are not as far from one another as it may seem: A Bose-Einstein condensate of diatomic molecules made from two fermionic atoms can be continuously transformed into a BCS state of atomic Cooper pairs Regal2004a,Bartenstein2004a,Zwierlein2004a,Bourdel2004a,Kinast2004a, Zwierlein2005a.
The “confinement-induced” approach to strongly correlated phases was first proposed for a gas of repulsively interacting bosons in 1998 by D. Jaksch and coworkers Jaksch1998a. In particular, it was demonstrated that bosonic atoms loaded into an optical lattice are an ideal model system for the simulation of the Bose Hubbard Hamiltonian (see Fisher1989a) known from condensed matter physics, and it was predicted that the phase transition from a superfluid to a Mott-insulating state can be induced by merely increasing the lattice depth. The theoretical prediction, together with the experimental demonstration of Greiner2002a, highlighted the potential of ultracold atoms in optical lattices for simulations of quantum many-body systems and for the realization of strongly-correlated systems. Optical lattices have since been the key to the observation of intriguing phenomena such as a Tonks-Girardeau gas of atoms in a one-dimensional geometry Paredes2004a, Toshiya2004a or a Kosterlitz-Thouless transition in a 1D lattice (2D geometry) Hadzibabic2006a. Two-body bound states (molecules) in homonuclear systems have been engineered Stoeferle2006a,Thalhammer2006a and evidence for fermionic superfluidity has been reported in a cloud of \({}^{6}\)Li loaded into a 3D optical lattice Chin2006a.
As a completely new area in the field of ultracold quantum gases, multicomponent quantum gases in 3D optical lattices have recently attracted a lot of attention. In the case of mixtures of fermionic and bosonic atoms, the different quantum-statistical behavior of the components gives rise to fundamentally novel quantum many-body phases. In the extreme case of pairing of fermions with one or more bosons, a whole zoo of new quantum phases of these “composite fermions” has been predicted Lewenstein2004a. Fermi-Bose mixtures in 3D optical lattices may exhibit fermionic pairing which is mediated by the presence of bosonic atoms in full analogy to solid state superconductivity, and there are interesting connections to high-\(T_{C}\) superconductivity Heiselberg2000a,Albus2004a, Wang2005a. Even before such “atom pairs” form, Fermi-Bose correlations are predicted to become manifest in polaron-related physics of fermions dressed by a bosonic cloud Mathey2004a and quantum percolation Sanpera2004a. These phenomena are connected to disorder induced localization scenarios. In reduced dimensionality, phenomena such as charge-density waves Mathey2004a, Wang2005a and supersolids Buchler2003a are predicted to occur.
Any atomic Fermi-Bose mixture is necessarily also a special case of a heteronuclear system, i. e. a two-component system where the two nuclei have a different composition. In some cases, such as \({}^{6}\)Li – \({}^{7}\)Li, the resulting mass difference is small, whereas in others, such as \({}^{6}\)Li – \({}^{133}\)Cs, it is very large. The mass difference, as we shall see, gives rise to interesting features already in the case of harmonic trapping of a mixture, but it ultimately opens up an interesting and highly promising approach to dipolar gases, quantum computation and simulation: if the components of such a heteronuclear mixture are brought together to form a molecule in its internal (rovibrational and electronic) ground state, the mass difference gives rise to a permanent molecular dipole moment. The resulting dipolar interaction, together with the high density and ultracold temperature of the initial atomic samples, would pave the way for novel quantum gases with dipolar interactions and quantum computation with polar molecules DeMille2002a. In more exotic heteronuclear systems, these techniques could be used for precision measurements Sandars1967,Kozlov1995a,Hudson2006.
From the experimental point of view, there has been an impressive series of experiments on homonuclear systems which we have partly mentioned above, but experiments on heteronuclear systems in lattices, which Fermi-Bose mixtures are a special case of, have been scarce. By the end of 2005, the only experiment with Fermi-Bose mixtures in optical lattices has been reported by Ott and coworkers at LENS Ott2004b. In these experiments, the “insulating” behavior of a trapped ideal Fermi gas in a 1D lattice has been compared to collisionally induced transport of fermionic atoms in the presence of a bosonic cloud.
In this tutorial article, we present experiments performed at the University of Hamburg as part of our PhD theses SilkeThesis,OspelkausC2006a. The article is organized as follows: we start our discussion by giving a cartoon picture of harmonically trapped Fermi-Bose mixtures and review a simple Thomas-Fermi model for the density distributions. As a function of the heteronuclear interaction, we identify regimes of stable mixtures with attractive and repulsive interaction as well as regimes of collapse and phase separation for a large heteronuclear interaction strength Molmer1998a.
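For orientation, a minimal sketch of the mean-field Thomas-Fermi description behind such a phase diagram, written here in the standard contact-interaction form following Molmer1998a rather than necessarily in the authors' own notation, is

\[\mu_{B}=V_{B}(\mathbf{r})+g_{BB}\,n_{B}(\mathbf{r})+g_{BF}\,n_{F}(\mathbf{r}),\qquad E_{F}=V_{F}(\mathbf{r})+\frac{\hbar^{2}}{2m_{F}}\left[6\pi^{2}n_{F}(\mathbf{r})\right]^{2/3}+g_{BF}\,n_{B}(\mathbf{r}),\]

with \(g_{BB}=4\pi\hbar^{2}a_{BB}/m_{B}\) and \(g_{BF}=2\pi\hbar^{2}a_{BF}/m_{\rm red}\); solving these self-consistently for the densities \(n_{B}\) and \(n_{F}\) at fixed particle numbers yields stable, collapsing and phase-separating regimes as the heteronuclear scattering length \(a_{BF}\) is varied.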
We show how we create large harmonically trapped Fermi-Bose mixtures of and in a magnetic trap. We analyze stages of evaporative cooling and identify signatures of a dynamical mean-field collapse of the mixture as a result of attractive interactions. We show that large particle numbers of \(7\cdot 10^{5}\) and \(1.2\cdot 10^{6}\) atoms can be achieved Ospelkaus2006b, only limited by the aforementioned mean field collapse.
We show how heteronuclear interactions can be tailored by means of Feshbach resonances. The tunability of interactions gives access to the full phase diagram of the harmonically trapped mixture and allows us to produce both stable attractive and repulsive mixtures and to observe phase separation and collapse Ospelkaus2006c.
As a novel quantum many-body system, we discuss properties of Fermi-Bose mixtures trapped in 3D optical lattices. We show how already a small admixture of fermionic atoms reduces the bosonic coherence at much lower lattice depths than for the pure bosonic (superfluid – Mott insulator transition) case Ospelkaus2006e. We discuss various theoretical scenarios, from mean field models over thermodynamic processes and disorder-enhanced scenarios.
Combining the ability to load the mixture into a 3D lattice and tunability of interactions, we demonstrate creation of heteronuclear Feshbach molecules in a 3D optical lattice Ospelkaus2006d,two_particles_hh. When combined with coherent Raman de-excitation schemes for the molecular ro-vibrational manifold, this constitutes a key step towards the production of all-ground state samples of dense, polar molecules.
|
1012.3510 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 31573,
"num_imgs": 3,
"llama3_tokens_count": 11755
} | [
"content_image/1012.3510/x1.png",
"content_image/1012.3510/x2.png",
"content_image/1012.3510/x3.png"
] | # Confirmation of the \(X(1835)\) and observation of the resonances \(X(2120)\) and \(X(2370)\) in \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\)
M. Ablikim\({}^{1}\), M. N. Achasov\({}^{5}\), L. An\({}^{9}\), Q. An\({}^{36}\), Z. H. An\({}^{1}\), J. Z. Bai\({}^{1}\), R. Baldini\({}^{17}\), Y. Ban\({}^{23}\), J. Becker\({}^{2}\), N. Berger\({}^{1}\), M. Bertani\({}^{17}\), J. M. Bian\({}^{1}\), I. Boyko\({}^{15}\), R. A. Briere\({}^{3}\), V. Bytev\({}^{15}\), X. Cai\({}^{1}\), G. F. Cao\({}^{1}\), X. X. Cao\({}^{1}\), J. F. Chang\({}^{1}\), G. Chelkov\({}^{15a}\), G. Chen\({}^{1}\), H. S. Chen\({}^{1}\), J. C. Chen\({}^{1}\), M. L. Chen\({}^{1}\), S. J. Chen\({}^{21}\), Y. Chen\({}^{1}\), Y. B. Chen\({}^{1}\), H. P. Cheng\({}^{11}\), Y. P. Chu\({}^{1}\), D. Cronin-Hennessy\({}^{35}\), H. L. Dai\({}^{1}\), J. P. Dai\({}^{1}\), D. Dedovich\({}^{15}\), Z. Y. Deng\({}^{1}\), I. Denysenko\({}^{15b}\), M. Destefanis\({}^{38}\), Y. Ding\({}^{19}\), L. Y. Dong\({}^{1}\), M. Y. Dong\({}^{1}\), S. X. Du\({}^{42}\), M. Y. Duan\({}^{26}\), R. R. Fan\({}^{1}\), J. Fang\({}^{1}\), S. S. Fang\({}^{1}\), F. Feldbauer\({}^{2}\), C. Q. Feng\({}^{36}\), C. D. Fu\({}^{1}\), J. L. Fu\({}^{21}\), Y. Gao\({}^{32}\), C. Geng\({}^{36}\), K. Goetzen\({}^{7}\), W. X. Gong\({}^{1}\), M. Greco\({}^{38}\), S. Grishin\({}^{15}\), M. H. Gu\({}^{1}\), Y. T. Gu\({}^{9}\), Y. H. Guan\({}^{6}\), A. Q. Guo\({}^{22}\), L. B. Guo\({}^{20}\), Y.P. Guo\({}^{22}\), X. Q. Hao\({}^{1}\), F. A. Harris\({}^{34}\), K. L. He\({}^{1}\), M. He\({}^{1}\), Z. Y. He\({}^{22}\), Y. K. Heng\({}^{1}\), Z. L. Hou\({}^{1}\), H. M. Hu\({}^{1}\), J. F. Hu\({}^{6}\), T. Hu\({}^{1}\), B. Huang\({}^{1}\), G. M. Huang\({}^{12}\), J. S. Huang\({}^{10}\), X. T. Huang\({}^{25}\), Y. P. Huang\({}^{1}\), T. Hussain\({}^{37}\), C. S. Ji\({}^{36}\), Q. Ji\({}^{1}\), X. B. Ji\({}^{1}\), X. L. Ji\({}^{1}\), L. K. Jia\({}^{1}\), L. L. Jiang\({}^{1}\), X. S. Jiang\({}^{1}\), J. B. Jiao\({}^{25}\), Z. Jiao\({}^{11}\), D. P. Jin\({}^{1}\), S. Jin\({}^{1}\), F. F. Jing\({}^{32}\), M. Kavatsyuk\({}^{16}\), S. Komamiya\({}^{31}\), W. Kuehn\({}^{33}\), J. S. Lange\({}^{33}\), J. K. C. Leung\({}^{30}\), Cheng Li\({}^{36}\), Cui Li\({}^{36}\), D. M. Li\({}^{42}\), F. Li\({}^{1}\), G. Li\({}^{1}\), H. B. Li\({}^{1}\), J. C. Li\({}^{1}\), Lei Li\({}^{1}\), N. B. Li\({}^{20}\), Q. J. Li\({}^{1}\), W. D. Li\({}^{1}\), W. G. Li\({}^{1}\), X. L. Li\({}^{25}\), X. N. Li\({}^{1}\), X. Q. Li\({}^{22}\), X. R. Li\({}^{1}\), Z. B. Li\({}^{28}\), H. Liang\({}^{36}\), Y. F. Liang\({}^{27}\), Y. T. Liang\({}^{33}\), G. R Liao\({}^{8}\), X. T. Liao\({}^{1}\), B. J. Liu\({}^{29}\), B. J. Liu\({}^{30}\), C. L. Liu\({}^{3}\), C. X. Liu\({}^{1}\), C. Y. Liu\({}^{1}\), F. H. Liu\({}^{26}\), Fang Liu\({}^{1}\), Feng Liu\({}^{12}\), G. C. Liu\({}^{1}\), H. Liu\({}^{1}\), H. B. Liu\({}^{6}\), H. M. Liu\({}^{1}\), H. W. Liu\({}^{1}\), J. P. Liu\({}^{40}\), K. Liu\({}^{23}\), K. Y Liu\({}^{19}\), Q. Liu\({}^{34}\), S. B. Liu\({}^{36}\), X. Liu\({}^{18}\), X. H. Liu\({}^{1}\), Y. B. Liu\({}^{22}\), Y. W. Liu\({}^{36}\), Yong Liu\({}^{1}\), Z. A. Liu\({}^{1}\), Z. Q. Liu\({}^{1}\), H. Loehner\({}^{16}\), G. R. Lu\({}^{10}\), H. J. Lu\({}^{11}\), J. G. Lu\({}^{1}\), Q. W. Lu\({}^{26}\), X. R. Lu\({}^{6}\), Y. P. Lu\({}^{1}\), C. L. Luo\({}^{20}\), M. X. Luo\({}^{41}\), T. Luo\({}^{1}\), X. L. Luo\({}^{1}\), C. L. Ma\({}^{6}\), F. C. Ma\({}^{19}\), H. L. Ma\({}^{1}\), Q. M. Ma\({}^{1}\), T. Ma\({}^{1}\), X. Ma\({}^{1}\), X. Y. Ma\({}^{1}\), M. Maggiora\({}^{38}\), Q. A. Malik\({}^{37}\), H. Mao\({}^{1}\), Y. J. Mao\({}^{23}\), Z. P. Mao\({}^{1}\), J. G. Messchendorp\({}^{16}\), J. Min\({}^{1}\), R. E. Mitchell\({}^{14}\), X. H. 
Mo\({}^{1}\), C. Motzko\({}^{2}\), N. Yu. Muchnoi\({}^{5}\), Y. Nefedov\({}^{15}\), Z. Ning\({}^{1}\), S. L. Olsen\({}^{24}\), Q. Ouyang\({}^{1}\), S. Pacetti\({}^{17}\), M. Pelizaeus\({}^{34}\), K. Peters\({}^{7}\), J. L. Ping\({}^{20}\), R. G. Ping\({}^{1}\), R. Poling\({}^{35}\), C. S. J. Pun\({}^{30}\), M. Qi\({}^{21}\), S. Qian\({}^{1}\), C. F. Qiao\({}^{6}\), X. S. Qin\({}^{1}\), J. F. Qiu\({}^{1}\), K. H. Rashid\({}^{37}\), G. Rong\({}^{1}\), X. D. Ruan\({}^{9}\), A. Sarantsev\({}^{15c}\), J. Schulze\({}^{2}\), M. Shao\({}^{36}\), C. P. Shen\({}^{34}\), X. Y. Shen\({}^{1}\), H. Y. Sheng\({}^{1}\), M. R. Shepherd\({}^{14}\), X. Y. Song\({}^{1}\), S. Sonoda\({}^{31}\), S. Spataro\({}^{38}\), B. Spruck\({}^{33}\), D. H. Sun\({}^{1}\), G. X. Sun\({}^{1}\), J. F. Sun\({}^{10}\), S. S. Sun\({}^{1}\), X. D. Sun\({}^{1}\), Y. J. Sun\({}^{36}\), Y. Z. Sun\({}^{1}\), Z. J. Sun\({}^{1}\), Z. T. Sun\({}^{36}\), C. J. Tang\({}^{27}\), X. Tang\({}^{1}\), X. F. Tang\({}^{8}\), H. L. Tian\({}^{1}\), D. Toth\({}^{35}\), G. S. Varner\({}^{34}\), X. Wan\({}^{1}\), B. Q. Wang\({}^{23}\), K. Wang\({}^{1}\), L. L. Wang\({}^{4}\), L. S. Wang\({}^{1}\), M. Wang\({}^{25}\), P. Wang\({}^{1}\), P. L. Wang\({}^{1}\), Q. Wang\({}^{1}\), S. G. Wang\({}^{23}\), X. L. Wang\({}^{36}\), Y. D. Wang\({}^{36}\), Y. F. Wang\({}^{1}\), Y. Q. Wang\({}^{25}\), Z. Wang\({}^{1}\), Z. G. Wang\({}^{1}\), Z. Y. Wang\({}^{1}\), D. H. Wei\({}^{8}\), S. P. Wen\({}^{1}\), U. Wiedner\({}^{2}\), L. H. Wu\({}^{1}\), N. Wu\({}^{1}\), W. Wu\({}^{19}\), Z. Wu\({}^{1}\), Z. J. Xiao\({}^{20}\), Y. G. Xie\({}^{1}\), G. F. Xu\({}^{1}\), G. M. Xu\({}^{23}\), H. Xu\({}^{1}\), Y. Xu\({}^{22}\), Z. R. Xu\({}^{36}\), Z. Z. Xu\({}^{36}\), Z. Xue\({}^{1}\), L. Yan\({}^{36}\), W. B. Yan\({}^{36}\), Y. H. Yan\({}^{13}\), H. X. Yang\({}^{1}\), M. Yang\({}^{1}\), T. Yang\({}^{9}\), Y. Yang\({}^{12}\), Y. X. Yang\({}^{8}\), M. Ye\({}^{1}\), M. H. Ye\({}^{4}\), B. X. Yu\({}^{1}\), C. X. Yu\({}^{22}\), L. Yu\({}^{12}\), C. Z. Yuan\({}^{1}\), W. L. Yuan\({}^{20}\), Y. Yuan\({}^{1}\), A. A. Zafar\({}^{37}\), A. Zallo\({}^{17}\), Y. Zeng\({}^{13}\), B. X. Zhang\({}^{1}\), B. Y. Zhang\({}^{1}\), C. C. Zhang\({}^{1}\), D. H. Zhang\({}^{1}\), H. H. Zhang\({}^{28}\), H. Y. Zhang\({}^{1}\), J. Zhang\({}^{20}\), J. W. Zhang\({}^{1}\), J. Y. Zhang\({}^{1}\), J. Z. Zhang\({}^{1}\), L. Zhang\({}^{21}\), S. H. Zhang\({}^{1}\), T. R. Zhang\({}^{20}\), X. J. Zhang\({}^{1}\), X. Y. Zhang\({}^{25}\), Y. Zhang\({}^{1}\), Y. H. Zhang\({}^{1}\), Z. P. Zhang\({}^{36}\), Z. Y. Zhang\({}^{40}\), G. Zhao\({}^{1}\), H. S. Zhao\({}^{1}\), Jiawei Zhao\({}^{36}\), Jingwei Zhao\({}^{1}\), Lei Zhao\({}^{36}\), Ling Zhao\({}^{1}\), M. G. Zhao\({}^{22}\), Q. Zhao\({}^{1}\), S. J. Zhao\({}^{42}\), T. C. Zhao\({}^{39}\), X. H. Zhao\({}^{21}\), Y. B. Zhao\({}^{1}\), Z. G. Zhao\({}^{36}\), Z. L. Zhao\({}^{9}\), A. Zhemchugov\({}^{15a}\), B. Zheng\({}^{1}\), J. P. Zheng\({}^{1}\), Y. H. Zheng\({}^{6}\), Z. P. Zheng\({}^{1}\), B. Zhong\({}^{1}\), J. Zhong\({}^{2}\), L. Zhong\({}^{32}\), L. Zhou\({}^{1}\), X. K. Zhou\({}^{6}\), X. R. Zhou\({}^{36}\), C. Zhu\({}^{1}\), K. Zhu\({}^{1}\), K. J. Zhu\({}^{1}\), S. H. Zhu\({}^{1}\), X. L. Zhu\({}^{32}\), X. W. Zhu\({}^{1}\), Y. S. Zhu\({}^{1}\), Z. A. Zhu\({}^{1}\), J. Zhuang\({}^{1}\), B. S. Zou\({}^{1}\), J. H. Zou\({}^{1}\), J. X. Zuo\({}^{1}\), P. Zweber\({}^{35}\) (BESIII Collaboration) \({}^{1}\) _Institute of High Energy Physics, Beijing 100049, P. R. 
China \({}^{2}\) Bochum Ruhr-University, 44780 Bochum, Germany \({}^{3}\) Carnegie Mellon University, Pittsburgh, PA 15213, USA \({}^{4}\) China Center of Advanced Science and Technology, Beijing 100190, P. R. China \({}^{5}\) G.I. Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia \({}^{6}\) Graduate University of Chinese Academy of Sciences, Beijing 100049, P. R. China \({}^{7}\) GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany \({}^{8}\) Guangxi Normal University, Guilin 541004, P. R. China \({}^{9}\) Guangxi University, Naning 530004, P. R. China \({}^{10}\) Henan Normal University, Xinxiang 453007, P. R. China \({}^{11}\) Huangshan College, Huangshan 245000, P. R. China \({}^{12}\) Huazhong Normal University, Wuhan 430079, P. R. China \({}^{13}\) Hunan University, Changsha 410082, P. R. China \({}^{14}\) Indiana University, Bloomington, Indiana 47405, USA \({}^{15}\) Joint Institute for Nuclear Research, 141980 Dubna, Russia \({}^{16}\) KVI/University of Groningen, 9747 AA Groningen, The Netherlands \({}^{17}\) Laboratori Nazionali di Frascati - INFN, 00044 Frascati, Italy \({}^{18}\) Lanzhou University, Lanzhou 730000, P. R. China \({}^{19}\) Liaoning University, Shenyang 110036, P. R. China \({}^{20}\) Nanjing Normal University, Nanjing 210046, P. R. China \({}^{21}\) Nanjing University, Nanjing 210093, P. R. China \({}^{22}\) Nankai University, Tianjin 300071, P. R. China \({}^{23}\) Peking University, Beijing 100871, P. R. China \({}^{24}\) Seoul National University, Seoul, 151-747 Korea \({}^{25}\) Shandong University, Jinan 250100, P. R. China \({}^{26}\) Shanxi University, Taiyuan 030006, P. R. China \({}^{27}\) Sichuan University, Chengdu 610064, P. R. China \({}^{28}\) Sun Yat-Sen University, Guangzhou 510275, P. R. China \({}^{29}\) The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. \({}^{30}\) The University of Hong Kong, Pokfulam, Hong Kong \({}^{31}\) The University of Tokyo, Tokyo 113-0033 Japan \({}^{32}\) Tsinghua University, Beijing 100084, P. R. China \({}^{33}\) Universitaet Giessen, 35392 Giessen, Germany \({}^{34}\) University of Hawaii, Honolulu, Hawaii 96822, USA \({}^{35}\) University of Minnesota, Minneapolis, MN 55455, USA \({}^{36}\) University of Science and Technology of China, Hefei 230026, P. R. China \({}^{37}\) University of the Punjab, Lahore-54590, Pakistan \({}^{38}\) University of Turin and INFN, Turin, Italy \({}^{39}\) University of Washington, Seattle, WA 98195, USA \({}^{40}\) Wuhan University, Wuhan 430072, P. R. China \({}^{41}\) Zhejiang University, Hangzhou 310027, P. R. China \({}^{42}\) Zhengzhou University, Zhengzhou 450001, P. R. China \({}^{a}\) also at the Moscow Institute of Physics and Technology, Moscow, Russia \({}^{b}\) on leave from the Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine \({}^{c}\) also at the PNPI, Gatchina, Russia_
February 22, 2024
###### Abstract
With a sample of \((225.2\pm 2.8)\times 10^{6}\)\(J/\psi\) events registered in the BESIII detector, \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\) is studied using two \(\eta^{\prime}\) decay modes: \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\eta\) and \(\eta^{\prime}\rightarrow\gamma\rho^{0}\). The \(X(1835)\), which was previously observed by BESII, is confirmed with a statistical significance that is larger than \(20\sigma\). In addition, in the \(\pi^{+}\pi^{-}\eta^{\prime}\) invariant mass spectrum, the \(X(2120)\) and the \(X(2370)\), are observed with statistical significances larger than \(7.2\sigma\) and \(6.4\sigma\) , respectively. For the \(X(1835)\), the angular distribution of the radiative photon is consistent with expectations for a pseudoscalar.
pacs: 12.39.Mk, 12.40.Yx, 13.20.Gd, 13.75.Cs
A \(\pi^{+}\pi^{-}\eta^{\prime}\) resonance, the \(X(1835)\), was observed in \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\) decays with a statistical significance of \(7.7\sigma\) by the BESII experiment [1]. A fit to a Breit-Wigner function yielded a mass \(M=1833.7\pm 6.1({\rm stat})\pm 2.7({\rm syst})~{}{\rm MeV}/c^{2}\), a width \(\Gamma=67.7\pm 20.3({\rm stat})\pm 7.7({\rm syst})~{}{\rm MeV}/c^{2}\), and a product branching fraction \(B(J/\psi\rightarrow\gamma X)\cdot B(X\rightarrow\pi^{+}\pi^{-}\eta^{\prime})=(2.2\pm 0.4({\rm stat})\pm 0.4({\rm syst}))\times 10^{-4}\). The study was stimulated by the anomalous \(p\bar{p}\) invariant-mass threshold enhancement that was reported in \(J/\psi\rightarrow\gamma p\bar{p}\) decays by the BESII experiment [2] and was recently confirmed in an analysis of \(\psi^{\prime}\rightarrow\pi^{+}\pi^{-}J/\psi,~{}J/\psi\rightarrow\gamma p\bar{p}\) decays by the BESIII experiment [3]. Possible interpretations of the \(X(1835)\) include a \(p\bar{p}\) bound state [4; 5; 6; 7], a glueball [8; 9; 10], a radial excitation of the \(\eta^{\prime}\) meson [11], etc. A high-statistics data sample collected with BESIII provides an opportunity to confirm the existence of the \(X(1835)\) and look for possible related states that decay to \(\pi^{+}\pi^{-}\eta^{\prime}\).
Lattice QCD predicts that the lowest-lying pseudoscalar glueball has a mass of around \(2.3~{}{\rm GeV}/c^{2}\) [12]. This pseudoscalar glueball may have properties in common with the \(\eta_{c}\), since the decay dynamics of both are dominated by gluons. One of the strongest decay channels of the \(\eta_{c}\) is \(\pi^{+}\pi^{-}\eta^{\prime}\). Thus \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\) decays may be a good channel for finding \(0^{-+}\) glueballs.
In this letter, we report a study of \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\) that uses two \(\eta^{\prime}\) decay modes, \(\eta^{\prime}\rightarrow\gamma\rho\) and \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\eta\). The analysis uses a sample of \((225.2\pm 2.8)\times 10^{6}\)\(J/\psi\) events [13] accumulated in the new Beijing Spectrometer (BESIII) [14] located at the Beijing Electron-Positron Collider (BEPCII) [15] at the Beijing Institute of High Energy Physics.
The design peak luminosity of the BEPCII double-ring \(e^{+}e^{-}\) collider is \(10^{33}\) cm\({}^{-2}s^{-1}\) with beam currents of 0.93 A. The BESIII detector has a geometrical acceptance of 93% of 4\(\pi\) and consists of four main components. 1) A small-celled, helium-based main drift chamber (MDC) with 43 layers. The average single wire resolution is 135 \(\mu\)m, and the momentum resolution for 1 GeV\(/c\) charged particles in a 1 T magnetic field is 0.5%. 2) An electromagnetic calorimeter (EMC) comprised of 6240 CsI (Tl) crystals arranged in a cylindrical shape (barrel) plus two endcaps. The energy resolution for 1.0 GeV photons is 2.5% in the barrel and 5% in the endcaps, and the position resolution is 6 mm in the barrel and 9 mm in the endcaps. 3) A Time-Of-Flight system (TOF) for particle identification composed of a barrel part with two layers with 88 pieces of 5 cm thick, 2.4 m long plastic scintillators in each layer, and two endcaps each with 96 fan-shaped, 5 cm thick, plastic scintillators. The time resolution is 80 ps in the barrel and 110 ps in the endcaps, corresponding to a \(2\sigma\) K/\(\pi\) separation for momenta up to 1.0 GeV\(/c\). 4) A muon chamber system (MUC) made of 1000 m\({}^{2}\) of Resistive Plate Chambers (RPC) arranged in 9 layers in the barrel and 8 layers in the endcaps and incorporated in the return iron of the superconducting magnet. The position resolution is about 2 cm.
Charged-particle tracks in the polar angle range \(|\cos\theta|<0.93\) are reconstructed from hits in the MDC. Tracks that extrapolate to be within \(20~{}{\rm cm}\) of the interaction point in the beam direction and \(2~{}{\rm cm}\) in the plane perpendicular to the beam are selected. The TOF and \(dE/dx\) information are combined to form particle identification confidence levels for the \(\pi\), \(K\), and \(p\) hypotheses; each track is assigned to the particle type that corresponds to the hypothesis with the highest confidence level. Photon candidates are required to have at least \(100~{}{\rm MeV}\) of energy in the EMC regions \(|\cos\theta|<0.8\) and \(0.86<|\cos\theta|<0.92\) and be isolated from all charged tracks by more than \(5^{\circ}\). In this analysis, candidate events are required to have four charged tracks (zero net charge) with at least three of the charged tracks identified as pions. At least two photons (three photons) are required for the \(\eta^{\prime}\rightarrow\gamma\rho\) (\(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\eta\)) channel.
For \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}(\eta^{\prime}\to \gamma\rho)\), a four-constraint (4C) energy-momentum conservation kinematic fit is performed to the \(\gamma\gamma\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) hypothesis. For events with more than two photon candidates, the combination with the minimum \(\chi^{2}\) is used, and \(\chi^{2}_{4C}<40\) is required. Events with \(|M_{\gamma\gamma}-m_{\pi^{0}}|<0.04~{}{\rm GeV}/c^{2}\), \(|M_{\gamma\gamma}-m_{\eta}|<0.03~{}{\rm GeV}/c^{2}\), \(0.72~{}{\rm GeV}/c^{2}<M_{\gamma\gamma}<0.82~{}{\rm GeV}/c^{2}\) or \(|M_{\gamma\pi^{+}\pi^{-}}-m_{\eta}|<0.007{~{}\rm GeV}/c^{2}\) are rejected to suppress the background from \(\pi^{0}\pi^{+}\pi^{-}\pi^{+}\pi^{-}\), \(\eta\pi^{+}\pi^{-}\pi^{+}\pi^{-}\), \(\omega(\omega\rightarrow\gamma\pi^{0})\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) and \(\gamma\pi^{+}\pi^{-}\eta(\eta\rightarrow\gamma\pi^{+}\pi^{-})\), respectively. A clear \(\eta^{\prime}\) signal with a \(5~{}{\rm MeV}/c^{2}\) mass resolution is evident in the mass spectrum of all selected \(\gamma\pi^{+}\pi^{-}\) combinations shown in Fig. 1(a). Candidate \(\rho\) and \(\eta^{\prime}\) mesons are reconstructed from the \(\pi^{+}\pi^{-}\) and \(\gamma\pi^{+}\pi^{-}\) pairs with \(|M_{\pi^{+}\pi^{-}}-m_{\rho}|<0.2~{}{\rm GeV}/c^{2}\) and \(|M_{\gamma\pi^{+}\pi^{-}}-m_{\eta^{\prime}}|<0.015{~{}\rm GeV}/c^{2}\), respectively. If more than one combination passes these criteria, the combination with \(M_{\gamma\pi^{+}\pi^{-}}\) closest to \(m_{\eta^{\prime}}\) is selected. After the above selection, the \(X(1835)\) resonance is clearly visible in the \(\pi^{+}\pi^{-}\eta^{\prime}\) invariant mass spectrum of Fig. 1(b). Also, additional peaks are evident around 2.1 and 2.4\(~{}{\rm GeV}/c^{2}\) as well as a distinct signal for the \(\eta_{c}\).
<figure><img src="content_image/1012.3510/x1.png"><figcaption>Figure 1: Invariant-mass distributions for the selected candidate events. (a)and (b) are the γπ+π− invariant-mass spectrum and the π+π−η′ invariant-massspectrum for η′→γρ, respectively. (c) and (d) are the π+π−η invariant-massspectrum and the π+π−η′ invariant-mass spectrum for η′→π+π−η, respectively.The histograms in (b) and (d) are from J/ψ→γπ+π−η′ phase-space MC events (witharbitrary normalization) for η′→γρ and η′→π+π−η, respectively.</figcaption></figure>
For \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}(\eta^{\prime}\rightarrow\pi ^{+}\pi^{-}\eta)\), a 4C kinematic fit to the \(\gamma\gamma\gamma\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) hypothesis is performed. If there are more than three photon candidates, the combination with the minimum \(\chi^{2}_{4C}\) is selected, and \(\chi^{2}_{4C}<40\) is required. In order to reduce the combinatorial background events from \(\pi^{0}\rightarrow\gamma\gamma\), \(|M_{\gamma\gamma}-m_{\pi^{0}}|>0.04{~{}\rm GeV}/c^{2}\) is required for all photon pairs. The \(\eta\) candidates are selected by requiring \(|M_{\gamma\gamma}-m_{\eta}|<0.03{~{}\rm GeV}/c^{2}\). A five-constraint (5C) fit with an \(\eta\) mass constraint is used to improve the mass resolution from \(8~{}{\rm MeV}/c^{2}\)(4C) to \(3~{}{\rm MeV}/c^{2}\), as shown in Fig. 1(c) where \(\chi^{2}_{5C}<40\) is required. To select \(\eta^{\prime}\) mesons, \(|M_{\pi^{+}\pi^{-}\eta}-m_{\eta^{\prime}}|<0.01{~{}\rm GeV}/c^{2}\) is required. If more than one combination passes the above selection, the combination with \(M_{\pi^{+}\pi^{-}\eta}\) closest to \(m_{\eta^{\prime}}\) is selected. After the above selection, structures similar to those seen for the \(\eta^{\prime}\rightarrow\gamma\rho\) channel in the \(\pi^{+}\pi^{-}\eta^{\prime}\) invariant mass spectrum can be seen in Fig. 1(d), namely peaks near 1.8, 2.1 and 2.4 \({~{}\rm GeV}/c^{2}\) as well as the \(\eta_{c}\).
Potential background processes are studied with an inclusive sample of \(2\times 10^{8}\)\(J/\psi\) events generated according to the Lund-Charm model [16] and the Particle Data Group (PDG) decay tables [17]. There are no peaking backgrounds at the positions of the three resonances. To ensure further that the three peaks are not due to background, we have studied potential exclusive background processes using data. The main background channel is from \(J/\psi\rightarrow\pi^{0}\pi^{+}\pi^{-}\eta^{\prime}\). Non-\(\eta^{\prime}\) processes are studied with \(\eta^{\prime}\) mass-sideband events. Neither of these produce peaking structures.
The \(\pi^{+}\pi^{-}\eta^{\prime}\) invariant mass spectrum for the combined two \(\eta^{\prime}\) decay modes is presented in Fig. 2. Here a small peak at the position of the \(f_{1}(1510)\) signal is also present. Fits to the mass spectra have been made using four efficiency-corrected Breit-Wigner functions convolved with a Gaussian mass resolution plus a non-resonant \(\pi^{+}\pi^{-}\eta^{\prime}\) contribution and background representations, where the efficiency for the combined channels is obtained from the branching-ratio-weighted average of the efficiencies for the two \(\eta^{\prime}\) modes. The contribution from non-resonant \(\gamma\pi^{+}\pi^{-}\eta^{\prime}\) production is described by reconstructed MC-generated \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\) Phase Space (PS) decays, and it is treated as an incoherent process. The background contribution can be divided into two components: the contribution from non-\(\eta^{\prime}\) events, estimated from the \(\eta^{\prime}\) mass sidebands, and the contribution from \(J/\psi\rightarrow\pi^{0}\pi^{+}\pi^{-}\eta^{\prime}\). For the second background, we obtain the background \(\pi^{+}\pi^{-}\eta^{\prime}\) mass spectrum from data by selecting \(J/\psi\rightarrow\pi^{0}\pi^{+}\pi^{-}\eta^{\prime}\) events and reweighting their mass spectrum with a weight equal to the MC efficiency ratio of the \(\gamma\pi^{+}\pi^{-}\eta^{\prime}\) and \(\pi^{0}\pi^{+}\pi^{-}\eta^{\prime}\) selections for \(J/\psi\rightarrow\pi^{0}\pi^{+}\pi^{-}\eta^{\prime}\). The masses, widths and numbers of events of the \(f_{1}(1510)\), the \(X(1835)\) and the resonances near 2.1 and 2.4 \({\rm GeV}/c^{2}\), the \(X(2120)\) and \(X(2370)\), are listed in Table 1. The statistical significance is determined from the change in \(-2{\rm ln}L\) between the fits to the mass spectra with and without the signal, taking into account the change in the number of degrees of freedom of the fits. With the systematic uncertainties in the fit taken into account, the statistical significance of the \(X(1835)\) is larger than \(20\sigma\), while those for the \(f_{1}(1510)\), the \(X(2120)\) and the \(X(2370)\) are larger than \(5.7\sigma\), \(7.2\sigma\) and \(6.4\sigma\), respectively. The mass and width from the fit of the \(f_{1}(1510)\) are consistent with PDG values [17]. With MC-determined selection efficiencies of \(16.0\%\) and \(11.3\%\) for the \(\eta^{\prime}\rightarrow\gamma\rho\) and \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\eta\) decay modes, respectively, the branching fraction for the \(X(1835)\) is measured to be \(B(J/\psi\rightarrow\gamma X(1835))\cdot B(X(1835)\rightarrow\pi^{+}\pi^{-}\eta^{\prime})=(2.87\pm 0.09)\times 10^{-4}\). The consistency between the two \(\eta^{\prime}\) decay modes is checked by fitting their \(\pi^{+}\pi^{-}\eta^{\prime}\) mass distributions separately with the procedure described above.
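As a rough illustration of the signal model used in these fits, the sketch below convolves a Breit-Wigner line shape with a Gaussian mass resolution on a numerical grid. It is only a minimal Python sketch: the non-relativistic Breit-Wigner form, the grid spacing and the toy parameter values are our assumptions, and the efficiency correction, background terms and likelihood machinery of the actual analysis are not reproduced.

```python
import numpy as np

def breit_wigner(m, M0, Gamma):
    """Non-relativistic Breit-Wigner line shape (illustrative choice)."""
    return 1.0 / ((m - M0) ** 2 + Gamma ** 2 / 4.0)

def smeared_signal(m_grid, M0, Gamma, sigma):
    """Breit-Wigner convolved numerically with a Gaussian mass resolution."""
    dm = m_grid[1] - m_grid[0]
    bw = breit_wigner(m_grid, M0, Gamma)
    kernel_x = np.arange(-5 * sigma, 5 * sigma + dm, dm)
    kernel = np.exp(-0.5 * (kernel_x / sigma) ** 2)
    kernel /= kernel.sum()                      # unit-area smearing kernel
    return np.convolve(bw, kernel, mode="same")

# Toy numbers in GeV/c^2, roughly in the X(1835) region.
m = np.arange(1.4, 2.8, 0.001)
shape = smeared_signal(m, M0=1.8365, Gamma=0.190, sigma=0.005)
```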
<figure><img src="content_image/1012.3510/x2.png"><figcaption>Figure 2: (a) The π+π−η′ invariant-mass distribution for the selected eventsfrom the two η′ decay modes. (b) mass spectrum fitting with four resonances,here, the dash-dot line is contributions of non-η′ events and the π0π+π−η′background for two η′ decay modes and the dash line is contributions of thetotal background and non-resonant π+π−η′ process.</figcaption></figure>
Resonance | M (MeV/c2) | Γ (MeV/c2) | N_events
---|---|---|---
f1(1510) | 1522.7±5.0 | 48±11 | 230±37
X(1835) | 1836.5±3.0 | 190.1±9.0 | 4265±131
X(2120) | 2122.4±6.7 | 83±16 | 647±103
X(2370) | 2376.3±8.7 | 83±17 | 565±105
Table 1: Fit results with four resonances for the combined two η′ decay modes
For radiative \(J/\psi\) decays to a pseudoscalar meson, the polar angle of the photon in the \(J/\psi\) center of mass system, \(\theta_{\gamma}\), should be distributed according to \(1+\cos^{2}\theta_{\gamma}\). We divide the \(|\cos\theta_{\gamma}|\) distribution into 10 bins in the region of \([0,1.0]\). With the same procedure as described above, the number of the \(X(1835)\) events in each bin can be obtained by fitting the mass spectrum in this bin, and then the background-subtracted, acceptance-corrected \(|\cos\theta_{\gamma}|\) distribution for the \(X(1835)\) is obtained as shown in Fig. 3, where the errors are statistical only. It agrees with \(1+\cos^{2}\theta_{\gamma}\), which is expected for a pseudoscalar, with \(\chi^{2}/d.o.f=11.8/9\).
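As a rough sketch of this check, the snippet below fits an \(N(1+\cos^{2}\theta_{\gamma})\) shape to binned, acceptance-corrected yields and computes the resulting \(\chi^{2}\). The yields and uncertainties in the example are invented placeholders, not the measured values behind Fig. 3.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical acceptance-corrected X(1835) yields in 10 |cos(theta_gamma)| bins.
cos_centers = np.linspace(0.05, 0.95, 10)
yields = np.array([410, 400, 430, 445, 470, 500, 540, 580, 640, 700], dtype=float)
errors = np.sqrt(yields)

def pseudoscalar_shape(c, N):
    # Expected angular distribution for a pseudoscalar in radiative J/psi decay.
    return N * (1.0 + c ** 2)

popt, pcov = curve_fit(pseudoscalar_shape, cos_centers, yields, sigma=errors, p0=[400.0])
chi2 = np.sum(((yields - pseudoscalar_shape(cos_centers, *popt)) / errors) ** 2)
print(f"N = {popt[0]:.1f}, chi2/dof = {chi2:.1f}/{len(yields) - 1}")
```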
<figure><img src="content_image/1012.3510/x3.png"><figcaption>Figure 3: The background-subtracted, acceptance-corrected |cosθγ|distribution of the X(1835) for two η′ decay modes for J/ψ→γπ+π−η′.</figcaption></figure>
The systematic uncertainties on the mass and width are mainly from the uncertainty of the background representation, the mass range included in the fit, different shapes for the background contributions and the non-resonant process, and contributions of possible additional resonances in the \(1.6{~{}\rm GeV}/c^{2}\) and \(2.6{~{}\rm GeV}/c^{2}\) mass regions. From the study of \(J/\psi\to p\bar{p}\pi^{+}\pi^{-}\), the PID efficiency difference between data and MC is determined. Using this difference and reweighting each MC event with a weight equal to the efficiency ratio between data and MC, we re-fit the mass spectra and take the changes as systematic uncertainties associated with data and MC inconsistencies for PID efficiencies. The total systematic errors on the mass and width are \({}^{+5.6}_{-2.1}\) and \({}^{+38}_{-36}\)\({\rm MeV}/c^{2}\) for the \(X(1835)\), \({}^{+4.7}_{-2.7}\) and \({}^{+31}_{-11}\)\({\rm MeV}/c^{2}\) for the \(X(2120)\), and \({}^{+3.2}_{-4.3}\) and \({}^{+44}_{-6}\)\({\rm MeV}/c^{2}\) for the \(X(2370)\), respectively. For the systematic error of the branching fraction measurement, we additionally include the uncertainties of the MC generator, the charged track detection efficiency, the photon detection efficiency, the kinematic fit, the \(\eta^{\prime}\) decay branching fractions to \(\pi^{+}\pi^{-}\eta\) and \(\gamma\rho\) [17], the requirement on the \(\gamma\gamma\) invariant-mass distribution, the signal selection of the \(\rho\), \(\eta\) and \(\eta^{\prime}\), and the total number of \(J/\psi\) events [13]. The main contribution also comes from the uncertainty in the background estimation, and the total relative systematic error on the product branching fraction for the \(X(1835)\) is \({}^{+17\%}_{-18\%}\).
In summary, the decay channel \(J/\psi\rightarrow\gamma\pi^{+}\pi^{-}\eta^{\prime}\) is analyzed using two \(\eta^{\prime}\) decay modes, \(\eta^{\prime}\rightarrow\gamma\rho\) and \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\eta\). The \(X(1835)\), which was first observed at BESII, has been confirmed with a statistical significance larger than \(20\sigma\). Meanwhile, two resonances, the \(X(2120)\) and the \(X(2370)\), are observed with statistical significances larger than \(7.2\sigma\) and \(6.4\sigma\), respectively. The masses and widths are measured to be:
* \(X(1835)\) \(M=1836.5\pm 3.0({\rm stat})^{+5.6}_{-2.1}({\rm syst})~{}{\rm MeV}/c^{2}\) \(\Gamma=190\pm 9({\rm stat})^{+38}_{-36}({\rm syst})~{}{\rm MeV}/c^{2}\)
* \(X(2120)\) \(M=2122.4\pm 6.7({\rm stat})^{+4.7}_{-2.7}({\rm syst})~{}{\rm MeV}/c^{2}\) \(\Gamma=83\pm 16({\rm stat})^{+31}_{-11}({\rm syst})~{}{\rm MeV}/c^{2}\)
* \(X(2370)\) \(M=2376.3\pm 8.7({\rm stat})^{+3.2}_{-4.3}({\rm syst})~{}{\rm MeV}/c^{2}\) \(\Gamma=83\pm 17({\rm stat})^{+44}_{-6}({\rm syst})~{}{\rm MeV}/c^{2}\)
For the \(X(1835)\), the product branching fraction is \(B(J/\psi\rightarrow\gamma X(1835))\cdot B(X(1835)\rightarrow\pi^{+}\pi^{-}\eta ^{\prime})=(2.87\pm 0.09{\rm(stat)}^{+0.49}_{-0.52}{\rm(syst)})\times 10^{-4}\), and the angular distribution of the radiative photon is consistent with a pseudoscalar assignment. The mass of the \(X(1835)\) is consistent with the BESII result, but the width is significantly larger. If we fit the mass spectrum with one resonance as BESII, the mass and width of the X(1835) are \(1841.2\pm 2.9~{}{\rm MeV}/c^{2}\) and \(109\pm 11~{}{\rm MeV}/c^{2}\), where the errors are statistical only.
In the fit to the mass spectrum in Fig. 2(b), possible interferences among the different resonances and the non-resonant process are not taken into account, which might be a source of the large \(\chi^{2}\) value of the fit (\(\chi^{2}/d.o.f=144/62\)). The dips around \(2.2{~{}\rm GeV}/c^{2}\) and \(2.5{~{}\rm GeV}/c^{2}\) may not be fitted well because of the neglect of such interferences. In the absence of knowledge of the spin-parities of the resonances and their intermediate decay states, reliable fits that include interference cannot be done. To determine the spin and parity of the \(X(1835)\), \(X(2120)\) and \(X(2370)\), and to measure their masses and widths more precisely, a partial wave analysis must be performed, which will be possible with the much higher statistics \(J/\psi\) data samples planned for future runs of the BESIII experiment.
The BESIII collaboration thanks the staff of BEPCII and the computing center for their hard efforts. This work is supported in part by the Ministry of Science and Technology of China under Contract No. 2009CB825200; National Natural Science Foundation of China (NSFC) under Contracts Nos. 10625524, 10821063, 10825524, 10835001, 10935007; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; CAS under Contracts Nos. KJCX2-YW-N29, KJCX2-YW-N45; 100 Talents Program of CAS; Istituto Nazionale di Fisica Nucleare, Italy; Russian Foundation for Basic Research under Contracts Nos. 08-02-92221, 08-02-92200-NSFC-a; Siberian Branch of Russian Academy of Science, joint project No 32 with CAS; U. S. Department of Energy under Contracts Nos. DE-FG02-04ER41291, DE-FG02-91ER40682, DE-FG02-94ER40823; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
## References
* (1) M. Ablikim _et al._ (BES Collaboration), Phys. Rev. Lett. **95**, 262001 (2005).
* (2) J.Z. Bai _et al._ (BES Collaboration), Phys. Rev. Lett. **91**, 022001 (2003).
* (3) M. Ablikim _et al._ (BESIII Collaboration), Chin.Phys. **C34**, 421 (2010).
* (4) G.J. Ding and M.L. Yan, Phys. Rev. **C 72**, 015208 (2005); G. J. Ding and M. L. Yan, Eur. Phys. J. **A28**, 351 (2006).
* (5) J. P. Dedonder _et al._, Phys. Rev. **C 80**, 045207 (2009).
* (6) C. Liu, Eur. Phys. J. **C 53**, 413 (2008).
* (7) Z. G. Wang and S. L. Wan, J. Phys. **34**, 505 (2007).
* (8) G. Hao, C. F. Qiao and A. L. Zhang, Phys. Lett. **B 642**, 53 (2006).
* (9) B. A. Li, Phys. Rev. **D 74**, 034019 (2006).
* (10) N. Kochelev and D. P. Min, Phys. Lett. **B 633**, 283 (2006).
* (11) T. Huang and S. L. Zhu, Phys. Rev. **D 73**, 014023 (2006).
* (12) C. Amsler and N. A. Tornqvist, Phys. Rep. **389**, 61 (2004); E. Klempt and A. Zaitsev, Phys. Rep. **454**, 1 (2007); Y. Chen _et al._, Phys. Rev. **D 73**, 014516 (2006).
* (13) M. Ablikim _et al._ (BESIII Collaboration), arXiv:1012.1117 [hep-ex].
* (14) M. Ablikim _et al._ (BESIII Collaboration), Nucl. Instrum. Meth. **A 614**, 345 (2010).
* (15) J. Z. Bai _et al._ (BES Collaboration), Nucl. Instrum. Meth. **A 344**, 319 (1994); Nucl. Instrum. Meth. **A 458**, 627 (2001).
* (16) J.C. Chen _et al._, Phys. Rev. **D 62**, 034003 (2000).
* (17) K. Nakamura _et al._ (Particle Data Group), J. Phys. **G 37**, 075021 (2010).
|
1805.11878 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 26522,
"num_imgs": 2,
"llama3_tokens_count": 6381
} | [
"content_image/1805.11878/x1.png",
"content_image/1805.11878/x2.png"
] | # Modeling Cognitive Processes in Social Tagging to Improve Tag Recommendations
Dominik Kowald
Supervised by: Prof. Stefanie Lindstaedt (Know-Center)
Know-Center, Graz University of Technology
Inffeldgasse 13, Graz, Austria
dkowald@know-center.at
###### Abstract
With the emergence of Web 2.0, tag recommenders have become important tools, which aim to support users in finding descriptive tags for their bookmarked resources. Although current algorithms provide good results in terms of tag prediction accuracy, they are often designed in a data-driven way and thus, lack a thorough understanding of the cognitive processes that play a role when people assign tags to resources. This thesis aims at modeling these cognitive dynamics in social tagging in order to improve tag recommendations and to better understand the underlying processes.
As a first attempt in this direction, we have implemented an interplay between individual micro-level (e.g., categorizing resources or temporal dynamics) and collective macro-level (e.g., imitating other users’ tags) processes in the form of a novel tag recommender algorithm. The preliminary results for datasets gathered from BibSonomy, CiteULike and Delicious show that our proposed approach can outperform current state-of-the-art algorithms, such as Collaborative Filtering, FolkRank or Pairwise Interaction Tensor Factorization. We conclude that recommender systems can be improved by incorporating related principles of human cognition.
Keywords: personalized tag recommendations, time-dependent recommender systems, human cognition, social tagging systems
Copyright is held by the author.
ACM Classification: H.2.8 Database Management: Database Applications [Data mining]; H.3.3 Information Storage and Retrieval: Information Search and Retrieval [Information filtering]
## 1 Introduction
Social tagging systems enable users to collaboratively assign freely chosen keywords, so-called tags, to resources. These tags can then be used for navigating, searching, organizing and finding content, and serendipitous browsing [12, 14]. Hence, tags have become an essential instrument of Web 2.0, the social Web, assisting users during these activities. While in social tagging systems users can freely choose keywords for their bookmarked resources, they have to create a set of descriptive tags on their own, which can be a demanding task [19].
As a solution, a variety of tag recommender algorithms, such as Collaborative Filtering, FolkRank or Pairwise Interaction Tensor Factorization, have been proposed. Tag recommenders suggest a set of tags for a given user-resource pair based on previously used and assigned tags and aim at helping not only the individual to find appropriate tags but also the collective to consolidate the shared tag vocabulary [19]. Furthermore, Dellschaft & Staab [4] have shown that personalized tag recommenders can increase the indexing quality of resources, making it easier for users to understand the information content of an indexed resource based on its assigned tags.
### Problem Statement
Although current state-of-the-art tag recommender approaches (see Section 2) perform reasonably well in terms of recommender accuracy, most of them are designed in a purely data-driven way. As a result, they are based on either simply counting tag frequencies or computationally expensive calculation steps (e.g., calculating user similarities or factorizing entities). Hence, these approaches typically ignore important insights originating from cognitive research on how people assign words or tags to resources, which is essential for the design of tag recommenders that should attempt to mimic the user’s tagging behavior.
As a prominent example in this respect, the work of Fu [7] discusses an interplay between individual micro-level (e.g., categorizing resources or temporal dynamics) and collective macro-level (e.g., imitating other users’ tags) processes in social tagging systems (see Section 3). Based on that, we state the hypothesis that a theory-driven approach built upon such insights can not only improve recommender accuracy in general but can also help to better understand the underlying cognitive processes.
### Contributions
This thesis aims to develop a novel tag recommender approach, which models the cognitive processes that play a role when people assign tags to resources. At the current stage, our contributions are as follows:
* We propose a novel theory-driven approach for recommending tags, which models cognitive processes in social tagging in order to mimic the way humans assign tags to resources.
* We conduct an extensive evaluation using dataset samples gathered from three real-world folksonomies (BibSonomy, CiteULike and Delicious) to show the effectiveness of our theory-driven approach.
* We show that our approach can outperform several state-of-the-art tag recommender algorithms, such as FolkRank or Pairwise Interaction Tensor Factorization, in terms of recommender accuracy.
* We introduce an open-source tag recommender benchmarking framework termed TagRec, which contains not only our proposed approach but also standardized baseline algorithms and evaluation methods.
## 2 Related Work
To date, two types of tag recommenders have been established: folksonomy- and content-based approaches [19]. At the moment, in this work we focus on folksonomy-based algorithms. The most basic approach in this respect is the unpersonalized _MostPopular (MP)_ algorithm that recommends for any user and any resource the same set of tags weighted by the frequency in all tag assignments [11]. A personalized extension of MP is the _MostPopular\({}_{u,r}\) (MP\({}_{u,r}\))_ algorithm that suggests the most frequent tags in the tag assignments of the user, _MostPopular\({}_{u}\) (MP\({}_{u}\))_, and the resource, _MostPopular\({}_{r}\) (MP\({}_{r}\))_[11]. Another classic recommender approach is _Collaborative Filtering (CF)_, which has been adapted for tag recommendations by Marinho et al. [20] to form the neighborhood of a user based on the tag assignments in the user profile. According to Gemmell et al. [8], the best results for CF in social tagging systems are obtained with a neighborhood size of 20 users.
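To make the frequency-based baselines concrete, the following minimal Python sketch combines the user's and the resource's most frequent tags in the spirit of MP\({}_{u,r}\). It is illustrative only: the published algorithm mixes the two components with a tuned linear weighting, whereas this sketch simply adds the raw counts, and the tag lists shown are made up.

```python
from collections import Counter

def most_popular_u_r(user_tags, resource_tags, k=10):
    """Sketch of MP_{u,r}: rank tags by how often the user has used them plus
    how often they were assigned to the resource (unweighted mixture here)."""
    score = Counter(user_tags) + Counter(resource_tags)
    return [tag for tag, _ in score.most_common(k)]

# Toy example with invented tags.
print(most_popular_u_r(["python", "web", "python"], ["web", "tutorial"], k=3))
```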
Another well-known tag recommender approach is the _FolkRank (FR)_ algorithm, which is an improvement of the _Adapted PageRank (APR)_ approach [10]. FR and APR adapt the Google PageRank algorithm to rank the nodes within the graph structure of a folksonomy based on their importance in the network [11]. A different popular and recent tag recommender mechanism is _Pairwise Interaction Tensor Factorization (PITF)_ proposed by Rendle & Schmidt-Thieme [23], which is an extension of _Factorization Machines (FM)_[22] and explicitly models the pairwise interactions between users, resources and tags in order to predict future tag assignments. As for algorithms that utilize topic models, to date mainly methods have been proposed based on _Latent Dirichlet Allocation (LDA)_ (e.g., [17]).
With regard to time-dependent tag recommenders, there are two notable approaches. First, the _GIRPTM_ algorithm presented by Zhang et al. [26], which considers the frequency and the temporal usage of a user’s tag assignments. GIRPTM models the temporal tag usage via an exponential distribution. Second, the _BLL+C_ algorithm introduced in [15], which incorporates the activation equation from the cognitive model ACT-R proposed by Anderson et al. [1] and uses a power-law function to mimic the temporal decay of tag reuse. Both approaches imitate tagging by simply taking into account the most popular tags previously assigned to the resource (MP\({}_{r}\)). Kowald et al. [15] have demonstrated that BLL+C outperforms GIRPTM and other well-established algorithms, such as CF, FR and PITF.
## 3 Proposed Approach
Our approach is based on an interplay between micro-level (i.e., the individual level) and macro-level (i.e., the collective level) processes in social tagging systems (e.g., [7]). Among others, micro-level processes (see Sections 3.1 and 3.2) involve categorizing a resource (e.g., modeled as LDA topics) and turning the latent (i.e., non-observable) categorization into manifest (i.e., observable) words or tags [13]. Beyond that, temporal dynamics have turned out to influence the choice of words for a given resource, i.e., recently used tags have a higher probability of being reused than “older” ones [21]. The effect of macro-level structures (see Section 3.3) is mediated by the user’s tendency to imitate other users’ tags [6, 25].
In this section we describe how we modeled and implemented these micro-macro dynamics and the corresponding cognitive processes in the form of a novel tag recommender approach. Needless to say, there are also other types of dynamics and processes (e.g., decision making or associative memory activations [1]) that play a role when people choose tags for resources but at the current stage of this thesis, we focus on the mentioned ones.
<figure><img src="content_image/1805.11878/x1.png"><figcaption>Figure 1: Schematic illustration of our basic 3Layers (3L) approach showingthe connections between the semantic matrix (MS) encoding the latent topicsand the lexical matrix (ML) encoding the tags.</figcaption></figure>
### Categorizing Resources
The basic version of our proposed approach is solely based on human categorization processes [13]. It is termed 3Layers (3L) and is schematically represented in Figure 1. Similar to Kwantes [18], we apply a mechanism from MINERVA2, a computational theory of human categorization [9], to process the network constituted by the input, hidden and output layers shown in Figure 1. First, the latent semantic topics of the resource to be tagged are represented in the input layer in the form of vector \(I\) with \(n\) features (i.e., the latent topics). The latent semantic topics of the resources have been calculated in advance using Latent Dirichlet Allocation (LDA) [17] based on the given tag distributions of the resources with a number of latent topics \(Z\) of 1000 [24].
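As a preprocessing sketch for this step, latent topic vectors for the resources could be obtained from their tag distributions with an off-the-shelf LDA implementation, for example gensim. The snippet below is a toy illustration with made-up tag lists and a tiny topic number so that it runs quickly; the thesis uses \(Z=1000\) topics, and the exact tooling used by the author is not specified here.

```python
from gensim import corpora, models

# Toy data: each resource is represented by the list of tags assigned to it.
resources = [["python", "web", "tutorial"],
             ["lda", "topic-models", "python"],
             ["web", "css", "design"]]

dictionary = corpora.Dictionary(resources)
corpus = [dictionary.doc2bow(tags) for tags in resources]

# Z = 1000 in the thesis; 5 topics here only so the toy example is fast.
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=5, random_state=0)

# Dense topic vector of one resource, i.e., the input vector I of 3Layers.
I = [p for _, p in lda.get_document_topics(corpus[0], minimum_probability=0.0)]
print(I)
```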
Subsequently, \(I\) is forwarded to the hidden layer, which represents the past posts of a user as a semantic matrix, \(M_{S}\) (\(l\) posts \(\cdot\)\(n\) latent topics matrix), and an interconnected lexical matrix, \(M_{L}\) (\(l\) posts \(\cdot\)\(m\) tags matrix). This way, each post of the user is represented by two associated vectors: a semantic vector of latent topics \(S_{i,k}\) stored in \(M_{S}\) and a verbatim vector of tags \(L_{i,j}\) stored in \(M_{L}\). \(I\) acts as a cue to activate each post (\(P_{i}\)) in \(M_{S}\) depending on the cosine-similarity (\(Sim_{i}\)) between both vectors, i.e., \(I\) and \(P_{i}\). To transform the resulting similarity values into activation values (\(A_{i}\)) and to reduce the influence of very low similarity values, \(Sim_{i}\) is raised to the power of 3, i.e., \(A_{i}=Sim_{i}^{3}\) (see [18]).
Finally, these activation values are propagated to \(M_{L}\) to activate tags that are associated with highly activated posts in the semantic matrix \(M_{S}\) (circled numbers 2 and 3 in Figure 1). This step finalizes our basic 3L algorithm and is accomplished via the following equation that yields the value \(o_{j}\) for each of the \(m\) tags on the output layer:
\[o_{j}=\underbrace{\sum_{i=1}^{l}(L_{i,j}\cdot A_{i})}_{3L}\] (1)
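A minimal NumPy sketch of Eq. (1) is given below. The variable names follow the text (\(I\), \(M_S\), \(M_L\)); the small epsilon guarding against zero-norm vectors is our addition, and the snippet does not reproduce the actual TagRec implementation.

```python
import numpy as np

def recommend_3l(I, M_S, M_L):
    """Basic 3Layers (3L) mechanism, Eq. (1).

    I   : (n,)   latent-topic vector of the resource to be tagged
    M_S : (l, n) semantic matrix, one topic vector per past post of the user
    M_L : (l, m) lexical matrix, tag assignments of those posts
    """
    # Cosine similarity between the cue I and every stored post P_i.
    sims = (M_S @ I) / (np.linalg.norm(M_S, axis=1) * np.linalg.norm(I) + 1e-12)
    # Activation values: similarities raised to the power of 3.
    A = sims ** 3
    # Propagate activations to the lexical layer: o_j = sum_i L_ij * A_i.
    return M_L.T @ A

# Toy example: 2 past posts, 3 latent topics, 4 tags.
o = recommend_3l(np.array([0.7, 0.2, 0.1]),
                 np.array([[0.6, 0.3, 0.1], [0.1, 0.1, 0.8]]),
                 np.array([[1, 1, 0, 0], [0, 0, 1, 1]]))
print(o)
```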
### Temporal Dynamics
To refine our approach drawing on temporal dynamics [21], we integrate a time (or recency) component \(T\) to assign higher activation values to tags that have been used more recently. As shown by Anderson & Schooler [2], the temporal decay of the users’ word choices follows a power-law function. Thus, it can be modeled via the activation equation from the cognitive model ACT-R [1] (see also [15]).
We use a simplified version of this equation to calculate the time component \(T\): \(BLL(j)=ln((t_{ref}-t_{j})^{-d})\), where \(t_{ref}\) is the timestamp of the most recent post of the user and \(t_{j}\) is the timestamp of the last occurrence of tag \(j\) in the user’s posts. The exponent \(d\) accounts for the power-law of temporal decay of the user’s tag choices and is typically set to \(.5\)[1]. Summed up, 3LT, our time-dependent extension of 3L, can be realized using the following equation:
\[o^{T}_{j}=\underbrace{\sum_{i=1}^{l}(L_{i,j}\cdot BLL(j)\cdot A_ {i})}_{3LT}\] (2)
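The time weighting can be sketched as below, as a direct transcription of the BLL equation and Eq. (2). The guard that keeps the recency difference strictly positive (so that the negative power and the logarithm stay defined when a tag was used at the reference time itself) is our own assumption, not part of the published equations.

```python
import numpy as np

def bll_weight(t_ref, t_j, d=0.5, eps=1.0):
    """BLL(j) = ln((t_ref - t_j)^(-d)); timestamps in seconds.

    eps keeps the recency difference positive (assumed implementation detail).
    """
    return np.log(max(t_ref - t_j, eps) ** (-d))

def recommend_3lt(o, last_use, t_ref, tags, d=0.5):
    """3LT, Eq. (2): scale the 3L output o_j of every tag j by BLL(j)."""
    bll = np.array([bll_weight(t_ref, last_use.get(tag, 0), d) for tag in tags])
    return bll * o
```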
### Imitating Tags
Research on social tagging [6, 25] has shown that a substantial variance in a user’s tag choices can be explained by her tendency to imitate tags previously assigned by other users to a resource. Furthermore, modeling this imitation process allows recommending new tags, i.e., tags that were not used by the current user before. We realize tag imitation by taking into account the most popular tags in the tag assignments of the resource (i.e., MP\({}_{r}\)[11]). This approach was also chosen by other researchers in the field (e.g., [26, 15]). Taken together, the list of \(k\) recommended tags \(RecTags(u,r)\) according to our proposed 3LT+MP\({}_{r}\) approach for the current user \(u\) and the current resource \(r\) is given by:
\[RecTags(u,r)=\operatorname*{arg\,max}_{j\in\text{Tags}}^{k}( \underbrace{\beta\|o^{T}_{j}\|+(1-\beta)\||Y_{j,r}|\|}_{3LT+MP_{r}})\] (3)
where \(|Y_{j,r}|\) is the number of assignments of tag \(j\) for \(r\) and \(\beta\) can be used to inversely weigh the two components. \(\beta\) was set to .5 to assign equal weights to individual and collective processes. Furthermore, we normalized the values of \(o^{T}_{j}\) and \(|Y_{j,r}|\) in order to combine them [15].
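A minimal sketch of Eq. (3) follows. The min-max normalization used to put the two components on a common scale is our assumption (the text only states that the values are normalized), while \(\beta=0.5\) follows the text.

```python
import numpy as np

def recommend_3lt_mpr(o_T, Y_r, beta=0.5, k=10):
    """3LT+MP_r, Eq. (3): mix the user's time-weighted activations with the
    popularity of tags previously assigned to the target resource."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    score = beta * norm(o_T) + (1 - beta) * norm(Y_r)
    return np.argsort(score)[::-1][:k]  # indices of the top-k recommended tags
```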
Our proposed approach presented in this section is fully implemented within our open-source and Java-based _TagRec_ framework [16], which is freely available via our Github Repository¹. Among others, this framework also contains the baseline algorithms discussed in Section 2 and the evaluation method described in Section 4.2.
## 4 Methodology
In this section we describe the methodology to validate our novel approach, including the datasets and evaluation method used.
### Datasets
For reasons of reproducibility, we focused on three well-known folksonomy datasets, that are freely available for scientific purposes. Hence, we utilized datasets from the social tagging and publication sharing system BibSonomy², the reference management system CiteULike³ and the social bookmarking platform Delicious⁴.
We excluded all automatically-generated tags from the datasets (e.g., _no-tag_ or _bibtex-import_) and decapitalized all tags as suggested in related work (e.g., [23]). Furthermore, to reduce the computational effort, we randomly selected 10% of CiteULike and 3% of Delicious user profiles (see also [8]) but did not apply a \(p\)-core pruning to avoid a biased evaluation (see [5]). The statistics of our used dataset samples are shown in Table 1.
Dataset | |P| | |U| | |R| | |T| | |TAS|
---|---|---|---|---|---
BibSonomy | 400,983 | 5,488 | 346,444 | 103,503 | 1,479,970
CiteULike | 379,068 | 8,322 | 352,343 | 138,091 | 1,751,347
Delicious | 1,416,151 | 15,980 | 931,993 | 180,084 | 4,107,107
Table 1: Properties of the datasets, where |P| is the number of posts, |U| is the number of users, |R| is the number of resources, |T| is the number of tags and |TAS| is the number of tag assignments.
<figure><img src="content_image/1805.11878/x2.png"><figcaption>(a) BibSonomy</figcaption></figure>
### Evaluation Method
To evaluate our tag recommender approach, we followed a standard procedure in recommender research (e.g., [11]) and split the three datasets into training and test sets. In order to preserve the chronological order of the data, for each user we selected her most recent post (in time) and placed it into the test set. The remaining posts were then used to train the algorithms. This procedure is a promising simulation of a real-world environment, since it predicts the user’s future tagging behavior based on her tagging behavior in the past [3]. To ensure that a minimum amount of tagging “history” is available for training, we focused on users with at least 20 posts. We conducted this evaluation by applying a post-filtering method: while recommendations were calculated based on the entire folksonomy graph, accuracy estimates were computed only on the basis of the filtered user profiles. This resulted in 780 users in the case of BibSonomy, 1,757 in the case of CiteULike and 7,469 in the case of Delicious.
In order to finally quantify the recommender quality and to benchmark our approach against other tag recommendation algorithms mentioned in Section 2, we compared the top-\(10\) tags an algorithm suggested for a user-resource pair with the set of relevant tags in the corresponding post in the test set. Based on these comparisons, various evaluation metrics can be calculated, that have originated from information retrieval and recommender systems research (e.g., Precision, Recall, F1-score, MRR, MAP, nDCG) [11, 19]. At the moment, in this work we focus on Precision and Recall for \(k\) = 1 - 10 recommended tags.
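The per-post accuracy computation can be sketched as follows; the averaging over users and any handling of ties are left out, and the toy call at the end uses invented tags purely for illustration.

```python
def precision_recall_at_k(recommended, relevant, k=10):
    """Compare the top-k recommended tags of one test post with the tags the
    user actually assigned in that (held-out, most recent) post."""
    top_k = list(recommended)[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / float(k)
    recall = hits / float(len(relevant)) if relevant else 0.0
    return precision, recall

# Toy illustration only; in the evaluation these values are averaged over all
# test users for k = 1..10 to obtain the curves in Figure 2.
print(precision_recall_at_k(["web", "css", "html", "js"], {"css", "python"}, k=4))
```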
## 5 Preliminary Results
The preliminary results of our evaluation for BibSonomy, CiteULike and Delicious are shown in the three Precision/Recall plots in Figure 2. When looking at the results for the baseline algorithms (see Section 2), it is apparent that, as expected, all personalized algorithms outperform the unpersonalized MP approach. Additionally, the highest accuracy estimates among these baselines are reached by the two time-dependent methods GIRPTM and BLL+C. This emphasizes the importance of taking temporal dynamics into account in tag recommender research. Moreover, BLL+C, which uses a power-law decay function, outperforms GIRPTM, which relies on an exponential decay function.
A comparison of the results for the basic version of our proposed approach 3L, which is solely based on human categorization processes, with 3LT, which further integrates temporal dynamics, shows that, as expected, 3LT provides higher Recall and Precision values than 3L. This further proves that temporal dynamics play an important role in tag prediction tasks. Additionally, the complete version of our approach 3LT+MP\({}_{r}\), which also utilizes macro-level processes in the form of imitating the most popular tags assigned to the target resource by other users (MP\({}_{r}\)), provides better results than 3LT. This is especially true in the case of Delicious, where numerous tags from other users are available for imitation.
Apart from that, and even more important, our proposed approach 3LT+MP\({}_{r}\) also outperforms the current state-of-the-art algorithms mentioned in Section 2 in all three settings. This includes the time-based GIRPTM and BLL+C approaches, that also utilize temporal dynamics and tag imitation but ignore categorization, which further reveals the importance of all three examined types of cognitive processes. In brief, the results of our experiments not only show that an interplay between individual micro-level and collective macro-level processes can be used to develop an effective tag recommender approach but also that such an approach can outperform current state-of-the-art algorithms.
## 6 Conclusion and Future Work
This thesis aims at modeling the cognitive micro- and macro-level processes that play a role when people assign tags to resources. At the current stage of this thesis, this is achieved by the development and evaluation of a novel tag recommender termed 3LT+MP\({}_{r}\). The preliminary results of our evaluation for datasets gathered from BibSonomy, CiteULike and Delicious show that 3LT+MP\({}_{r}\) can not only outperform current state-of-the-art algorithms but also provides higher accuracy estimates than 3L and 3LT that implement only some of the examined processes. These results corroborate our hypothesis from Section 1.1, that our theory-driven tag recommender can not only improve recommender accuracy in general but also can help to better understand the underlying cognitive processes. Additionally, we introduce an open-source tag recommender benchmarking framework termed TagRec (see also [16]), which contains not only our proposed approach but also standardized baseline algorithms and evaluation methods.
At present, one limitation of this thesis is that we only focus on one cognitive model, namely MINERVA2, to account for the individual micro-level processes in social tagging systems. With that regard, in the future we would like to use and evaluate also another cognitive model (e.g., ACT-R) to realize these (and also additional) processes. We also plan to further improve our approach by investigating content data of the resources (e.g., title and description), which could highly expand the set of possible tags that can be recommended. Moreover, we will evaluate our proposed approach in terms of not only recommender accuracy but also computational costs in order to validate the efficiency of our theory-driven approach. We also plan to integrate our approach into a real online social tagging system, since only then it will be possible to examine our tag recommender’s performance with regard to user acceptance. Finally, we would like to adapt the idea of our approach also for other problems in the recommender systems domain, such as the recommendation of resources, topics and users.
**Acknowledgments:** The author would like to thank Elisabeth Lex, Tobias Ley, Paul Seitlinger and Christoph Trattner for valuable discussions related to this thesis. This work is funded by the Know-Center and the EU-funded project Learning Layers (Grant Agreement 318209). The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency (FFG).
## References
* [1] J. R. Anderson, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin. An integrated theory of the mind. _Psychological Review_, 111(4):1036–1050, 2004.
* [2] J. R. Anderson and L. J. Schooler. Reflections of the environment in memory. _Psychological Science_, 2(6):396–408, 1991.
* [3] P. G. Campos, F. Díez, and I. Cantador. Time-aware recommender systems: a comprehensive survey and analysis of existing evaluation protocols. _User Modeling and User-Adapted Interaction_, pages 1–53, 2013.
* [4] K. Dellschaft and S. Staab. Measuring the influence of tag recommenders on the indexing quality in tagging systems. In _Proc. of HT’12_, pages 73–82, New York, NY, USA, 2012. ACM.
* [5] S. Doerfel and R. Jäschke. An analysis of tag-recommender evaluation procedures. In _Proc. of Recsys’13_, pages 343–346, New York, NY, USA, 2013. ACM.
* [6] F. Floeck, J. Putzke, S. Steinfels, K. Fischbach, and D. Schoder. Imitation and quality of tags in social bookmarking systems–collective intelligence leading to folksonomies. In _On collective intelligence_, pages 75–91. Springer, 2011.
* [7] W.-T. Fu. The microstructures of social tagging: a rational model. In _Proc. of CSCW’08_, pages 229–238. ACM, 2008.
* [8] J. Gemmell, T. Schimoler, M. Ramezani, L. Christiansen, and B. Mobasher. Improving folkrank with item-based collaborative filtering. _Recommender Systems & the Social Web_, 2009.
* [9] D. L. Hintzman. Minerva 2: A simulation model of human memory. _Behavior Research Methods, Instruments, & Computers_, 16(2):96–101, 1984.
* [10] A. Hotho, R. Jäschke, C. Schmitz, and G. Stumme. Information retrieval in folksonomies: Search and ranking. In _The semantic web: research and applications_, pages 411–426. Springer, 2006.
* [11] R. Jäschke, L. Marinho, A. Hotho, L. Schmidt-Thieme, and G. Stumme. Tag recommendations in folksonomies. In _Proc. of PKDD’07_, pages 506–514. Springer, 2007.
* [12] R. Jäschke, L. Marinho, A. Hotho, L. Schmidt-Thieme, and G. Stumme. Tag recommendations in social bookmarking systems. _Ai Communications_, 21(4):231–247, 2008.
* [13] W. Kintsch and P. Mangalath. The construction of meaning. _Topics in Cognitive Science_, 3(2):346–370, 2011.
* [14] C. Körner, D. Benz, A. Hotho, M. Strohmaier, and G. Stumme. Stop thinking, start tagging: tag semantics emerge from collaborative verbosity. In _Proc. of WWW’10_, WWW ’10, pages 521–530, New York, NY, USA, 2010. ACM.
* [15] D. Kowald, S. Kopeinik, P. Seitlinger, T. Ley, D. Albert, and C. Trattner. Refining frequency-based tag reuse predictions by means of time and semantic context. In _Mining, Modeling, and Recommending Things’ in Social Media_, pages 55–74. Springer, 2015.
* [16] D. Kowald, E. Lacic, and C. Trattner. Tagrec: Towards a standardized tag recommender benchmarking framework. In _Proc. of HT’14_, New York, NY, USA, 2014. ACM.
* [17] R. Krestel, P. Fankhauser, and W. Nejdl. Latent dirichlet allocation for tag recommendation. In _Proc. of Recsys’09_, pages 61–68. ACM, 2009.
* [18] P. J. Kwantes. Using context to build semantics. _Psychonomic Bulletin & Review_, 12(4):703–710, 2005.
* [19] M. Lipczak. _Hybrid Tag Recommendation in Collaborative Tagging Systems_. PhD thesis, Dalhousie University, 2012.
* [20] L. B. Marinho and L. Schmidt-Thieme. Collaborative tag recommendations. In _Data Analysis, Machine Learning and Applications_, pages 533–540. Springer, 2008.
* [21] S. M. Polyn, K. A. Norman, and M. J. Kahana. A context maintenance and retrieval model of organizational processes in free recall. _Psychological review_, 116(1):129, 2009.
* [22] S. Rendle. Factorization machines. In _Proc. of ICDM’10_, pages 995–1000. IEEE, 2010.
* [23] S. Rendle and L. Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In _Proc. of WSDM’10_, pages 81–90, New York, NY, USA, 2010. ACM.
* [24] P. Seitlinger, D. Kowald, C. Trattner, and T. Ley. Recommending tags with a model of human categorization. In _Proc. of CIKM’13_, pages 2381–2386, New York, NY, USA, 2013. ACM.
* [25] P. Seitlinger and T. Ley. Implicit imitation in social tagging: familiarity and semantic reconstruction. In _Proc. of CHI’12_, pages 1631–1640, New York, NY, USA, 2012. ACM.
* [26] L. Zhang, J. Tang, and M. Zhang. Integrating temporal usage pattern into personalized tag prediction. In _Web Technologies and Applications_, pages 354–365. Springer, 2012.
|
1503.01142 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 21406,
"num_imgs": 4,
"llama3_tokens_count": 6855
} | [
"content_image/1503.01142/x1.png",
"content_image/1503.01142/x3.png",
"content_image/1503.01142/x5.png",
"content_image/1503.01142/x9.png"
] | # Charge Symmetry Violation in the Electromagnetic Form Factors of the Proton
P.E. Shanahan
ARC Centre of Excellence in Particle Physics at the Terascale and CSSM, School of Physical Sciences, University of Adelaide, Adelaide SA 5005, Australia
R. Horsley
School of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3FD, UK
Y. Nakamura
RIKEN Advanced Institute for Computational Science, Kobe, Hyogo 650-0047, Japan
D. Pleiter
JSC, Forschungzentrum Jülich, 52425 Jülich, Germany
Institut für Theoretische Physik, Universität Regensburg, 93040 Regensburg, Germany
P.E.L. Rakow
Theoretical Physics Division, Department of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, UK
G. Schierholz
Deutsches Elektronen-Synchrotron DESY, 22603 Hamburg, Germany
H. Stüben
Regionales Rechenzentrum, Universität Hamburg, 20146 Hamburg, Germany
A.W. Thomas
ARC Centre of Excellence in Particle Physics at the Terascale and CSSM, School of Physical Sciences, University of Adelaide, Adelaide SA 5005, Australia
R.D. Young
ARC Centre of Excellence in Particle Physics at the Terascale and CSSM, School of Physical Sciences, University of Adelaide, Adelaide SA 5005, Australia
J.M. Zanotti
ARC Centre of Excellence in Particle Physics at the Terascale and CSSM, School of Physical Sciences, University of Adelaide, Adelaide SA 5005, Australia
###### Abstract
Experimental tests of QCD through its predictions for the strange-quark content of the proton have been drastically restricted by our lack of knowledge of the violation of charge symmetry (CSV). We find unexpectedly tiny CSV in the proton’s electromagnetic form factors by performing the first extraction of these quantities based on an analysis of lattice QCD data. The resulting values are an order of magnitude smaller than current bounds on proton strangeness from parity violating electron-proton scattering experiments. This result paves the way for a new generation of experimental measurements of the proton’s strange form factors to challenge the predictions of QCD.
Keywords: Charge Symmetry Breaking, Electromagnetic form factor, Chiral symmetry
pacs: 13.40.Gp, 12.39.Fe, 14.20.Dh
[FOOTNOTE:†][ENDFOOTNOTE]
CSSM and QCDSF/UKQCD Collaborations
Charge symmetry is the invariance of the strong interaction under an isospin rotation exchanging \(u\) and \(d\) quarks (i.e., exchanging the proton and neutron). The violation of this symmetry (CSV) is arguably small: the proton-neutron mass difference is one part in a thousand Borsanyi _et al._ (2014) and many nuclear reactions proceed identically if protons and neutrons are interchanged. The effects of this small CSV, however, may be hugely significant. For example, if the proton-neutron mass difference were reversed protons could decay and atoms could not form. Charge symmetry violation also explains the discrepancy between the calculated and measured binding energy differences of mirror nuclei (Okamoto-Nolen-Schiffer anomaly) Nolen and Schiffer (1969); Negele (1971) and may play a role in precision tests of the Standard Model Horsley _et al._ (2011), including those at the LHC Londergan _et al._ (2010).
In the late 1980s it was suggested that one could use measurements of neutral weak current matrix elements by parity-violating electron scattering (PVES) Kaplan and Manohar (1988); Mckeown (1989); Beck (1989) to determine the contribution of strange quark-antiquark pairs to the elastic electroweak form factors of the nucleon. These ‘strange form factors’ have been the focus of intensive experimental and theoretical effort for the past two decades Armstrong and McKeown (2012). At present, the accuracy of theoretical calculations of these quantities Shanahan _et al._ (2015); Doi _et al._ (2009); Leinweber _et al._ (2006, 2005) exceeds that of the best experimental values Young _et al._ (2007) by almost an order of magnitude—a remarkable exception in strong-interaction physics. The limiting factor in future state-of-the-art PVES measurements at Mainz Maas _et al._ (2005); Baunack _et al._ (2009) and JLab Aniol _et al._ (17, 18); Acha _et al._ (2007) is theoretical, arising from the assumption that CSV in the proton’s electromagnetic form factors is negligible.
Precisely, CSV form factors \(G_{\textrm{CSV}}\), if not accounted for, mimic the strange-quark contribution \(G^{s}_{E/M}\) in the combination of form factors accessed by experiment: the measured neutral weak current matrix elements \(G^{p,Z}_{E/M}\) may be expressed as
\[G^{p,Z}_{E/M}=\left(1-4\sin^{2}\theta_{W}\right)G^{p,\gamma}_{E/M}-G^{n,\gamma }_{E/M}-G^{s}_{E/M}+G_{\textrm{CSV}},\] (1)
where the weak mixing-angle, \(\theta_{W}\), and the total electromagnetic form factors, \(G^{p/n,\gamma}_{E/M}\), are precisely determined from other experimental studies. With theoretical predictions of the size of \(G_{\textrm{CSV}}\) varying through several orders of magnitude Wagman and Miller (2014); Kubis and Lewis (2006); Miller _et al._ (2006), this uncertainty has halted experimental parity-violating electron scattering programs Acha _et al._ (2007).
In this Letter we report the first determination of CSV in the proton’s electromagnetic form factors based on an analysis of lattice QCD data. In terms of individual \(u\) and \(d\)-quark contributions to the Sachs electric and magnetic form factors of the proton and neutron (conventionally defined without the charge factors), the CSV form factors which we calculate are defined as
\[\delta^{u}_{E/M}=G_{E/M}^{p,u}-G_{E/M}^{n,d},\hskip 14.226378pt\delta^{d}_{E/M }=G_{E/M}^{p,d}-G_{E/M}^{n,u},\] (2)
where we explicitly calculate \(G_{E/M}^{p/n,u/d}\) and perform the subtractions indicated. The combination relevant to experimental determinations of nucleon strangeness using Eq. (1) is
\[G_{\textrm{CSV}}=\left(\frac{2}{3}\delta^{d}_{E/M}-\frac{1}{3}\delta^{u}_{E/M} \right).\] (3)
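To make the bookkeeping of Eqs. (1)-(3) explicit, the following minimal Python sketch (illustrative only: the numerical inputs are placeholders rather than lattice results, and the quoted \(\sin^{2}\theta_{W}\) is merely an indicative value) shows how the quark-level differences of Eq. (2) combine into \(G_{\textrm{CSV}}\) and how Eq. (1) is rearranged to isolate \(G^{s}_{E/M}\):

```python
def delta_q(G_p, G_n):
    """CSV difference of Eq. (2), e.g. delta_u = G^{p,u} - G^{n,d}."""
    return G_p - G_n

def g_csv(delta_u, delta_d):
    """Combination of CSV form factors entering Eq. (1), cf. Eq. (3)."""
    return (2.0 / 3.0) * delta_d - (1.0 / 3.0) * delta_u

def strange_ff(G_pZ, G_p_gamma, G_n_gamma, G_csv, sin2_thetaW=0.2312):
    """G^s obtained by rearranging Eq. (1); sin^2(theta_W) is indicative only."""
    return (1.0 - 4.0 * sin2_thetaW) * G_p_gamma - G_n_gamma - G_pZ + G_csv

# toy numbers only, not lattice results: CSV at the permille level
print(g_csv(delta_q(1.000, 0.999), delta_q(0.500, 0.5005)))
```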
The lattice results used here are an extension of those reported in Refs. Shanahan _et al._ (23, 24); we include two independent sets of \(2+1\)-flavor simulations at different values of the finite lattice spacing \(a\). Any discretization artifacts should appear at \(\mathcal{O}(a^{2})\). Each set consists of results for the individual connected quark contributions to the electromagnetic form factors of the entire outer-ring baryon octet at a range of pion masses down to 220 MeV and at 6 (set I) or 7 (set II) fixed values of the momentum transfer \(Q^{2}\) up to 1.4 GeV\({}^{2}\). These values of \(Q^{2}\) are relevant to experimental studies of the strange nucleon form factors. The lattice volumes are \(L^{3}\times T=32^{3}\times 64\) and \(48^{3}\times 96\), and the lattice spacings are \(a^{2}=0.0055(3)\) fm\({}^{2}\) and \(0.0038(2)\) fm\({}^{2}\) (set using various singlet quantities Horsley _et al._ (2013); Bietenholz _et al._ (2011)) for the two sets respectively.
Our extraction of the CSV form factors from the lattice simulations is based on the extrapolation of those results to infinite volume and to the physical pseudoscalar masses using a formalism based on connected chiral perturbation theory Leinweber (2004); Tiburzi (2009). The extrapolation procedure is detailed in Refs. Shanahan _et al._ (23, 24). The small finite-volume corrections are model-independent and the chiral extrapolation is demonstrated to be under control—the fit includes lattice data at low meson masses within the convergence regime of the effective theory, and it reproduces the experimental form factors at the physical masses Shanahan _et al._ (23, 24). To determine the CSV terms we must extend that work to incorporate the breaking of the flavor-SU(2) symmetry, i.e., to allow for unequal light quark masses, \(m_{u}\neq m_{d}\). This is a simple extension, and is performed precisely as in previous work where the same procedure was used to evaluate the mass splittings among members of baryon isospin multiplets Shanahan _et al._ (29), the CSV sigma terms Shanahan _et al._ (2012), and the CSV parton distribution moments Shanahan _et al._ (31) from 2+1-flavor lattice simulation results. In brief, the low-energy parameters which appear in the SU(2)-breaking terms in the chiral extrapolation expressions for the CSV form factors also appear in the isospin-averaged expressions. These parameters are thus fixed by the fits to the \(N_{f}=2+1\) lattice QCD simulations on the baryon octet which are presented in Refs. Shanahan _et al._ (23, 24).
In principle, the CSV form factors on an infinite volume and at the physical pseudoscalar masses may thus, given the extrapolations of Refs. Shanahan _et al._ (23, 24), be evaluated simply by performing the subtractions shown in Eq. (2). This procedure, however, suffers from a significant systematic effect resulting from the omission of quark-line disconnected contributions in the simulations. To account for this omission we use the chiral extrapolation expressions to model the disconnected pieces of the loop integral expressions. This amounts to the replacement of the ‘connected’ extrapolation coefficients of Refs. Shanahan _et al._ (23, 24) by the ‘full’ expressions, where the free parameters remain as fixed by the connected fits. The resulting expressions for the CSV electric and magnetic form factors (including disconnected quark-line contributions) as a function of meson masses can be written as
\[\begin{aligned}\delta^{u}_{M}={}&\frac{1}{6}\left(2c^{M}_{1}-3c^{M}_{10}-3c^{M}_{12}-4c^{M}_{2}-2c^{M}_{5}-5c^{M}_{6}-54c^{M}_{7}+3c^{M}_{9}\right)\mathcal{B}(m_{d}-m_{u})\\ &+\frac{M_{N}}{16\pi^{3}f_{\pi}^{2}}\frac{1}{9}\left[\mathcal{C}^{2}\left(I_{D}^{M}(m_{K^{0}})-I_{D}^{M}(m_{K^{\pm}})\right)-12\left(D^{2}+3F^{2}\right)\left(I_{O}^{M}(m_{K^{0}})-I_{O}^{M}(m_{K^{\pm}})\right)\right],\end{aligned}\] (4)
\[\begin{aligned}\delta^{d}_{M}={}&\frac{1}{6}\left(2c^{M}_{1}+2c^{M}_{10}-4c^{M}_{11}+2c^{M}_{12}-4c^{M}_{2}+4c^{M}_{5}+c^{M}_{6}+54c^{M}_{7}-c^{M}_{9}\right)\mathcal{B}(m_{d}-m_{u})\\ &-\frac{M_{N}}{16\pi^{3}f_{\pi}^{2}}\frac{2}{9}\left[\mathcal{C}^{2}\left(I_{D}^{M}(m_{K^{0}})-I_{D}^{M}(m_{K^{\pm}})\right)-9\left(D-F\right)^{2}\left(I_{O}^{M}(m_{K^{0}})-I_{O}^{M}(m_{K^{\pm}})\right)\right],\end{aligned}\] (5)
\[\begin{aligned}\delta^{u}_{E}={}&\frac{1}{6}\left(2c^{E}_{1}-3c^{E}_{10}-3c^{E}_{12}-4c^{E}_{2}-2c^{E}_{5}-5c^{E}_{6}-54c^{E}_{7}+3c^{E}_{9}\right)Q^{2}\mathcal{B}(m_{d}-m_{u})\\ &-\frac{1}{16\pi^{3}f_{\pi}^{2}}\frac{1}{9}\left[\mathcal{C}^{2}\left(I_{D}^{E}(m_{K^{0}})-I_{D}^{E}(m_{K^{\pm}})\right)+6\left(D^{2}+3F^{2}\right)\left(I_{O}^{E}(m_{K^{0}})-I_{O}^{E}(m_{K^{\pm}})\right)\right.\\ &\left.+18\left(I_{T}^{E}(m_{K^{0}})-I_{T}^{E}(m_{K^{\pm}})\right)\right],\end{aligned}\] (6)
\[\begin{aligned}\delta^{d}_{E}={}&\frac{1}{6}\left(2c^{E}_{1}+2c^{E}_{10}-4c^{E}_{11}+2c^{E}_{12}-4c^{E}_{2}+4c^{E}_{5}+c^{E}_{6}+54c^{E}_{7}-c^{E}_{9}\right)Q^{2}\mathcal{B}(m_{d}-m_{u})\\ &+\frac{1}{16\pi^{3}f_{\pi}^{2}}\frac{1}{9}\left[2\mathcal{C}^{2}\left(I_{D}^{E}(m_{K^{0}})-I_{D}^{E}(m_{K^{\pm}})\right)+9\left(D-F\right)^{2}\left(I_{O}^{E}(m_{K^{0}})-I_{O}^{E}(m_{K^{\pm}})\right)\right.\\ &\left.+9\left(I_{T}^{E}(m_{K^{0}})-I_{T}^{E}(m_{K^{\pm}})\right)\right],\end{aligned}\] (7)
where all symbols, including the low-energy constants \(c_{i}^{E/M}\), are defined in Refs. Shanahan _et al._ (23, 24). The leading-order loop integral expressions include meson loops with octet-baryon (\(I_{O}\)) or decuplet-baryon (\(I_{D}\)) intermediate states, as well as tadpole loops (\(I_{T}\)). The Gell-Mann-Oakes-Renner relation suggests the definition
\[\mathcal{B}(m_{d}-m_{u})=\frac{(1-R)}{(1+R)}m_{\pi}^{2},\] (8)
where \(R\) denotes the light-quark mass ratio \(R=m_{u}/m_{d}\). We take \(R=0.553(43)\), determined by a fit to meson decay rates Leutwyler (1996). The final results are all consistent within uncertainties if we instead take the FLAG value \(R=0.46(2)(2)\) Aoki _et al._ (2014).
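As a quick numerical illustration of Eq. (8) (a sketch; we assume the physical pion mass \(m_{\pi}\simeq 0.138\;\!\mathrm{GeV}\) and simply evaluate the expression at the two quoted central values of \(R\)):

```python
def B_md_minus_mu(R, m_pi=0.138):
    """Eq. (8): B(m_d - m_u) in GeV^2, with m_pi given in GeV."""
    return (1.0 - R) / (1.0 + R) * m_pi**2

for R in (0.553, 0.46):   # Leutwyler (1996) and FLAG central values
    print(R, B_md_minus_mu(R))
```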
All of the low-energy parameters, other than \(c_{1}^{E/M}\), \(c_{2}^{E/M}\) and \(c_{7}^{E/M}\), are determined from the chiral fits to the connected contribution to the isospin-averaged electromagnetic form factors which are described in Refs. Shanahan _et al._ (23, 24). While this procedure systematically includes some of the disconnected contribution to the CSV form factors, other disconnected terms—those which are linear in \(\mathcal{B}(m_{d}-m_{u})\) and not generated by chiral logarithms from meson loops—cannot be determined in this way. Precisely, the terms which are generated by the Lagrangian pieces with coefficients \(c_{1}^{E/M}\), \(c_{2}^{E/M}\) and \(c_{7}^{E/M}\) cannot be determined from the present lattice simulations. Physically, these terms arise from the diagrams illustrated and described in Fig. 1. These contributions are anticipated to be small based on the success of valence quark models in reproducing form factor data. This is also supported by the results of direct lattice QCD calculations of \(G_{E/M}\) which find that the disconnected contributions at small finite momentum transfer are consistent with zero and are bounded at the 1% level Abdel-Rehim _et al._ (2014). The terms corresponding to the low-energy parameters \(c_{1}^{E/M}\), \(c_{2}^{E/M}\) and \(c_{7}^{E/M}\) are only part of that small disconnected contribution.
<figure><img src="content_image/1503.01142/x1.png"><figcaption>(a) b</figcaption></figure>
<figure><img src="content_image/1503.01142/x3.png"><figcaption>(a) b</figcaption></figure>
We choose to set contributions from the unknown \(c_{1}^{E/M}\), \(c_{2}^{E/M}\) and \(c_{7}^{E/M}\) terms to \(0\), with an uncertainty taken to be twice the magnitude of the corresponding contributions from meson loop diagrams, evaluated with a dipole cutoff regulator with mass scale \(\Lambda=0.8(2)\;\!\mathrm{GeV}\). We suggest that this error estimate is extremely conservative. The use of this method to evaluate the loops is justified by the well-established and successful use of this model to relate full and partially-quenched lattice QCD calculations Wang _et al._ (2014). The loop diagram used to estimate the \(c^{E/M}_{1,2}\) terms is represented in Fig. 2(b), where only the ‘loop spectator’ quark mass (i.e., the valence-quark part of the meson mass) is changed. For the \(c^{E/M}_{7}\) term, represented in Fig. 2(a), only the sea-quark part of the loop meson mass is considered. These contributions are added in quadrature. The magnitude of this contribution to the total uncertainty varies with \(Q^{2}\); it is largest at our lowest \(Q^{2}\)-values where it contributes 20–60% of the quoted uncertainty on the final result (depending which of \(\delta_{E/M}^{u/d}\) one is considering), while at larger values of \(Q^{2}\), consistent with the suppression of meson loops at high-\(Q^{2}\), it contributes 1–15%.
The results of this analysis for the individual \(u\) and \(d\)-quark contributions to the CSV electric and magnetic form factors of the proton are shown in Fig. 3. The close agreement of the two sets of simulations (at different lattice spacings \(a\) and on different simulation volumes) confirms that the finite-volume corrections and chiral extrapolations are under control and that any discretization effects resulting from the finite lattice spacing are small. The size of the CSV form factor combination, \(G_{\textrm{CSV}}\), relevant to PVES experiments probing the strange electric and magnetic form factors of the nucleon by Eq. (1), is shown in Fig. 4. This result gives quantitative confirmation that CSV effects in the electromagnetic form factors, for momentum transfers up to approximately \(1.4\;\!\mathrm{GeV}^{2}\), are at the level of 0.2% of the relevant proton form factors—an order of magnitude smaller than the precision of existing PVES studies. To put this in perspective, the level of CSV shown in Fig. 4 is equivalent to a CSV difference in charge radii of less than one attometer. These precise results open the door for a new generation of experiments to probe the structure of the quantum vacuum through the strange quark form factors.
<figure><img src="content_image/1503.01142/x5.png"><figcaption>(a) l</figcaption></figure>
<figure><img src="content_image/1503.01142/x9.png"><figcaption></figcaption></figure>
## Acknowledgements
The numerical configuration generation was performed using the BQCD lattice QCD program Nakamura and Stüben (2010) on the IBM BlueGeneQ using DIRAC 2 resources (EPCC, Edinburgh, UK), the BlueGene P and Q at NIC (Jülich, Germany) and the Cray XC30 at HLRN (Berlin-Hannover, Germany). The BlueGene codes were optimised using Bagel Boyle (2009). The Chroma software library Edwards and Joo (2005) was used in the data analysis. This work was supported by the EU grants 283286 (HadronPhysics3), 227431 (Hadron Physics2) and by the University of Adelaide and the Australian Research Council through the ARC Centre of Excellence for Particle Physics at the Terascale and grants FL0992247 (AWT), FT120100821 (RDY), DP140103067 (RDY and JMZ) and FT100100005 (JMZ).
## References
* Borsanyi _et al._ (2014)S. Borsanyi, S. Dürr, Z. Fodor, C. Hoelbling, S. Katz, _et al._ (BMW Collaboration), (2014), arXiv:1406.4088 [hep-lat] .
* Nolen and Schiffer (1969)J. A. Nolen and J. P. Schiffer, Ann. Rev. Nucl. Part. Sci. **19**, 471 (1969).
* Negele (1971)J. W. Negele, Nucl. Phys. **A165**, 305 (1971).
* Horsley _et al._ (2011)R. Horsley, Y. Nakamura, D. Pleiter, P. Rakow, G. Schierholz, _et al._, Phys.Rev. **D83**, 051501 (2011), arXiv:1012.0215 [hep-lat] .
* Londergan _et al._ (2010)J. T. Londergan, J. C. Peng, and A. W. Thomas, Rev. Mod. Phys. **82**, 2009 (2010), arXiv:0907.2352 [hep-ph] .
* Kaplan and Manohar (1988)D. B. Kaplan and A. Manohar, Nucl. Phys. **B310**, 527 (1988).
* Mckeown (1989)R. D. Mckeown, Phys. Lett. **B219**, 140 (1989).
* Beck (1989)D. H. Beck, Phys. Rev. **D39**, 3248 (1989).
* Armstrong and McKeown (2012)D. S. Armstrong and R. D. McKeown, Ann. Rev. Nucl. Part. Sci. **62**, 337 (2012), arXiv:1207.5238 [nucl-ex] .
* Shanahan _et al._ (2015)P. Shanahan, R. Horsley, Y. Nakamura, D. Pleiter, P. Rakow, _et al._, Phys. Rev. Lett. **114**, 091802 (2015), arXiv:1403.6537 [hep-lat] .
* Doi _et al._ (2009)T. Doi _et al._, Phys. Rev. **D80**, 094503 (2009), arXiv:0903.3232 [hep-ph] .
* Leinweber _et al._ (2006)D. B. Leinweber _et al._, Phys. Rev. Lett. **97**, 022001 (2006), arXiv:hep-lat/0601025 .
* Leinweber _et al._ (2005)D. B. Leinweber _et al._, Phys. Rev. Lett. **94**, 212001 (2005), arXiv:hep-lat/0406002 .
* Young _et al._ (2007)R. D. Young, R. D. Carlini, A. W. Thomas, and J. Roche, Phys. Rev. Lett. **99**, 122003 (2007), arXiv:0704.2618 [hep-ph] .
* Maas _et al._ (2005)F. E. Maas _et al._, Phys. Rev. Lett. **94**, 152001 (2005), arXiv:nucl-ex/0412030 .
* Baunack _et al._ (2009)S. Baunack, K. Aulenbacher, D. Balaguer Rios, L. Capozza, J. Diefenbach, _et al._, Phys. Rev. Lett. **102**, 151803 (2009), arXiv:0903.2733 [nucl-ex] .
* Aniol _et al._ (2006a)K. A. Aniol _et al._ (HAPPEX Collaboration), Phys. Lett. **B635**, 275 (2006a), arXiv:nucl-ex/0506011 .
* Aniol _et al._ (2006b)K. A. Aniol _et al._ (HAPPEX Collaboration), Phys. Rev. Lett. **96**, 022003 (2006b), arXiv:nucl-ex/0506010 .
* Acha _et al._ (2007)A. Acha _et al._ (HAPPEX Collaboration), Phys. Rev. Lett. **98**, 032301 (2007), arXiv:nucl-ex/0609002 .
* Wagman and Miller (2014)M. Wagman and G. A. Miller, Phys. Rev. **C89**, 065206 (2014), arXiv:1402.7169 [nucl-th] .
* Kubis and Lewis (2006)B. Kubis and R. Lewis, Phys. Rev. **C74**, 015204 (2006), arXiv:nucl-th/0605006 .
* Miller _et al._ (2006)G. A. Miller, A. K. Opper, and E. J. Stephenson, Ann. Rev. Nucl. Part. Sci. **56**, 253 (2006), arXiv:nucl-ex/0602021 .
* Shanahan _et al._ (2014a)P. E. Shanahan _et al._, Phys. Rev. **D90**, 034502 (2014a), arXiv:1403.1965 [hep-lat] .
* Shanahan _et al._ (2014b)P. E. Shanahan _et al._, Phys. Rev. **D89**, 074511 (2014b), arXiv:1401.5862 [hep-lat] .
* Horsley _et al._ (2013)R. Horsley _et al._, PoS **LATTICE2013**, 249 (2013), arXiv:1311.5010 [hep-lat] .
* Bietenholz _et al._ (2011)W. Bietenholz _et al._, Phys. Rev. **D84**, 054509 (2011), arXiv:1102.5300 [hep-lat] .
* Leinweber (2004)D. B. Leinweber, Phys. Rev. **D69**, 014005 (2004), arXiv:hep-lat/0211017 .
* Tiburzi (2009)B. C. Tiburzi, Phys. Rev. **D79**, 077501 (2009), arXiv:0903.0359 [hep-lat] .
* Shanahan _et al._ (2013a)P. E. Shanahan, A. W. Thomas, and R. D. Young, Phys. Lett. **B718**, 1148 (2013a), arXiv:1209.1892 [nucl-th] .
* Shanahan _et al._ (2012)P. E. Shanahan, A. W. Thomas, and R. D. Young, PoS **LATTICE2012**, 165 (2012), arXiv:1301.3231 [hep-lat] .
* Shanahan _et al._ (2013b)P. E. Shanahan, A. W. Thomas, and R. D. Young, Phys. Rev. **D87**, 094515 (2013b), arXiv:1303.4806 [nucl-th] .
* Leutwyler (1996)H. Leutwyler, Phys. Lett. **B378**, 313 (1996), arXiv:hep-ph/9602366 .
* Aoki _et al._ (2014)S. Aoki _et al._, Eur. Phys. J. **C74**, 2890 (2014), arXiv:1310.8555 [hep-lat] .
* Abdel-Rehim _et al._ (2014)A. Abdel-Rehim _et al._, Phys. Rev. **D89** (2014), arXiv:1310.6339 [hep-lat] .
* Wang _et al._ (2014)P. Wang, D. B. Leinweber, and A. W. Thomas, Phys. Rev. **D89**, 033008 (2014).
* Nakamura and Stüben (2010)Y. Nakamura and H. Stüben, PoS **LATTICE2010**, 040 (2010), arXiv:1011.0199 [hep-lat] .
* Boyle (2009)P. A. Boyle, Comput. Phys. Commun. **180**, 2739 (2009).
* Edwards and Joo (2005)R. G. Edwards and B. Joo (SciDAC Collaboration, LHPC Collaboration, UKQCD Collaboration), Nucl. Phys. Proc. Suppl. **140**, 832 (2005), arXiv:hep-lat/0409003 .
|
1905.04529 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 65626,
"num_imgs": 12,
"llama3_tokens_count": 22537
} | [
"content_image/1905.04529/x1.png",
"content_image/1905.04529/x2.png",
"content_image/1905.04529/x3.png",
"content_image/1905.04529/x5.png",
"content_image/1905.04529/x7.png",
"content_image/1905.04529/x9.png",
"content_image/1905.04529/x11.png",
"content_image/1905.04529/x13.png",
"content_image/1905.04529/x15.png",
"content_image/1905.04529/x17.png",
"content_image/1905.04529/x19.png",
"content_image/1905.04529/x21.png"
] | # Novel Algorithms based on Majorization Minimization for Nonnegative Matrix Factorization
R. Jyothi, P. Babu and R. Bahl ¹
[FOOTNOTE:1][ENDFOOTNOTE]
###### Abstract
Matrix decomposition is ubiquitous and has applications in various fields like speech processing, data mining and image processing, to name a few. Among matrix decomposition techniques, nonnegative matrix factorization is used to decompose a nonnegative matrix into a product of two nonnegative matrices, which gives a meaningful interpretation of the data; this gives nonnegative matrix factorization an edge over other decomposition techniques. In this paper, we propose two novel iterative algorithms based on Majorization Minimization (MM), in which we formulate a novel upper bound and minimize it to get a closed form solution at every iteration. Since the algorithms are based on MM, the proposed methods are guaranteed to be monotonic. The proposed algorithms differ in how the two nonnegative matrices are updated. The first algorithm, **I**terative **No**nnegative **M**atrix Factorization (**INOM**), sequentially updates the two nonnegative matrices, while the second algorithm, **Par**allel **I**terative **No**nnegative **M**atrix Factorization (**PARINOM**), updates them in parallel. We also prove that the proposed algorithms converge to a stationary point of the problem. Simulations were conducted to compare the proposed methods with the existing ones, and the proposed algorithms were found to perform better in terms of computational speed and convergence.
_Index terms—_ Nonnegative matrix factorization, Majorization Minimization, Big Data, Parallel, Multiplicative Update
## I Introduction
Recent advancements in sensor technology and communications have created huge collections of data, resulting in big data matrices which, when appropriately analyzed, can give useful insights about the data. Researchers often reduce the dimension of the data matrix for easier visualization and to lessen the computational load [1]. There are various tools to reduce the dimension of the data, among them _Singular Value Decomposition (SVD)_ [2], _Principal Component Analysis (PCA)_ [3] and _Factor Analysis (FA)_ [4]. However, the major shortcoming of these techniques is that when the input data matrix is nonnegative, as in the case of speech/image processing, the reduced data matrix can have negative entries, which makes the interpretation difficult. _Nonnegative matrix factorization (NMF)_ is another dimension reduction technique which, as the name suggests, decomposes the matrix such that the reduced dimension matrices are always nonnegative. NMF has found applications in music processing [5], data mining [6], image processing [7] and neurobiology [8], to name a few. Mathematically, the NMF problem can be written as:
\[\begin{array}[]{ll}\textrm{NMF:}\quad\underset{{\mathbf{W}},{\mathbf{H}}\geq 0 }{\rm minimize}\:\{f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}\right) \overset{\Delta}{=}\:\|{\mathbf{V}}-{\mathbf{W}}{\mathbf{H}}\|_{F}^{2}\}\end{array}\] (1)
where \({\|{\mathbf{X}}\|_{F}}\) denotes the Frobenius norm of matrix \({{\mathbf{X}}}\), \({{\mathbf{V}}}\) is a data matrix made of real entries and of size \({n\times m}\), and \({{\mathbf{W}}}\) and \({{\mathbf{H}}}\) are decomposed matrices of size \(n\times r\) and \(r\times m\), respectively. Here, \(r\) is chosen to be smaller than \(m\) and \(n\). The constraint (\({\mathbf{W}},{\mathbf{H}}\geq 0\)) means that the \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices must not have any negative element. The \(i^{th}\) column of the \({\mathbf{V}}\) matrix, represented as \({\mathbf{v}}_{i}\), can be expressed as a nonnegative weighted linear combination of the columns of the \({\mathbf{W}}\) matrix:
\[{\mathbf{v}}_{i}=\sum_{j=1}^{r}h_{ji}{\mathbf{w}}_{j}\] (2)
where \({\mathbf{w}}_{j}\) is the \({j^{th}}\) column of the \({\mathbf{W}}\) matrix and \(h_{ji}\) is the \((j,i)^{th}\) element of the \({\mathbf{H}}\) matrix. Geometrically, this means that the columns of the \({\mathbf{W}}\) matrix generate a _simplicial convex cone_ ([9], [10]) that contains the data vector \({\mathbf{v}}_{i}\), where a _simplicial convex cone_ \(C\) is defined as:
\[C=\left\{\sum_{i=1}^{r}\theta_{i}{\mathbf{u}}_{i}:\theta_{i}\geq 0\right\}\] (3)
where \(\theta_{i}\in\mathbf{R^{+}}\) and \({\mathbf{u}}_{i}\) is a real vector of arbitrary dimension. From (2), it can be seen that there exist many cones (whose generators are the columns of \({\mathbf{W}}\)) containing the data \({\mathbf{v}}_{i}\). In order to have a unique cone, one must normalize the columns of the \({\mathbf{W}}\) and \({\mathbf{V}}\) matrices [10], and hence in this paper we do the same. The NMF problem in (1) is not jointly convex in \({{\mathbf{W}}}\) and \({{\mathbf{H}}}\). However, it is separately convex in either \({{\mathbf{W}}}\) or \({{\mathbf{H}}}\). Hence, alternatingly minimizing over \({{\mathbf{H}}}\) and \({{\mathbf{W}}}\) is a favorable direction to solve the NMF problem in (1). This approach is traditionally named _Alternating minimization_: constrained minimization is performed with respect to one matrix while keeping the other matrix fixed, and vice-versa. The pseudo code of Alternating minimization is shown below:
\begin{tabular}{l}
\hline \hline
**Table 1: Pseudocode of Alternating Minimization** ([11], [12] and [13]) \\
\hline \hline
**Input**: Data sample \({\mathbf{V}}\), with each column normalized, _r_, _m_ and _n_. \\
**Initialize**: Set _k_ = 0. Initialize \({{\mathbf{W}}^{0}}\) and \({{\mathbf{H}}^{0}}\). Each column of \({{\mathbf{W}}^{0}}\) normalized. \\
**Repeat**: \\
1) Fix \({{\mathbf{W}}}^{k}\) and find \({{\mathbf{H}}^{k+1}}\)= \(\underset{{\mathbf{H}}\geq 0}{\rm arg\,minimize}\:f_{{}_{\rm NMF}}\left({ \mathbf{W}}^{k},{\mathbf{H}}\right)\) \\
2) Fix \({{\mathbf{H}}}^{k+1}\) and find \({{\mathbf{W}}^{k+1}}\) = \(\underset{{\mathbf{W}}\geq 0}{\rm arg\,minimize}\:f_{{}_{\rm NMF}}\left({ \mathbf{W}},{\mathbf{H}}^{k+1}\right)\) \\
3) Normalize the columns of \({{\mathbf{W}}}^{k+1}\) \\
\(k\gets k+1\) \\
**until convergence** \\ \hline \hline
\end{tabular}
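For concreteness, a minimal NumPy sketch of the alternating-minimization skeleton of Table 1 is given below; the two subproblem solvers are left abstract (they could be any of the nonnegative least-squares solvers discussed next), and the variable names and the small constant guarding the normalization are ours:

```python
import numpy as np

def alternating_minimization(V, r, solve_H, solve_W, iters=100, seed=0):
    """Skeleton of Table 1: alternately solve the two convex subproblems.

    solve_H(V, W, H) and solve_W(V, W, H) are placeholders for any
    nonnegative least-squares solver (MU, HALS, projected gradient, ...).
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12   # normalize columns of W
    for _ in range(iters):
        H = solve_H(V, W, H)                                # step 1: W fixed
        W = solve_W(V, W, H)                                # step 2: H fixed
        W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12  # step 3
    return W, H
```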
Many algorithms have been proposed to solve the NMF problem; some of them are discussed as follows. The baseline algorithm used to solve the above problem is the _Multiplicative Update (MU)_ algorithm proposed by Lee et al. [14]. This algorithm, which is iterative in nature, is based on _Block Majorization Minimization (MM)_ (which we will introduce shortly in section II). The final update equations of both matrices involve only multiplication operations between the matrices and hence are simple to implement; however, the algorithm was reported to have slow convergence [15]. Gradient descent algorithms [16], [17] are also employed to solve the NMF problem in (1), wherein the \({{\mathbf{W}}}\) and \({{\mathbf{H}}}\) matrices are updated by taking a step in the direction opposite to the gradient of the function \(f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}\right)\). The update equations of a gradient descent algorithm for the NMF problem are as follows:
\[\begin{array}[]{ll}{\mathbf{W}}^{k+1}={\mathbf{W}}^{k}-\alpha\dfrac{\partial f _{{}_{\rm NMF}}\left({\mathbf{W}}^{k},{\mathbf{H}}^{k}\right)}{\partial{ \mathbf{W}}^{k}}={\mathbf{W}}^{k}-\alpha\left(\left({\mathbf{W}}^{k}{\mathbf{H }}^{k}{{\mathbf{H}}^{k}}^{T}\right)-{\mathbf{V}}{{\mathbf{H}}^{k}}^{T}\right) \end{array}\] (4)
\[\begin{array}[]{ll}{\mathbf{H}}^{k+1}={\mathbf{H}}^{k}-\beta\dfrac{\partial f_ {{}_{\rm NMF}}\left({\mathbf{W}}^{k+1},{\mathbf{H}}^{k}\right)}{\partial{ \mathbf{H}}^{k}}={\mathbf{H}}^{k}-\beta\left(\left(\left({{\mathbf{W}}^{k+1}} \right)^{T}{\mathbf{W}}^{k+1}{\mathbf{H}}^{k}\right)-{\left({\mathbf{W}}^{k+1} \right)}^{T}{\mathbf{V}}\right)\\ \end{array}\] (5)
where \(\alpha\) and \(\beta\) are the step sizes. The MU algorithm can also be viewed as a gradient descent algorithm [18] with iteration dependent or adaptive step sizes \({\alpha^{k}}\) and \({\beta^{k}}\), defined for updating the matrices \({{\mathbf{W}}}\) and \({{\mathbf{H}}}\) respectively as:
\[\begin{array}[]{ll}\alpha^{k}={\mathbf{W}}^{k}\oslash{({\mathbf{W}}^{k}{ \mathbf{H}}^{k}{{\mathbf{H}}^{k}}^{T})}\\ \beta^{k}={\mathbf{H}}^{k}\oslash{\left({\left({\mathbf{W}}^{k+1}\right)}^{T}{ \mathbf{W}}^{k+1}{\mathbf{H}}^{k}\right)}\end{array}\] (6)
where \({\oslash}\) denotes element wise division. Hence, the update equation becomes:
\[\begin{array}[]{ll}{\mathbf{W}}^{k+1}={\mathbf{W}}^{k}\circ\left(\left({ \mathbf{V}}{{\mathbf{H}}^{k}}^{T}\right)\oslash\left({{\mathbf{W}}^{k}}{ \mathbf{H}}^{k}{{\mathbf{H}}^{k}}^{T}\right)\right)\\ {\mathbf{H}}^{k+1}={\mathbf{H}}^{k}\circ\left(\left({{\mathbf{W}}^{k}}^{T}{ \mathbf{V}}\right)\oslash\left({\left({\mathbf{W}}^{k+1}\right)}^{T}\left({ \mathbf{W}}^{k+1}\right){{\mathbf{H}}^{k}}\right)\right)\\ \end{array}\] (7)
where \({\circ}\) denotes element wise multiplication. Note that the above update equations for \({\mathbf{H}}\) and \({\mathbf{W}}\) involve only multiplication operations between the matrices. This kind of algorithm falls under multiplicative update algorithms. If, in contrast, the step sizes are chosen such that the update equations involve only addition operations between the matrices, then the algorithm is called an additive update algorithm [19]. Another group of algorithms uses the fact that steps 1 and 2 of Alternating minimization (refer to the pseudocode of alternating minimization in Table 1) are _Nonnegative Least Squares (NLS)_ problems and solves each subproblem using active set methods ([20], [21], [22]); nevertheless, this approach can be computationally expensive [23]. Instead of alternatingly minimizing over the two blocks - the \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices - one can form \(2r\) blocks by further partitioning the columns of the \({\mathbf{W}}\) matrix and the rows of the \({\mathbf{H}}\) matrix as \({\mathbf{W}}=\)\([{\mathbf{w}}_{1},{\mathbf{w}}_{2}\cdots{\mathbf{w}}_{r}]\), \({\mathbf{H}}\) = \([{\mathbf{h}}_{1},{\mathbf{h}}_{2}\cdots{\mathbf{h}}_{r}]\) and solve the problem in (1) by updating the \({\mathbf{w}}_{j}^{th}\) and \({\mathbf{h}}_{j}^{th}\) blocks, while keeping the other blocks constant. Mathematically, this can be written as:
\[\begin{array}[]{ll}\underset{{\mathbf{w}}_{j},{\mathbf{h}}_{j}\geq 0}{\rm minimize }\:{{\|{\mathbf{V}}^{(j)}-{\mathbf{w}}_{j}{\mathbf{h}}_{j}^{T}\|}_{F}^{2}}, \textrm{for}\,\{j=1,2\cdots r\}\end{array}\] (8)
where \({\mathbf{V}}^{(j)}={\mathbf{V}}-{\mathbf{W}}^{k}{\mathbf{H}}^{k}+{\mathbf{w}}_{j}^{k}{{\mathbf{h}}_{j}^{k}}^{T}\) is the residual with the \(j^{th}\) component excluded. The _Hierarchical Alternating Least Squares (HALS)_ algorithm [24], [25] follows this strategy and finds a closed form solution for the problem in (8) by alternatingly minimizing over \({\mathbf{w}}_{j}\) and \({\mathbf{h}}_{j}\). The advantage of this approach is that computing the solution of the problem in (8) is computationally inexpensive compared to NLS. Fast-HALS is an efficient implementation of the HALS algorithm wherein \({\mathbf{V}}^{(j)}\) is not explicitly computed, and hence it is computationally faster than HALS. Recently, the Alternating Direction Method of Multipliers (ADMM) was also used to solve the NMF problem [26], [27]. Vandaele et al. [28] proposed a greedy randomized adaptive search procedure and simulated annealing to solve the NMF problem.
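As a reference point for the comparisons made in this paper, the classical MU iteration of Eq. (7) can be written in a few lines of NumPy. This is a sketch only: the small constant added to the denominators is a common practical safeguard against division by zero and is not part of the derivation above.

```python
import numpy as np

def mu_update(V, W, H, eps=1e-12):
    """One multiplicative update in the spirit of Eq. (7): W first, then H."""
    W = W * (V @ H.T) / (W @ H @ H.T + eps)
    H = H * (W.T @ V) / (W.T @ W @ H + eps)
    return W, H
```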
In this paper, we present two novel algorithms to solve the NMF problem-a sequential and a parallel algorithm. The major contributions of the paper are as follows:
1. A Block MM based sequential algorithm, **I**terative **No**nnegative **M**atrix Factorization (**INOM**), is proposed to solve the NMF problem. The update equations of this algorithm resemble those of a gradient descent algorithm with an iteration dependent or adaptive step size. Typically, algorithms use a line search to estimate the step size to be taken at every iteration, but our algorithm is developed in such a way that the MM procedure itself gives the step size at every iteration, and hence we do not have to search for the optimal step size.
2. A parallel algorithm based on MM is presented, which parallely updates the \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices. We address this algorithm as **Par**allel **I**terative **No**nnegative **M**atrix Factorization (**PARINOM**).
3. We discuss the convergence of the proposed algorithms and prove that they always converge to the stationary point.
4. Numerical simulations were conducted to compare the proposed algorithms with the existing algorithms and analyze the performance of the algorithms on the application of Blind Source Separation.
The paper is organized as follows. An overview of MM and block MM can be found in section II. In section III, we first propose **INOM** to solve the NMF problem in (1). Next, we propose **PARINOM** and in the same section, we also show that the proposed algorithms converge to stationary point and discuss the computational complexity of the algorithms. At the end of the section, we discuss an acceleration scheme to further accelerate the convergence of **PARINOM** algorithm. In section IV, we compare the algorithms with the existing algorithms via computer simulations and evaluate the performance of our algorithm on the application of Blind Source Separation and conclude the paper in section V.
Throughout the paper, **bold** capital and **bold** small letters are used to denote the matrix and vector, respectively. A scalar is denoted by a small letter. The \({i^{th}}\) entry of vector \({{\mathbf{v}}}\) is denoted by \({v_{i}}\) and \({(i,j)^{th}}\) entry of the matrix \({{\mathbf{V}}}\) is denoted by \({v_{ij}}\). \({\circ}\) and \({\oslash}\) denotes element wise multiplication and division operation respectively.
## II Majorization Minimization (MM): Vanilla MM and Block MM
### _Vanilla MM_
Majorization Minimization (MM) is an iterative procedure which is mostly used to solve a non-convex, non-smooth or even a convex problem more efficiently. In MM, instead of minimizing the difficult optimization problem \(f({\mathbf{x}})\) over the constraint set \(\mathbf{\chi}\) directly, a “surrogate” function which majorizes the problem (at a given value of \({\mathbf{x}}\) = \({\mathbf{x}}^{k}\in\chi\)) is minimized at every iteration. The surrogate function \(g({\mathbf{x}})\) is the global upper bound of the objective function \(f({\mathbf{x}})\) i.e., it satisfies the following properties:
\[g\left({\mathbf{x}}^{k}|{\mathbf{x}}^{k}\right)=f\left({\mathbf{x}}^{k}\right)\] (9)
\[g\left({\mathbf{x}}|{\mathbf{x}}^{k}\right)\geq f\left({\mathbf{x}}\right) \quad\textrm{for any}\,{\mathbf{x}}\in\chi\] (10)
where, \({{\mathbf{x}}^{k}}\) is the value taken by \({\mathbf{x}}\) at the \(k^{th}\) iteration. Hence, the MM algorithm generates a sequence of points \({\{{\mathbf{x}}^{k}\}}\) according to the following rule:
\[{\mathbf{x}}^{k+1}\in\underset{{\mathbf{x}}\in\chi}{\rm arg\:min}\quad g\left( {\mathbf{x}}|{\mathbf{x}}^{k}\right)\] (11)
The MM procedure is depicted in Fig. 1, wherein \(g({\mathbf{x}}|{\mathbf{x}}^{k})\) is the surrogate function which majorizes \(f({\mathbf{x}})\) around \({\mathbf{x}}^{k}\) at the \(k^{th}\) iteration. Pictorially, it can be seen that \(f({\mathbf{x}}^{k+2})<f({\mathbf{x}}^{k+1})<f({\mathbf{x}}^{k})\).
<figure><img src="content_image/1905.04529/x1.png"><figcaption>Figure 1: MM procedure [29]</figcaption></figure>
By using (9), (10) and (11) it can be shown mathematically that the objective function is monotonically decreased at every iteration:
\[f({\mathbf{x}}^{k+1})\leq g\left({\mathbf{x}}^{k+1}|{\mathbf{x}}^{k}\right) \leq g\left({\mathbf{x}}^{k}|{\mathbf{x}}^{k}\right)=f({\mathbf{x}}^{k})\] (12)
The first inequality and the last equality follow from (10) and (9), respectively. The second inequality is by (11). The convergence rate and computational complexity of the algorithm depend on how well one formulates the surrogate function. The convergence rate depends on how well the surrogate function follows the shape of the objective function \(f({\mathbf{x}})\), and to have lower computational complexity, the surrogate function must be easy to minimize. Hence, the novelty of an algorithm based on MM lies in the design of the surrogate function. Moreover, in the case of a multivariate optimization problem, if the surrogate function is separable in the optimization variables, then the minimization problem can be solved in parallel, which gives a computational advantage. There are no set steps to follow to design a surrogate function; however, a few papers give guidelines for designing various surrogate functions [30], [29].
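To make the abstract recipe in (9)-(11) concrete, consider a simple one-dimensional example outside the NMF setting: for \(f(x)=\textrm{log}(1+e^{x})-yx\) one has \(f^{\prime\prime}(x)\leq 1/4\), so the quadratic function \(g(x|x^{k})=f(x^{k})+f^{\prime}(x^{k})(x-x^{k})+\frac{1}{8}(x-x^{k})^{2}\) satisfies (9) and (10), and minimizing it gives the update \(x^{k+1}=x^{k}-4f^{\prime}(x^{k})\). A short illustrative script (ours, not part of the original exposition):

```python
import numpy as np

def mm_logistic(y=0.8, x=5.0, iters=15):
    """MM with a quadratic surrogate for f(x) = log(1+exp(x)) - y*x."""
    f = lambda x: np.log1p(np.exp(x)) - y * x
    for k in range(iters):
        grad = 1.0 / (1.0 + np.exp(-x)) - y   # f'(x)
        x = x - 4.0 * grad                    # minimizer of the surrogate g(.|x)
        # f(x) decreases monotonically at every step, as guaranteed by (12)
    return x, f(x)

print(mm_logistic())   # x converges to x* = log(y/(1-y)) ~ 1.386
```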
### _Block MM_
Suppose that the optimization variable \({{\mathbf{x}}}\) can be split into \({m}\) blocks as \({\mathbf{x}}=({\mathbf{x}}_{1},{\mathbf{x}}_{2},\cdots,{\mathbf{x}}_{m})\), then one could apply Block MM to solve the optimization problem \(f({\mathbf{x}})\). Block MM, an extension of vanilla MM, is a combination of block coordinate descent and vanilla MM wherein the optimization variable is split into several blocks and each block is updated using vanilla MM while keeping the other blocks fixed. Hence, the \({i}^{th}\) block is updated by minimizing the surrogate function \(g_{i}\left({\mathbf{x}}_{i}|{\mathbf{x}}^{k}\right)\) which majorizes \(f({\mathbf{x}})\) on the \({i}^{th}\) block and has to satisfy the following properties:
\[g_{i}\left({\mathbf{x}}_{i}^{k}|{\mathbf{x}}^{k}\right)=f({\mathbf{x}}^{k})\] (13)
\[g_{i}({\mathbf{x}}_{i}|{\mathbf{x}}^{k})\geq f({{\mathbf{x}}_{1}}^{k},\cdots{ \mathbf{x}}_{i},\cdots{{\mathbf{x}}_{m}}^{k})\] (14)
where \({\mathbf{x}}^{k}\) is the value taken by \({\mathbf{x}}\) at the \(k^{th}\) iteration. The \({i}\)th block at \({k+1}\) iteration is updated by solving the following problem:
\[{{\mathbf{x}}_{i}}^{k+1}\in\underset{{\mathbf{x}}_{i}}{\rm arg\:min}\quad g_{i }\left({\mathbf{x}}_{i}|{\mathbf{x}}^{k}\right)\] (15)
Each block in Block MM is usually updated in a cyclic fashion. The Block MM procedure is as shown in Fig. 2.
<figure><img src="content_image/1905.04529/x2.png"><figcaption>Figure 2: Block MM procedure</figcaption></figure>
The surrogate function for block MM must be chosen in such a way that the surrogate function is easy to minimize and must also approximate the objective function well to have faster convergence. Moreover, it is also reported that in some cases the surrogate function in case of block MM can approximate the objective function better than using a single block, leading to a faster convergence rate [29].
## III Algorithms for NMF problem
In this section, we propose two iterative algorithms to solve the NMF problem in (1) based on block MM and vanilla MM. The first algorithm - **INOM** is based on block MM and sequentially updates both the matrices. The second algorithm - **PARINOM** is based on vanilla MM and parallely updates the \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices. We then discuss the convergence of the two algorithms and show that they always converge to the stationary point. At the end of the section, we also show a way to accelerate the convergence of **PARINOM** algorithm, with a little additional computational cost.
### **Inom**_:_**I**_terative_ **No**_nnegative_ **M**_atrix Factorization_
This algorithm is based on Block Majorization Minimization principle wherein \({\mathbf{W}}\) and \({\mathbf{H}}\) are considered as blocks and each block is updated by vanilla MM scheme while keeping the other blocks fixed.
Before we derive the update equation for \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices, we first point out an important observation that is subsequently used to form the surrogate functions \(g_{{}_{{\mathbf{W}}}}\left({\mathbf{W}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{}_{{\mathbf{H}}}}\left({\mathbf{H}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k}\right)\), which is used to majorize the function \(f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}\right)\) on the \({\mathbf{W}}^{th}\) and \({\mathbf{H}}^{th}\) block, respectively. To discuss the same, we re-write \(f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}\right)\) as:
\[\begin{array}[]{ll}f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}\right)=\|{ \mathbf{V}}-{\mathbf{W}}{\mathbf{H}}\|_{F}^{2}=\sum_{j=1}^{m}\|{ \mathbf{v}}_{j}-{\mathbf{W}}{\mathbf{h}}_{j}\|^{2}\end{array}\] (16)
where \({\mathbf{v}}_{j}\) and \({\mathbf{h}}_{j}\) is the \(j^{th}\) column of \({\mathbf{V}}\) and \({\mathbf{H}}\), respectively. From (16), it can be seen that given \({\mathbf{W}}={\mathbf{W}}^{k}\), the \(f_{{}_{\rm NMF}}\left({\mathbf{W}}^{k},{\mathbf{H}}\right)\) is separable on each column of \({\mathbf{H}}\). On taking the partial derivative of the above function with respect to \({{\mathbf{h}}_{j}}\), we get
\[\begin{array}[]{ll}\nabla f_{{}_{\rm NMF}}({\mathbf{h}}_{j}|{\mathbf{W}}^{k})= -2({\mathbf{W}}^{k})^{T}{\mathbf{v}}_{j}+2({\mathbf{W}}^{k})^{T}{\mathbf{W}}^{ k}{\mathbf{h}}_{j}\\ \\ \nabla^{2}f_{{}_{\rm NMF}}({\mathbf{h}}_{j}|{\mathbf{W}}^{k})=2({\mathbf{W}}^{ k})^{T}{\mathbf{W}}^{k}\end{array}\] (17)
Note that the Hessian matrix \(2({\mathbf{W}}^{k})^{T}{\mathbf{W}}^{k}\) consists only of nonnegative entries and is independent of \({\mathbf{h}}_{j}\). We now show that \(f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}^{k}\right)\) is separable on each row of \({\mathbf{W}}\) and the Hessian of \(f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}^{k}\right)\) with respect to \(j^{th}\) row of \({\mathbf{W}}\) is nonnegative and is equal to \(2{\mathbf{H}}^{k}({\mathbf{H}}^{k})^{T}\):
\[\begin{array}[]{ll}f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}^{k}\right)= \|{\mathbf{V}}-{\mathbf{W}}{\mathbf{H}}^{k}\|_{F}^{2}=\sum_{j=1}^ {n}\|{\mathbf{v}}^{\prime}_{j}-{\mathbf{w}}_{j}{\mathbf{H}}^{k}\|_{F}^{2}\end{array}\] (18)
where \({\mathbf{v}}^{\prime}_{j}\) and \({\mathbf{w}}_{j}\) represent the \({j^{th}}\) row of \({\mathbf{V}}\) and \({\mathbf{W}}\), respectively.
\[\begin{array}[]{ll}\nabla f_{{}_{\rm NMF}}({\mathbf{w}}_{j}|{\mathbf{H}}^{k})= -2{\mathbf{v}}^{\prime}_{j}{{{\mathbf{H}}^{k}}}^{T}+2{\mathbf{w}}_{j}{\mathbf{ H}}^{k}{{\mathbf{H}}^{k}}^{T}\\ \\ \nabla^{2}f_{{}_{\rm NMF}}({\mathbf{w}}_{j}|{\mathbf{H}}^{k})=2{\mathbf{H}}^{k }{{\mathbf{H}}^{k}}^{T}\end{array}\] (19)
We smartly use the nonnegative Hessian matrix to design the upper bounds \(g_{{}_{{\mathbf{W}}}}\left({\mathbf{W}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{}_{{\mathbf{H}}}}\left({\mathbf{H}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k}\right)\), based on the following lemmas.
**Lemma III.1**: **Lower Quadratic Bound Principle**_[30], [31]: Given \({\mathbf{x}}={\mathbf{x}}^{k}\), a twice differentiable function \(f({\mathbf{x}})\), a square matrix \(\Lambda\) such that \(\Lambda\succcurlyeq\nabla^{2}f({\mathbf{x}})\), then \(f({\mathbf{x}})\) can be upper bounded as_
\[\begin{array}[]{ll}f({\mathbf{x}})\leq f({\mathbf{x}}^{k})+\nabla f({\mathbf{x }}^{k})^{T}({\mathbf{x}}-{\mathbf{x}}^{k})+\dfrac{1}{2}({\mathbf{x}}-{\mathbf{ x}}^{k})^{T}\Lambda({\mathbf{x}}-{\mathbf{x}}^{k})\end{array}\] (20)
_The upper bound for \(f({\mathbf{x}})\) is quadratic and differentiable in \({\mathbf{x}}\)._
Proof:: Suppose there exists a matrix \(\Lambda\) such that \(\Lambda\succcurlyeq\nabla^{2}f({\mathbf{x}})\); then the second order Taylor expansion of \(f\) around \({\mathbf{x}}^{k}\) gives the following inequality:
\[f({\mathbf{x}})\leq f({\mathbf{x}}^{k})+\nabla f({\mathbf{x}}^{k})^{T}({ \mathbf{x}}-{\mathbf{x}}^{k})+\dfrac{1}{2}({\mathbf{x}}-{\mathbf{x}}^{k})^{T} \Lambda({\mathbf{x}}-{\mathbf{x}}^{k})\]
and equality is achieved at \({\mathbf{x}}={\mathbf{x}}^{k}\).
**Lemma III.2**: _Let \({{\mathbf{A}}}\) of size \(n\times n\) denote a nonnegative, square and symmetric matrix and let \(p\)\(\in\)\(\mathbf{R^{+}}\). We define \(\Lambda\) as a diagonal matrix with its diagonal elements equal to \(p\), the maximum row sum of \({{\mathbf{A}}}\) matrix. Then, \(\Lambda\succcurlyeq{{\mathbf{A}}}\)._
Proof:: The above lemma is based on **Perron-Frobenius Theorem**[32], [33]. According to the theorem, there is a positive real number \(p\), called the Perron root, such that any other eigenvalue \(\lambda\) of \({\mathbf{A}}\) in absolute value is strictly smaller than \(p\) i.e. \(|\lambda|<p\). To calculate \(p\), an important corollary of the theorem is used:
\[\begin{array}[]{ll}p\leq\underset{i}{\rm max}\sum_{j=1}^{n}a_{ij} \end{array}\] (21)
where \(a_{ij}\) is the \((i,j)^{th}\) element of \({\mathbf{A}}\) matrix. Hence, we get the following inequality:
\[\underset{i}{\rm max}\sum_{j=1}^{n}a_{ij}\geq p>\lambda\] (22)
By constructing a diagonal matrix \(\Lambda\) with its diagonal elements equal to the maximum row sum of \({\mathbf{A}}\), and using the eigenvalue decomposition of the symmetric matrix \({\mathbf{A}}\), the inequality \(\Lambda\succcurlyeq{{\mathbf{A}}}\) follows.
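A quick numerical sanity check of Lemma III.2 on a random nonnegative symmetric matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.random((5, 5))
A = B + B.T                          # nonnegative, square, symmetric
p = A.sum(axis=1).max()              # maximum row sum of A
Lam = p * np.eye(5)
# Lam - A should be positive semidefinite, i.e. all eigenvalues >= 0
print(np.linalg.eigvalsh(Lam - A).min() >= -1e-12)   # True
```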
Using lemma III.1 and lemma III.2, we construct the surrogate function \(g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) for the problem in (16) to update the \({\mathbf{H}}^{th}\) block with \({\mathbf{W}}^{k}\) fixed.
\[\begin{array}[]{ll}g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{ \mathbf{W}}^{k}\right)=\sum_{j=1}^{m}g_{{\mathbf{h}}_{j}}\left({{ \mathbf{h}}_{j}}|{{\mathbf{h}}_{j}}^{k},{\mathbf{W}}^{k}\right)= \sum_{j=1}^{m}f({\mathbf{h}}_{j}^{k})+\nabla f({\mathbf{h}}_{j}^{k})^{T}({ \mathbf{h}}_{j}-{\mathbf{h}}_{j}^{k})+\dfrac{1}{2}({\mathbf{h}}_{j}-{\mathbf{h }}_{j}^{k})^{T}\Lambda^{k}({\mathbf{h}}_{j}-{\mathbf{h}}_{j}^{k})\end{array}\] (23)
where \(\Lambda^{k}\) matrix is a diagonal matrix with its diagonal elements as maximum row sum of \({2{({\mathbf{W}}^{k})}^{T}{{\mathbf{W}}^{k}}}\) and \(\nabla f({\mathbf{h}}_{j}^{k})\) is equal to \(-2{({\mathbf{W}}^{k})}^{T}{\mathbf{v}}_{j}+2{({\mathbf{W}}^{k})}^{T}{\mathbf{W }}^{k}{{\mathbf{h}}_{j}}^{k}\). The surrogate function is separable in each column of \({\mathbf{H}}\). Hence at any iteration, given \({\mathbf{W}}={\mathbf{W}}^{k}\) and \({\mathbf{H}}={\mathbf{H}}^{k}\), the surrogate minimization problem is:
\[\begin{array}[]{ll}\underset{{\mathbf{h}}_{j}\geq 0}{\rm minimize}\:g_{{\mathbf{h}}_{j}}\left({\mathbf{h}}_{j}|{\mathbf{h}}_{j}^{k},{\mathbf{W}}^{k}\right),\ \textrm{for}\,\{j=1,2\cdots m\}\end{array}\] (24)
which has a closed form solution given by:
\[\begin{array}[]{ll}{\mathbf{h}}_{j}^{k+1}={\mathbf{h}}_{j}^{k}+\dfrac{1}{\mu^{ k}}\left({2{{\left({\mathbf{W}}^{k}\right)}^{T}}{{\mathbf{v}}_{j}}-{2{{\left({ \mathbf{W}}^{k}\right)}^{T}}{{\mathbf{W}}^{k}}{\mathbf{h}}_{j}^{k}}}\right)\\ \\ {\mathbf{h}}_{j}^{k+1}={\mathbf{h}}_{j}^{k}+\dfrac{1}{\mu^{k}}\left({\left(2{{ \left({\mathbf{W}}^{k}\right)}^{T}}{\mathbf{V}}\right)_{j}}-{\left(2{\left({ \mathbf{W}}^{k}\right)}^{T}{\mathbf{W}}^{k}{\mathbf{H}}^{k}\right)_{j}}\right) \end{array}\] (25)
where \(\mu^{k}\) = maximum row sum of \({2{({\mathbf{W}}^{k})}^{T}{{\mathbf{W}}^{k}}}\). Since the update of \(j^{th}\) column of \({\mathbf{H}}\) does not depend on the update of \(({j+1})^{th}\) column of \({\mathbf{H}}\), we can re-write the above update equation as:
\[\begin{array}[]{ll}{\mathbf{H}}^{k+1}={\mathbf{H}}^{k}+\dfrac{1}{\mu^{k}}\left(2{{\left({\mathbf{W}}^{k}\right)}^{T}}{\mathbf{V}}-2{{\left({\mathbf{W}}^{k}\right)}^{T}}{\mathbf{W}}^{k}{\mathbf{H}}^{k}\right)\end{array}\] (26)
We now construct the surrogate function \(g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)\) using lemma III.1 and lemma III.2 for the problem in (18) to update the \({\mathbf{W}}^{th}\) block with \({\mathbf{H}}^{k+1}\) fixed.
\[\begin{array}[]{ll}g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)=\sum_{j=1}^{n}g_{{\mathbf{w}}_{j}}\left({\mathbf{w}}_{j}|{\mathbf{w}}_{j}^{k},{\mathbf{H}}^{k+1}\right)=\sum_{j=1}^{n}f({\mathbf{w}}_{j}^{k})+({\mathbf{w}}_{j}-{\mathbf{w}}_{j}^{k})\nabla f({\mathbf{w}}_{j}^{k})^{T}+\dfrac{1}{2}({\mathbf{w}}_{j}-{\mathbf{w}}_{j}^{k})\Lambda^{k}({\mathbf{w}}_{j}-{\mathbf{w}}_{j}^{k})^{T}\end{array}\] (27)
where \({\Lambda^{k}}\) matrix is constructed with its diagonal elements as maximum row sum of \({2({\mathbf{H}}^{k+1})({{\mathbf{H}}^{k+1}})^{T}}\) and \(\nabla f({\mathbf{w}}_{j}^{k})=-2{\mathbf{v}}^{\prime}_{j}({{{\mathbf{H}}}^{k+ 1}})^{T}+2{\mathbf{w}}_{j}({\mathbf{H}}^{k+1})({{\mathbf{H}}^{k+1}})^{T}\). The surrogate function is separable in each row of \({\mathbf{W}}\). Hence at any iteration, given \({\mathbf{W}}={\mathbf{W}}^{k}\) and \({\mathbf{H}}={\mathbf{H}}^{k+1}\), the surrogate minimization problem is:
\[\begin{array}[]{ll}\underset{{\mathbf{w}}_{j}\geq 0}{\rm minimize}\:g_{{\mathbf{w}}_{j}}\left({\mathbf{w}}_{j}|{\mathbf{w}}_{j}^{k},{\mathbf{H}}^{k+1}\right),\ \textrm{for}\,\{j=1,2\cdots n\}\end{array}\] (28)
which has a closed form solution given by:
\[\begin{array}[]{ll}{\mathbf{W}}^{k+1}={\mathbf{W}}^{k}+\dfrac{1}{\nu^{k}}\left(2{\mathbf{V}}{{\left({\mathbf{H}}^{k+1}\right)}^{T}}-2{\mathbf{W}}^{k}{\mathbf{H}}^{k+1}{{\left({\mathbf{H}}^{k+1}\right)}^{T}}\right)\end{array}\] (29)
where \(\nu^{k}\) = maximum row sum of \({2{\mathbf{H}}^{k+1}{{\mathbf{H}}^{k+1}}^{T}}\). Note that once \({\mathbf{H}}^{k+1}\) and \({\mathbf{W}}^{k+1}\) are computed using (26) and (29), the negative elements must be projected back to \(\mathbf{R^{+}}\) to satisfy the nonnegativity constraint. Also, observe that the update equations look similar to those of a gradient descent algorithm with “adaptive” step sizes equal to \(1/\mu^{k}\) and \(1/\nu^{k}\). Typically, a line search is implemented to iteratively search for the optimal step size until the objective function decreases [34]. Nevertheless, we do not have to iteratively search for the optimal step sizes; since the proposed algorithm is based on block MM, it is guaranteed that the step sizes \(1/\mu^{k}\) and \(1/\nu^{k}\) ensure the monotonic decrease of the respective cost functions. We now calculate the computational complexity of the **INOM** algorithm. The complexity of computing \(\mu^{k}\) and \(\nu^{k}\) is \(\mathcal{O}(r^{2}n)\) and \(\mathcal{O}(r^{2}m)\), respectively. The complexity of computing \({\mathbf{H}}^{k+1}\) is \(\mathcal{O}(rnm)+\mathcal{O}(r^{2}(n+m))\). The complexity of computing \({\mathbf{W}}^{k+1}\) is \(\mathcal{O}(rnm)+\mathcal{O}(r^{2}(n+m))\). Since **INOM** is a sequential algorithm, the total complexity of the algorithm is \(\mathcal{O}(2rnm+2r^{2}(n+m))\). The pseudo code of **INOM** is shown in Table 2.
\begin{tabular}{l}
\hline \hline
**Table 2: INOM** \\
\hline \hline
**Input**: Data samples \({{\mathbf{V}}}\) with each column normalized; \({r}\), \({m}\) and \({n}\). \\
**Initialize**: Set \({k}\) = 0. Initialize \({{\mathbf{W}}^{0}}\) and \({{\mathbf{H}}^{0}}\). Each column of \({{\mathbf{W}}^{0}}\) is normalized. \\
**Repeat**: \\
Update \({{\mathbf{H}}}\) \\
1) \(\mu^{k}\) = maximum row sum of \({2{{\mathbf{W}}^{k}}^{T}{{\mathbf{W}}^{k}}}\) \\
2) \({\mathbf{H}}^{k+1}={\mathbf{H}}^{k}+\dfrac{1}{\mu^{k}}\left(2{{\mathbf{W}}^{k} }^{T}{\mathbf{V}}-2{{\mathbf{W}}^{k}}^{T}{{\mathbf{W}}^{k}}{\mathbf{H}}^{k}\right)\) \\
3) \({\mathbf{H}}^{k+1}=\textrm{max}({\bf{0}},{\mathbf{H}}^{k+1})\) \\
Update \({{\mathbf{W}}}\) \\
4) \(\nu^{k}\) = maximum row sum of \({2{\mathbf{H}}^{k+1}{{\mathbf{H}}^{k+1}}^{T}}\) \\
5) \({\mathbf{W}}^{k+1}={\mathbf{W}}^{k}+\dfrac{1}{\nu^{k}}\left(2{\mathbf{V}}{{ \mathbf{H}}^{k+1}}^{T}-2{\mathbf{W}}^{k}{\mathbf{H}}^{k+1}{{\mathbf{H}}^{k+1}} ^{T}\right)\) \\
6) \({\mathbf{W}}^{k+1}=\textrm{max}({\bf{0}},{\mathbf{W}}^{k+1})\) \\
7) normalize each column of \({{\mathbf{W}}}\) \\
\(k\gets k+1\), **until convergence** \\ \hline \hline
\end{tabular}
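A compact NumPy sketch of one **INOM** iteration, following steps 1-7 of Table 2 (the variable names and the small constant guarding the normalization are ours):

```python
import numpy as np

def inom_step(V, W, H):
    """One INOM iteration (steps 1-7 of Table 2)."""
    mu = (2.0 * W.T @ W).sum(axis=1).max()                      # step 1
    H = H + (2.0 * W.T @ V - 2.0 * W.T @ W @ H) / mu            # step 2
    H = np.maximum(0.0, H)                                      # step 3
    nu = (2.0 * H @ H.T).sum(axis=1).max()                      # step 4
    W = W + (2.0 * V @ H.T - 2.0 * W @ H @ H.T) / nu            # step 5
    W = np.maximum(0.0, W)                                      # step 6
    W = W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-12)  # step 7
    return W, H
```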
### **Parinom**_:_**Par**_allel_ **I**_terative_ **No**_nnegative_ **M**_atrix Factorization_
**PARINOM** solves the problem in (1) using Vanilla MM without alternatingly minimizing over \({\mathbf{W}}\) and \({\mathbf{H}}\); hence, at iteration \(i+1\), the \({\mathbf{H}}^{i+1}\) matrix does not depend on the \({\mathbf{W}}^{i+1}\) matrix. Therefore, \({\mathbf{H}}^{i+1}\) and \({\mathbf{W}}^{i+1}\) can be updated in parallel. On expanding \(f_{{}_{\textrm{NMF}}}\left({\mathbf{W}},{\mathbf{H}}\right)\) we get:
\[\begin{array}[]{ll}\|{\mathbf{V}}-{\mathbf{W}}{\mathbf{H}}\|_{F}^{2}={\rm Tr}( {\mathbf{V}}^{T}{\mathbf{V}})+{\rm Tr}({\mathbf{H}}^{T}{\mathbf{W}}^{T}{ \mathbf{W}}{\mathbf{H}})-2{\rm Tr}({\mathbf{V}}^{T}{\mathbf{W}}{\mathbf{H}}) \end{array}\] (30)
Ignoring the constant terms in (30), we will now show that the second and third term are sigmoidal functions, which is defined as:
_Sigmoidal function_[35], [36]: Let \(c\) be a positive or a negative number and \({x_{1},x_{2}\cdots x_{n}}\) be the nonnegative components of a \(n\)-dimensional vector \({{\mathbf{x}}}\). Let \({\alpha_{j}}\) be the \({j^{th}}\) component of \({\alpha}\), which is the fractional power (can be positive, negative or zero) of each component of \({{\mathbf{x}}}\). Then, \(c\prod_{j=1}^{n}x_{j}^{\alpha_{j}}\) is called a sigmoidal function.
The second term can be re-written as:
\[\begin{array}[]{ll}\sum_{k=1}^{m}\sum_{j=1}^{n}\left(\sum_{l=1}^{ r}\sum_{m=1}^{r}w_{jl}h_{lk}w_{jm}h_{mk}\right)\end{array}\] (31)
where \(w_{ab}\) and \(h_{ab}\) represent the \((a,b)^{th}\) element of \({\mathbf{W}}\) and \({\mathbf{H}}\) matrix. The terms in (31) are sigmoidal functions with positive coefficient. Now, we show that the third term in (30) can also be written as sigmoidal function.
\[\begin{array}[]{ll}-2{\rm Tr}({\mathbf{V}}^{T}{\mathbf{W}}{\mathbf{H}})=-2 \sum_{j=1}^{n}\sum_{k=1}^{m}\left(v_{jk} \sum_{l=1}^{r}w_{jl}h_{lk}\right)\end{array}\] (32)
This is a sigmoidal function with negative coefficient. Hence, the objective function \(f_{{}_{\textrm{NMF}}}\left({\mathbf{W}},{\mathbf{H}}\right)\) becomes:
\[\begin{array}[]{ll}\|{\mathbf{V}}-{\mathbf{W}}{\mathbf{H}}\|_{F}^{2}= \sum_{k=1}^{m}\sum_{j=1}^{n}\left(\sum_{l=1}^{r}\sum_{m=1}^{r}w_{ jl}h_{lk}w_{jm}h_{mk}\right)-2\sum_{j=1}^{n}\sum_{k= 1}^{m}\left(v_{jk}\sum_{l=1}^{r}w_{jl}h_{lk}\right)\end{array}\] (33)
Now, we propose the following lemma which is used to majorize the above objective function.
**Lemma III.3**: _When c in \(c\prod_{j=1}^{n}x_{j}^{\alpha_{j}}\) is positive, the following inequality holds:_
\[\begin{array}[]{ll}\textrm{c}\prod_{j=1}^{n}x_{j}^{\alpha_{j}} \leq\textrm{c}\prod_{j=1}^{n}\left({{x_{j}}^{i}}\right)^{\alpha_{ j}}\sum_{j=1}^{n}\dfrac{\alpha_{j}}{\|{\alpha}\|_{1}}\left(\dfrac {x_{j}}{x_{j}^{i}}\right)^{\|{\alpha}\|_{1}}\end{array}\] (34)
_The above inequality is equal when \({{\mathbf{x}}}\) is equal to \({{\mathbf{x}}^{i}}\). When c is negative, the inequality in (34) changes to:_
\[\begin{array}[]{ll}\textrm{c}\prod_{j=1}^{n}x_{j}^{\alpha_{j}}\leq\textrm{c}\left(\prod_{j=1}^{n}\left({{x_{j}}^{i}}\right)^{\alpha_{j}}\right)\left(1+\sum_{j=1}^{n}\alpha_{j}\,\textrm{ln}\left(\dfrac{x_{j}}{x_{j}^{i}}\right)\right)\end{array}\] (35)
Proof:: See [Section 3, [36]].
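The inequality (34) can be checked numerically for the case actually used below, namely \(n=4\) and all exponents equal to one (illustrative values, ours):

```python
import numpy as np

rng = np.random.default_rng(2)
x  = rng.uniform(0.1, 2.0, 4)     # current point, e.g. (w_jl, h_lk, w_jm, h_mk)
xi = rng.uniform(0.1, 2.0, 4)     # expansion point at iteration i
lhs = np.prod(x)                                 # c = 1 and all alpha_j = 1
rhs = np.prod(xi) * np.mean((x / xi) ** 4)       # right-hand side of (34)
print(lhs <= rhs + 1e-12)                        # True; equality when x == xi
```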
Using the above lemma we get the following surrogate function \(g\left(w_{jl},h_{lk}|w_{jl}^{i},h_{lk}^{i}\right)\) for \(f_{{}_{\textrm{NMF}}}\left({\mathbf{W}},{\mathbf{H}}\right)\).
\[\begin{array}[]{ll}g\left(w_{jl},h_{lk}|w_{jl}^{i},h_{lk}^{i}\right)= \sum_{k=1}^{m}\sum_{j=1}^{n}\left(\sum_ {l=1}^{r}\sum_{m=1}^{r}w_{jl}^{i}h_{lk}^{i}w_{jm}^{i}h_{mk}^{i} \left(\dfrac{1}{4}\left(\dfrac{w_{jl}}{w_{jl}^{i}}\right)^{4}+\dfrac{1}{4} \left(\dfrac{h_{lk}}{h_{lk}^{i}}\right)^{4}+\dfrac{1}{4}\left(\dfrac{w_{jm}}{w _{jm}^{i}}\right)^{4}+\dfrac{1}{4}\left(\dfrac{h_{mk}}{h_{mk}^{i}}\right)^{4} \right)\right)\\ -2\sum_{j=1}^{n}\sum_{k=1}^{m}\left(v_{jk} \sum_{l=1}^{r}\left(w_{jl}^{i}h_{lk}^{i}\right)\left(\textrm{ln}w _{jl}+\textrm{ln}h_{lk}\right)\right)\end{array}\] (36)
where \(w_{ab}^{i}\) and \(h_{ab}^{i}\) represent the \((a,b)^{th}\) element of \({\mathbf{W}}\) and \({\mathbf{H}}\) matrix at the \(i^{th}\) iteration. Note that the surrogate function \(g\left(w_{jl},h_{lk}|w_{jl}^{i},h_{lk}^{i}\right)\) is separable in the optimization variables: \(w_{jl}\) and \(h_{lk}\). Hence at any iteration, given \(w_{jl}=w_{jl}^{i}\) and \(h_{lk}=h_{lk}^{i}\), the surrogate minimization problem is:
\[\begin{array}[]{ll}\underset{w_{jl}>0,\,h_{lk}>0}{\rm minimize}\:g\left(w_{jl} ,h_{lk}|w_{jl}^{i},h_{lk}^{i}\right)\end{array}\] (37)
where \(g\left(w_{jl},h_{lk}|w_{jl}^{i},h_{lk}^{i}\right)\) is given by (36) which has a closed form solution is given by:
\[\begin{array}[]{ll}w_{jl}^{i+1}=\sqrt[4]{\dfrac{{p_{(jl)}^{i}}(w_{jl}^{i})^{4} }{z_{1(jl)}^{i}}}\\ \\ h_{lk}^{i+1}=\sqrt[4]{\dfrac{{q_{(lk)}^{i}}(h_{lk}^{i})^{4}}{z_{2(lk)}^{i}}} \end{array}\] (38)
where
\[\begin{array}[]{ll}z_{1(jl)}^{i}=\sum_{k=1}^{m}\sum_{M=1}^{r}h_{lk}^{i}w_{jM}^{i}h_{Mk}^{i}\quad p_{(jl)}^{i}=\sum_{k=1}^{m}v_{jk}h_{lk}^{i}\\ \\ z_{2(lk)}^{i}=\sum_{j=1}^{n}\sum_{M=1}^{r}w_{jl}^{i}w_{jM}^{i}h_{Mk}^{i}\quad q_{(lk)}^{i}=\sum_{j=1}^{n}v_{jk}w_{jl}^{i}\end{array}\] (39)
Taking the entire matrix into consideration, (38) can be re-written as:
\[\begin{array}[]{ll}{\mathbf{W}}^{i+1}=\sqrt[4]{\left({\left({\mathbf{V}}{{ \mathbf{H}}^{i}}^{T}\right)\circ{{\mathbf{W}}^{i}}^{4}}\right)\oslash\left({ \mathbf{W}}^{i}{\mathbf{H}}^{i}{{\mathbf{H}}^{i}}^{T}\right)}\\ {\mathbf{H}}^{i+1}=\sqrt[4]{\left({\left({{\mathbf{W}}^{i}}^{T}{\mathbf{V}} \right)\circ{{\mathbf{H}}^{i}}^{4}}\right)\oslash\left({{\mathbf{W}}^{i}}^{T}{ \mathbf{W}}^{i}{\mathbf{H}}^{i}\right)}\end{array}\] (40)
In (40), the fourth powers of the matrices \({\mathbf{W}}\) and \({\mathbf{H}}\) are taken element wise. In contrast to the MU and **INOM** algorithms, in **PARINOM** the update of \({\mathbf{H}}^{i+1}\) does not depend on \({\mathbf{W}}^{i+1}\) at the \({(i+1)}^{th}\) iteration. Hence, the \({\mathbf{W}}^{i+1}\) and \({\mathbf{H}}^{i+1}\) matrices can be updated in parallel. The complexity of computing \({\mathbf{W}}^{i+1}\) is \(\mathcal{O}(nmr+r(m+n))\). The complexity of updating \({{\mathbf{H}}^{i+1}}\) is also \(\mathcal{O}(nmr+r(m+n))\). Since both matrices can be updated in parallel, the complexity of **PARINOM** is \(\mathcal{O}(nmr+r(m+n))\). The pseudo code of **PARINOM** is shown in Table 3:
\begin{tabular}{l}
\hline \hline
**Table 3: PARINOM** \\
\hline \hline
**Input**: Data samples \({{\mathbf{V}}}\), with each column normalized, \(r\), \(m\) and \(n\). \\
**Initialize**: Set _i_ = 0. Initialize \({{\mathbf{W}}^{0}}\) and \({{\mathbf{H}}^{0}}\). Each column of \({{\mathbf{W}}^{0}}\) is normalized. \\
**Repeat**: \\
Update \({{\mathbf{H}}}\) and \({{\mathbf{W}}}\)**parallely** \\
1)\({\mathbf{W}}^{i+1}=\sqrt[4]{\left({\left({\mathbf{V}}{{\mathbf{H}}^{i}}^{T} \right)\circ{{\mathbf{W}}^{i}}^{4}}\right)\oslash\left({\mathbf{W}}^{i}{ \mathbf{H}}^{i}{{\mathbf{H}}^{i}}^{T}\right)}\) \\
2)\({\mathbf{H}}^{i+1}=\sqrt[4]{\left({\left({{\mathbf{W}}^{i}}^{T}{\mathbf{V}} \right)\circ{{\mathbf{H}}^{i}}^{4}}\right)\oslash\left({{\mathbf{W}}^{i}}^{T}{ \mathbf{W}}^{i}{\mathbf{H}}^{i}\right)}\) \\
normalize each column of \({{\mathbf{W}}^{i+1}}\) \\
\(i\gets i+1\), **until \(\left|\dfrac{f_{{}_{\textrm{NMF}}}\left({\mathbf{W}}^{i+1},{\mathbf{H}}^{i+1} \right)-f_{{}_{\textrm{NMF}}}\left({\mathbf{W}}^{i},{\mathbf{H}}^{i}\right)}{f _{{}_{\textrm{NMF}}}\left({\mathbf{W}}^{i},{\mathbf{H}}^{i}\right)}\right|\leq 1 0^{-6}\)** \\ \hline \hline
\end{tabular}
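For concreteness, the update in (40) (equivalently, Table 3) can be sketched in NumPy as follows. This is only an illustrative sketch under our own naming and initialization assumptions, not the authors' implementation; the column normalization uses the l2 norm, which the paper does not specify, and a small `eps` is added to avoid division by zero.

```python
import numpy as np

def parinom(V, r, max_iter=5000, tol=1e-6, eps=1e-12, seed=0):
    """Minimal sketch of the PARINOM updates in (40); not the authors' reference code."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0, 1, size=(n, r))
    H = rng.uniform(0, 1, size=(r, m))
    W /= np.linalg.norm(W, axis=0, keepdims=True)   # normalize each column of W (l2 norm assumed)
    f_old = np.linalg.norm(V - W @ H, 'fro') ** 2
    for _ in range(max_iter):
        # Both updates use only (W, H) from the previous iteration,
        # so they could be evaluated in parallel.
        W_new = ((V @ H.T) * W**4 / (W @ H @ H.T + eps)) ** 0.25
        H_new = ((W.T @ V) * H**4 / (W.T @ W @ H + eps)) ** 0.25
        W = W_new / np.linalg.norm(W_new, axis=0, keepdims=True)
        H = H_new
        f_new = np.linalg.norm(V - W @ H, 'fro') ** 2
        if abs(f_new - f_old) / (f_old + eps) <= tol:   # stopping rule of Table 3
            break
        f_old = f_new
    return W, H
```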
### _Convergence Analysis_
We first discuss the convergence of **INOM**, which is based on Block MM. To do so, we first prove that the surrogate functions \(g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) in (23) and \(g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)\) in (27) are quasi-convex and that the problems in (24) and (28) have a unique minimum.
Proof: From (23) and (27), it can be seen that \(g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)\) are separable in the columns and rows of \({\mathbf{H}}\) and \({\mathbf{W}}\), respectively, i.e., \(g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)=\sum_{j=1}^{m}g_{{\mathbf{h}}_{j}}\left({{\mathbf{h}}_{j}}|{{\mathbf{h}}_{j}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)=\sum_{j=1}^{n}g_{{\mathbf{w}}_{j}}\left({{\mathbf{w}}_{j}}|{{\mathbf{w}}_{j}}^{k},{\mathbf{H}}^{k}\right)\). The Hessian of \(g_{{\mathbf{h}}_{j}}\left({{\mathbf{h}}_{j}}|{{\mathbf{h}}_{j}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{\mathbf{w}}_{j}}\left({{\mathbf{w}}_{j}}|{{\mathbf{w}}_{j}}^{k},{\mathbf{H}}^{k}\right)\) for every \(j\) is the \(\Lambda\) matrix, a diagonal matrix with nonnegative elements, and hence positive semi-definite. This implies that \(g_{{\mathbf{h}}_{j}}\left({{\mathbf{h}}_{j}}|{{\mathbf{h}}_{j}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{\mathbf{w}}_{j}}\left({{\mathbf{w}}_{j}}|{{\mathbf{w}}_{j}}^{k},{\mathbf{H}}^{k}\right)\) are convex functions. Since the sum of convex functions is convex, \(g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)\) are also convex. Since every convex function has convex sublevel sets [37], \(g_{{}_{\mathbf{H}}}\left({\mathbf{H}}|{\mathbf{H}}^{k},{\mathbf{W}}^{k}\right)\) and \(g_{{}_{\mathbf{W}}}\left({\mathbf{W}}|{\mathbf{W}}^{k},{\mathbf{H}}^{k+1}\right)\) are also quasi-convex. Moreover, the problems in (24) and (28) have a unique minimum, since we are minimizing convex functions.
Razaviyayn et al., in Theorem 2(a) of [38], showed that the sequence of points generated by Block MM converges to a stationary point, provided the surrogate function is quasi-convex and the minimizer of the surrogate minimization problem is unique. Hence, by a direct application of Theorem 2(a) in [38], the **INOM** algorithm converges to a stationary point of the problem in (1).
We now discuss the convergence of **PARINOM**, which is based on Vanilla MM. Note that \(f_{{}_{\rm NMF}}\left({\mathbf{W}},{\mathbf{H}}\right)\) in (1) is bounded below by zero and the constraint set is closed and convex. Also, from (13), the sequence of points \(\{{\mathbf{W}}^{k},{\mathbf{H}}^{k}\}\) monotonically decreases the NMF objective. Hence, the sequence \(f_{{}_{\rm{NMF}}}({\mathbf{W}}^{k},{\mathbf{H}}^{k})\) generated by **PARINOM** will at least converge to a finite value.
We now show that the sequence \(\{{\mathbf{W}}^{k},{\mathbf{H}}^{k}\}\) converges to a stationary point. To prove this, we first group the variables \({\mathbf{W}}\) and \({\mathbf{H}}\) into a single block \({\mathbf{X}}\). From (13), we have:
\[\begin{array}[]{ll}f_{{}_{\rm NMF}}\left({\mathbf{X}}^{0}\right)\geq f_{{}_{ \rm NMF}}\left({\mathbf{X}}^{1}\right)\geq f_{{}_{\rm NMF}}\left({\mathbf{X}}^ {2}\right)\cdots\end{array}\] (41)
Assume that there is a subsequence \(\{{\mathbf{X}}^{r_{j}}\}\) converging to a limit point \({\mathbf{Z}}\). Then, from (9), (10) and (41), we obtain:
\[\begin{array}[]{ll}g\left({\mathbf{X}}^{r_{j+1}}|{\mathbf{X}}^{r_{j+1}}\right)=f_{{}_{\rm NMF}}\left({\mathbf{X}}^{r_{j+1}}\right)\leq f_{{}_{\rm NMF}}\left({\mathbf{X}}^{r_{j}+1}\right)\leq g\left({\mathbf{X}}^{r_{j}+1}|{\mathbf{X}}^{r_{j}}\right)\leq g\left({\mathbf{X}}|{\mathbf{X}}^{r_{j}}\right)\quad\forall\,{\mathbf{X}}\end{array}\] (42)
Letting \(j\rightarrow\infty\), we get
\[\begin{array}[]{ll}g\left({\mathbf{Z}}|{\mathbf{Z}}\right)\leq g\left({\mathbf{X}}|{\mathbf{Z}}\right)\quad\forall\,{\mathbf{X}}\end{array}\] (43)
which implies \(g^{\prime}({\mathbf{Z}}|{\mathbf{Z}})\geq 0\). Since the first-order behavior of the surrogate function is the same as that of the function \(f\left(\cdot\right)\) [38], \(g^{\prime}({\mathbf{Z}}|{\mathbf{Z}})\geq 0\) implies \(f_{{}_{\rm NMF}}^{\prime}({\mathbf{Z}})\geq 0\). Hence, \({\mathbf{Z}}\) is a stationary point of \(f_{{}_{\rm NMF}}\left(\cdot\right)\) and therefore the proposed algorithm converges to a stationary point.
### _SQUAREM Acceleration Scheme_
We now describe a way to further accelerate the convergence of the **PARINOM** algorithm based on the Squared Iterative Method (SQUAREM) [39] acceleration scheme. Originally, this scheme was proposed for fixed-point Expectation Maximization (EM) algorithms. However, since MM is a generalization of EM, the scheme can be used to accelerate MM-based algorithms as well. SQUAREM is based on the Cauchy-Barzilai-Borwein method, which combines the Cauchy and BB (Barzilai-Borwein) steps to accelerate the convergence rate. This acceleration scheme is proven to be monotonic.
Let \({{f_{1}({\mathbf{W}}^{k},{\mathbf{H}}^{k})}}\) and \({{f_{2}({\mathbf{W}}^{k},{\mathbf{H}}^{k})}}\) denote the fixed-point functions that update \({\mathbf{W}}\) and \({\mathbf{H}}\), respectively, i.e.,
\[\begin{array}[]{ll}{\mathbf{W}}^{k+1}=f_{1}\left({\mathbf{W}}^{k},{\mathbf{H}}^{k}\right)=\sqrt[4]{\left(\left({\mathbf{V}}({\mathbf{H}}^{k})^{T}\right)\circ({\mathbf{W}}^{k})^{4}\right)\oslash\left({\mathbf{W}}^{k}{\mathbf{H}}^{k}({\mathbf{H}}^{k})^{T}\right)}\\ {\mathbf{H}}^{k+1}=f_{2}\left({\mathbf{W}}^{k},{\mathbf{H}}^{k}\right)=\sqrt[4]{\left(\left(({\mathbf{W}}^{k})^{T}{\mathbf{V}}\right)\circ({\mathbf{H}}^{k})^{4}\right)\oslash\left(({\mathbf{W}}^{k})^{T}{\mathbf{W}}^{k}{\mathbf{H}}^{k}\right)}\end{array}\] (44)
The pseudo code of the acceleration scheme is shown in Table 4.
\begin{tabular}{l}
\hline \hline
**Table 4: Acc-PARINOM** \\
\hline \hline
**Input**: Data samples \({{\mathbf{V}}}\), with each column normalized, \(r\), \(m\) and \(n\). \\
**Initialize**: Set \({k}\) = 0. Initialize \({{\mathbf{W}}^{0}}\) and \({{\mathbf{H}}^{0}}\). Each column of \({{\mathbf{W}}^{0}}\) is normalized. \\
**Repeat**: \\
1. Parallely update: \({\mathbf{W}}^{1}=f_{1}({\mathbf{W}}^{k},{\mathbf{H}}^{k}),{\mathbf{H}}^{1}=f_{ 2}({\mathbf{W}}^{k},{\mathbf{H}}^{k})\) \\
2. Normalize each column of \({{\mathbf{W}}^{1}}\) \\
3. Parallely update: \({\mathbf{W}}^{2}=f_{1}({\mathbf{W}}^{1},{\mathbf{H}}^{1}),{\mathbf{H}}^{2}=f_{ 2}({\mathbf{W}}^{1},{\mathbf{H}}^{1})\) \\
4. Normalize each column of \({{\mathbf{W}}^{2}}\) \\
5. Compute: \({r_{h}={\mathbf{H}}^{1}-{\mathbf{H}}^{k}}\), \({v_{h}={\mathbf{H}}^{2}-{\mathbf{H}}^{1}-r_{h}}\) and \({\alpha_{h}=-\dfrac{\|r_{h}\|}{\|v_{h}\|}}\) \\
6. Parallely compute \(r_{w},v_{w},\alpha_{w}\) \\
7. \({\mathbf{H}}=\textrm{max}\left(0,{\mathbf{H}}^{k}-2\alpha_{h}r_{h}+\alpha_{h}^ {2}v_{h}\right)\) \\
8. \({\mathbf{W}}=\textrm{max}\left(0,{\mathbf{W}}^{k}-2\alpha_{w}r_{w}+\alpha_{w}^ {2}v_{w}\right)\) \\
9. Normalize each column of \({{\mathbf{W}}}\) \\
10. **while\(\,{{\|{\mathbf{V}}-{\mathbf{W}}{\mathbf{H}}\|}_{F}>{\|{\mathbf{V}}-{\mathbf{W} }^{k}{\mathbf{H}}^{k}\|}_{F}}\)****do** \\
11. \(\alpha_{h}\leftarrow\dfrac{\alpha_{h}-1}{2}\) , \(\alpha_{w}\leftarrow\dfrac{\alpha_{w}-1}{2}\) \\
12. \({\mathbf{H}}=\textrm{max}\left(0,{\mathbf{H}}^{k}-2\alpha_{h}r_{h}+\alpha_{h}^ {2}v_{h}\right)\) \\
13. \({\mathbf{W}}=\textrm{max}\left(0,{\mathbf{W}}^{k}-2\alpha_{w}r_{w}+\alpha_{w}^ {2}v_{w}\right)\) \\
14. **end while** \\
15. \({{\mathbf{W}}^{k+1}}={\mathbf{W}}\), \({{\mathbf{H}}^{k+1}}={\mathbf{H}}\) \\
16. normalize each column of \({{\mathbf{W}}^{k+1}}\) \\
17. \(k\gets k+1\) \\
**until convergence** \\ \hline \hline
\end{tabular}
The SQUAREM acceleration scheme will sometimes violate the non-negativity constraint. Hence, the negative entries of the matrices must be set to zero. To retain the descent property of the proposed algorithm, the values of \(\alpha_{h}\) and \(\alpha_{w}\) are found by backtracking, which halves the distance between \(\alpha\) and -1 until the descent property is satisfied. Note that when \(\alpha_{h}\) is equal to -1, \(\textrm{max}\left(0,{\mathbf{H}}^{k}-2\alpha_{h}r_{h}+\alpha_{h}^{2}v_{h}\right)\) becomes equal to \({{\mathbf{H}}^{2}}\). Due to the monotonic behavior of MM, the objective value at \({{\mathbf{H}}^{2}}\) is no larger than that at \({{\mathbf{H}}^{k}}\). Similarly, when \(\alpha_{w}\) is equal to -1, \(\textrm{max}\left(0,{\mathbf{W}}^{k}-2\alpha_{w}r_{w}+\alpha_{w}^{2}v_{w}\right)\) is equal to \({{\mathbf{W}}^{2}}\), whose objective value is no larger than that at \({{\mathbf{W}}^{k}}\). Hence the descent property is guaranteed to hold as \(\alpha\) is pushed towards -1.
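As a concrete illustration, one accelerated iteration (steps 1-14 of Table 4) can be sketched as follows. The fixed-point map, the backtracking cap, and the small `eps` guard are our own simplifications rather than the authors' code.

```python
import numpy as np

def parinom_step(V, W, H, eps=1e-12):
    """One PARINOM fixed-point update, i.e. f1 and f2 in (44)."""
    W1 = ((V @ H.T) * W**4 / (W @ H @ H.T + eps)) ** 0.25
    H1 = ((W.T @ V) * H**4 / (W.T @ W @ H + eps)) ** 0.25
    return W1 / np.linalg.norm(W1, axis=0, keepdims=True), H1

def squarem_step(V, W, H, eps=1e-12):
    """One SQUAREM-accelerated PARINOM iteration (Table 4, steps 1-14)."""
    W1, H1 = parinom_step(V, W, H)          # steps 1-2
    W2, H2 = parinom_step(V, W1, H1)        # steps 3-4
    r_w, v_w = W1 - W, W2 - 2 * W1 + W      # v = (W2 - W1) - (W1 - W), step 5-6
    r_h, v_h = H1 - H, H2 - 2 * H1 + H
    a_w = -np.linalg.norm(r_w) / (np.linalg.norm(v_w) + eps)
    a_h = -np.linalg.norm(r_h) / (np.linalg.norm(v_h) + eps)
    f_ref = np.linalg.norm(V - W @ H, 'fro')
    for _ in range(50):                     # backtracking safeguard cap (our addition)
        W_new = np.maximum(0, W - 2 * a_w * r_w + a_w**2 * v_w)   # steps 7-8
        H_new = np.maximum(0, H - 2 * a_h * r_h + a_h**2 * v_h)
        if np.linalg.norm(V - W_new @ H_new, 'fro') <= f_ref:     # step 10
            break
        a_w, a_h = (a_w - 1) / 2, (a_h - 1) / 2                   # push alpha toward -1
    return W_new / np.linalg.norm(W_new, axis=0, keepdims=True), H_new
```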
## IV Simulations
In this section, we present numerical simulations to compare the proposed methods with the state-of-the-art algorithms Fast-HALS and MU. For a fair comparison, we also accelerate the MU algorithm using the SQUAREM acceleration scheme. All simulations were carried out on a PC with a 2.40 GHz Intel Xeon processor and 64 GB RAM.
1. In the first simulation, we fix \(n=100\), \(m=200\) and \(r=1\) and compare the convergence rate and per-iteration cost of the proposed algorithms against Fast-HALS, MU and accelerated MU. The elements of the \({\mathbf{V}}\) matrix were randomly generated from a uniform distribution on \([100,200]\). The initial values of \({\mathbf{W}}\) and \({\mathbf{H}}\) were also randomly generated from a uniform distribution on \([0,1]\). The columns of \({\mathbf{V}}\) and \({\mathbf{W}}\) were normalized. The initial objective value \(f_{{}_{\textrm{NMF}}}({\mathbf{W}}^{0},{\mathbf{H}}^{0})\) was kept the same for all algorithms. Fig. 3 (a) shows the run time vs the objective value (in log scale) for all the algorithms. Fig. 3 (b) compares the run time vs the objective value (in log scale) of MU, Fast-HALS and **INOM**.
<figure><img src="content_image/1905.04529/x3.png"><figcaption>(a) Objective value vs run time of the proposed and existing algorithms- Fast-HALS, MU and Accelerated MU</figcaption></figure>
From Fig. 3 (a), it can be seen that the proposed algorithms are monotonic in nature. Accelerated **PARINOM** converges faster than **PARINOM**. However, due to the additional steps involved, the per-iteration cost of accelerated **PARINOM** is higher than that of **PARINOM**. From Fig. 3 (b), it can be seen that the **INOM** algorithm takes less time to converge than the rest of the algorithms.
2.a. In this simulation, we compare the proposed algorithms with Fast-HALS, MU and accelerated MU for a dense matrix \({\mathbf{V}}\) with \(m=50000\) and \(n=10000\). Dense matrix factorization has applications in image processing and video analysis. The elements of the \({\mathbf{V}}\) matrix were randomly generated from a uniform distribution on [100, 200]. The initial values of \({\mathbf{W}}\) and \({\mathbf{H}}\) were randomly generated from a uniform distribution on [0, 1]. The comparison is based on how quickly the algorithms reduce the initial objective value \(f_{{}_{\textrm{NMF}}}({\mathbf{W}}^{0},{\mathbf{H}}^{0})\) to about \(70\%\) of its initial value, for different values of \(r\), which was varied from \(500\) to \(5000\) in steps of \(500\). The columns of \({\mathbf{V}}\) and \({\mathbf{W}}\) were normalized. The initial objective value was kept the same for all algorithms. The run time was averaged over \(50\) trials. Fig. 4 (a) compares the proposed algorithms with Fast-HALS, MU and accelerated MU for \(m=50000\) and \(n=10000\).
<figure><img src="content_image/1905.04529/x5.png"><figcaption>(a) Comparison of run time of proposed algorithms with Fast-Hals, MU and Acc-MU.</figcaption></figure>
From Fig. 4 (a), it can be seen that as \(r\) increases, Fast-HALS takes the most time to reduce the initial objective value \(f_{{}_{\textrm{NMF}}}({\mathbf{W}}^{0},{\mathbf{H}}^{0})\) to about \(70\%\) of its initial value. Fig. 4 (b) compares MU, **INOM** and **PARINOM**; from this figure, it can be seen that **INOM** takes the least time.
2.b. We repeat the above simulation for a sparse matrix \({\mathbf{V}}\). Sparse matrix factorization has applications in text mining. The elements of a \(70\%\) sparse matrix were randomly generated from a normal distribution with negative elements set to zero. Fig. 5 (a) compares the proposed algorithms with Fast-HALS, MU and accelerated MU for \(m=50000\) and \(n=10000\) when the matrix \({\mathbf{V}}\) is \(70\%\) sparse.
<figure><img src="content_image/1905.04529/x7.png"><figcaption>(a) Comparison of run time of proposed algorithms with Fast-Hals, MU and Acc-MU.</figcaption></figure>
From Fig. 5 (a), it can be seen that **INOM** and **PARINOM** take the least time compared to the state-of-the-art algorithms. Also, as \(r\) increases, Fast-HALS takes the most time to reduce the initial objective value \(f_{{}_{\textrm{NMF}}}({\mathbf{W}}^{0},{\mathbf{H}}^{0})\) to \(70\%\) of its initial value. Fig. 5 (b) shows only the performance of **INOM**, **PARINOM** and MU for better readability.
3.a. In this simulation, we vary the size of \({\mathbf{V}}\) and compare the performance of the algorithms. The comparison is based on how quickly the algorithms reduce the initial objective value \(f_{{}_{\textrm{NMF}}}({\mathbf{W}}^{0},{\mathbf{H}}^{0})\) to about \(70\%\) of its initial value. \(m\) was varied from \(100000\) to \(1000000\) in steps of \(100000\), and \(n\) and \(r\) were set to \(1000\) and \(100\), respectively. The elements of the \({\mathbf{V}}\) matrix were randomly generated from a uniform distribution on [100, 200]. The initial values of \({\mathbf{W}}\) and \({\mathbf{H}}\) were randomly generated from a uniform distribution on [0, 1]. The columns of \({\mathbf{V}}\) and \({\mathbf{W}}\) were normalized. The initial objective value was kept the same for all algorithms. The run time was averaged over \(50\) trials. Fig. 6 (a) compares the proposed algorithms with Fast-HALS, MU and accelerated MU for \(r=100\) and \(n=1000\). Fig. 6 (b) compares the performance of **INOM**, MU and Fast-HALS.
<figure><img src="content_image/1905.04529/x9.png"><figcaption>(a) Comparison of run time of proposed algorithms with Fast-Hals, MU and Acc-MU.</figcaption></figure>
From Fig. 6 (b), it can be seen that **INOM** takes the least time compared to the state-of-the-art algorithms.
3.b. The above simulation is repeated for a sparse matrix \({\mathbf{V}}\). The elements of a \(70\%\) sparse matrix were randomly generated from a normal distribution with negative elements set to zero. Fig. 7 (a) compares the proposed algorithms with Fast-HALS, MU and accelerated MU for \(r=100\) and \(n=1000\). Fig. 7 (b) compares the performance of **INOM**, MU and Fast-HALS. From Fig. 7 (b), it can be seen that **INOM** performs better than MU and Fast-HALS.
<figure><img src="content_image/1905.04529/x11.png"><figcaption>(a) Comparison of run time of proposed algorithms with Fast-Hals, MU and Acc-MU.</figcaption></figure>
4. In this simulation, an application of Nonnegative Matrix Factorization to Blind Source Separation (BSS) is shown. In BSS, source signals have to be separated from a set of mixed signals without knowledge of the mixing process. It is commonly used in audio signal processing, image processing, biomedical signal processing and digital communications. In the latter case, the \({\mathbf{W}}\) matrix can be thought of as the channel response, and each row of \({\mathbf{H}}\) contains the source signal sent by the transmitter array. The receiver sensor array then receives a linear mixture of the source signals. The task of BSS is to reconstruct the source signals from the received signal.
Five source signals of \(10\) seconds duration were simulated: a square wave, a rectangular wave, two sine waves of frequency \(2\) Hz and \(20\) Hz, and a chirp signal that begins at \(0\) Hz at \(t=0\) s and crosses \(30\) Hz at \(t=5\) s. The negative values of the source signals were set to zero. A \({\mathbf{W}}\) matrix of size \(200\)\(\times\)\(5\) was randomly generated. To evaluate the proposed algorithm in the presence of noise, random noise with variance \(0.01\) was added to the five source signals. The source signals and some of the observed signals are shown in Fig. 8 and Fig. 9, respectively (a code sketch of this setup is given after Fig. 12).
<figure><img src="content_image/1905.04529/x13.png"><figcaption>(a)</figcaption></figure>
<figure><img src="content_image/1905.04529/x15.png"><figcaption>(a)</figcaption></figure>
\({\mathbf{W}}\) and \({\mathbf{H}}\) were initialized randomly from uniform distributions on [100, 500] and [200, 400], respectively. The algorithms were run for up to 1000 iterations or until the relative change in the cost function fell below \(10^{-8}\). Figures 10 and 11 show the signals reconstructed by **INOM** and Fast-HALS, respectively. The MU algorithm was not able to reconstruct the signals in the presence of noise, as shown in Fig. 12.
<figure><img src="content_image/1905.04529/x17.png"><figcaption>(a)</figcaption></figure>
<figure><img src="content_image/1905.04529/x19.png"><figcaption>(a)</figcaption></figure>
<figure><img src="content_image/1905.04529/x21.png"><figcaption>(a)</figcaption></figure>
## V Possible Extension and Conclusion
In this paper, we proposed two MM-based algorithms, **INOM** and **PARINOM**, to solve the NMF problem. **INOM** sequentially updates the \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices, while **PARINOM** updates them in parallel. The update equations of the \({\mathbf{W}}\) and \({\mathbf{H}}\) matrices in the case of **INOM** resemble the update equations of gradient descent with adaptive step sizes. We also prove that the proposed algorithms are monotonic and converge to a stationary point of the NMF problem. Various computer simulations were performed to compare the proposed algorithms with existing algorithms. It was found that **INOM** performed better than the existing algorithms.
The effect of the parallel update will be most visible when one has to decompose a multidimensional array into matrices of varied dimensions. This problem is called Nonnegative Tensor Factorization (NTF) [40], [41], [42], [43]. A brief overview of NTF is given below.
Suppose a tensor \(\mathbf{X}\in\mathbb{R}^{m_{1}\times m_{2}\times\cdots\times m_{N}}\) of order \(N\) is given. It can be represented as a sum of rank-one tensors, i.e.,
\[\begin{array}[]{ll}\mathbf{X}=\sum_{k=1}^{K}{\mathbf{a_{k}}}^{(1) }\otimes{\mathbf{a_{k}}}^{(2)}\cdots{\mathbf{a_{k}}^{(N)}}\end{array}\] (45)
where \({\mathbf{a_{k}}}^{(1)}\in\mathbb{R}^{m_{1}}\), \({\mathbf{a_{k}}}^{(2)}\in\mathbb{R}^{m_{2}}\), and so on; \(K\) is the rank of the tensor and \(\otimes\) represents the outer product. This can be written in compact form as:
\[\begin{array}[]{ll}\mathbf{X}=[\mathbf{A}^{(1)},\mathbf{A}^{(2)}\cdots\mathbf{ A}^{(N)}]\end{array}\] (46)
where \({\mathbf{A}}^{(1)}=[{\mathbf{a_{1}}}^{(1)},{\mathbf{a_{2}}}^{(1)},\cdots,{ \mathbf{a_{K}}}^{(1)}]\) is of size \(m_{1}\times K\). Similarly, \({\mathbf{A}}^{(2)}=[{\mathbf{a_{1}}}^{(2)},{\mathbf{a_{2}}}^{(2)},\cdots,{ \mathbf{a_{K}}}^{(2)}]\) and is of size \(m_{2}\times K\). \([\mathbf{A}^{(1)},\mathbf{A}^{(2)}\cdots\mathbf{A}^{(N)}]\) represents \(\sum_{k=1}^{K}{\mathbf{a_{k}}}^{(1)}\otimes{\mathbf{a_{k}}}^{(2)}\cdots{ \mathbf{a_{k}}^{(N)}}\). Hence, the NTF problem can be formulated as:
\[\begin{array}[]{ll}\textrm{NTF:}\quad\underset{\mathbf{A}^{(1)},\mathbf{A}^{(2 )}\cdots\mathbf{A}^{(N)}\geq 0}{\rm minimize}\:\left(\|\mathbf{X}-[\mathbf{A}^ {(1)},\mathbf{A}^{(2)}\cdots\mathbf{A}^{(N)}]\|_{F}^{2}\right)\end{array}\] (47)
Compared to the parallel update of the matrices, alternating minimization is expected to take significantly more time to solve this problem. Since NTF is a generalization of NMF, the proposed algorithms can also be applied to solve NTF, and the parallel update algorithm **PARINOM** would be particularly well suited for it.
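To make the notation in (45)-(47) concrete, the following sketch reconstructs a third-order tensor from nonnegative factor matrices; function and variable names are ours and the example is purely illustrative.

```python
import numpy as np

def reconstruct(factors):
    """Rebuild X = sum_k a_k^(1) o a_k^(2) o ... o a_k^(N) from factor matrices.

    factors: list of N matrices A^(n), each of shape (m_n, K).
    """
    K = factors[0].shape[1]
    shape = tuple(A.shape[0] for A in factors)
    X = np.zeros(shape)
    for k in range(K):
        outer = factors[0][:, k]
        for A in factors[1:]:
            outer = np.multiply.outer(outer, A[:, k])   # successive outer products
        X += outer                                      # add the k-th rank-one term
    return X

# Example: random nonnegative rank-3 tensor of size 4 x 5 x 6
rng = np.random.default_rng(0)
A1, A2, A3 = (rng.uniform(size=(m, 3)) for m in (4, 5, 6))
X = reconstruct([A1, A2, A3])
```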
## References
* [1] S. Kaski and J. Peltonen, “Dimensionality reduction for data visualization [applications corner],” _IEEE Signal Processing Magazine_, vol. 28, no. 2, pp. 100–104, 2011.
* [2] G. Strang, _Linear Algebra and Its Applications_. Thomson, Brooks/Cole, 2006.
* [3] B. Moore, “Principal component analysis in linear systems: Controllability, observability, and model reduction,” _IEEE Transactions on Automatic Control_, vol. 26, no. 1, pp. 17–32, 1981.
* [4] B. Thompson, “Factor analysis,” _The Blackwell Encyclopedia of Sociology_, 2007.
* [5] P. Smaragdis and J. C. Brown, “Non-negative matrix factorization for polyphonic music transcription,” vol. 3, no. 3, pp. 177–180, 2003.
* [6] V. P. Pauca, F. Shahnaz, M. W. Berry, and R. J. Plemmons, “Text mining using non-negative matrix factorizations,” pp. 452–456, 2004.
* [7] J. Zhang, L. Wei, Q. Miao, and Y. Wang, “Image fusion based on nonnegative matrix factorization,” vol. 2, pp. 973–976, 2004.
* [8] P. Fernsel and P. Maass, “A survey on surrogate approaches to non-negative matrix factorization,” _arXiv preprint arXiv:1808.01975_, 2018.
* [9] D. Donoho and V. Stodden, “When does non-negative matrix factorization give a correct decomposition into parts?” pp. 1141–1148, 2004.
* [10] S. Essid, “A single-class SVM based algorithm for computing an identifiable NMF,” pp. 2053–2056, 2012.
* [11] J. Kim, Y. He, and H. Park, “Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework,” _Journal of Global Optimization_, vol. 58, no. 2, pp. 285–319, 2014.
* [12] Y. Xu and W. Yin, “A globally convergent algorithm for nonconvex optimization based on block coordinate update,” _Journal of Scientific Computing_, vol. 72, no. 2, pp. 700–734, 2017.
* [13] S. Bonettini, “Inexact block coordinate descent methods with application to non-negative matrix factorization,” _IMA Journal of Numerical Analysis_, vol. 31, no. 4, pp. 1431–1452, 2011.
* [14] D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in _Advances in neural information processing systems_, 2001, pp. 556–562.
* [15] E. F. Gonzalez and Y. Zhang, “Accelerating the Lee-Seung algorithm for nonnegative matrix factorization,” Tech. Rep., 2005.
* [16] C.-J. Lin, “Projected gradient methods for nonnegative matrix factorization,” _Neural computation_, vol. 19, no. 10, pp. 2756–2779, 2007.
* [17] N.-D. Ho, P. Van Dooren, and V. D. Blondel, “Descent methods for nonnegative matrix factorization,” pp. 251–293, 2011.
* [18] T. D. Hien, D. Van Tuan, and P. Van At, “Additive update algorithm for nonnegative matrix factorization,” _arXiv preprint arXiv:1209.5647_, 2012.
* [19] A. Cichocki, R. Zdunek, and S.-i. Amari, “Nonnegative matrix and tensor factorization [lecture notes],” _IEEE Signal Processing Magazine_, vol. 25, no. 1, pp. 142–145, 2008.
* [20] H. Kim and H. Park, “Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method,” _SIAM journal on matrix analysis and applications_, vol. 30, no. 2, pp. 713–730, 2008.
* [21] J. Kim and H. Park, “Toward faster nonnegative matrix factorization: A new algorithm and comparisons,” pp. 353–362, 2008.
* [22] M. H. Van Benthem and M. R. Keenan, “Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems,” _Journal of Chemometrics: A Journal of the Chemometrics Society_, vol. 18, no. 10, pp. 441–450, 2004.
* [23] N. Gillis _et al._, “Nonnegative matrix factorization: Complexity, algorithms and applications,” _Unpublished doctoral dissertation, Université catholique de Louvain. Louvain-La-Neuve: CORE_, 2011.
* [24] A. Cichocki, R. Zdunek, and S.-i. Amari, “Hierarchical als algorithms for nonnegative matrix and 3d tensor factorization,” _International Conference on Independent Component Analysis and Signal Separation_, pp. 169–176, 2007.
* [25] A. Cichocki and A.-H. Phan, “Fast local algorithms for large scale nonnegative matrix and tensor factorizations,” _IEICE transactions on Fundamentals of Electronics, Communications and Computer sciences_, vol. 92, no. 3, pp. 708–721, 2009.
* [26] D. Hajinezhad, T.-H. Chang, X. Wang, Q. Shi, and M. Hong, “Nonnegative matrix factorization using ADMM: Algorithm and convergence analysis,” pp. 4742–4746, 2016.
* [27] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein _et al._, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” _Foundations and Trends in Machine learning_, vol. 3, no. 1, pp. 1–122, 2011.
* [28] A. Vandaele, N. Gillis, F. Glineur, and D. Tuyttens, “Heuristics for exact nonnegative matrix factorization,” _Journal of Global Optimization_, vol. 65, no. 2, pp. 369–400, 2016.
* [29] Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in signal processing, communications, and machine learning,” _IEEE Transactions on Signal Processing_, vol. 65, no. 3, pp. 794–816, 2017.
* [30] D. R. Hunter and K. Lange, “A tutorial on MM algorithms,” _The American Statistician_, vol. 58, no. 1, pp. 30–37, 2004.
* [31] D. Böhning and B. G. Lindsay, “Monotonicity of quadratic-approximation algorithms,” _Annals of the Institute of Statistical Mathematics_, vol. 40, no. 4, pp. 641–663, 1988.
* [32] S. U. Pillai, T. Suel, and S. Cha, “The Perron-Frobenius theorem: some of its applications,” _IEEE Signal Processing Magazine_, vol. 22, no. 2, pp. 62–75, 2005.
* [33] S. Gaubert and J. Gunawardena, “The Perron-Frobenius theorem for homogeneous, monotone functions,” _Transactions of the American Mathematical Society_, vol. 356, no. 12, pp. 4931–4950, 2004.
* [34] W. Sun and Y.-X. Yuan, _Optimization theory and methods: nonlinear programming_. Springer Science & Business Media, 2006, vol. 1.
* [35] C. D. Maranas and C. A. Floudas, “Global optimization in generalized geometric programming,” _Computers & Chemical Engineering_, vol. 21, no. 4, pp. 351–369, 1997.
* [36] K. Lange and H. Zhou, “MM algorithms for geometric and signomial programming,” _Mathematical programming_, vol. 143, no. 1-2, pp. 339–356, 2014.
* [37] S. Boyd and L. Vandenberghe, _Convex optimization_. Cambridge university press, 2004.
* [38] M. Razaviyayn, M. Hong, and Z.-Q. Luo, “A unified convergence analysis of block successive minimization methods for nonsmooth optimization,” _SIAM Journal on Optimization_, vol. 23, no. 2, pp. 1126–1153, 2013.
* [39] R. Varadhan and C. Roland, “Simple and globally convergent methods for accelerating the convergence of any EM algorithm,” _Scandinavian Journal of Statistics_, vol. 35, no. 2, pp. 335–353, 2008.
* [40] Y. Qian, F. Xiong, S. Zeng, J. Zhou, and Y. Y. Tang, “Matrix-vector nonnegative tensor factorization for blind unmixing of hyperspectral imagery,” _IEEE Transactions on Geoscience and Remote Sensing_, vol. 55, no. 3, pp. 1776–1792, 2017.
* [41] N. B. Erichson, A. Mendible, S. Wihlborn, and J. N. Kutz, “Randomized nonnegative matrix factorization,” _Pattern Recognition Letters_, vol. 104, pp. 1–7, 2018.
* [42] A. Sapienza, A. Bessi, and E. Ferrara, “Non-negative tensor factorization for human behavioral pattern mining in online games,” _Information_, vol. 9, no. 3, p. 66, 2018.
* [43] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” _IEEE Transactions on Signal Processing_, vol. 65, no. 13, pp. 3551–3582, 2017.
|
1903.07766 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 42004,
"num_imgs": 8,
"llama3_tokens_count": 8865
} | [
"content_image/1903.07766/sample_2.png",
"content_image/1903.07766/exercise-paper.png",
"content_image/1903.07766/colors.png",
"content_image/1903.07766/1_2_shp.png",
"content_image/1903.07766/subj_histogram.png",
"content_image/1903.07766/nlp_stat.png",
"content_image/1903.07766/factors.png",
"content_image/1903.07766/h2h_all.png"
] | # Lemotif: An Affective Visual Journal Using Deep Neural Networks
X. Alice Li
Georgia Tech
xali@gatech.edu
Devi Parikh
Facebook AI Research & Georgia Tech
parikh@gatech.edu
###### Abstract
We present Lemotif, an integrated natural language processing and image generation system that uses machine learning to (1) parse a text-based input journal entry describing the user’s day for salient themes and emotions and (2) visualize the detected themes and emotions in creative and appealing image motifs. Synthesizing approaches from artificial intelligence and psychology, Lemotif acts as an affective visual journal, encouraging users to regularly write and reflect on their daily experiences through visual reinforcement. By making patterns in emotions and their sources more apparent, Lemotif aims to help users better understand their emotional lives, identify opportunities for action, and track the effectiveness of behavioral changes over time. We verify via human studies that prospective users prefer motifs generated by Lemotif over corresponding baselines, find the motifs representative of their journal entries, and think they would be more likely to journal regularly using a Lemotif-based app.
## Introduction
Our emotional well being is important. In part due to its subjective nature, it is difficult to find patterns in what we feel, how often we feel it, and what the source of those feelings tends to be. Without this assessment, it is difficult to tweak our choices to optimize our emotional well being.
Meanwhile, innovations in artificial intelligence have produced powerful neural networks capable of sophisticated analytic and generative tasks. There exists great potential for machine learning to address human-centered needs, using creative interdisciplinary approaches to model subjective qualities like emotion which can be difficult to quantify.
In this paper we introduce Lemotif, an integrated natural language processing (NLP) and image generation system serving as an affective visual journal. Given a text-based journal entry describing aspects of the user’s day, a multi-label classifier building upon the Bidirectional Encoder Representations from Transformers (BERT) language model [4] extracts salient topics and associated emotions from the provided input. An image generation algorithm then creates motifs conditioned upon the detected topics and emotions; we offer several representation styles, including a neural network trained on abstract art.
<figure><img src="content_image/1903.07766/sample_2.png"><figcaption></figcaption></figure>
The concrete instantiation of Lemotif is shown in Fig. 1. The core principles behind our approach are: (1) The generated motif separately depicts each salient topic to make the source of feelings visually apparent to the user. (2) The generated motif depicts topics visually using outline shapes as seen in Fig. 1 so the feeling-topic association is more apparent and better grounded in the user’s mind. (3) The generated motif depicts emotions visually as well, using color mappings as seen in Fig. 1. (4) The generated motif is creative and attractive to provide visual reinforcement for user engagement. (5) The overall system uses machine intelligence to automate text analysis and image synthesis, allowing users to write naturally and receive computationally generated analysis of typical journal entry inputs.
We evaluate Lemotif qualitatively via human studies assessing (1) whether the topic-shape and feeling-color mappings are meaningful to users, (2) whether subjects favor the generated motifs over corresponding baselines, (3) whether subjects consider the generated motifs representative of their journal entries, and (4) whether subjects would engage positively with such an app, including being willing to use an app like Lemotif and feeling like the app would encourage them to journal more regularly. The NLP model is evaluated as a multi-label classifier calculating F1 and normalized accuracy through cross-validation. We report favorable results on all fronts. Our code and trained models are available at https://github.com/xaliceli/lemotif. A demo is available at http://lemotif.cloudcv.org.
## Related Work
Our work processes text to extract key topic and emotion labels, maps these abstract concepts to visual entities (shape and color), and generates a visual depiction of the input text according to the extracted labels. In this section we discuss prior work relating to each individual component as well as our overall goal of creatively summarizing journal entries.
#### Journaling Tools
Our work is motivated by psychological research indicating that writing about emotions can support mental health [11]. Most existing journaling tools and apps allow users to log their lives without focusing on identifying themes or patterns. Emphasis is often on easy incorporation of external content, including multimedia, hand annotations, maps, and search tools. Our focus is more on making associations between a user’s feelings and aspects of their life apparent. When journaling apps claim to be ‘visual’ (e.g., HeyDay), they typically refer to allowing visual input modalities such as images and videos. Our work produces a visual modality as an output. Life Calendar (journallife.me) comes closest to our approach, showing a single-colored dot (red, yellow, or green) for each week that captures the mood of the user in that week (negative, neutral, positive). This allows one to find correlations between time of month or year and emotion (e.g., happier in the summer). But it does not help identify sources of nuanced emotions on a day-to-day basis. In our experiments, we compare our motifs to a visualization that mimics this and find that subjects strongly prefer our nuanced and creative visualizations.
#### Natural Language Processing
Our task of identifying topics and emotions from text is related to existing work on keyword extraction and sentiment analysis, though sentiment analysis is commonly approached as a binary ("positive" vs "negative") or trinary ("positive", "neutral", "negative") problem. Recently, BERT [4] and GPT-2 [14] have successfully pre-trained NLP models on large unlabeled text datasets, learning language representations that can then be generalized to downstream tasks. We fine-tune BERT (pre-trained on the BooksCorpus and English Wikipedia datasets) on a custom dataset for our specific task of identifying up to 11 topics and up to 18 associated emotions, a form of aspect-based sentiment analysis [13].
#### Visual Representation
Our work draws upon existing research on common associations between colors and emotions [10]. Studies have also indicated associations between the visual qualities of lines and the emotions they evoke [12]. There exists fascinating work on projecting a spectrum of human emotions on an interactive map with associated video clips that elicit those emotions [3] and to audio gasps that people make when expressing those emotions [2]. [16] studied the emotions evoked in viewers of paintings generated by a creative AI system, while [1] collected emotional labels for artworks and trained a generative adversarial network to synthesize new images conditioned upon emotional labels. We extend the foundational idea that emotions can be represented in visual form by using a relatively rich set of 18 nuanced emotions as a core design principle of Lemotif. The use of recognizable icons to represent topics is also a common feature in popular note-taking apps such as Notion (notion.so); we take a similar approach, using shapes to represent topics in Lemotif.
#### Image Synthesis
Approaches in multi-modal AI that generate natural images from their language descriptions [15] using generative adversarial networks (GANs) [7] are relevant. Convolutional neural networks comprising an encoder block for feature extraction and a decoder block for synthesis and reconstruction, or an "autoencoder" model [9], have been widely used for image-to-image translation tasks, including image super-resolution taking small samples as inputs and reconstructing larger outputs with greater detail [5]. We take this super-resolution approach for our autoencoder visualization style, with more details in the following section. We also draw from the tradition of generative art [6] in our geometric visualization styles, using computational methods and mathematical principles for aesthetic purposes.
## Approach
<figure><img src="content_image/1903.07766/exercise-paper.png"><figcaption>Exercise</figcaption></figure>
<figure><img src="content_image/1903.07766/colors.png"><figcaption>Figure 3: Colors used to represent various feelings or emotions.</figcaption></figure>
Below we describe our approach to processing journal entries, mapping concepts to visual content, and generating representative motifs.
### Natural Language Processing
Our NLP objective is to take free-form text input and predict salient topic and emotion labels. To that end, we fine-tune BERT to serve as a multi-label classifier. We use the BERT-Base model containing 12 encoder layers, 768 hidden units per layer, and 12 attention heads for a total of 110M parameters [4]. To BERT-Base we append a fully-connected multi-layer perceptron containing 768 hidden units with sigmoid activation and 29 output nodes corresponding to our 11 topics and 18 emotions. We fine-tune this model on our dataset of text samples with user-annotated labels (more details in the Dataset section), optimizing over sigmoid cross-entropy loss. Labels above a set probability threshold are returned as the salient topics and associated emotions; we use 0.2 as our threshold, chosen through cross-validation. These labels are then used as inputs for the image generation algorithms, such that each motif represents one topic with the highest probability and a set of up to four emotions with the highest probabilities.
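A minimal sketch of such a classifier is shown below using PyTorch and the Hugging Face `transformers` BERT-Base model; the class and variable names are ours, and the authors' released implementation may differ in framework and details.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MultiLabelClassifier(nn.Module):
    """Sketch: BERT-Base plus an MLP head for 11 topic and 18 emotion labels (29 outputs)."""
    def __init__(self, n_labels=29):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = nn.Sequential(
            nn.Linear(768, 768),    # 768 hidden units, as described in the paper
            nn.Sigmoid(),           # sigmoid activation
            nn.Linear(768, n_labels),
        )

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return self.head(pooled)    # raw logits; pair with nn.BCEWithLogitsLoss for training

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = MultiLabelClassifier()
batch = tokenizer(["Work was stressful but the gym felt great."],
                  return_tensors="pt", padding=True, truncation=True)
probs = torch.sigmoid(model(batch["input_ids"], batch["attention_mask"]))
labels = (probs > 0.2).int()        # threshold of 0.2, as in the paper
```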
### Topics and Emotions
Human experiences are complex, multimodal, and subjective. A system that identifies and visualizes abstract content about an individual’s emotional life must be both comprehensive and intelligible, addressing most common themes in life through discrete labels while representing this information in a format humans recognize and approve of. Below we outline our approach to identifying our target labels and mapping these concepts to visual representations.
#### Topics
The 11 topics in our pre-defined list are shown in Fig. 2. This list was determined by a mix of brainstorming and searching online for what topics users typically talk about in their journals. As part of our evaluation, we asked survey participants in an Amazon Mechanical Turk (AMT) study if they felt a topic they would like to talk about was missing. 99 subjects out of 100 said this list was sufficient. One user suggested adding pets as a topic.
#### Emotions
The 18 emotions in our pre-defined list are shown in Fig. 3. This list was curated from [3] and our assessment of what emotions are likely on a day-to-day basis. Again, as part of our evaluation, we asked users from the same AMT study described above if they felt an emotion they would like to talk about was missing. All 100 subjects said the list was sufficient.
#### Shapes for topics
Lemotif uses a pre-defined mapping from topics to visual icons of shapes depicting that topic. These are shown in Fig. 2. To identify our list of icons, we started with The Noun Project (http://thenounproject.com) which contains over two million binary icons created by designers all over the world. We searched The Noun Project for each of the topics to ensure that the icons we pick are relevant to the topic (e.g., book for school). From the relevant icons, we selected those that are not visually complex so the generated motif is clear. We automatically binarize the image, crop the icon, and resize it to a canonical size. To further simplify the icons, we post-process them to retain only their outer shape and discard the inner details. This was done by keeping only the extreme points of the shape in each row and column of the image, providing a thin and sparse outline of the icon. For completeness, we dilate the sparse outline using morphological filtering. The resulting icons are shown in the bottom row of Fig. 2.
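The outline-extraction step described above can be sketched as follows; the binarization threshold and the amount of dilation are our assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def icon_outline(icon, dilate_iter=2):
    """Keep only the outer shape of a binary icon: the extreme foreground
    pixels in every row and column, then dilate the sparse outline."""
    mask = icon > 0.5                          # binarized icon, foreground = True
    outline = np.zeros_like(mask)
    for i in range(mask.shape[0]):             # leftmost / rightmost pixel per row
        cols = np.flatnonzero(mask[i])
        if cols.size:
            outline[i, cols[0]] = outline[i, cols[-1]] = True
    for j in range(mask.shape[1]):             # topmost / bottommost pixel per column
        rows = np.flatnonzero(mask[:, j])
        if rows.size:
            outline[rows[0], j] = outline[rows[-1], j] = True
    return binary_dilation(outline, iterations=dilate_iter)
```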
#### Colors for emotions
Lemotif uses a pre-defined mapping from emotions to corresponding colors associated with that emotion, as shown in Fig. 3. These colors were selected based on common associations (e.g., dark red for angry) as indicated by studies [10] while making sure that each color is visually distinct [17].
### Image Synthesis
Taking a set of labels extracted by the NLP model consisting of topics and emotions, Lemotif generates image motifs depicting these salient themes in visual form. Acknowledging that creative preferences are inherently subjective and individual, we offer six creative visualization styles described next. The generated visualization image is then bounded by a shape icon representing the relevant topic. The human user exercises creative input in selecting a motif style and adjusting various input parameters according to personal taste, while the algorithm produces unique motifs with stochastic variations for each generated image.
#### Autoencoder
We train a convolutional neural network designed as an autoencoder, taking a low-resolution image as its input and predicting a high-resolution version of the same image as its output (generated output shown in A6 in Fig. 4). We design our model to perform this form of super-resolution because we want to provide the model a set of colors representing emotions and allow the model to generate creative and stochastic detail — in other words, we want the model to begin with limited information (colors) and learn higher-resolution artistic representations of the provided colors. Our model consists of three residual blocks [8] encoding a 16x16 input image to feature space and a standard convolutional decoder architecture containing 2D Convolution + BatchNormalization + LeakyReLU blocks producing a 256x256 output image. For our research study, this model is trained on a dataset of 14,621 abstract paintings from WikiArt (downloaded from https://github.com/cs-chan/ArtGAN), randomly cropped to 256x256 high-resolution ground truths and resized to 16x16 low-resolution inputs. In training, we minimize mean squared error loss between the generated output and the original cropped image. In inference, we randomly populate a 16x16 image with pixel colors corresponding to the provided emotions, producing an output image in the style of abstract art in our target colors. This model can also be trained on different datasets containing artworks from varying artists or artistic movements to produce motifs of diverse styles.
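A compact sketch of a 16x16-to-256x256 model of this kind is shown below (in PyTorch, for consistency with the earlier sketch); the channel counts, upsampling choices, and output activation are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return nn.functional.leaky_relu(x + self.body(x), 0.2)

class SuperResMotif(nn.Module):
    """Sketch: encode a 16x16 color seed and decode a 256x256 abstract-art-style motif."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1),
            ResBlock(ch), ResBlock(ch), ResBlock(ch))          # three residual blocks
        ups = []
        for _ in range(4):                                     # 16 -> 256 is four doublings
            ups += [nn.Upsample(scale_factor=2, mode="nearest"),
                    nn.Conv2d(ch, ch, 3, padding=1),
                    nn.BatchNorm2d(ch), nn.LeakyReLU(0.2)]
        self.decoder = nn.Sequential(*ups, nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, x):                                      # x: (B, 3, 16, 16) in [0, 1]
        return self.decoder(self.encoder(x))                   # (B, 3, 256, 256)

# Training would minimize MSE between model(low_res) and the 256x256 ground-truth crop.
seed = torch.rand(1, 3, 16, 16)      # pixels filled with the emotion colors at inference
motif = SuperResMotif()(seed)
```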
<figure><img src="content_image/1903.07766/1_2_shp.png"><figcaption>A1 - Circle Packing</figcaption></figure>
#### Carpet
Carpet (A3 in Fig. 4) divides the image into a grid, repeatedly placing parallel lines in each cell of the grid at one of four possible angles and filling in the resulting connected regions with a random color from the set of colors associated with the detected emotions. Users can adjust the thickness and angles of the lines placed and the grid size that the canvas is divided into.
#### Circle packing
In circle packing (A1 in Fig. 4), we fill a blank region of a given shape with circles of differing sizes, each filled with a random color out of the colors associated with the detected emotions. We start with a set of circle radii and the desired number of circles to be placed in the region for each radius, which can be adjusted by users to taste. Starting from the largest size, we sample a random location in the shape region. If a circle can be placed there without any part of the circle falling outside the region or overlapping an existing circle, we draw the circle. If not, we sample another random location and try again until a maximum number of trials is reached or the circle is successfully placed. This is repeated for the specified number of circles to be placed for each size.
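A minimal sketch of this rejection-sampling procedure is given below, ignoring the topic-shape mask for brevity; the radii, counts, and colors are placeholder values.

```python
import random

def pack_circles(size, radii_counts, colors, max_tries=500):
    """Place non-overlapping circles of given radii inside a size x size square,
    each tagged with a random emotion color. Returns (x, y, r, color) tuples."""
    placed = []
    for r, count in sorted(radii_counts.items(), reverse=True):   # largest radii first
        for _ in range(count):
            for _ in range(max_tries):
                x, y = random.uniform(r, size - r), random.uniform(r, size - r)
                if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2
                       for px, py, pr, _ in placed):              # reject overlaps
                    placed.append((x, y, r, random.choice(colors)))
                    break
    return placed

circles = pack_circles(256, {30: 5, 15: 20, 6: 60}, ["#c0392b", "#f1c40f", "#27ae60"])
```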
#### Glass
Glass (A5 in Fig. 4) attempts to mimic the appearance of stained glass by placing an assortment of icons in the topic shape at differing colors and opacities. By overlapping the canvas region with translucent icons across multiple passes, a random pattern of colors and shapes emerges. Users can customize the number of passes, how densely or sparsely icons are placed, and the distribution of icon sizes.
#### Tile
Tile (A4 in Fig. 4) divides the image into a grid, randomly placing a line in each cell along one of two diagonals and filling in the resulting connected regions with randomly chosen colors corresponding to the detected emotions. Users can adjust the grid size, line width, and probability that each one of the two diagonals is picked.
#### String Doll
String Doll (A2 in Fig. 4) draws quadratic bezier curves that connect two random points on a blank topical shape’s boundary, without the stroke going outside the boundary of the shape. As the control point of the quadratic bezier curve, we take the midpoint of the two end points and add zero-mean Gaussian noise to it. The standard deviation of the Gaussian is set to 20% of the size of the canvas. The strokes are colored uniformly at random by one of the colors corresponding to the emotions detected in the user’s journal entry for that topic. The width of the stroke is sampled from a distribution of sizes. To add some texture to the visualization, each stroke is overlaid by a stroke that is lighter or darker in color and a quarter of the original stroke’s width. Users can adjust the number and width of strokes and the standard deviation of the Gaussian controlling the placement of each quadratic bezier curve’s control point.
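The stroke geometry can be sketched as follows; boundary sampling and clipping to the topic shape are omitted, and only the 20% noise scale follows the text.

```python
import numpy as np

def quad_bezier(p0, p1, canvas_size, n_points=100, rng=np.random.default_rng()):
    """Points on a quadratic bezier curve from p0 to p1 whose control point is
    the midpoint of p0 and p1 plus zero-mean Gaussian noise (std = 20% of canvas)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ctrl = (p0 + p1) / 2 + rng.normal(0, 0.2 * canvas_size, size=2)
    t = np.linspace(0, 1, n_points)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * ctrl + t ** 2 * p1

stroke = quad_bezier((0, 50), (255, 200), canvas_size=256)   # one stroke's sample points
```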
## Dataset
We collected a dataset of 500 journal entries from 500 anonymous subjects on Amazon Mechanical Turk (AMT) describing their day, in response to the prompt “What were salient aspects of your day yesterday? How did you feel about them?” Figure 1 contains an example of one entry from a respondent. Each journal entry contains up to three text samples describing different aspects of the subject’s day, referred to as sub-entries henceforth in this paper. We asked subjects to annotate each sub-entry by selecting its associated topics and emotions from a drop down list populated with our set of topics and emotions, serving as ground truth labels for our natural language model evaluation. Our dataset is available at https://github.com/xaliceli/lemotif.
For entries in our dataset where subjects wrote meaningful responses relevant to the prompt, the mean entry (containing up to three sub-entries) was 507.6 characters (100.6 words) long; on average, each entry included 5.9 emotions and 3 topics. Fig. 5 shows the distribution of topics and feelings subjects chose to talk about. Given that not all 500 respondents wrote three sub-entries and some responses were omitted due to irrelevance to the prompt, 1,473 sub-entries were ultimately used for training and analysis.
Subjects were from the US (to ensure fluent English), had \(\geq\)95% approval rating on AMT, and had completed at least \(\geq\)5000 tasks on AMT in the past. The same qualifications were used for all AMT evaluations discussed in this paper.
<figure><img src="content_image/1903.07766/subj_histogram.png"><figcaption></figcaption></figure>
## Experiments and results
#### Evaluating icon and color choices
We showed subjects on AMT our list of 11 topics and a randomly ordered list of the 11 icons shown in Fig. 2. Subjects were asked to assign each icon to exactly one topic. 170 subjects performed this task. Given a topic, the right icon was picked 69% of the time (mean across subjects), compared to the random chance probability of \(\sim\)9%. If we assign a topic to the icon that was picked most often for that topic (majority vote across subjects), the accuracy is 82%. For a given topic, we sort all icons by how often they were selected across subjects. We find that the right icon falls at rank 1.27 out of 11 (on average across topics). The right icon falls in the top 20% of the sorted list 91% of the time across topics, and in the top third of the list 100% of the time. Overall, subjects appear to find our topic-icon mapping intuitive and natural.
We ran a similar study to evaluate our feeling-color mapping shown in Fig. 3. This is a more challenging task because (1) icons have descriptive shapes that can be recognized as objects with semantic meaning, while colors are significantly more ambiguous, and (2) there are 18 feelings and colors as opposed to fewer topics and icons. Note that the choice of colors (and icon) being intuitive and natural to users is a bonus, but not a requirement; as seen in Fig. 1, the topics and feelings are explicitly listed on the motif. 99 subjects participated in this study; because this task was more involved, fewer AMT users elected to participate compared to the topic-icon evaluation. We find that given a feeling, the right color was picked 15% of the time (mean across subjects). Chance performance would be \(\sim\)6%. If we assign a feeling to the color that was picked most often for that feeling (majority vote across subjects), the accuracy is 33%. For a given feeling, we sort all colors by how often they were selected across subjects. We find that the right color falls at rank 5.28 out of 18 (on average across feelings). The right color falls in the top 20% of the sorted list 61% of the time across feelings, and in the top third of the list 67% of the time. Overall, this shows that despite the mapping being ambiguous and subjective, subjects do find an intuitive and natural signal in our feelings-color mappings as well.
<figure><img src="content_image/1903.07766/nlp_stat.png"><figcaption>Figure 6: Cross-validation F1 and normalized accuracy statistics by varyingprobability thresholds used to indicate a positive prediction. Line at 0.2represents the threshold we select for inference.</figcaption></figure>
#### Evaluating natural language model
We trained our NLP model on our text dataset with user-supplied topic-emotion labels serving as ground-truth labels. We performed cross-validation across five train-test splits (80% train, 20% test) to calculate normalized accuracy and F1 metrics comparing ground truth versus predicted labels across the full dataset containing 1,473 text samples (sub-entries). Normalized accuracy is the mean between true positive and true negative rates. F1 (also known as F-score) is the harmonic mean of precision and recall. Recall that our NLP model outputs multi-label probability values between 0 and 1. Figure 6 shows normalized accuracy and F1 scores for various probability thresholds above which a label is counted as a positive classification. At our chosen threshold of 0.2, our model has a normalized accuracy of 82% and an F1 score of 0.62, compared to random chance values of 50% and 0.5. Since different thresholds yield similar accuracies, we use 0.2 during inference partially based on experimentation using new and arbitrary input samples. Given that our training set is fairly small, we find that lower thresholds produce undesirably many false positives when exposed to new text samples not from AMT. Additionally, because the distribution of supplied labels is uneven across topics and emotions (as shown in Fig. 5), we acknowledge that our model may not perform well on new samples referring to topics or emotions underrepresented in our dataset.
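In terms of standard tooling, normalized accuracy corresponds to the balanced accuracy (the mean of true positive and true negative rates). A sketch of such an evaluation is given below; the per-label averaging for normalized accuracy and the micro-averaging for F1 are our assumptions, since the paper does not specify the averaging scheme.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, f1_score

def evaluate(probs, targets, threshold=0.2):
    """probs, targets: (n_samples, 29) arrays of predicted probabilities and 0/1 labels.
    Assumes every label column contains both classes; averaging choices are ours."""
    preds = (probs >= threshold).astype(int)
    norm_acc = np.mean([balanced_accuracy_score(targets[:, j], preds[:, j])
                        for j in range(targets.shape[1])])
    f1 = f1_score(targets, preds, average="micro")
    return norm_acc, f1
```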
#### Evaluating creative motifs
The generated motifs should (1) separate the topical sources of emotions, (2) depict these sources visually, (3) depict the emotions visually, and (4) be creative and attractive. To evaluate this hypothesis, we design several baselines that allow us to measure the role of each factor. We strip away one factor at a time to derive our various baselines. To keep the number of baselines manageable, we create these baseline versions only for circle packing (Fig. 4 A1) and string doll (Fig. 4 A2).
* We start with our generated motif and remove the shape depiction, retaining the creative design, color depictions, and separate topic depictions. We replace each icon shape with a square whose contents are rendered according to our visualization styles. This gives us two baselines (B1 and B2) in Fig. 4.
* We can also start with our generated motifs and remove the creative design, while maintaining the shape and color depictions, as well as the topic breakdown. We color each shape with solid colors associated with the feelings mentioned for that topic. This gives us B3 in Fig. 4.
* We can now remove the shape information from the above baseline, and depict squares (instead of icons) for each topic colored in with solid colors (no creative aspect). This gives us B4 in Fig. 4.
* We can start with B3 and remove the detailed color information. Instead of using a color for each of the 18 feelings, we use just three colors: red, yellow, and green to depict negative, neutral or positive feelings. We mapped afraid, angry, anxious, ashamed, disgusted, frustrated, jealous and sad to negative; awkward, bored, calm, confused, nostalgic and surprised to neutral; and excited, happy, proud and satisfied to positive. We use the majority label across reported feelings to pick a color for that topic. This gives us B5 in Fig. 4.
* We can remove shape information from the above baseline to have squares colored in either red, yellow or green representing each topic. This gives us B6 in Fig. 4.
* Finally, we can remove the topic breakdown from the above baseline and have the entire day depicted as a red, yellow, or green square based on the most common label across reported feelings for the day. This gives us B7 in Fig. 4. As mentioned in the related work section, this mimics an existing app (Life Calendar) that shows a single colored dot for every week in the year.
To start, we combine these seven baselines with two of our proposed visualizations (Fig. 4 A1 and A2), giving us nine approaches to compare for this evaluation; later on we will compare all six visualization styles described in the Approaches section against each other and a smaller set of three baselines. We generate these visualizations for a subset of 100 journal entries from our dataset, using user-supplied ground-truth topic-emotion labels; each entry contains three sub-entries and their corresponding motifs. For parameters users can vary, such as line width and spacing, we set their values based on what we found most visually appealing and representative of each style’s overall design. We conduct pairwise evaluations on AMT. We show subjects a journal entry from our dataset, and all \(\binom{9}{2}=36\) pairs of visualizations. For each pair, we ask subjects “If you were using a journaling tool or app that automatically produced a visual summary of your day, which one of these visualizations would you prefer?” 936 unique subjects participated in this study, each providing us a rating for the 36 pairs for a single journal entry. Each journal entry was evaluated by 6 to 17 subjects, with an average of 9.4 and mode of 10.
By comparing pairs of the proposed approaches, we can evaluate the role of the four visual factors listed above. How often subjects pick B6 over B7 reveals how important it is for the motif to have a breakdown across topics. Similarly, comparing B5 to B6, B3 to B4, A1 to B1, and A2 to B2, indicates the importance of a topic being depicted by a shape as opposed to a generic square. Comparing B3 to B5, and B4 to B6, indicates the importance of each feeling being depicted by a nuanced rather than coarse color for negative, neutral, and positive feelings. Comparing A1 to A2 indicates which of the two creative motifs subjects prefer. We find that subjects prefer circle packing (A1) to string doll (A2) 72% of the time. We focus our evaluation of the creative aspect on A1. Comparing A1 to B3 and B1 to B4 reveals how much subjects prefer creative designs.
<figure><img src="content_image/1903.07766/factors.png"><figcaption>Figure 7: Percentage of times subjects prefer a visualization with the four factors over corresponding baselines, for subjects who were consistent across their pairwise preferences and those who were not. P-value is from a one-sample t-test compared to null hypothesis of 50%, i.e. random chance (shown as dashed line). N reflects the number of pairs in which each relevant comparison was performed. Error bars represent 95% confidence intervals.</figcaption></figure>
In Fig. 7, for each of the four factors, we show how often a visualization with that factor is preferred over a corresponding visualization without that factor (as described above). We show these statistics separately for subjects who were consistent in their preferences vs. those that had some contradictory preferences. Recall that we had each subject report their preferences for all \(\binom{9}{2}=36\) visualization pairs for a single journal entry. We can check whether the pairwise preferences reported are consistent across the board or not (if a \(>\) b and b \(>\) c, then a should be \(>\) c). Presumably, subjects who provide consistent preferences are likely to be doing the task more carefully and/or have more clear preferences. We find that 36% of our subjects were perfectly consistent across the 36 pairwise comparisons. Across the board in Fig. 7, the four factors are preferred, especially for subjects who were consistent in their responses.
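The transitivity check described above is simple to implement. The sketch below is our own illustration with hypothetical data structures, not the authors' code; for a complete set of pairwise comparisons, such as the 36 pairs each subject rated, transitivity is equivalent to the absence of 3-cycles in the "beats" relation.

```python
from itertools import permutations

def is_consistent(pairwise_winners):
    """pairwise_winners maps each compared pair (a, b) to the item the subject preferred.

    For a complete tournament (every pair compared once), the preferences are
    transitive exactly when the 'beats' relation contains no 3-cycle.
    """
    beats = set()
    for (a, b), winner in pairwise_winners.items():
        loser = b if winner == a else a
        beats.add((winner, loser))
    items = {x for pair in pairwise_winners for x in pair}
    return not any((a, b) in beats and (b, c) in beats and (c, a) in beats
                   for a, b, c in permutations(items, 3))

# Toy example over three of the nine approaches: a preference cycle is inconsistent.
prefs = {("A1", "B3"): "A1", ("B3", "B6"): "B3", ("A1", "B6"): "B6"}
print(is_consistent(prefs))  # -> False
```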
<figure><img src="content_image/1903.07766/h2h_all.png"><figcaption>Figure 8: (a) Percent of pairs in which style was preferred. P-value is from a one-sample t-test compared to null hypothesis of 50%. N reflects the number of pairs in which each style was compared.</figcaption></figure>
#### Evaluating additional visualization styles
Having established that our four creative factors are generally preferred by human subjects, we next evaluate all six visualization styles (A1-A6 in Fig. 4) against a smaller set of three baselines: B3 (topical shapes and emotional colors with no additional creative style), B5 (topical shapes and positive-neutral-negative colors), and B6 (squares and positive-neutral-negative colors). Similar to the first evaluation performed, we generate 36 visualization pairs and ask respondents to select the style they prefer. 854 unique subjects participated in this study.
Fig. 8 shows user preferences across all six visualization styles and three baseline comparisons. Overall, the most preferred styles were the creative visualizations, consistent with what we saw in the prior evaluation. The one baseline comparison that performed comparably to random chance was B3, which includes both topical shapes and our full set of 18 emotional colors; though this baseline was not intentionally designed as a creative style, one could argue that placing colors in equally distributed regions _is_ a creative visualization. After all, there are entire artistic movements such as color field painting with similarly "flat" aesthetics. We also note that, when evaluating which style was each respondent’s favorite (defined as the style that was most frequently preferred for each respondent), preferences are widely distributed across styles. For example, even though the autoencoder was preferred fewer than 50% of the time overall, 12% of respondents preferred it above all other styles, comparable to the 12% of respondents who most favored the glass visualization which scored highest in pairwise comparisons. The diversity of preferences highlights the personal nature of aesthetics and how the act of choosing a motif to use can be a creative decision in and of itself.
#### Evaluating engagement
The real evaluation of a system like Lemotif is how users would engage with it — would users journal more regularly, feel more creative, and/or gain actionable insights from their motifs? Such a longitudinal evaluation is outside the scope of this paper. As a proxy, we ran two surveys on AMT. The first survey (study S1 with 100 unique and valid responses) described the concept of Lemotif to subjects and showed example circle packing motifs for reference. The second survey (study S2 with 99 unique and valid responses) directed subjects to a web demo asking them to write three pieces of text about their day and generated motifs in all six visualization styles based on labels automatically detected from their entries. Note that S2 evaluates our entire system end-to-end on free-form entries.
Table 1 shows the percentage of respondents agreeing with each statement. In both studies, a majority of subjects stated they would use an app like Lemotif, that such an app would make journaling more enjoyable, that they would write more regularly with an app like Lemotif, and that the motifs "get their creative juices flowing." 71% of respondents who used the end-to-end demo agreed the motifs were representative of what they wrote, further affirming that our NLP model effectively extracts topic-emotion labels and that our mapping of abstract concepts to visual representations feels intuitive to human subjects. Between S1 and S2, responses to the metrics shown are comparable, with the exception of "more enjoyable" and "get creative juices flowing" receiving lower scores in the end-to-end demo. Since S1 only describes Lemotif to users, they are free to imagine an ideal user interface. Moreover, assessment of the end-to-end demo would also suffer from errors in the NLP model, which is not perfect. We posit that a full app with attention to user experience design and our full set of customization options would likely score higher than the current S2 demo.
Question | %Yes (S1) | % Yes (S2)
---|---|---
Representative of entry? | NA | 71%
Would use? | 59% | 56%
Make more enjoyable? | 68% | 59%
Would write more regularly? | 61% | 61%
Get creative juices flowing? | 59% | 51%
Table 1: Survey responses to Lemotif app
In the full demo, respondents were also asked to select their favorite visualization style out of all six presented in randomized order. Table 2 shows the percentage of respondents selecting each style as their favorite out of 84 respondents who answered this question. Similar to our other evaluations, we see that no one style is dominantly favored.
Style | % Favorite
---|---
Autoencoder (A6) | 14%
Carpet (A3) | 19%
Circle Packing (A1) | 23%
Glass (A5) | 8%
Tile (A4) | 25%
String Doll (A2) | 11%
Table 2: Percentage of respondents choosing style as favorite
Overall across our multiple evaluations, we see that (1) a majority of subjects find our visual representation of abstract concepts intuitive, (2) our NLP model extracts accurate labels for a majority of entries, (3) a majority of subjects prefer our motif designs over corresponding baselines, and (4) a majority of prospective users consider Lemotif a useful system that would increase their likelihood to journal and enjoyment of journaling.
## Future Work
Future work involves developing Lemotif into an app that allows users to accumulate their entries over time and view temporal (weekly, monthly, etc.) summaries. The app could allow for custom mappings, such as letting users specify the name of their job so the NLP model always identifies it as "work," or letting them correct the detected topics and emotions so that, over time, the model learns the user’s personal life and writing style. Training our model with more data containing more diverse labels would also likely improve its accuracy.
Additional visualization styles are possible given the diversity of generative art. Our autoencoder model would likely improve with architectural changes, adversarial discriminator loss (like a GAN), and hyperparameter tuning. With a sufficiently large and annotated dataset, a conditional GAN could be trained that takes in color labels directly rather than as low-resolution images. Multiple models could be trained on different artists and artistic movements. Within an app system, users could provide feedback on generated motifs they like more or less, further training the image models to the user’s own taste. Additional input dimensions like the intensity of emotion could be incorporated, such that stronger emotions appear more saturated.
## Conclusion
In summary, we present Lemotif. It takes as input a text-based journal entry indicating what aspects of the user’s day were salient and how they made them feel and generates as output a motif – a creative abstract visual depiction – of the user’s day. As a visual journal used over periods of time, Lemotif aims to make associations between feelings and parts of a user’s life more apparent, presenting opportunities to take actions towards improved emotional well being.
Lemotif is built on five underlying principles: (1) separate out the sources of emotions, (2) depict these sources visually, (3) depict these emotions visually, (4) generate visualizations that are creative and attractive, and (5) identify and visualize detected topics and emotions automatically using machine learning and computational methods. We verify via human studies that each of the first four factors contributes to the proposed motifs being favored over corresponding baselines; accuracy and F1 metrics indicate the NLP model greatly outperforms random chance. We also find that subjects are interested in using an app like Lemotif and consider the generated motifs representative of their journal entries.
#### Acknowledgments
Thanks to Ayush Shrivastava, Gauri Shri Anantha, Abhishek Das, Amip Shah, Sanyam Agarwal, Eakta Jain, and Geeta Shroff for participating in an earlier version of this study. Special thanks to Abhishek Das for useful discussions and feedback. At Facebook AI Research, we understand that researching a topic like emotion is nuanced and complicated. This work does not research what causes emotional well being (or not). It does not mine Facebook data to extract emotions, or use Facebook data or the Facebook platform in any other way. It simply generates visualizations based on topics and emotions reported by subjects explicitly electing to participate in our study, and analyzes which visualizations subjects prefer. Creative applications of AI are a powerful avenue by which AI can collaborate with humans for positive experiences. This work is one (small) step in that direction.
|
1406.6202 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 137970,
"num_imgs": 1,
"llama3_tokens_count": 50579
} | [
"content_image/1406.6202/x1.png"
] | # The foundations of fractional Mellin transform analysis†
Carlo Bardaro
Department of Mathematics and Computer Sciences, University of Perugia, via Vanvitelli 1, I-06123 Perugia, Italy, e-mail: carlo.bardaro@unipg.it; corresponding author
Paul L. Butzer
Lehrstuhl A fur Mathematik RWTH-Aachen Templergraben 55, D-52062 Aachen, Germany, e-mail: butzer@rwth-aachen.de
Ilaria Mantellini
Department of Mathematics and Computer Sciences, University of Perugia, via Vanvitelli 1, I-06123 Perugia, Italy, e-mail: ilaria.mantellini@unipg.it
_In Memory of Rashid Gamid-oglu Mamedov, a pioneer in Mellin Analysis_
**Abstract:**In this article we study the basic theoretical properties of Mellin-type fractional integrals, known as generalizations of the Hadamard-type fractional integrals. We give a new approach and version, specifying their semigroup property, their domain and range. Moreover we introduce a notion of strong fractional Mellin derivatives and we study the connections with the pointwise fractional Mellin derivative, which is defined by means of Hadamard-type fractional integrals. One of the main results is a fractional version of the fundamental theorem of differential and integral calculus in the Mellin frame. In fact, in this article it will be shown that the very foundations of Mellin transform theory and the corresponding analysis are quite different to those of the Fourier transform, alone since even in the simplest non-fractional case the integral operator (i.e. the anti-differentiation operator) applied to a function \(f\) will turn out to be the \(\int_{0}^{x}f(u)du/u\) with derivative \((xd/dx)f(x).\) Thus the fundamental theorem in the Mellin sense is valid in this form, one which stands apart from the classical Newtonian integral and derivative. Among the applications two fractional order partial differential equations are studied.**AMS Subject Classification:**47G10, 26A33, 44A15.**KeyWords:**Mellin transform, Hadamard-type fractional derivatives and integrals, strong fractional Mellin derivative, generalized Stirling functions and Stirling numbers, fractional order partial differential equations.
## 1 **Introduction**
The theory of Mellin transforms as well as Mellin approximation theory was introduced by R.G. Mamedov in his treatise [45], which includes also previous results in this subject obtained in collaboration with G.N. Orudzhev (see [46, 47, 48]). In his review Professor H.J. Glaeske (MR1235339–94:44003) writes: _This book deals with the theory of The Mellin transform and its applications to approximation theory based on results of the school of I.M. Dzhrbashyan and the methods of the school of P.L. Butzer on Fourier Analysis and approximation_. Somewhat later Mellin transform theory was presented in a systematic form, fully independently of Fourier analysis, by Butzer and Jansche in their papers [18], [19]. Further important developments were then given in [20], and later on in the present line of research in [6, 7, 8, 9, 10, 11, 12, 49, 2, 3].
In the papers [22, 23, 24, 25, 26] a broad study of fractional Mellin analysis was developed in which the so-called Hadamard- type integrals, which represent the appropriate extensions of the classical Riemann-Liouville and Weyl fractional integrals, are considered (see also the book [42]). These integrals are also connected with the theory of moment operators (see [12],[11], [14]). The purpose of this article is not only a continuation of these topics but also to present a new, almost independent approach, one starting from the very foundations. As remarked in [22], in terms of Mellin analysis, the natural operator of fractional integration is not the classical Riemann-Liouville fractional integral of order \(\alpha>0\) on \(\mathbb{R}^{+}\), namely (see [54], [50], [33], [34])
\[(I^{\alpha}_{0+}f)(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}(x-u)^{\alpha-1}f(u) du\leavevmode\nobreak\ \leavevmode\nobreak\ (x>0)\] (1)
but the Hadamard fractional integral, introduced essentially by Hadamard in [39],
\[(J^{\alpha}_{0+}f)(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}\bigg{(}\log\frac{x} {u}\bigg{)}^{\alpha-1}f(u)\frac{du}{u}\leavevmode\nobreak\ \leavevmode\nobreak \ (x>0).\] (2)
Thus the natural operator of integration (anti-differentiation) in the Mellin setting is not \(\int_{0}^{x}f(u)du\) but \(\int_{0}^{x}f(u)\frac{du}{u}\) (the case \(\alpha=1\)). It is often said that a study of Mellin transforms as an independent discipline is fully superfluous, since one supposedly can reduce its theorems and results to the corresponding ones of Fourier analysis by a simple change of variables and functions. It may be possible to reduce a formula by such a change of operations, but not the precise hypotheses under which the formula is valid. Moreover, the fact that the natural operator of integration in the Mellin frame is not (1) but the Hadamard fractional integral (2) (which is a compact form of the iterated integral (6), see Section 4), the latter turning out to be the anti-differentiation operator associated with the differentiation operator \(D_{0+,0}f\) in (4) (see below), in the sense that the fundamental theorem of the differential and integral calculus must be valid in the Mellin frame, makes the change-of-operation argument fully obsolete. This will become evident as we proceed, especially in Theorems 3-4 and Theorems 6-12 below. Thus the very foundations of Mellin analysis are quite different from those of classical Fourier analysis.
For the development of the theory, it will be important to consider the following generalization of the fractional integral, known as the Hadamard-type fractional integrals, for \(\mu\in\mathbb{R},\) namely (see [22, 23, 24, 25, 26, 42])
\[(J^{\alpha}_{0+,\mu}f)(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}\bigg{(}\frac{u} {x}\bigg{)}^{\mu}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}f(u)\frac{du}{u} \leavevmode\nobreak\ \leavevmode\nobreak\ (x>0)\] (3)
for functions belonging to the space \(X_{c}\) of all measurable complex-valued functions defined on \(\mathbb{R}^{+},\) such that \((\cdot)^{c-1}f(\cdot)\in L^{1}(\mathbb{R}^{+}).\) As regards the classical Hadamard fractional integrals and derivatives, some introductory material about fractional calculus in the Mellin setting was already treated in [45] and [54].
In Section 2 we recall some basic tools and notations of Mellin analysis, namely the Mellin transform, along with its fundamental properties, and the notion of the basic Mellin translation operator, which is now defined via a dilation operator instead of the usual translation (see [18]; for other classical references see [15], [36], [52], [57], [62], [61]).
In Section 3 we will introduce and study a notion of a strong fractional derivative in the spaces \(X_{c},\) which represents an extension of the classical strong derivative of Fourier analysis in \(L^{p}-\)spaces (see [28]). The present notion is inspired by an analogous construction given in [59], [33] for the Riemann-Liouville fractional derivatives in a strong sense. This method is based on the introduction of certain fractional differences, which make use of the classical translation operator. Another important fact is that fractional differences are now defined by an infinite series. Our definition here, follows this approach, using the Mellin translation operator. Our definition reproduces the Mellin differences of integral order, as given in [20], in which we have a finite sum.
It should be noted that a different approach for spaces \(X_{0}\) was introduced in [45], pages 175-176, starting with the incremental ratios of the integral (2). A relevant part of the present paper (Section 4) deals with the pointwise fractional derivative of order \(\alpha>0,\) known as the "Hadamard-type fractional derivative" in the local spaces \(X_{c,loc},\) and with its links with the strong derivatives. This notion originates from the analogous concept of Riemann-Liouville theory, and was introduced in [23] using the Hadamard-type fractional integrals. It reads as follows
\[(D^{\alpha}_{0+,\mu}f)(x)=x^{-\mu}\delta^{m}x^{\mu}(J^{m-\alpha}_{0+,\mu}f)(x),\] (4)
where \(m=[\alpha]+1\) and \(\delta:=(x\frac{d}{dx})\) is the Mellin differential operator \((\delta f)(x)=xf^{\prime}(x),\) provided \(f^{\prime}(x)\) exists. For \(\mu=0\) we have the so called Hadamard fractional derivative, treated also in [45], [54]. Note that the above definition reproduces exactly the Mellin derivatives \(\Theta_{c}^{k}f\) of integral order when \(\alpha=k\in\mathbb{N}.\) Thus \(D^{\alpha}_{0+,\mu}f\) represents the natural fractional version of the differential operator \(\Theta_{c}^{k},\) in the same way that the Riemann-Liouville fractional derivative is the natural extension of the usual derivative. Paper [41], gives some sufficient conditions for the existence of the pointwise fractional derivative for functions defined in bounded intervals \(I\subset\mathbb{R}^{+},\) involving spaces of absolutely continuous functions in \(I.\)
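As a quick consistency check (added here for illustration, for sufficiently smooth \(f\)): taking \(\alpha=1\) and \(\mu=0\) in (4), so that \(m=2,\) one finds
\[(D^{1}_{0+,0}f)(x)=\delta^{2}(J^{1}_{0+}f)(x)=\delta\bigg{(}x\frac{d}{dx}\int_{0}^{x}f(u)\frac{du}{u}\bigg{)}=\delta f(x)=xf^{\prime}(x),\]
in agreement with the statement that \(D^{\alpha}_{0+,\mu}\) reduces to a Mellin derivative of integral order when \(\alpha=k\in\mathbb{N}.\)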
Since the definition of the pointwise fractional derivatives is based on a Hadamard-type integral, it is important to study in depth the domain and the range of these integral operators. As far as we are aware this was not sufficiently developed in the literature so far. Here we define the domain of the operator (3) as the subspace of all functions such that the integral exists as a Lebesgue integral. A basic result in this respect is the semigroup property of \(J^{\alpha}_{0+,c}.\) This was first studied in [45] and [54] for the Hadamard integrals (2) and then developed for the integrals (3) in [24] and [41] (see also the recent books [42], [5]). However, the above property was studied only for functions belonging to suitable subspaces of the domain, namely the space \(X^{p}_{c}\) of all the functions \(f:\mathbb{R}^{+}\rightarrow\mathbb{C}\) such that \((\cdot)^{c-1}f(\cdot)\in L^{p}(\mathbb{R}^{+}),\) or for \(L^{p}(a,b)\) where \(0<a<b<\infty.\)
Here we prove the semigroup property in a more general form, using minimal assumptions. This extension enables us to deduce the following chain of inclusions for the domains of the operators \(J^{\alpha}_{0+,c}.\)
\[DomJ^{\beta}_{0+,c}\subset X_{c,loc}=DomJ^{1}_{0+,c}\subset DomJ^{\alpha}_{0+, c},\]
for \(\alpha<1<\beta,\) and all inclusions are strict.
Concerning the range, we show that \(J^{\alpha}_{0+,c}f\in X_{c,loc}\) whenever \(f\in DomJ^{\alpha+1}_{0+,c},\) while in general \(f\in DomJ^{\alpha}_{0+,c}\) does not imply that \(J^{\alpha}_{0+,c}f\in X_{c,loc}.\)
For spaces \(X_{c}\) we have the surprising result that \(J^{\alpha}_{0+,c}f\not\in X_{c}\) for any nontrivial non-negative function \(f\in DomJ^{\alpha}_{0+,c}.\) This fact gives problems for the evaluation of the Mellin transform of the function \(J^{\alpha}_{0+,c}f.\) In order to avoid this problem, we prove that if \(f\in DomJ^{\alpha}_{0+,c}\cap\bigcap_{\mu\in[\nu,c]}X_{\mu},\) then \(J^{\alpha}_{0+,c}f\in X_{\nu}\) and so its Mellin transform can be evaluated on the line \(\nu+it,\) with \(\nu<c.\)
We then apply the theory to deduce one of the main results of this paper, namely the fundamental theorem of the fractional differential and integral calculus in the Mellin frame, here established under sharp assumptions. We consider also some more general formulae, involving different orders of fractional integration and differentiation. Similar results were also given in [41], [42], however in restricted subspaces (see the remarks in Section 4). In particular, one of the two fundamental formulae is given there under the strong assumption that the function \(f\) belongs to the range of \(J^{\alpha}_{0+,\mu}(X^{p}_{0+,c}),\) with \(\mu>c.\)
In Section 5 we prove an equivalence theorem with four equivalent statements, which connects fractional Hadamard-type integrals, strong and pointwise fractional Mellin derivatives and the Mellin transform (see Theorem 8 below). As far as we know, a fundamental theorem with four equivalent assertions in the form presented here for the Mellin transform in the fractional case has never been stated explicitly, even for the Fourier transform. As a fundamental theorem in the present sense it was first established for \(2\pi-\)periodic functions via the finite Fourier transform in [33], and for the Chebyshev transform (see e.g. [36], pp. 116-122), in [30], [31]. Fractional Chebyshev derivatives were there defined in terms of fractional order differences of the Chebyshev translation operator, the Chebyshev integral by an associated convolution product. The next fundamental theorem, after that for Legendre transforms (see e.g. [36], pp. 122-131; [16], [56]), was the one concerned with the Jacobi transform, see e.g. [32]. In their inimitable book [36], H.J. Glaeske, A.P. Prudnikov and K.A. Skornik study the Mellin transform and its essential properties (pp. 55-67), not as an independent discipline but by making use of the corresponding properties of the Fourier transform, the reduction being carried out with unusual precision. In other respects their presentation is standard. Thus their integral is the classical one, i.e. \(F(x)=\int_{0}^{x}f(u)du,\) with Mellin transform \(M[F](s)=-s^{-1}M[f](s+1).\) They were not aware of [18]. However, their sections on the Chebyshev, Legendre, Gegenbauer and Jacobi transforms make interesting reading and are unorthodox. Here their chief properties are based on the definitions of an associated translation operator for each transform, an approach carried out systematically for the Chebyshev and Legendre transforms in [31] and [56], which are cited by the three authors. However, they do not continue the process and define the associated derivative concepts in terms of the respective translation operators (probably due to lack of space). This would have led them to the fundamental theorems of the differential and integral calculus in the setting of the respective transforms. Nevertheless the material of these sections has never been treated in book form as yet. The chapter on Mellin transforms in the unique handbook [61], also written in the classical style, bears the individual stamp of the author, A. Zayed.
In Section 6 we describe some special cases of interest in applications, while in Section 7 we apply our theory to two fractional partial differential equations. The use of Mellin transforms for solving partial differential equations originates from certain boundary value problems in wedge-shaped regions, see e.g. [62], [43] and, in the fractional frame, was considered by various authors for the study of fractional versions of the diffusion equation (see e.g. [60], [55], [40], [42]). However, the use of Mellin transforms for solving fractional differential equations with Hadamard derivatives is not usual. Also, there are a few contributions dealing with pure Hadamard derivatives (see e.g. [42], [5], [44], [53], [37]). Most fractional equations, are studied using different types of fractional derivatives, (Riemann-Liouville, Caputo, etc). Here we apply our theory to an integro-differential equation which can be reduced to a fractional evolution equation, with Hadamard fractional derivative. A similar equation was also considered in [42] but with the Caputo fractional derivative. Here we give the exact solution of the evolution equation, using just Mellin transforms and the fractional theory developed in this paper. As a second example, we consider a boundary value problem for a fractional diffusion equation, using the same approach. In both the examples the (unique) solution is given in terms of a Mellin convolution operator.
In the very recent book [5] numerical methods for solving fractional differential equation are treated, using mainly Caputo and Riemann-Liouville fractional theories.
## 2 Preliminaries
Let \(L^{1}=L^{1}(\mathbb{R}^{+})\) be the space of all Lebesgue measurable and integrable complex-valued functions defined on \(\mathbb{R}^{+},\) endowed with the usual norm.
Let us consider the space, for some \(c\in\mathbb{R},\)
\[X_{c}=\{f:\mathbb{R}^{+}\rightarrow\mathbb{C}:f(x)x^{c-1}\in L^{1}(\mathbb{R}^ {+})\}\]
endowed with the norm
\[\|f\|_{X_{c}}=\|f(\cdot)(\cdot)^{c-1}\|_{L^{1}}=\int_{0}^{\infty}|f(u)|u^{c-1}du.\]
More generally by \(X^{p}_{c}\) we denote the space of all functions \(f:\mathbb{R}^{+}\rightarrow\mathbb{C}\) such that \((\cdot)^{c}f(\cdot)\in L^{p}(\mathbb{R}^{+}),\) with \(1<p<\infty.\) In particular when \(c=1/p,\) the space \(X^{p}_{c}\) coincides with the classical \(L^{p}(\mathbb{R}^{+})\) space.
For \(a,b\in\mathbb{R}\) we define the spaces \(X_{(a,b)},\leavevmode\nobreak\ X_{[a,b]}\) by
\[X_{(a,b)}=\bigcap_{c\in]a,b[}X_{c},\leavevmode\nobreak\ \leavevmode\nobreak\ X _{[a,b]}=\bigcap_{c\in[a,b]}X_{c}\]
and, for every \(c\) in \((a,b)\) or \([a,b]\), \(\|f\|_{X_{c}}\) is a norm on them.
Note that, for any \(a,b\in\mathbb{R},\) with \(a<b,\) if \(f\in X_{a}\cap X_{b}\), then \(f\in X_{[a,b]}\) and moreover
\[\|f\|_{X_{c}}\leq\|f\|_{X_{a}}+\|f\|_{X_{b}},\]
for every \(c\in[a,b].\) For these and other results see [18]. In what follows, we denote by \(\chi_{A}(x)\) the characteristic function of the set \(A\subset\mathbb{R}^{+}.\)
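The inequality above follows (a one-line justification added here) from the elementary pointwise bound
\[x^{c-1}\leq x^{a-1}+x^{b-1}\ \ \ (x\in\mathbb{R}^{+},\ a\leq c\leq b),\]
valid because \(x^{c-1}\leq x^{a-1}\) for \(0<x\leq 1\) and \(x^{c-1}\leq x^{b-1}\) for \(x\geq 1;\) multiplying by \(|f(x)|\) and integrating over \(\mathbb{R}^{+}\) gives \(\|f\|_{X_{c}}\leq\|f\|_{X_{a}}+\|f\|_{X_{b}}.\)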
We define for every \(f\in X_{c}\) the Mellin transform \([f]^{\wedge}_{M}\) of \(f\) by
\[M[f](s)\equiv[f]^{\wedge}_{M}(s)=\int_{0}^{\infty}u^{s-1}f(u)du\]
where \(s=c+it,t\in\mathbb{R}.\)
The notation \(M[f(\cdot)](s)\) of the Mellin transform signifies the fact that one of its essential roles is to solve analytical problems by transforming them into another function space, solving the problem (which should be simpler) in the transformed state, and then applying a (suitable) Mellin inversion formula to obtain the solution in the original function space.
Basic in this respect are the linearity and boundedness properties, thus
\[M[af(\cdot)+bg(\cdot)](s)=aM[f(\cdot)](s)+bM[g(\cdot)](s)\leavevmode\nobreak\ \leavevmode\nobreak\ (f,g\in X_{c},\leavevmode\nobreak\ a,b\in\mathbb{R})\]
\[|M[f(\cdot)](s)|\leq\|f\|_{X_{c}}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (s=c+it).\]
As a consequence of the boundedness property, if \((f_{n})_{n}\) is a sequence of functions in \(X_{c}\) convergent in \(X_{c}\) to a function \(f,\) then \(M[f_{n}]\) converges uniformly to \(M[f]\) on the line \(s=c+it,\leavevmode\nobreak\ t\in\mathbb{R}.\)
We need several operational properties.
The Mellin translation operator \(\tau_{h}^{c}\), for \(h\in\mathbb{R}^{+},\leavevmode\nobreak\ c\in\mathbb{R},\) \(f:\mathbb{R}^{+}\rightarrow\mathbb{C},\) is defined by
\[(\tau_{h}^{c}f)(x):=h^{c}f(hx)\leavevmode\nobreak\ \leavevmode\nobreak\ (x\in \mathbb{R}^{+}).\]
Setting \(\tau_{h}:=\tau^{0}_{h},\) then
\[(\tau_{h}^{c}f)(x)=h^{c}(\tau_{h}f)(x),\leavevmode\nobreak\ \|\tau_{h}^{c}f\|_ {X_{c}}=\|f\|_{X_{c}},\leavevmode\nobreak\ (\tau_{h}^{c})^{j}f(x)=h^{jc}f(h^{j }x)=(\tau_{h^{j}}^{c}f)(x).\]
Proposition 2 and Lemma 3 in [18], state the following:
**Lemma 1**: _The Mellin translation operator \(\tau_{h}^{\overline{c}}:X_{c}\to X_{c}\) for \(c,\overline{c}\in\mathbb{R},\leavevmode\nobreak\ h\in\mathbb{R}^{+}\) is an isomorphism with \((\tau_{h}^{\overline{c}})^{-1}=\tau_{1/h}^{\overline{c}}\) and_
\[\|\tau_{h}^{\overline{c}}f\|_{X_{c}}=h^{\overline{c}-c}\|f\|_{X_{c}} \leavevmode\nobreak\ \leavevmode\nobreak\ (f\in X_{c})\]
_having the properties_
* \(M[\tau_{h}^{\overline{c}}f](s)=h^{\overline{c}-s}M[f](s),\) _in particular_ \(M[\tau_{h}f](s)=h^{-s}M[f](s);\)__
* \(\lim_{h\to 1}\|\tau_{h}^{\overline{c}}f-f\|_{X_{c}}=0.\)__
When \(\overline{c}=0\) Property ii), in case of continuous functions \(f\), expresses uniform continuity in the Mellin frame, taking the usual \(L^{\infty}-\)norm, i.e.
\[\lim_{h\to 1}\|\tau_{h}f-f\|_{\infty}=0.\]
It is equivalent to the so-called log-uniform continuity due to Mamedov (see [45], page 7), which may be expressed as follows: a function \(f:\mathbb{R}^{+}\rightarrow\mathbb{C}\) is log-uniformly continuous on \(\mathbb{R}^{+}\) if for every \(\varepsilon>0\) there exists \(\delta_{\varepsilon}>0\) such that \(|f(u)-f(v)|<\varepsilon,\) whenever \(|\log u-\log v|<\delta_{\varepsilon}.\) Indeed the continuity of the operator \(\tau_{h}\) implies that \(|f(hx)-f(x)|<\varepsilon,\) for \(|h|<\delta_{\varepsilon},\) uniformly with respect to \(x\in\mathbb{R}^{+}.\) It should be noted that this notion is different from the usual uniform continuity. For example, the function \(f(u)=\sin u\) is obviously uniformly continuous, but not log-uniformly continuous on \(\mathbb{R}^{+}\), while the function \(g(u)=\sin(\log u)\) is log-uniformly continuous but not uniformly continuous on \(\mathbb{R}^{+}.\) However, the two notions are equivalent on every bounded interval \([a,b]\) with \(a>0.\)
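For a concrete instance of property i) of Lemma 1 (an added illustration, not in the original): take \(f(x)=e^{-x},\) which belongs to \(X_{c}\) for every \(c>0,\) with \(M[f](s)=\Gamma(s).\) Then
\[(\tau_{h}^{c}f)(x)=h^{c}e^{-hx},\qquad M[\tau_{h}^{c}f](s)=h^{c}\int_{0}^{\infty}x^{s-1}e^{-hx}dx=h^{c-s}\Gamma(s)=h^{c-s}M[f](s).\]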
The Mellin convolution product, denoted by \(f\ast g\), of two functions \(f,g:\mathbb{R}^{+}\rightarrow\mathbb{C},\) is defined by
\[(f\ast g)(x):=\int_{0}^{+\infty}g(\frac{x}{u})f(u)\frac{du}{u}=\int_{0}^{+ \infty}(\tau^{c}_{1/u}f)(x)g(u)u^{c}\frac{du}{u}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x\in\mathbb{R}^{+})\]
in case the integral exists. It has the properties
**Lemma 2**:
* _If_ \(f,g\in X_{c},\) _for_ \(c\in\mathbb{R},\) _then_ \(f\ast g\) _exists (a.e.) on_ \(\mathbb{R}^{+},\) _it belongs to_ \(X_{c}\)_, and_ \[\|f\ast g\|_{c}\leq\|f\|_{X_{c}}\|g\|_{X_{c}}.\] _If in addition_ \(x^{c}f(x)\) _is uniformly continuous on_ \(\mathbb{R}^{+},\) _then_ \(f\ast g\) _is continuous on_ \(\mathbb{R}^{+}.\)__
* _(Convolution Theorem). If_ \(f,g\in X_{c}\) _and_ \(s=c+it,\leavevmode\nobreak\ t\in\mathbb{R},\) _then_ \[M[f\ast g](s)=M[f](s)M[g](s).\]
* _(Commutativity and Associativity). The convolution product is commutative and associative, thus for_ \(f_{1},f_{2},f_{3}\in X_{c}\) _there holds true (a.e.)_ \[f_{1}\ast f_{2}=f_{2}\ast f_{1},\leavevmode\nobreak\ \leavevmode\nobreak\ (f_{ 1}\ast f_{2})\ast f_{3}=f_{1}\ast(f_{2}\ast f_{3}).\] _In particular_ \(X_{c}\) _is a Banach algebra._
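A worked instance of the Convolution Theorem ii) (added for illustration; the choice of \(f,g\) is ours): take \(f(x)=g(x)=e^{-x}\in X_{c}\) for \(c>0.\) Then, interchanging the order of integration (Fubini's theorem),
\[M[f\ast g](s)=\int_{0}^{\infty}x^{s-1}\bigg{(}\int_{0}^{\infty}e^{-x/u}e^{-u}\frac{du}{u}\bigg{)}dx=\int_{0}^{\infty}\frac{e^{-u}}{u}\bigg{(}\int_{0}^{\infty}x^{s-1}e^{-x/u}dx\bigg{)}du=\Gamma(s)\int_{0}^{\infty}u^{s-1}e^{-u}du=\Gamma(s)^{2},\]
which is indeed \(M[f](s)\,M[g](s).\)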
## 3 The strong Mellin fractional differential operator
Let us denote by \(I\) the identity operator over the space of all measurable functions on \(\mathbb{R}^{+}.\)
The Mellin fractional difference of \(f\in X_{c}\) of order \(\alpha>0,\) defined by
\[\Delta_{h}^{\alpha,c}f(x):=(\tau_{h}^{c}-I)^{\alpha}f(x)=\sum_{j= 0}^{\infty}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)(-1)^{\alpha-j}\tau_{h^{j}}^{c}f(x)\]
for \(h>0\) with
\[\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)=\frac{\alpha(\alpha-1)\cdots(\alpha-j+1)}{j!},\]
has the following properties
**Proposition 1**: _For \(f\in X_{c}\) the difference \(\Delta_{h}^{\alpha,c}f(x)\) exists a.e. for \(h>0,\) with_
* \(\|\Delta_{h}^{\alpha,c}f\|_{X_{c}}\leq\|f\|_{X_{c}}\sum_{j=0}^{\infty}\bigg{|} \left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}\)__
* \(M[\Delta_{h}^{\alpha,c}f](c+it)=(h^{-it}-1)^{\alpha}M[f](c+it).\)__
* _The following semigroup property holds for_ \(\alpha,\beta>0,\)__ \[(\Delta_{h}^{\alpha,c}\Delta_{h}^{\beta,c}f)(x)=(\Delta_{h}^{\alpha+\beta,c}f) (x).\]
**Proof**. At first, we have for \(x>0,\leavevmode\nobreak\ h>0\)
\[|\Delta_{h}^{\alpha,c}f(x)|\leq\frac{1}{x^{c}}\sum_{j=0}^{\infty} \bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}h^{cj}x^{c}|f(h^{j}x)|;\]
thus we have to prove the convergence of the latter series. For this purpose, by integration, we have
\[\int_{0}^{\infty}\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[] {c}\alpha\\ j\end{array}\right)\bigg{|}(h^{j}x)^{c}|f(h^{j}x)|\frac{dx}{x}=\sum_{j=0}^{ \infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}\int_{0}^{\infty}(h^{j}x)^{c}|f(h^{j}x)|\frac{dx}{x }:=J.\]
Now, putting in the second integral \(h^{j}x=t\), we have
\[J=\|f\|_{X_{c}}\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}.\]
Thus, since \(\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)={\cal O}(j^{-\alpha-1}),\leavevmode\nobreak\ \leavevmode \nobreak\ j\rightarrow+\infty,\) we observe that the integral is finite for any \(h>0,\) if \(f\in X_{c},\) and so the integrand is finite almost everywhere. Thus the original series defining the difference, converges almost everywhere.
As to (i), we have
\[\|\Delta_{h}^{\alpha,c}f\|_{X_{c}}\leq\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}h^{cj}\int_{0}^{\infty}\frac{t^{c-1}}{h^{j(c-1)}}|f(t)|\frac{dt}{h^{j}}=\|f\|_{X_{c}}\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|},\]
and so the assertion.
An alternative proof makes use of Lemma 1 in the following way. The left-hand side of i) can be estimated by
\[\|\Delta_{h}^{\alpha,c}f\|_{X_{c}}\leq\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}\,\|\tau_{h^{j}}^{c}f\|_{X_{c}}=\|f\|_{X_{c}}\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|},\]
which is independent of \(h>0.\) As to (ii), the Mellin transform on the left-hand side equals, by the linearity property (justified below by an integration by series),
\[\sum_{j=0}^{\infty}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)(-1)^{\alpha-j}h^{-itj}[f]^{\wedge}_{M}(c+it),\]
which yields (ii). Note that the complex number \(h^{-it}\) has modulus \(1,\) so it lies on the boundary of the circle of convergence of the power series which defines the binomial expansion. But, since the following series are absolutely convergent and bounded,
\[\sum_{j=0}^{\infty}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)(-1)^{\alpha-j}h^{-itj},\leavevmode\nobreak\ \leavevmode \nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \sum_{j=0}^{\infty}\left|\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\right|,\]
using the Abel-Stolz theorem for power series (see e.g. [1]), we obtain
\[\sum_{j=0}^{\infty}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)(-1)^{\alpha-j}h^{-itj}=(h^{-it}-1)^{\alpha}.\]
In order to justify the integration by series, we have for \(s=c+it,\)
\[\int_{0}^{\infty}\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}h^{cj}|x^{s-1}||f(h^{j}x)|\,dx=\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}h^{cj}|h^{j(1-s)-j}|\int_{0}^{\infty}|t^{s-1}||f(t)|dt=\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}\|f\|_{X_{c}}<+\infty.\]
As to iii), using i) and ii), taking the Mellin transform of both sides of the formula, we obtain
\[[\Delta_{h}^{\alpha,c}\Delta_{h}^{\beta,c}f]^{\wedge}_{M}(c+it) = (h^{-it}-1)^{\alpha}[\Delta_{h}^{\beta,c}f]^{\wedge}_{M}(c+it)=(h ^{-it}-1)^{\alpha+\beta}[f]^{\wedge}_{M}(c+it)\]
\[= [\Delta_{h}^{\alpha+\beta,c}f]^{\wedge}_{M}(c+it),\]
and so the assertion follows from the uniqueness theorem for Mellin transforms (see Theorem 8 in [18]).
Note that the fractional differences introduced here depend fundamentally on the Mellin translation operator. In the classical theories of Riemann-Liouville and Grünwald-Letnikov fractional calculus, the corresponding differences were based on the classical translation operator, and were first studied in a precise and systematic form in [33]; see also [42], where property iii) for these differences is also given, without proof. Moreover, other generalizations of fractional differences, via the Stirling functions of the first kind, were also introduced in [26], [27].
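To make the series definition and the semigroup property iii) concrete, here is a minimal numerical sketch (our own illustration; the function name, the parameter values and the choice \(f(x)=e^{-x}\) are ours, not the authors'). It truncates the defining series and checks iii) at one point.

```python
import numpy as np
from scipy.special import binom  # generalized binomial coefficient, defined for real alpha

def mellin_frac_diff(f, alpha, c, h, n_terms=80):
    """Truncated series for the Mellin fractional difference (tau_h^c - I)^alpha f.

    The factor (-1)^(alpha - j) is complex for non-integer alpha, so the principal
    branch is used via complex arithmetic.  Returns a function of x.
    """
    j = np.arange(n_terms)
    coeff = binom(alpha, j) * np.power(-1 + 0j, alpha - j) * h ** (c * j)
    scales = h ** j

    def g(x):
        # evaluate f point by point, so that f may itself be another difference
        return sum(cj * f(sj * x) for cj, sj in zip(coeff, scales))

    return g

# Numerical check of the semigroup property iii) for a rapidly decaying f.
f = lambda x: np.exp(-x)                    # f belongs to X_c for every c > 0
alpha, beta, c, h, x = 0.4, 0.7, 0.5, 1.05, 1.3
lhs = mellin_frac_diff(mellin_frac_diff(f, beta, c, h), alpha, c, h)(x)
rhs = mellin_frac_diff(f, alpha + beta, c, h)(x)
print(abs(lhs - rhs))                       # close to 0: agreement up to truncation and rounding
```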
For spaces \(X_{[a,b]},\) we have the following
**Proposition 2**: _Let \(f\in X_{[a,b]},\) and let \(c\in]a,b[.\)_
**(i)**: _If_ \(0<h\leq 1,\) _we have_ \(\Delta_{h}^{\alpha,c}f\in X_{[a,c]},\) _and for every_ \(\nu\in[a,c[\)__
\[\|\Delta_{h}^{\alpha,c}f\|_{X_{\nu}}\leq\|f\|_{X_{\nu}}\sum_{j=0}^{\infty} \left|\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\right|h^{(c-\nu)j}.\]
_Moreover,_
\[M[\Delta_{h}^{\alpha,c}f](\nu+it)=(h^{c-\nu-it}-1)^{\alpha}M[f](\nu+it), \leavevmode\nobreak\ \leavevmode\nobreak\ t\in\mathbb{R}.\]
**(ii)**: _If_ \(h>1,\) _we have_ \(\Delta_{h}^{\alpha,c}f\in X_{[c,b]},\) _and for every_ \(\mu\in]c,b]\)__
\[\|\Delta_{h}^{\alpha,c}f\|_{X_{\mu}}\leq\|f\|_{X_{\mu}}\sum_{j=0}^{\infty} \left|\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\right|h^{(c-\mu)j}.\]
_Moreover,_
\[M[\Delta_{h}^{\alpha,c}f](\mu+it)=(h^{c-\mu-it}-1)^{\alpha}M[f]( \mu+it),\leavevmode\nobreak\ \leavevmode\nobreak\ t\in\mathbb{R}.\] (5)
**Proof**. We prove only (i) since the proof of (ii) is similar. Let \(\nu\in[a,c[\) be fixed. Using an analogous reasoning as in Proposition 1, we have
\[\|\Delta_{h}^{\alpha,c}f\|_{X_{\nu}}\leq\int_{0}^{\infty}x^{\nu-1 }\sum_{j=0}^{\infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}h^{cj}|f(h^{j}x)|dx=\|f\|_{X_{\nu}}\sum_{j=0}^{ \infty}\bigg{|}\left(\begin{array}[]{c}\alpha\\ j\end{array}\right)\bigg{|}h^{j(c-\nu)},\]
the last series being absolutely convergent for \(0<h\leq 1.\) Moreover, as above, we can obtain, for \(\nu\in[a,c[,\) the assertion (5). \(\Box\)

**Definition 1**. If for \(f\in X_{c}\) there exists a function \(g\in X_{c}\) such that
\[\lim_{h\to 1}\bigg{\|}\frac{\Delta_{h}^{\alpha,c}f(x)}{(h-1)^{\alpha}} -g(x)\bigg{\|}_{X_{c}}=0\]
then \(g\) is called the strong fractional Mellin derivative of \(f\) of order \(\alpha,\) and it is denoted by \(g(x)\) = s-\(\Theta^{\alpha}_{c}f(x).\) If \(\alpha=0\) it is easy to see that s-\(\Theta^{0}_{c}f(x)=f(x).\)
We introduce now the Mellin Sobolev space \(W^{\alpha}_{X_{c}}\) by
\[W^{\alpha}_{X_{c}}:=\{f\in X_{c}:\mbox{s-}\Theta^{\alpha}_{c}f\leavevmode \nobreak\ \mbox{exists and}\leavevmode\nobreak\ \mbox{s-}\Theta_{c}^{\alpha}f \in X_{c}\},\]
with \(W^{0}_{X_{c}}=X_{c}.\) Analogously, for any interval \(J\) we define the spaces \(W^{\alpha}_{X_{J}},\) by
\[W^{\alpha}_{X_{J}}=\{f\in X_{J}:\mbox{s-}\Theta^{\alpha}_{c}f\leavevmode \nobreak\ \mbox{exists for every}\leavevmode\nobreak\ c\in J\leavevmode \nobreak\ \mbox{and}\leavevmode\nobreak\ \mbox{s-}\Theta^{\alpha}_{c}f\in X_{J }\}.\]
For integral values of \(\alpha\) our definition of strong Mellin derivative and the corresponding Mellin-Sobolev spaces reproduce those introduced in [19], since the differences are now given by a finite sum.
Using an approach introduced in [19] for the integer order case, we prove now
**Theorem 1**: _The following properties hold:_
**(i)**: _If_ \(f\in W^{\alpha}_{X_{c}},\) _then for_ \(s=c+it,t\in\mathbb{R}\) _we have_
\[M[\mbox{s-}\Theta^{\alpha}_{c}f](s)=(-it)^{\alpha}M[f](s).\]
**(ii)**: _If_ \(f\in W^{\alpha}_{X_{[a,b]}},\) _then for every_ \(\nu,c\in[a,b]\) _we have_
\[M[\mbox{s-}\Theta^{\alpha}_{c}f](\nu+it)=(c-\nu-it)^{\alpha}M[f](\nu+it) \leavevmode\nobreak\ \leavevmode\nobreak\ (t\in\mathbb{R}).\]
**Proof**. As to (i), since
\[\lim_{h\to 1}\bigg{(}\frac{h^{-it}-1}{h-1}\bigg{)}^{\alpha}=(-it)^{ \alpha},\]
we have, by Proposition 1(ii),
\[\bigg{|}(-it)^{\alpha}[f]^{\wedge}_{M}(s)-[\mbox{s-}\Theta_{c}^{\alpha}f]^{\wedge}_{M}(s)\bigg{|} = \lim_{h\to 1}\bigg{|}\bigg{(}\frac{h^{-it}-1}{h-1}\bigg{)}^{\alpha}[f]^{\wedge}_{M}(s)-[\mbox{s-}\Theta_{c}^{\alpha}f]^{\wedge}_{M}(s)\bigg{|}\]
\[= \lim_{h\to 1}\bigg{|}\bigg{[}\frac{\Delta_{h}^{\alpha,c}f}{(h-1)^{\alpha}}\bigg{]}^{\wedge}_{M}(s)-[\mbox{s-}\Theta_{c}^{\alpha}f]^{\wedge}_{M}(s)\bigg{|}\]
\[= \lim_{h\to 1}\bigg{|}\bigg{[}\frac{\Delta_{h}^{\alpha,c}f}{(h-1)^{\alpha}}-\mbox{s-}\Theta_{c}^{\alpha}f\bigg{]}^{\wedge}_{M}(s)\bigg{|}\]
\[\leq \lim_{h\to 1}\bigg{\|}\frac{\Delta_{h}^{\alpha,c}f}{(h-1)^{\alpha}}-\mbox{s-}\Theta_{c}^{\alpha}f\bigg{\|}_{X_{c}}=0\]
and thus (i) holds. As to (ii), we can use the same approach, applying one-sided limits and Proposition 2. \(\Box\)
## 4 Mellin fractional integrals and the pointwise fractional Mellin differential operator
In terms of Mellin analysis the natural operator of fractional integration is not the classical Liouville fractional integral of order \(\alpha\in\mathbb{C},\) on \(\mathbb{R}^{+},\) with Re \(\alpha>0,\) namely (1), but the integral (2)
\[(J^{\alpha}_{0+}f)(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}\bigg{(}\log\frac{x} {u}\bigg{)}^{\alpha-1}f(u)\frac{du}{u}\leavevmode\nobreak\ \leavevmode\nobreak \ \leavevmode\nobreak\ (x>0).\]
The above integrals were treated already in Mamedov’s book [45], page 168, in which the fractional integral of order \(\alpha\) is defined by \((-1)^{\alpha}(J^{\alpha}_{0+}f)(x).\) This is due to a different notion of Mellin derivatives (of integral order), see Section 4.2. Our approach here is more direct and simple since it avoids the use of the complex coefficient \((-1)^{\alpha}.\)
However, for the development of the theory, it is important to consider the generalization of the fractional integral, for \(\mu\in\mathbb{R},\) in the form (3).
Note that for integer values \(\alpha=r,\) in case \(\mu=c\) and \(f\in X_{c}\) (see [18], Definition 13), this turns into the iterated representation
\[(J^{r}_{0+,c}f)(x)=x^{-c}\int_{0}^{x}\int_{0}^{u_{1}}\ldots\int_{0}^{u_{r-1}}f (u_{r})u_{r}^{c}\frac{du_{r}}{u_{r}}\ldots\frac{du_{2}}{u_{2}}\frac{du_{1}}{u_ {1}}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x>0).\] (6)
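For instance (a short verification added here, assuming \(f\) is non-negative or belongs to \(X_{0,loc}\) so that the order of integration may be interchanged), for \(r=2\) and \(c=0\) formula (6) reduces, by Fubini's theorem, to the compact form (2):
\[\int_{0}^{x}\int_{0}^{u_{1}}f(u_{2})\frac{du_{2}}{u_{2}}\frac{du_{1}}{u_{1}}=\int_{0}^{x}f(u_{2})\bigg{(}\int_{u_{2}}^{x}\frac{du_{1}}{u_{1}}\bigg{)}\frac{du_{2}}{u_{2}}=\int_{0}^{x}\bigg{(}\log\frac{x}{u_{2}}\bigg{)}f(u_{2})\frac{du_{2}}{u_{2}}=\Gamma(2)\,(J^{2}_{0+}f)(x).\]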
Several important properties of the operators \(J^{\alpha}_{0+,\mu}\) were given by Butzer et al. in [22], [23], [24], (see also the recent monographs [42] and [5]). In particular, a boundedness property is given in the space \(X_{c},\) when the coefficient \(\mu\) is greater than \(c,\) (indeed a more general result is given there, for spaces \(X^{p}_{c}\)). This is due to the fact that only for \(\mu>c\) (or, in the complex case, Re \(\mu>c\)) we can view \(J^{\alpha}_{0+,\mu}f\) as a Mellin convolution between two functions \(f,g^{\ast}_{\mu}\in X_{c},\) where
\[g^{\ast}_{\mu}(\frac{x}{u}):=(\frac{x}{u})^{-\mu}\frac{\chi_{]0,x]}(u)}{\Gamma (\alpha)}\bigg{(}\log(\frac{x}{u})\bigg{)}^{\alpha-1}.\]
Indeed, we have
\[(J^{\alpha}_{0+,\mu}f)(x)=\frac{1}{\Gamma(\alpha)}\int_{0}^{x}\bigg{(}\frac{u}{x}\bigg{)}^{\mu}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}f(u)\frac{du}{u}\]
\[= \frac{1}{\Gamma(\alpha)}\int_{0}^{+\infty}\bigg{(}\frac{u}{x} \bigg{)}^{\mu}\chi_{]0,x]}(u)\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}f(u) \frac{du}{u}\]
\[= \int_{0}^{+\infty}g^{\ast}_{\mu}(\frac{x}{u})f(u)\frac{du}{u}=(f \ast g^{\ast}_{\mu})(x).\]
Now, for \(\mu>c\) the function:
\[g^{\ast}_{\mu}(u)=u^{-\mu}\frac{\chi_{]1,+\infty]}(u)}{\Gamma(\alpha)}(\log u) ^{\alpha-1}\]
belongs to the space \(X_{c},\) as it is immediate to verify.
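Indeed (a short verification added for completeness), substituting \(u=e^{t}\) one obtains, for \(\mu>c,\)
\[\|g^{\ast}_{\mu}\|_{X_{c}}=\frac{1}{\Gamma(\alpha)}\int_{1}^{\infty}u^{c-\mu-1}(\log u)^{\alpha-1}du=\frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}t^{\alpha-1}e^{-(\mu-c)t}dt=(\mu-c)^{-\alpha}<+\infty,\]
while for \(\mu=c\) the integral diverges; this is precisely the obstruction discussed in the next paragraph.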
However, we are interested in properties of \(J^{\alpha}_{0+,\mu}f\), when \(\mu=c,\) since in the definition of the pointwise fractional Mellin derivative (see subsection 4.2), we have to compute such an integral with parameter \(c.\) Hence in subsection 4.1 we will describe properties concerning the domain and the range of these fractional operators. As an example, we will show that for any non-trivial function \(f\) in the domain of \(J^{\alpha}_{0+,c}\) the image \(J^{\alpha}_{0+,c}f\) cannot be in \(X_{c}.\) This depends also on the fact that \(g^{\ast}_{c}\not\in X_{c}.\) This implies that we cannot compute the Mellin transform of \(g^{\ast}_{c}\) in the space \(X_{c}\).
### The domain of \(J^{\alpha}_{0+,c}\) and the semigroup property
From now on we can consider the case \(\alpha>0,\) the extension to complex \(\alpha\) with Re \(\alpha>0\) being similar but more technical. We define the domain of \(J^{\alpha}_{0+,c},\) for \(\alpha>0\) and \(c\in\mathbb{R},\) as the class of all the functions \(f:\mathbb{R}^{+}\rightarrow\mathbb{C}\) such that
\[\int_{0}^{x}u^{c}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}|f(u)| \frac{du}{u}<+\infty\] (7)
for a.e. \(x\in\mathbb{R}^{+}.\) In the following we will denote the domain of \(J^{\alpha}_{0+,c}\) by \(DomJ^{\alpha}_{0+,c}\).
Recall that \(X_{c,loc}\) is the space of all the functions such that \((\cdot)^{c-1}f(\cdot)\in L^{1}(]0,a[)\) for every \(a>0.\)
**Proposition 3**: _We have the following properties:_
* _If_ \(f\in X_{c,loc},\) _then the function_ \((\cdot)^{c}f(\cdot)\in X_{1,loc}.\)__
* _If_ \(c<c^{\prime},\) _then_ \(X_{c,loc}\subset X_{c^{\prime},loc}.\)__
**Proof.** (i) Let \(a>0\) be fixed and let \(f\in X_{c,loc}.\) Then
\[\int_{0}^{a}x^{c}|f(x)|dx=\int_{0}^{a}xx^{c-1}|f(x)|dx\leq a\int_{0}^{a}x^{c-1 }|f(x)|dx\]
and so the assertion.
(ii) Let \(f\in X_{c,loc}.\) Then, as before, setting \(\alpha=c^{\prime}-c,\) we can write
\[\int_{0}^{a}x^{c^{\prime}-1}|f(x)|dx\leq a^{\alpha}\int_{0}^{a}x^{c-1}|f(x)|dx,\]
that is (ii) holds. \(\Box\)
Note that the inclusion in (ii) does not hold for spaces \(X_{c}.\)
Concerning the domain of the operator \(J^{\alpha}_{0+,c},\) we begin with the following proposition.
**Proposition 4**: _Let \(\alpha>1,\)\(c\in\mathbb{R}\) be fixed. Then \(DomJ^{\alpha}_{0+,c}\subset X_{c,loc}.\)_
**Proof**. Assume that for a.e. \(x\in\mathbb{R}^{+}\) the integral \((J^{\alpha}_{0+,c}|f|)(x)\) exists and put \(F(u)=u^{c-1}f(u).\) We have to show that \(F\) is integrable over \(]0,a[,\) for any \(a>0.\) Let \(a>0\) be fixed and let \(x>a\) be such that \((J^{\alpha}_{0+,c}|f|)(x)\) exists. Then, for \(u\in]0,a[\) we have, since \(\log(x/u)\geq\log(x/a)>0\) and \(\alpha>1,\)
\[|F(u)|=u^{c-1}|f(u)|\leq\bigg{(}\log\frac{x}{a}\bigg{)}^{1-\alpha}u^{c-1}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}|f(u)|,\]
and the right-hand side of the inequality is integrable as a function of \(u.\)\(\Box\)
Note that for \(\alpha=1\) we have immediately \(DomJ^{1}_{0+,c}=X_{c,loc}.\)
The case \(0<\alpha<1\) is more delicate. We will show that in this instance \(X_{c,loc}\subset DomJ^{\alpha}_{0+,c}.\)
In order to give a more precise description of the domain of \(J^{\alpha}_{0+,c},\) we now give a direct proof of the semigroup property in the domain of fractional integrals. This property is treated in [23], [41] and [42], but for the spaces \(X^{p}_{c}(a,b)\) of all the functions \(f:(a,b)\rightarrow\mathbb{C}\) such that \((\cdot)^{c}f(\cdot)\in L^{p}(a,b),\) with \(0<a<b\leq+\infty,\quad 1\leq p\leq\infty.\) However we prove this property under minimal assumptions, working directly in \(DomJ^{\alpha}_{0+,c}\).
**Theorem 2**: _Let \(\alpha,\beta>0,\leavevmode\nobreak\ c\in\mathbb{R}\) be fixed. Let \(f\in DomJ^{\alpha+\beta}_{0+,c}.\) Then_
1. \(f\in DomJ^{\alpha}_{0+,c}\cap DomJ^{\beta}_{0+,c}\)__
2. \(J^{\alpha}_{0+,c}f\in DomJ^{\beta}_{0+,c}\) _and_ \(J^{\beta}_{0+,c}f\in DomJ^{\alpha}_{0+,c}.\)__
3. \((J^{\alpha+\beta}_{0+,c}f)(x)=(J^{\alpha}_{0+,c}(J^{\beta}_{0+,c}f))(x),\) _a.e._ \(x\in\mathbb{R}^{+}.\)__
4. _If_ \(\alpha<\beta\) _then_ \(DomJ^{\beta}_{0+,c}\subset DomJ^{\alpha}_{0+,c}.\)__
**Proof**. At first, let \(f\in DomJ^{\alpha+\beta}_{0+,c}\) be a positive function. Then the integral
\[(J^{\alpha+\beta}_{0+,c}f)(x)=\frac{1}{\Gamma(\alpha+\beta)}\int_{0}^{x}\bigg{(}\frac{u}{x}\bigg{)}^{c}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha+\beta-1}f(u)\frac{du}{u}\]
is finite and nonnegative for a.e. \(x\in\mathbb{R}^{+}.\)
By Tonelli’s theorem on iterated integrals of non-negative functions, and using formula (2.8) concerning the Beta function in [23], namely
\[\int_{v}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\bigg{(}\log\frac{u}{v} \bigg{)}^{\beta-1}\frac{du}{u}=B(\beta,\alpha)\bigg{(}\log\frac{x}{v}\bigg{)}^ {\alpha+\beta-1},\]
we have
\[(J^{\alpha+\beta}_{0+,c}f)(x)=\frac{1}{\Gamma(\alpha)\Gamma(\beta )}\frac{\Gamma(\beta)\Gamma(\alpha)}{\Gamma(\alpha+\beta)}\int_{0}^{x}\bigg{(} \frac{v}{x}\bigg{)}^{c}\bigg{(}\log\frac{x}{v}\bigg{)}^{\alpha+\beta-1}f(v) \frac{dv}{v}\]
\[= \frac{x^{-c}}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{x}v^{c}f(v) \bigg{[}\int_{v}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\bigg{(}\log \frac{u}{v}\bigg{)}^{\beta-1}\frac{du}{u}\bigg{]}\frac{dv}{v}\]
\[= \frac{x^{-c}}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{x}\int_{0}^{x }v^{c}\chi_{]v,x[}(u)\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\bigg{(}\log \frac{u}{v}\bigg{)}^{\beta-1}f(v)\frac{dv}{v}\frac{du}{u}\]
\[= \frac{x^{-c}}{\Gamma(\alpha)\Gamma(\beta)}\int_{0}^{x}\int_{0}^{x }v^{c}\chi_{]0,u[}(v)\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\bigg{(}\log \frac{u}{v}\bigg{)}^{\beta-1}f(v)\frac{dv}{v}\frac{du}{u}\]
\[= \frac{1}{\Gamma(\alpha)}\int_{0}^{x}\bigg{(}\frac{u}{x}\bigg{)}^{ c}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\bigg{[}\frac{1}{\Gamma(\beta)} \int_{0}^{u}\bigg{(}\frac{v}{u}\bigg{)}^{c}\bigg{(}\log\frac{u}{v}\bigg{)}^{ \beta-1}\frac{f(v)}{v}dv\bigg{]}\frac{du}{u}\]
\[= (J^{\alpha}_{0+,c}(J^{\beta}_{0+,c}f))(x).\]
This proves all the assertions (i), (ii), (iii), for positive functions. In the general case, we can apply the above argument to the functions \(f^{+},\leavevmode\nobreak\ f^{-}\) using the linearity property of the integrals. Property (iv) follows immediately by writing \(\beta=\alpha+(\beta-\alpha)\) and applying (i). \(\Box\)
**Corollary 1**: _Let \(0<\alpha\leq 1,\leavevmode\nobreak\ c\in\mathbb{R}\) be fixed. Then \(X_{c,loc}\subset DomJ^{\alpha}_{0+,c}.\)_
By this corollary, a consequence of (iv), we have the inclusions for \(\alpha<1<\beta,\)
\[DomJ^{\beta}_{0+,c}\subset X_{c,loc}\subset DomJ^{\alpha}_{0+,c}.\]
These inclusions are strict, as the following examples show.

**Examples**. For any \(c\in\mathbb{R},\ \beta>1,\) consider the function
\[f(x)=\frac{x^{-c}}{|\log x|^{\beta}}\chi_{]0,1/2[}(x).\]
Then \(f\in X_{c,loc}\) but for any \(x>1,\)
\[\Gamma(\beta)(J^{\beta}_{0+,c}f)(x)=x^{-c}\int_{0}^{1/2}\frac{\bigg{(}\log\frac{x}{u}\bigg{)}^{\beta-1}}{u|\log u|^{\beta}}du\geq x^{-c}\int_{0}^{1/2}\frac{\bigg{(}\log\frac{1}{u}\bigg{)}^{\beta-1}}{u|\log u|^{\beta}}du=x^{-c}\int_{0}^{1/2}\frac{1}{u|\log u|}du=+\infty.\]
Moreover, for \(0<\alpha<1,\) consider the function:
\[f(x)=\frac{x^{-c}}{|\log x|^{\gamma}}\chi_{]0,1/2[}(x),\] (8)
where \(\alpha<\gamma<1.\) Then \(f\not\in X_{c,loc},\) but for any \(x>1/2,\) we have
\[\Gamma(\alpha)(J^{\alpha}_{0+,c}f)(x)=x^{-c}\int_{0}^{1/2}\frac{1 }{u\bigg{(}\log\frac{1}{u}\bigg{)}^{\gamma-\alpha+1}}\frac{\bigg{(}\log\frac{1 }{u}\bigg{)}^{1-\alpha}}{\bigg{(}\log\frac{x}{u}\bigg{)}^{1-\alpha}}du\]
\[\leq \frac{M}{x^{c}}\int_{0}^{1/2}\frac{1}{u|\log u|^{\gamma-\alpha+1} }du<+\infty.\]
Note that, more generally, the inclusion in (iv) of Theorem 2 is strict for any choice of \(\alpha\) and \(\beta.\) It is sufficient to consider the function (8) with \(\alpha<\gamma<\beta.\) The calculations are the same.
We now give some sufficient conditions in order that a function \(f\) belongs to the domain of the fractional integrals of order \(\alpha>1.\) In this respect we have the following:
**Proposition 5**: _Let \(\alpha>1.\) If \(f\in X_{c,loc}\) is such that \(f(u)=\mathcal{O}(u^{-(r+c-1)})\) for \(u\to 0^{+}\) and \(0<r<1,\) then \(f\in DomJ^{\alpha}_{0+,c}.\)_
**Proof.** Let \(x>0\) be fixed. Then we can write
\[\int_{0}^{x}u^{c-1}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}|f(u)|du=\int_{0}^{x/2}u^{c-1}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}|f(u)|du+\int_{x/2}^{x}u^{c-1}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}|f(u)|du=:I_{1}+I_{2}.\]
The integral \(I_{1}\) is finite since, by assumption, \(u^{c-1}|f(u)|={\cal O}(u^{-r})\) as \(u\to 0^{+}\) and \(u^{-r}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\) is integrable near the origin for \(0<r<1.\) The estimate of \(I_{2}\) is easy since the function \(\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\) is now bounded in the interval \([x/2,x].\Box\)
Let us define
\[\widetilde{X}_{c,loc}=\{f\in X_{c,loc}:\exists r\in]0,1[,\leavevmode\nobreak\ \mbox{such that}\leavevmode\nobreak\ f(u)=\mathcal{O}(u^{-(r+c-1)}), \leavevmode\nobreak\ u\to 0^{+}\}.\]
We have the following
**Corollary 2**: _Let \(\alpha>0,\leavevmode\nobreak\ c\in\mathbb{R}\) be fixed. Then_
\[\widetilde{X}_{c,loc}\subset\bigcap_{\alpha>0}DomJ^{\alpha}_{0+,c}.\]
Now let \(f\) be a convergent power series of type
\[f(x)=\sum_{k=0}^{\infty}a_{k}x^{k}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (a_{k}\in\mathbb{C},\leavevmode \nobreak\ k\in\mathbb{N}_{0}),\]
for \(x\in[0,\ell],\)\(\ell>0.\) For these functions the following series representation for \(J^{\alpha}_{0+,c}f\) holds, when \(c>0\) (see Lemma 4 and Lemma 5(i) in [25]):
\[(J^{\alpha}_{0+,c}f)(x)=\sum_{k=0}^{\infty}(c+k)^{-\alpha}a_{k}x^{k} \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode \nobreak\ (x\in[0,\ell]).\]
The assumption \(c>0\) is essential. For \(c=0,\) corresponding to the classical Hadamard integrals, we have the following
**Proposition 6**: _Let \(\alpha>0\) be fixed and let \(f\) be a convergent power series as above. Then \(f\in DomJ^{\alpha}_{0+}\) if and only if \(f(0)=0.\) In this case we have_
\[(J^{\alpha}_{0+}f)(x)=\sum_{k=1}^{\infty}a_{k}k^{-\alpha}x^{k} \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (0<x<\ell).\] (9)
**Proof**. Let \(f\in DomJ^{\alpha}_{0+}.\) Then the integral (7) is finite and
\[\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}f(u)\frac{du}{u}=\int_{0 }^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\sum_{k=1}^{\infty}a_{k}u^{k} \frac{du}{u}+a_{0}\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\frac{ du}{u}=I_{1}+I_{2}.\]
As to \(I_{1}\) we obtain
\[\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\sum_{k=1}^{\infty}|a_{k }|u^{k-1}du\leq\sum_{k=1}^{\infty}|a_{k}|x^{k-1}\int_{0}^{x}\bigg{(}\log\frac{ x}{u}\bigg{)}^{\alpha-1}du.\]
Since, using the change of variables \(\log(x/u)=t,\)
\[\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}du=x\Gamma(\alpha),\]
we can integrate by series, yielding
\[I_{1}=\sum_{k=1}^{\infty}a_{k}\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{ \alpha-1}u^{k-1}du<+\infty.\]
As to \(I_{2},\) we get \(I_{2}<+\infty\) if and only if \(a_{0}=f(0)=0,\) since
\[\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}\frac{du}{u}=+\infty.\]
As to formula (9),
\[(J^{\alpha}_{0+}f)(x)=\sum_{k=1}^{\infty}a_{k}\frac{1}{\Gamma(\alpha)}\int_{0} ^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}u^{k-1}du=\sum_{k=1}^{\infty}a_{ k}(J^{\alpha}_{0+}t^{k})(x)=\sum_{k=1}^{\infty}a_{k}k^{-\alpha}x^{k},\]
where in the last step we have applied the simple Lemma 3 in [25], namely that \((J^{\alpha}_{0+}t^{k})(x)=k^{-\alpha}x^{k},\)\(k>0.\)\(\Box\)
Concerning the range of the operators \(J^{\alpha}_{0+,c},\) we have the following important propositions.
**Proposition 7**: _Let \(\alpha>0,\leavevmode\nobreak\ c\in\mathbb{R}\) be fixed. If \(f\in DomJ^{\alpha+1}_{0+,c},\) then \(J^{\alpha}_{0+,c}f\in X_{c,loc}.\)_
**Proof**. Let \(f\in DomJ^{\alpha+1}_{0+,c}.\) We can assume that \(f\) is nonnegative; thus, for any \(a>0,\) by Tonelli's theorem,
\[\Gamma(\alpha)\int_{0}^{a}x^{c-1}(J^{\alpha}_{0+,c}f)(x)dx=\int_{0}^{a}u^{c-1}f(u)\bigg{(}\int_{u}^{a}\frac{1}{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}dx\bigg{)}du=\frac{1}{\alpha}\int_{0}^{a}u^{c-1}f(u)\bigg{(}\log\frac{a}{u}\bigg{)}^{\alpha}du<+\infty,\]
the last integral being finite precisely because \(f\in DomJ^{\alpha+1}_{0+,c}.\) \(\Box\)
Note that, in view of Proposition 7, if \(f\in DomJ^{\alpha}_{0+,c}\) it does not necessarily follow that \(J^{\alpha}_{0+,c}f\in X_{c,loc},\) unless \(f\in DomJ^{\alpha+1}_{0+,c},\) which is a proper subspace of \(DomJ^{\alpha}_{0+,c}.\) For example, we can take again the function \(f\) of (8) with \(\alpha<\gamma<\alpha+1.\) Then \(f\in DomJ^{\alpha}_{0+,c}\) but \(f\not\in DomJ^{\alpha+1}_{0+,c}\) and \(J^{\alpha}_{0+,c}f\not\in X_{c,loc}.\)
For spaces \(X_{c}\) we have the following
**Proposition 8**: _Let \(\alpha>0,\leavevmode\nobreak\ c\in\mathbb{R}\) be fixed. If \(f\in DomJ^{\alpha}_{0+,c}\) is a non-negative function, then \(J^{\alpha}_{0+,c}f\not\in X_{c},\) unless \(f=0\) a.e. in \(\mathbb{R}^{+}.\)_
**Proof**. Using an argument analogous to the one above, and assuming \(f\geq 0,\) we write
\[\int_{0}^{+\infty}x^{c-1}(J^{\alpha}_{0+,c}f)(x)dx=\int_{0}^{+ \infty}x^{-1}\bigg{(}\frac{1}{\Gamma(\alpha)}\int_{0}^{+\infty}u^{c-1}\chi_{]0 ,x[}(u)\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}f(u)du\bigg{)}dx\]
\[= \int_{0}^{+\infty}\frac{1}{\Gamma(\alpha)}\bigg{(}\int_{0}^{+ \infty}x^{-1}u^{c-1}\chi_{]u,+\infty[}(x)\bigg{(}\log\frac{x}{u}\bigg{)}^{ \alpha-1}f(u)dx\bigg{)}du\]
\[= \frac{1}{\Gamma(\alpha)}\int_{0}^{+\infty}\bigg{(}\int_{u}^{+ \infty}x^{-1}\bigg{(}\log\frac{x}{u}\bigg{)}^{\alpha-1}dx\bigg{)}u^{c-1}f(u)du.\]
Thus \((J^{\alpha}_{0+,c}f)\not\in X_{c},\) since for every \(u\)
\[\int_{u}^{+\infty}\frac{1}{x(\log\frac{x}{u})^{1-\alpha}}dx=+\infty. \leavevmode\nobreak\ \Box\]
The above result implies that a function \(f\in DomJ^{\alpha}_{0+,c}\) such that \(J^{\alpha}_{0+,c}f\in X_{c},\) must necessarily change its sign. However the converse is not true in general, as proved by the following example: for a given \(a>1,\) put
\[f(x)=-\chi_{[1/a,1]}(x)+\chi_{]1,a]}(x).\]
It is easy to see that \(f\in DomJ^{\alpha}_{0+,c}\cap X_{c},\) but \(J^{\alpha}_{0+,c}f\not\in X_{c},\) for any \(c\in\mathbb{R}.\) The following result, which will be useful later, is well known (see also [22], [42]).
**Proposition 9**: _Let \(\alpha>0\) and \(c\in\mathbb{R}\) be fixed and let \(f\in DomJ^{\alpha}_{0+,c}\cap X_{c}\) be such that \(J^{\alpha}_{0+,c}f\in X_{c}.\) Then_
\[M[J^{\alpha}_{0+,c}f](c+it)=(-it)^{-\alpha}M[f](c+it),\leavevmode\nobreak\ \leavevmode\nobreak\ t\in\mathbb{R}.\]
Using Proposition 9 we study the structure of the functions \(f\) such that \(f\in DomJ^{\alpha}_{0+,c}\cap X_{c}\) for which \(J^{\alpha}_{0+,c}f\in X_{c}.\)
**Proposition 10**: _Let \(f\in DomJ^{\alpha}_{0+,c}\cap X_{c}.\) If \(J^{\alpha}_{0+,c}f\in X_{c}\) then_
\[\int_{0}^{+\infty}x^{c-1}f(x)dx=0.\]
**Proof**. Since \(J^{\alpha}_{0+,c}f\in X_{c},\) we can apply the Mellin transform on the line \(s=c+it,\) obtaining
\[[J^{\alpha}_{0+,c}f]^{\wedge}_{M}(s)=(-it)^{-\alpha}[f]^{\wedge}_{M}(s) \leavevmode\nobreak\ \leavevmode\nobreak\ (s=c+it,\leavevmode\nobreak\ t\in \mathbb{R})\]
and this transform is a continuous and bounded function of \(s.\) Therefore, taking \(t=0,\) we must have \([f]^{\wedge}_{M}(c)=0,\) i.e. the assertion. \(\Box\)
Classes of functions \(f\in DomJ^{\alpha}_{0+,c}\cap X_{c}\) for which \(J^{\alpha}_{0+,c}f\in X_{c}\) may be easily constructed among the functions (of non-constant sign) with compact support in \(\mathbb{R}^{+}.\)
However we have the following property (see also [42], Lemma 2.33). We give the proof for the sake of completeness
**Proposition 11**: _Let \(\alpha>0,\leavevmode\nobreak\ c,\nu\in\mathbb{R},\)\(\nu<c,\) being fixed. If \(f\in DomJ^{\alpha}_{0+,c}\cap X_{[\nu,c]},\) then \(J^{\alpha}_{0+,c}f\in X_{\nu}\) and_
\[\|J^{\alpha}_{0+,c}f\|_{X_{\nu}}\leq\frac{\|f\|_{X_{\nu}}}{(c-\nu)^{\alpha}}.\]
_Moreover, for any \(s=\nu+it,\) we have_
\[M[J^{\alpha}_{0+,c}f](\nu+it)=(c-\nu-it)^{-\alpha}M[f](\nu+it), \leavevmode\nobreak\ \leavevmode\nobreak\ t\in\mathbb{R}.\]
_and_
\[|M[J^{\alpha}_{0+,c}f](s)|\leq\frac{\|f\|_{X_{\nu}}}{(c-\nu)^{\alpha}}.\]
**Proof**. We have by Tonelli’s theorem,
\[\Gamma(\alpha)\|J^{\alpha}_{0+,c}f\|_{X_{\nu}}=\Gamma(\alpha)\int_{0}^{+\infty}u^{\nu-1}|(J^{\alpha}_{0+,c}f)(u)|du\]
\[\leq \int_{0}^{+\infty}u^{\nu-1-c}\bigg{(}\int_{0}^{u}y^{c-1}\bigg{(}\log\frac{u}{y}\bigg{)}^{\alpha-1}|f(y)|dy\bigg{)}du\]
\[= \int_{0}^{+\infty}\bigg{[}\int_{y}^{+\infty}u^{\nu-1-c}\bigg{(}\log\frac{u}{y}\bigg{)}^{\alpha-1}du\bigg{]}y^{c-1}|f(y)|dy.\]
For the inner integral, putting \(\log(u/y)=z,\) we have:
\[\int_{y}^{+\infty}u^{\nu-1-c}\bigg{(}\log\frac{u}{y}\bigg{)}^{\alpha-1}du=\int _{0}^{+\infty}y^{\nu-c}e^{-(c-\nu)z}z^{\alpha-1}dz=\frac{y^{\nu-c}}{(c-\nu)^{ \alpha}}\Gamma(\alpha),\]
and thus
\[\Gamma(\alpha)\|J^{\alpha}_{0+,c}f\|_{X_{\nu}}\leq\frac{\Gamma(\alpha)}{(c-\nu)^{\alpha}}\|f\|_{X_{\nu}}.\]
As to the last part, the formula for the Mellin transform is established in [22], noting that the Mellin transform on the line \(s=\nu+it\) of the function \(g^{\ast}_{c}(u)=u^{-c}(\log u)^{\alpha-1}\chi_{]1,+\infty[}(u)(\Gamma(\alpha))^{-1}\) is given by \([g^{\ast}_{c}]^{\wedge}_{M}(s)=(c-s)^{-\alpha}=(c-\nu-it)^{-\alpha},\) while for the estimate we easily have
\[|M[J^{\alpha}_{0+,c}f](s)|\leq\|J^{\alpha}_{0+,c}f\|_{X_{\nu}}=\frac{\|f\|_{X_ {\nu}}}{(c-\nu)^{\alpha}}.\Box\]
Note that when \(0<\alpha<1,\) the assumption \(f\in DomJ^{\alpha}_{0+,c}\cap X_{[\nu,c]},\) can be replaced by \(f\in X_{[\nu,c]},\) since \(X_{[\nu,c]}\subset DomJ^{\alpha}_{0+,c},\) by Corollary 1.
### The pointwise fractional Mellin differential operator
The pointwise fractional Mellin derivative of order \(\alpha>0,\) or the Hadamard-type fractional derivative, associated with the integral \(J^{\alpha}_{0+,c}f\), \(c\in\mathbb{R},\) and \(f\in DomJ^{m-\alpha}_{0+,c},\) is given by
\[(D^{\alpha}_{0+,c}f)(x)=x^{-c}\delta^{m}x^{c}(J^{m-\alpha}_{0+,c} f)(x)\] (10)
where \(m=[\alpha]+1\) and \(\delta=(x\frac{d}{dx}).\) For \(c=0,\) corresponding to the Hadamard fractional derivative, we put \((D^{\alpha}_{0+}f)(x):=(D^{\alpha}_{0+,0}f)(x).\) The above definition was introduced in [22], and then further developed in [41], in which some sufficient conditions for the existence of the pointwise derivative are given in spaces of absolutely continuous type functions on bounded domains. This notion originates from the theory of the classical Mellin differential operator, studied in [18]. We give a short survey concerning this classical operator.
In the frame of Mellin transforms, the natural concept of a _pointwise_ derivative of a function \(f\) is given, as seen, by the limit of the difference quotient involving the Mellin translation; thus if \(f^{\prime}\) exists,
\[\lim_{h\to 1}\frac{\tau_{h}^{c}f(x)-f(x)}{h-1}=\lim_{h\to 1} \bigg{[}h^{c}x\frac{f(hx)-f(x)}{hx-x}+\frac{h^{c}-1}{h-1}f(x)\bigg{]}=xf^{ \prime}(x)+cf(x).\]
This gives the motivation of the following definition: the pointwise Mellin differential operator \(\Theta_{c},\) or the pointwise Mellin derivative \(\Theta_{c}f\) of a function \(f:\mathbb{R}^{+}\rightarrow\mathbb{C}\) and \(c\in\mathbb{R},\) is defined by
\[\Theta_{c}f(x):=xf^{\prime}(x)+cf(x),\leavevmode\nobreak\ \leavevmode\nobreak\ x\in\mathbb{R}^{+}\] (11)
provided \(f^{\prime}\) exists a.e. on \(\mathbb{R}^{+}.\) The Mellin differential operator of order \(r\in\mathbb{N}\) is defined iteratively by
\[\Theta^{1}_{c}:=\Theta_{c},\quad\quad\Theta^{r}_{c}:=\Theta_{c}( \Theta_{c}^{r-1}).\] (12)
For convenience set \(\Theta^{r}:=\Theta^{r}_{0}\) for \(c=0\) and \(\Theta_{c}^{0}:=I,\)\(I\) denoting the identity. For instance, the first three Mellin derivatives are given by:
\[\Theta_{c}f(x)=xf^{\prime}(x)+cf(x),\]
\[\Theta^{2}_{c}f(x)=x^{2}f^{\prime\prime}(x)+(2c+1)xf^{\prime}(x)+c^{2}f(x),\]
\[\Theta^{3}_{c}f(x)=x^{3}f^{\prime\prime\prime}(x)+(3c+3)x^{2}f^{\prime\prime}( x)+(3c^{2}+3c+1)xf^{\prime}(x)+c^{3}f(x).\]
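These expansions are straightforward to confirm symbolically; here is a minimal sympy sketch (illustrative only, with our own variable names) that iterates \(\Theta_{c}\) and checks the second- and third-order formulas:

```python
import sympy as sp

x, c = sp.symbols('x c')
f = sp.Function('f')
theta = lambda g: x*sp.diff(g, x) + c*g     # the pointwise Mellin derivative Theta_c

t2 = sp.expand(theta(theta(f(x))))
t3 = sp.expand(theta(t2))
print(sp.simplify(t2 - (x**2*f(x).diff(x, 2) + (2*c + 1)*x*f(x).diff(x) + c**2*f(x))))  # 0
print(sp.simplify(t3 - (x**3*f(x).diff(x, 3) + (3*c + 3)*x**2*f(x).diff(x, 2)
                        + (3*c**2 + 3*c + 1)*x*f(x).diff(x) + c**3*f(x))))              # 0
```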
Let us return to Mamedov’s book [45]. He defined the Mellin derivative of integral order in case \(c=0\), in a slightly different, but essentially equivalent form, using the quotients
\[\frac{f(xh^{-1})-f(x)}{\log h},\]
a definition connected with his notion of log-continuity. It must be emphasised that this was a fully innovative procedure at the time he introduced it. (His translation operator is actually \(\tau_{h}f(x)=f(xh^{-1}),\) with incremental ratio \(\log h\) instead of \(\log(1/h)=-\log h.\)) His first order derivative, \((E^{1}f)(x),\) becomes, by L'Hospital's rule,
\[(E^{1}f)(x)=(-x)f^{\prime}(x)=:-\Theta_{0}f(x).\]
His derivatives of higher orders are defined inductively,
\[(E^{m}f)(x)=(-1)^{m}\Theta_{0}^{m}f(x),\quad m\in\mathbb{N}.\]
This is also Mamedov's motivation for his definition of the fractional Mellin integral. Indeed, he used it to define the fractional derivative (\(c=0\)) for \(\alpha\in]0,1[,\) by
\[(E^{\alpha}f)(x):=\lim_{h\to 1}\frac{(-1)^{1-\alpha}\big{[}(J^{1-\alpha}_{0+}f)(xh^{-1})-(J^{1-\alpha}_{0+}f)(x)\big{]}}{\log h},\]
and for \(\alpha>1,\) by
\[(E^{\alpha}f)(x)=E^{[\alpha]}(E^{\alpha-[\alpha]}f)(x).\]
Thus, for example, if \(\alpha\in]0,1[,\) we easily have
\[(E^{\alpha}f)(x)=(-1)^{2-\alpha}\Theta_{0}(J^{1-\alpha}_{0+}f)(x)=(-1)^{2- \alpha}(D^{\alpha}_{0+}f)(x),\]
which also gives the link between Mamedov's definition and our present one. He proceeds analogously in the case \(\alpha>1.\) Using his definition of the fractional integral, Mamedov then studies the Mellin transforms of the fractional integrals and derivatives of a function \(f\) (see section 23 of [45]). From these results it would have been possible to deduce a version of the fundamental theorem of the integral and differential calculus in his fractional frame, in the special case when the function \(f,\) its fractional derivative and its fractional integral all belong to the space \(X_{0}.\) However, he presents it explicitly only for integer values of \(\alpha\) (formula (22.3), page 169). Nevertheless it is indeed a surprising result, the only comparable result being that for the Chebyshev transform [30] of 1975. For this very reason the late Prof. Mamedov is a true pioneer of Mellin analysis. On the other hand, the approach given in [18] is somewhat more direct and simpler, and the present versions of the fundamental theorem in the local spaces \(X_{c,loc},\) given in Theorems 3 and 4 below, are more general and elegant. But recall that Mamedov's first papers appeared in 1979/81, [46, 47, 48], thus almost twenty years earlier than [18].
We have the following
**Proposition 12**: _We have, for \(m\in\mathbb{N},\leavevmode\nobreak\ x>0,\)_
\[\delta^{m}x^{c}f(x)=x^{c}\Theta_{c}^{m}f(x).\]
**Proof.** For \(m=1\) we have
\[\delta x^{c}f(x)=x(cx^{c-1}f(x)+x^{c}f^{\prime}(x))=x^{c}(cf(x)+xf^{\prime}(x) )=x^{c}\Theta_{c}f(x).\]
Now we suppose that the relation holds for \(m\) and prove that it holds for \(m+1.\)
\[\delta^{m+1}(x^{c}f(x))=\delta(\delta^{m}(x^{c}f(x)))=\delta(x^{c}\Theta_{c}^{m}f(x))=x^{c}\Theta_{c}(\Theta^{m}_{c}f(x))=x^{c}\Theta_{c}^{m+1}f(x),\]
and so the assertion follows. \(\Box\)
For \(r\in\mathbb{N},\)\(\Theta^{r}_{c}f(x)\) is given by the following proposition, which also gives the connection between Mellin and ordinary derivatives (these relations were also given in [22], [42], but without proofs).
**Proposition 13**: _Let \(f\in X_{c,loc}\) be such that \(\Theta^{r}_{c}f(x)\) exists at the point \(x\) for \(r\in\mathbb{N}.\) Then \((D^{r}_{0+,c}f)(x)\) exists and_
\[(D^{r}_{0+,c}f)(x)=\Theta^{r}_{c}f(x)=\sum_{k=0}^{r}S_{c}(r,k)x^{k}f^{(k)}(x),\]
_where \(S_{c}(r,k),\)\(0\leq k\leq r,\) denote the generalized Stirling numbers of second kind, defined recursively by_
\[S_{c}(r,0):=c^{r},\leavevmode\nobreak\ S_{c}(r,r):=1,\leavevmode\nobreak\ S_{c }(r+1,k)=S_{c}(r,k-1)+(c+k)S_{c}(r,k).\]
_In particular for \(c=0\)_
\[\Theta^{r}f(x)=\sum_{k=0}^{r}S(r,k)x^{k}f^{(k)}(x)\]
\(S(r,k):=S_{0}(r,k)\) _being the (classical) Stirling numbers of the second kind._
**Proof**. For \(r=1\) (that is \(m=2\)), we have
\[(D^{1}_{0+,c}f)(x)=x^{-c}\delta^{2}x^{c}\frac{1}{\Gamma(1)}\int_{ 0}^{x}\bigg{(}\frac{u}{x}\bigg{)}^{c}\bigg{(}\log\frac{x}{u}\bigg{)}^{1-1}f(u) \frac{du}{u}\]
\[=x^{-c}\delta\bigg{(}x\frac{d}{dx}\bigg{)}\int_{0}^{x}u^{c-1}f(u) du=x^{-c}\delta x^{c}f(x)=\Theta_{c}f(x).\]
For \(r=2,\) (that is \(m=3\)), we have
\[(D^{2}_{0+,c}f)(x)=x^{-c}\delta^{3}x^{c}\frac{1}{\Gamma(2)}\int_{ 0}^{x}\bigg{(}\frac{u}{x}\bigg{)}^{c}\bigg{(}\log\frac{x}{u}\bigg{)}^{1-1}f(u) \frac{du}{u}\]
\[=x^{-c}\delta\bigg{(}x\frac{d}{dx}\bigg{)}x^{c}f(x)=x^{-c}\delta x (cx^{c-1}f(x)+x^{c}f^{\prime}(x))\]
\[=cxf^{\prime}(x)+c^{2}f(x)+(c+1)xf^{\prime}(x)+x^{2}f^{\prime \prime}(x)=x^{2}f^{\prime\prime}(x)+(2c+1)xf^{\prime}(x)+c^{2}f(x)\]
\[=\Theta^{2}_{c}f(x).\]
In the general case, using Proposition 12 we have
\[(D^{r}_{0+,c}f)(x)=x^{-c}\delta^{r}(\delta x^{c}(J^{1}_{0+,c}f)(x))\]
\[=x^{-c}\delta^{r}(x^{c}f(x))=x^{-c}x^{c}\Theta^{r}_{c}f(x)=\Theta ^{r}_{c}f(x).\]
Now in accordance with (11) and (12) we have (see [18])
\[\Theta^{r+1}_{c}f(x)=\Theta_{c}(\Theta^{r}_{c}f)(x)=x\frac{d}{dx} \Theta^{r}_{c}f(x)+c\Theta^{r}_{c}f(x)\]
\[=\sum_{k=0}^{r}S_{c}(r,k)((k+c)x^{k}f^{(k)}(x)+x^{k+1}f^{(k+1)}(x))=\sum_{k=0}^{r+1}S_{c}(r+1,k)x^{k}f^{(k)}(x),\]
and so the assertion follows. \(\Box\)
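The recursion defining \(S_{c}(r,k)\) can be checked against the formula of Proposition 13 by direct symbolic computation; the following sketch (the helper `S` and the choice \(r=4\) are our own, illustrative) builds the generalized Stirling numbers recursively and verifies the identity:

```python
import sympy as sp

x, c = sp.symbols('x c')
f = sp.Function('f')
theta = lambda g: x*sp.diff(g, x) + c*g

def S(r, k):
    # generalized Stirling numbers of the second kind, via the recursion of Proposition 13
    if k < 0 or k > r:
        return sp.Integer(0)
    if k == 0:
        return c**r
    if k == r:
        return sp.Integer(1)
    return sp.expand(S(r - 1, k - 1) + (c + k)*S(r - 1, k))

r = 4
lhs = f(x)
for _ in range(r):                 # lhs = Theta_c^r f
    lhs = theta(lhs)
rhs = sum(S(r, k)*x**k*sp.diff(f(x), x, k) for k in range(r + 1))
print(sp.simplify(sp.expand(lhs - rhs)))    # expected output: 0
```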
Note that in Proposition 13 the basic assumption that \(f\in X_{c,loc}\) is essential. Take, for example, \(g(x)=1\) for every \(x\in\mathbb{R}^{+}\) and \(c=0.\) Then \(g\not\in X_{0,loc}=DomJ^{1}_{0+}.\) This implies that we cannot compute \(D^{r}_{0+}g,\) while obviously we have \(\Theta^{r}g(x)=0,\) for any \(r\in\mathbb{N}.\) Another example is given by the function \(h(x)=\log x,\)\(x\in\mathbb{R}^{+}.\) In this instance, for \(c=0\) and \(r=1,\) we have \(\Theta h(x)=1,\) while \(h\not\in X_{0,loc}.\)
Now we turn to the fractional case. The above Proposition shows that the notion of Hadamard-type fractional derivative \(D^{\alpha}_{0+,c}\) is the natural extension of the Mellin derivative \(\Theta_{c}^{k}f,\)\(k\in\mathbb{N},\) to the fractional case, just as the Riemann-Liouville fractional derivative extends the ordinary derivative. A simple consequence of Proposition 12 is the following alternative representation of the fractional derivative of \(f\), for \(\alpha>0\)
\[(D^{\alpha}_{0+,c}f)(x)=\Theta_{c}^{m}(J^{m-\alpha}_{0+,c}f)(x)\]
where \(m=[\alpha]+1.\) Using this representation we can obtain the following Proposition
**Proposition 14**: _Let \(\alpha>0,c\in\mathbb{R},\) be fixed and \(m-1\leq\alpha<m.\) Let \(f\in X_{c,loc}\) be such that \(f^{(m)}\in X_{c,loc},\) then_
\[(D^{\alpha}_{0+,c}f)(x)=\sum_{k=0}^{m}S_{c}(m,k)x^{k}(J^{m-\alpha}_{0+,c+k}f^{(k)})(x).\]
**Proof.** First note that, by the assumptions, for any \(0<\gamma\leq 1\) the derivatives \(f^{(k)},\)\(k=1,\ldots,m,\) belong to the domain of \(J^{\gamma}_{0+,c+k}.\) Note that using a simple change of variable we can write, for every \(c\in\mathbb{R},\)
\[(J^{\gamma}_{0+,c}f)(x)=\frac{1}{\Gamma(\gamma)}\int_{1}^{+\infty}\frac{1}{v^{ c+1}}(\log v)^{\gamma-1}f(\frac{x}{v})dv.\]
Thus differentiating under the integral we easily have
\[(J^{\gamma}_{0+,c}f)^{\prime}(x)=\frac{1}{\Gamma(\gamma)}\int_{1}^{+\infty} \frac{1}{v^{c+2}}(\log v)^{\gamma-1}f^{\prime}(\frac{x}{v})dv=(J^{\gamma}_{0+, c+1}f^{\prime})(x)\]
and by an easy induction we obtain, for \(x>0\) and \(k\in\mathbb{N},\)
\[(J^{\gamma}_{0+,c}f)^{(k)}(x)=(J^{\gamma}_{0+,c+k}f^{(k)})(x).\]
Hence by Lemma 9 in [18], (see also Proposition 13), we have
\[(D^{\alpha}_{0+,c}f)(x)=\Theta^{m}_{c}(J^{m-\alpha}_{0+,c}f)(x)=\sum_{k=0}^{m} S_{c}(m,k)x^{k}(J^{m-\alpha}_{0+,c+k}f^{(k)})(x),\]
that is, the assertion. \(\Box\)
First, let \(f\) be a convergent power series as in Proposition 6. In this instance, we obtain the following formula for the derivative \(D^{\alpha}_{0+}f\):
**Proposition 15**: _Let \(\alpha>0\) be fixed and \(f\) be as in Proposition 6, such that \(f(0)=0.\) Then for \(0<x<\ell,\)_
\[(D^{\alpha}_{0+}f)(x)=\sum_{k=1}^{\infty}a_{k}k^{\alpha}x^{k}.\]
**Proof**. Putting \(m=[\alpha]+1,\) integrating and differentiating by series, and using reasoning similar to that of Proposition 6, we have
\[(D^{\alpha}_{0+}f)(x)=\delta^{m}\frac{1}{\Gamma(m-\alpha)}\int_{0 }^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{m-\alpha-1}\sum_{k=1}^{\infty}a_{k}u^{k} \frac{du}{u}=\delta^{m}\sum_{k=1}^{\infty}a_{k}(J^{m-\alpha}_{0+}t^{k})(x)\]
\[= \delta^{m}\sum_{k=1}^{\infty}a_{k}k^{-(m-\alpha)}x^{k}=\sum_{k=1} ^{\infty}a_{k}k^{-(m-\alpha)}\delta^{m}x^{k}=\sum_{k=1}^{\infty}a_{k}k^{\alpha }x^{k}.\leavevmode\nobreak\ \Box\]
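As a rough numerical cross-check of this series formula (a sketch only: it reuses the `hadamard_J` routine from the earlier sketch, and the test polynomial, step size and parameter values are arbitrary illustrative choices), \(D^{\alpha}_{0+}\) for \(0<\alpha<1\) can be evaluated directly from definition (10), approximating \(\delta=x\frac{d}{dx}\) by a central difference:

```python
# D^alpha_{0+} f (x) = x * d/dx [ J^{1-alpha}_{0+} f ](x)   for 0 < alpha < 1 (m = 1, c = 0)
def hadamard_D0(f, x, alpha, h=1e-5):
    J = lambda z: hadamard_J(f, z, 1.0 - alpha, 0.0)     # J^{1-alpha}_{0+}, i.e. c = 0
    return x*(J(x + h) - J(x - h))/(2.0*h)               # delta = x d/dx via central difference

# test on f(x) = x^2 + x^3 (a power series with f(0) = 0), against sum a_k k^alpha x^k
alpha, x = 0.6, 1.7
print(hadamard_D0(lambda t: t**2 + t**3, x, alpha))      # finite-difference value
print(2**alpha * x**2 + 3**alpha * x**3)                 # series formula of Proposition 15
```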
The above Proposition extends Lemma 5 (ii) in [25] to the case \(c=0.\) An interesting representation, for analytic functions, of the derivative \(D^{\alpha}_{0+,c}f\) is given in terms of infinite series involving the Stirling functions of the second kind \(S_{c}(\alpha,k),\) which can be defined for \(c\in\mathbb{R}\) by
\[S_{c}(\alpha,k):=\frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\left(\begin{array}[]{c}k \\ j\end{array}\right)(c+j)^{\alpha}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (\alpha\in\mathbb{C},\leavevmode\nobreak\ k\in I\!\!N_{0}).\]
This representation, given in [25], is as follows:
**Proposition 16**: _Let \(f:\mathbb{R}^{+}\rightarrow\mathbb{R}\) be an arbitrarily often differentiable function such that its Taylor series converges and let \(\alpha>0,\leavevmode\nobreak\ c>0.\) Then_
\[(D^{\alpha}_{0+,c}f)(x)=\sum_{k=0}^{\infty}S_{c}(\alpha,k)x^{k}f^{(k)}(x) \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x>0).\]
For \(c=0\) an inverse formula is also available, expressing the classical Riemann-Liouville fractional derivative in terms of the Mellin derivatives (see [27]), namely
\[x^{\alpha}({\mathcal{D}}^{\alpha}_{0+}f)(x)=\sum_{k=0}^{\infty}s(\alpha,k)(D^{ k}_{0+}f)(x),\leavevmode\nobreak\ \leavevmode\nobreak\ (\alpha>0,\leavevmode \nobreak\ x>0),\]
where \({\mathcal{D}}^{\alpha}_{0+}f\) denotes the Riemann-Liouville fractional derivative and \(s(\alpha,k)\) the Stirling functions of the first kind.
An analogous representation holds also for the fractional integrals \(J^{\alpha}_{0+,c}f,\) namely (see [25])
**Proposition 17**: _Let \(f\in DomJ^{\alpha}_{0+,c},\) and let \(f:\mathbb{R}^{+}\rightarrow\mathbb{R}\) satisfy the hypothesis of Proposition 13. Then_
\[(J^{\alpha}_{0+,c}f)(x)=\sum_{k=0}^{\infty}S_{c}(-\alpha,k)x^{k}f^{(k)}(x) \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x>0).\]
Since \((D^{\alpha}_{0+,c}f)(x)\) and \((J^{\alpha}_{0+,c}f)(x),\) for \(\alpha>0,\) and \(S_{c}(\alpha,k),\) for \(\alpha\in\mathbb{R},\ k\in\mathbb{N}_{0},\) are continuous functions of \(c\in\mathbb{R}\) at \(c=0,\) we can let \(c\to 0\) in the previous Propositions and deduce corresponding representations of Hadamard fractional differentiation and integration in terms of the Stirling functions \(S(\alpha,k)\) and classical derivatives, provided both \(J^{\alpha}_{0+}f\) and \(D^{\alpha}_{0+}f\) exist (for details see [25]).
Now we introduce certain Mellin-Sobolev type spaces which will be useful in the following (see also [18]). Firstly, we define
\[AC_{loc}:=\{f:\mathbb{R}^{+}\rightarrow\mathbb{C}:f(x)=\int_{0}^{x}g(t)dt, \leavevmode\nobreak\ \mbox{for a given}\leavevmode\nobreak\ g\in L^{1}_{loc}( \mathbb{R}^{+})\}.\]
Recall that \(L^{1}_{loc}(\mathbb{R}^{+})\) stands for the space of all (Lebesgue) measurable functions \(g:\mathbb{R}^{+}\rightarrow\mathbb{C}\) such that
\[\int_{0}^{x}g(t)dt\]
exists as a Lebesgue integral for every \(x>0.\) For any \(f\in AC_{loc}\) we have \(f^{\prime}=g\) a.e., where \(f^{\prime}\) denotes the usual derivative. For any \(c\in\mathbb{R},\) we define
\[AC_{c,loc}:=\{f\in X_{c,loc}:(\cdot)^{c}f(\cdot)\in AC_{loc}\}.\]
For any \(c\in\mathbb{R}\) we define \(AC^{1}_{c,loc}=AC_{c,loc}\) and for \(m\in\mathbb{N},\)\(m\geq 2,\)
\[AC_{c,loc}^{m}:=\{f\in AC_{c,loc}:\delta^{m-1}((\cdot)^{c}f(\cdot))\in AC_{loc }\}.\]
We have the following
**Lemma 3**: _If \(f\in AC_{c,loc}^{m},\) then the Mellin derivative \(\Theta_{c}^{m}f\) exists and \(\Theta_{c}^{m}f\in X_{c,loc}.\)_
**Proof**. Since \(\delta^{m-1}((\cdot)^{c}f(\cdot))\in AC_{loc},\) we have
\[\frac{d}{dx}\delta^{m-1}(x^{c}f(x))\in L^{1}_{loc}(\mathbb{R}^{+}).\]
But, using Proposition 12
\[\frac{d}{dx}\delta^{m-1}(x^{c}f(x))=x^{-1}\delta^{m}(x^{c}f(x))=x^{c-1}\Theta_ {c}^{m}f(x),\]
and so the assertion follows. \(\Box\)
**Lemma 4**: _If \(f\in AC^{m}_{c,loc},\)\(m\geq 2,\) then \(\delta^{j}((\cdot)^{c}f(\cdot))\in AC_{loc},\) for \(j=0,1,\ldots,m-2,\) and_
\[\lim_{x\to 0^{+}}\delta^{j}((x)^{c}f(x))=0.\]
**Proof**. The case \(m=2\) follows immediately from the definitions, while for \(m>2\) one can use the relation
\[\delta^{j-1}((x)^{c}f(x))=\int_{0}^{x}\delta^{j}((u)^{c}f(u))\frac{du}{u}, \leavevmode\nobreak\ \leavevmode\nobreak\ j=1,2\ldots m-2.\Box\]
The following result gives a representation of functions in \(AC^{m}_{c,loc}.\) A similar result for functions defined on a compact interval \([a,b]\subset\mathbb{R}^{+}\) is given in [41].
**Lemma 5**: _Let \(f\in AC^{m}_{c,loc},\)\(m\geq 1,\) and let us assume that \(\varphi_{m}:=\frac{d}{dx}\delta^{m-1}((\cdot)^{c}f(\cdot))\in DomJ^{m}_{0+,1}.\) If there exists \(\alpha\in]0,1[\) such that \(\varphi_{m}(x)={\cal O}(x^{-\alpha}),\leavevmode\nobreak\ \leavevmode\nobreak \ x\to 0^{+},\) then we have necessarily_
\[f(x)=x^{1-c}J^{m}_{0+,1}\varphi_{m}(x).\]
**Proof**. For \(m=1,\) there exists \(\varphi_{1}\in L^{1}_{loc}(\mathbb{R}^{+})\) such that
\[f(x)=x^{-c}\int_{0}^{x}\varphi_{1}(t)dt=x^{1-c}J^{1}_{0+,1}\varphi_{1}(x),\]
and so the assertion follows.
For \(m=2,\)\(\delta((\cdot)^{c}f(\cdot))\in AC_{loc},\) and so there exists \(\varphi_{2}\in L^{1}_{loc}(\mathbb{R}^{+})\) such that
\[\delta(t^{c}f(t))=\int_{0}^{t}\varphi_{2}(u)du.\]
Let \(\varepsilon>0\) be fixed. Integrating the above relation in the interval \([\varepsilon,x]\) we have
\[\int_{\varepsilon}^{x}\delta(t^{c}f(t))\frac{dt}{t}=\int_{\varepsilon}^{x} \bigg{(}\int_{0}^{t}\varphi_{2}(u)du\bigg{)}\frac{dt}{t}.\]
Integrating by parts, we get
\[x^{c}f(x)-\varepsilon^{c}f(\varepsilon)=\bigg{[}\log t\int_{0}^{ t}\varphi_{2}(u)du\bigg{]}_{\varepsilon}^{x}-\int_{\varepsilon}^{x}\log t \varphi_{2}(t)dt\]
\[= \log x\int_{0}^{\varepsilon}\varphi_{2}(t)dt+\int_{\varepsilon}^{ x}\log\frac{x}{t}\leavevmode\nobreak\ \varphi_{2}(t)dt-\log\varepsilon\int_{0} ^{\varepsilon}\varphi_{2}(t)dt.\]
Letting \(\varepsilon\to 0^{+},\) since \(\varphi_{2}\in DomJ^{2}_{0+,1},\) by Lemma 4, we obtain
\[x^{c}f(x)=\int_{0}^{x}\log\frac{x}{t}\leavevmode\nobreak\ \varphi_{2}(t)dt- \lim_{\varepsilon\to 0^{+}}\log\varepsilon\int_{0}^{\varepsilon} \varphi_{2}(t)dt.\]
Since by assumption \(\varphi_{2}(t)={\cal O}(t^{-\alpha}),\ t\to 0^{+},\) using L'Hospital's rule the limit on the right-hand side of the previous relation is zero. Thus,
\[f(x)=x^{-c}\int_{0}^{x}\log\frac{x}{t}\leavevmode\nobreak\ \varphi_{2}(t)dt=x^ {1-c}J^{2}_{0+,1}\varphi_{2}(x).\]
For the general case one can apply the same method, using the binomial formula \(\Box\)
Now, for every \(c\in\mathbb{R}\) and \(m\in\mathbb{N},\) we introduce the Mellin-Sobolev space by
\[{\mathcal{X}}_{c,loc}^{m}:=\{f\in X_{c,loc}:f=g\leavevmode\nobreak\ \mbox{a.e. in}\leavevmode\nobreak\ \mathbb{R}^{+},\mbox{for}\leavevmode\nobreak\ g\in AC ^{m}_{c,loc}\}\]
A non-local version of the above space, denoted by \({\mathcal{X}}_{c}^{m}\) is defined in [18]. It contains all the functions \(f:\mathbb{R}^{+}\rightarrow\mathbb{C}\) such that \(f\in X_{c}\) and there exists \(g\in AC^{m}_{c,loc}\) such that \(f=g\) a.e. in \(\mathbb{R}^{+}\) with \(\Theta^{m}_{c}f\in X_{c}.\)
Note that, if \(f\in X_{c}\) is such that \(J^{m}_{0+,c}f\in X_{c}\) then \(J^{m}_{0+,c}f\in{\mathcal{X}}^{m}_{c}\) (see [18], Theorem 11).
In particular, a function \(f\in{\mathcal{X}}_{c}^{m}\) is such that \(f\in X_{c},\)\(\Theta^{m}_{c}f\) exists and \(\Theta^{m}_{c}f\in X_{c}.\) This suggests a way to define the fractional versions of the above spaces. For a given \(\alpha>0,\) we define
\[{\mathcal{X}}_{c}^{\alpha}:=\{f\in X_{c}:(D^{\alpha}_{0+,c}f)(x)\leavevmode \nobreak\ \mbox{exists a.e. and}\leavevmode\nobreak\ D^{\alpha}_{0+,c}f\in X_{ c}\}\]
and its local version
\[{\mathcal{X}}_{c,loc}^{\alpha}:=\{f\in X_{c,loc}:(D^{\alpha}_{0+,c}f)(x) \leavevmode\nobreak\ \mbox{exists a.e. and}\leavevmode\nobreak\ D^{\alpha}_{0+ ,c}f\in X_{c,loc}\}.\]
Analogously we can define the spaces \({\mathcal{X}}^{\alpha}_{J},\) for any interval \(J\) as
\[{\mathcal{X}}^{\alpha}_{J}:=\{f\in X_{J}:(D^{\alpha}_{0+,c}f)(x)\leavevmode \nobreak\ \mbox{exists a.e. for every}\leavevmode\nobreak\ c\in J\leavevmode \nobreak\ \mbox{and}\leavevmode\nobreak\ (D^{\alpha}_{0+,c}f)(x)\in X_{J}\}\]
and its local version \({\mathcal{X}}^{\alpha}_{J,loc}.\) We begin with the following
**Proposition 18**: _Let \(f\in{\mathcal{X}}^{\alpha}_{c,loc}\) be such that \(\Theta_{c}^{m}f\in X_{c,loc},\) where \(m=[\alpha]+1.\) Then_
\[(D^{\alpha}_{0+,c}f)(x)=\Theta_{c}^{m}(J^{m-\alpha}_{0+,c}f)(x)=J^{m-\alpha}_{ 0+,c}(\Theta_{c}^{m}f)(x).\]
**Proof.** Since \(f\in X_{c,loc}\) and \(0<m-\alpha<1,\)\(f\in DomJ^{m-\alpha}_{0+,c}\) and \(\Theta^{m}_{c}f\in DomJ^{m-\alpha}_{0+,c}\) by Corollary 1. The first equality is already stated as a consequence of Proposition 12, thus we will prove the other equality. We obtain by (10)
\[(D^{\alpha}_{0+,c}f)(x)=x^{-c}\delta^{m}(x^{c}(J^{m-\alpha}_{0+,c }f))(x)\]
\[=x^{-c}\bigg{(}\delta^{m}\bigg{[}x^{c}\frac{1}{\Gamma(m-\alpha)} \int_{0}^{x}\bigg{(}\frac{v}{x}\bigg{)}^{c}\bigg{(}\log\frac{x}{v}\bigg{)}^{m- \alpha-1}f(v)\frac{dv}{v}\bigg{]}\bigg{)}(x)\]
\[=x^{-c}\bigg{(}\delta^{m}\bigg{[}\frac{1}{\Gamma(m-\alpha)}\int_{ 1}^{+\infty}\frac{x^{c}}{t^{c+1}}(\log t)^{m-\alpha-1}f(\frac{x}{t})dt\bigg{]} \bigg{)}(x)\]
\[=x^{-c}\sum_{k=0}^{m}S(m,k)x^{k}\frac{d^{k}}{dx^{k}}\bigg{[}\frac {1}{\Gamma(m-\alpha)}\int_{1}^{+\infty}\frac{x^{c}}{t^{c+1}}(\log t)^{m-\alpha -1}f(\frac{x}{t})dt\bigg{]}\]
\[=\frac{x^{-c}}{\Gamma(m-\alpha)}\int_{1}^{+\infty}\sum_{k=0}^{m}S (m,k)x^{k}\frac{d^{k}}{dx^{k}}(x^{c}f(\frac{x}{t}))(\log t)^{m-\alpha-1}\frac{ dt}{t^{c+1}}.\]
Using the elementary formula for the derivatives of the product, we have
\[(D^{\alpha}_{0+,c}f)(x)\]
\[=\frac{x^{-c}}{\Gamma(m-\alpha)}\int_{1}^{+\infty}\sum_{k=0}^{m}S (m,k)x^{k}\sum_{j=0}^{k}\left(\begin{array}[]{c}k\\ j\end{array}\right)\prod_{\nu=0}^{j-1}(c-\nu)\frac{x^{c-j}}{t^{k-j}}f^{(k-j)}( x/t)(\log t)^{m-\alpha-1}\frac{dt}{t^{c+1}}\]
\[=\frac{x^{-c}}{\Gamma(m-\alpha)}\int_{0}^{x}\sum_{k=0}^{m}S(m,k)v ^{k}\frac{d^{k}}{dv^{k}}(v^{c}f(v))(\log(x/v))^{m-\alpha-1}\frac{dv}{v}\]
\[=\frac{x^{-c}}{\Gamma(m-\alpha)}\int_{0}^{x}\delta^{m}(v^{c}f(v))(\log(x/v))^{m-\alpha-1}\frac{dv}{v}=J^{m-\alpha}_{0+,c}(\Theta_{c}^{m}f)(x),\]
where we have used Proposition 12. Thus the assertion follows. \(\Box\)
In order to prove a new fractional version of the fundamental theorem of the differential and integral calculus in the Mellin frame, first we give the following proposition concerning the case \(\alpha=m\in\mathbb{N}.\) Recall that in this case, using the representation in terms of iterated integrals, \(J^{m}_{0+,c}f\) is m-times differentiable, whenever \(f\in DomJ^{m}_{0+,c}.\)
**Proposition 19**: _We have:_
1. _Let_ \(f\in{\mathcal{X}}^{1}_{c,loc},\) _then_ \[J^{1}_{0+,c}(\Theta_{c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ a. e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(m\in\mathbb{N},m>1,\) _and let_ \(f\in{\mathcal{X}}^{m}_{c,loc}\) _be such that_ \(\Theta^{m}_{c}f\in DomJ^{m}_{0+,c}.\) _Then_ \[J^{m}_{0+,c}(\Theta^{m}_{c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak \ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
3. _Let_ \(f\in X_{c,loc},\) _then_ \[\Theta_{c}(J^{1}_{0+,c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ a. e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
4. _Let_ \(f\in DomJ^{m}_{0+,c},\) _then_ \[\Theta^{m}_{c}(J^{m}_{0+,c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak \ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. As to (i) we have, by the absolute continuity and Lemma 4
\[J^{1}_{0+,c}(\Theta_{c}f)(x)=\int_{0}^{x}\bigg{(}\frac{u}{x}\bigg{)}^{c}( \Theta_{c}f)(u)\frac{du}{u}=x^{-c}\int_{0}^{x}\frac{d}{du}(u^{c}f(u))du=f(x),\]
a.e. \(x\in\mathbb{R}^{+}.\)
For (ii) we can obtain the result using the iterated representation of \(J^{m}_{0+,c}f,\) m-times the absolute continuity, Lemma 4 and Proposition 12.
As to (iii) note
\[\Theta_{c}(J^{1}_{0+,c}f)(x)=x\frac{d}{dx}(J^{1}_{0+,c}f)(x)+c(J^{1}_{0+,c}f)( x).\]
and we have
\[x\frac{d}{dx}(J^{1}_{0+,c}f)(x)=-cx^{-c}\int_{0}^{x}u^{c-1}f(u)du+f(x)\]
from which we obtain the assertion.
Finally we prove (iv) and we will use again induction. Assuming that (iv) holds for \(m-1,\) we have by Theorem 2 (iii),
\[\Theta^{m}_{c}(J^{m}_{0+,c}f)(x) = \Theta_{c}(\Theta^{m-1}_{c}(J^{m-1}_{0+,c}(J^{1}_{0+,c}f)))(x)=f( x),\]
a.e. \(x\in\mathbb{R}^{+},\) by the induction assumption and part (iii). \(\Box\)
Proposition 19 gives a version of Theorem 11 in [18] for the spaces \(X_{c,loc},\) without the use of Mellin transforms and under sharp assumptions. As a consequence, for spaces \(X_{c},\) we deduce again the formula for the Mellin transform of \(J^{m}_{0+,c}f\) whenever \(J^{m}_{0+,c}f\in X_{c}\)
\[[J^{m}_{0+,c}f]^{\wedge}_{M}(c+it)=(-it)^{-m}[f]^{\wedge}_{M}(c+it).\]
Now we are ready to prove the fundamental theorem of the fractional differential and integral calculus in the Mellin frame.
**Theorem 3**: _Let \(\alpha>0\) be fixed and \(m=[\alpha]+1.\)_
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{c,loc}\cap{\mathcal{X}}^{m}_{c,loc},\) _such that_ \(D^{\alpha}_{0+,c}f,\Theta^{m}_{c}f\in DomJ^{m}_{0+,c}.\) _Then_ \[(J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=f(x),\leavevmode\nobreak\ a.e. \leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in DomJ^{m}_{0+,c},\) _be such that_ \(J^{\alpha}_{0+,c}f\in X_{c,loc}.\) _Then_ \[(D^{\alpha}_{0+,c}(J^{\alpha}_{0+,c}f))(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. As to part a), by Propositions 12, 18, 19 and Theorem 2, we have for a.e. \(x\in\mathbb{R}^{+}\)
\[(J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=(J^{\alpha}_{0+,c}(J_{0+,c}^{m-\alpha}(\Theta_{c}^{m}f)))(x)=(J^{m}_{0+,c}(\Theta_{c}^{m}f))(x)=f(x).\]
As to part b), we have, by Propositions 12, 19 and Theorem 2,
\[(D^{\alpha}_{0+,c}(J_{0+,c}^{\alpha}f))(x)=x^{-c}\delta^{m}(x^{c} J_{0+,c}^{m-\alpha}(J_{0+,c}^{\alpha}f))(x)\]
\[=x^{-c}\delta^{m}(x^{c}J_{0+,c}^{m}f)(x)=\Theta_{c}^{m}(J_{0+,c}^ {m}f)(x)=f(x),\]
almost everywhere. \(\Box\)
A result related to part a) is also described in [42], Lemma 2.35, for functions \(f\) belonging to the subspace
\[J^{\alpha}_{0+,\mu}(X^{p}_{c}):=\{f=J^{\alpha}_{0+,\mu}g,\leavevmode\nobreak\ \mbox{for}\leavevmode\nobreak\ g\in X^{p}_{c}\}\]
with \(\mu>c.\) In this instance the formula is a simple consequence of part b), using the integral representation of \(f.\) Related results in spaces \(X^{p}_{\nu}\) with \(c>\nu\) are given in [42], Property 2.28. Note that for \(p=1,\) if \(f\in X_{\nu},\) with \(c>\nu,\) then \(J^{\alpha}_{0+,c}f\in X_{\nu},\) so that our assumption is satisfied. For bounded intervals \(I\) similar results are also given in [41], for functions belonging to \(L^{p}(I)\).
More generally, we can, with our approach, also consider compositions between the operators of Hadamard-type fractional integrals and derivatives, in local spaces (for similar results in \(X^{p}_{c}\) spaces see [41] on bounded intervals, and [42] in \(\mathbb{R}^{+}\)).
**Theorem 4**: _Let \(\alpha,\beta>0\) with \(\beta>\alpha\) and \(m=[\alpha]+1.\)_
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{c,loc}\cap{\mathcal{X}}^{m}_{c,loc},\) _such that_ \(D^{\alpha}_{0+,c}f\in DomJ^{\beta}_{0+,c}\) _and_ \(\Theta^{m}_{c}f\in DomJ^{m+\beta-\alpha}_{0+,c}.\) _Then_ \[(J^{\beta}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=(J^{\beta-\alpha}_{0+,c}f)(x), \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in DomJ^{m+\beta-\alpha}_{0+,c}.\) _Then_ \[(D^{\alpha}_{0+,c}(J^{\beta}_{0+,c}f))(x)=(J^{\beta-\alpha}_{0+,c}f)(x), \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. Regarding a’), as in the proof of Theorem 3, we have for a.e. \(x\in\mathbb{R}^{+}\)
\[(J^{\beta}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=(J^{\beta}_{0+,c}(J_{0+ ,c}^{m-\alpha}(\Theta_{c}^{m}f)))(x)=J^{\beta-\alpha}_{0+,c}(J^{m}_{0+,c}( \Theta_{c}^{m}f))(x)=(J^{\beta-\alpha}_{0+,c}f)(x).\]
Regarding b’), we have
\[(D^{\alpha}_{0+,c}(J_{0+,c}^{\beta}f))(x)=x^{-c}\delta^{m}(x^{c}J _{0+,c}^{m-\alpha+\beta}f)(x)=\Theta_{c}^{m}(J_{0+,c}^{m+\beta-\alpha}f)(x)=(J ^{\beta-\alpha}_{0+,c}f)(x).\leavevmode\nobreak\ \leavevmode\nobreak\ \Box\]
## 5 A relation between pointwise and strong fractional Mellin derivatives
In this section we will compare the definitions of the Mellin derivative in the strong and pointwise versions. For this purpose, we need some further notations and preliminary results.
For \(h,x\in\mathbb{R}^{+}\) and \(\widetilde{c}\in\mathbb{R},\) we define
\[m^{\widetilde{c}}_{h}(x):=\left\{\begin{array}[]{ll}x^{- \widetilde{c}}\chi_{[1/h,1]}(x),\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{if}\leavevmode\nobreak\ h\geq 1,\\ -x^{-\widetilde{c}}\chi_{[1,1/h]}(x),\leavevmode\nobreak\ \leavevmode\nobreak \ \mbox{if}\leavevmode\nobreak\ 0<h<1.\end{array}\right.\]
It is clear that \(m^{\widetilde{c}}_{h}\in X_{(-\infty,\infty)}\) and we have (see [18])
\[[m^{\widetilde{c}}_{h}]^{\wedge}_{M}(s)=\left\{\begin{array}[]{ll }\frac{1}{\widetilde{c}-s}(h^{\widetilde{c}-s}-1),\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{if}\leavevmode\nobreak\ s\in\mathbb{C}\setminus\{ \widetilde{c}\},\\ \log h,\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{if}\leavevmode\nobreak \ s=\widetilde{c}.\end{array}\right.\]
Denoting the \(r\)th-fold convolution of \(m^{\widetilde{c}}_{h}\) with itself by \((m^{\widetilde{c}}_{h}\ast)^{r},\leavevmode\nobreak\ r\in\mathbb{N},\) we have, by Theorem 3 in [18]
\[[(m^{\widetilde{c}}_{h}\ast)^{r}]^{\wedge}_{M}(s)=(\widetilde{c}-s)^{-r}(h^{ \widetilde{c}-s}-1)^{r},\leavevmode\nobreak\ \leavevmode\nobreak\ s\in\mathbb{ C}\setminus\{\widetilde{c}\}.\]
We recall that by Proposition 2, one has for \(f\in X_{[a,b]},\)\(c,\nu\in[a,b]\) and \(r\in\mathbb{N},\)
\[M\left[\sum_{k=0}^{r}(-1)^{r-k}\left(\begin{array}[]{c}r\\ k\end{array}\right)\tau^{c}_{h^{k}}f\right](\nu+it)=(h^{c-\nu-it}-1)^{r}M[f]( \nu+it).\] (13)
In [18] (Proposition 6, formula (8.8)), the following lemma was established:
**Lemma 6**: _If \(f\in{\mathcal{X}}^{r}_{[a,b]},\leavevmode\nobreak\ r\in\mathbb{N},\) then for \(c\in[a,b],\leavevmode\nobreak\ h>1\) we have, for \(x\in\mathbb{R}^{+}\)_
\[\sum_{k=0}^{r}(-1)^{r-k}\left(\begin{array}[]{c}r\\ k\end{array}\right)\tau^{c}_{h^{k}}f(x)=x^{-c}\bigg{(}(m^{0}_{h}\ast)^{r}\ast( \Theta^{r}_{c}f(\cdot)(\cdot)^{c})\bigg{)}(x).\]
**Lemma 7**: _If \(f\in{\mathcal{X}}^{r}_{[a,b]},\leavevmode\nobreak\ r\in\mathbb{N},\) then for \(c,\nu\in[a,b],\) we have_
\[M[\Theta^{r}_{c}f](\nu+it)=(c-\nu-it)^{r}M[f](\nu+it).\]
**Proof**. Let us put, for \(x\in\mathbb{R}^{+}\) and \(h>1,\)
\[G(x):=\bigg{(}(m^{0}_{h}\ast)^{r}\ast(\Theta^{r}_{c}f(\cdot)(\cdot)^{c})\bigg{)}(x).\]
Since \(\Theta^{r}_{c}f\in X_{\nu}\) by assumption, it is easy to see that \(\Theta^{r}_{c}f(\cdot)(\cdot)^{c}\in X_{\nu-c}.\) Then \(G\in X_{\nu-c}\) and so \((\cdot)^{-c}G(\cdot)\in X_{\nu}.\) Hence by Lemma 6, Proposition 2 and (13) we have
\[M[(\cdot)^{-c}G(\cdot)](\nu+it)=M\left[\sum_{k=0}^{r}(-1)^{r-k} \left(\begin{array}[]{c}r\\ k\end{array}\right)\tau^{c}_{h^{k}}f\right](\nu+it)\]
\[=(h^{c-\nu-it}-1)^{r}M[f](\nu+it).\]
Using Proposition 1(c) in [18] and the convolution theorem, (Lemma 2(ii)), we have
\[M[(\cdot)^{-c}G(\cdot)](\nu+it) = M[G](\nu-c+it)\]
\[= M[(m^{0}_{h}\ast)^{r}](\nu-c+it)M[\Theta^{r}_{c}f(\cdot)(\cdot)^ {c}](\nu-c+it)\]
\[= (h^{c-\nu-it}-1)^{r}(c-\nu-it)^{-r}M[\Theta^{r}_{c}f](\nu+it),\]
from which we deduce the assertion. \(\Box\)
We prove the main theorem of this section.
**Theorem 5**: _Let \(\alpha>0\) be fixed._
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{c}\) _such that_ \(\Theta^{m}_{c}f\in X_{c},\) _where_ \(m=[\alpha]+1.\) _Then_ \(f\in W^{\alpha}_{X_{c}}\) _and_ \[(D^{\alpha}_{0+,c}f)(x)=\mbox{s-}\Theta^{\alpha}_{c}f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{[a,b]}\) _such that_ \(\Theta^{m}_{c}f\in X_{[a,b]},\) _for_ \(c\in]a,b[,\) _where_ \(m=[\alpha]+1.\) _Then_ \(f\in W^{\alpha}_{X_{[a,b]}}\) _and_ \[(D^{\alpha}_{0+,c}f)(x)=\mbox{s-}\Theta^{\alpha}_{c}f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+},\leavevmode \nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ c\in]a,b[.\]
**Proof**. (i) By Proposition 18 we have
\[(D^{\alpha}_{0+,c}f)(x)=(J^{m-\alpha}_{0+,c}(\Theta^{m}_{c}f))(x),\]
which belongs to \(X_{c}.\) Thus, passing to Mellin transforms, we have, for \(t\in\mathbb{R},\)
\[[D^{\alpha}_{0+,c}f]^{\wedge}_{M}(c+it)=[(J^{m-\alpha}_{0+,c}( \Theta^{m}_{c}f))]^{\wedge}_{M}(c+it)\]
\[=(-it)^{\alpha-m}[\Theta_{c}^{m}f]^{\wedge}_{M}(c+it)=(-it)^{ \alpha}[f]^{\wedge}_{M}(c+it)=[\mbox{s-}\Theta^{\alpha}_{c}f]^{\wedge}_{M}(c+ it).\]
Hence, \(D^{\alpha}_{0+,c}f\) and s-\(\Theta^{\alpha}_{c}f\) have the same Mellin transform along the line \(s=c+it\), and so the assertion follows by the identity theorem (see [18]).
(ii) Again, using Proposition 18, and taking the Mellin transform on the line \(s=\nu+it,\) for \(\nu\in]a,b[\) with \(\nu<c,\) we obtain
\[[D^{\alpha}_{0+,c}f]^{\wedge}_{M}(\nu+it)=[J^{m-\alpha}_{0+,c}(\Theta^{m}_{c}f )]^{\wedge}_{M}(\nu+it)=(c-\nu-it)^{\alpha}[f]^{\wedge}_{M}(\nu+it),\]
and so the assertion follows as before. \(\Box\)
The above theorem reproduces Theorem 4.3 in [19] for integral values of \(\alpha\): for \(\alpha=k\in\mathbb{N},\) the pointwise Mellin derivative \(\Theta^{k}_{c}f\) equals the strong derivative s-\(\Theta^{k}_{c}f,\) as defined in [19], for functions belonging to the space \(W^{k}_{X_{c}}.\)
As a consequence of Theorem 5, for the spaces \(X_{c}\) we can give more direct proofs of the fundamental formulae of integral and differential calculus in the fractional Mellin setting, now using the Mellin transform. We begin with the following
**Theorem 6**: _Let \(\alpha>0\) be fixed._
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{c}\) _be such that_ \(D^{\alpha}_{0+,c}f\in DomJ^{\alpha}_{0+,c},\) _and_ \(J^{\alpha}_{0+,c}f\in X_{c}.\) _Then_ \[(J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=f(x)\leavevmode\nobreak\ a.e. \leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in DomJ^{\alpha}_{0+,c}\cap X_{c}\) _be such that_ \(J^{\alpha}_{0+,c}f\in{\mathcal{X}}^{\alpha}_{c}.\) _Then we have_ \[(D^{\alpha}_{0+,c}J^{\alpha}_{0+,c}f)(x)=f(x),\leavevmode\nobreak\ a.e. \leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. As to part a), we can compute the Mellin transforms, obtaining
\[[J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}f)]^{\wedge}_{M}(c+it)=(-it)^ {-\alpha}[D^{\alpha}_{0+,c}f]^{\wedge}_{M}(c+it)\]
\[= (-it)^{-\alpha}(-it)^{\alpha}[f]^{\wedge}_{M}(c+it)=[f]^{\wedge}_ {M}(c+it)\]
and so the assertion follows by the uniqueness theorem of Mellin transform.
Part b) is carried out using the same approach. \(\Box\)
In comparison with Theorem 4 we have, under different assumptions, the following
**Theorem 7**: _Let \(\alpha,\beta>0\) with \(\beta>\alpha.\)_
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{c}.\) _If_ \(D^{\alpha}_{0+,c}f\in DomJ^{\beta}_{0+,c},\) _and_ \(J^{\beta}_{0+,c}(D^{\alpha}_{0+,c}f)\in X_{c},\) _then_ \[(J^{\beta}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=J^{\beta-\alpha}_{0+,c}f(x), \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in DomJ^{\beta}_{0+,c}\cap X_{c}\) _be such that_ \(J^{\beta}_{0+,c}f\in{\mathcal{X}}^{\alpha}_{c}.\) _Then_ \[(D^{\alpha}_{0+,c}J^{\beta}_{0+,c}f)(x)=(J^{\beta-\alpha}_{0+,c}f)(x), \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. As to part a’), using again the Mellin transform, we have
\[[J^{\beta}_{0+,c}(D^{\alpha}_{0+,c}f)]^{\wedge}_{M}(c+it)=(-it)^{-\beta}[D^{\alpha}_{0+,c}f]^{\wedge}_{M}(c+it)\]
\[= (-it)^{-\beta}(-it)^{\alpha}[f]^{\wedge}_{M}(c+it)=[J^{\beta- \alpha}_{0+,c}f]^{\wedge}_{M}(c+it),\]
and so the assertion follows again by the uniqueness theorem. As to part b’), the proof is similar to the previous one. \(\Box\)
For the special case of spaces \(X_{[a,b]},\) we have the following two further results.
**Theorem 8**: _Let \(\alpha>0\) be fixed._
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{[a,b]}\) _and_ \(c\in]a,b].\) _If_ \(D^{\alpha}_{0+,c}f\in DomJ^{\alpha}_{0+,c},\) _then_ \[(J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=f(x)\leavevmode\nobreak\ a.e. \leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(c,\nu\in[a,b]\) _with_ \(\nu<c.\) _If_ \(f\in DomJ^{\alpha}_{0+,c}\cap X_{[a,b]}\) _is such that_ \(J^{\alpha}_{0+,c}f\in{\mathcal{X}}^{\alpha}_{\nu},\) _then_ \[(D^{\alpha}_{0+,c}J^{\alpha}_{0+,c}f)(x)=f(x),\leavevmode\nobreak\ a.e. \leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. As to part a”) take \(\nu\in[a,b]\) with \(\nu<c.\) Using the Mellin transform on the line \(s=\nu+it,\) we have by Proposition 11 and Theorem 5
\[[J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}f)]^{\wedge}_{M}(\nu+it)=(c-\nu-it)^{-\alpha}[D^{\alpha}_{0+,c}f]^{\wedge}_{M}(\nu+it)\]
\[= (c-\nu-it)^{-\alpha}(c-\nu-it)^{\alpha}[f]^{\wedge}_{M}(\nu+it)=[ f]^{\wedge}_{M}(\nu+it)\]
and so the assertion again follows by the uniqueness theorem. Part b”) is carried out similarly. \(\Box\)
**Theorem 9**: _Let \(\alpha,\beta>0\) be fixed with \(\beta>\alpha.\)_
1. _Let_ \(f\in{\mathcal{X}}^{\alpha}_{[a,b]}\) _and_ \(c\in]a,b].\) _If_ \(D^{\alpha}_{0+,c}f\in DomJ^{\beta}_{0+,c},\) _then_ \[(J^{\beta}_{0+,c}(D^{\alpha}_{0+,c}f))(x)=(J^{\beta-\alpha}_{0+,c}f)(x) \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(c,\nu\in[a,b]\) _with_ \(\nu<c.\) _If_ \(f\in DomJ^{\beta}_{0+,c}\cap X_{[a,b]}\) _is such that_ \(J^{\beta}_{0+,c}f\in{\mathcal{X}}^{\alpha}_{\nu},\) _then_ \[(D^{\alpha}_{0+,c}J^{\beta}_{0+,c}f)(x)=(J^{\beta-\alpha}_{0+,c}f)(x), \leavevmode\nobreak\ a.e.\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. The proof is essentially the same as in Theorem 8 taking Mellin transforms in the space \(X_{\nu}.\)\(\Box\)
Concerning the strong fractional Mellin derivatives, we have the following.
**Theorem 10**: _Let \(\alpha>0\) be fixed._
1. _Let_ \(f\in W^{\alpha}_{X_{c}}\) _be such that_ \(\mbox{s-}\Theta^{\alpha}_{c}f\in DomJ^{\alpha}_{0+,c}\) _and_ \(J^{\alpha}_{0+,c}(\mbox{s-}\Theta^{\alpha}_{c}f)\in X_{c}.\) _Then_ \[J^{\alpha}_{0+,c}(\mbox{s-}\Theta^{\alpha}_{c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{a.e}\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in W^{\alpha}_{X_{[a,b]}}\) _be such that_ \(\mbox{s-}\Theta^{\alpha}_{c}f\in DomJ^{\alpha}_{0+,c},\) _for_ \(c\in]a,b[.\) _Then_ \[J^{\alpha}_{0+,c}(\mbox{s-}\Theta^{\alpha}_{c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{a.e}\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. (i) By assumptions, we can compute the Mellin transform of the function \(J^{\alpha}_{0+,c}(\mbox{s-}\Theta^{\alpha}_{c}f),\) on the line \(s=c+it,\) obtaining, as before,
\[[J^{\alpha}_{0+,c}(\mbox{s-}\Theta^{\alpha}_{c}f)]^{\wedge}_{M}(c+it)=[f]^{ \wedge}_{M}(c+it),\leavevmode\nobreak\ \leavevmode\nobreak\ (t\in\mathbb{R}).\]
Analogously (ii) follows, by taking Mellin transforms on \(s=\nu+it,\) with \(\nu<c.\)\(\Box\)
**Theorem 11**: _Let \(\alpha>0\) be fixed._
1. _Let_ \(f\in X_{c}\) _be such that_ \(J^{\alpha}_{0+,c}f\in W^{\alpha}_{X_{c}}.\) _Then_ \[\mbox{s-}\Theta^{\alpha}_{c}(J^{\alpha}_{0+,c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{a.e}\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
2. _Let_ \(f\in X_{[a,b]}\) _be such that_ \(J^{\alpha}_{0+,c}f\in W^{\alpha}_{X_{\nu}},\)__\(\alpha>0\) _and_ \(c\in]a,b],\nu<c.\) _Then_ \[\mbox{s-}\Theta^{\alpha}_{c}(J^{\alpha}_{0+,c}f)(x)=f(x),\leavevmode\nobreak\ \leavevmode\nobreak\ \mbox{a.e}\leavevmode\nobreak\ x\in\mathbb{R}^{+}.\]
**Proof**. The proof is essentially the same as for the previous theorem. \(\Box\)
In order to state an extension to the fractional setting of the equivalence theorem proved in [18] Theorem 10, we introduce the following subspace of \(W^{\alpha}_{X_{c}},\) for \(\alpha>0\) and \(c\in\mathbb{R},\)
\[\widetilde{W}^{\alpha}_{X_{c}}=\{f\in W^{\alpha}_{X_{c}}:\mbox{s-}\Theta^{ \alpha}_{c}f\in DomJ^{\alpha}_{0+,c}\leavevmode\nobreak\ \mbox{and}\leavevmode \nobreak\ J^{\alpha}_{0+,c}(\mbox{s-}\Theta^{\alpha}_{c}f)\in X_{c}\}.\]
**Theorem 12**: _Let \(f\in X_{c}\) and \(\alpha>0.\) The following four assertions are equivalent_
* \(f\in\widetilde{W}^{\alpha}_{X_{c}}\)_._
* _There is a function_ \(g_{1}\in X_{c}\cap DomJ^{\alpha}_{0+,c}\) _with_ \(J^{\alpha}_{0+,c}g_{1}\in X_{c}\) _such that_ \[\lim_{h\to 1}\bigg{\|}\frac{\Delta^{\alpha,c}_{h}f}{(h-1)^{\alpha}}-g_ {1}\bigg{\|}_{X_{c}}=0.\]
* _There is_ \(g_{2}\in X_{c}\cap DomJ^{\alpha}_{0+,c}\) _with_ \(J^{\alpha}_{0+,c}g_{2}\in X_{c}\) _such that_ \[(-it)^{\alpha}M[f](c+it)=M[g_{2}](c+it).\]
* _There is_ \(g_{3}\in X_{c}\cap DomJ^{\alpha}_{0+,c}\) _such that_ \(J^{\alpha}_{0+,c}g_{3}\in X_{c}\) _and_ \[f(x)=(J^{\alpha}_{0+,c}g_{3})(x),\ \ a.e.\ x\in\mathbb{R}^{+}.\]
_If one of the above assertions is satisfied, then \(D^{\alpha}_{0+,c}f(x)=\) s-\(\Theta^{\alpha}_{c}f(x)=g_{1}=g_{2}=g_{3}\) a.e. \(x\in\mathbb{R^{+}}.\)_
**Proof.** It is easy to see that (i) implies (ii) and (ii) implies (iii) by Theorem 1. We prove now (iii) implies (iv). Let \(g_{2}\in X_{c}\) be such that (iii) holds. Then, putting \(g_{3}=g_{2}\) we have, by Proposition 9,
\[M[J^{\alpha}_{0+,c}g_{2}](c+it)=(-it)^{-\alpha}M[g_{3}](c+it)=M[f](c+it).\]
Thus by (iii) we have immediately the assertion by the identity theorem for Mellin transforms. Finally we prove that (iv) implies (i). By (iv), we have in particular that \(J^{\alpha}_{0+,c}g_{3}\in X_{c,loc}.\) This implies that \(J^{\alpha}_{0+,c}g_{3}\in DomJ^{m-\alpha}_{0+,c},\) since \(0<m-\alpha<1.\) So, by the semigroup property (Theorem 2), \(g_{3}\in DomJ^{m}_{0+,c}.\) Therefore the assumptions of Theorem 3 part b) are satisfied and we have
\[(D^{\alpha}_{0+,c}f)(x)=(D^{\alpha}_{0+,c}(J^{\alpha}_{0+,c}g_{3}))(x)=g_{3}(x )\quad a.e.\quad x\in\mathbb{R^{+}}.\]
So the assertion follows. \(\Box\)
Analogous equivalence theorems hold for the spaces \(W^{\alpha}_{X_{[a,b]}}.\)
## 6 Some particular applications
In this section we discuss some basic examples.
1. The first example also discussed in [25], Lemma 3 and in Property 2.25 in [42] and used in the proof of Propositions 6 and 15, is the following: Consider the function \(g(x)=x^{b},\)\(b\in\mathbb{R}.\) Then for any \(c\in\mathbb{R}^{+}\) such that \(c+b>0\) we have \(g\in DomJ^{\alpha}_{0+,c},\) and \[(J^{\alpha}_{0+,c}g)(x)=(c+b)^{-\alpha}x^{b}.\] In particular, for \(b>0\) and \(c=0\) we get \((J^{\alpha}_{0+}g)(x)=b^{-\alpha}x^{b}.\) Analogously, we have also \[(D^{\alpha}_{0+,c}g)(x)=(c+b)^{\alpha}x^{b}.\] This also well enlightens the fundamental theorem in the fractional frame. It should be noted that in this case we cannot compute \(J^{\alpha}_{0+}1\) and \(D^{\alpha}_{0+}1\) since the function \(g(t)=1,\) corresponding to \(b=0,\) is not in the domain of \(J^{\alpha}_{0+}.\) However we can compute \(J^{\alpha}_{0+,c}1\) and \(D^{\alpha}_{0+,c}1,\) with \(c>0,\) obtaining easily \((J^{\alpha}_{0+,c}1)(x)=c^{-\alpha},\) and \((D^{\alpha}_{0+,c}1)(x)=c^{\alpha}.\) The last relation follows by \(\delta^{m}(x^{c}c^{\alpha-m})=c^{\alpha}x^{c},\) for \(m-1<\alpha<m,\) which is proved by an easy induction. Moreover we could also calculate \(J^{\alpha}_{a+}1\) and \(D^{\alpha}_{a+}1,\) with \(a>0\) in place of \(0\) in the definitions of the Hadamard-type integrals and derivatives (see [42]).
2. As a second example, let us consider the function \(g_{k}(x)=\log^{k}x,\) for \(k\in\mathbb{N}.\) For any \(\alpha>0\) and \(c>0\) we have \(g_{k}\in DomJ^{\alpha}_{0+,c}\) and, by a change of variables and the binomial theorem, we can write \[(J^{\alpha}_{0+,c}g_{k})(x) = \frac{1}{\Gamma(\alpha)}\int_{0}^{+\infty}e^{-cv}v^{\alpha-1}(\log x-v)^{k}dv = \sum_{j=0}^{k}(-1)^{k-j}\left(\begin{array}[]{c}k\\ j\end{array}\right)\frac{\log^{j}x}{\Gamma(\alpha)}\int_{0}^{+\infty}e^{-cv}v^{\alpha+k-j-1}dv.\] Putting \[B_{\alpha}(k,j):=\frac{\Gamma(\alpha+k-j)}{\Gamma(\alpha)}=\prod_{\nu=1}^{k-j}(\alpha+k-j-\nu),\] we finally obtain \[(J^{\alpha}_{0+,c}g_{k})(x)=\sum_{j=0}^{k}(-1)^{k-j}\left(\begin{array}[]{c}k\\ j\end{array}\right)\frac{B_{\alpha}(k,j)}{c^{\alpha+k-j}}\log^{j}x.\] For the fractional derivative, putting \(m=[\alpha]+1,\) we have \[(D^{\alpha}_{0+,c}g_{k})(x)=x^{-c}\sum_{j=0}^{k}(-1)^{k-j}\left(\begin{array}[]{c}k\\ j\end{array}\right)\frac{B_{m-\alpha}(k,j)}{c^{m-\alpha+k-j}}\delta^{m}(x^{c}\log^{j}x).\] In particular for \(k=1,\) we have \[(D^{\alpha}_{0+,c}g_{1})(x) = x^{-c}\bigg{[}\frac{-B_{m-\alpha}(1,0)}{c^{m-\alpha+1}}\delta^{m}x^{c}+\frac{B_{m-\alpha}(1,1)}{c^{m-\alpha}}\delta^{m}(x^{c}\log x)\bigg{]}\] \[= x^{-c}\bigg{[}-\frac{m-\alpha}{c^{m-\alpha+1}}\delta^{m}x^{c}+\frac{1}{c^{m-\alpha}}\delta^{m}(x^{c}\log x)\bigg{]}.\] Now, using an easy induction, \(\delta^{m}(x^{c}\log x)=c^{m}x^{c}\log x+mc^{m-1}x^{c},\) thus we finally obtain the formula: \[(D^{\alpha}_{0+,c}g_{1})(x)=\alpha c^{\alpha-1}+c^{\alpha}\log x.\] This again illustrates the fundamental theorem of fractional calculus in the Mellin frame. Indeed, it is easy to see that \[J^{\alpha}_{0+,c}(D^{\alpha}_{0+,c}g_{1})(x)=\log x.\] We can obviously obtain formulae for higher values of \(k.\) Note that the assumption \(c>0\) is essential. Indeed, as we remarked earlier, for \(c=0\) the function \(\log x\) does not belong to the domain of the operator \(J^{\alpha}_{0+}.\) In [42], Property 2.24, some related examples are treated concerning the Hadamard integrals \(J^{\alpha}_{a+}f,\) with \(a>0\) in place of \(0.\)
3. Let us consider the function \(g(x)=e^{bt},\)\(b\in\mathbb{R}.\) Then for any \(c>0\) and \(\alpha>0,\) we have \(g\in DomJ^{\alpha}_{0+,c}\) and using the representation formula proved in [25], Lemma 5(i), we have \[(J^{\alpha}_{0+,c}g)(x)=\sum_{k=0}^{\infty}(c+k)^{-\alpha}\frac{b^{k}}{k!}x^{k },\quad x\in\mathbb{R}^{+}\] and the corresponding formula for the derivative, given in (Lemma 5(ii), [25]). \[(D^{\alpha}_{0+,c}g)(x)=\sum_{k=0}^{\infty}(c+k)^{\alpha}\frac{b^{k}}{k!}x^{k} ,\quad x\in\mathbb{R}^{+}.\] As already remarked, assumption \(c>0\) is essential. Indeed for \(c=0\) Propositions 6 and 15 imply that we have similar representations for \(J^{\alpha}_{0+}\) and \(D^{\alpha}_{0+},\) only if the analytic function \(f\) satisfies \(f(0)=0.\) Alternative representations are given by Propositions 16, 17, in terms of Stirling functions. We have, for \(\alpha>0,\) \[(D^{\alpha}_{0+,c}e^{bt})(x)=e^{bx}\sum_{k=0}^{\infty}S_{c}(\alpha,k)x^{k}b^{k }\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode \nobreak\ (x>0)\] and \[(J^{\alpha}_{0+,c}e^{bt})(x)=e^{bx}\sum_{k=0}^{\infty}S_{c}(-\alpha,k)x^{k}b^{ k}\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode \nobreak\ (x>0).\]
4. Let us consider the ”sinc” function which is analytic over the entire real line. The Taylor series is given by: \[\mbox{sinc}(x)=\frac{\sin\pi x}{\pi x}=\sum_{k=0}^{\infty}(-1)^{k}\frac{\pi^{2 k}}{(2k+1)!}x^{2k}.\] Moreover it is easy to see that sinc \(\in X_{c,loc}\) for \(c>0,\) while sinc \(\not\in X_{0,loc}.\) Using Lemma 5 in [25] we have immediately \[(J^{\alpha}_{0+,c}\mbox{sinc})(x)=\sum_{k=0}^{\infty}(-1)^{k}(c+2k)^{-\alpha} \frac{\pi^{2k}}{(2k+1)!}x^{2k}\] and \[(D^{\alpha}_{0+,c}\mbox{sinc})(x)=\sum_{k=0}^{\infty}(-1)^{k}(c+2k)^{\alpha} \frac{\pi^{2k}}{(2k+1)!}x^{2k}.\] Another representation in term of Stirling functions of second type is a consequence of Proposition 13. A formula for the (classical) derivatives of sinc can be found in [29]. For a given \(s\in I\!\!N,\) differentiating by series, we have \[(\mbox{sinc}\leavevmode\nobreak\ x)^{(s)} = \sum_{k=s}^{\infty}(-1)^{k}\frac{\pi^{2k}}{(2k+1)!}\frac{d^{s}}{ dx^{s}}x^{2k}=\sum_{k=s}^{\infty}(-1)^{k}\frac{\pi^{2k}}{(2k+1)!}A_{s,k}x^{2k-s}\] \[= \sum_{k=0}^{\infty}(-1)^{k+s}\frac{\pi^{2k+2s}}{(2k+2s+1)!}A_{s,k +s}x^{2k+s},\] where \[A_{s,k}=\prod_{\nu=0}^{s-1}(2k-\nu).\] Thus, using again Lemma 5 in [25], we have \[(J^{\alpha}_{0+,c}(\mbox{sinc}\leavevmode\nobreak\ t)^{(s)})(x)=\sum_{k=0}^{ \infty}(-1)^{k+s}(c+2k+s)^{-\alpha}\frac{\pi^{2k+2s}}{(2k+2s+1)!}A_{s,k+s}x^{2 k+s}\] and \[(D^{\alpha}_{0+,c}(\mbox{sinc}\leavevmode\nobreak\ t)^{(s)})(x)=\sum_{k=0}^{ \infty}(-1)^{k+s}(c+2k+s)^{\alpha}\frac{\pi^{2k+2s}}{(2k+2s+1)!}A_{s,k+s}x^{2k +s}.\] Note that for every odd \(s,\) the above formula is valid also for \(c=0,\) since in this instance \((\mbox{sinc}\leavevmode\nobreak\ x)^{(s)}\in X_{0,loc}.\)
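As a small symbolic illustration of Example 2 above (a sketch for \(0<\alpha<1,\) so that \(m=1\); the concrete value of \(\alpha\) and the variable names are our own choices), the closed form \((D^{\alpha}_{0+,c}\log x)(x)=\alpha c^{\alpha-1}+c^{\alpha}\log x\) can be recovered from \(D^{\alpha}_{0+,c}=x^{-c}\delta\,x^{c}J^{1-\alpha}_{0+,c}\) together with the formula for \(J^{1-\alpha}_{0+,c}\log x\):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
a = sp.Rational(2, 5)                                   # a concrete alpha in ]0,1[, so m = 1
J_log = (sp.log(x) - (1 - a)/c) / c**(1 - a)            # J^{1-alpha}_{0+,c} log x  (Example 2 with k = 1)
D_log = sp.simplify(x**(-c) * x * sp.diff(x**c * J_log, x))   # x^{-c} delta ( x^c J^{1-alpha}_{0+,c} log )
print(sp.simplify(D_log - (a*c**(a - 1) + c**a*sp.log(x))))   # expected output: 0
```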
## 7 Applications to partial differential equations
In this section we apply our theory to certain fractional differential equations. We notice here that the use of Mellin analysis in the theory of differential equations was considered in [4], dealing with Cauchy problems for ordinary differential equations involving Mellin derivatives of integral order. In [35], Mellin analysis was applied to numerical solutions of Mellin integral equations. In the fractional case, differential equations have been treated using various types of fractional derivatives, e.g. Riemann-Liouville, Caputo, Hadamard, etc. (see [42]). The use of integral transforms is a very useful and widely used method for certain Cauchy or boundary value problems. However, the use of Mellin transforms in fractional differential equations involving Hadamard derivatives is so far not common.
Here we will examine certain boundary value problems related to an evolution equation and to a diffusion problem, using the Mellin transform approach together with Hadamard derivatives. In the first example, the fractional evolution equation originates from a Volterra integral equation with a special kernel. The second example is a fractional diffusion equation.
### An integro-differential equation
Let \(\alpha\in]0,1[\) be fixed, and let
\[K_{\alpha}(x,u):=\frac{1}{\Gamma(1-\alpha)}\bigg{(}\log\frac{x}{u}\bigg{)}^{- \alpha}\chi_{]0,x[}(u),\quad x>0.\]
Let us consider the following problem: find a function \(w:\mathbb{R}^{+}\times\mathbb{R}^{+}\rightarrow\mathbb{C}\) such that \(w(x,0)=f(x),\) for a fixed boundary data \(f:\mathbb{R}^{+}\rightarrow\mathbb{C},\) and
\[Ax\frac{\partial}{\partial x}\int_{0}^{\infty}K_{\alpha}(x,u)w(u,y)\frac{du}{u}+B\frac{\partial}{\partial y}w(x,y)=0,\] (14)
\(A,B\) being two positive constants.
Now, equation (14) can be rewritten as a fractional partial differential evolution equation in the Hadamard sense, as
\[A(D^{\alpha}_{0+}w(\cdot,y))(x)+B\frac{\partial}{\partial y}w(x,y)=0, \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x,y\in\mathbb{ R}^{+}).\]
Without loss of generality we can assume \(A=B=1,\) thus
\[(D^{\alpha}_{0+}w(\cdot,y))(x)=-\frac{\partial}{\partial y}w(x,y) ,\leavevmode\nobreak\ \leavevmode\nobreak\ (x,y\in\mathbb{R}^{+})\] (15)
with initial data \(w(x,0)=f(x),\ x>0.\) We look for a function \(w:\mathbb{R}^{+}\times\mathbb{R}^{+}\rightarrow\mathbb{C}\) satisfying the following properties:
**1)**: \(w(\cdot,y)\in{\mathcal{X}}^{\alpha}_{[a,0]}\) for every \(y>0\) and for a fixed \(a<0\)
**2)**: there is a function \(K\in X_{\nu},\)\(\nu\in[a,0[,\) such that for every \(x,y>0\)
\[\bigg{|}\frac{\partial}{\partial y}w(x,y)\bigg{|}\leq K(x)\]
**3)**: for a fixed \(f\in X_{\nu},\) we have \(\lim_{y\to 0^{+}}\|w(\cdot,y)-f(\cdot)\|_{X_{\nu}}=0.\)
Assuming that such a function exists we apply the Mellin transform with respect to the variable \(x\) on the line \(\nu+it\) to both sides of (15), obtaining
\[[D^{\alpha}_{0+}w(\cdot,y)]^{\wedge}_{M}(\nu+it)=-\bigg{[}\frac{\partial}{ \partial y}w(\cdot,y)\bigg{]}^{\wedge}_{M}(\nu+it).\]
Using Theorems 1 and 5 we have
\[[D^{\alpha}_{0+}w(\cdot,y)]^{\wedge}_{M}(\nu+it)=(-\nu-it)^{\alpha}[w(\cdot,y) ]^{\wedge}_{M}(\nu+it).\]
Moreover by property 2),
\[\bigg{[}\frac{\partial}{\partial y}w(\cdot,y)\bigg{]}^{\wedge}_{M}(\nu+it)= \int_{0}^{+\infty}x^{\nu+it-1}\frac{\partial}{\partial y}w(x,y)dx=\frac{ \partial}{\partial y}[w(\cdot,y)]^{\wedge}_{M}(\nu+it),\]
thus equation (15) is transformed into a first order ordinary differential equation
\[(-\nu-it)^{\alpha}[w(\cdot,y)]^{\wedge}_{M}(\nu+it)=-\frac{\partial}{\partial y }[w(\cdot,y)]^{\wedge}_{M}(\nu+it)\]
which has the solution
\[[w(\cdot,y)]^{\wedge}_{M}(\nu+it)=A(\nu+it)e^{-(-\nu-it)^{\alpha}y}\]
where \(A(\nu+it)\) is independent of \(y.\) The determination of \(A(\nu+it)\) follows from condition 3); indeed we have that \([w(\cdot,y)]^{\wedge}_{M}(\nu+it)\rightarrow[f]^{\wedge}_{M}(\nu+it)\) uniformly for \(y\to 0^{+}\) and for \(t\in\mathbb{R},\) and so \(A(\nu+it)=[f]^{\wedge}_{M}(\nu+it),\) obtaining
\[[w(\cdot,y)]^{\wedge}_{M}(\nu+it)=[f]^{\wedge}_{M}(\nu+it)e^{-(-\nu-it)^{ \alpha}y}.\]
Now putting \(s=-\nu-it,\) we have \(\mathrm{Re}\,s=-\nu>0\) and so, since \(y>0,\) the inverse Mellin transform of \(e^{-ys^{\alpha}}\) exists and is given by (see Theorem 6 in [18])
\[G(x,y):=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-(-\nu-it)^{ \alpha}y}x^{-\nu-it}dt.\] (16)
Thus if the solution of (15) exists, by the Mellin-Parseval formula (see [18]), it has the form
\[w(x,y)=\int_{0}^{+\infty}f(v)G(\frac{x}{v},y)\frac{dv}{v}, \leavevmode\nobreak\ \leavevmode\nobreak\ x,y>0.\]
In order to verify that the function \(w(x,y)\) is actually a solution of the problem we make a direct substitution. We have, by differentiating under the integral
\[-\frac{\partial}{\partial y}w(x,y)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}(-\nu-it)^{\alpha}x^{-\nu-it}e^{-(-\nu-it)^{\alpha}y}[f]^{\wedge}_{M}(\nu+it)dt.\]
Now, let us consider
\[(D^{\alpha}_{0+}w(\cdot,y))(x)=\delta(J^{1-\alpha}_{0+}w(\cdot,y))(x).\]
We have
\[(D^{\alpha}_{0+}w(\cdot,y))(x)=(x\frac{\partial}{\partial x}) \bigg{[}\frac{1}{\Gamma(1-\alpha)}\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^ {-\alpha}\bigg{(}\int_{0}^{\infty}f(v)G(\frac{u}{v},y)\frac{dv}{v}\bigg{)} \frac{du}{u}\bigg{]}\]
\[= (x\frac{\partial}{\partial x})\bigg{[}\frac{1}{\Gamma(1-\alpha)} \int_{1}^{+\infty}(\log z)^{-\alpha}\bigg{(}\int_{0}^{\infty}f(v)G(\frac{x}{zv },y)\frac{dv}{v}\bigg{)}\frac{dz}{z}\bigg{]}\]
\[= \frac{x}{\Gamma(1-\alpha)}\int_{1}^{+\infty}(\log z)^{-\alpha} \bigg{(}\int_{0}^{\infty}f(v)\frac{\partial}{\partial x}G(\frac{x}{zv},y)\frac {dv}{v}\bigg{)}\frac{dz}{z}.\]
Since
\[\frac{\partial}{\partial x}G(x,y)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-(- \nu-it)^{\alpha}y}(-\nu-it)x^{-\nu-it-1}dt,\]
putting \(s=-\nu-it,\) we obtain
\[(D^{\alpha}_{0+}w(\cdot,y))(x)\]
\[= \frac{x}{\Gamma(1-\alpha)}\int_{1}^{+\infty}(\log z)^{-\alpha} \bigg{(}\int_{0}^{\infty}f(v)\frac{1}{zv}\bigg{(}\frac{1}{2\pi}\int_{-\infty}^ {+\infty}e^{-s^{\alpha}y}s\bigg{(}\frac{x}{zv}\bigg{)}^{s-1}dt\bigg{)}\frac{dv }{v}\bigg{)}\frac{dz}{z}\]
\[= \frac{1}{2\pi}\frac{1}{\Gamma(1-\alpha)}\int_{1}^{+\infty}(\log z )^{-\alpha}\bigg{(}\int_{-\infty}^{+\infty}e^{-s^{\alpha}y}s\bigg{(}\frac{x}{z }\bigg{)}^{s}[f]^{\wedge}_{M}(-s)dt\bigg{)}\frac{dz}{z}\]
\[= \frac{1}{2\pi}\frac{1}{\Gamma(1-\alpha)}\int_{-\infty}^{+\infty}[ f]^{\wedge}_{M}(-s)e^{-s^{\alpha}y}s\bigg{(}\int_{0}^{x}\bigg{(}\log\frac{x}{u }\bigg{)}^{-\alpha}u^{s}\frac{du}{u}\bigg{)}dt.\]
Since Example 1 of Section 6 holds for \(c=0\) and complex \(b\) with Re \(b>0,\) we have
\[\frac{1}{\Gamma(1-\alpha)}\int_{0}^{x}\bigg{(}\log\frac{x}{u}\bigg{)}^{-\alpha }u^{s}\frac{du}{u}=(J^{1-\alpha}_{0+}u^{s})(x)=s^{\alpha-1}x^{s},\]
and so we have
\[(D^{\alpha}_{0+}w(\cdot,y))(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}s^{\alpha }x^{s}e^{-s^{\alpha}y}[f]^{\wedge}_{M}(\nu+it)dt,\]
i.e. the assertion. So we have proved the following
**Theorem 13**: _Under the assumptions imposed, equation (15) with the initial data \(f,\) has the unique solution given by_
\[w(x,y)=\int_{0}^{+\infty}f(v)G(\frac{x}{v},y)\frac{dv}{v},\leavevmode\nobreak \ \leavevmode\nobreak\ x,y>0,\]
_where the function \(G(x,y)\) is defined in (16)._
Note that for \(\alpha=1/2,\) we have a closed form for the function \(G(x,y).\) Indeed, using formula 3.7 page 174 in [51], we obtain
\[G(x,y)=\frac{y}{2\sqrt{\pi}}(-\log x)^{-3/2}\mbox{exp}\bigg{(}\frac{y^{2}}{4 \log x}\bigg{)}\chi_{]0,1[}(x)\]
and the solution is then given by
\[w(x,y)=\frac{y}{2\sqrt{\pi}}\int_{0}^{1}f(\frac{x}{v})(-\log v)^{-3/2}\mbox{ exp}\bigg{(}\frac{y^{2}}{4\log v}\bigg{)}\frac{dv}{v}.\]
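The closed-form kernel makes the \(\alpha=1/2\) solution easy to evaluate numerically. Below is a minimal quadrature sketch for a hypothetical boundary datum \(f\) (any bounded choice will do); since the kernel is normalized, \(w(x,y)\) should approach \(f(x)\) as \(y\to 0^{+},\) which serves as a simple consistency check.

```python
import numpy as np
from scipy.integrate import quad

def f(v):
    # Hypothetical boundary datum; any bounded function on (0, infinity) can be used here.
    return np.exp(-np.log(v)**2)

def w(x, y):
    # w(x, y) for alpha = 1/2, evaluated from the closed-form kernel by quadrature.
    def integrand(u):
        if u <= 0.0 or u >= 1.0:
            return 0.0
        logu = np.log(u)  # negative on (0, 1)
        return f(x / u) * (-logu)**(-1.5) * np.exp(y**2 / (4.0 * logu)) / u
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return y / (2.0 * np.sqrt(np.pi)) * val

# The kernel acts as an approximate identity, so w(x, y) should tend to f(x) as y -> 0+.
for y in (1.0, 0.5, 0.2):
    print(y, w(2.0, y), f(2.0))
```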
Equation (15) was also discussed in [42], but using fractional Caputo derivatives. Our treatment, however, contains complete proofs.
### A diffusion equation
For \(\alpha>0\) let us consider the fractional diffusion equation
\[(D^{\alpha}_{0+}w(\cdot,y))(x)=\frac{\partial^{2}}{\partial y^{2} }w(x,y),\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x,y\in \mathbb{R}^{+})\] (17)
with the initial condition
\[\lim_{y\to 0^{+}}\|w(\cdot,y)-f(\cdot)\|_{X_{0}}=0,\]
for a fixed \(f\in X_{0}.\)
We look for a function \(w:\mathbb{R}^{+}\times\mathbb{R}^{+}\rightarrow\mathbb{C}\) satisfying the following assumptions:
**1)**: \(w(\cdot,y)\in{\mathcal{X}}^{\alpha}_{0}\) for every \(y>0,\) and there exists \(N>0\) such that \(\|w(\cdot,y)\|_{X_{0}}\leq N,\) for every \(y\in\mathbb{R}^{+}.\)
**2)**: there are functions \(K_{1},K_{2}\in X_{0},\) such that for every \(x,y>0\)
\[\bigg{|}\frac{\partial}{\partial y}w(x,y)\bigg{|}\leq K_{1}(x),\leavevmode \nobreak\ \bigg{|}\frac{\partial^{2}}{\partial y^{2}}w(x,y)\bigg{|}\leq K_{2}(x)\]
**3)**: for a fixed \(f\in X_{0},\) we have \(\lim_{y\to 0^{+}}\|w(\cdot,y)-f(\cdot)\|_{X_{0}}=0.\)
Using the same approach as in the previous example, taking the Mellin transforms of both sides of the equation (17), we obtain
\[[D^{\alpha}_{0+}w(\cdot,y)]^{\wedge}_{M}(it)=\bigg{[}\frac{\partial^{2}}{\partial y^{2}}w(\cdot,y)\bigg{]}^{\wedge}_{M}(it).\]
Using Theorems 1 and 5 we have
\[[D^{\alpha}_{0+}w(\cdot,y)]^{\wedge}_{M}(it)=(-it)^{\alpha}[w(\cdot,y)]^{\wedge}_{M}(it).\]
Moreover by property 2),
\[\bigg{[}\frac{\partial^{2}}{\partial y^{2}}w(\cdot,y)\bigg{]}^{\wedge}_{M}(it) =\frac{\partial^{2}}{\partial y^{2}}[w(\cdot,y)]^{\wedge}_{M}(it),\]
thus equation (17) is transformed into the second order linear ordinary differential equation
\[(-it)^{\alpha}z_{t}(y)=z^{\prime\prime}_{t}(y),\quad y>0,\] (18)
with respect to the function
\[z_{t}(y):=[w(\cdot,y)]^{\wedge}_{M}(it),\leavevmode\nobreak\ \leavevmode \nobreak\ \leavevmode\nobreak\ t\in\mathbb{R}.\]
If \(t=0\) the solution is the linear function \(z_{0}(y)=A(0)+B(0)y,\) while for \(t\neq 0,\) the characteristic equation associated with (18)
\[\lambda^{2}=\exp(\alpha\log(-it)),\]
has two complex solutions
\[\lambda_{1}:=|t|^{\alpha/2}\bigg{(}\cos\frac{\alpha\pi}{4}+i\sin\frac{\alpha \pi}{4}(-\mathop{\mathrm{sgn}}t)\bigg{)},\]
\[\lambda_{2}:=-|t|^{\alpha/2}\bigg{(}\cos\frac{\alpha\pi}{4}+i\sin\frac{\alpha \pi}{4}(-\mathop{\mathrm{sgn}}t)\bigg{)}.\]
Thus, for \(t\neq 0,\) we obtain the general solution
\[z_{t}(y)=A(t)e^{-|t|^{\alpha/2}(\cos(\alpha\pi/4)+i(-\mathop{\mathrm{sgn}}t) \sin(\alpha\pi/4))y}+B(t)e^{|t|^{\alpha/2}(\cos(\alpha\pi/4)+i(-\mathop{ \mathrm{sgn}}t)\sin(\alpha\pi/4))y}.\]
Now, let \(\alpha\) be such that \(\cos(\alpha\pi/4)>0.\) By the boundary condition 3), we have also that \(z_{t}(y)\) is uniformly convergent to \([f]^{\wedge}_{M}\) as \(y\to 0^{+}.\) Moreover, by assumption 1), there exists a constant \(N>0\) such that \(|z_{t}(y)|\leq N,\) for every \(t\in\mathbb{R}.\) This means that we must have \(B(t)=0\) for every \(t\in\mathbb{R},\) thus
\[z_{t}(y)=[w(\cdot,y)]^{\wedge}_{M}(it)=[f]^{\wedge}_{M}(it)e^{-|t|^{\alpha/2}( \cos(\alpha\pi/4)+i(-\mathop{\mathrm{sgn}}t)\sin(\alpha\pi/4))y}.\]
Now, the function
\[e^{-|t|^{\alpha/2}(\cos(\alpha\pi/4)+i(-\mathop{\mathrm{sgn}}t)\sin(\alpha\pi/ 4))y}\]
is summable as a function of \(t\in\mathbb{R},\) and its inverse Mellin transform is given by
\[G(x,y):=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-|t|^{\alpha/2}(\cos(\alpha\pi /4)+i(-\mathop{\mathrm{sgn}}t)\sin(\alpha\pi/4))y}x^{-it}dt.\]
Then if a solution exists it has the form
\[w(x,y)=\int_{0}^{\infty}f(u)G(\frac{x}{u},y)\frac{du}{u},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ x,y>0.\]
Analogously, if \(\alpha\) is such that \(\cos(\alpha\pi/4)<0,\) then we have \(A(t)=0\) for every \(t\in\mathbb{R},\) and the corresponding function \(G(x,y)\) takes the form
\[G(x,y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-|t|^{\alpha/2}(|\cos(\alpha\pi /4)|+i(-\mathop{\mathrm{sgn}}t)\sin(\alpha\pi/4))y}x^{-it}dt.\]
That the above function is really a solution can be proved, as before, by a direct substitution into the differential equation.
The function \(G(x,y)\) can be written in a simpler form. Indeed, using Euler’s formula and putting \(a:=|\cos(\alpha\pi/4)|,\) \(b:=\sin(\alpha\pi/4),\) we can write:
\[G(x,y)=\frac{1}{2\pi}\int_{0}^{\infty}e^{-t^{\alpha/2}(a-ib)y}(\cos(t\log x)-i\sin(t\log x))dt\]
\[+\frac{1}{2\pi}\int_{0}^{\infty}e^{-t^{\alpha/2}(a+ib)y}(\cos(t\log x)+i\sin(t\log x))dt\]
\[=\frac{1}{\pi}\int_{0}^{\infty}e^{-t^{\alpha/2}ay}[\cos(t\log x)\cos(t^{\alpha/2}by)+\sin(t\log x)\sin(t^{\alpha/2}by)]dt\]
\[=\frac{1}{\pi}\int_{0}^{\infty}e^{-t^{\alpha/2}ay}\cos(t\log x-t^{\alpha/2}by)dt.\]
For \(\alpha=1,\) using Proposition 13, we obtain the (non-fractional) equation
\[x\frac{\partial w}{\partial x}(x,y)=\frac{\partial^{2}w}{\partial y^{2}}(x,y),\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ (x,y\in\mathbb{R}^{+})\]
and using our approach the corresponding problem has a unique solution of the form
\[w(x,y)=\int_{0}^{\infty}f(u)G(\frac{x}{u},y)\frac{du}{u},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ x,y>0,\]
where
\[G(x,y)=\frac{1}{\pi}\int_{0}^{\infty}\exp(-\frac{\sqrt{2t}y}{2})\cos(\frac{ \sqrt{2t}y}{2}-t\log x)dt.\]
This integral has a closed form. Indeed, by an elementary substitution, we can write
\[I:=\int_{0}^{\infty}\exp(-\frac{\sqrt{2t}y}{2})\cos(\frac{\sqrt{2t}y}{2}-t\log x)dt=2\int_{0}^{\infty}u\exp(-\frac{\sqrt{2}yu}{2})\cos(\frac{\sqrt{2}yu}{2}-u^{2}\log x)du.\]
Now, the above integral, depending on the sign of \(\log x,\) can be reduced to the integrals (\(p>0\)) (see [38], page 499)
\[\int_{0}^{\infty}ve^{-pv}\cos(2v^{2}-pv)dv=\frac{p\sqrt{\pi}}{8}\exp(-p^{2}/4),\]
if \(\log x>0\) and
\[\int_{0}^{\infty}ve^{-pv}\cos(2v^{2}+pv)dv=0,\]
if \(\log x\leq 0.\) Indeed, if we put \(u=\sqrt{2/\log x}v\) in the first case, and \(u=\sqrt{2/|\log x|}v\) in the second case, we get easily
\[I=\frac{\sqrt{\pi}}{\log x\sqrt{2\log x}}\exp\bigg{(}-\frac{y}{2\sqrt{2}\log x }\bigg{)},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ x>1,\]
while \(I=0\) for \(0<x<1.\) Therefore,
\[G(x,y)=\left\{\begin{array}[]{ll}\sqrt{\frac{\pi}{2 }}\frac{1}{(\log x)^{3/2}}\exp\bigg{(}-\frac{y}{2 \sqrt{2}\log x}\bigg{)},&\leavevmode\nobreak\ x>1,\leavevmode\nobreak\ y>0\\ \\ 0,&\leavevmode\nobreak\ 0<x\leq 1,\leavevmode\nobreak\ y>0.\end{array}\right.\]
For \(\alpha=1/2,\) the equation becomes
\[(D^{1/2}_{0+}w(\cdot,y))(x)=\frac{\partial^{2}}{\partial y^{2}}w(x,y), \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode \nobreak\ (x,y\in\mathbb{R}^{+})\]
and putting \(a:=\cos(\pi/8)=\sqrt{2+\sqrt{2}}/2,\leavevmode\nobreak\ b:=\sin(\pi/8)=\sqrt{2 -\sqrt{2}}/2,\) we obtain the following representation of the function \(G(x,y):\)
\[G(x,y)=\frac{1}{\pi}\int_{0}^{\infty}\exp(-\sqrt[4]{t}ay)\cos(t\log x-\sqrt[4] {t}by)dt.\]
For \(\alpha=4,\) using Proposition 13, our equation has the form
\[\sum_{k=0}^{4}S_{0}(4,k)x^{k}\bigg{(}\frac{\partial}{\partial x} \bigg{)}^{(k)}w(x,y)=\frac{\partial^{2}w}{\partial y^{2}}(x,y),\leavevmode \nobreak\ \leavevmode\nobreak\ (x,y\in\mathbb{R}^{+})\] (19)
i.e.
\[x^{4}\frac{\partial^{4}w}{\partial x^{4}}(x,y)+6x^{3}\frac{\partial^{3}w}{\partial x^{3}}(x,y)+7x^{2}\frac{\partial^{2}w}{\partial x^{2}}(x,y)+x\frac{\partial w}{\partial x}(x,y)=\frac{\partial^{2}w}{\partial y^{2}}(x,y),\leavevmode\nobreak\ \leavevmode\nobreak\ (x,y\in\mathbb{R}^{+})\]
In this instance we have \(\cos(\alpha\pi/4)=-1,\) and so, the unique solution of our problem for equation (19) has the form
\[w(x,y)=\int_{0}^{\infty}f(u)G(\frac{x}{u},y)\frac{du}{u},\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ x,y>0,\]
where
\[G(x,y)=\frac{1}{\pi}\int_{0}^{\infty}e^{-t^{2}y}\cos(t\log x)dt.\]
This integral can be reduced by an elementary substitution, to the classical integral
\[g(v)=\int_{0}^{\infty}e^{-t^{2}}\cos(tv)dt=\frac{\sqrt{\pi}}{2}\exp(-v^{2}/4),\]
thus obtaining
\[G(x,y)=\frac{1}{2\sqrt{\pi y}}\exp(-\log^{2}x/4y).\]
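As a quick numerical cross-check of the prefactor (the sample points below are arbitrary), the cosine-integral representation of \(G(x,y)\) for \(\alpha=4\) can be compared with the Gaussian closed form:

```python
import numpy as np
from scipy.integrate import quad

def G_integral(x, y):
    # G(x, y) for alpha = 4, evaluated directly from the cosine-integral representation.
    val, _ = quad(lambda t: np.exp(-t**2 * y) * np.cos(t * np.log(x)), 0.0, np.inf)
    return val / np.pi

def G_closed(x, y):
    # Closed form obtained from the classical Gaussian integral g(v).
    return np.exp(-np.log(x)**2 / (4.0 * y)) / (2.0 * np.sqrt(np.pi * y))

for x, y in [(0.5, 0.3), (2.0, 1.0), (5.0, 0.2)]:
    print(x, y, G_integral(x, y), G_closed(x, y))
```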
Another example in the fractional case is \(\alpha=5/2.\) In this case we have \(a=|\cos((5/8)\pi)|=\sqrt{2-\sqrt{2}}/2\) and \(b=\sin((5/8)\pi)=\sqrt{2+\sqrt{2}}/2.\) The corresponding function \(G(x,y)\) is given by
\[G(x,y)=\frac{1}{\pi}\int_{0}^{\infty}\exp(-t^{5/4}\sqrt{2-\sqrt{2}}\,y/2)\cos(t\log x-t^{5/4}\sqrt{2+\sqrt{2}}\,y/2)dt.\]
The above approach works for every value of \(\alpha\) except those for which \(\cos(\alpha\pi/4)=0.\) For \(\alpha=2,\) the resulting wave equation in the Mellin setting reads
\[x^{2}\frac{\partial^{2}}{\partial x^{2}}w(x,y)+x\frac{\partial}{\partial x}w(x ,y)=\frac{\partial^{2}}{\partial y^{2}}w(x,y),\leavevmode\nobreak\ \leavevmode \nobreak\ (x,y\in\mathbb{R}^{+})\]
This equation, however, is treated in detail in [18] with different boundary conditions. Experts in the evaluation of integrals could surely obtain more elegant representations of the \(G(x,y)\) functions.
## 8 A short biography of R.G. Mamedov and some historical notes
Rashid Gamid-oglu Mamedov (changed to Mammadov in 1991), born into a peasant family on December 27, 1931, in the village Dashsalakhly, Azerbaijan SSR, lost his father at the age of 6 and grew up with his mother and three sisters.
After spending the school years 1938-48 in the middle school of his home village, he was admitted to the Azerbaijan Pedagogical Institute (API) in Baku. In 1952, he graduated from its Mathematics Department with a so-called red diploma-honours (i.e. diploma cum laude). Immediately he was accepted for post-graduate study at the Chair of Mathematical Analysis of API, and defended his PhD thesis (”Kandidatskaya”) entitled ”Some questions of approximation by entire functions and polynomials” in 1955. This dissertation was one basis to the monograph ”Extremal Properties of Entire Functions” published in 1962 by his scientific supervisor I.I. Ibragimov. During the years 1953-1960, R.G. Mamedov was affiliated with the Chair of Mathematical Analysis at API in various positions, first as assistant (1953-1956) and senior lecturer (1956-57), later as docent (assistant professor, 1957-1960)
In 1960-1963, R.G. Mamedov held a position as senior researcher at the Institute of Mathematics and Mechanics of the Azerbaijan Academy of Science. Free of teaching duties, he published in a very short period of time his fundamental contributions to the theory of approximation by linear operators which made him known both in the former Soviet Union and abroad. These deep results comprised his ”Doktorskaya” (Habilitation degree) ”Some questions of approximation of functions by linear operators” submitted to Leningrad State Pedagogical A.I.Herzen-Institute in 1964. At the age of 33 years, R.G. Mamedov was awarded the Dr. of Phys. and Math. degree and was appointed as full professor to the Chair of Higher Mathematics at Azerbaijan Polytechnic Institute in Baku. Here he started his remarkable career as university teacher and educator, supervising as many as 23 PhD theses over the years, two of his students obtained the Dr. of Phys. and Math. degree themselves. In 1966, he gave a contributed talk at the ICM Congress in Moscow.
In 1967, he published his first monograph ”Approximation of Functions by Linear Operators”, recognised by the international mathematical community, although it was written in Azerbaijani. His son Aykhan reported that his father possessed a copy of [28] and recalls him speaking about the authors. In 1969, R.G. Mamedov, was appointed head of the Chair of Higher Mathematics at Azerbaijan State Oil Academy in Baku, a position which he held for 26 years. His cycle of investigations on properties of integral transforms of Mellin-type led to the publication of several research monographs, in particular ”On Approximation of Conjugate Functions by Conjugate M-Singular Integrals” (1977), ”On Approximation of Functions by Singular Integrals of Mellin Type” (1979), and ”Mellin Transform and Theory of Approximation” (1991). With equal enthusiasm, he created textbooks for use at the Azerbaijan institutions of higher education that are still of widespread use. His three-volume ”Course of Higher Mathematics” (1978, 1981, 1984) has several editions. R.G. Mamedov is also the author of 20 booklets and articles popularising mathematics among the general public and raising the standards of mathematics education in his home country.
R.G. Mamedov was not only an outstanding scientist and educator; he also impressed everybody who met him with his outgoing character and friendly personality, and by being very accessible and supportive in personal and scientific matters. He married in 1960, and two of his three sons are mathematicians themselves. R.G. Mamedov died on May 2, 2000, at the age of 68 after an infarct. He is survived by his spouse Flora Mamedova and three sons; there are now seven grandchildren, five boys and two girls, four of them born after his death.
<figure><img src="content_image/1406.6202/x1.png"><figcaption>Figure 1: A photo of Prof. Rashid Mamedov together with his spouse FloraMamedova, who now takes her husband’s role in keeping alive Azerbaijanicustoms among her grandchildren. It was taken in the year of his death, 2000.</figcaption></figure>
Work in the broad area of approximation theory at the University of Perugia was initiated by its former visionary departmental director, C. Vinti (1926-1997), a master in the Calculus of Variations (see [58]). It was decisively influenced by the work of J. Musielak, a chief representative of the Orlicz analysis school at Poznan, its first joint work being in the direction of (nonlinear) integral operators in the setting of modular spaces ([13]), as well as by the work at Aachen, together with P. L. Butzer and R. L. Stens. During recent research at Perugia on asymptotic expansions of certain Mellin-type convolution operators and convergence properties in the spaces of functions of bounded variation (see [2, 3], [6, 7, 8, 9, 10, 11, 12]), a MathSciNet search led to the treatise of R.G. Mamedov under discussion. Since it was nowhere to be found, it was finally A. Gadjiev, Academy of Sciences of Azerbaijan, who within a few weeks kindly sent a copy, as a present. It has served us well not only in our local work at Perugia but also in the present joint investigation.
As to the work at Aachen, although we knew of the existence of the great school of approximation theory at Leningrad since 1949 (through G.G.Lorentz), it was the Second All-Union Conference on Constructive Theory of Functions, held at Baku on Oct.8-13, 1962 , that drew our attention to approximation theory at Baku. That was a couple of years after its proceedings (with 638 pp.) appeared in 1965. (The Aachen group organised the first conference on approximation in the West (August 4-10,1963 ;ISNM, Vol. 5, Birkhaeuser, Basel, 1964)).
Aachen’s former student E.L. Stark (1940-1984), who in view of his fluent knowledge of Russian kept well aware of approximation-theoretical studies at Leningrad, Moscow and Kiev, was surprised when he discovered the Baku proceedings. In fact, Russian approximation theory was a model for us in Aachen, especially in its earlier years, and Stark’s great input benefited us all. We exchanged letters with R.G. Mamedov and in 1974 invited him to participate in our Oberwolfach conference on Linear Operators and Approximation II, held March 30 - April 6. But he was unable to attend at the last moment (likewise in the case of S.M. Nikolskii, S.A. Teljakovski and B.S. Mitijagin), as is recorded in its Proceedings (ISNM, Vol. 25, Birkhaeuser, Basel, 1974). In our volume with R.J. Nessel, ”Fourier Analysis and Approximation” (Birkhaeuser/Academic Press, 1971), we cited eight papers of R.G. Mamedov, plus his book ”Approximation of Functions by Linear Operators” (Azerbaijani, Baku, 1966). They played a specific role in our book. The work on Mellin analysis at Aachen, together with S. Jansche (see [18], [19], [20], [21]), was independent of that at Baku.
Figure 2: Private photo of the 30 Azerbaijani participants at the ICM, held in Moscow 1966, and kindly forwarded to the authors by Prof. Boris Golubov. Prof. Mamedov stands with his large briefcase in the first row, on the extreme left; the President of the Azerbaijani Academy of Sciences (in 1966), Acad. Z. Khalilov, stands in the center of the first row, eighth from the left, together with Prof. I.I. Ibragimov (fifth from the left) and the Dean of the Mechanical-Mathematical Department of Azerbaijani State University, Prof. A.I. Guseinov, sixth from the left. Prof. Golubov was invited to be present in this photo since he spent his first three years (1956-195 ) as a student at their university, and also participated in the Congress. We find him in the second row, third from the right.
## 9 Concluding remarks
The theory of Mellin analysis is a fascinating field of research, one still in the state of development, one which will surely have further important applications in various fields of applied mathematics. As noted in the Introduction, a pioneering contribution in this direction was the treatise of R.G. Mamedov [45]. The translation into English of the main part of Mamedov’s preface reads: _In classical approximation theory approximation of functions by polynomials and entire functions are considered, and relations between the order of best approximation of the functions and their structural and differential properties are studied. In connection with the saturation problem and P.P. Korovkin theorems on the convergence of linear positive operators, numerous investigations are dedicated to the approximations of functions by linear operators, in particular by linear positive operators, and by various singular integral operators. To this aim some function classes are introduced and studied. Moreover, the saturation classes of different linear operators by means of Fourier transform or other integral transforms are investigated. Many results in this field and the base of the theory of integral Fourier transform were published in the fundamental monograph of P.L. Butzer and R.J. Nessel ”Fourier analysis and approximation”. At present some other integral transforms are also used in studying different function classes and the associated saturation order of approximation by linear operators._
_The Mellin transform has important applications in the solution of boundary value problems in wedge shaped regions. It is also one of the most important methods for the study of classes of functions defined on the positive real line. The theory of Mellin transform requires the introduction of new concepts of derivative and integral, called M-derivative and M-integral. In this field in recent years many results have been produced. In this monograph we attempt significantly to complement those results and introduce them from the unified point of view. I have used material written earlier in the book with G.N. Orudzhev, namely ”On the approximation of functions by singular integrals of Mellin type, Baku, 1979._
After that, Mellin analysis was introduced in a systematic way in [18], [19], [20], then developed in [22], [23], [24], [25], [26] and later on in [6], [7], [8], [9], [10], [11], [12], [49]. Many other results and applications are surely to be discovered and the present paper is a further contribution in this direction.
Our theory of Hadamard-type fractional integrals and derivatives is concerned with real values of the parameter \(\alpha.\) The extension to complex values of \(\alpha\) can be carried out essentially in the same way, assuming Re \(\alpha\geq 0\) in place of \(\alpha\geq 0\) (see also e.g. [25]). For general complex values \(\alpha\in\mathbb{C},\) the theory may be more delicate. As an example, in Theorem 1, the assumption Re \(\alpha>0\) is basic for the application of the Abel-Stolz theorem. Indeed, for complex values of \(\alpha\) such that Re \(\alpha<0\) the convergence of the binomial series on the boundary of its convergence disk may fail. For Re \(\alpha\leq-1,\) this convergence fails at every point of the boundary, while for \(-1<Re\leavevmode\nobreak\ \alpha<0,\) it fails at just one point.
**Acknowledgments** The authors would like to thank Boris Ivanovich Golubov (Moscow) for his great help in regard to the short biography of R.G. Mamedov. He contacted his colleague in Baku, who in turn received a four-page biography of Mamedov together with a list of his publications kindly sent by his son Aykhan Mammadov, and made a first translation of this biography. The present biography is the extended and polished version kindly carried out by Peter Oswald (Bremen). The authors are grateful to Aykhan Mammadov for his extraordinary help in regard to various aspects of his father's life, Azerbaijan, and the paper itself. The translation of the preface of [45] is due to Andi Kivinukk (Tallinn). Also, the authors wish to thank Annarita Sambucini for her technical support in including photos in the text.
The first and third authors have been partially supported by the Gruppo Nazionale Analisi Matematica, Probabilità e Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM), through the INdAM - GNAMPA Project 2014, and by the Department of Mathematics and Computer Sciences of University of Perugia.
## References
* [1] L. V. Ahlfors, Complex Analysis, McGraw-Hill Int. Eds, Third Edition, 1979.
* [2] L. Angeloni and G. Vinti, Approximation in variation by homothetic operators in multidimensional setting, Differential Integral Equations, 26(5-6) (2013), 655–674.
* [3] L. Angeloni and G. Vinti, Variation and approximation in multidimensional setting for Mellin integral operators, In _New Perspectives in Approximation and Sampling Theory_, in Honor of Prof. Butzer’s 85th birthday, Birkhaeuser, in print (2014).
* [4] M.H. Annaby and P.L. Butzer, Mellin type differential equations and associated sampling expansions, Numer. Funct. Anal. Optim., 21, (2000), 1-24.
* [5] D. Baleanu, K. Diethelm, E. Scalas and J.J. Trujillo, Fractional Calculus: models and numerical methods. World Scientific, Series on Complexity, Nonlinearity and Chaos, vol.3 (2012).
* [6] C. Bardaro and I. Mantellini, Voronovskaya-type estimates for Mellin convolution operators, Result Math., 50, (2007), 1-16.
* [7] C. Bardaro and I. Mantellini, Quantitative Voronovskaja formula for Mellin convolution operators, Mediterr. J. Math., 7(4), (2010), 483-501.
* [8] C. Bardaro and I. Mantellini, Approximation properties for linear combinations of moment type operators, Comput. Math. Appl., 62, (2011), 213-229.
* [9] C. Bardaro and I. Mantellini, On the iterates of Mellin-Fejer convolution operators, Acta Appl. Math., (2012), 2304-2313.
* [10] C. Bardaro and I. Mantellini, On Voronovskaja formula for linear combinations of Mellin-Gauss-Weierstrass operators, Appl. Math. and Comput., 218, (2012), 10171-10179.
* [11] C. Bardaro and I. Mantellini, On the moments of the bivariate Mellin-Picard type kernels and applications, Integral Transform Spec. Funct., 23(2), (2012), 135-148.
* [12] C. Bardaro and I. Mantellini, On Mellin convolution operators: a direct approach to the asymptotic formulae, Integral Transform Spec. Funct., 25(3), (2014), 182-195.
* [13] C. Bardaro, J. Musielak and G. Vinti, Nonlinear integral operators and applications, De Gruyter Series in Nonlinear Analysis and Applications, 9, Walter De Gruyter, Berlin, New York, 2003.
* [14] A. Boccuto, D. Candeloro and A. Sambucini, Vitali-type theorems for filter convergence related to Riesz space-valued modulars and applications to stochastic processes, published online in J. Math. Anal. Appl., doi: 10.1016/j.jmaa.2014.05.014.
* [15] Yu. A. Bryckhov, H-J. Glaeske, A.P. Prudnikov and K.T. Vu, Multidimensional integral transformations, Gordon and Breach, Philadelphia, 1992.
* [16] P.L. Butzer, Legendre transform method in the solution of basic problems in algebraic approximation, In: _Functions, Series, Operators_, (Proc. Conf. Budapest, 1980, dedicated to L. Fejer and F. Riesz on their hundredth birthday), Colloq. Math. Soc. Janos Bolyai, 35, North-Holland, 1983, Vol. I, 277-301.
* [17] P.L. Butzer, C. Bardaro, I. Mantellini, Mellin Analysis and Exponential Sampling, Part I: Mellin fractional integrals, in Proceedings of 10th International Conference on Sampling Theory and Applications, Eurasip Open Library, 2013.
* [18] P.L. Butzer and S. Jansche, A direct approach to the Mellin transform, J. Fourier Anal. Appl., **3**, (1997), 325-375.
* [19] P.L. Butzer and S. Jansche, Mellin transform theory and the role of its differential and integral operators, Proc. Second Int. Workshop ”Transform methods and special functions”, Varna, 1996, 63-83.
* [20] P.L. Butzer and S. Jansche, A self-contained approach to Mellin transform analysis for square integrable functions, applications, Integral Transforms Spec.Funct., 8(1999), 175-198.
* [21] P.L. Butzer and S. Jansche, Mellin-Fourier series and the classical Mellin transform. Approximation in Mathematics (Memphis, 1997), Comput. Math. Appl. 40(1), (2000), 49-62.
* [22] P.L. Butzer, A.A. Kilbas and J.J. Trujillo, Fractional calculus in the Mellin setting and Hadamard-type fractional integral, J. Math. Anal. Appl., 269, (2002), 1-27.
* [23] P.L. Butzer, A.A. Kilbas and J.J. Trujillo, Compositions of Hadamard-type fractional integration operators and the semigroup property, J. Math. Anal. Appl., 269, (2002), 387-400.
* [24] P.L. Butzer, A.A. Kilbas and J.J. Trujillo, Mellin transform analysis and integration by parts for Hadamard-type fractional integrals, J. Math. Anal. Appl., 270, (2002), 1-15.
* [25] P.L. Butzer, A.A. Kilbas and J.J. Trujillo, Stirling functions of the second kind in the setting of difference and fractional calculus, Numer. Funct. Anal. Optimiz., 4(7-8), (2003), 673-711.
* [26] P.L. Butzer, A.A. Kilbas and J.J. Trujillo, Generalized Stirling functions of second type and representation of fractional order difference via derivatives, J. Diff. Equ. Appl., 9, (2003), 503-533.
* [27] P.L. Butzer, A.A. Kilbas L. Rodrigues-Germá and J.J. Trujillo, Stirling functions of first kind in the setting of fractional calculus and generalized differences, J. Diff. Equ. Appl., 13(8-9), (2007), 683-721.
* [28] P.L. Butzer and R.J. Nessel, Fourier Analysis and Approximation. Vol.I, Academic Press, New York (1971).
* [29] P.L. Butzer, G. Schmeisser and R.L. Stens, Shannon’s sampling theorem for bandlimited signals and their Hilbert transform, Boas-type formulae for higher order derivatives - the aliasing error involved by their extensions from bandlimited to non-bandlimited signals, Entropy, 14 (11), (2012), 2192-2226.
* [30] P.L. Butzer and R.L. Stens, The operational properties of the Chebyshev transform. II. Fractional derivatives, in _The theory of the approximation of functions_, (Proc. Intern. Conf., Kaluga, 1975)” (Russian), 49-61, ”Nauka”, Moscow, 1977.
* [31] P.L. Butzer and R.L. Stens, Chebyshev transform methods in the theory of best algebraic approximation. Abh.Math. Sem.Hamburg 45, (1976), 165-190.
* [32] P.L. Butzer, R.L. Stens and M. Wehrens, Higher moduli of continuity based on the Jacobi translation operator and best approximation. C.R. Math. Rep. Acad.Sci.Canada 2(1980), 83-87.
* [33] P.L. Butzer and U. Westphal, An access to fractional differentiation via fractional difference quotients, in _Fractional calculus and its Applications_, Proc. conf. New Haven, Lecture Notes in Math, 457, (1975), 116-145, Springer, Heidelberg.
* [34] P.L. Butzer and U. Westphal, An introduction to fractional calculus, In: Hilfer, R., Ed; _Applications of Fractional Calculus in Physics_, Singapore, World Scientific Publ. (2000), 1-85.
* [35] J. Elschner and I.G. Graham, Numerical methods for integral equations of Mellin type, J. Comput. Appl. Math., 125 (2000), 423-437.
* [36] H-J. Glaeske, A.P. Prudnikov and K.A. Skornik, Operational calculus and related topics, Chapman and Hall, CRC, Boca Raton, FL, (2006).
* [37] A.V. Glushak and T.A. Manaenkova, Direct and inverse problems for an abstract differential equation containing Hadamard fractional derivatives, Diff. Equ., 47(9), (2011), 1307-1317.
* [38] I.S. Gradshteyn and I.M. Ryzhik, Table of Integrals, Series and Products, Academic Press, IV edition, 1980.
* [39] J. Hadamard, Essai sur l’etude des fonctions donnees par leur developpement de Taylor, J. Math. Pures et Appl., Ser 4, 8(1892), 101-186.
* [40] R. Hilfer, Applications of fractional calculus in Physics, World scientific Publ., Singapore (2000).
* [41] A.A. Kilbas, Hadamard-type fractional calculus, J. Korean Math. Soc., 38(6), (2001), 1191-1204.
* [42] A.A. Kilbas, H.M. Srivastava and J.J. Trujillo, Theory and applications of fractional differential equations, Elsevier, Amsterdam, 2006.
* [43] W. Kolbe and R.J. Nessel, Saturation theory in connection with Mellin transform methods, SIAM J. Math. Anal., 246-262.
* [44] C. Kou, J. Liu and Y. Ye, Existence and uniqueness of solutions for the Cauchy-type problems of fractional diferential equations, Discrete Dyn. Nat. Soc., vol 2010, Article ID 142175, (2010).
* [45] R.G. Mamedov, The Mellin transform and approximation theory, (in Russian), ”Elm”, Baku, 1991.
* [46] R.G. Mamedov and G.N. Orudhzev, The approximation of functions by singular integrals of the Mellin type, (Russian), Azerbaidzhan. Inst. Nefti i Khimii, Baku, 1979, 1-76.
* [47] R.G. Mamedov and G.N. Orudhzev, Some characteristics of classes of functions that have fractional derivatives (Russian), In ”Investigations on some questions of the constructive theory of functions and differential equations”, 3-11, Azerbaidzhan. Inst. Nefti i Khimii, Baku, 1981.
* [48] R.G. Mamedov and G.N. Orudhzev, Some classes of functions, their interconnection and characteristics (Russian), In ”Investigations on some questions of the constructive theory of functions and differential equations”, 12-15, Azerbaidzhan. Inst. Nefti i Khimii, Baku, 1981.
* [49] I. Mantellini, On the asymptotic behaviour of linear combinations of Mellin-Picard type operators, Math. Nachr., 286(17-18), (2013), 1820-1832.
* [50] C. Martinez, M. Sanz and D. Martinez, About fractional integrals in the space of locally integrable functions, J. Math. Anal. Appl., 167 (1992), 111-122.
* [51] F. Oberhettinger, Tables of Mellin Transforms, Springer-Verlag, Berlin-Heidelberg-New York, 1974.
* [52] A.P. Prudnikov, Yu. A. Bryckhov and O. I. Marichev, Calculation of integrals and the Mellin transform, (Russian), Translated in J. Soviet Math. 54(6), (1991), 1239-1341.
* [53] M.D. Qassim, K.M. Furati and N.-E. Tatar, On a differential equation involving Hilfer-Hadamard fractional derivative, Abstr. Appl. Anal., vol. 2012 Article ID 391062, (2012).
* [54] S.G. Samko, A.A. Kilbas and O.I. Marichev, Fractional Integrals and Derivatives. Theory and Applications, Yverdon: Gordon and Breach, Amsterdam, (1993).
* [55] W. R. Schneider and W. Wyss, Fractional diffusion and wave equations, J. Math. Phys., 30(1),(1988), 134-145.
* [56] R.L. Stens and M. Wehrens, Legendre transform methods and best algebraic approximation, Ann. Soc. Math. Polon. Ser. I: Comment.Math. 21 (1979), 351-380.
* [57] Z. Szmydt and B. Ziemian, The Mellin transformation and Fuchsian type partial differential equations, Kluwer, Dordrecht, 1992.
* [58] C. Vinti, Opere Scelte, Universitá di Perugia, Aracne Editrice, Roma, 2008.
* [59] U. Westphal, An approach to fractional powers of operators via fractional differences, Proc. London Math. Soc. 29(3), (1974), 557- 576.
* [60] W. Wyss, The fractional diffusion equation, J. Math. Phys., 27(11) (1986), 2782-2786.
* [61] A. I. Zayed, Handbook of function and generalized function transformations, Mathematical Sciences Reference Series, CRC Press, Boca Raton, FL, 1996.
* [62] A.H. Zemanian, Generalized integral transformations, Interscience, New York, 1968.
|
0801.0420 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 33633,
"num_imgs": 7,
"llama3_tokens_count": 8915
} | [
"content_image/0801.0420/x1.png",
"content_image/0801.0420/x2.png",
"content_image/0801.0420/x3.png",
"content_image/0801.0420/x4.png",
"content_image/0801.0420/x5.png",
"content_image/0801.0420/x6.png",
"content_image/0801.0420/x7.png"
] | # Quantum pumping of electrons by a moving modulated potential
Markku Jääskeläinen
mrq@phy.stevens.edu
Frank Corvino
Christopher P. Search
Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle point on the Hudson, Hoboken, NJ 07030, USA
Vassilios Fessatidis
Department of Physics, Fordham University, Bronx, NY 10458, USA
February 25, 2024
###### Abstract
Quantum pumping holds great potential for future applications in micro- and nanotechnology. Its main feature, dissipationless charge transport, is theoretically possible via several different mechanisms. However, since no unambiguous verification has been demonstrated experimentally, the question of finding a viable mechanism for pumping remains open. Here we study quantum pumping in a one-dimensional electron waveguide with a single time-dependent barrier. The quantum pumping of electrons using a potential barrier whose height and position are harmonically varied is analyzed analytically and by numerically solving the time-dependent Schrödinger equation. The pumped charge is modeled analytically by including two contributions in linear response theory. First, the scattering of electrons off a potential moving slowly through matter-waves gives a contribution independent of the translational velocity of the potential. Second, Doppler-shifted scattering events give rise to a velocity dependent contribution, which is found in general to be small in comparison with the first one. The relative phase between the oscillations of the height and position is found to be the factor that determines to what extent either contribution is present.
pacs: 73.23.-b, 03.65.-w,72.10.Bg
## I Introduction
Quantum pumping [1] is a novel way of transporting charge or spin[2] without applying bias voltages in nanoscale conductors. The main idea has been around for some time, beginning with the seminal work by Thouless [3], who envisioned transport of charge in a moving periodic potential with similar results later obtained by Niu [4]. The essential idea of pumping is that the electrons interact with a potential that depends on at least two independent parameters that vary periodically in time. When these parameters vary out of phase with each other, a finite dc current is produced that depends only on how the parameters are varied. If the cyclic variation of the parameters is much slower than all other time scales, the wave function of the electrons is adiabatically deformed and because of this quantum pumping is often called adiabatic quantum pumping. True quantum pumping is qualitatively different from classical dissipative rectification of an ac signal with the nearest classical analogue to quantum pumping being a peristaltic pump or Archimedean screw. The first claim for experimental observation of quantum pumping was reported by Switkes _et.al.[5]_ where the shape of an electrostatically defined quantum dot was cyclically deformed. Theoretical work showed that time dependence of the experimental parameters may introduce stray capacitances that produce a rectifying effect [6], which may overshadow the contribution from quantum pumping. The validity of this scenario was later verified experimentally [7].
A major objective in theoretical studies is to calculate the amount of charge transported per driving cycle for periodic signals. Thouless showed in his original work that the transported charge is quantized if the Fermi energy lies in an energy gap of the Hamiltonian [3] and others have shown that quantization of the pumped charge occurs due to Coulomb blockade [8; 9] although quantum pumping is not necessarily quantized [10]. Büttiker, Thomas, and Pretre [11] derived an expression for current partition in multi-probe conductors, and expressed the charge due to quantum pumping in terms of the instantaneous scattering matrix and its derivatives with respect to the driving parameters. Brouwer[10] used these results to derive a connection with geometric transport, where the adiabatic curvature measures the sensitivity of the quantum states to parametric changes in the Hamiltonian. These results are all based on linear response theory. Büttiker and Moskalets [12] and also Kim [13] applied the technique of Floquet scattering to deal with situations beyond the linear response regime for periodic variations.
Most theoretical discussions of quantum pumping focus on either shape deformations or modulated tunneling rates of a quantum dot [10; 8; 9; 14] or variations of the amplitude of two localized potential barriers in a quantum wire [12; 13]. The contribution to the pumped charge from a scatterer translated a finite distance was first discussed by Avron, Elgart, Graf, and Sadun [15]. Cohen, Kottos, and Schanz [16] treated translation in annular geometries, where magnetic fluxes give rise to Aharonov-Bohm type effects. However, periodic variations of the position alone of a scatterer in an open one dimensional waveguide will of course not produce any net pumped charge. In this paper we consider the quantum pumping by a barrier undergoing periodic translation together with the simultaneous modulation of its height. We thus extend earlier studies to the case of pumping with a single, localized barrier. We analyze the system by using both the formalism developed by Büttiker, Thomas, and Pretre [11] and Brouwer[10], and also by extending the results of Avron, Elgart, Graf and Sadun for translated potentials. The parametrically varied scattering matrix is taken as a starting point to derive results for both contributions. We argue that there are two mechanisms that contribute to the total pumped charge. The first is the “snow-plow” dynamics of Avron, Elgart, Graf and Sadun resulting from pushing the electrons. The second is the Doppler shifted scattering of the matter waves off the potential that originates from the finite velocity of the potential.
Time-dependent studies of the quantum behavior of electrons in guided nanostructures are relatively new. Fu and Willander[17] considered the effects of gate bias and device geometry on the I-V characteristics beyond a plane-wave model. Of more relevance to the results presented here is the work by Agarwal and Sen[18], where quantum pumping was studied in the time-domain for a tight-binding model. Oriols, Alarcon, and Fernandez-Diaz[19] studied the dynamics of independent electrons in phase-coherent devices beyond periodic driving with quantum pumping as an example. Therefore, in addition to our analytic results, we study the quantum dynamics of the proposed pumping device by performing numerical simulations. These simulations allow us to visualize the effect of the pumping potential on the electron wave function starting from an empty wire.
The paper is organized as follows: In section II, we present our model and derive expressions for the two contributions to the pumped charge. In section III, we present numerical results and analyze the different contributions. Finally, in section IV, we summarize our results and discuss the implications of them.
## II Theory
Our aim here is to investigate the quantum pumping of a single translated and modulated potential barrier in a one dimensional quantum wire, both by using a plane wave scattering approach, and later in section III using numerical simulations. For simplicity we choose a specific model system basic enough to treat analytically, yet general enough to draw universal conclusions from. To achieve this, we study the quantum dynamics of electrons scattering off of the potential
\[V(x,t)=\frac{V_{0}(t)}{\cosh^{2}[(x-x_{c}(t))/L]},\] (1)
where \(L\) determines the width of the barrier. Both the position \(x_{c}\) and the barrier height \(V_{0}\) are varied harmonically around their central values with a difference \(\Delta\) in relative phase. The amplitude is taken to oscillate around a central value, which changes according to
\[V_{0}(t)=A_{0}[1+\kappa\sin(\omega t+\Delta)],\] (2)
and the barrier center is taken to depend on time as
\[x_{c}(t)=x_{0}\sin(\omega t).\] (3)
Equation (3) gives us for the instantaneous velocity of the barrier
\[v_{c}(t)\equiv\dot{x}_{c}(t)=\omega x_{0}\cos(\omega t).\] (4)
Our choice of potential is due to convenience as Eq. (1) has an analytically known solution for the scattering matrix[21].
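For later reference, the driving protocol of Eqs. (1)-(4) is simple to tabulate numerically. The sketch below uses placeholder values for \(A_{0}\), \(\omega\) and \(x_{0}\) (only \(\kappa=0.5\) and \(L=x_{0}/2\) are chosen as in the caption of Fig. 1); it is an illustration, not the parameter set used in the calculations.

```python
import numpy as np

# Illustrative parameters in units with hbar = m = 1; kappa and L/x0 follow Fig. 1.
A0, kappa, omega, x0, Delta = 1.0, 0.5, 0.1, 1.0, np.pi / 2
L = x0 / 2

def V0(t):
    return A0 * (1.0 + kappa * np.sin(omega * t + Delta))    # Eq. (2)

def xc(t):
    return x0 * np.sin(omega * t)                            # Eq. (3)

def vc(t):
    return omega * x0 * np.cos(omega * t)                    # Eq. (4)

def V(x, t):
    return V0(t) / np.cosh((x - xc(t)) / L) ** 2             # Eq. (1)

# Snapshot of the barrier a quarter period into the cycle.
T = 2.0 * np.pi / omega
x = np.linspace(-5.0 * x0, 5.0 * x0, 201)
print(V(x, 0.25 * T).max())
```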
<figure><img src="content_image/0801.0420/x1.png"><figcaption>Figure 1: The potential barrier (1) as a function of time for (a) Δ=0, and (b)Δ=π/2. Here x is measured in units of x0, which is defined in the text, t isin units of T, V is in units of A0, and κ=0.5 while L=x0/2. For the caseΔ=π/2, which is shown in b), the potential always takes on values at or belowA0 as is it moves towards larger values of x, and similarly values larger thanA0 for the other half of the cycle. From this it follows intuitively that the’snow-plow’ mechanism can pump charge through quantum scattering of electrons.</figcaption></figure>
In Fig. 1 the potential (1) is shown as a function of time and space. In (a) the relative phase is \(\Delta=0\), and the potential reaches its highest and lowest values at the turning points of the trajectory. For this case, the barrier height changes most rapidly at \(x=0\). In (b) we have \(\Delta=\pi/2\) and the extreme values of the height occur at \(x=0\), which is where the translational velocity peaks. For the potential (1), the scattering amplitudes for plane waves of momentum \(k_{F}\) are given by [21]
\[r(k_{F},V_{0})=\frac{\Gamma(1+\nu-ik_{F}L)\Gamma(-\nu-ik_{F}L)\Gamma(ik_{F}L)} {\Gamma(1+\nu)\Gamma(-\nu)\Gamma(-ik_{F}L)},\] (5)
and
\[t(k_{F},V_{0})=\frac{\Gamma(1+\nu-ik_{F}L)\Gamma(-\nu-ik_{F}L)}{\Gamma(1-ik_{F }L)\Gamma(-ik_{F}L)},\] (6)
and where
\[\nu=\frac{1}{2}\left[-1+\sqrt{1-8V_{0}L^{2}}\right].\] (7)
\(\Gamma(z)\) is the standard Gamma function. As the potential (1) is translated along the x-axis in an oscillatory manner, as shown in Fig. 1, particles which are scattered off of it see transmission/reflection amplitudes modulated periodically in time. We note here that the scattering matrix does not depend explicitly on the position \(x_{c}(t)\) of the scatterer. Rather the change of position gives rise to a pumped net charge in two physically distinct ways.
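Equations (5)-(7) can be evaluated directly with a complex Gamma function, which is convenient for the transport formulas used below. The short sketch that follows uses placeholder parameter values; the flux-conservation check \(|r|^{2}+|t|^{2}=1\) at the end is only a sanity test of the transcription.

```python
import numpy as np
from scipy.special import gamma

def nu(V0, L):
    # Parameter of Eq. (7); becomes complex once 8*V0*L**2 > 1.
    return 0.5 * (-1.0 + np.sqrt(complex(1.0 - 8.0 * V0 * L**2)))

def r_amp(kF, V0, L):
    # Reflection amplitude of Eq. (5) for the sech^2 barrier (hbar = m = 1).
    n, z = nu(V0, L), 1j * kF * L
    return gamma(1 + n - z) * gamma(-n - z) * gamma(z) / (gamma(1 + n) * gamma(-n) * gamma(-z))

def t_amp(kF, V0, L):
    # Transmission amplitude of Eq. (6).
    n, z = nu(V0, L), 1j * kF * L
    return gamma(1 + n - z) * gamma(-n - z) / (gamma(1 - z) * gamma(-z))

kF, V0, L = 1.0, 1.2, 0.5          # placeholder values
r, t = r_amp(kF, V0, L), t_amp(kF, V0, L)
print(abs(r)**2, abs(t)**2, abs(r)**2 + abs(t)**2)
```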
First, a contribution to quantum pumping occurs for any finite period due to Doppler shifting of the reflection and transmission amplitudes of the potential. This can be understood intuitively by considering the scattering off of the moving barrier from an inertial frame at rest with the scattering potential. In a frame moving at velocity \(v_{c}\), the instantaneous velocity of the potential, we find that the potential appears to be stationary, but that the momenta of plane waves propagating at \(k=\pm k_{F}\) change to \(k=\pm k_{F}-v_{c}\). A pedagogical sketch of this is shown in Fig. 2, where the potential is shown together with vectors representing both the velocity of the potential and the momenta of plane waves propagating inwards towards the potential. In (a) the velocities are shown in the lab frame, and the momenta of the incoming plane waves are equal in magnitude. In (b) the same situation is shown in a frame moving with \(v_{c}\) to the right now also including the scattered waves. All momenta are now shifted by the translational velocity of the potential, and an asymmetry between positive and negative momenta is created. In the moving frame the potential is stationary and the scattering can be treated using standard scattering theory with the modification that the momenta of the left and right going plane waves are shifted as indicated in Fig. 2.
<figure><img src="content_image/0801.0420/x2.png"><figcaption>Figure 2: The scattering of electrons off the potential (1) shownschematically in a) the lab frame and b) an inertial frame at rest relative tothe potential. The momenta of the incoming and scattered waves as well as theinstantaneous velocity of the potential are shown by arrows whose lengths areproportional to the magnitude of the velocities. In the lab frame shown in (a)the incoming plane waves have momenta of equal magnitude given by kF. In themoving frame shown in (b) the left going waves have larger momenta than theright going ones, as indicated in the figure. In the moving frame thepotential is stationary and the scattering can be treated using standardscattering theory with the modification that the momenta of the left and rightgoing plane waves are shifted as indicated above. In (b) the momentacorrespond from top to bottom to: transmitted, reflected and incoming wavesrespectively.</figcaption></figure>
Applying this to the scattering of plane waves, we find from Galileo invariance of the Schrödinger equation that the instantaneous scattering matrix, which relates the incoming and outgoing amplitudes, is given by a \(4\times 4\) S-matrix corresponding to the propagating modes \(\pm k_{F}-v_{c}\) and \(\pm k_{F}+v_{c}\). The reflection and transmission probability amplitudes in the moving frame are \(\tilde{r}_{\pm}=r(k_{F}\pm v_{c},V_{0})\) and \(\tilde{t}_{\pm}=t(k_{F}\pm v_{c},V_{0})\). Returning to the laboratory frame, the transmitted waves have momentum \(\pm k_{F}\) while the reflected waves have momentum \(\pm k_{F}+2v_{c}\), thus being scattered inelastically, and we find that the corresponding non-zero scattering probabilities in the S-matrix are given by (for \(k_{F}>2v_{c}\)) [12; 13; 22; 25]
\[|S_{-k_{F}\rightarrow-k_{F}}|^{2} = |t_{+}|^{2}=\frac{k_{F}}{k_{F}}|\tilde{t}_{+}|^{2},\] (8)
\[|S_{k_{F}\to k_{F}}|^{2} = |t_{-}|^{2}=\frac{k_{F}}{k_{F}}|\tilde{t}_{-}|^{2},\] (9)
\[|S_{k_{F}\rightarrow-k_{F}+2v_{c}}|^{2} = |r_{-}|^{2}=\frac{k_{F}-2v_{c}}{k_{F}}|\tilde{r}_{-}|^{2},\] (10)
\[|S_{k_{F}\to k_{F}+2v_{c}}|^{2} = |r_{+}|^{2}=\frac{k_{F}+2v_{c}}{k_{F}}|\tilde{r}_{+}|^{2}.\] (11)
In this case we thus explicitly take into account the velocity dependence of the scattering against a moving target. This results in different scattering energies for the electrons when considered from a frame at rest with the barrier, and can be viewed as pumping due to Doppler-shifted scattering events. We note here that as a consequence of the translational motion, Eqs. (5,6) are valid only when
\[\frac{(k_{F}+\omega x_{0})^{2}}{2}<A_{0}(1-\kappa),\] (12)
where \(k_{F}+\omega x_{0}\) is the maximal instantaneous momentum of the plane wave in the moving frame. Equation (12) is a condition for the dynamics to take place in the tunneling regime, when the total kinetic energy in the moving frame is smaller than the potential height at its minimum.
For any two-terminal device where two independent parameters \(X_{1}\) and \(X_{2}\) are varied cyclically, the charge accumulated per period on the left/right lead is given by[10]
\[Q_{L,R}=\frac{e}{\pi}\int_{S}\sum_{i,j\in L,R}Im\frac{\partial S^{*}_{ij}}{ \partial X_{1}}\frac{\partial S_{ij}}{\partial X_{2}}dX_{1}dX_{2},\] (13)
where the label j stands for the modes propagating towards the left (L) or right (R) lead respectively. Here, using the elements of the scattering matrix and Eq. (13), we obtain for the charge per cycle on the right lead due to the Doppler shifted contribution
\[Q_{D}=\frac{e}{\pi}Im\int_{S}(\frac{\partial t^{*}_{-}}{\partial v_{c}}\frac{ \partial t_{-}}{\partial V_{0}}-\frac{\partial t_{-}}{\partial v_{c}}\frac{ \partial t^{*}_{-}}{\partial V_{0}}+\frac{\partial r^{*}_{+}}{\partial v_{c}} \frac{\partial r_{+}}{\partial V_{0}}-\frac{\partial r_{+}}{\partial v_{c}} \frac{\partial r^{*}_{+}}{\partial V_{0}})dS.\] (14)
Alternatively, the charge can be calculated from
\[Q_{D}=\frac{e}{\pi}Im\int_{\partial S}(t^{*}_{-}\nabla t_{-}+r^{*}_{+}\nabla r _{+})\cdot d\vec{X},\] (15)
where we have introduced the notation \(\vec{X}=(v_{c},V_{0})\), and \(\nabla=(\partial/\partial v_{c},\partial/\partial V_{0})\). For the case of strong pumping, i.e. when the integrand in Eq. (14) varies appreciably over the integration area, the contribution is easier to calculate numerically using Eq. (15). The reason is simply that a one-dimensional integral requires less computation time than a two-dimensional one, and the increase in computation time for Eq. (14) becomes noticeable for rapidly varying integrands. Both expressions, Eq. (14) and Eq. (15), depend on the derivatives of the reflection and transmission amplitudes with respect to both the barrier height and the translational velocity of the potential, and are complicated enough to prohibit the derivation of compact analytical expressions for the pumped charge.
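One way to evaluate Eq. (15) numerically is to parameterize the boundary \(\partial S\) by time over one driving period, so that \((\nabla t_{-})\cdot d\vec{X}\) becomes \((dt_{-}/d\tau)\,d\tau\). The sketch below does this with periodic central differences. It assumes, as a simplification, that \(t_{-}\) and \(r_{+}\) can be identified with the frame-shifted plane-wave amplitudes \(t(k_{F}-v_{c},V_{0})\) and \(r(k_{F}+v_{c},V_{0})\); the velocity-dependent flux factors of Eqs. (8)-(11) are not included and may need to be restored depending on the S-matrix convention. All parameter values are illustrative and \(e=1\).

```python
import numpy as np
from scipy.special import gamma

e = 1.0   # electron charge in the units used here

def amplitudes(k, V0, L):
    # Plane-wave amplitudes (r, t) of Eqs. (5)-(7).
    n = 0.5 * (-1.0 + np.sqrt(complex(1.0 - 8.0 * V0 * L**2)))
    z = 1j * k * L
    r = gamma(1 + n - z) * gamma(-n - z) * gamma(z) / (gamma(1 + n) * gamma(-n) * gamma(-z))
    t = gamma(1 + n - z) * gamma(-n - z) / (gamma(1 - z) * gamma(-z))
    return r, t

def Q_doppler(kF, A0, kappa, omega, x0, Delta, L, N=4000):
    # Eq. (15) as a closed line integral in (vc, V0), parameterized by time over one period.
    T = 2.0 * np.pi / omega
    tau = np.linspace(0.0, T, N, endpoint=False)
    dtau = T / N
    vc = omega * x0 * np.cos(omega * tau)
    V0 = A0 * (1.0 + kappa * np.sin(omega * tau + Delta))
    tm = np.array([amplitudes(kF - v, V, L)[1] for v, V in zip(vc, V0)])   # t_-
    rp = np.array([amplitudes(kF + v, V, L)[0] for v, V in zip(vc, V0)])   # r_+
    # Central differences on the periodic parameter loop.
    dtm = (np.roll(tm, -1) - np.roll(tm, 1)) / (2.0 * dtau)
    drp = (np.roll(rp, -1) - np.roll(rp, 1)) / (2.0 * dtau)
    integrand = np.imag(np.conj(tm) * dtm + np.conj(rp) * drp)
    return e / np.pi * integrand.sum() * dtau

print(Q_doppler(kF=1.0, A0=1.2, kappa=0.05, omega=0.05, x0=0.3, Delta=0.0, L=0.5))
```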
A second contribution comes from moving a scatterer slowly an infinitesimal distance \(dx_{c}\) through an impinging matter-wave of momentum \(k_{F}\), and gives rise to pumping through a quantum mechanical “snow-plow dynamics” [15]
\[dQ_{SP}=-\frac{ek_{F}}{\pi}|r(k_{F},V_{0})|^{2}dx_{c},\] (16)
that can be interpreted as resulting from reflecting a fraction \(|r|^{2}\) of the \(k_{F}dx_{c}/\pi\) electrons occupying the region in front of the barrier. As discussed in Ref. [15], Eq. (16) is obtained from the results of Büttiker, Thomas, and Pretre[11] for the adiabatically pumped charge due to moving the scatterer a distance \(dx_{c}\). The net transferred charge over one pumping cycle starting at an arbitrary time \(t_{0}\) is then given by
\[Q_{SP}(t_{0})=-\frac{ek_{F}}{\pi}\int_{t_{0}}^{t_{0}+T}|r(k_{F},V_{0}(t))|^{2} v_{c}(t)dt.\] (17)
The integrand of Eq. (17) is explicitly dependent on time and on the velocity \(v_{c}(t)\). Despite this, the pumped charge is independent of the velocity and its temporal evolution since \(Q_{SP}\) is an adiabatic invariant; the velocity and the time only serve to parameterize the integration. For the case \(\Delta=0\), illustrated in Fig. 1 (a), we expect that \(Q_{SP}=0\), since contributions coming from the potential moving to the right, when \(v_{c}>0\), are exactly canceled when the potential moves back to the left with \(v_{c}<0\) and identical values of the barrier height. For \(\Delta>0\), when the potential is moving to the left, the amplitude is on average below its central value, \(A_{0}\), whereas during the motion in the other direction it is above the central value \(A_{0}\) on average. The reflection probability is therefore larger when the potential moves to the right versus the left. This implies that a net charge will get pushed towards the right during one complete cycle. The difference in the amplitudes is maximal for \(\Delta=\pi/2\), and as a result the difference in charge reflected to the left and to the right is also maximal.
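Equation (17) reduces to a one-dimensional quadrature once \(V_{0}(t)\), \(v_{c}(t)\) and the reflection probability are specified. A minimal sketch (illustrative parameters, \(e=1\)) is given below; in line with the discussion above, \(Q_{SP}\) should vanish for \(\Delta=0\) and be largest in magnitude for \(\Delta=\pi/2\).

```python
import numpy as np
from scipy.special import gamma

e = 1.0

def r_amp(k, V0, L):
    # Plane-wave reflection amplitude of Eq. (5) with nu from Eq. (7).
    n = 0.5 * (-1.0 + np.sqrt(complex(1.0 - 8.0 * V0 * L**2)))
    z = 1j * k * L
    return gamma(1 + n - z) * gamma(-n - z) * gamma(z) / (gamma(1 + n) * gamma(-n) * gamma(-z))

def Q_snow_plow(kF, A0, kappa, omega, x0, Delta, L, N=4000):
    # Eq. (17) integrated over one period with a simple Riemann sum.
    T = 2.0 * np.pi / omega
    t = np.linspace(0.0, T, N, endpoint=False)
    V0 = A0 * (1.0 + kappa * np.sin(omega * t + Delta))
    vc = omega * x0 * np.cos(omega * t)
    R = np.array([abs(r_amp(kF, V, L))**2 for V in V0])
    return -e * kF / np.pi * np.sum(R * vc) * (T / N)

for Delta in (0.0, np.pi / 4, np.pi / 2):
    print(Delta, Q_snow_plow(kF=1.0, A0=1.2, kappa=0.05, omega=0.05, x0=0.3, Delta=Delta, L=0.5))
```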
When \(\Delta=\pi/2\), as shown in Fig. 1 (b), the rate of change of the height is maximal for \(x=\pm x_{0}\), where the translational velocity is equal to zero. Likewise, we have maximal velocity at \(x=0\) when the change in height equals zero. For \(\Delta=0\) this is exactly reversed. The behavior of the relevant parameters is illustrated schematically in Fig. 3, where the integration contours in \((x_{c},V_{0})\) and \((v_{c},V_{0})\) are shown for the different phases \(\Delta=0\) and \(\Delta=\pi/2\). In (a) and (b) we have \(\Delta=0\), and we see that the area enclosed in \((x_{c},V_{0})\) equals zero, whereas the area in \((v_{c},V_{0})\) is maximal. For \(\Delta=0\) we thus expect that \(Q_{SP}=0\), whereas \(Q_{D}\) is maximal. In (c) and (d) we have \(\Delta=\pi/2\) and the situation is reversed so that we expect \(Q_{SP}\) to be maximal while \(Q_{D}=0\). Since of the two values \(\Delta=0\) maximizes the velocity dependent contribution and \(\Delta=\pi/2\) maximizes the position dependent pumped charge, we can view the pumping in this system as being the combination of two physically different mechanisms whose relative contributions are determined by the choice of relative phase between the driving parameters. The pumped charge for \(\Delta=-\pi/2\) and \(\pi\) will be the same as for \(\Delta=\pi/2\) and \(0\), respectively, except that the direction of the pumped charge is reversed. We also note here that the contribution from the Doppler shifted scattering depends on the velocity of the scatterer, and thus goes to zero in the limit of infinitely slow driving for finite spatial amplitude \(x_{0}\), leaving only \(Q_{SP}\) given by Eq. (17) as a contribution to the pumped charge.
<figure><img src="content_image/0801.0420/x3.png"><figcaption>Figure 3: Integration contours in the parameters (xc,V0) and (vc,V0) for thedifferent phase values Δ=0 and Δ=π/2. In (a) and (b) we have Δ=0, and we seethat the area enclosed in (xc,V0) equals zero, whereas the area in (vc,V0) ismaximal. For Δ=0 we thus expect that QSP=0, whereas QD should be maximal. In(c) and (d) we have Δ=π/2 and the situation is the opposite.</figcaption></figure>
## III Simulations
To investigate the presence of quantum pumping in our system in more detail, we simulate the dynamics by solving the following time-dependent Schrödinger equation numerically[24]
\[i\frac{\partial\Psi}{\partial t}=-\frac{1}{2}\frac{\partial^{2}\Psi}{\partial x ^{2}}+V(x,t)\Psi+S(x,t),\] (18)
where \(V(x,t)\) is given by Eq. (1), and \(S(x,t)\) a function describing the source, here taken to be both phase coherent and quasi-monochromatic,
\[S(x,t)=iS_{0}\exp(-(x-x_{s})^{2}/L_{s}^{2}\pm ik_{F}x),\] (19)
where \(S_{0}\) is the source strength, \(x_{s}\) is the central position of the source, and \(L_{s}\) is the width of the source. For numerical convenience, complex absorbing potentials [20] were used to implement transparent boundaries and thus avoid any significant effects of finiteness of the numerical grid on the dynamics. Note that here, as in the rest of the paper, we work in units where \(\hbar=m=1\). Figure 4 shows the dynamics of a typical simulation where charge is injected from the left lead into an initially empty scattering region.
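A minimal sketch of such a simulation is given below. It uses a Crank-Nicolson step for Eq. (18) with the source of Eq. (19) added explicitly, and a smooth absorbing mask at the grid edges as a simple stand-in for the complex absorbing potentials of Ref. [20]. For brevity the scatterer is modeled as a static square barrier; the driven potential of Eq. (1) can be substituted in the function `V(t)`. All numerical values are illustrative assumptions, not the parameters used for the figures.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Grid and time step in units where hbar = m = 1 (illustrative values only).
kF, L_bar, A0 = 1.0, 6.0, 5.0 / 9.0
xmax, nx = 150.0, 2048
x = np.linspace(-xmax, xmax, nx)
dx = x[1] - x[0]
dt, nsteps = 0.05, 3000

# Kinetic operator -(1/2) d^2/dx^2 by finite differences.
T_kin = -0.5 * sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2

def V(t):
    """Static square barrier of height A0 and width L_bar (replace by Eq. (1))."""
    return np.where(np.abs(x) < 0.5 * L_bar, A0, 0.0)

# Phase-coherent, quasi-monochromatic source, Eq. (19), injecting from the left lead.
S0, xs, Ls = 1e-3, -100.0, 10.0
def S(t):
    return 1j * S0 * np.exp(-(x - xs) ** 2 / Ls**2 + 1j * kF * x)

# Smooth absorbing mask near the edges (simple alternative to the CAPs of Ref. [20]).
d_edge = np.minimum(x - x[0], x[-1] - x)
w_abs = 25.0
mask = np.where(d_edge < w_abs, np.sin(0.5 * np.pi * d_edge / w_abs) ** 2, 1.0)

psi = np.zeros(nx, dtype=complex)         # initially empty scattering region
I = sp.identity(nx, format="csc")

for n in range(nsteps):
    t_mid = (n + 0.5) * dt
    H = (T_kin + sp.diags(V(t_mid))).tocsc()
    rhs = (I - 0.5j * dt * H) @ psi - 1j * dt * S(t_mid)
    psi = spla.spsolve(I + 0.5j * dt * H, rhs)   # Crank-Nicolson step for Eq. (18)
    psi *= mask                                   # absorb outgoing waves at the edges

print("probability in |x| < 20:", np.sum(np.abs(psi[np.abs(x) < 20.0]) ** 2) * dx)
```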
<figure><img src="content_image/0801.0420/x4.png"><figcaption>Figure 4: The quantum dynamics of an initially empty scattering region.Matter-waves are seen to enter the region from a source on the left, and afteran initial transient a periodic stable state is reached. The oscillating peakat x=0 is due to the interference in the scattering region where the parts tobe transmitted still overlaps with the incoming and reflected parts. Note thatx is in units of the Fermi wavelength, and t is in units of T. For thepotential we used the following values kFL/2π=3/π, 2A0/k2F=10/9, κ=0.05, andkFx0/2π=3/40π.</figcaption></figure>
An initial transient, during which the scattering region fills with matter-waves, is seen to be followed by a periodic regime. To the right, for \(x>0\), trains of transmitted waves exit the scattering region with amplitudes modulated in time. To the left, for \(x<0\), the reflected part is superimposed on, and interferes with, the incoming waves.
For a more quantitative check, we calculate the pumped charge in the time domain by integrating the difference in the instantaneous probability currents over one period. These are measured at two points \(x_{\pm}\), chosen sufficiently far away from the scattering region to represent the outgoing leads, and we have
\[Q(t)=e\int_{t}^{t+T}\left[J_{+}(x_{+},t^{\prime})-J_{-}(x_{-},t^{\prime}) \right]dt^{\prime},\] (20)
where \(J_{\pm}\) is the total probability current of electrons in the lead to the right (left) of the barrier. In any realistic implementation of quantum pumping of electrons, the two leads are connected to independent reservoirs with effectively no phase coherence between electrons injected from either one. To account for this, the two currents \(J_{\pm}\) were calculated with the sources placed in opposite leads, and the charge imbalance was obtained as the incoherent difference between two completely independent numerical simulations. We also note here that the simulated quantum scattering dynamics will only agree with results calculated using the corresponding instantaneous values of the plane-wave scattering parameters if the momentum distributions used are narrow enough, as discussed by Atabek and Lefebvre [22].
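The current integration of Eq. (20) amounts to straightforward post-processing of the simulated wavefunctions. The helper below computes the probability current \(J=\mathrm{Im}(\Psi^{*}\partial_{x}\Psi)\) (in units where \(\hbar=m=1\)) at a chosen grid point and accumulates the transferred charge from stored time series of snapshots; as described above, the two histories would come from independent runs with sources in opposite leads. The plane-wave check at the end, for which \(J=k|A|^{2}\) exactly, only validates the finite-difference routine.

```python
import numpy as np

def probability_current(psi, dx, i):
    """Probability current J = Im(psi* dpsi/dx) at grid index i (hbar = m = 1),
    using a centered finite difference."""
    dpsi = (psi[i + 1] - psi[i - 1]) / (2.0 * dx)
    return np.imag(np.conj(psi[i]) * dpsi)

def pumped_charge(psi_plus_history, psi_minus_history, dt, dx, i_plus, i_minus):
    """Eq. (20): charge (in units of e) transferred during the stored time window,
    from the incoherent current imbalance between the two leads.  The two
    histories are lists of wavefunction snapshots from independent runs."""
    J_plus = np.array([probability_current(p, dx, i_plus) for p in psi_plus_history])
    J_minus = np.array([probability_current(p, dx, i_minus) for p in psi_minus_history])
    return np.sum(J_plus - J_minus) * dt   # rectangle rule over the stored window

# Consistency check on a plane wave A exp(i k x), where J = k |A|^2 exactly.
x = np.linspace(0.0, 50.0, 5001)
k, A = 1.3, 0.7
psi = A * np.exp(1j * k * x)
print(probability_current(psi, x[1] - x[0], 2500), "vs", k * A**2)
```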
<figure><img src="content_image/0801.0420/x5.png"><figcaption>Figure 5: In (a) The pumped charge (in units of e) as a function of time (inunits of T) is shown for four different values of the relative phase Δ. In (b)the value of Q is shown as a function of Δ for large values of t. We note herethat the shape of the curve in (b) indicates that the pumped charge isbasically adiabatic in the sense that the dependence as a function of Δ isharmonic, and the pumping exhibits an area dependence expected from a sum ofparametric contributions as indicated in Fig. 3. On the other hand, thepresence of a nonzero contribution for Δ=0 indicates that the truly adiabaticlimit where the pumped charge is independent of the velocity of the parametricchanges, is not yet reached. For the potential we used the following valueskFL/2π=3/π, 2A0/k2F=10/9, κ=0.05, and kFx0/2π=3/40π. Note that the pumpingperiod was taken as T=2π in units of A−10 in these simulations.</figcaption></figure>
In Fig. 5, the resulting pumped charge is shown as a function of time and relative phase. The charges were averaged over one period of time and calculated from a set of simulations with different values for \(\Delta\). In (a) the net charge as a function of time is shown for four different values of the relative phase. For all values there is a transient behavior followed by an asymptotic stationary value. In (b) the pumped charge in the asymptotic regime is shown as a function of the relative phase \(\Delta\). The fact that the charge is nonzero for both \(\Delta=0\) and for \(\Delta=\pi/2\) shows that there are two distinct contributions to the dynamics, each dependent on the parameter combinations (\(x_{c}(t),V_{0}(t)\)) and (\(v_{c}(t),V_{0}(t)\)) respectively, as discussed in section II.
<figure><img src="content_image/0801.0420/x6.png"><figcaption>Figure 6: Pumped charge (in units of e) as a function of the oscillationperiod T (in units of 1/A0). In (a) Δ=π/2 we see that for large values of theperiod the charge approaches a constant value larger than zero, as expectedfrom Eq. (17). In (b) Δ=0 the pumped charge approaches zero for large periods,in accordance with Eq. (15) since max(vc)=ωx0∝1/T. For the potential we usedthe following values kFL/2π=3/π, 2A0/k2F=10/9, κ=0.05, and kFx0/2π=3/40π.</figcaption></figure>
We expect the system to behave adiabatically for slow enough driving [23], which is where the instantaneous scattering matrix used in Sec. II should describe the pumping well. For the snow-plow contribution, which only depends on the position and amplitude, not the velocity, adiabaticity implies that the pumped charge over one cycle will reach a steady value for large pumping periods. In Fig. 6 the pumped charge is shown as a function of the pumping period \(T\) for the phases \(\Delta=\pi/2\) in (a), and \(\Delta=0\) in (b). In Fig. 6 (a) we see that for large values of \(T\) a constant value is reached, as we expect from our analysis. For \(\Delta=0\), on the other hand, the area of the integration domain in Eq. (14) scales with \(\max(v_{c})=\omega x_{0}\propto 1/T\), and we thus expect that the pumped charge approaches zero, as is also seen in Fig. 6 (b), where \(Q(T)\propto 1/T\) for large values of \(T\). For small (non-adiabatic) periods, the pumped charge obtained from the simulations deviates from what we expect using Eqs. (15) and (17), and in both cases we find smaller values of the total pumped charge. In Fig. 6 (a) this is seen as the simulated value drops below the constant asymptote, and in (b) the simulated values fall below the \(1/T\) behavior expected from the scaling of Eq. (15).
For small values of \(L\), the barrier becomes increasingly transparent, and as a result the integrand in Eq. (17) decreases. On the other hand, for very large widths the reflection coefficient becomes nearly unity, almost independent of the translational velocity or barrier height, so that the charge pushed to the right during one half-cycle exactly cancels the charge pushed to the left during the second half-cycle. From these two limits we deduce that there is a maximum in the pumped charge at moderate values of the width. In Fig. 7 the pumped charge is shown as a function of the barrier width \(L\) for two different values of the relative phase: in (a) we have \(\Delta=\pi/2\), and in (b) \(\Delta=0\). In both cases the pumped charge is seen to have a maximum as a function of barrier width. We find in Fig. 7 (a) that this occurs when the width is just below the wavelength of the matter-waves. Quantum pumping using “snow-plow” dynamics thus cannot occur efficiently if the width of the barrier differs significantly from the wavelength of the particles. For the case of \(\Delta=0\), the maximum occurs for narrower widths; assuming analyticity of the scattering amplitudes, their derivatives should be continuous in the limits of vanishing and infinite width, so that, based on Eq. (15), an argument almost identical to the one given above indicates that the pumped charge goes to zero in the limits \(L\to 0,\infty\) and must therefore have a maximum at finite \(L\).
<figure><img src="content_image/0801.0420/x7.png"><figcaption>Figure 7: Pumped charge (in units of e) as a function of barrier width for twodifferent values of the relative phase. In (a) we have Δ=π/2, and in (b) Δ=0.For both cases the pumped charge is seen to have a maximum as a function ofbarrier width. For the scattering potential the following values were usedT=2πA−10, 2A0/k2F=10/9, κ=0.05, and kFx0/2π=3/40π.</figcaption></figure>
## IV Summary and Conclusions
To our knowledge, this work is the first to suggest an explicit implementation of “snow-plow” dynamics in an open geometry to produce quantum pumping. Previously, the mechanism had been suggested only for stirring [16] in a closed circular geometry. In addition, we have shown that for finite pumping rates, Doppler shifted scattering gives a second contribution to the pumped charge. The two contributions (15) and (17) were derived using different assumptions, and are basically independent of each other, although both are based on the BTP formula. It is convenient to combine them here since they are in some sense complementary, both in their dependence on the parameters and in their behavior. If we consider the position \(x_{c}(t)\) and its instantaneous velocity \(v_{c}(t)\) to be independent driving parameters, we can combine Eqs. (15) and (17) into a single expression for a line integral in the three-dimensional space spanned by (\(x_{c}\),\(v_{c}\),\(V_{0}\)) to give
\[Q=\frac{e}{\pi}\int_{\gamma}\vec{B}\cdot d\vec{X}_{3},\] (21)
where \(\gamma\) is the integration contour, \(d\vec{X}_{3}=(dx_{c},dv_{c},dV_{0})\), and the geometrical magnetic field vector \(\vec{B}\) is given by
\[B_{1}=-k_{F}|r|^{2},\] (22)
\[B_{2}=t_{-}^{*}\frac{\partial t_{-}}{\partial v_{c}}+r_{+}^{*}\frac{\partial r _{+}}{\partial v_{c}},\] (23)
\[B_{3}=t_{-}^{*}\frac{\partial t_{-}}{\partial V_{0}}+r_{+}^{*}\frac{\partial r _{+}}{\partial V_{0}}.\] (24)
###### Acknowledgements.
The authors acknowledge the helpful advice of K. K. Das during early stages of this project.
## References
* (1) B. Altshuler and L. I. Glazman, Science **283**, 1864 (1999).
* (2) K. K. Das, S. Kim, and A. Mizel, Phys. Rev. Lett. **97**, 096602 (2006).
* (3) D. J. Thouless, Phys. Rev. B **27**, 6083 (1983).
* (4) Q. Niu, Phys. Rev. B **34**, 5093 (1983).
* (5) M. Switkes, C. M. Marcus, K. Campman, and A. C. Gossard, Science **283**, 1905 (1999).
* (6) P. W. Brouwer, Phys. Rev. B **63**, 121303(R) (2001).
* (7) L. DiCarlo, C. M. Marcus, and J. S. Harris, Jr., Phys. Rev. Lett. **91**, 246804 (2003).
* (8) Y. Levinson, O. Entin-Wohlman, and P. Wolfle, Physica A **302**, 335 (2001).
* (9) I. L. Aleiner and A. V. Andreev, Phys. Rev. Lett. **81**, 1286 (1998).
* (10) P. Brouwer, Phys. Rev. B **58**, R10135 (1998).
* (11) M. Büttiker, H. Thomas, and A. Pretre, Z. Phys. B **94**, 133 (1994).
* (12) M. Moskalets and M. Büttiker, Phys. Rev. B **66**, 205320 (2002).
* (13) S. W. Kim, Phys. Rev. B **66**, 235304 (2002).
* (14) E. R. Mucciolo, C. Chamon, and C. M. Marcus, Phys. Rev. Lett. **89**, 146802 (2002).
* (15) J. E. Avron, A. Elgart, G. M. Graf, and L. Sadun, Phys. Rev. B **62**, R10618 (2000).
* (16) D. Cohen, T. Kottos, and H. Schanz, Phys. Rev. B **71**, 035202 (2005).
* (17) Y. Fu and M. Willander, J. Appl. Phys. **97**, 094311 (2005).
* (18) A. Agarwal and D. Sen, J. Phys: Cond. Mat. **19**, 046205 (2007).
* (19) X. Oriols, A. Alarcon, and E. Fernandez-Diaz, Phys. Rev. B **71**, 245322 (2005).
* (20) J. G. Muga, J. P. Palao, B. Navarro, and I. L. Egusquiza, Phys. Rep. **395**, 357 (2004).
* (21) L. Guechi and T. F. Hammann, Nuovo Cimento B **115**, 123 (2000).
* (22) O. Atabek and R. Lefebvre, J. Phys. B **38**, 2133 (2005).
* (23) J. E. Avron, A. Elgart, G. M. Graf, and L. Sadun, J. Math. Phys. **43**, 3415 (2002).
* (24) J. Z. H. Zhang and R. E. Wyatt, eds. _Dynamics of molecules and chemical reactions_, (Dekker, New York, 1996); N. Balakrishnan, C. Kalyanaraman, and N. Sathyamurthy, Phys. Rep. **280**, 80 (1997); B. M. Garraway and K. A. Suominen, Rep. Progr. Phys. **58**, 365 (1995).
* (25) Supriyo Datta, _Electronic Transport in Mesoscopic Systems_ (Cambridge University Press, Cambridge, UK, 1995).
|
1611.09349 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 35279,
"num_imgs": 13,
"llama3_tokens_count": 9339
} | [
"content_image/1611.09349/Scenarios.jpg",
"content_image/1611.09349/fig2part.jpg",
"content_image/1611.09349/DoubletsObliques.jpg",
"content_image/1611.09349/hyst_01.jpg",
"content_image/1611.09349/chaosetDKT2.jpg",
"content_image/1611.09349/x1.png",
"content_image/1611.09349/p0.jpg",
"content_image/1611.09349/p1.jpg",
"content_image/1611.09349/p2.jpg",
"content_image/1611.09349/f490.jpg",
"content_image/1611.09349/chaos2_01.jpg",
"content_image/1611.09349/diag_glob.jpg",
"content_image/1611.09349/vorticity.jpg"
] | # Chaotic sedimentation of particle pairs in a vertical channel at low Reynolds number: multiple states and routes to chaos.
Romuald Verjus\({}^{(1)}\), Sylvain Guillou\({}^{(1)}\), Alexander Ezersky\({}^{(2)}\) and Jean-Régis Angilella\({}^{(1)}\)¹
\({}^{(1)}\) Université de Caen Basse Normandie, LUSAC, Cherbourg, France
\({}^{(2)}\) Université de Caen Basse Normandie, M2C, Caen, France
###### Abstract
The sedimentation of a pair of rigid circular particles in a two-dimensional vertical channel containing a Newtonian fluid is investigated numerically, for terminal particle Reynolds numbers (\(\mbox{Re}_{T}\)) ranging from 1 to 10, and for a confinement ratio equal to 4. While it is widely accepted that sufficiently inertial pairs should sediment by performing a regular DKT oscillation (Drafting-Kissing-Tumbling), the present analysis shows in contrast that a chaotic regime can also exist for such particles, leading to a much slower sedimentation velocity. This regime consists of a nearly horizontal pair, corresponding to a maximum effective blockage ratio, which undergoes a quasiperiodic transition to chaos as the particle weight is increased. For less inertial regimes, the classical oblique doublet structure and its complex behavior (multiple stable states and hysteresis, period-doubling cascade and chaotic attractor) are recovered, in agreement with previous work [Aidun & Ding, Physics of Fluids 15(6), 2003]. As a consequence of these various behaviors, the link between the terminal Reynolds number and the non-dimensional driving force is complex: it contains several branches displaying hysteresis as well as various bifurcations. For the range of Reynolds numbers considered here, a global bifurcation diagram is given.
## I Introduction
The sedimentation of inertial particles at low Reynolds number has been intensively studied in the past, since this problem is ubiquitous in industrial or natural sciences [19, 16, 5, 6, 9, 2]. Even if particles are non-Brownian and not submitted to electrostatic forces, they strongly interact in general through hydrodynamic interactions. This leads to complex sedimentation regimes with very irregular individual trajectories. A simple though non-trivial situation related to this problem is the settling of a small number of spheres in a vertical channel: the settling velocity, being influenced by inter-particle as well as particle/wall hydrodynamic interactions, is difficult to predict, especially if the confinement is strong. In the context of fluidized bed analyses, Fortes _et al._[10] observed that pairs of spheres, settling in a rectangular channel with a thin gap, could have a complex behavior. For Reynolds numbers of a few hundred, these authors observed a Drafting-Kissing-Tumbling (DKT) phenomenon: the trailing sphere approaches the leading one, then overtakes it and becomes the leading sphere, and so on. These experiments also revealed that spheres could place themselves along a quasi horizontal line joining the two end walls of the channel, and slowly sediment in this stable position. The two-dimensional version of this problem is the settling of _disks_ in a vertical plane channel. It has been studied numerically by Feng _et al._[7], Aidun & Ding [1], and more recently by Wang _et al._[25]. Even though the detailed structure of the flow around disks differs from the case of spheres, some common features exist between the two situations. In particular, the DKT phenomenon has been observed for disks also [7][1]. For less inertial regimes, the pair converges to some steady sedimentation structure taking the form of an oblique doublet (as shown in the pioneering works by Feng _et al._[7]). Because these behaviors occur for a wide range of Reynolds numbers, it is widely believed that the oblique doublet and the DKT are the only possible configurations for this sedimentation problem. However, Aidun & Ding [1] revealed that the motion of the particle pair could be much more complex. By using a Lattice Boltzmann approach, they observed multiple stable states, hysteresis, as well as a period-doubling cascade leading to a chaotic attractor. The steady oblique doublet exists when the terminal Reynolds number, based on the particle diameter \(D\), the long-term sedimentation velocity \(V_{T}\) (averaged over time and over both particles), and the fluid kinematic viscosity \(\nu\):
\[\mbox{Re}_{T}=\frac{V_{T}\,D}{\nu},\] (1)
is below some value of order unity. When \(\mbox{Re}_{T}\) increases the oblique doublet bifurcates, leading to a variety of behaviors as sketched in Fig. 1. For \(\mbox{Re}_{T}\) between 2.6 and 4.2, two sedimentation structures exist, corresponding to two different terminal velocities. Both structures are observed to oscillate periodically around an oblique line while the particles sediment. However, the slowest configuration, corresponding to a more horizontal doublet, has been observed to become unstable [1] when \(\mbox{Re}_{T}\) is above 4.2: for such Reynolds numbers a single, time-periodic, sedimentation structure is observed. By further increasing the Reynolds number a dramatic change is observed, as the doublet undergoes a period-doubling cascade leading to a low-dimensional chaotic attractor when \(\mbox{Re}_{T}\) is close to 5. This chaotic dynamics vanishes however for larger Reynolds numbers and a periodic Drafting-Kissing-Tumbling phenomenon takes place. This DKT phenomenon has been widely investigated in the past [10][8][15][11][21], either experimentally or numerically, in contrast with the period-doubling cascade and the chaotic dynamics observed by Aidun & Ding [1] which received less attention.
<figure><img src="content_image/1611.09349/Scenarios.jpg"><figcaption>Figure 1: Sketch of the typical behaviors of the sedimenting pair described inthe literature, when the confinement ratio is equal to 4. The star (∗)indicates the range of Reynolds numbers explored in the present study.</figcaption></figure>
The goal of this paper is to analyze the dynamics of the doublet in more inertial regimes, which had not been investigated so far, and to determine whether the sedimentation process converges to some regular structure. As in Ref. [1], we focus on the two-dimensional problem, so that particles can be thought of as disks. Throughout the paper, the terms ”particles” and ”disks” will be used interchangeably to refer to these objects.
We are particularly interested in the link between the sedimentation velocity and the volume force driving the motion of the particles. To achieve this, we have developed a numerical algorithm based on the fictitious domain method (section II), and used it to simulate the sedimentation of two disks in a vertical channel for \(\mbox{Re}_{T}\) in the range \([2,10]\). The control parameter of the various simulations shown below is the non-dimensional apparent weight of the particles (which is the same for both particles since they are identical):
\[F=\frac{\pi}{4}G(\rho_{r}-1)\] (2)
where \(G\) is the Galileo number (\(G={D^{3}g}/{\nu^{2}}\)) and \(\rho_{r}\) is the particle/fluid density ratio. To ease the comparison, the same density ratio as Aidun & Ding [1] will be considered throughout the paper (\(\rho_{r}=1.002\)). Lengths and times have been made non-dimensional by using \(D\) and \(\nu\). The range of \(F\) investigated in Ref. [1] corresponds to \(F\leq 250\) (or, equivalently, \(G\leq 159000\)). This ”weakly inertial” regime will be re-visited in section III below. The new phenomena investigated in the present paper appear when \(370\leq F\leq 500\) (i.e. \(236000\leq G\leq 318000\)) and will be presented in section IV. These non-dimensional numbers correspond, for example, to particles a few \(mm\) in diameter, slightly heavier than the fluid, sedimenting in water. It will be shown that another chaotic attractor exists in this range, and that this attractor affects the sedimentation process. A conclusion will be drawn in section VI.
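As a quick consistency check of these orders of magnitude, the snippet below evaluates \(G\) and Eq. (2) for an illustrative set of physical values (the specific choices \(D=3\,mm\), \(\nu=10^{-6}\,m^{2}/s\) and \(g=9.81\,m/s^{2}\) are assumptions, not parameters taken from the paper):

```python
import math

D = 3.0e-3      # particle diameter (m), illustrative value
nu = 1.0e-6     # kinematic viscosity of water (m^2/s)
g = 9.81        # gravitational acceleration (m/s^2)
rho_r = 1.002   # particle/fluid density ratio used throughout the paper

G = D**3 * g / nu**2                   # Galileo number
F = math.pi / 4.0 * G * (rho_r - 1.0)  # non-dimensional apparent weight, Eq. (2)

print(f"G = {G:.3g}")   # ~2.6e5, inside the quoted range 236000 <= G <= 318000
print(f"F = {F:.0f}")   # ~416, inside the quoted range 370 <= F <= 500
```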
## II Problem description and numerical approach
We consider a pair of circular disks with diameter \(D\), settling in a vertical plane channel filled with a Newtonian fluid of kinematic viscosity \(\nu\). The channel is infinite in the vertical direction, and its width is \(W=4D\). A Direct Forcing Fictitious Domain method [26] has been developed to solve the equations of motion of both the fluid and the particles. It consists of solving the Navier-Stokes equations over a fictitious domain \(\Omega\) including the fluid and the particles, the latter being denoted \(P_{1}(t)\) and \(P_{2}(t)\) in the following. A volume force \(\lambda\) is then applied within the disks to impose a rigid-body motion. The resulting motion equations are:
\[{\rho_{f}}\left(\frac{\partial\textbf{u}}{\partial t}+(\textbf{u} .\boldsymbol{\nabla})\textbf{u}\right) = -\boldsymbol{\nabla}p+\mu\boldsymbol{\Delta}{\textbf{u}}+ \boldsymbol{\lambda}\quad\textnormal{ in }\quad{\Omega},\] (3)
\[{{\boldsymbol{\nabla}}.{\textbf{u}}} = 0\quad\textnormal{ in }\quad{\Omega},\] (4)
\[\textbf{u} = \textbf{U}_{i}+\boldsymbol{\omega}_{i}\times\textbf{r}\quad \textnormal{ in }\quad{P_{i}(t)},\] (5)
where **u** is the velocity field, \(p\) is the pressure, \(\rho_{f}\) is the fluid density, \(\mu\) is the dynamic viscosity, and (\(\textbf{U}_{i}\),\(\boldsymbol{\omega}_{i}\)) denote the translational and rotational velocities of particle \(P_{i}\) respectively. The linear and angular momentum equation read:
\[(1-\frac{1}{{\rho_{r}}})m_{i}({\frac{\mathrm{d}\textbf{U}_{i}}{ \mathrm{d}t}}-\textbf{g}) = -\int_{P_{i}(t)}\boldsymbol{\lambda}\,\mathrm{d}S,\] (6)
\[(1-\frac{1}{{\rho_{r}}})\frac{\mathrm{d}({\textbf{J}_{i} \boldsymbol{\omega}_{i}})}{\mathrm{d}t} = -\int_{P_{i}(t)}\textbf{r}\times\boldsymbol{\lambda}\,\mathrm{d}S,\] (7)
where \(m_{i}\) and \(\textbf{J}_{i}\) denote the mass and moment of inertia tensor respectively.
<figure><img src="content_image/1611.09349/fig2part.jpg"><figcaption>Figure 2: Sketch of the particle pair in a confined domain.</figcaption></figure>
Eqs. (3)-(4)-(5) are discretized by means of a finite-difference algorithm on a Cartesian staggered mesh [14], using a projection method (Chorin [3]; Temam [20]). Time-stepping is done with a 2nd-order Adams-Bashforth scheme for advection, together with a 2nd-order Crank-Nicolson scheme for the diffusion term. Eqs. (6) and (7) are discretized by means of a Collocation Points Method [26]. The resulting discretized equations, in non-dimensional form, read:
\[\frac{{\textbf{u}}^{*}-{\textbf{u}}^{n}}{\Delta t}=-\boldsymbol{\nabla}p^{n}+\frac{1}{2}(3(\textbf{u}.\boldsymbol{\nabla}\textbf{u})^{n}-(\textbf{u}.\boldsymbol{\nabla}\textbf{u})^{n-1})+\frac{1}{2}({\boldsymbol{\nabla}}^{2}{\textbf{u}}^{n}+{\boldsymbol{\nabla}}^{2}{\textbf{u}}^{*})+{\boldsymbol{\lambda}}^{n},\] (8)
with:
\[\textbf{u}^{n+1} = \textbf{U}_{i}^{n+1}+\boldsymbol{\omega}_{i}^{n+1}\times\textbf{r},\] (9)
\[\boldsymbol{\nabla}.{\textbf{u}}^{*} = 0,\] (10)
where \(\mathbf{u}^{*}\) is a provisional velocity which, in general, does not satisfy the rigid-body motion within particles. The translational and angular velocities of each particle satisfy:
\[(1-\frac{1}{{\rho_{r}}})v({\frac{\mathrm{d}\textbf{U}_{i}^{n+1}}{ \mathrm{d}t}}-G\,{\hat{\textbf{g}}}) = -\int_{P_{i}(t)}\negmedspace\negmedspace\negmedspace\negmedspace \boldsymbol{\lambda}^{n+1}\mathrm{d}S,\] (11)
\[(1-\frac{1}{{\rho_{r}}})\frac{\mathrm{d}({\textbf{J}_{i} \boldsymbol{\omega}_{i}}^{n+1})}{\mathrm{d}t} = -\int_{P_{i}(t)}\negmedspace\negmedspace\negmedspace\negmedspace \negmedspace\negmedspace\textbf{r}\times\boldsymbol{\lambda}^{n+1}\mathrm{d}S,\] (12)
where \({\hat{\textbf{g}}}\) is the unit vector in the direction of gravity and \(v\) is the particle volume. The volume force is then chosen to impose a rigid-body motion within \(P_{1}(t)\) and \(P_{2}(t)\) (see also Yu & Shao [26]):
\[\frac{{\textbf{u}}^{n+1}-{\textbf{u}}^{*}}{\Delta t}={\boldsymbol{\lambda}}^{n +1}-{\boldsymbol{\lambda}}^{n}.\] (13)
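The core of the direct forcing step, Eqs. (9)-(13), is the projection of the provisional velocity \(\textbf{u}^{*}\) onto a rigid-body motion inside each particle. The sketch below implements only this projection for a single disk on a uniform grid: it computes \(\textbf{U}\) and \(\omega\) as plain area averages of \(\textbf{u}^{*}\), which amounts to neglecting the \((1-1/\rho_{r})\) inertia corrections of Eqs. (11)-(12) (a reasonable simplification here since \(\rho_{r}=1.002\), but an assumption nonetheless), and then overwrites the velocity inside the disk with \(\textbf{U}+\boldsymbol{\omega}\times\textbf{r}\); the \(\boldsymbol{\lambda}\) bookkeeping of Eq. (13) is left out.

```python
import numpy as np

def rigid_body_projection(u, v, X, Y, xc, yc, D):
    """Project the provisional velocity field (u, v) onto a rigid-body motion
    inside a disk of diameter D centred at (xc, yc).  In the nearly neutrally
    buoyant limit, U and omega are simple area averages over the particle."""
    rx, ry = X - xc, Y - yc
    inside = rx**2 + ry**2 <= (0.5 * D) ** 2

    # Translational velocity: area average of u* over the particle.
    U, Vc = u[inside].mean(), v[inside].mean()

    # Angular velocity: <r x u*> / <|r|^2> (scalar in 2D).
    omega = (rx[inside] * v[inside] - ry[inside] * u[inside]).sum() / \
            (rx[inside] ** 2 + ry[inside] ** 2).sum()

    # Impose u^{n+1} = U + omega x r inside the particle (rigid-body motion, Eq. (9)).
    u_new, v_new = u.copy(), v.copy()
    u_new[inside] = U - omega * ry[inside]
    v_new[inside] = Vc + omega * rx[inside]
    return u_new, v_new, (U, Vc), omega

# Tiny demonstration on a 64x64 grid with a linear shear flow u = y, v = 0.
n = 64
xg = np.linspace(0.0, 4.0, n)
X, Y = np.meshgrid(xg, xg, indexing="ij")
u, v = Y.copy(), np.zeros_like(Y)
u_new, v_new, Uc, omega = rigid_body_projection(u, v, X, Y, 2.0, 2.0, 1.0)
print("U =", Uc, "  omega =", omega)   # expected: U ~ (2, 0), omega ~ -0.5 (half the vorticity)
```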
The method has been validated on a number of benchmarks involving either isolated or interacting two-dimensional particles at Reynolds numbers varying between 0.1 and a few hundred. It has been systematically compared to existing results from the literature: fixed or oscillating cylinder [12, 13, 18, 21]; a single cylindrical particle sedimenting in a vertical channel with either axial or asymmetric initial position [24, 21]; pairs of particles interacting in a vertical channel [8, 24, 21]. These benchmarks are presented in detail in Ref. [22]. In the following section, most of the results by Aidun & Ding [1], which had been obtained by using a completely different method (Lattice Boltzmann), will be revisited and confirmed.
## III \(F\leq 250\): oblique doublet and subharmonic cascade
Fig. 3 shows the evolution of the horizontal coordinates \(Y_{1}\) and \(Y_{2}\) of the centers of both particles, when \(F=136.77\). Two runs are shown, corresponding to two initial positions of the particles. In one case (_run A_, solid lines), the pair is initially released in a horizontal position with \(Y_{1}=-0.4\) and \(Y_{2}=1.4\), and converges to a steady oblique doublet. In the second case (_run B_, dashed lines) the pair is also released in a horizontal position, but nearer to the axis with \(Y_{1}=-0.1\) and \(Y_{2}=1.0\), and reaches a periodic regime. The terminal Reynolds number \(\mbox{Re}_{T}\) of the pairs is very different in both cases. It is plotted in Fig. 4, for \(F\) varying in the range [125,150]. Two branches co-exist: the lower one (”slow” branch, (a)) is the steady oblique doublet, whereas the upper one (”fast” branch, (b)) is the time-periodic doublet. Below \(F=130\), only the steady branch exists. A Hopf bifurcation takes place at \(F\simeq 131\). These results confirm previous analyses [1][7], except that our lower branch clearly corresponds to a steady doublet, in agreement with Feng _et al._[7], but in contrast with Aidun & Ding [1] who observe a periodic doublet there. This might be attributed to a numerical artefact, since the amplitude of the oscillations of the lower branch of Aidun & Ding is very small (\(0.025D\)), and of the order of their mesh size (\(0.03125D\)). By using a finer mesh our computations always led to a steady oblique doublet on the lower branch, like the one shown by the solid line of Fig. 3.
<figure><img src="content_image/1611.09349/DoubletsObliques.jpg"><figcaption>Figure 3: Evolution of the horizontal coordinate of the centers of theparticles when F=136.77. Solid lines correspond to the steady oblique doublet(run A, lower branch of Fig. 4), and dashed lines correspond to periodicoblique doublet (run B, upper branch of Fig. 4).</figcaption></figure>
<figure><img src="content_image/1611.09349/hyst_01.jpg"><figcaption>Figure 4: Terminal Reynolds number vs. the non-dimensional driving force F inthe range [125,150]. The lower branch (a) corresponds to a slow sedimentationin the form of a steady oblique doublet, whereas the upper one (”fast” branch,(b)) is the time-periodic doublet. Runs A and B of Fig. 3 are marked with ablack square.</figcaption></figure>
The lower branch of Fig. 4 can be approximated by the affine formula \(\mbox{Re}_{T}=0.0251F-0.738\), and the upper branch by \(\mbox{Re}_{T}=0.030F-0.101\). Both branches are stable and co-exist when \(130.74\leq F\leq 145\). However, our simulations show that only the upper one persists, with a well-defined period \(T\), when \(F\) is above this range. In addition, increasing \(F\) above 145 leads to a series of period-doubling bifurcations. The first one appears when \(F\simeq 146\), and the period of the doublet jumps from \(T\) to \(2T\). The \(2T\to 4T\) bifurcation occurs when \(F\simeq 156\), and \(4T\to 8T\) occurs when \(F\simeq 158\). Fig. 5 (left) shows the dynamics in the plane (\(Y_{1}\), \(Y_{2}\)) after a large number of period-doubling bifurcations: a chaotic attractor, already observed by Aidun & Ding [1], appears. Increasing the non-dimensional weight further leads to a more regular, periodic dynamics (Fig. 5 (right)). It corresponds to the DKT regime discovered by Feng _et al._ [7]. In the next section we focus on this inertial regime; the branch fits and bifurcation thresholds are collected in the short snippet below.
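For quick reference, the numbers quoted above can be collected as follows (a plain transcription of the fits and thresholds reported in the text, not an independent analysis):

```python
def Re_T_lower(F):
    """Steady oblique doublet (lower branch of Fig. 4)."""
    return 0.0251 * F - 0.738

def Re_T_upper(F):
    """Time-periodic oblique doublet (upper branch of Fig. 4)."""
    return 0.030 * F - 0.101

# Coexistence window of the two branches, including run A/B of Fig. 3 (F = 136.77).
for F in (130.74, 136.77, 145.0):
    print(f"F = {F:7.2f}:  Re_T(lower) = {Re_T_lower(F):.2f},"
          f"  Re_T(upper) = {Re_T_upper(F):.2f}")

# Period-doubling thresholds reported in the text (T -> 2T -> 4T -> 8T).
print("period-doubling bifurcations at F ~", [146, 156, 158])
```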
<figure><img src="content_image/1611.09349/chaosetDKT2.jpg"><figcaption>Figure 5: Left: chaotic attractor resulting from the subharmonic cascade,F=197.68. Right: return to periodicity (DKT dynamics), F=241.61.</figcaption></figure>
Note that the oblique doublet observed here is often explained by invoking the role of the intrinsic rotation of the particles, which would create a Magnus lift force. This force is opposed to the repulsive force induced by the vertical walls, and the competition between these two effects would maintain the stability of the doublet. If this explanation were relevant, one could argue that, in the absence of rotation, the sedimentation structure should be quite different. To check this point we have performed a series of runs with exactly the same physical conditions as the ones used in section III, but with the rotational degree-of-freedom of the objects removed. No significant difference emerged, and the dynamics of the oblique doublet was only slightly perturbed. Indeed, the lift force does not necessarily require the particles to rotate around their centers, since this force results from the asymmetry of the disturbance flow due to the inclusion, which can exist even if the particle does not rotate.
## IV \(F\geq 250\): horizontal doublet and quasi-periodic route to chaos
We now focus on regimes which had not been explored in details so far. The usual DKT regime is observed to persist when \(F>250\), up to \(F\simeq 400\). When \(F\) approaches this value, we observe that the DKT phenomenon co-exists with a steady horizontal structure, the vorticity field of which is shown in Fig. 6. To our knowledge, this structure had not been observed in previous analyses. According to the initial orientation of the pair, particles either perform DKT, or converge to a horizontal structure as the one shown in Fig. 6. In this case the flow is highly symmetric with respect to the middle-line, and the trajectory of the particle centers is perfectly vertical. Both objects rotate as if rolling on the nearest wall. When \(F>400\) the DKT no longer exists, but the quasi-horizontal structure persists. It is observed to exist irrespective of the disks’ initial positions. A Hopf bifurcation appears at \(F\simeq 400\), and the horizontal structure becomes time-periodic: Fig. 7 shows this oscillation in the \((Y_{1},Y_{2})\) plane, as well as the power spectrum of \(Y_{1}(t)\), when \(F=427\). A frequency \(f_{1}\simeq 0.397\,Hz\) and its harmonic \(2f_{1}\) is clearly visible. In this regime, the doublet remains perfectly horizontal, the particles sediment with a zigzag motion while keeping their distance constant. Particles never cross the axis of the channel, as \(Y_{1}\) and \(Y_{2}\) remain positive and negative for all times, respectively. When \(F\) increases, the regime remains periodic until \(F\simeq 486\). Fig. 8 shows the case \(F=480\). A fundamental frequency \(f_{1}\simeq 0.421\,Hz\) and its harmonics are visible. Particles overtake each other periodically, i.e. the difference between their vertical coordinates (\(X_{1}-X_{2}\)) changes sign periodically.
<figure><img src="content_image/1611.09349/x1.png"><figcaption>Figure 6: Steady symmetric horizontal doublet at F=400.12. Solid and dashedlines correspond to positive and negative vorticity contours respectively. Inthis configuration, the effective blockage ratio (defined as the apparenttotal diameter of the pair divided by the gap of the channel) is maximum.</figcaption></figure>
<figure><img src="content_image/1611.09349/p0.jpg"><figcaption>Figure 7: Periodic oscillations of the horizontal doublet when F=427. (a):dynamics in the (Y1,Y2) plane. (b): Power spectrum S(f) of Y1(t) showing peaksat f1≃0.397Hz and 2f1.</figcaption></figure>
<figure><img src="content_image/1611.09349/p1.jpg"><figcaption>Figure 8: Periodic regime at F=480. (a): dynamics in the (Y1,Y2) plane. Thepower spectrum (b) shows harmonics of the fundamental frequency f1≃0.421Hz.</figcaption></figure>
<figure><img src="content_image/1611.09349/p2.jpg"><figcaption>Figure 9: Quasi-periodic oscillations of the horizontal doublet when F=489.5.(a): dynamics in the (Y1,Y2) plane. (b): power spectrum of Y1(t).</figcaption></figure>
Under increasing \(F\), the phase portrait becomes more complex (Fig. 9(a), \(F=489.5\)). The power spectrum of \(Y_{1}(t)\) shows a large number of peaks (Fig. 9(b)), corresponding to linear combinations of two fundamental frequencies \(f_{1}\simeq 0.434\,Hz\) and \(f_{2}\simeq 0.064\,Hz\) (Fig. 10). Particles still overtake each other unceasingly, but in a non-straightforward manner. The trailing particle rotates for some time while remaining behind the leading one, then overtakes it. Note that particles do not cross the axis of the channel (i.e. \(Y_{1}(t)>0\) and \(Y_{2}(t)<0\) for all \(t\)), like in the periodic cases above.
When \(F\) is above 507, a chaotic dynamics takes place. Fig. 11 shows our results when \(F=527.78\). The phase portrait is characterized by very disordered trajectories in a bounded volume of the phase space, where particles cross the channel axis (i.e. the sign of \(Y_{i}(t)\) is no longer constant) in an intermittent manner (Fig. 11 (a)). The spectrum shows a wide range of frequencies, with a broadband noise structure (Fig. 11 (b)). This suggests that the Ruelle-Takens scenario is taking place here, and that chaos occurs after the appearance of a third frequency.
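The spectra of Figs. 7-11 are obtained from the time series of the horizontal coordinates. A minimal version of this post-processing is sketched below on a synthetic two-frequency signal standing in for \(Y_{1}(t)\) (built from \(f_{1}=0.434\,Hz\) and \(f_{2}=0.064\,Hz\), the fundamental frequencies found at \(F=489.5\)); the sampling interval, amplitudes and record length are arbitrary illustrative choices.

```python
import numpy as np

# Synthetic stand-in for Y1(t) in the quasi-periodic regime at F = 489.5.
f1, f2 = 0.434, 0.064            # fundamental frequencies (Hz)
dt = 0.05                        # sampling interval (s), illustrative
t = np.arange(0.0, 400.0, dt)
y = (0.5 * np.sin(2 * np.pi * f1 * t)
     + 0.2 * np.sin(2 * np.pi * f2 * t)
     + 0.1 * np.sin(2 * np.pi * (f1 + f2) * t))   # one combination frequency

# One-sided power spectrum with a Hann window to limit spectral leakage.
window = np.hanning(len(y))
Y = np.fft.rfft((y - y.mean()) * window)
freqs = np.fft.rfftfreq(len(y), dt)
S = np.abs(Y) ** 2

# The strongest peak sits at f1; peaks at f2 and f1 + f2 are also present.
print("strongest peak at", round(freqs[np.argmax(S)], 3), "Hz  (f1 =", f1, ")")
```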
<figure><img src="content_image/1611.09349/f490.jpg"><figcaption>Figure 10: Magnification of the spectrum of Fig. 9. Fundamental frequenciesare f1≃0.434Hz and f2≃0.064Hz.</figcaption></figure>
<figure><img src="content_image/1611.09349/chaos2_01.jpg"><figcaption>Figure 11: Chaotic attractor (a) and corresponding spectrum (b) when F=527.78.</figcaption></figure>
## V Global bifurcation diagram
These results show that the settling of the particle pair is a very complex phenomenon, even in this simple two-dimensional configuration. In particular, the terminal Reynolds number of the pair is difficult to predict. To give a general view of the sedimentation process, we summarize the various regimes described above by plotting the terminal Reynolds number \(\mbox{Re}_{T}\) versus \(F\) (Fig. 12), up to \(F=500\). We observe that \(\mbox{Re}_{T}\) is a piecewise increasing function of \(F\), containing quasi-discontinuities at bifurcation points where \(\mbox{Re}_{T}\) decays or increases abruptly. Therefore, increasing the non-dimensional weight \(F\) will not lead systematically to higher settling Reynolds numbers. The most spectacular drop of settling velocity appears when the periodic DKT becomes unstable, and the quasi horizontal stable structure appears. This happens when \(F\) increases from 370 to 400. This abrupt decay is related to the abrupt change in the _effective_ blockage ratio (defined here as the apparent total diameter of the pair divided by the gap of the channel). It is minimum, and equal to \(D/W\), when the doublet is in a vertical position (i.e. the leading particle hides entirely the trailing one). In contrast, it is maximum and equal to \(2D/W\) when the doublet is in the horizontal position.
The decay of settling velocity is very remarkable here, as it is mostly divided by two when \(F\) varies from 370 to 400. However, this effect is rather common in that it can be observed in many elementary dynamical systems. For example, consider a two-dimensional system \((u(t),y(t))\) such that \(\dot{u}=F-(1+y^{2})u.\) It corresponds to a forced system (with a constant driving force \(F>0\)) submitted to a friction force with friction coefficient \(1+y^{2}\) depending on the second variable \(y(t)\). (The variable \(u\) can be thought of as the settling velocity of the system of particles, whereas \(y\) plays the role of the effective blockage ratio which increases the efficiency of viscous friction.) Suppose that, for \(F<F_{0}\), say, the system has a stable equilibrium position at \(y=y_{a}=0\) and \(u=u_{a}=F/(1+y_{a}^{2})=F\). Therefore, one can expect the system to remain in the vicinity of this state for long times, provided the initial conditions have been chosen close enough to \((u_{a},y_{a})\). Now, suppose that the position \(y_{a}=0\) is no longer stable for \(F>F_{0}\), and that a new asymptotically stable position appears there, e.g. \(y_{b}=1\) (which would correspond to the horizontal doublet in our simulations). Then the system will quit the vicinity of the terminal velocity \(u_{a}=F\), and is likely to converge to the vicinity of a much smaller terminal velocity, that is \(u_{b}=F/(1+y_{b}^{2})=F/2\). Let \(\langle u\rangle\) denote the velocity \(u\) averaged over long times. Assuming that \(\langle u\rangle\) is close to the stable equilibrium solution \(u_{a}\) or \(u_{b}\), one can expect that the graph of \(\langle u\rangle\) versus \(F\) will display a steep decrease at \(F=F_{0}\), like the one observed in Fig. 12. This behavior corresponds to a kind of ”obstruction effect”, in that increasing the cause of the motion (\(F\)) leads to a larger effective blockage ratio, and to a slower motion. In particular, even if both \(u_{a}\) and \(u_{b}\) increase separately with \(F\), the _stable_ equilibrium solution \(\{u_{a}\) or \(u_{b}\}\), and therefore \(\langle u\rangle,\) is only piecewise increasing, and, in the present case, decays with \(F\) in the vicinity of \(F_{0}\).
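This argument can be checked directly by integrating the toy model. The sketch below (with an arbitrary threshold \(F_{0}=1\), and with the stable value of \(y\) selected by hand exactly as in the discussion above) shows the steep drop of the long-time average \(\langle u\rangle\) from \(u_{a}=F\) to \(u_{b}=F/2\) across \(F_{0}\):

```python
def long_time_average_u(F, F0=1.0, t_max=200.0, dt=1e-3):
    """Integrate du/dt = F - (1 + y^2) u with y held at its stable value
    (y = 0 for F < F0, y = 1 for F > F0); return the long-time average of u."""
    y = 0.0 if F < F0 else 1.0
    u, t, acc, n = 0.0, 0.0, 0.0, 0
    while t < t_max:
        u += dt * (F - (1.0 + y**2) * u)   # forward Euler step
        t += dt
        if t > 0.5 * t_max:                # average over the second half only
            acc += u
            n += 1
    return acc / n

for F in (0.8, 0.95, 1.05, 1.2):
    print(f"F = {F:.2f}:  <u> = {long_time_average_u(F):.3f}"
          f"   (u_a = {F:.3f},  u_b = {F / 2:.3f})")
```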
Finally, note that all the initial positions used in the present simulations for \(F>400\) lead to the quasi-horizontal doublet structure. We have checked however that particles released in a perfectly vertical manner (\(Y_{1}(0)=Y_{2}(0)=0\)), settled by conserving their initial vertical orientation, and acquired a terminal velocity which is much larger than the one of the quasi-horizontal doublet. Similarly, particles injected symmetrically along a horizontal line (\(X_{1}(0)=X_{2}(0)\) and \(Y_{1}(0)=-Y_{2}(0)\)) will keep their horizontal orientation, and sediment slowly. However, in both cases, any small disturbance forces the particles to join the quasi-horizontal slow structure. This suggests that this structure is asymptotically stable (attracting) and has a large basin of attraction.
<figure><img src="content_image/1611.09349/diag_glob.jpg"><figcaption>Figure 12: Global diagram: settling Reynolds number ReT versus the non-dimensional driving force F.</figcaption></figure>
The various regimes described above correspond to a complex interaction between the particles and the fluid, as shown in the instantaneous vorticity fields of Fig. 13, corresponding to the very same parameters as those of Figs. 7, 8, 9 and 11. In the periodic regime observed at \(F=427\), the wake of the objects is highly localized. When the driving force increases, the wake loses its symmetry and becomes more unsteady. The vorticity produced by the particles creates a non-trivial flow, showing evidence of vortex shedding, which in turn affects the particles. The disordered flow structure is even more pronounced in the chaotic case \(F=527.78\). This suggests that a strong coupling exists between flow and particles, and that any attempt to reduce the dynamics should take into account not only the degrees-of-freedom of the particles, but also the effective degrees-of-freedom of the flow.
<figure><img src="content_image/1611.09349/vorticity.jpg"><figcaption>Figure 13: Instantaneous vorticity fields for increasing driving forces :periodic regimes (F=427 and F=480), quasi-periodic (F=489) and chaotic regime(F=527).</figcaption></figure>
## VI Conclusion
The computations presented in this paper reveal some new features of the settling of inertial disks in a confined domain. The multiple stable states, the hysteresis and the chaotic attractor discovered by Aidun & Ding [1] have been recovered when the non-dimensional driving force \(F\) is below 200. Our results are in quantitative agreement with those of these authors, except that the lower branch of the bifurcation diagram of Fig. 4 corresponds to a steady position in our case. We attribute this difference to a numerical artefact.
For more inertial regimes, the collisional DKT phenomenon takes place, in agreement with previous analyses. The particle Reynolds number increases almost monotonically with \(F\) over the range \(200\lesssim F\lesssim 400\). An abrupt change occurs for larger \(F\): the DKT vanishes and particles tend to join a non-collisional attracting quasi-horizontal structure, corresponding to a slow sedimentation. Settling, though slow, can then be steady, periodic, quasi-periodic or chaotic, according to the values of \(F\). A transition towards chaos occurs on this branch as \(F\) increases, but the route leading to chaos is different from the subharmonic cascade observed by Aidun & Ding [1] at smaller \(F\). Indeed, a quasiperiodic route is observed here when \(400\lesssim F\lesssim 500\), leading to a chaotic attractor where particles cross the axis of the channel in an intermittent manner.
The link between the (non-dimensional) particle weight \(F\) and the (non-dimensional) settling velocity \(\mbox{Re}_{T}\) is therefore extremely complex in this apparently simple situation. The most spectacular effect is the abrupt decrease in settling velocity at \(F\simeq 400\): here, the doublet takes the form of a nearly horizontal structure, the channel is therefore partially ”blocked” by particles, and sedimentation is slow. In a sense, such a behaviour shares some common features with the Braess effect observed in electrical circuits or in pipe flows[17][4]: near some critical value, increasing the cause of the motion will lead to a larger blockage and to a slower motion.
Finally, to understand further the complex dynamics involved in this sedimentation process we intend to perform simulations with larger driving forces. Also, three dimensional situations, that is pairs of spheres settling in a vertical channel, will be considered in the near future.
_This paper is dedicated to the memory of Professor Alexander Ezersky._
## References
* _Aidun and Ding_ [2003] Aidun, C. K., and E.-J. Ding (2003), Dynamics of particle sedimentation in a vertical channel: Period-doubling bifurcation and chaotic state, _Physics of Fluids_, _15_, 1612.
* _Champmartin_ [2006] Champmartin, S. (2006), Matrice de résistance et description du mouvement d’une particule en interaction hydrodynamique et conséquences du confinement asymétrique sur les phénomènes de transfert, Ph.D. thesis, Université d’Angers.
* _Chorin_ [1968] Chorin, A. J. (1968), Numerical solution of the Navier-Stokes equations, _Mathematics of computation_, _22_(104), 745–762.
* [4] J. E. Cohen and P. Horowitz. Paradoxical behaviour of mechanical and electrical networks. 1991.
* _Crowe et al._ [1998] Crowe, C. T., M. Sommerfeld, and Y. Tsuji (1998), _Multiphase Flows With Droplets and Particles_, CRC Press.
* _Derksen_ [2011] Derksen, J. (2011), Simulations of granular bed erosion due to laminar shear flow near the critical shields number, _Physics of Fluids_, _23_(11), 113,303.
* _Feng and Joseph_ [1995] Feng, J., and D. Joseph (1995), The unsteady motion of solid bodies in creeping flows, _Journal of Fluid Mechanics_, _303_, 83–102.
* _Feng et al._ [1994] Feng, J., H. H. Hu, and D. D. Joseph (1994), Direct simulation of initial value problems for the motion of solid bodies in a newtonian fluid. part 1. sedimentation, _Journal of Fluid Mechanics_, _261_, 95–134.
* _Feng et al._ [1996] Feng, J., P. Huang, and D. Joseph (1996), Dynamic simulation of sedimentation of solid particles in an oldroyd-b fluid, _Journal of non-newtonian fluid mechanics_, _63_(1), 63–88.
* _Fortes et al._ [1987] Fortes, A. F., D. D. Joseph, and T. S. Lundgren (1987), Nonlinear mechanics of fluidization of beds of spherical particles, _Journal of Fluid Mechanics_, _177_, 467–83.
* _Glowinski et al._ [1999] Glowinski, R., T.-W. Pan, T. Hesla, and D. Joseph (1999), A distributed Lagrange multiplier/fictitious domain method for particulate flows, _International Journal of Multiphase Flow_, _25_(5), 755 – 794.
* _Happel and Brenner_ [1965] Happel, J., and H. Brenner (1965), _Low Reynolds Number Hydrodynamics_, Prentice-Hall.
* _Harper and Chang_ [1967] Harper, E. Y., and I.-D. Chang (1967), Drag on a cylinder between parallel walls in Stokes’ flow, _Physics of Fluids (1958-1988)_, _10_(1), 83–88.
* _Höfler and Schwarzer_ [2000] Höfler, K., and S. Schwarzer (2000), Navier-Stokes simulation with constraint forces: Finite-difference method for particle-laden flows and complex geometries, _Physical Review E_, _61_(6), 7146.
* _Hu_ [1996] Hu, H. (1996), Direct simulation of flows of solid-liquid mixtures, _International Journal of Multiphase Flow_, _22_(2), 335 – 352.
* _Jayaweera and Mason_ [1965] Jayaweera, K. O. L. F., and B. J. Mason (1965), The behaviour of freely falling cylinders and cones in a viscous fluid, _Journal of Fluid Mechanics_, _22_, 709–720.
* [17] M.G. Pala, S. Baltazar, P. Liu, H. Sellier, B. Hackens, F. Martins, V. Bayot, X. Wallart, L. Desplanque, and S. Huant. Transport inefficiency in branched-out mesoscopic networks: An analog of the Braess paradox. _Physical review letters_, 108(7):076802, 2012.
* _Park et al._ [1998] Park, J. and Kwon, K. and Choi, H. (1998), Numerical solutions of flow past a circular cylinder at Reynolds numbers up to 160, _KSME International Journal_, _12_(6), 1200–1205.
* _Richardson and Zaki_ [1954] Richardson, J., and W. Zaki (1954), Sedimentation and fluidisation: Part I, _Trans. Instn. Chem. Engrs._, _32_(0), 35 – 53.
* _Temam_ [1969] Temam, R. (1969), Sur l’approximation de la solution des équations de Navier-Stokes par la méthode des pas fractionnaires (I), _Archive for Rational Mechanics and Analysis_, _32_(2), 135–153.
* _Uhlmann_ [2005] Uhlmann, M. (2005), An immersed boundary method with direct forcing for the simulation of particulate flows, _Journal of Computational Physics_, _209_(2), 448 – 476.
* _Verjus_ [2015] Verjus, R. (2015), Etude de la sédimentation de particules en domaine confiné, _Ph. D. thesis, University of Lorraine_.
* _Wachmann et al._ [1998] B. Wachmann, W. Kalthoff, S. Schwarzer, and H. J. Herrmann (1998), Collective drag and sedimentation: comparison of simulation and experiment in two and three dimensions, _Granular Matter_, _1_(2), 75–82.
* _Wachs_ [2009] Wachs, A. (2009), A DEM-DLM/FD method for direct numerical simulation of particulate flows : sedimentation of polygonal isometric particles in a newtonian fluid with collisions, _Computers and Fluids_, _38_(8), 1608–1628.
* _Wang et al._ [2014] Wang, L., Z. L. Guo, and J. C. Mi (2014), Drafting, kissing and tumbling process of two particles with different sizes, _Computers and Fluids_, _96_, 20–34.
* _Yu et Shao_ [2007] Yu, Z. and X. Shao (2007), A direct-forcing fictitious domain method for particulate flows, _Journal of Computational Physics_, _227_(1), 292 – 314.
|
1908.00775 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 118573,
"num_imgs": 13,
"llama3_tokens_count": 40242
} | [
"content_image/1908.00775/x1.png",
"content_image/1908.00775/x2.png",
"content_image/1908.00775/x3.png",
"content_image/1908.00775/x4.png",
"content_image/1908.00775/x5.png",
"content_image/1908.00775/x6.png",
"content_image/1908.00775/x7.png",
"content_image/1908.00775/x8.png",
"content_image/1908.00775/x9.png",
"content_image/1908.00775/x10.png",
"content_image/1908.00775/x11.png",
"content_image/1908.00775/x12.png",
"content_image/1908.00775/x13.png"
# Quantum chaos in the Brownian SYK model with large finite \(N\): OTOCs and tripartite information
Christoph Sünderhauf
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology, Schellingstraße 4, 80799 München, Germany
Lorenzo Piroli
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology, Schellingstraße 4, 80799 München, Germany
Xiao-Liang Qi
Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305, USA
Department of Physics, Stanford University, Stanford, CA 94305, USA
Google, 100 Mayfield Ave, Mountain View, CA 94043, USA
Norbert Schuch
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology, Schellingstraße 4, 80799 München, Germany
J. Ignacio Cirac
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology, Schellingstraße 4, 80799 München, Germany
###### Abstract
We consider the Brownian SYK model of \(N\) interacting Majorana fermions, with random couplings that are taken to vary independently at each time. We study the out-of-time-ordered correlators (OTOCs) of arbitrary observables and the Rényi-\(2\) tripartite information of the unitary evolution operator, which were proposed as diagnostic tools for quantum chaos and scrambling, respectively. We show that their averaged dynamics can be studied as a quench problem at imaginary times in a model of \(N\) qudits, where the Hamiltonian displays site-permutational symmetry. By exploiting a description in terms of bosonic collective modes, we show that for the quantities of interest the dynamics takes place in a subspace of the effective Hilbert space whose dimension grows either linearly or quadratically with \(N\), allowing us to perform numerically exact calculations up to \(N=10^{6}\). We analyze in detail the interesting features of the OTOCs, including their dependence on the chosen observables, and of the tripartite information. We observe explicitly the emergence of a scrambling time \(t^{\ast}\sim\ln N\) controlling the onset of both chaotic and scrambling behavior, after which we characterize the exponential decay of the quantities of interest to the corresponding Haar scrambled values.
###### Contents
* I Introduction
* II The model and the chaos quantifiers
  * II.1 The OTOCs and the operator spreading
  * II.2 Diagnostic of scrambling: the tripartite information in fermionic systems
* III Exact approach from emergent permutational symmetry
  * III.1 Decomposing the dynamical problem
  * III.2 The generator of the dynamics: mapping to a bosonic system
  * III.3 The OTOC and the tripartite information
* IV The physical results
  * IV.1 The OTOCs: numerical results
  * IV.2 The Rényi-\(2\) tripartite information
* V Deriving the key formulas
  * V.1 The Hamiltonian
  * V.2 Extracting OTOCs and Entropies
    * V.2.1 The OTOCs
    * V.2.2 The Rényi-\(2\) entanglement entropy \(S^{(2)}_{AC}({\bar{\ell}})\)
  * V.3 Some large-\(N\) limits
* VI Conclusions
* A Non-interacting case: \(q=2\)
* B Relation between OTOCs and Rényi-\(2\) entropies
  * B.1 The case of Pauli matrices
  * B.2 The case of Majorana operators
* C Details on the numerical implementation
* D The Rényi-\(2\) entanglement entropy \(S^{(2)}_{AD}({\bar{\ell}})\)
## I Introduction
The study of many-body quantum chaos is currently experiencing a golden age, also due to its implications for important aspects of many-body physics such as the thermalization [1; 2] of isolated systems [3; 6; 4; 5; 7; 8], or the scrambling of quantum information [9; 10; 11]. In fact, the field already enjoyed intense research activity more than thirty years ago [12; 13], when the relations between chaotic many-body systems and random matrix theory were first explored. Recently, a renewed interest came from the study of black hole physics and concepts such as scrambling of quantum information and computational complexity [16; 15; 14; 17; 18; 19; 20].
An important milestone in the recent literature is the Sachdev-Ye-Kitaev model [22; 21], which was originally proposed by Sachdev and Ye as a model of strongly correlated electron systems, and generalized by Kitaev in 2015, who pointed out its connection to holographic duality. This model describes \(N\) Majorana fermions or complex fermions with random all-to-all interactions; in this work we will focus on the Majorana fermion version. The SYK model has already drawn enormous attention from different communities, ranging from quantum gravity [23; 24] to condensed-matter and many-body physics [25; 26; 27; 28; 29; 30; 31], due to the combination of several unique features. Among these, the model has been shown to be maximally chaotic and yet amenable to exact analysis in the large-\(N\) limit [21; 25; 32; 26; 33], making it an ideal playground for the study of chaos and scrambling of quantum information.
In the same years, the effort to better characterize quantum chaos led to the systematic development of reliable indicators for its diagnosis. In particular, out-of-time-ordered correlation (OTOC) functions, historically introduced in the context of disordered superconductors [34], were naturally selected as ideal probes to detect the “scrambling” of local observables [35; 17; 21; 36; 37], namely the spreading of their spatial support in the operator basis. It is important to mention that these ideas had far reaching ramifications, motivating the study of OTOCs also in many-body systems with short-range interactions [38; 45; 39; 40; 41; 42; 43; 29; 44; 46; 46; 47; 48; 49; 50; 51] and in spatially local “quantum unitary circuits” [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64], which provide minimally structured models for chaotic quantum dynamics. In fact, related studies on information scrambling in a class of random, and in general non-local, circuits (the so-called approximate \(t\)-designs) were already carried out within quantum information theory [65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79], where the latter were used to provide rapid approximations to random unitary operators. Finally, we note that OTOCs were also shown to be directly related to the growth of the operator size, _i.e._ the size of its support [37; 80].
So far, computations of OTOCs in the SYK model have been carried out through field-theoretical approaches in the large-\(N\) limit [21; 23; 25]. On the other hand, despite the many works devoted to this topic, results for finite values of \(N\) are difficult to obtain, and remain scarce [81; 82; 83]. This is also true for numerical computations: the exponential growth of the Hilbert space dimension, and the presence of disorder averages yield strong limitations on the sizes of the systems that can be simulated [84; 37; 85; 86]. Still, it would be highly desirable to develop a systematic approach to investigate the properties of the SYK model at finite \(N\), even numerically. Indeed, not only would this allow for inspection of finite-size corrections to the large-\(N\) results, but also to compute quantities beyond multi-point correlation functions, for which field-theoretical approaches might be difficult to apply. A notable example is given by the (negative) tripartite information of the evolution operator introduced in Ref. [9] in the context of unitary circuits. This was suggested as a valuable tool to quantify the scrambling power of a quantum channel, namely its ability to delocalize information provided as an input. We note that, so far, this quantity has been computed only numerically for small system sizes [9; 87] (see also Refs. [88; 89; 90], where the tripartite information of given states, and not of the channel, was studied).
Motivated by the above picture, we consider a simpler, but closely related, _Brownian_ SYK model, and address the problem of its exact analysis at finite sizes. The model was introduced in Ref. [91], and differs from the traditional SYK in that the random couplings are chosen to vary independently at each instant of time. The simplification arising in this case is similar to the one obtained in unitary circuits by choosing the random gates independently in each layer. Experience with the latter framework suggests that the main features of the chaotic dynamics remain qualitatively unaltered by adding a time dependence to the spatial disorder, except that random circuits and Brownian models behave like infinite-temperature systems, since they do not display energy conservation.
In this work, we focus on the development of a systematic approach to the chaotic dynamics in the Brownian SYK model, which could also be applied, more generally, to other time-dependent, disordered Hamiltonians with infinite-range interactions. In particular, we aim to compute OTOCs of arbitrary local observables, and other dynamical quantities which can be extracted from disordered averages involving up to four unitary evolution operators. These include a Rényi-\(2\) version of the tripartite information introduced in [9], which has been shown to encode information about all possible OTOCs [9].
As a main result of our work, we show that the averaged dynamics of the OTOCs and of the Rényi tripartite information can be studied as a quench problem at imaginary times in a model of \(N\) qudits, where the Hamiltonian displays full site-permutational symmetry. We analyze this problem by means of a description in terms of bosonic collective modes, and prove that for the quantities of interest the dynamics takes place in a subspace of the Hilbert space whose dimension grows either linearly or quadratically with \(N\). This allows us to perform numerically exact calculations up to one million particles, and, consequently, analyze in great detail the behavior of OTOCs and of the Rényi tripartite information, highlighting their most interesting features. While some of our results depend on simplifications arising in the special case of the SYK model, we expect that suitable generalizations of our method could be successfully applied also to the study of other disordered time-dependent Hamiltonians with all-to-all interactions.
It is useful to compare our method with that of existing studies, as some of the ideas used in our work are related to other approaches in the literature. First, Ref. [91] proposed the Brownian SYK model as a simplified version of the original SYK, and mainly focused on the computation of the spectral form factor [92]. For this specific quantity, it was shown that in the Brownian SYK model an exact solution could be achieved, by means of an elementary mapping to a classical partition function. Our results on OTOCs and tripartite information cannot be obtained using the same approach.
Next, we discuss Refs. [70; 71; 74], where a class of random quantum circuits was considered, in which at each layer a single unitary gate is applied to a randomly chosen pair of qudits. There, it was shown that the moments of the evolution operator associated with a time step could be mapped onto a permutationally invariant Hamiltonian which generalizes the Lipkin-Meshkov-Glick model [93]. Even though the idea underlying our method is similar, both our mapping and the quantities studied in this paper are different.
We also note that the computation of OTOCs in models with a continuous-time evolution in the presence of Brownian disorder and infinite-range interactions has already been addressed in [94; 95] (see also [97; 96]). The system studied in these works, consisting of \(N\) qudits driven by a Hamiltonian which is bilinear in the Pauli operators, was introduced as a chaotic toy model in Ref. [15], where its scrambling time was first estimated to be logarithmic in \(N\) (see also Ref. [98], where the spectral form factor was analyzed for the same system). The approach of [94; 95] relies on the derivation, based on an application of Itô calculus [99], of a system of differential equations for the OTOCs of interest. By solving the latter, numerical results were obtained in Ref. [15] for sizes comparable to those that can be reached with our method, while an analytical solution was found in [95] for a particular average of OTOCs. As we will see, our approach differs from that of [94; 95], as we tackle directly the computation of the averaged moments of the evolution operator. This allows us to use the same formalism to also analyze the tripartite information discussed above, which was not addressed in these studies. Finally, we note that rigorous results, relevant to the present paper, for the scrambling properties of continuous-time evolution generated by random Hamiltonians were recently presented in Refs. [100; 101].
The organization of the rest of this paper is as follows. In Sec. II we introduce the Brownian SYK model and the quantities which will be investigated in this work. We proceed to present the key features of our method in Sec. III, while our physical results are reported in Sec. IV. The most technical aspects of our study are deferred to Sec. V and to a few appendices. Finally, our conclusions are discussed in Sec. VI.
## II The model and the chaos quantifiers
<figure><img src="content_image/1908.00775/x1.png"><figcaption>Figure 1: Pictorial representation of the Brownian SYK model described by theHamiltonian (1), for q=4. At each time, all the Majorana fermions are coupledtogether within clusters of q particles, with time-dependent randominteractions.</figcaption></figure>
The object of study of this work will be the Brownian SYK model, describing a set of \(N\) Majorana fermions with \(q\)-local, all-to-all random interactions, cf. Fig. 1. It is defined on a Hilbert space \(\mathcal{H}_{N}\) of dimension \(D=2^{N/2}\), with \(N\) operators \(\psi_{j}\) acting on \(\mathcal{H}_{N}\). They are the representation of standard Majorana fermions, and thus satisfy \(\{\psi_{j},\psi_{k}\}=2\delta_{j,k}\) and \(\psi_{j}^{\dagger}=\psi_{j}\) (the quantities of interest in this work will not depend on the representation chosen for the \(N\) Majorana fermions). Its time-dependent Hamiltonian reads
\[H_{\rm SYK}(t)=i^{q/2}\sum_{1\leq i_{1}<i_{2}<\ldots<i_{q}\leq N}J_{i_{1}, \ldots\,,i_{q}}(t)\psi_{i_{1}}\psi_{i_{2}}\ldots\psi_{i_{q}}\,.\] (1)
Here, the couplings \(J_{i_{1},\ldots\,,i_{q}}(t)\) are random variables, which we assume to be Gaussian distributed with vanishing mean and variance
\[\overline{J_{i_{1}\ldots\,,i_{q}}(t)J_{i_{1}^{\prime}\ldots\,,i_{q}^{\prime}}\left(t^{\prime}\right)}=\delta_{i_{1}i_{1}^{\prime}}\cdots\delta_{i_{q}i_{q}^{\prime}}\delta\left(t-t^{\prime}\right)\sigma_{J}\,,\] (2)
where we denoted by \(\overline{[\ldots]}\) the average over disorder realizations. While our method could be applied to arbitrary (even) values of \(q\), we will focus for concreteness on the case \(q=4\). Furthermore, we will choose the constant \(\sigma_{J}\) in such a way that
\[\overline{J_{i_{1}\ldots\,,i_{4}}(t)J_{i_{1}^{\prime}\ldots\,,i_{4}^{\prime}} \left(t^{\prime}\right)}=\delta_{i_{1}i_{1}^{\prime}}\cdots\delta_{i_{4}i_{4}^ {\prime}}\delta\left(t-t^{\prime}\right)\frac{1}{N^{3}}\,.\] (3)
In comparison, the original SYK Hamiltonian shares the same form of (1), but with time-independent couplings. In appendix A we additionally discuss the case \(q=2\), which lacks chaotic behavior as each disorder realization is non-interacting.
### The OTOCs and the operator spreading
As we have already discussed in Sec. I, we will be mainly interested in two quantifiers of quantum chaos and scrambling. The first one is given by OTOCs of local observables: explicitly, given two operators \(\mathcal{O}\), \(\mathcal{O}^{\prime}\), we define their OTOC on a state \(\rho\) as
\[\mathcal{F}_{\mathcal{O},\mathcal{O}^{\prime}}(t)={\rm tr}\left[\rho\mathcal{O }(t)\mathcal{O}^{\prime}(0)\mathcal{O}(t)\mathcal{O}^{\prime}(0)\right]\,,\] (4)
where \(\mathcal{O}(t)=U^{\dagger}(t)\mathcal{O}U(t)\), and \(U(t)\) is the unitary evolution operator. In this work we will choose the infinite-temperature Gibbs density matrix \(\rho=\mathbb{1}/2^{N/2}\), which represents a stationary state for the time-dependent Hamiltonian (1).
Importantly, we recall that the OTOC (4) can be related to an intuitive notion of the spreading of localized operators under unitary evolution. To this end, we choose for simplicity \(\mathcal{O}=\psi_{j}\), \(\mathcal{O}^{\prime}=\psi_{k}\) with \(j\neq k\), and consider the quantity
\[\mathcal{C}(t)=\frac{1}{2}{\rm tr}\left[\rho\left(\left\{\psi_{j}(t),\psi_{k}( 0)\right\}\right)^{\dagger}\left(\left\{\psi_{j}(t),\psi_{k}(0)\right\}\right) \right]\,,\] (5)
which measures the magnitude of the anticommutator between \(\psi_{j}(t)\) and \(\psi_{k}(0)\). At time \(t=0\), one simply has \(\mathcal{C}(0)=0\). On the other hand, as time increases, the spatial support of \(\psi_{j}(t)\) will also increase; namely \(\psi_{j}(t)\) will evolve into a complicated sum of strings of Majorana operators. Then, we see that deviations of \(\mathcal{C}(t)\) from zero signal that the support of \(\psi_{j}(t)\) has grown to include site \(k\). Accordingly, \(\mathcal{C}(t)\) can be understood as a measure of the spatial spreading of the local operator \(\psi_{j}(t)\). The connection between the latter and OTOCs is finally established by the simple relation
\[\mathcal{C}(t)=1+{\rm Re}\left[{\rm tr}\left(\rho\psi_{j}(t)\psi_{k}(0)\psi_{j }(t)\psi_{k}(0)\right)\right]\,.\] (6)
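Indeed, since \(\psi_{j}(t)\) and \(\psi_{k}(0)\) are Hermitian and square to the identity, the (Hermitian) anticommutator in Eq. (5) can be expanded as
\[\mathcal{C}(t)=\frac{1}{2}{\rm tr}\left[\rho\left(2+\psi_{j}(t)\psi_{k}(0)\psi_{j}(t)\psi_{k}(0)+\psi_{k}(0)\psi_{j}(t)\psi_{k}(0)\psi_{j}(t)\right)\right]\,,\]
and the two quartic terms are mutually adjoint, so that their traces against the Hermitian \(\rho\) combine into twice the real part, yielding Eq. (6).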
In conclusion, the discussion above allows one to view the OTOCs as a measure of chaos: chaotic dynamics corresponds to OTOCs that vanish sufficiently rapidly with time. On the other hand, for a non-chaotic Hamiltonian one expects information to spread coherently: for large system sizes this results in either a slow decay or a non-vanishing asymptotics of OTOCs [46], while for small ones this causes revivals, with OTOCs frequently returning close to their initial value [102].
### Diagnostic of scrambling: the tripartite information in fermionic systems
<figure><img src="content_image/1908.00775/x2.png"><figcaption>Figure 2: Pictorial representation of the state |U⟩⟩ defined in Eq. (8). Theoperator U is depicted as a box, while its legs correspond to the localHilbert spaces hj. The output legs are bent to create a state in a doubledHilbert space HN⊗H′N. Each space is partitioned into two regions: A and B forthe input space HN, and C and D for the output space H′N.</figcaption></figure>
The OTOCs provide a physically clear definition of quantum chaos in terms of correlation functions between local operators. Other measures probing different features intuitively associated with chaos exist. Among these, the notion of scrambling of information, originally introduced in the study of black hole physics [16; 14], is particularly clear: a quantum system is a good scrambler if a localized perturbation in the initial state spreads over all its degrees of freedom, in such a way that it can no longer be detected by local measurements at large times. In this context, it is useful to think of the unitary evolution as a quantum channel, taking an initial state as the input, and returning the evolved state as the output. In this logic, it was proposed in Ref. [9] that the scrambling power of a channel could be conveniently measured by the tripartite information between bipartitions of its input and output, as we review in the following.
For simplicity, let us first consider a system of \(N\) qudits, associated with a Hilbert space \(\mathcal{H}_{N}=h_{1}\otimes\ldots\otimes h_{N}\), where \(h_{j}\simeq\mathbb{C}^{D}\), and a unitary operator \(U:\mathcal{H}_{N}\to\mathcal{H}_{N}\). In order to study the scrambling properties of \(U\), we wish to interpret it as a state in a suitable space. To this end, we introduce a copy of the original Hilbert space \(\mathcal{H}^{\prime}_{N}\), and define the maximally entangled state \(\ket{I}\in\mathcal{H}_{N}\otimes\mathcal{H}^{\prime}_{N}\) as
\[\ket{I}=\frac{1}{D^{N/2}}\sum_{j=1}^{D^{N}}\ket{j}\otimes\ket{j^{\prime}}\,,\] (7)
where \(\{\ket{j}\}_{j=1}^{D^{N}}\), \(\{\ket{j^{\prime}}\}_{j^{\prime}=1}^{D^{N}}\) are orthonormal bases for \(\mathcal{H}_{N}\) and \(\mathcal{H}^{\prime}_{N}\), respectively. Note that we choose the basis such that \(\ket{I}\) is a direct product of EPR pairs between qudits in the two systems, as is illustrated in Fig. 2. Then, the operator \(U\) can be interpreted as a state in \(\mathcal{H}_{N}\otimes\mathcal{H}_{N}^{\prime}\) through the Choi-Jamiolkowski mapping
\[U\mapsto\ket{U}\rangle=\left(\mathbb{1}\otimes U\right)\ket{I}\,.\] (8)
In Fig. 2, the operator \(U\) is depicted as a box, whose legs correspond to the local Hilbert spaces \(h_{j}\); we see that one could intuitively think of the state \(\ket{U}\rangle\) as obtained by “bending” the output legs, so as to treat input and output, associated with \(\mathcal{H}_{N}\) and \(\mathcal{H}^{\prime}_{N}\) respectively, on an equal footing. It should be noted that the mapping from \(U\) to \(\ket{U}\rangle\) is not unique, as it depends on the choice of the state \(\ket{I}\). However, different choices of \(\ket{I}\) are related by a local unitary transformation, which does not affect the entropy-related quantities we discuss in the following.
Given \(\ket{U}\rangle\in\mathcal{H}_{N}\otimes\mathcal{H}^{\prime}_{N}\), one can compute the entanglement entropy between different spatial regions in \(\mathcal{H}_{N}\) and \(\mathcal{H}^{\prime}_{N}\). We consider in particular bipartitions of \(\mathcal{H}_{N}\) and \(\mathcal{H}^{\prime}_{N}\) into the complementary subsystems \(A\), \(B\) and \(C\),\(D\) respectively; in Fig. 2 a special choice for these regions is shown. Given a pair of bipartitions \((A,B)\) and \((C,D)\), we define the tripartite information as [9]
\[I_{3}(A:C:D)=I(A:C)+I(A:D)-I(A:CD)\,,\] (9)
where \(CD\) denotes the union of the regions \(C\) and \(D\). Here \(I(X:Y)\) is the mutual information between the regions \(X\) and \(Y\)
\[I(X:Y)=S_{X}+S_{Y}-S_{XY}\,,\] (10)
where \(S_{X}\) is the von Neumann entropy of the reduced density matrix \(\rho_{X}\). For instance, we have
\[S_{AC}=-\operatorname{tr}\left[\rho_{AC}\ln\rho_{AC}\right]\,,\] (11)
where \(\rho_{AC}=\operatorname{tr}_{BD}[\rho]\).
The tripartite information in Eq. (9) was suggested in Ref. [9] as a natural and convenient diagnostic for scrambling. In fact, as in the case of OTOCs, its underlying physical meaning is easy to grasp. From Eq. (9), we see that \(-I_{3}(A:C:D)\) quantifies the amount of information on the input region \(A\) that can be recovered by global measurements in \(C\cup D\), but can not be obtained by probing \(C\) and \(D\) individually. Recalling that \(\mathcal{H}^{\prime}_{N}=C\cup D\) corresponds to the output, this is exactly a measure of scrambling: if \(-I_{3}(A:C:D)\) is large, it means that the information localized in a subsystem \(A\) of the input state can be recovered only by global measurements in the output state, and information has been scrambled. Accordingly, if for any bipartition of \(\mathcal{H}_{N}\) and \(\mathcal{H}^{\prime}_{N}\), \(I_{3}(A:C:D)\) is negative with an absolute value close to the maximum possible value, the channel \(U\) has large scrambling power. Finally, a close connection was established in Ref. [9] between the tripartite information (9) and the OTOCs, which further corroborated the appeal of the former as a valuable diagnostic of scrambling and, more generally, of chaotic dynamics. This connection is reviewed in Appendix B, where we also discuss its generalization to the fermionic setting.
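As a concrete illustration of Eqs. (7)–(11), the minimal sketch below (ours, not part of the method developed in this paper) draws a Haar-random unitary on a small system of qubits, builds the corresponding Choi state, and evaluates \(I_{3}(A:C:D)\) by brute force; the half-half bipartitions of Fig. 2 and all function names and sizes are illustrative choices.

```python
import numpy as np
from scipy.stats import unitary_group

# Brute-force illustration of Eqs. (7)-(11) for a small qubit system.

def entropy(rho):
    """Von Neumann entropy of a density matrix."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def reduced(psi, keep):
    """Reduced density matrix of the pure state psi (tensor indices A,B,C,D) on `keep`."""
    labels = 'abcd'
    sub2 = ''.join(l.upper() if i in keep else l for i, l in enumerate(labels))
    out = ''.join(labels[i] for i in keep) + ''.join(labels[i].upper() for i in keep)
    rho = np.einsum(f'{labels},{sub2}->{out}', psi, psi.conj())
    dim = int(np.prod([psi.shape[i] for i in keep]))
    return rho.reshape(dim, dim)

n_q = 6                                    # total number of qubits (illustrative)
D = 2**n_q
dA = dB = dC = dD = 2**(n_q // 2)          # half-half cuts of input (A,B) and output (C,D)
U = unitary_group.rvs(D, random_state=0)   # Haar-random "evolution operator"

# Choi state |U>> = (1 x U)|I>: the amplitude on |j> x |k> is U_{kj}/sqrt(D)
psi = (U.T / np.sqrt(D)).reshape(dA, dB, dC, dD)

S = {key: entropy(reduced(psi, keep))
     for key, keep in {'A': (0,), 'C': (2,), 'D': (3,), 'AC': (0, 2),
                       'AD': (0, 3), 'CD': (2, 3), 'ACD': (0, 2, 3)}.items()}
# Eq. (9), written out in terms of the individual entropies of Eqs. (10)-(11)
I3 = S['A'] + S['C'] + S['D'] - S['AC'] - S['AD'] - S['CD'] + S['ACD']
print(I3)   # markedly negative for a scrambling (here Haar-random) unitary
```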
The above discussion is carried out in terms of qudits, whereas in our work we are interested in a fermionic system. At this point, one could employ a Jordan-Wigner representation of the Majorana operators in the Hamiltonian (1), interpret the resulting evolution operator as a unitary channel acting on a system of \(N/2\) qubits, and define the tripartite information for the latter according to the discussion above. However, given a correspondence between Majorana and Pauli operators via the Jordan-Wigner transformation, it is known that the reduced density matrix of disjoint intervals written in terms of the two is not the same, leading to different results for the corresponding von Neumann entanglement entropy [103; 104]. In our case, we stress that the physical degrees of freedom are represented by the Majorana operators and, accordingly, the tripartite information in Eq. (9) should be computed in terms of the latter. In this respect, we find it useful to discuss explicitly the generalization of the above construction for Majorana operators, without making direct reference to the tensor-product structure of the doubled Hilbert space associated with the input and output of the channel.
<figure><img src="content_image/1908.00775/x3.png"><figcaption>Figure 3: Pictorial representation for the state |Iab⟩ in Eq. (12). The blackbullets in the lower and upper rows represent the original and “replica”fermions ψaj and ψbj, respectively. Each link is a maximally entangled pair,which corresponds to the vacuum for the complex Fermi operators cj=ψaj−iψbj.The evolution operator U, generated by the Hamiltonian (1), is applied only tothe original system.</figcaption></figure>
As a first ingredient, we wish to interpret the evolution operator generated by the Hamiltonian (1) as a state. To this end, we introduce a system of \(2N\) Majorana operators \(\psi^{\alpha}_{j}\), where \(j=1\,,\ldots\,,N\), while \(\alpha\) is an index labeling two different species which we denote by \(a\) and \(b\). The maximally entangled state \(\ket{I^{ab}}\) is then defined as the vacuum state for the complex fermions \(c_{j}=\psi_{j}^{a}-i\psi_{j}^{b}\) [105], namely
\[\left(\psi_{j}^{a}-i\psi_{j}^{b}\right)\ket{I^{ab}}=0\,,\qquad\forall j\,.\] (12)
The operator \(U\) can now be interpreted as a state in the doubled system through the mapping
\[U(t)\mapsto\ket{U(t)}\rangle=U^{a}(t)\ket{I^{ab}}\,.\] (13)
Here the superscript \(a\) indicates that the Hamiltonian generating the unitary evolution operator \(U^{a}(t)\) is written in terms of the fermions \(\psi^{a}_{j}\). A pictorial representation of this construction is shown in Fig. 3. One can now proceed to compute the fermionic reduced density matrices for the evolved state \(\ket{U(t)}\rangle\), and consequently the corresponding tripartite information as in Eq. (9). We refer the reader to Sec. V for further details.
Unfortunately, despite its great interest, the computation of the tripartite information (9) is a very difficult task, which so far has been carried out only numerically for qudit systems of small sizes [9; 87]. For this reason, we study a simpler but closely related quantity, which is obtained from \(I_{3}(A:C:D)\) by considering Rényi, rather than von Neumann, entropies. Specifically, we will compute the following Rényi-\(2\) tripartite information
\[I^{(2)}_{3}(A:C:D)=I^{(2)}(A:C)+I^{(2)}(A:D)-I^{(2)}(A:CD)\,,\] (14)
where
\[I^{(2)}(X:Y)=S^{(2)}_{X}+S^{(2)}_{Y}-S^{(2)}_{XY}\,,\] (15)
and
\[S^{(2)}_{X}=-\ln\left[\overline{\operatorname{tr}\left(\rho^{2}_{X}\right)} \right]\,.\] (16)
We note that, strictly speaking, \(S^{(2)}_{X}\) is not the averaged Rényi entropy of order \(2\), as the disorder average is taken inside the logarithm. However, Ref. [9] showed that the OTOC for a pair of operators in \(A\) and \(C\), averaged over all operator choices, is determined by \({\rm tr}\left(\rho_{AD}^{2}\right)\). Therefore the averaged purity \(\overline{{\rm tr}\left(\rho_{X}^{2}\right)}\) is a meaningful physical quantity to consider. Also, for \(N\) not too small, one expects the effect of fluctuations in the disorder to be small, so that \(S^{(2)}_{X}\) remains a good approximation to the Rényi-\(2\) entropy [9; 55].
It is worth noticing that Eq. (14) can be simplified in general. Indeed, it is easy to show [9]
\[I^{(2)}_{3}(A:C:D)=\frac{N}{2}\ln(2)-S^{(2)}_{AC}-S^{(2)}_{AD}\,,\] (17)
where we used that the dimension of the Hilbert space associated with \(N\) Majorana fermions is \(D=2^{N/2}\). Eq. (17) tells us that, in order to obtain the tripartite information, it is sufficient to compute the entropies \(S^{(2)}_{AC}\) and \(S^{(2)}_{AD}\) between different regions of the input and the output.
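One way to check Eq. (17) (a short consistency argument rather than a derivation) uses that, for the state (13), the reduced density matrices on \(A\), \(B\), \(C\), \(D\) and on the full output \(CD\) are maximally mixed, so that \(S^{(2)}_{X}=(n_{X}/2)\ln 2\) with \(n_{X}\) the number of Majorana modes in \(X\), together with \(S^{(2)}_{ACD}=S^{(2)}_{B}\), which follows from the purity of the global state. Expanding Eq. (14) then gives
\[I^{(2)}_{3}(A:C:D)=S^{(2)}_{A}+S^{(2)}_{B}+S^{(2)}_{C}+S^{(2)}_{D}-S^{(2)}_{CD}-S^{(2)}_{AC}-S^{(2)}_{AD}=\frac{n_{A}+n_{B}+n_{C}+n_{D}-N}{2}\ln 2-S^{(2)}_{AC}-S^{(2)}_{AD}\,,\]
which reduces to Eq. (17) since \(n_{A}+n_{B}=n_{C}+n_{D}=N\).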
We conclude this section by stressing that, while the Rényi tripartite information (14) differs quantitatively from \(I_{3}(A:C:D)\), based on previous studies [105] we can still expect it to display the same qualitative features as the latter, and thus to be a suitable measure for scrambling.
## III Exact approach from emergent permutational symmetry
Having introduced the model and the quantities of interest, we proceed by presenting the general ideas of the method developed in this work. The physical results will then be discussed in Sec. IV, while we postpone the most technical details of our calculations to Sec. V.
### Decomposing the dynamical problem
We will begin our discussion with the concrete problem of computing the OTOC (4), which we rewrite as
\[\mathcal{F}_{\mathcal{O},\mathcal{O}^{\prime}}(t)=\frac{1}{2^{N/2}}{\rm tr} \left\{\mathcal{O}U(t)\mathcal{O}^{\prime}U^{\dagger}(t)\mathcal{O}U(t) \mathcal{O}^{\prime}U^{\dagger}(t)\right\}\,.\] (18)
We recall that the time-dependent, disordered Hamiltonian (1) gives rise to a dynamics which can be interpreted as the continuous limit of the discrete process defined by
\[U(t)=e^{-i\Delta tH_{\rm SYK}(t_{n})}e^{-i\Delta tH_{\rm SYK}(t_{n-1})}\ldots e ^{-i\Delta tH_{\rm SYK}(t_{1})}\,,\] (19)
where \(\Delta t=t/n\) and \(t_{j}=j\Delta t\), while the delta function in Eq. (3) is regularized as
\[\overline{J_{i_{1}\ldots\,,i_{4}}(t_{r})J_{i_{1}^{\prime}\ldots\,,i_{4}^{ \prime}}\left(t_{s}\right)}=\delta_{i_{1}i_{1}^{\prime}}\cdots\delta_{i_{4}i_{ 4}^{\prime}}\frac{1}{N^{3}}\frac{\delta_{r,s}}{\Delta t}\,.\] (20)
In practice, one can work with the discrete form (19) of the evolution operator, and take the continuum limit at the end of the calculations.
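For very small \(N\), the discretization (19)–(20) also allows for a direct brute-force check of the quantities defined above (a sketch of ours, not the method developed in this work; the Jordan–Wigner representation and all parameter values are illustrative): the couplings of each step are sampled with variance \(1/(N^{3}\Delta t)\), the step propagators are multiplied as in Eq. (19), and the trace in Eq. (18) is averaged over realizations.

```python
import numpy as np
from functools import reduce
from itertools import combinations

# Brute-force check of Eqs. (18)-(20) for very small N (illustrative only).

def majorana_operators(N):
    """Jordan-Wigner representation: N Majorana matrices with {psi_j, psi_k} = 2 delta_jk."""
    assert N % 2 == 0
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    psis = []
    for k in range(N // 2):
        for tail in (sx, sy):
            ops = [sz] * k + [tail] + [I2] * (N // 2 - k - 1)
            psis.append(reduce(np.kron, ops))
    return psis

def sampled_U(psis, t, dt, rng):
    """One realization of the evolution operator (19); step couplings have variance 1/(N^3 dt), cf. Eq. (20)."""
    N = len(psis)
    quartets = [psis[i] @ psis[j] @ psis[k] @ psis[l]
                for i, j, k, l in combinations(range(N), 4)]
    U = np.eye(psis[0].shape[0], dtype=complex)
    for _ in range(int(round(t / dt))):
        J = rng.normal(0.0, np.sqrt(1.0 / (N**3 * dt)), size=len(quartets))
        H = -sum(Jq * Q for Jq, Q in zip(J, quartets))   # i^{q/2} = -1 for q = 4, cf. Eq. (1)
        w, V = np.linalg.eigh(H)                         # H is Hermitian
        U = (V * np.exp(-1j * dt * w)) @ V.conj().T @ U  # later steps act on the left, Eq. (19)
    return U

rng = np.random.default_rng(0)
N, t, dt, realizations = 8, 1.0, 0.02, 50
psis = majorana_operators(N)
O, Op = psis[0], psis[1]                                 # two single-site Majorana observables
F = 0.0
for _ in range(realizations):
    U = sampled_U(psis, t, dt, rng)
    Ot = U.conj().T @ O @ U
    F += np.trace(Ot @ Op @ Ot @ Op).real / 2**(N // 2)  # trace is real here
print(F / realizations)   # estimate of the disorder-averaged OTOC (4) at time t
```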
In order to compute \(\mathcal{F}_{\mathcal{O},\mathcal{O}^{\prime}}(t)\), we first introduce a resolution of the identity between each pair of operators in (18), yielding
\[\mathcal{F}_{\mathcal{O},\mathcal{O}^{\prime}}= \frac{1}{2^{N/2}}\sum_{i,j,k,l,m,n,o,p}\langle i|\mathcal{O}|j\rangle\langle j|U|k\rangle\left\langle k\left|\mathcal{O}^{\prime}\right|l\right\rangle\left\langle l\left|U^{\dagger}\right|m\right\rangle\]
\[\times \langle m|\mathcal{O}|n\rangle\langle n|U|o\rangle\left\langle o\left|\mathcal{O}^{\prime}\right|p\right\rangle\left\langle p\left|U^{\dagger}\right|i\right\rangle\,.\] (21)
Here \(\{\ket{j}\}\) denotes a basis for the Hilbert space \(\mathcal{H}_{N}\) [introduced before Eq. (1)] on which the operators \(\psi_{j}\) act. Rearranging the above sum, we obtain
\[\mathcal{F}_{\mathcal{O},\mathcal{O}^{\prime}}=\bra{L}\left(U \otimes U^{\ast}\otimes U\otimes U^{\ast}\right)\ket{R}\,,\] (22)
where
\[\ket{L}=\frac{1}{2^{N/2}}\sum_{i,j,m,n}\langle i|\mathcal{O}|j\rangle^{\ast}\langle m|\mathcal{O}|n\rangle^{\ast}\,\ket{j,m,n,i}\,,\] (23)
\[\ket{R}=\sum_{k,l,o,p}\langle k|\mathcal{O}^{\prime}|l\rangle\langle o|\mathcal{O}^{\prime}|p\rangle\,\ket{k,l,o,p}\,.\] (24)
Here \(U^{\ast}(t)\) denotes the complex conjugate of \(U(t)\) (which is well defined, once a basis \(\{\ket{j}\}\) of \(\mathcal{H}_{N}\) is given) and we introduced the vectors \(\ket{i,j,k,l}=\ket{i}\otimes\ket{j}\otimes\ket{k}\otimes\ket{l}\in\mathcal{H}_ {N}^{\otimes 4}\). According to Eq. (22), the dynamical information about the OTOC is uniquely encoded in the operator \(\mathcal{U}(t)\equiv U\otimes U^{\ast}\otimes U\otimes U^{\ast}\), while \(\mathcal{O}\), \(\mathcal{O}^{\prime}\) only affect the “left” and “right” states \(\ket{L}\), \(\ket{R}\), cf. Fig. 4 .
<figure><img src="content_image/1908.00775/x4.png"><figcaption>Figure 4: Schematic diagram summarizing our method. The dynamical informationabout the OTOC is uniquely encoded in the operator U(t)≡U⊗U∗⊗U⊗U∗(t), whilethe observables O,O′ define the “right” and “left” states |L⟩, |R⟩ (cf. SecIII.1). Exploiting the emergent permutational symmetry, we can map¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯U⊗U∗⊗U⊗U∗(t) onto a state|¯¯¯¯U(t)⟩⟩ in a bosonic Fock space, in which the dynamics is governed by aneffective imaginary-time Hamiltonian evolution (cf. Sec. III.2). Finally, weexpress the matrix element of ¯¯¯¯U(t) with respect to |L⟩ and |R⟩ as theoverlap between |¯¯¯¯U(t)⟩⟩ and an appropriate state ⟨⟨WO,O′| (cf. Sec III.3).As a result, the entire computation of the OTOC can be performed veryefficiently within a bosonic space, whose dimension grows linearly with N. TheRényi-2 entanglement entropies S(2)AC and S(2)AD are amenable to a similartreatment as the OTOCs.</figcaption></figure>
From Eq. (19), we see immediately that \(\mathcal{U}(t)\) is written in terms of the operators
\[\chi^{a}_{j}:= \psi_{j}\otimes\mathbb{1}\otimes\mathbb{1}\otimes\mathbb{1}\,, \quad\chi^{b}_{j}:=\mathbb{1}\otimes\psi^{\ast}_{j}\otimes\mathbb{1}\otimes \mathbb{1}\,,\] (25)
\[\chi^{c}_{j}:= \mathbb{1}\otimes\mathbb{1}\otimes\psi_{j}\otimes\mathbb{1}\,, \quad\chi^{d}_{j}:=\mathbb{1}\otimes\mathbb{1}\otimes\mathbb{1}\otimes\psi_{j} ^{\ast}\,,\] (26)
which provide a basis for all the operators in \(\mathcal{H}^{\otimes 4}_{N}\). Note that, as we already stressed, \(\psi_{j}\) is the representation of a Majorana fermion, and thus is an operator acting on \(\mathcal{H}_{N}\), for which the tensor product is defined in the usual way. Due to the tensor-product structure of \(\mathcal{H}_{N}^{\otimes 4}\), the operators \(\chi_{j}^{\alpha}\) satisfy mixed commutation and anticommutation relations, namely \(\left[\chi^{\alpha}_{j},\chi^{\beta}_{k}\right]=0\) if \(\alpha\neq\beta\), while \(\left\{\chi^{\alpha}_{j},\chi^{\alpha}_{k}\right\}=2\delta_{j,k}\). On the other hand, it is possible to introduce related operators in \(\mathcal{H}_{N}^{\otimes 4}\) which all anticommute with one another, realizing a truly fermionic algebra. We consider for concreteness the case \(N\equiv 0\ ({\rm mod}\ 4)\) [if \(N\equiv 2\ ({\rm mod}\ 4)\), one has a similar treatment], and introduce
\[\mathcal{Q}^{\alpha}=\prod_{k=1}^{N}\chi^{\alpha}_{k}\,,\quad\alpha=a\,,b\,,c \,,d\,.\] (27)
Then, we can define
\[\psi^{a}_{j}= i\mathcal{Q}^{a}\chi^{a}_{j}\,,\quad\psi^{b}_{j}=\mathcal{Q}^{a} \chi^{b}_{j}\,,\] (28)
\[\psi^{c}_{j}= i\mathcal{Q}^{a}\mathcal{Q}^{b}\mathcal{Q}^{c}\chi^{c}_{j}\,, \quad\psi^{d}_{j}=\mathcal{Q}^{a}\mathcal{Q}^{b}\mathcal{Q}^{c}\chi^{d}_{j}\,.\] (29)
One can easily verify that \(\{\psi^{\alpha}_{j}\}_{j,\alpha}\) satisfy fermionic anticommutation relations, namely \(\{\psi_{j}^{\alpha},\psi_{k}^{\beta}\}=2\delta_{\alpha,\beta}\delta_{j,k}\), and that \(\left(\psi^{\alpha}_{j}\right)^{\dagger}=\psi^{\alpha}_{j}\). Furthermore, since \(\left(\mathcal{Q^{\alpha}}\right)^{2}=\mathbb{1}\), we have \(\prod_{k=1}^{M}\chi_{j_{k}}^{\alpha}=\prod_{k=1}^{M}\psi_{j_{k}}^{\alpha}\) for any even integer \(M\). Since the Hamiltonian (1) contains a sum of products of Majorana operators with an even number of particles, it is then straightforward to show
\[\mathcal{U}(t)=U_{+}^{a}(t)U_{-}^{b}(t)U_{+}^{c}(t)U_{-}^{d}(t)\,,\] (30)
where
\[U^{\alpha}_{\pm}(t)=e^{\mp i\Delta tH_{\rm SYK}^{\alpha}(t_{n})}e^{\mp i\Delta tH _{\rm SYK}^{\alpha}(t_{n-1})}\ldots e^{\mp i\Delta tH_{\rm SYK}^{\alpha}(t_{1} )}\,,\] (31)
while \(H_{\rm SYK}^{\alpha}\) is the Hamiltonian (1) written in terms of the fermions \(\psi^{\alpha}_{j}\). We see that \(\mathcal{U}(t)\) can be viewed as an evolution operator on the space of four “replica” Majorana fermions \(\psi^{\alpha}_{j}\), labeled by \(\alpha=a\), \(b\), \(c\), \(d\). Eq. (30) represents the starting point for our subsequent calculations.
The above discussion allows us to decompose the problem of computing the OTOC (18) into two logically separated steps:
* compute the disorder average of the generator of the dynamics \(\mathcal{U}(t)\), defined in Eq. (30) (cf. Sec. III.2);
* given the operator \(\overline{\mathcal{U}}(t)\), evaluate the matrix element \(\bra{L}\overline{\mathcal{U}}(t)\ket{R}\), where \(\ket{L}\), \(\ket{R}\) were introduced in Eq. (22) and pictorially represented in Fig. 4 (cf. Sec. III.3).
Importantly, it is possible to show that the same procedure can be employed for the Rényi-\(2\) tripartite information (14): one can express also this quantity in the form of a matrix element \(\bra{L}\overline{\mathcal{U}}(t)\ket{R}\), for an appropriate choice of the vectors \(\ket{L}\) and \(\ket{R}\), cf. Sec. III.3. We will address the two points above separately in the following subsections, for both the OTOCs and the tripartite information, providing a complete overview of the approach developed in this work.
### The generator of the dynamics: mapping to a bosonic system
We start by addressing the computation of the average evolution operator defined in Eq. (30). Using that products of an even number of Majorana operators of different species commute, and that disorder averages at different times factorize, we note that Eqs. (30), (31) imply
\[\overline{\mathcal{U}}(t_{n})=\overline{e^{-i\Delta tH^{a}(t_{n})}e^{i\Delta tH^{b}(t_{n})}e^{-i\Delta tH^{c}(t_{n})}e^{i\Delta tH^{d}(t_{n})}}\;\overline{\mathcal{U}}(t_{n-1})\,.\] (32)
This allows us to write down a linear differential equation for \(\overline{\mathcal{U}}(t)\), as follows.
First, from Eq. (20), we see that, in order to expand the first line at the first order in \(\Delta t\), each exponential factor has to be expanded up to the _second_ order. By doing this, and carefully taking into account the correlations between the couplings, one obtains an equation of the form
\[\overline{\mathcal{U}}(t_{n})=\overline{\mathcal{U}}(t_{n-1})+L\overline{ \mathcal{U}}(t_{n-1})\Delta t+O(\Delta t^{2})\,,\] (33)
namely, taking the limit \(\Delta t\to 0\)
\[\frac{{\rm d}}{{\rm d}t}\overline{\mathcal{U}}(t)=L\overline{\mathcal{U}}(t)\,,\] (34)
where
\[L=\frac{1}{N^{3}}\Bigg{[}-2\binom{N}{4}+\sum_{\begin{subarray}{c}\alpha,\beta=a,b,c,d\\ \alpha<\beta\end{subarray}}(-1)^{\gamma_{\alpha,\beta}}\sum_{i_{1}<i_{2}<i_{3}<i_{4}}\left(\psi^{\alpha}_{i_{1}}\psi^{\beta}_{i_{1}}\right)\left(\psi^{\alpha}_{i_{2}}\psi^{\beta}_{i_{2}}\right)\left(\psi^{\alpha}_{i_{3}}\psi^{\beta}_{i_{3}}\right)\left(\psi^{\alpha}_{i_{4}}\psi^{\beta}_{i_{4}}\right)\Bigg{]}\,.\] (35)
Here, the indexes \(a,b,c,d\) are ordered as \(a<b<c<d\), while we introduced
\[(-1)^{\gamma_{\alpha,\beta}}=\begin{cases}1&(\alpha,\beta)=(a,b),(a,d),(b,c),( c,d)\,,\\ -1&(\alpha,\beta)=(a,c),(b,d)\,.\end{cases}\] (36)
We note that, since the disorder average has already been taken, the operator \(L\) is time- and disorder-independent. Eq. (34) can thus be seen as a Schrödinger equation (at imaginary times) for \(\overline{\mathcal{U}}(t)\) in the space \({\rm End}(\mathcal{H}_{N}^{\otimes 4})\) of the linear endomorphisms acting on \(\mathcal{H}_{N}^{\otimes 4}\), where the left matrix multiplication by \(L\) is interpreted as a superoperator. In the following, it will be useful to denote by \(\ket{\mathcal{O}}\rangle\) the state in \({\rm End}(\mathcal{H}_{N}^{\otimes 4})\) associated with the operator \(\mathcal{O}\).
In order to proceed, we note that at every time \(t\), the operator \(\overline{\mathcal{U}}(t)\) can be written as a linear superposition of operators of the form
\[\mathcal{O}_{1}^{\alpha_{1}}\mathcal{O}_{2}^{\alpha_{2}}\ldots\mathcal{O}_{N}^ {\alpha_{N}}\,,\] (37)
where \(\mathcal{O}_{j}^{\alpha_{j}}\) is chosen within the set of operators
\[I_{j}=\{\mathbb{1},\,\psi^{a}_{j}\psi^{b}_{j}\,,\psi^{a}_{j}\psi^{c}_{j}\,,\psi^{a}_{j}\psi^{d}_{j}\,,\psi^{b}_{j}\psi^{c}_{j}\,,\psi^{b}_{j}\psi^{d}_{j}\,,\psi^{c}_{j}\psi^{d}_{j}\,,\psi^{a}_{j}\psi^{b}_{j}\psi^{c}_{j}\psi^{d}_{j}\}\,.\] (38)
Indeed, due to the anticommutation relations of the Majorana operators and the form of the Hamiltonian \(H\), it is easy to see that \(\overline{\mathcal{U}}(t)\) cannot contain terms with an odd number of Majorana operators at any given site \(j\). Hence, the dynamics of \(\ket{\overline{\mathcal{U}}(t)}\rangle\) takes place in the Hilbert space generated by the vectors
\[\ket{\alpha_{1}\ldots\alpha_{N}}:=\ket{\mathcal{O}_{1}^{\alpha_{1}}\mathcal{O} _{2}^{\alpha_{2}}\ldots\mathcal{O}_{N}^{\alpha_{N}}}\rangle\,.\] (39)
Here, \(\alpha_{j}\in\{1,ab,ac,ad,bc,bd,cd,abcd\}\), with the convention \(\mathcal{O}_{j}^{1}=\mathbb{1}\), \(\mathcal{O}_{j}^{ab}=\psi^{a}_{j}\psi^{b}_{j}\), \(\ldots\), \(\mathcal{O}_{j}^{abcd}=\psi^{a}_{j}\psi^{b}_{j}\psi^{c}_{j}\psi^{d}_{j}\), i.e. the ordered set \(\{\mathcal{O}_{j}^{\alpha}\}_{\alpha=1}^{abcd}\) coincides with \(I_{j}\) in Eq. (38).
Eq. (39) defines the previously announced mapping to a system of \(N\) qudits, as one can interpret
\[\ket{\alpha_{1}\ldots\alpha_{N}}=\ket{\alpha_{1}}\otimes\ldots\otimes\ket{ \alpha_{N}}\in\mathcal{K}_{N}\,,\] (40)
where \(\mathcal{K}_{N}=h_{1}\otimes\ldots\otimes h_{N}\) and \(h_{j}\simeq\mathbb{C}^{8}\) is the space generated by \(\{\ket{1}\,,\ket{ab}\,,\ldots\,,\ket{abcd}\}\). In this picture, the differential equation (34) is equivalent to a quench problem in \(\mathcal{K}_{N}\): the system is prepared in the initial product state
\[\ket{\overline{\mathcal{U}}(0)}\rangle=\ket{\mathbb{1}}\rangle=\ket{1}\otimes \ket{1}\otimes\ldots\otimes\ket{1}\,,\] (41)
and left to evolve according to the differential equation
\[\frac{{\rm d}}{{\rm d}t}\ket{\overline{\mathcal{U}}(t)}\rangle=H \ket{\overline{\mathcal{U}}(t)}\rangle\,.\] (42)
Here, \(H\) [not to be confused with \(H_{\rm SYK}\) in (1)] is an operator acting on \(\mathcal{K}_{N}\) which plays the role of the Hamiltonian driving the imaginary-time dynamics. The precise form of \(H\) in terms of local operators can be derived by computing the action on the basis operators (37) of the left multiplication by \(L\) in (35); however, even without doing this explicitly, it is easy to show that \(H\) is invariant under any permutation of the sites in \(\mathcal{K}_{N}\). This comes from the fact that the operator \(L\) in (35) is left unchanged under the exchange of the pairs \(\psi^{\alpha}_{i}\psi^{\beta}_{i}\) and \(\psi^{\alpha}_{j}\psi^{\beta}_{j}\) for any choice of \(i\) and \(j\). Since the initial state (41) also enjoys the same symmetry, we can conclude that the dynamics of \(\ket{\overline{\mathcal{U}}(t)}\rangle\) takes place in the subspace \(\mathcal{S}_{N}\subset\mathcal{K}_{N}\) which is invariant under arbitrary permutations of the sites. This is of course a great simplification for our problem. The permutational symmetry of the Hamiltonian \(H\) is “emergent” in the sense that it manifests itself only after taking averages over the Brownian disorder, while the Hamiltonian \(H_{\rm SYK}\) in Eq. (1) does not display this symmetry for individual random realizations.
In order to study the dynamics in this subspace, we introduce the basis vectors \(\ket{\mathcal{O}_{\vec{n}}}\rangle\) for \(\mathcal{S}_{N}\)
\[\ket{\mathcal{O}_{\vec{n}}}\rangle=\ket{n_{1}\,,n_{ab}\,,n_{ac}\,,n_{ad}\,,n_{bc}\,,n_{bd}\,,n_{cd}\,,n_{abcd}}=\frac{1}{\sqrt{N!\,n_{1}!\,n_{ab}!\,n_{ac}!\,n_{ad}!\,n_{bc}!\,n_{bd}!\,n_{cd}!\,n_{abcd}!}}\sum_{\pi\in S_{N}}\pi\,\ket{\underbrace{1\ldots 1}_{n_{1}}\,\underbrace{ab\ldots ab}_{n_{ab}}\ldots\underbrace{abcd\ldots abcd}_{n_{abcd}}}\,,\] (43)
where we used the same notations as in Eqs. (39), (40). Here \(\pi\) is the unitary operator associated with a generic element in the symmetric group \(S_{N}\), whose action permutes different sites in \(\mathcal{K}_{N}\). Note that, since the sum runs over all the permutations, not all the elements in the sum are linearly independent.
The basis vectors (43) of the permutation symmetric space \(\mathcal{S}_{N}\) are labeled by sets of \(8\) integers \(\{n_{j}\}\), satisfying \(\sum_{k}n_{k}=N\), where each integer \(n_{k}\) [with \(k=1\,,ab\,,\ldots\,,abcd]\) “counts” the number of qudits in the level associated with \(k\). In fact, it is possible to employ a more convenient representation, by viewing the state (43) as an \(8\)-mode Fock state generated by bosonic creation operators acting on a vacuum \(\ket{\Omega}\). In particular, we have the identification
\[\ket{n_{1},\ldots,n_{abcd}}=\frac{1}{\sqrt{n_{1}!\cdots n_{abcd}! }}(a_{1}^{\dagger})^{n_{1}}(a_{ab}^{\dagger})^{n_{ab}}\cdots(a_{abcd}^{\dagger })^{n_{abcd}}\ket{\Omega}\,.\] (44)
Here, each operator \(a_{k}^{\dagger}\) creates a collective mode corresponding to the level associated with \(k\). In this language, the initial state (41) is written as \(\ket{\overline{\mathcal{U}}(0)}\rangle=\frac{1}{\sqrt{N!}}\left(a_{1}^{\dagger }\right)^{N}\ket{\Omega}\).
This representation is particularly convenient, due to the fact that the Hamiltonian \(H\) in Eq. (42) can be written in terms of the same bosonic operators appearing in Eq. (44):
\[H=\frac{1}{N^{3}}\left(-2\binom{N}{4}+\frac{1}{4!}\sum_{r=1}^{6}(-1)^{\gamma_{ r}}\left[X_{r}^{4}-X_{r}^{2}(-6N+8)+3N(N-2)\right]\right)\,,\] (45)
where \(X_{r}\) is a bilinear operator of bosons. The explicit form of \(X_{r}\) is derived in Sec. V.1, cf. Eqs. (81)-(86). A formal solution to the problem of computing \(\overline{\mathcal{U}}(t)\) is then obtained as
\[\ket{\overline{\mathcal{U}}(t)}\rangle=e^{Ht}\ket{\overline{\mathcal{U}}(0)} \rangle=e^{Ht}\frac{1}{\sqrt{N!}}\left(a_{1}^{\dagger}\right)^{N}\ket{\Omega}\,.\] (46)
From its explicit form, one can see that the Hamiltonian \(H\) commutes with the operator \(\sum_{j=1}^{8}a^{\dagger}_{j}a_{j}\), which “counts” the total number of bosonic modes; accordingly, the evolved state (46) always belongs to the finite-dimensional Hilbert space generated by the basis vectors (44). However, the dimension of the latter is \(D=\binom{N+7}{7}\), which grows as \(N^{7}\), strongly limiting any numerical computation based on a brute force implementation of Eq. (46). Luckily, it is possible to show that the Hamiltonian \(H\) has additional symmetries, which are unveiled by means of an appropriate Bogoliubov transformation
\[a_{n}=\frac{1}{\sqrt{8}}C_{n,m}b_{m}\,,\] (47)
further reducing the dimension of the effective Hilbert space explored by the dynamics. The matrix elements \(C_{n,m}\) are reported in Sec. V.1 [cf. Eq. (87)]. Then, from the form of the Hamiltonian \(H\) in terms of the modes \(b_{j}\) [cf. Eqs. (88)–(93)], one obtains that the number operators
\[n_{1,2}:=b_{1}^{\dagger}b_{1}+b_{2}^{\dagger}b_{2}\,,\ n_{3,4}:= b_{3}^{\dagger}b_{3}+b_{4}^{\dagger}b_{4}\] (48)
\[n_{5,6}:=b_{5}^{\dagger}b_{5}+b_{6}^{\dagger}b_{6}\,,\ n_{7,8}:= b_{7}^{\dagger}b_{7}+b_{8}^{\dagger}b_{8}\] (49)
are conserved, namely they commute with \(H\). Of course, the initial state can also be expressed in terms of the modes introduced in Eq. (47). Using the explicit form of \(C_{n,m}\), we obtain
\[\ket{\overline{\mathcal{U}}(0)}\rangle=\frac{1}{\sqrt{N!}\sqrt{8}^{N}}\left(b_{1}^{\dagger}-b_{2}^{\dagger}-b_{3}^{\dagger}-b_{4}^{\dagger}+b_{5}^{\dagger}+b_{6}^{\dagger}+b_{7}^{\dagger}-b_{8}^{\dagger}\right)^{N}\ket{\Omega}\] (50)
and find that the total conserved number \(n_{1,2}+n_{3,4}+n_{5,6}+n_{7,8}\) is \(N\).
As we will see in the next section, these formulas allow us to work with effective Hilbert spaces whose dimensions grow either linearly or quadratically with \(N\), and hence to provide numerically exact results for very large system sizes.
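To make these dimension counts concrete, the short sketch below (ours; the values of \(N\) are illustrative) enumerates the occupation vectors of the 8-mode Fock basis (44) by stars and bars, verifies that their number is \(\binom{N+7}{7}\), and contrasts this growth with the linearly and quadratically growing sectors exploited in the next subsection.

```python
from itertools import combinations
from math import comb

# Stars-and-bars enumeration of the 8-mode Fock basis (44) with total occupation N.

def symmetric_fock_basis(N, modes=8):
    for bars in combinations(range(N + modes - 1), modes - 1):
        cuts = (-1,) + bars + (N + modes - 1,)
        yield tuple(cuts[i + 1] - cuts[i] - 1 for i in range(modes))

N = 4
assert len(list(symmetric_fock_basis(N))) == comb(N + 7, 7)

# The full symmetric sector grows as N^7/7!, while the sectors actually explored
# by the OTOC and entropy calculations grow only as N and N^2, respectively.
for n in (10**2, 10**4, 10**6):
    print(n, comb(n + 7, 7))
```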
### The OTOC and the tripartite information
We now discuss the last step of our method, namely the computation of the matrix elements of the form (22). Let us consider the most general OTOC
\[\mathcal{F}_{(p,n,m)}(t)=\frac{1}{2^{N/2}}{\rm tr}\left\{\Phi^{(p,n)}(t)\Phi^{(p,m)}(0)\Phi^{(p,n)}(t)\Phi^{(p,m)}(0)\right\}\,,\] (51)
where we introduced
\[\Phi^{(p,n)} =\psi_{i_{1}}\cdots\psi_{i_{p}}\,\psi_{j_{1}}\cdots\psi_{j_{n}}\,,\] (52)
\[\Phi^{(p,m)} =\psi_{i_{1}}\cdots\psi_{i_{p}}\,\psi_{k_{1}}\cdots\psi_{k_{m}}\,,\] (53)
and where all indices are different, i.e. the operators have only \(p\) Majorana fermions in common. Considering Eq. (22), we can expand \(\overline{\mathcal{U}}(t)=\overline{U\otimes U^{\ast}\otimes U\otimes U^{\ast}}\) into the basis of operators \(\mathcal{O}_{\vec{n}}\) corresponding to the vector (43) in \({\rm End}(\mathcal{H}_{N}^{\otimes 4})\). We obtain
\[\mathcal{F}_{(p,n,m)}(t)= \sum_{\vec{n}}c_{\vec{n}}(t)\bra{L}\left(\mathcal{O}_{\vec{n}} \right)\ket{R}\,,\] (54)
where the sum runs over all the sets \(\vec{n}=\{n_{j}\}\) with \(j=1,ab,\ldots,abcd\) and \(\sum_{j}n_{j}=N\), while \(c_{\vec{n}}(t)\) are the coefficients of \(\overline{\mathcal{U}}(t)\) in the basis of the operators \(\mathcal{O}_{\vec{n}}\). One can now interpret the sum (54) as the scalar product between an appropriate state \(\ket{W_{(p,n,m)}}\rangle\in{\rm End}(\mathcal{H}_{N}^{\otimes 4})\) and \(\ket{\overline{\mathcal{U}}(t)}\rangle\); namely we can write
\[\mathcal{F}_{(p,n,m)}(t)=\langle\langle W_{(p,n,m)}|\overline{\mathcal{U}}(t)\rangle\rangle\,.\] (55)
The whole problem of extracting the numerical value of the OTOC from the knowledge of \(\overline{\mathcal{U}}(t)\) then boils down to writing down explicitly \(\ket{W_{(p,n,m)}}\rangle\). After this is done, one can straightforwardly compute the overlap (55).
The derivation of the explicit form of \(\ket{W_{(p,n,m)}}\rangle\) is however rather technical, and for this reason we postpone it to Sec. V.2. The final result, instead, is extremely simple, and reads
\[|W_{(p,n,m)}\rangle\rangle=\frac{\sqrt{8}^{N}}{\sqrt{N!}}(-1)^{m( m-1)/2+n(n-1)/2+nm}\\ \times(-b_{3}^{\dagger})^{p}(-b_{2}^{\dagger})^{n}(-b_{4}^{ \dagger})^{m}(b_{1}^{\dagger})^{N-p-n-m}|\Omega\rangle\,,\] (56)
where \(\ket{\Omega}\) and \(b_{j}\) were introduced in Eqs. (44) and (47) respectively.
Surprisingly, one can also express the Rényi-\(2\) entropies entering in the definition of the tripartite information (14) in the same form. More precisely, choosing the same conventions as Fig. 2 for the bipartitions of input and output of the evolution operator, one can write
\[\exp\left[-S^{(2)}_{AC}(\bar{\ell})\right]=\langle\langle W_{S^{(2)}_{AC}(\bar{\ell})}|\overline{\mathcal{U}}(t)\rangle\rangle\,,\] (57)
\[\exp\left[-S^{(2)}_{AD}(\bar{\ell})\right]=\langle\langle W_{S^{(2)}_{AD}(\bar{\ell})}|\overline{\mathcal{U}}(t)\rangle\rangle\,.\] (58)
Here \(\bar{\ell}\) is the length of \(B\) and \(D\) (chosen to be of the same size), while we will use \(\ell\) for the length of the regions \(A\) and \(C\).
Once again, we refer the reader to Sec. V, where this is explicitly shown, while in the following we report the final result of this analysis, which gives
\[|W_{S^{(2)}_{AC}(\bar{\ell})}\rangle\rangle=\frac{\sqrt{8}^{N}}{\sqrt{N!}} \frac{1}{2^{N}}(b_{1}^{\dagger}-b_{2}^{\dagger})^{\ell}(b_{1}^{\dagger}-b_{4}^ {\dagger})^{\bar{\ell}}|\Omega\rangle\,,\] (59)
and
\[|W_{S^{(2)}_{AD}(\bar{\ell})}\rangle\rangle=\frac{\sqrt{8}^{N}}{\sqrt{N!}} \frac{1}{2^{N/2+\bar{\ell}}}(b_{1}^{\dagger})^{\ell}(b_{1}^{\dagger}-b_{2}^{ \dagger}+b_{3}^{\dagger}-b_{4}^{\dagger})^{\bar{\ell}}|\Omega\rangle\,.\] (60)
Similar formulas could in principle be derived also for more general choices of the bipartitions of input and output. This, however, would introduce additional technical difficulties, so we do not derive them here.
<figure><img src="content_image/1908.00775/x5.png"><figcaption>Figure 5: OTOCs Fx,x(t) and Fx,y(t) of single-site Majorana fermions.Subfigures (a) and (b) show our numerical results for increasing system sizesfor operators on the same site (from top to bottom) and on different sites(from left to right) respectively. In Subfigure (a) the black dashed line isthe analytic prediction (65). Subfigure (c): the two different OTOCs arereported in the same plot, where the dynamics after the scrambling time isseen to coincide.</figcaption></figure>
It is now very important to comment on the form of the formulas presented above, as it allows us to reduce the computational cost required to obtain the physical quantities of interest. Let us first consider the case of the OTOC (51), which is conveniently rewritten as
\[\mathcal{F}_{(p,n,m)}(t)=\langle\langle W_{(p,n,m)}|e^{Ht}|\overline{\mathcal{U}}(0)\rangle\rangle\,.\] (61)
Namely, in order to compute \(\mathcal{F}_{(p,n,m)}(t)\), one can evolve \(|W_{(p,n,m)}\rangle\rangle\) and then take the overlap with the state \(\ket{\overline{\mathcal{U}}(0)}\rangle\). This is important, as is best appreciated by looking at the simplest instance \(p=0,n=m=1\). In this case, Eq. (56) implies that the state \(|W_{(0,1,1)}\rangle\rangle\) belongs to the sector of the Hilbert space labeled by the quantum numbers \(n_{1,2}=N-1,n_{3,4}=1,n_{5,6}=n_{7,8}=0\), where \(n_{i,i+1}\) were introduced in Eqs. (48)–(49). Since the \(n_{j,j+1}\) are conserved by the Hamiltonian \(H\), the dynamics takes place in this sector of the Hilbert space, whose dimension can be easily seen to be \(D=N\). Accordingly, one can conveniently represent the restricted Hamiltonian in a basis consisting of \(N\) elements, and compute \(e^{Ht}|W_{(0,1,1)}\rangle\rangle\) in this basis, which allows us to reach system sizes of one million.
Similar considerations hold for the state \(\ket{W_{(p,n,m)}}\rangle\) associated with the generic OTOC (which belongs to the sector \(n_{1,2}=N-p-m\), \(n_{3,4}=p+m\), \(n_{5,6}=n_{7,8}=0\)) and for the Rényi-\(2\) entropies corresponding to (59), (60). In the latter cases, expanding
\[(b_{1}^{\dagger}-b_{4}^{\dagger})^{\bar{\ell}}= \sum_{r=0}^{\bar{\ell}}\binom{\bar{\ell}}{r}\left(b_{1}^{\dagger} \right)^{r}\left(-b_{4}^{\dagger}\right)^{\bar{\ell}-r}\,,\] (62)
\[(b_{1}^{\dagger}-b_{2}^{\dagger}+b_{3}^{\dagger} -b_{4}^{\dagger})^{\bar{\ell}}=\sum_{r=0}^{\bar{\ell}}\binom{\bar {\ell}}{r}\left(b_{1}^{\dagger}-b_{2}^{\dagger}\right)^{r}\left(b_{3}^{\dagger }-b_{4}^{\dagger}\right)^{\bar{\ell}-r}\,,\] (63)
one is left with a sum of terms, each of which requires a simulation within a Hilbert space of dimension up to \(N\bar{\ell}\sim N^{2}\). Putting everything together, we see that the computation of the quantities of interest requires us to simulate the dynamics in a Hilbert space whose dimension grows either linearly (for the OTOCs) or quadratically (for the tripartite information) with \(N\).
<figure><img src="content_image/1908.00775/x6.png"><figcaption>Figure 6: Subfigure (a): rescaled logarithmic OTOC ℓN(t) [defined in Eq.(68)]. Solid lines correspond to increasing values of N (from bottom to top),while the dashed black line is the linear ansatz y(t)=43t+ln(2). Subfigure(b): data collapse using the shift (70) for the OTOCs Fx,y(t) (with x≠y).Subfigure (c): data collapse for different OTOCs. The curves correspond (frombottom to top) to the OTOCs in Eq. (64), (71) and (72) respectively. In orderto compare the three curves, we have multiplied Fα,β by the global phase(−1)σα,β, which is −1 for (α;β)=(x,y;z,w) and 1 otherwise.</figcaption></figure>
## IV The physical results
In this section we present the main physical results of our work. We begin with the analysis of the OTOCs, and continue with the Rényi-\(2\) tripartite information introduced in Eq. (14).
### The OTOCs: numerical results
We start by presenting our numerical results for the simplest OTOC
\[\mathcal{F}_{x,y}(t)=\frac{1}{2^{N/2}}{\rm tr}\left\{\psi_{x}(t)\psi_{y}(0) \psi_{x}(t)\psi_{y}(0)\right\}\,.\] (64)
Due to the infinite range of the interactions and the disorder averages, \(\mathcal{F}_{x,y}(t)\) does not depend on the precise choice of \(x\) and \(y\), but only on whether \(x=y\) or \(x\neq y\). Both cases are displayed in Fig. 5, where we report data for increasing values of the system size \(N\). We see that \(\mathcal{F}_{x,x}(t)\) and \(\mathcal{F}_{x,y}(t)\) (with \(x\neq y\)) behave qualitatively differently at short times: the former displays an initial exponential decay, while the latter appears to remain approximately constant. In fact, based on the formulas of the previous section, one can make these statements more precise and show
\[\lim_{N\to\infty}\mathcal{F}_{x,x}(t)= -1+2\exp\left(-\frac{2}{3}t\right)\,,\] (65)
\[\lim_{N\to\infty}\mathcal{F}_{x,y}(t)= -1\,,\] (66)
where the convergence is point-wise in \(t\). This is proven in Sec. V.3. In both OTOCs \(\mathcal{F}_{x,x}(t)\) and \(\mathcal{F}_{x,y}(t)\), we see the emergence of a characteristic time \(t^{\ast}(N)\), increasing with \(N\), which is required before they begin to decay towards zero at large times. One naturally interprets \(t^{\ast}(N)\) as a scrambling time, which is also consistent with our subsequent analysis of the tripartite information. Finally, in Fig. 5\((c)\) we plot together the OTOCs for \(x=y\) and \(x\neq y\), for different system sizes. We see that after an initial time window, the two OTOCs become indistinguishable, meaning that the information on the initial operators chosen has been completely washed out by the chaotic dynamics.
In order to quantitatively characterize the dependence of the scrambling time \(t^{\ast}(N)\) on the system size, we test the short-time behavior of \(\mathcal{F}_{x,y}(t)\) against the analytical ansatz [32]
\[\mathcal{F}_{x,y}(t)\sim-1+c_{x,y}\frac{e^{\lambda_{x,y}t}}{N}\,,\] (67)
where \(c_{x,y}\) is a constant (independent of \(N\)). In particular, we compute
\[\ell_{N}(t)=\ln\left[1+\mathcal{F}_{x,y}(t)\right]+\ln N\,,\] (68)
and compare the numerical data against a linear behavior. The results are shown in Fig. 6\((a)\). We clearly see that as the system size is increased, the curves for \(\ell_{N}(t)\) approach the linear fit \(y(t)=\frac{4}{3}t+\ln(2)\), within an initial time interval that is also increasing with \(N\). In turn, this means that the ansatz (67) is valid, with the free parameters fixed as
\[\lambda_{x,y}=4/3\,,\qquad c_{x,y}=2\,.\] (69)
From this result, we can identify the scrambling time with \(t^{\ast}(N)=3\ln(N)/4\).
<figure><img src="content_image/1908.00775/x7.png"><figcaption>Figure 7: Subfigure (a): time evolution of different OTOCs corresponding toinitial strings of Majorana operators with the same length. Each curve islabeled by three integer numbers, according to the convention of Eq. (51). Wesee that in this case all the OTOCs quickly approach the same curve. Subfigure(b): large-time behavior of the logarithm of the OTOC Fx,y(t).</figcaption></figure>
The initial behavior in Eq. (67), together with Fig. 5\((b)\), suggests that a data collapse should take place if we consider the shifted functions
\[\mathcal{F}_{x,y}(t+3\ln(N)/4)\,,\] (70)
where we assumed that the parameters (69) are exact. This is plotted in Fig. 6\((b)\), where we see a remarkable data collapse at all times. In particular, the data appear to be perfectly collapsed already for \(N\gtrsim 800\).
Next, we have tested how robust the above predictions are against different choices of the local observables. We have considered in particular
\[\mathcal{F}_{x;y,z}(t)=\frac{1}{2^{N/2}}{\rm tr}\left\{\psi_{x}(t)\psi_{y}(0)\psi_{z}(0)\psi_{x}(t)\psi_{y}(0)\psi_{z}(0)\right\}\,,\] (71)
\[\mathcal{F}_{x,y;z,w}(t)=\frac{1}{2^{N/2}}{\rm tr}\left\{\psi_{x}(t)\psi_{y}(t)\psi_{z}(0)\psi_{w}(0)\psi_{x}(t)\psi_{y}(t)\psi_{z}(0)\psi_{w}(0)\right\}\,.\] (72)
We have verified that at short times the ansatz (67) is always valid, and that a data collapse always takes place using the shift in Eq. (70). Furthermore, the exponent is universal, namely it is independent of the observables chosen (while the prefactor is not). However, the OTOCs corresponding to distinct choices of local operators are quantitatively different, also after the scrambling time \(t^{\ast}(N)\), as can be appreciated from Fig. 6\((c)\). This can be interpreted by saying that, at finite times, the system retains some information on the initial observable chosen.
<figure><img src="content_image/1908.00775/x8.png"><figcaption>Figure 8: Subfigure (a): time evolution of the Rényi-2 entropies S(2)AC forthe subsystem A∪C, as displayed in Fig. 2, with ¯ℓ=10. Solid lines correspondto increasing values of N (from bottom to top), while the black dashed line isthe analytic prediction (74). Subfigure (b): time evolution of the Rényi-2entropy S(2)AD for the subsystem A∪D, with ¯ℓ=10. Note that S(2)AD is shiftedby its maximum value (N/2)ln2. Solid lines correspond to increasing values ofN (from top to bottom). Subfigure (c): time evolution of the Rényi-2tripartite information I(2)3(A:C:D), for the bipartitions A∪B, C∪D displayedin Figs. 2 and 3 , with ¯ℓ=10. Solid lines correspond to increasing values ofN (from bottom to top).</figcaption></figure>
In order to investigate this point further, we plot in Fig. 7\((a)\) different OTOCs, corresponding to distinct choices of local observables, which are labeled according to the convention of Eq. (51). The curves correspond to initial operators that all have the same length, namely that are products of the same number of fermions. In this case, we see that all the OTOCs converge to the same function (up to small corrections) after the scrambling time. Comparing with the results displayed in Fig. 6\((c)\), we can conclude the following: after the scrambling time, information regarding the specific initial observables is lost, whereas OTOCs corresponding to operators with different initial length can still be distinguished.
Finally, we have investigated the large-time exponential decay of the OTOCs. The data in Fig. 5 suggest considering an ansatz of the form
\[\mathcal{F}_{x,y}(t)\sim d_{x,y}\exp\left[-t/\tau_{N}\right]\,,\] (73)
where \(\tau_{N}\) should be asymptotically independent of \(N\). In Fig. 7\((b)\), we plot \(\ln(-\mathcal{F}_{x,y}(t))\) for large values of \(t\), and we see that the data are indeed consistent with an exponential decay of \(\mathcal{F}_{x,y}(t)\). To be quantitative, we have performed a fit of \(\ln(-\mathcal{F}_{x,y}(t))\) using \(r_{N}(t)=a-t/\tau_{N}-b/t\). For the values of time \(t\) available, we have found that the fitted \(\tau_{N}\) has a weak dependence on \(N\), with \(\tau_{N}\simeq 1.53\pm 0.04\) for \(N\simeq 10^{5}\). The fitted value appears to be independent of the choice of the local observables, up to the inaccuracy of the extrapolation method.
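The extrapolation described above amounts to a standard nonlinear least-squares fit. The sketch below only illustrates the procedure: the data array is a synthetic placeholder generated from the fit function itself (with the quoted \(\tau\simeq 1.53\) as input), standing in for the numerically computed values of \(\ln(-\mathcal{F}_{x,y}(t))\) at large times.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustration of the large-time fit r_N(t) = a - t/tau - b/t used to extract tau_N.

def r(t, a, tau, b):
    return a - t / tau - b / t

rng = np.random.default_rng(0)
t = np.linspace(5.0, 20.0, 60)
data = r(t, 0.5, 1.53, 0.3) + 0.01 * rng.normal(size=t.size)   # synthetic stand-in data

popt, pcov = curve_fit(r, t, data, p0=(0.0, 1.0, 0.0))
a_fit, tau_fit, b_fit = popt
print(tau_fit, np.sqrt(pcov[1, 1]))   # fitted decay time and its standard error
```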
### The Rényi-\(2\) tripartite information
We finally present our results for the Rényi-\(2\) tripartite information introduced in Eq. (14). As we discussed in Sec. III.3, for this quantity the effective dynamics to be computed takes place in a Hilbert space whose dimension grows quadratically with \(N\), so that we are restricted to smaller system sizes than in the case of OTOCs. Furthermore, for large subsystems the value of the entropy becomes very large, so that we also have to deal with issues of numerical precision. Overall, for the computationally worst case of bipartitions of equal size, we are able to provide data up to \(N\simeq 400\). More details on the numerical implementations are reported in Appendix C.
<figure><img src="content_image/1908.00775/x9.png"><figcaption>Figure 9: Subfigure (a): Rényi-2 entropies S(2)AC for the subsystem A∪C, asdisplayed in Fig. 2, for N=200 and different subsystem sizes ¯ℓ. Solid linescorrespond to the times t=0,0.4,1,2,4,6,15 (from bottom to top). The blackdashed line is obtained by Haar average as computed in Ref. HQRY16 . Subfigure(b): Rényi-2 entropies S(2)AC computed at t=1, for different subsystem sizes¯ℓ. Solid lines correspond to system sizes N=50,100,200,400 (from bottom totop). Subfigure (c): Rényi-2 tripartite information (14) for N=400 anddifferent subsystem sizes ¯ℓ. Solid lines correspond to the timest=0,0.4,1,2,4,6,15 (from top to bottom). The black dashed line is obtained byHaar average as computed in Ref. HQRY16</figcaption></figure>
In Fig. 8 we present data for the time evolution of the Rényi entropies of the subsystems \(A\cup C\) and \(A\cup D\), where we used the same partitions as Fig. 2. The plots correspond to fixed subsystem size and increasing \(N\). Based on the formulas of Sec. III, in this limit we are able to compute (cf. Sec. V.3)
\[\lim_{N\rightarrow\infty,\ \bar{\ell},t\ {\rm fixed}}S^{(2)}_{AC}(\bar{\ell},t)=\bar{\ell}\,\ln\frac{2}{1+e^{-2t/3}}\,.\] (74)
We see from Fig. 8\((a)\) that the numerical results are in perfect agreement with this prediction. For finite \(N\), the entropy \(S^{(2)}_{AC}(t)\) displays an initial linear increase, eventually reaching a saturation value, as expected from the traditional picture of quantum quenches. The behavior of the Rényi entropy \(S^{(2)}_{AD}(t)\) is instead not monotonic, as displayed in Fig. 8\((b)\). Indeed, one has \(S^{(2)}_{AD}(0)=(N/2)\ln 2\), which is the maximum entropy possible, so that at small times \(S^{(2)}_{AD}(t)\) has to decrease. Its dynamics is then non-trivial during the initial scrambling time \(t^{\ast}(N)\), after which it begins an exponential decay towards its large-time stationary value.
Figs. 9 and 10 show the same quantities for all possible values of the subsystem size \(\bar{\ell}\), at different times and system sizes. First, we notice that the entropies and the tripartite information are symmetric under the exchange \(\bar{\ell}\leftrightarrow\ell=N-\bar{\ell}\), as they should be. Furthermore, we see that for different values of \(\bar{\ell}\) we have the same qualitative behavior, where at large times an asymptotic value is always reached. In fact, it is possible to compute the latter exactly, as it is known that unitary evolutions driven by Brownian Hamiltonians converge in the infinite-time limit to unitary \(k\)-designs, for arbitrary positive integers \(k\) [100; 101]. As a consequence, the asymptotic properties can be computed using Haar averages. The latter, which were already computed in Ref. [9], are reported as dashed lines in Figs. 9 and 10, towards which convergence is apparent. We note that, while their infinite-time limit could be expected, the entropies undergo non-trivial dynamics at short and intermediate times. This is best appreciated by looking at the entropy \(S^{(2)}_{AD}(\bar{\ell})\) in Fig. 10. We see that up to the scrambling time \(t^{\ast}(N)\) it appears to be decreasing (precisely, its average over \(\bar{\ell}\)), while at later times it increases again. This results in the non-trivial dynamics of the tripartite information, which can become positive at short times [cf. Fig. 8\((c)\)].
<figure><img src="content_image/1908.00775/x10.png"><figcaption>Figure 10: Time evolution of the Rényi-2 entropies S(2)AD for the subsystem A∪D, as displayed in Fig. 2, for N=200 and different values of ¯ℓ. Solid lines in subfigures (a) and (b) correspond, respectively, to times t=0,0.2,0.4,1 (from top to bottom) and t=2,4,6,15 (from bottom to top). The black dashed line in subfigure (b) is obtained by Haar average as computed in Ref. [9].</figcaption></figure>
## V Deriving the key formulas
In this last section, we finally address the most technical aspects of our calculations, including several details of the method outlined in Sec. III. We start by presenting the explicit form of the Hamiltonian driving the dynamics in the four-replica space in Sec. V.1. Next, we derive the key formulas (56), (59) and (60) in Sec. V.2. Finally, in Sec. V.3 we report the proof of Eqs. (65) and (74) for the large-\(N\) limit of the OTOC \(\mathcal{F}_{x,x}(t)\), and of the Rényi entropy \(S^{(2)}_{AC}(\bar{\ell})\).
### The Hamiltonian
In this section we show how to derive the explicit form (45) of the Hamiltonian driving the imaginary-time evolution in Eq. (42), from Eq. (35). We start with the identity
\[4!\sum_{\mathclap{1\leq i<j<k<l\leq N}}x_{i}x_{j}x_{k}x_{l}=X^{4}-X^{2}(-6N+8)+3N(N-2)\,,\] (75)
with \(X=\sum_{i=1}^{N}x_{i}\), for commuting operators \(x_{i}\) satisfying \(x_{i}^{2}=-1\). This can easily be derived as follows (see, e.g., Ref. [91]). First, define
\[f_{q}=q!\sum_{\mathclap{1\leq i_{1}<\ldots<i_{q}\leq N}}x_{i_{1}}\ldots x_{i_{ q}}\,.\] (76)
Then, using \(x_{j}^{2}=-1\), it is straightforward to show
\[Xf_{q}=f_{q+1}-q(N+1-q)f_{q-1}\,,\] (77)
which yields the desired identity upon iterating the recursion starting from \(f_{0}=1\) and \(f_{1}=X\). Eq. (75) allows us to write the Hamiltonian in terms of global sums of pairs of single-site Majorana operators.
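As a quick cross-check of Eq. (75) (not part of the original derivation), one can model the commuting operators \(x_{i}\) by polynomial symbols and impose \(x_{i}^{2}=-1\) by reducing modulo the relations \(x_{i}^{2}+1\); the following sketch does this with sympy for a small \(N\).

```python
# Hedged sketch: verify the identity (75) for commuting symbols with x_i^2 = -1
# by polynomial reduction modulo the relations x_i^2 + 1 (small N only).
import itertools
import sympy as sp

N = 6
xs = sp.symbols(f"x1:{N + 1}")                      # x1, ..., xN
X = sum(xs)

lhs = sp.factorial(4) * sum(sp.Mul(*c) for c in itertools.combinations(xs, 4))
rhs = X**4 - X**2 * (-6 * N + 8) + 3 * N * (N - 2)

# reduce lhs - rhs modulo the ideal generated by {x_i^2 + 1}, i.e. set x_i^2 = -1
_, remainder = sp.reduced(sp.expand(lhs - rhs), [x**2 + 1 for x in xs], *xs)
assert remainder == 0                               # Eq. (75) holds for this N
```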
Next, suppose that for a single-site operator \(x_{i}\) we have
\[x_{i}\mathcal{O}_{i}^{\alpha}=c(\alpha)\mathcal{O}_{i}^{f(\alpha)}\ \forall \alpha\in\{1,ab,\ldots,abcd\}\,,\] (78)
where \(\mathcal{O}^{\alpha}_{j}\) have been defined after Eq. (39). Then one can make the following identification
\[X=\sum_{\alpha=1}^{abcd}c(\alpha)a_{f(\alpha)}^{\dagger}a_{\alpha}\,,\] (79)
namely the action of \(X\) on the permutation symmetric basis (43) is the same as the r.h.s. of Eq. (79), as can be checked directly. From this, the final form of the Hamiltonian in terms of bosonic modes \(a_{j}\) is readily obtained, and reads
\[H=\frac{1}{N^{3}}\left(-2\binom{N}{4}+\frac{1}{4!}\sum_{r=1}^{6}(-1)^{\gamma_{ r}}\left[X_{r}^{4}-X_{r}^{2}(-6N+8)+3N(N-2)\right]\right),\] (80)
where \((-1)^{\gamma_{r}}\) is given in (36), while the operators \(X_{r}\) are defined as
\[X_{{ab}}=a^{\dagger}_{{ab}}a_{{1}}-a^{\dagger}_{{1}}a_{{ab}}-a^{ \dagger}_{{bc}}a_{{ac}}-a^{\dagger}_{{bd}}a_{{ad}}+a^{\dagger}_{{ac}}a_{{bc}}+ a^{\dagger}_{{ad}}a_{{bd}}+a^{\dagger}_{{abcd}}a_{{cd}}-a^{\dagger}_{{cd}}a_{{ abcd}}\,,\] (81)
\[X_{{ac}}=a^{\dagger}_{{ac}}a_{{1}}+a^{\dagger}_{{bc}}a_{{ab}}-a^ {\dagger}_{{1}}a_{{ac}}-a^{\dagger}_{{cd}}a_{{ad}}-a^{\dagger}_{{ab}}a_{{bc}}- a^{\dagger}_{{abcd}}a_{{bd}}+a^{\dagger}_{{ad}}a_{{cd}}+a^{\dagger}_{{bd}}a_{{ abcd}}\,,\] (82)
\[X_{{ad}}=a^{\dagger}_{{ad}}a_{{1}}+a^{\dagger}_{{bd}}a_{{ab}}+a^ {\dagger}_{{cd}}a_{{ac}}-a^{\dagger}_{{1}}a_{{ad}}+a^{\dagger}_{{abcd}}a_{{bc} }-a^{\dagger}_{{ab}}a_{{bd}}-a^{\dagger}_{{ac}}a_{{cd}}-a^{\dagger}_{{bc}}a_{{ abcd}}\,,\] (83)
\[X_{{bc}}=a^{\dagger}_{{bc}}a_{{1}}-a^{\dagger}_{{ac}}a_{{ab}}+a^ {\dagger}_{{ab}}a_{{ac}}+a^{\dagger}_{{abcd}}a_{{ad}}-a^{\dagger}_{{1}}a_{{bc} }-a^{\dagger}_{{cd}}a_{{bd}}+a^{\dagger}_{{bd}}a_{{cd}}-a^{\dagger}_{{ad}}a_{{ abcd}}\,,\] (84)
\[X_{{bd}}=a^{\dagger}_{{bd}}a_{{1}}-a^{\dagger}_{{ad}}a_{{ab}}-a^ {\dagger}_{{abcd}}a_{{ac}}+a^{\dagger}_{{ab}}a_{{ad}}+a^{\dagger}_{{cd}}a_{{bc }}-a^{\dagger}_{{1}}a_{{bd}}-a^{\dagger}_{{bc}}a_{{cd}}+a^{\dagger}_{{ac}}a_{{ abcd}}\,,\] (85)
\[X_{{cd}}=a^{\dagger}_{{cd}}a_{{1}}+a^{\dagger}_{{abcd}}a_{{ab}}- a^{\dagger}_{{ad}}a_{{ac}}+a^{\dagger}_{{ac}}a_{{ad}}-a^{\dagger}_{{bd}}a_{{bc }}+a^{\dagger}_{{bc}}a_{{bd}}-a^{\dagger}_{{1}}a_{{cd}}-a^{\dagger}_{{ab}}a_{{ abcd}}\,.\] (86)
Inspection of Eq. (80) reveals that the Hamiltonian displays several conservation laws. It is natural to look for a Bogoliubov transformation of the modes which makes some of the symmetries apparent. In addition, one would also like this transformation to simplify the convoluted \(a\)-mode expression \(\ket{\mathcal{W}_{(p,n,m)}}\rangle\) for the OTOCs (111). Motivated by this, we look for a transformation where the first boson \(b_{1}\) is associated with the macroscopically occupied mode in Eq. (111), and choose the other modes \(b_{j}\) to satisfy canonical commutation relations. While this can be done in different ways, it turns out that a particularly convenient transformation is the one defined by Eq. (47), where \(C_{n,m}\) is the element in row \(n\) and column \(m\) of the matrix
\[C=\left(\begin{array}[]{cccccccc}1&-1&-1&-1&1&1&1&-1\\ i&-i&i&i&i&-i&-i&-i\\ 1&1&-1&1&-1&1&-1&-1\\ i&i&i&-i&-i&-i&i&-i\\ i&i&i&-i&i&i&-i&i\\ -1&-1&1&-1&-1&1&-1&-1\\ i&-i&i&i&-i&i&i&i\\ -1&1&1&1&1&1&1&-1\\ \end{array}\right)\,.\] (87)
Indeed, after this Bogoliubov transformation the form of the Hamiltonian immediately reveals the presence of additional symmetries which can be directly exploited for our computations.
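As a minimal numerical check (an illustration, not taken from the original text), one can verify that the matrix \(C\) in Eq. (87) satisfies \(CC^{\dagger}=8\,\mathbb{1}\), i.e. it is proportional to a unitary matrix; the overall normalization entering Eq. (47) is not reproduced here.

```python
# Check that C C^dagger = 8 * identity, so that a rescaled C is unitary and the
# b-modes defined through Eq. (47) can satisfy canonical commutation relations.
import numpy as np

i = 1j
C = np.array([
    [ 1, -1, -1, -1,  1,  1,  1, -1],
    [ i, -i,  i,  i,  i, -i, -i, -i],
    [ 1,  1, -1,  1, -1,  1, -1, -1],
    [ i,  i,  i, -i, -i, -i,  i, -i],
    [ i,  i,  i, -i,  i,  i, -i,  i],
    [-1, -1,  1, -1, -1,  1, -1, -1],
    [ i, -i,  i,  i, -i,  i,  i,  i],
    [-1,  1,  1,  1,  1,  1,  1, -1],
])

assert np.allclose(C @ C.conj().T, 8 * np.eye(8))
```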
It is straightforward to rewrite the operators (81)–(86), and hence the Hamiltonian (80), in terms of the new modes \(b_{j}\). In particular, we have
\[X_{ab} =i(b_{1}^{\dagger}b_{2}+b_{2}^{\dagger}b_{1}+b_{3}^{\dagger}b_{4} +b_{4}^{\dagger}b_{3}-b_{5}^{\dagger}b_{5}+b_{6}^{\dagger}b_{6}+b_{7}^{\dagger }b_{7}-b_{8}^{\dagger}b_{8})\,,\] (88)
\[X_{ac} =-b_{1}^{\dagger}b_{2}+b_{2}^{\dagger}b_{1}+b_{3}^{\dagger}b_{4}- b_{4}^{\dagger}b_{3}-b_{5}^{\dagger}b_{6}+b_{6}^{\dagger}b_{5}+b_{7}^{\dagger} b_{8}-b_{8}^{\dagger}b_{7}\,,\] (89)
\[X_{ad} =-i(b_{1}^{\dagger}b_{1}-b_{2}^{\dagger}b_{2}-b_{3}^{\dagger}b_{3 }+b_{4}^{\dagger}b_{4}-b_{5}^{\dagger}b_{6}-b_{6}^{\dagger}b_{5}-b_{7}^{ \dagger}b_{8}-b_{8}^{\dagger}b_{7})\,,\] (90)
\[X_{bc} =-i(b_{1}^{\dagger}b_{1}-b_{2}^{\dagger}b_{2}-b_{3}^{\dagger}b_{3 }+b_{4}^{\dagger}b_{4}+b_{5}^{\dagger}b_{6}+b_{6}^{\dagger}b_{5}+b_{7}^{ \dagger}b_{8}+b_{8}^{\dagger}b_{7})\,,\] (91)
\[X_{bd} =b_{1}^{\dagger}b_{2}-b_{2}^{\dagger}b_{1}-b_{3}^{\dagger}b_{4}+b _{4}^{\dagger}b_{3}-b_{5}^{\dagger}b_{6}+b_{6}^{\dagger}b_{5}+b_{7}^{\dagger}b _{8}-b_{8}^{\dagger}b_{7}\,,\] (92)
\[X_{cd} =i(b_{1}^{\dagger}b_{2}+b_{2}^{\dagger}b_{1}+b_{3}^{\dagger}b_{4} +b_{4}^{\dagger}b_{3}+b_{5}^{\dagger}b_{5}-b_{6}^{\dagger}b_{6}-b_{7}^{\dagger }b_{7}+b_{8}^{\dagger}b_{8})\,.\] (93)
### Extracting OTOCs and Entropies
We now wish to show how to derive an explicit expression for the vectors \(|W_{(p,n,m)}\rangle\rangle\), \(|W_{S^{(2)}_{AC}(\bar{\ell})}\rangle\rangle\) and \(|W_{S^{(2)}_{AD}(\bar{\ell})}\rangle\rangle\) in Eqs. (56), (59) and (60), respectively. In order to simplify this task, we start by proving the following lemma. Let
\[|W_{N}\rangle\rangle =\sum_{n_{1}+\cdots+n_{abcd}=N}\frac{1}{\sqrt{N!n_{1}!\cdots n_{ abcd}!}}\] (94)
\[\ \sum_{\pi\in S_{N}}\pi\prod_{x=1}^{N}\alpha_{x}(z_{x})\pi^{-1}| n_{1},\ldots,n_{abcd}\rangle,\] (95)
where \(z_{x}\in\{1,ab,ac,ad,bc,bd,cd,abcd\}\) is the operator at site \(x\) for the permutation \(\pi\), cf. (43), and \(\alpha_{x}(z_{x})\) are constants. Then
\[\ket{W_{N}}\rangle =\frac{1}{\sqrt{N!}}\prod_{x=1}^{N}\left(\sum_{z=1}^{abcd}\alpha_ {x}(z)a_{z}^{\dagger}\right)\ket{\Omega}.\] (96)
The equivalence between Eqs. (95) and (96) is best established by directly expanding the product in Eq. (96), and regrouping the different terms.
Next, we introduce some notations to handle our subsequent calculations in a compact way. In particular, let us rewrite the basis operator \(\mathcal{O}_{\vec{n}}\) in Eq. (43) as
\[\mathcal{O}_{\vec{n}}=\frac{1}{\sqrt{N!n_{1}!\cdots n_{abcd}!}}\sum_{\pi\in S_{N}}\pi\,\Psi_{ab}^{ab}\Psi_{ac}^{ac}\Psi_{ad}^{ad}\Psi_{bc}^{bc}\Psi_{bd}^{bd}\Psi_{cd}^{cd}\Psi_{abcd}^{abcd}\,\pi^{-1}\,.\] (97)
Here we introduced the notations \(\Psi^{ab}_{ab}=\prod_{p\in I_{ab}}\psi_{p}^{a}\psi_{p}^{b}\), \(\Psi^{ac}_{ac}=\prod_{p\in I_{ac}}\psi_{p}^{a}\psi_{p}^{c}\), \(\ldots\), \(\Psi^{abcd}_{abcd}=\prod_{p\in I_{abcd}}\psi_{p}^{a}\psi_{p}^{b}\psi_{p}^{c}\psi_{p}^{d}\), where \(I_{ab}\), \(I_{ac}\), \(\ldots\), \(I_{abcd}\) are ordered, pairwise disjoint subsets of \(\{1,2,\ldots,N\}\), such that \(|I_{ab}|=n_{ab}\), \(\ldots\), \(|I_{abcd}|=n_{abcd}\). In this notation, upper indices in \(\Psi^{\alpha}_{\beta}\) indicate the type of single-site operators, while lower indices specify which subset of \(\{1,2,\ldots,N\}\) the product of such operators runs over. Consistent with this convention, we also introduce
\[\Psi^{a}_{a}=\prod_{p\in I_{a}}\psi_{p}^{a}\,,\] (98)
where \(I_{a}\) is the ordered set defined by
\[I_{a}=I_{ab}\cup I_{ac}\cup I_{ad}\cup I_{abcd}\,,\] (99)
and whose elements are ordered as they appear in Eq. (99) (namely, the first \(n_{ab}\) elements of \(I_{a}\) are those of \(I_{ab}\) with the same order, followed by those of \(I_{ac}\), and so on). Analogously, one can define \(\Psi^{b}_{b}\), \(\Psi^{c}_{c}\) and \(\Psi^{d}_{d}\), where
\[I_{b}=I_{ab}\cup I_{bc}\cup I_{bd}\cup I_{abcd}\,,\] (100)
\[I_{c}=I_{ac}\cup I_{bc}\cup I_{cd}\cup I_{abcd}\,,\] (101)
\[I_{d}=I_{ad}\cup I_{bd}\cup I_{cd}\cup I_{abcd}\,,\] (102)
with the elements of \(I_{b}\), \(I_{c}\) and \(I_{d}\) ordered as they appear in Eqs. (100), (101) and (102), respectively. Finally, let us consider two disjoint subsets \(A\) and \(B\) with \(A\cup B=\{1,\ldots,N\}\). Then, we define
\[\Psi^{ab}_{abA}=\prod_{p\in I_{ab}\cap A}\psi_{p}^{a}\psi_{p}^{b} \,,\qquad\Psi^{ab}_{abB}=\prod_{p\in I_{ab}\cap B}\psi_{p}^{a}\psi_{p}^{b}\,,\] (103)
\[\Psi^{a}_{aA}=\prod_{p\in I_{a}\cap A}\psi_{p}^{a}\,,\qquad\Psi^{ a}_{aB}=\prod_{p\in I_{a}\cap B}\psi_{p}^{a}\,,\] (104)
and analogously for the other cases. Using these notations, we can rewrite
\[\mathcal{O}_{\vec{n}} =\mathcal{N}\sum_{π\in S_{N}}π\,Ψ_{abA}^{ab}Ψ_{acA}^{ac}Ψ_{adA}^{ ad}Ψ_{bcA}^{bc}Ψ_{bdA}^{bd}Ψ_{cdA}^{cd}Ψ_{abcdA}^{abcd}\,Ψ_{abB}^{ab}Ψ_{acB}^{ ac}Ψ_{adB}^{ad}Ψ_{bcB}^{bc}Ψ_{bdB}^{bd}Ψ_{cdB}^{cd}Ψ_{abcdB}^{abcd}π^{-1}\]
\[=\mathcal{N}\sum_{π\in S_{N}}π\,Ψ_{aB}^{a}Ψ_{bB}^{b}Ψ_{cB}^{c}Ψ_{ dB}^{d}\,Ψ_{aA}^{a}Ψ_{bA}^{b}Ψ_{cA}^{c}Ψ_{dA}^{d}\,(-1)^{\gamma_{A}+\gamma_{B} }π^{-1}\]
\[=\mathcal{N}\sum_{π\in S_{N}}π\,Ψ_{aB}^{a}Ψ_{aA}^{a}Ψ_{bB}^{b}Ψ_{ bA}^{b}Ψ_{cB}^{c}Ψ_{cA}^{c}Ψ_{dB}^{d}Ψ_{dA}^{d}\,(-1)^{γ_{A}+γ_{B}+δ}\pi^{-1}\,,\] (105)
where \(\mathcal{N}=(N!n_{1}!\cdots n_{abcd}!)^{-1/2}\) is the normalization. In order to write down the first line, we used that even strings of different fermions commute, while sorting the Majorana operators in the second line resulted in the phases \((-1)^{\gamma_{A}}\), \((-1)^{\gamma_{B}}\). We will not write \(\gamma_{A}\), \(\gamma_{B}\) explicitly, as they will cancel at the end of the calculations. Conversely, one can easily compute the phase \((-1)^{\delta}\) appearing in the last line of Eq. (105):
\[(-1)^{\delta} =(-1)^{n_{aA}(n_{bB}+n_{cB}+n_{dB})+n_{bA}(n_{cB}+n_{dB})+n_{cA}n _{dB}}\]
\[=(-1)^{(n_{aB}^{2}+n_{bB}^{2}+n_{cB}^{2}+n_{dB}^{2})/2}\,.\] (106)
Here we have used that \(n_{a}=n_{aB}+n_{aA}\) and \(n_{aB}+n_{bB}+n_{cB}+n_{dB}\) are even, which can be seen by writing explicitly \(n_{aB}=n_{abB}+n_{acB}+n_{adB}+n_{abcdB}\) etc.
Eq. (105) is the starting point to derive the explicit form of the vectors \(|W_{(p,n,m)}\rangle\rangle\), \(|W_{S^{(2)}_{AC}(\bar{\ell})}\rangle\rangle\) and \(|W_{S^{(2)}_{AD}(\bar{\ell})}\rangle\rangle\) for OTOCs and Rényi entropies respectively. These are treated in the following, in dedicated subsections.
#### v.2.1 The OTOCs
We wish to calculate the OTOC (51) of the initial operators (52), (53). Starting from (22)-(24), we can insert (105) for the correlated time evolution operator, where simply \(A=\{1,\ldots,N\}\) and \(B=\emptyset\), so that there is only one \((-1)^{\gamma}\), no \((-1)^{\delta}\), and we can omit the labels \(A,B\). Since this expression involves products of even numbers of Majorana fermions acting on the same “replica” space, we may switch to the operators \(\chi_{j}^{\alpha}\) in Eqs. (25), (26), and reverse the steps used to derive Eq. (22) from Eq. (18). This gives
\[\mathcal{F}_{(p,n,m)}=\sum_{\vec{n}}\mathcal{N}c_{\vec{n}}(t)\sum_{\pi\in S_{N}}\pi\operatorname{tr}\left[\Phi^{(p,n)}\Psi_{a}\Phi^{(p,m)}\Psi_{b}^{\dagger}\Phi^{(p,n)}\Psi_{c}\Phi^{(p,m)}\Psi_{d}^{\dagger}\right](-1)^{\gamma}\pi^{-1}/2^{N/2}\,,\] (107)
where \(\Phi^{(p,n)}\), \(\Phi^{(p,m)}\) are defined in Eqs. (52), (53). Here, we simply wrote \(\Psi_{a}\), \(\Psi_{b}\), \(\Psi_{c}\) and \(\Psi_{d}\) without superscript, as we only have a single copy of the fermionic space. More explicitly, we have, for instance
\[\Psi_{a}=\prod_{p\in I_{a}}\psi_{p}\,,\] (108)
where \(I_{a}\) is defined in (99).
Next we move the operator pairs \(\Phi^{(p,n)}\) and \(\Phi^{(p,m)}\) together such that they cancel. Of course, this generates phases through the anti-commutation relations of the fermions; we obtain
\[\mathcal{F}_{(p,n,m)}=\sum_{\vec{n}}\mathcal{N}c_{\vec{n}}(t)\sum_{\pi\in S_{N}}\pi\operatorname{tr}\left[\Psi_{a}\Psi_{b}^{\dagger}\Psi_{c}\Psi_{d}^{\dagger}\right](-1)^{\gamma}(-1)^{n_{a,c}\left(\{i_{\alpha}\}\right)+n_{a,b}\left(\{j_{\alpha}\}\right)+n_{b,c}\left(\{k_{\alpha}\}\right)}(-1)^{m(m-1)/2+n(n-1)/2+mn}\pi^{-1}/2^{N/2}\,,\] (109)
where \(n_{a,c}\left(\{i_{\alpha}\}\right)\) is the number of indices in \(\{i_{\alpha}\}_{\alpha=1}^{p}\) which also belong to \(I_{a}\cup I_{c}\) [as defined in Eqs. (99), (101)]. Analogously, \(n_{a,b}\left(\{j_{\alpha}\}\right)\) and \(n_{b,c}\left(\{k_{\alpha}\}\right)\) are, respectively, the numbers of indices in \(\{j_{\alpha}\}_{\alpha=1}^{n}\) and in \(\{k_{\alpha}\}_{\alpha=1}^{m}\) which also belong to \(I_{a}\cup I_{b}\) and \(I_{b}\cup I_{c}\). Finally, noticing
\[\operatorname{tr}\left[{Ψ_{a}Ψ_{b}^{\dagger}Ψ_{c}Ψ_{d}^{\dagger}} \right]= (-1)^{(n_{b}+n_{d})/2}\operatorname{tr}\left[{Ψ_{a}Ψ_{b}Ψ_{c}Ψ_{d }}\right]\]
\[= 2^{N/2}(-1)^{(n_{b}+n_{d})/2}(-1)^{\gamma},\] (110)
we see that the factor \((-1)^{\gamma}\) in (109) is exactly canceled. We are left with an equation of the form (95), setting \(\alpha\)’s appropriately. Thus we may apply the lemma proved before [cf. Eq. (96)], which directly gives
\[|W_{(p,n,m)}\rangle\rangle=\frac{1}{\sqrt{N!}}(-1)^{m(m-1)/2+n(n- 1)/2+nm}\\ \times(a_{1}^{\dagger}-ia_{ab}^{\dagger}+a_{ac}^{\dagger}-ia_{ad} ^{\dagger}-ia_{bc}^{\dagger}-a_{bd}^{\dagger}-ia_{cd}^{\dagger}-a_{abcd}^{ \dagger})^{p}\\ \times(a_{1}^{\dagger}+ia_{ab}^{\dagger}-a_{ac}^{\dagger}-ia_{ad} ^{\dagger}-ia_{bc}^{\dagger}+a_{bd}^{\dagger}+ia_{cd}^{\dagger}-a_{abcd}^{ \dagger})^{n}\\ \times(a_{1}^{\dagger}-ia_{ab}^{\dagger}-a_{ac}^{\dagger}+ia_{ad} ^{\dagger}+ia_{bc}^{\dagger}+a_{bd}^{\dagger}-ia_{cd}^{\dagger}-a_{abcd}^{ \dagger})^{m}\\ \times(a_{1}^{\dagger}+ia_{ab}^{\dagger}+a_{ac}^{\dagger}+ia_{ad} ^{\dagger}+ia_{bc}^{\dagger}-a_{bd}^{\dagger}+ia_{cd}^{\dagger}-a_{abcd}^{ \dagger})^{N-p-n-m}\ket{\Omega}.\\\] (111)
The result (56) is finally obtained after expressing the operators \(a_{j}^{\dagger}\) in terms of the \(b\)-modes introduced in Eq. (47).
#### v.2.2 The Rényi-\(2\) entanglement entropy \(S^{(2)}_{AC}({\bar{\ell}})\)
Next, we turn to the task of deriving the vector \(|W_{S^{(2)}_{AC}(\bar{\ell})}\rangle\rangle\) introduced in Eq. (57), corresponding to the exponential of the second Rényi entropy \(S^{(2)}_{AC}({\bar{\ell}})\).
As we have explained in detail in Sec. II.2, in order to compute \(S_{AC}^{(2)}(\bar{\ell})\), we need to consider the evolved state \(U^{a}\ket{I^{ab}}\), where \(\ket{I^{ab}}\) was introduced in Eq. (12), while the time evolution operator \(U^{a}\) acts only on the “replica” space \(a\) (cf. Fig. 3). For our fermionic system, the reduced density matrix of the union of the disjoint sets \(A\) and \(C\) can then be written as (see e.g. [104])
(112)
Here we denoted by \(\{F^{a}_{A}\}\) and \(\{F^{b}_{C}\}\) complete bases of operators in \(A\) and \(C\), respectively; namely, \(F^{a}_{A}\) and \(F^{b}_{C}\) take values in all the possible strings of Majorana operators supported in \(A\) and \(C\). Here, as before, we followed the convention that upper indices indicate the type of single-site operators, while lower indices specify which subset of \(\{1,2,\ldots,N\}\) the product of such operators runs over. Through simple manipulations, we have
\[\operatorname{tr}\left[\overline{ρ_{AC}^{2}}\right] =\frac{2^{\ell}}{2^{2\ell}}\sum_{F_{A},F_{C}}\overline{ \underbrace{\bra{I^{ab}}U^{a{\dagger}}}_{\bra{I^{ab}}U^{b}_{-}}F_{A}^{a}F_{C}^ {b}U^{a}\ket{I^{ab}}\underbrace{\bra{I^{cd}}U^{c{\dagger}}}_{\bra{I^{cd}}U^{d} _{-}}\underbrace{F_{C}^{d\dagger}F_{A}^{c\dagger}}_{F_{A}^{c}F_{C}^{d}(-1)^{α} }U^{c}\ket{I^{cd}}}\]
\[=\frac{1}{2^{\ell}}\sum_{F_{A},F_{C}}(-1)^{α}\bra{I^{ab}}\otimes \bra{I^{cd}}\ F_{A}^{a}\otimes F_{A}^{c}\ \overline{U^{a}_{+}U^{b}_{-}U^{c}_{+ }U^{d}_{-}}\ F_{C}^{b}\otimes F_{C}^{d}\ \ket{I^{ab}}\otimes\ket{I^{cd}}\,,\] (113)
where \(U^{\alpha}_{\pm}(t)\) are defined in Eq. (31). From this equation we clearly see that, in complete analogy with the case of the OTOCs, we can write also \(\operatorname{tr}\left[\overline{ρ_{AC}^{2}}\right]\) in the form \(\bra{L}\overline{\mathcal{U}}(t)\ket{R}\). As anticipated, this allows us to apply a procedure similar to the one employed for the OTOCs, and derive the vector \(|W_{S^{(2)}_{AC}(\bar{\ell})}\rangle\rangle\). In particular, we can use the notations introduced in Sec. V.2, and exploit directly Eq. (105). This yields straightforwardly
\[\operatorname{tr}\left[\overline{ρ_{AC}^{2}}\right] =\frac{1}{2^{\ell}\sqrt{N!n_{1}!\cdots n_{abcd}!}}\sum_{\vec{n}}c _{\vec{n}}(t)\sum_{\pi\in S_{N}}π(-1)^{\gamma_{A}+γ_{B}+δ}\]
(114)
Next, we compute
\[(*)\]
(115)
In (V.2.2), all Majorana operators are in the same system, so we drop the doubled system label. To extract the traces, note that these are the only operators acting on the region \(B\) of the system. Thus the Majorana fermions on the sites in \(B\) already have to cancel in pairs, or the expectation value will be zero. Similarly, the only value of \(F_{C}\) with a non-zero contribution has
\[Ψ_{bA}^{{\dagger}}F_{A}Ψ_{aA}F_{C}^{{\dagger}}=\pm 1\Rightarrow F_{C}=\pm Ψ_{ bA}^{{\dagger}}F_{A}Ψ_{aA}\,,\] (116)
which can be inserted into the second expectation value, canceling the \(\pm 1\) and giving
\[(*)\]
\[=2^{-\bar{\ell}}\,\operatorname{tr}\left[Ψ_{bB}^{\dagger}Ψ_{aB} \right]\operatorname{tr}\left[Ψ_{dB}^{\dagger}Ψ_{cB}\right]\operatorname{tr} \left[Ψ_{aA}Ψ_{dA}^{\dagger}\right]\operatorname{tr}\left[Ψ_{bA}^{\dagger}Ψ_{ cA}\right]\]
\[=2^{-\bar{\ell}}(-1)^{\frac{n_{b}+n_{d}}{2}+n_{bB}+n_{dB}} \operatorname{tr}\left[Ψ_{aB}Ψ_{bB}\right]\operatorname{tr}\left[Ψ_{cB}Ψ_{dB} \right]\operatorname{tr}\left[Ψ_{aA}Ψ_{dA}\right]\operatorname{tr}\left[Ψ_{bA} Ψ_{cA}\right]\,.\] (117)
Here, in order to go from the first to the second line, we made use of the identity (151) in Appendix B.2. The last line of (117) is non-zero only for
\[n_{1}= n_{1B}+n_{1A}\,,\quad n_{ab}=n_{abB}\,,\quad n_{ac}=0\,,\] (118)
\[n_{ad}= n_{adA}\,,\quad n_{bc}=n_{bcA}\,,\quad n_{bd}=0\,,\]
\[n_{cd}= n_{cdB}\,,\quad n_{abcd}=n_{abcdB}+n_{abcdA}\,.\]
With this we see that the traces evaluate as \((-1)^{γ_{B}+γ_{A}}\). Also, it shows that \(n_{aB}\equiv n_{bB}\equiv n_{bA}\equiv n_{cA}\equiv n_{cB}\equiv n_{dB}\ ({\rm mod }\ 2)\), such that \((-1)^{\delta}=+1\). Putting all together, we get
\[\operatorname{tr}\overline{ρ^{2}_{AC}} =\sum_{\vec{n}}\frac{c_{\vec{n}}(t)}{\sqrt{N!n_{1}!\cdots n_{abcd }!}}\]
\[\times\sum_{π\in S_{N}}π\,(-1)^{(n_{b}+n_{d})/2}δ(\pi)π^{-1}\] (119)
where the Kronecker delta \(δ(\pi)\) enforces the constraints (118). This expression can also be cast into the form (95) by setting some \(\alpha\)’s to zero. Then Eq. (96) gives us
\[|W_{S^{(2)}_{AC}(\bar{\ell})}\rangle\rangle=\frac{1}{\sqrt{N!}}\left[ia^{\dagger}_{ab}+ia^{\dagger}_{cd}+(a^{\dagger}_{1}-a^{\dagger}_{abcd})\right]^{\ell}\left[ia^{\dagger}_{ad}+ia^{\dagger}_{bc}+(a^{\dagger}_{1}-a^{\dagger}_{abcd})\right]^{\bar{\ell}}\ket{\Omega}\,.\] (120)
Transformation to \(b\)-modes (47) finally yields the result anticipated in Eq. (59).
An analogous treatment can be carried out for the case of the entropy \(S^{(2)}_{AD}\). Since the technical steps are very similar, we report them in Appendix D.
### Some large-\(N\) limits
In this section, we finally show how one can compute the limit \(N\to\infty\), while keeping time \(t\) fixed, for the OTOCs and the Rényi-\(2\) entropies, and derive in particular Eqs. (65) and (74).
We start with the case of OTOCs, and consider Eq. (61). As a first simplification, we only need to keep modes \(b_{1}\) through \(b_{4}\) in the Hamiltonian and initial state \(\langle\bra{\overline{\mathcal{U}}(0)}\) as the others are not present in \(\ket{W_{(p,n,m)}}\rangle\). Next, we switch to ladder operators with an unusual normalization, specifically
\[\tilde{b}_{i}^{\dagger}\ket{n_{i}}_{\tilde{b}}=\ket{n_{i}+1}_{\tilde{b}},\ \tilde{b}_{i}\ket{n_{i}}_{\tilde{b}}=n_{i}\ket{n_{i}-1}_{\tilde{b}}.\] (121)
This now allows us to take the leading order in \(N\) for each term of the exponential \(e^{Ht}\), using that \(\tilde{b}_{1}\sim N\) as \(p,n,m\ll N\). We obtain
(122)
(123)
with
\[\lim_{N\to\infty}H=H_{A}+H_{B}+H_{C},\] (124)
\[H_{A}=\frac{2}{3}(\tilde{b}_{2}^{{\dagger}2}\tilde{b}_{1}^{2}/N^ {2}-1)\tilde{b}_{2}^{\dagger}\tilde{b}_{2},\] (125)
\[H_{B}=\frac{2}{3N^{3}}\tilde{b}_{2}^{{\dagger}3}\tilde{b}_{1}^{3 }\tilde{b}_{4}^{\dagger}\tilde{b}_{3},\] (126)
\[H_{C}=-\frac{2}{3}\tilde{b}_{3}^{\dagger}\tilde{b}_{3}\,.\] (127)
As \(\langle\bra{\overline{\mathcal{U}}(0)}(\tilde{b}_{2}^{{\dagger}2}\tilde{b}_{1}^{2}/N^{2}-1)=0\) at the highest order in \(N\), terms with \(H_{A}\) do not contribute at the leading order. For the OTOC \(\mathcal{F}_{x,y}(t)\), \(H_{B}\) and \(H_{C}\) cannot contribute either, because \(\ket{W_{x,y}}\rangle\) (56) does not contain any \(b_{3}\)-modes. The asymptotic result is then the constant reported in (66). In contrast, for the OTOC \(\mathcal{F}_{x,x}(t)\), the state \(\ket{W_{x,x}}\rangle\) does contain one \(b_{3}\)-mode, so that \(H_{B}\) can appear at most once. The remaining Hamiltonian is still simple enough to finally derive the exponential decay (65). We stress that we can only expect these limits to be point-wise in \(t\) due to the exchange of limits in (123); in fact, convergence is clearly not uniform, as can be seen from the exact numerical results.
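The non-standard normalization of Eq. (121) used above can be made concrete on a truncated Fock space; the following sketch (an illustration with an arbitrary truncation, not part of the original computation) shows that \(\tilde{b}^{\dagger}\tilde{b}\) is still the number operator and that \([\tilde{b},\tilde{b}^{\dagger}]=1\) away from the truncation boundary.

```python
# Matrices of the rescaled ladder operators of Eq. (121) on a truncated Fock space.
import numpy as np

dim = 8                                    # truncation (illustrative choice)
b_dag = np.diag(np.ones(dim - 1), -1)      # <n+1| b~^dag |n> = 1
b = np.diag(np.arange(1, dim), 1)          # <n-1| b~ |n> = n

assert np.allclose(b_dag @ b, np.diag(np.arange(dim)))          # number operator
assert np.allclose((b @ b_dag - b_dag @ b)[:-1, :-1], np.eye(dim - 1))
```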
The case of the entropy \(S_{AC}^{(2)}(t)\) is treated along similar lines. We first perform a further mode transformation
\[c_{1} =(b_{1}-b_{2})/\sqrt{2}\,, c_{2} =(b_{1}+b_{2})/\sqrt{2}\,,\]
\[c_{3} =(b_{3}+b_{4})/\sqrt{2}\,, c_{4} =(b_{4}-b_{3})/\sqrt{2}\,,\] (128)
such that
\[\ket{W_{S^{(2)}_{AC(\bar{\ell})}}}\rangle=\frac{2^{N}}{\sqrt{N!}}\frac{1}{2^{ \bar{\ell}}}(c_{1}^{\dagger})^{\ell}(c_{1}^{\dagger}+c_{2}^{\dagger}-c_{3}^{ \dagger}-c_{4}^{\dagger})^{\bar{\ell}}\ket{\Omega}\,.\] (129)
We may now follow the same procedure as for the OTOCs. In fact, the Hamiltonian has the exact same form in terms of the modes \(b_{j}\) and \(c_{j}\). Taking \(\bar{\ell}\ll N\), Eq. (124) is therefore valid, after substituting the modes \(\tilde{b}_{j}\) with \(\tilde{c}_{j}\). Now, the initial state
\[\langle\bra{\overline{\mathcal{U}}(0)}=\bra{\Omega}(c_{1}-c_{3})^{N}\frac{1}{ \sqrt{N!}2^{N}}\] (130)
annihilates both \(H_{A}\) and \(H_{B}\), leaving a very simple (quadratic) asymptotic Hamiltonian \(H_{C}\). From this, Eq. (74) follows straightforwardly.
## VI Conclusions
In this work, we have developed an approach to analyze the chaotic dynamics in the Brownian SYK model, a system of \(N\) Majorana fermions coupled together via random, time-dependent interactions. We have shown that the OTOCs and the tripartite information of the unitary evolution can be studied as a quench problem (at imaginary times) in a system of \(N\) qudits, which can be conveniently investigated in terms of bosonic modes, due to an emergent permutational symmetry. Exploiting the latter, we were able to produce numerically exact results up to \(N=10^{6}\), and to study several features of the chaotic dynamics at finite size.
We have analyzed in detail the dependence of the OTOCs on the observables chosen, highlighting the pieces of information about the initial operators which are not washed out by the chaotic dynamics. In particular, after the scrambling time \(t^{\ast}(N)\sim\ln N\), the OTOCs of distinct operators converge to the same curve if they have the same length, namely if they are written as products of the same number of Majorana fermions, whereas the curves of OTOCs of different lengths remain distinguishable after the scrambling time \(t^{\ast}(N)\). Furthermore, we have verified that the exponent of the initial exponential growth of the OTOCs is universal and performed a data collapse for increasing system sizes. Regarding the tripartite information, we have shown that its evolution is non-trivial during the initial scrambling time, while at large times it always decays exponentially to the corresponding Haar-scrambled value; this result is consistent with the recent rigorous findings of Refs. [100; 101].
The approach developed in this paper can be generalized to other models where the Hamiltonian displays all-to-all random interactions, with time-dependent Brownian disorder. Indeed, one can straightforwardly follow the steps outlined in Sec. III, and study the dynamics of OTOCs and tripartite information as a quench problem in a qudit system with site permutational symmetry. In turn, this implies that the effective imaginary-time dynamics takes place in a Hilbert space whose dimension grows as a polynomial in \(N\). Of course, one would need to investigate for each case whether a further reduction of the effective dimension takes place, as for the Brownian SYK model studied in this paper.
It is possible that the final formulas obtained with our method (which have been used in this work mainly for efficient numerical computations) could be simplified further and evaluated to exact analytic expressions in the large-\(N\) limit. In fact, by means of a different approach, an exact result for a suitable average of OTOCs was found in Ref. [95] for the Brownian dynamics generated by a disordered Hamiltonian in a qudit system. It would be interesting to see whether ideas related to the work [95] could be used here, to obtain analytic expressions for the OTOCs of arbitrary observables and for the tripartite information, in the large-\(N\) limit.
Finally, the approach presented in this paper could also be applied to compute quantities involving higher moments of the evolution operator \(U(t)\), such as Rényi entropies of higher order, or the Rényi-\(2\) operator entanglement entropy of local observables [106; 107]. In these cases, however, the application of our method would inevitably be more complicated. More importantly, it is not guaranteed that a reduction of the Hilbert-space dimension could be achieved by means of a transformation analogous to (47). In any case, it would certainly be interesting to investigate these points further.
###### Acknowledgements.
LP thanks Bruno Bertini and Tomaž Prosen for discussions related to this work. XLQ thanks David Huse and Alex Streicher for helpful discussions. LP acknowledges support from the Alexander von Humboldt Foundation. JIC and NS acknowledge support by the EU Horizon 2020 program through the ERC Advanced Grant QENOCOBA (No. 742102, JIC) and the ERC Starting Grant WASCOSYS (No. 636201, NS), and from the DFG (German Research Foundation) under Germany’s Excellence Strategy - EXC-2111 - 390814868. XLQ is supported by the National Science Foundation under grant No. 1720504, and in part by the Department of Energy under grant No. DE-SC0019380.
## Appendix A Non-interacting case: \(q=2\)
In this section, we study the Brownian SYK model (1) for \(q=2\). We choose the constant \(\sigma_{J}\) in (2) such that the disorder’s correlations are given by
\[\overline{J_{ij}(t)J_{i^{\prime}j^{\prime}}(t^{\prime})}=\delta_{ii^{\prime}} \delta_{jj^{\prime}}\delta(t-t^{\prime})\frac{1}{N}\,.\] (131)
Each disorder realization is governed by a free Hamiltonian; therefore, we do not expect any scrambling of operators or decay of OTOCs.
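For concreteness, one possible discretization of the white-noise correlator (131) draws independent Gaussian couplings with variance \(1/(N\,\mathrm{d}t)\) at each time step; the short sketch below (an assumption about the discretization, not the implementation used for the figures) checks the resulting correlations numerically.

```python
# Monte Carlo check of the (regularized) correlator in Eq. (131):
# same coupling, same time  -> variance ~ 1/(N*dt);  everything else -> ~ 0.
import numpy as np

rng = np.random.default_rng(0)
N, dt, n_samples = 20, 0.01, 200_000
sigma = np.sqrt(1.0 / (N * dt))

J12_t0, J12_t1, J13_t0 = rng.normal(0.0, sigma, size=(3, n_samples))

print(np.mean(J12_t0**2) * N * dt)   # ~ 1   (delta function regularized as 1/dt)
print(np.mean(J12_t0 * J12_t1))      # ~ 0   (different times)
print(np.mean(J12_t0 * J13_t0))      # ~ 0   (different index pairs)
```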
The method developed in this article can be applied to arbitrary \(q\) and we may study the non-interacting case within its framework. The states
\[\ket{W_{(p,n,m)}}\rangle,\ \ket{W_{S^{(2)}_{AC}(\bar{\ell})}}\rangle,\ \ket{W_ {S^{(2)}_{AD}(\bar{\ell})}}\rangle,\ \text{and}\ \ket{\overline{\mathcal{U}}(0 )\rangle}\] (132)
representing the OTOC (56), Rényi-2 entanglement entropies (57) and (58), and initial time evolution operator (50) are independent of \(q\) as long as \(q\) is even. However, the effective Hamiltonian reflects the change of \(q\) and is simpler. Along the same lines as for \(q=4\) (see section III.2), we can compute
\[\frac{\mathrm{d}}{\mathrm{d}t}\overline{\mathcal{U}(t)}=L \overline{\mathcal{U}(t)},\] (133)
\[L=\frac{1}{N}\left[-2\binom{N}{2}-\sum_{\begin{subarray}{c} \alpha,\beta=a,b,c,d\\ \alpha<\beta\end{subarray}}\sum_{i<j}(ψ_{i}^{\alpha}ψ_{i}^{\beta})(ψ_{j}^{ \alpha}ψ_{j}^{\beta})\right].\] (134)
The corresponding representation of the effective Hamiltonian after operator-state mapping in bosonic modes is
\[\ket{\overline{\mathcal{U}}(t)}\rangle=e^{Ht}\ket{\overline{\mathcal{U}}(0)}\rangle,\] (135)
\[H {}=\frac{1}{N}\left[-2\binom{N}{2}-3N-\frac{1}{2}\sum_{r=ab}^{cd} X_{r}^{2}\right]\] (136)
\[{}=-\frac{4}{N}(b_{1}^{\dagger}b_{3}^{\dagger}-b_{2}^{\dagger}b_{ 4}^{\dagger})(b_{1}b_{3}-b_{2}b_{4}),\] (137)
where the six \(X_{r}\) operators are the same as in the corresponding expression (45) for \(q=4\).
For the two simple OTOCs \(\mathcal{F}_{x,y}(t)\) and \(\mathcal{F}_{x,x}(t)\) the dynamics only explores the two-level subspace spanned by \(\ket{N-1,0,1,0}\) and \(\ket{N-2,1,0,1}\). Therefore we can compute these OTOCs analytically and obtain
\[\mathcal{F}_{x,y}(t) {}=-1+\frac{2}{N}-\frac{2}{N}e^{-4t}\,,\] (138)
\[\mathcal{F}_{x,x}(t) =-1+\frac{2}{N}+\frac{2}{N}e^{-4t}(N-1)\,.\] (139)
The curves are plotted in Fig. 11. As expected, the OTOCs do not decay to zero at long times, as the non-interacting model is not chaotic. Since the tripartite information can be written as an average of OTOCs [9], it too will lack the characteristics of scrambling.
<figure><img src="content_image/1908.00775/x11.png"><figcaption>Figure 11: OTOCs Fx,x(t) (solid lines) and Fx,y(t) (dashed lines) for single-site Majorana fermions. We show the analytical results (139) and (138) for various system sizes N. At long times, they decay to the same value −1+2/N≠0, indicating the absence of scrambling.</figcaption></figure>
We now make a comment on the so-called “length” of the operator \(ψ_{j}(t)\), see e.g. [37]. At any time \(t\), we can always write
\[ψ_{j}(t)=\sum_{s}\sum_{\{k_{j}\}}\underbrace{ψ_{k_{1}}\cdots ψ_{k_{s}}}_{\text {length}\ s}c_{s,\{k_{j}\}}(t)\,,\] (140)
and define the average length \(L(t)\) as
\[L(t)=\sum_{s}s\sum_{\{k_{j}\}}|c_{s,\{k_{j}\}}|^{2}\,.\] (141)
It can be shown that the average length is related to an appropriate average over OTOCs, namely [37]
\[L(t)=\frac{N+\mathcal{F}_{x,x}(t)+(N-1)\mathcal{F}_{x,y}(t)}{2}.\] (142)
Using this relation, it follows from our results (138)–(139) that the length is constant and equal to 1. This is expected, because the Gaussian dynamics preserves the length of products of Majorana operators.
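This statement can be checked directly by inserting the analytic expressions (138)–(139) into Eq. (142); a short symbolic verification (illustrative only) is given below.

```python
# Verify that the free-fermion OTOCs (138)-(139) give a constant length L(t) = 1
# when inserted in Eq. (142).
import sympy as sp

N, t = sp.symbols("N t", positive=True)

F_xy = -1 + 2 / N - 2 * sp.exp(-4 * t) / N
F_xx = -1 + 2 / N + 2 * sp.exp(-4 * t) * (N - 1) / N

L = (N + F_xx + (N - 1) * F_xy) / 2
assert sp.simplify(L - 1) == 0
```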
Next, we can calculate the entanglement entropies numerically, just like in the interacting case. We present the results in Fig. 12. In the limit \(N\to\infty,{\bar{\ell}},t\) fixed, we can derive
\[\lim_{N\to\infty,\,{\bar{\ell}},t\ \text{fixed}}S_{AC}^{(2)}({\bar{\ell}},t)={\bar{\ell}}\ln\frac{2}{1+e^{-4t}},\] (143)
along the same lines as in section V.3. While the entropy \(S_{AC}^{(2)}({\bar{\ell}})\) saturates to its maximal Haar value at large \(N\) and \(t\), the behavior of \(S_{AD}^{(2)}({\bar{\ell}})\) is qualitatively different from the interacting case (Fig. 8). This leads to the tripartite information being positive at all times and system sizes, indicating the absence of scrambling.
<figure><img src="content_image/1908.00775/x12.png"><figcaption>Figure 12: For the free case (q=2), we show the time behavior of the entanglement entropies S(2)AC (a), S(2)AD (b) and tripartite information I(2)3 (c) for several system sizes N and fixed subsystem size ¯ℓ=10. In (a), we also indicate the limit (143). The tripartite information is always positive, which means that the non-interacting system does not scramble quantum information.</figcaption></figure>
As for the interacting case, we can also study the entanglement entropies’ dependence on the subsystem size, see Fig. 13. Comparing against Figs. 9 and 10, we see that in the free case, the entanglement entropies do not reach the maximal Haar scrambled values at finite ratios \(\bar{\ell}/N\).
<figure><img src="content_image/1908.00775/x13.png"><figcaption>Figure 13: The entanglement entropies S(2)AC (a) and S(2)AD (b) for various subsystem sizes ¯ℓ and several times t in the free case. The black dotted lines show the values reached with Haar scrambling.</figcaption></figure>
## Appendix B Relation between OTOCs and Rényi-\(2\) entropies
In this appendix we review the relation between OTOCs and Rényi-\(2\) entropies in the case of unitary evolution operators defined on qubit systems, and generalize it to fermionic (Majorana) systems. We focus on the configuration displayed in Fig. 2, taking the regions \(A\) and \(C\) to be of the same size and position (and analogously for \(B\) and \(D\)).
### The case of Pauli matrices
We start by giving a derivation of the aforementioned relation for a system of qubits, along the lines of the one in Ref. [9]. First, we can write the reduced density matrix \(\rho_{AC}\) as
(144)
where the sum is over the complete bases \(\{O_{j}^{A}\}\) and \(\{O_{k}^{C}\}\) of strings of Pauli operators in \(A\) and \(C\), while \(a\) and \(c\) are equal to the number of sites in \(A\) and \(C\). The state \(\ket{I}\) is the maximally entangled state connecting \(A\cup B\) and \(C\cup D\), satisfying
\[O_{j}^{C}\ket{I}=\left(O_{j}^{A}\right)^{T}\ket{I}\,,\] (145)
while \(U_{AB}\) is the evolution operator acting non-trivially only on the system \(A\cup B\). Then, using the orthogonality of the Pauli operators, and after simple simplifications, we have
\[{\rm tr}\left[\rho_{AC}^{2}\right] =2^{-a-c-2N}\sum_{j,k}{\rm tr}\left[U^{\dagger}_{AB}O_{j}^{A}U_{ AB}O_{k}^{A}\right]\] (146)
\[\times{\rm tr}\left[O_{k}^{A}U^{\dagger}_{AB}O_{j}^{A}U_{AB} \right]\,.\]
Consider now the sum
\[2^{-a-c-3N}\sum_{j,k,\tilde{j},\tilde{k}}{\rm tr}\left[O^{A}_{\tilde{j}}O^{B}_{\tilde{k}}\left(U^{\dagger}_{AB}O_{j}^{A}U_{AB}O_{k}^{A}\right)O^{A}_{\tilde{j}}O^{B}_{\tilde{k}}\left(O_{k}^{A}U^{\dagger}_{AB}O_{j}^{A}U_{AB}\right)\right]\,.\] (147)
Using the identity [9]
\[\sum_{j}A_{j}\mathcal{O}A_{j}=|A|\operatorname{tr}_{A}\{\mathcal{O}\}\,,\] (148)
(here the \(A_{j}\) form a complete basis of operators for the Hilbert space associated with \(A\), while \(|A|\) is its dimension), one immediately obtains that the r.h.s. of Eq. (146) is equal to (147). Therefore
\[{\rm tr}\left[\rho^{2}_{AC}\right]= 2^{-a-c-3N}\sum_{j,k,\tilde{j},\tilde{k}}{\rm tr}\left[O^{A}_{ \tilde{j}}O^{B}_{\tilde{k}}\left(U^{\dagger}_{AB}O_{j}^{A}U_{AB}O_{k}^{A} \right)\right.\]
\[\times \left.O^{A}_{\tilde{j}}O^{B}_{\tilde{k}}\left(O_{k}^{A}U^{\dagger }_{AB}O_{j}^{A}U_{AB}\right)\right]\]
\[= \frac{1}{2^{3N-a+c}}\sum_{j,l}{\rm tr}\left[O^{B}_{l}O^{A}_{j}(t) O^{B}_{l}O^{A}_{j}(t)\right]\] (149)
In the last step, we summed over \(k\), used once again the identity (148), and finally renamed the index \(\tilde{k}=l\). Putting everything together, we find
\[\frac{1}{4^{a+d}}\frac{1}{2^{N}}\sum_{j,k}{\rm tr}\left[O^{A}_{j}(t)O^{D}_{k}( 0)O^{A}_{j}(t)O^{D}_{k}(0)\right]=2^{N-a-d-S^{(2)}_{AC}}\,.\] (150)
This is exactly the same result as in [9]. An analogous derivation holds for the case of \(S^{(2)}_{AD}\). This equation encodes a close connection between the tripartite information and the OTOCs, and allows one to establish that chaos, as measured by small values of all OTOCs, implies scrambling [9]. In the next subsection we show that a similar relation, with the addition of proper signs, holds in the case of fermionic systems.
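As a sanity check of the identity (148), which underlies the manipulations above, one can verify it numerically for the smallest non-trivial case, a single qubit \(A\) tensored with a one-qubit complement (an illustrative sketch, not part of the original derivation):

```python
# Check sum_j (A_j x 1) O (A_j x 1) = |A| * 1_A x tr_A(O) for a random two-qubit O,
# with {A_j} the Pauli basis on the first qubit (|A| = 2).
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

rng = np.random.default_rng(1)
O = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

lhs = sum(np.kron(P, I2) @ O @ np.kron(P, I2) for P in paulis)

tr_A = np.trace(O.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # partial trace over A
rhs = 2 * np.kron(I2, tr_A)                                # |A| = 2

assert np.allclose(lhs, rhs)
```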
### The case of Majorana operators
For Majorana operators one needs a different treatment. Indeed, the identity (148) is no longer valid, but should be modified as follows. Let \(\mathcal{O}\) be an operator with a well defined parity of Majorana operators, i.e. \(\mathcal{O}\) is the sum of strings of operators that are either all even or all odd. Then, by expanding in the operator basis of Majorana operators, one can prove
\[\sum_{j}(-1)^{\ell_{j}\ell_{\mathcal{O}}}A_{j}\mathcal{O}A^{\dagger}_{j}=|A| \operatorname{tr}_{A}\{\mathcal{O}\}.\] (151)
Here \(\ell_{j}\) is the length of the operator \(A_{j}\). For example, if \(A_{j}=\psi_{1}\psi_{4}\) then \(\ell_{j}=2\). Analogously, \(\ell_{\mathcal{O}}\) is the length of one of the terms in \(\mathcal{O}\). Since all these terms have the same parity, it does not matter which one we choose. We can then proceed as in the previous sections, now paying attention to the order of the operators involved in the calculations. First, we have
(152)
where now \(a\) and \(c\) are _half_ the number of sites in \(A\) and \(C\). Proceeding as before, we have
\[{\rm tr}\left[\rho_{AC}^{2}\right]=2^{-a-c-2n}\sum_{j,k}{\rm tr} \left[U^{\dagger}_{AB}O_{j}^{A}U_{AB}\left(O_{k}^{A}\right)^{\dagger}\right]\]
\[\times{\rm tr}\left[O_{k}^{A}U^{\dagger}_{AB}\left(O_{j}^{A} \right)^{\dagger}U_{AB}\right]\,,\] (153)
where \(n=N/2\). Consider now the sum
\[2^{-a-c-3n}\sum_{j,k,\tilde{j},\tilde{k}}(-1)^{\ell_{\tilde{j}}(\ell_{j}+\ell_{k})+\ell_{\tilde{k}}(\ell_{j}+\ell_{k})}\,{\rm tr}\left[O^{A}_{\tilde{j}}O^{B}_{\tilde{k}}\left(U^{\dagger}_{AB}O_{j}^{A}U_{AB}\left(O_{k}^{A}\right)^{\dagger}\right)\left(O^{B}_{\tilde{k}}\right)^{\dagger}\left(O^{A}_{\tilde{j}}\right)^{\dagger}\left(O_{k}^{A}U^{\dagger}_{AB}\left(O_{j}^{A}\right)^{\dagger}U_{AB}\right)\right]\,.\] (154)
Noticing now that the evolution operator can always be written as a sum of even strings of Majorana operators, one can directly apply the identity (151) to prove that the r.h.s. of (153) is equal to (154). On the other hand, using
\[\ell_{\tilde{j}}(\ell_{j}+\ell_{k})+\ell_{\tilde{k}}(\ell_{j}+\ell_{k})=\ell_{ j}(\ell_{\tilde{j}}+\ell_{\tilde{k}})+\ell_{k}(\ell_{\tilde{j}}+\ell_{\tilde{k }})\,,\] (155)
we can sum over \(k\) by employing once again the identity (151), and finally rename \(\tilde{k}=r\). Putting all together, we obtain
\[{\rm tr}\left[\rho^{2}_{AC}\right]= \frac{1}{2^{3n-a+c}}\sum_{j,r}(-1)^{\ell_{j}\ell_{r}}\]
\[\times{\rm tr}\left[O^{B}_{r}O^{A}_{j}(t)\left(O^{B}_{r}\right)^{ \dagger}\left(O^{A}_{j}(t)\right)^{\dagger}\right]\,.\] (156)
We see that additional signs appear in the sum over the OTOCs with respect to the case of Pauli matrices. However, the Rényi-\(2\) entropy still encodes global information about the sum of the OTOCs over extended regions of the system.
## Appendix C Details on the numerical implementation
In this section, we explain a few details of the numerical computation. For the OTOCs, we implement (61) in terms of the modes \(b_{1},b_{2},b_{3},b_{4}\), as explained in the main text. For the entropies, however, we use different modes. While we could likewise implement (57) and (58) in \(b\)-modes, this introduces large numerical errors as \(N\) increases, due to cancellations of large numbers. Instead, we have found that for the entropy \(S_{AC}^{(2)}\) a numerical calculation in terms of the modes \(c_{1},c_{2},c_{3},c_{4}\) (128) is most stable. For the entropy \(S_{AD}^{(2)}\), we found that using the modes \(b_{1},b_{2},c_{3},c_{4}\) gives the most stable results. For our numerical calculation we have therefore transformed the initial state \(\ket{\overline{\mathcal{U}}(0)}\rangle\) (50), the effective Hamiltonian \(H\) (80) and \(\ket{W_{S_{AC,AD}^{(2)}({\bar{\ell}})}}\rangle\) (59)-(60) into these modes.
## Appendix D The Rényi-\(2\) entanglement entropy \(S^{(2)}_{AD}({\bar{\ell}})\)
In this appendix, we turn to the task of deriving the vector \(|W_{S^{(2)}_{AD}(\bar{\ell})}\rangle\rangle\) introduced in Eq. (58), corresponding to the exponential of the second Rényi entropy \(S^{(2)}_{AD}({\bar{\ell}})\). The discussion goes along the same lines as the one presented in Sec. V.2.2 for the entropy \(S^{(2)}_{AC}({\bar{\ell}})\). Writing out the partial trace as done for the other entropy, we get
(157)
where we denoted by \(\{F^{a}_{A}\}\) and \(\{F^{b}_{D}\}\) complete bases for the operators in \(A\) and \(D\), respectively; namely, \(F^{a}_{A}\) and \(F^{b}_{D}\) take values in all the possible strings of Majorana operators supported in \(A\) and \(D\). We can continue along the same lines as for the other entropy, giving
\[\operatorname{tr}\left[\overline{ρ_{AD}^{2}}\right]=\frac{1}{2^{N /2}\sqrt{N!n_{1}!\cdots n_{abcd}!}}\sum_{\vec{n}}c_{\vec{n}}(t)\sum_{\pi\in S_ {N}}π(-1)^{\gamma_{A}+γ_{B}+δ}\,(*)\pi^{-1}\,.\] (158)
Here the term \((*)\) can be evaluated as for the other entropy up until (V.2.2). Then, however, \(Ψ_{bB}^{a{\dagger}}\) and \(Ψ_{aB}^{a}\) are not the only parts in the first expression acting on this subspace, since now \(F_{D}^{a}\) also does. So, continuing from \((*)\) and dropping the doubled system label, as all operators are in the same system, we have
\[(*)\]
(159)
The left side is only non-zero for
\[Ψ_{bA}^{{\dagger}}F_{A}Ψ_{aA}=\pm 1\Rightarrow F_{A}^{\dagger}= \pm Ψ_{aA}Ψ_{bA}^{\dagger}\] (160)
\[Ψ_{bB}^{\dagger}Ψ_{aB}F_{D}^{\dagger}=\pm 1\Rightarrow F_{D}=\pm Ψ _{bB}^{\dagger}Ψ_{aB}\] (161)
such that we can evaluate the sum \(\sum_{F_{A},F_{D}}\), inserting these in the right side. We get
\[(*)\]
\[=\operatorname{tr}Ψ_{bB}^{\dagger}Ψ_{aB}Ψ_{dB}^{\dagger}Ψ_{cB}/2^ {{\bar{\ell}}/2}\,\operatorname{tr}Ψ_{dA}^{\dagger}Ψ_{aA}Ψ_{bA}^{\dagger}Ψ_{cA }/2^{{\bar{\ell}}/2}\]
\[=\operatorname{tr}Ψ_{dB}Ψ_{cB}Ψ_{bB}Ψ_{aB}/2^{{\bar{\ell}}/2}\, \operatorname{tr}Ψ_{aA}Ψ_{bA}Ψ_{cA}Ψ_{dA}/2^{{\bar{\ell}}/2}(-1)^{\frac{n_{b}+ n_{d}}{2}+n_{bB}+n_{dB}}\]
\[=\operatorname{tr}(Ψ_{aB}Ψ_{bB}Ψ_{cB}Ψ_{dB})^{\dagger}/2^{{\bar{ \ell}}/2}\,\operatorname{tr}Ψ_{aA}Ψ_{bA}Ψ_{cA}Ψ_{dA}/2^{{\bar{\ell}}/2}(-1)^{ \frac{n_{b}+n_{d}}{2}+n_{bB}+n_{dB}}\]
\[\qquad(-1)^{(n_{aB}(n_{aB}-1)+n_{bB}(n_{bB}-1)+n_{cB}(n_{cB}-1)+n _{dB}(n_{dB}-1))/2}\]
\[=(-1)^{γ_{A}+γ_{B}}(-1)^{(n_{b}+n_{d})/2}(-1)^{(n_{aB}(n_{aB}-1)+ n_{bB}(n_{bB}+1)+n_{cB}(n_{cB}-1)+n_{dB}(n_{dB}+1))/2}\,.\] (162)
The traces now give \((-1)^{γ_{A}}\) and \((-1)^{γ_{B}}\), respectively. Inserted back into (V.2.2), these cancel, and the \(δ\) partially cancels the other phases,
\[\operatorname{tr}\left[\overline{ρ^{2}_{AD}}\right]=\frac{1}{2^{N /2}\sqrt{N!n_{1}!\cdots n_{abcd}!}}\sum_{\vec{n}}c_{\vec{n}}(t)\sum_{π\in S_{N }}π(-1)^{(n_{b}+n_{d})/2}(-1)^{(-n_{aB}+n_{bB}-n_{cB}+n_{dB})/2}π^{-1}\,.\] (163)
Again, this has the form (95) with suitable choices of \(\alpha\)’s. Then, according to Eq. (96) we obtain
\[|W_{S^{(2)}_{AD}(\bar{\ell})}\rangle\rangle=\frac{1}{2^{N/2}\sqrt{N!}}\left[a_{1}^{\dagger}+ia_{ab}^{\dagger}+ia_{ad}^{\dagger}+ia_{bc}^{\dagger}+ia_{cd}^{\dagger}-a_{abcd}^{\dagger}+a_{ac}^{\dagger}-a_{bd}^{\dagger}\right]^{\ell}\times\left[a_{1}^{\dagger}+ia_{ab}^{\dagger}+ia_{ad}^{\dagger}+ia_{bc}^{\dagger}+ia_{cd}^{\dagger}-a_{abcd}^{\dagger}-a_{ac}^{\dagger}+a_{bd}^{\dagger}\right]^{\bar{\ell}}\ket{\Omega}\,.\] (164)
Finally, the transformation to \(b\)-modes in (47) results in Eq. (60).
## References
* (1) J. M. Deutsch, Phys. Rev. A **43**, 2046 (1991).
* (2) M. Srednicki, Phys. Rev. E **50**, 888 (1994).
* (3) M. Rigol, V. Dunjko, and M. Olshanii, Nature **452**, 854 (2008).
* (4) L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, Adv. Phys. **65**, 239 (2016).
* (5) Y. D. Lensky and X.-L. Qi, arXiv:1805.03675 (2018).
* (6) P. Hosur and X.-L. Qi, Phys. Rev. E **93**, 042138 (2016).
* (7) T. Hartman and J. Maldacena, JHEP (2013) 014.
* (8) H. Liu and S. J. Suh, Phys. Rev. Lett. **112**, 011601 (2014).
* (9) P. Hosur, X.-L. Qi, D. A. Roberts, and B. Yoshida, JHEP (2016) 4.
* (10) K. A. Landsman, C. Figgatt, T. Schuster, N. M. Linke, B. Yoshida, N. Y. Yao, and C. Monroe, Nature **567**, 61 (2019).
* (11) G. Bentsen, Y. Gu, and A. Lucas, PNAS **116**, 6689 (2019).
* (12) M. C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer-Verlag New York, 1990).
* (13) H.-J. Stöckmann, Quantum Chaos: An Introduction (Cambridge University Press, 2007).
* (14) Y. Sekino and L. Susskind, JHEP (2008) 065.
* (15) N. Lashkari, D. Stanford, M. Hastings, T. Osborne, and P. Hayden, JHEP (2013) 22 .
* (16) P. Hayden and J. Preskill, JHEP (2007) 120.
* (17) S. H. Shenker and D. Stanford, JHEP (2014) 46.
* (18) A. Kitaev, talk “_Hidden correlations in the Hawking radiation and thermal noise_”, Fundamental Physics Prize Symposium (2014).
* (19) L. Susskind, Fortschritte Der Physik **64**, 24 (2016).
* (20) A. R. Brown, D. A. Roberts, L. Susskind, B. Swingle, and Y. Zhao, Phys. Rev. D **93**, 086006 (2016).
* (21) A. Kitaev, talk “_A simple model of quantum holography_”, KITP strings seminar and Entanglement 2015 program; http://online.kitp.ucsb.edu/online/entangled15/kitaev/.
* (22) S. Sachdev and J. Ye, Phys. Rev. Lett. **70**, 3339 (1993).
* (23) J. Maldacena and D. Stanford, Phys. Rev. D **94**, 106002 (2016).
* (24) J. Maldacena, D. Stanford, and Z. Yang, Prog. Theor. Exp. Phys. (2016), 12.
* (25) J. Polchinski and V. Rosenhaus, JHEP (2016) 1.
* (26) D. Bagrets, A. Altland, and A. Kamenev, Nucl. Phys. B **911**, 191 (2016).
* (27) R. A. Davison, W. Fu, A. Georges, Y. Gu, K. Jensen, and S. Sachdev, Phys. Rev. B **95**, 155131 (2017).
* (28) I. R. Klebanov and G. Tarnopolsky, Phys. Rev. D **95**, 046004 (2017).
* (29) Y. Gu, X.-L. Qi, and D. Stanford, JHEP (2017) 125.
* (30) X.-Y. Song, C.-M. Jian, and L. Balents, Phys. Rev. Lett. **119**, 216601 (2017).
* (31) D. Chowdhury, Y. Werman, E. Berg, and T. Senthil, Phys. Rev. X **8**, 031024 (2018).
* (32) J. Maldacena, S. H. Shenker, and D. Stanford, JHEP (2016) 106.
* (33) D. Bagrets, A. Altland, and A. Kamenev, Nucl. Phys. B **921**, 727 (2017).
* (34) A. Larkin and Y. N. Ovchinnikov, Sov Phys JETP **28** 1200 (1969).
* (35) S. H. Shenker and D. Stanford, JHEP (2014) 67.
* (36) D. A. Roberts, D. Stanford, and L. Susskind, JHEP (2015) 51.
* (37) D. A. Roberts, D. Stanford, and A. Streicher, JHEP (2018) 122.
* (38) D. A. Roberts and B. Swingle, Phys. Rev. Lett. **117**, 091602 (2016).
* (39) I. L. Aleiner, L. Faoro, and L. B. Ioffe, Ann. Phys. **375**, 378 (2016).
* (40) B. Swingle and D. Chowdhury, Phys. Rev. B **95**, 060201 (2017).
* (41) N. Yunger Halpern, Phys. Rev. A **95**, 012120 (2017).
* (42) A. A. Patel and S. Sachdev, PNAS **114**, 1844 (2017).
* (43) I. Kukuljan, S. Grozdanov, and T. Prosen, Phys. Rev. B **96**, 060301 (2017).
* (44) B. Dóra and R. Moessner, Phys. Rev. Lett. **119**, 026802 (2017).
* (45) N. Tsuji, P. Werner, and M. Ueda, Phys. Rev. A **95**, 011601 (2017).
* (46) C.-J. Lin and O. I. Motrunich, Phys. Rev. B **97**, 144304 (2018); C.-J. Lin and O. I. Motrunich, Phys. Rev. B **98**, 134305 (2018).
* (47) A. Smith, J. Knolle, R. Moessner, and D. L. Kovrizhin, arXiv:1812.07981 (2018).
* (48) S. Nakamura, E. Iyoda, T. Deguchi, and T. Sagawa, arXiv:1904.09778 (2019).
* (49) M. McGinley, A. Nunnenkamp, and J. Knolle, Phys. Rev. Lett. **122**, 020603 (2019).
* (50) Y. Huang, F. G. S. L. Brandão, and Y.-L. Zhang, Phys. Rev. Lett. **123**, 010601 (2019).
* (51) J. Chávez-Carlos, B. López-del-Carpio, M. A. Bastarrachea-Magnani, P. Stránský, S. Lerma-Hernández, L. F. Santos, and J. G. Hirsch, Phys. Rev. Lett. **122**, 024101 (2019).
* (52) A. Nahum, J. Ruhman, S. Vijay, and J. Haah, Phys. Rev. X **7**, 031016 (2017).
* (53) C. Sünderhauf, D. Pérez-García, D. A. Huse, N. Schuch, and J. I. Cirac, Phys. Rev. B **98**, 134204 (2018).
* (54) A. Nahum, S. Vijay, and J. Haah, Phys. Rev. X **8**, 021014 (2018).
* (55) C. W. von Keyserlingk, T. Rakovszky, F. Pollmann, and S. L. Sondhi, Phys. Rev. X **8**, 021013 (2018).
* (56) A. Chan, A. De Luca, and J. T. Chalker, Phys. Rev. X **8**, 041019 (2018); A. Chan, A. De Luca, and J. T. Chalker, Phys. Rev. Lett. **121**, 060601 (2018).
* (57) T. Rakovszky, F. Pollmann, and C. W. von Keyserlingk, Phys. Rev. X **8**, 031058 (2018).
* (58) V. Khemani, A. Vishwanath, and D. A. Huse, Phys. Rev. X **8**, 031057 (2018).
* (59) P. Kos, M. Ljubotina, and T. Prosen, Phys. Rev. X **8**, 021062 (2018).
* (60) B. Bertini, P. Kos, and T. Prosen, Phys. Rev. Lett. **121**, 264101 (2018).
* (61) N. Hunter-Jones, arXiv:1905.12053 (2019).
* (62) M. J. Gullans and D. A. Huse, Phys. Rev. X **9**, 021007 (2019).
* (63) T. Zhou and A. Nahum, Phys. Rev. B **99**, 174205 (2019).
* (64) A. J. Friedman, A. Chan, A. De Luca, and J. T. Chalker, arXiv:1906.07736 (2019).
* (65) J. Emerson, E. Livine, and S. Lloyd, Phys. Rev. A **72**, 060302 (2005).
* (66) J. Emerson, Y. S. Weinstein, M. Saraceno, S. Lloyd, and D. G. Cory, Science **302**, 2098 (2003).
* (67) O. C. O. Dahlsten, R. Oliveira, and M. B. Plenio, J. Phys. A: Math. Theor. **40**, 8081 (2007).
* (68) D. Gross, K. Audenaert, and J. Eisert, J. Math. Phys. **48**, 052104 (2007).
* (69) R. Oliveira, O. C. O. Dahlsten, and M. B. Plenio, Phys. Rev. Lett. **98**, 130502 (2007).
* (70) M. Žnidarič, Phys. Rev. A **76**, 012318 (2007).
* (71) M. Žnidarič, Phys. Rev. A **78**, 032324 (2008).
* (72) L. Arnaud and D. Braun, Phys. Rev. A **78**, 062329 (2008).
* (73) A. W. Harrow and R. A. Low, Commun. Math. Phys. **291**, 257 (2009).
* (74) W. G. Brown and L. Viola, Phys. Rev. Lett. **104**, 250501 (2010).
* (75) I. T. Diniz and D. Jonathan, Commun. Math. Phys. **304**, 281 (2011).
* (76) W. Brown and O. Fawzi, arXiv:1210.6644 (2012).
* (77) F. G. S. L. Brandao, A. W. Harrow, and M. Horodecki, Commun. Math. Phys. **346**, 397 (2016).
* (78) Y. Nakata, C. Hirche, M. Koashi, and A. Winter, Phys. Rev. X **7**, 021006 (2017).
* (79) S. Choi, Y. Bao, X.-L. Qi, and E. Altman, arXiv:1903.05124 (2019).
* (80) X.-L. Qi and A. Streicher, arXiv:1810.11958 (2018).
* (81) A. M. García-García and J. J. M. Verbaarschot, Phys. Rev. D **94**, 126010 (2016).
* (82) A. M. García-García and J. J. M. Verbaarschot, Phys. Rev. D **96**, 066012 (2017).
* (83) A. M. García-García, Y. Jia, and J. J. M. Verbaarschot, JHEP (2018) 146.
* (84) W. Fu and S. Sachdev, Phys. Rev. B **94**, (2016).
* (85) J. S. Cotler, G. Gur-Ari, M. Hanada, J. Polchinski, P. Saad, S. H. Shenker, D. Stanford, A. Streicher, and M. Tezuka, JHEP (2017) 118.
* (86) G. Gur-Ari, R. Mahajan, and A. Vaezi, JHEP (2018) 70.
* (87) O. Schnaack, S. Paeckel, S. R. Manmana, S. Kehrein, and M. Schmitt, arXiv:1808.05646 (2018).
* (88) E. Iyoda and T. Sagawa, Phys. Rev. A **97**, 042330 (2018).
* (89) S. Pappalardi, A. Russomanno, B. Žunkovič, F. Iemini, A. Silva, and R. Fazio, Phys. Rev. B **98**, 134303 (2018).
* (90) A. Seshadri, V. Madhok, and A. Lakshminarayan, Phys. Rev. E **98**, 052205 (2018).
* (91) P. Saad, S. H. Shenker, and D. Stanford, arXiv:1806.06840 (2018).
* (92) F. Haake, Quantum Signatures of Chaos , (Springer-Verlag, 2010).
* (93) P. Ribeiro, J. Vidal, and R. Mosseri, Phys. Rev. E **78**, (2008).
* (94) S. H. Shenker and D. Stanford, JHEP (2015) 132.
* (95) T. Zhou and X. Chen, Phys. Rev. E **99**, 052212 (2019).
* (96) S. Xu and B. Swingle, arXiv:1805.05376 (2018).
* (97) X. Chen and T. Zhou, arXiv:1808.09812 (2018).
* (98) H. Gharibyan, M. Hanada, S. H. Shenker, and M. Tezuka, JHEP (2018) 124.
* (99) K. R. Parthasarathy, An Introduction to Quantum Stochastic Calculus, Monographs in Mathematics 85, Birkhäuser (1992).
* (100) L. Banchi, D. Burgarth, and M. J. Kastoryano, Phys. Rev. X **7**, 041015 (2017).
* (101) E. Onorati, O. Buerschaper, M. Kliesch, W. Brown, A. H. Werner, and J. Eisert, Commun. Math. Phys. **355**, 905 (2017).
* (102) J. R. González Alonso, N. Yunger Halpern, and J. Dressel, Phys. Rev. Lett. **122**, 040404 (2019).
* (103) F. Iglói and I. Peschel, EPL **89**, 40001 (2010).
* (104) M. Fagotti and P. Calabrese, J. Stat. Mech. (2010) P04016.
* (105) Y. Gu, A. Lucas, and X.-L. Qi, JHEP (2017) 120.
* (106) T. Prosen and I. Pižorn, Phys. Rev. A **76**, 032316 (2007).
* (107) J. Dubail, J. Phys. A: Math. Theor. **50**, 234001 (2017).
|
1910.11355 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 70592,
"num_imgs": 0,
"llama3_tokens_count": 23659
} | [] | # Analytical amplitudes from numerical solutions of the scattering equations
Giuseppe De Laurentis
giuseppe.de-laurentis@durham.ac.uk
March 2, 2024
###### Abstract
The CHY formalism for massless scattering provides a cohesive framework for the computation of scattering amplitudes in a variety of theories. It is especially compelling because it elucidates existing relations among theories which are seemingly unrelated in a standard Lagrangian formulation. However, it entails operations that are highly non-trivial to perform analytically, most notably solving the scattering equations. We present a new Python package (seampy) to solve the scattering equations and to compute scattering amplitudes. Both operations are done numerically with high-precision floating-point algebra. Elimination theory is used to obtain solutions to the scattering equations for arbitrary kinematics. These solutions are then applied to a variety of CHY integrands to obtain tree amplitudes for the following theories: Yang-Mills, Einstein gravity, biadjoint scalar, Born-Infeld, non-linear sigma model, Galileon, conformal gravity and \((\text{DF})^{2}\). Finally, we exploit this high-precision numerical implementation to explore the singularity structure of the amplitudes and to reconstruct analytical expressions which make manifest their pole structure. Some of the expressions for conformal gravity and the \((\text{DF})^{2}\) gauge theory are new to the best of our knowledge.
Keywords: Scattering Amplitudes
IPPP/19/82
## 1 Introduction
The scattering equations (SE) first appeared in the literature in the context of string theory in the ’70s [1; 2; 3] and ’80s [4]. They were more recently rediscovered by Cachazo, He and Yuan (CHY) in a series of pioneering papers [5; 6; 7] demonstrating that the SE provide a set of algebraic equations that are key to an alternative formulation of scattering amplitudes at tree level in \(d\) dimensions. Shortly afterwards, this framework was proven to reproduce the correct results for \(ϕ^{3}\) and Yang-Mills [8], to generalise to loop level [9; 10], and to arise naturally from a worldsheet theory called the ambitwistor string [11].
In this alternative QFT formulation the kinematic information of the scattering process is encoded in a set of variables describing the location of punctures on the Riemann sphere. The locations of the punctures are related to the external momenta by the SE. Tree-level amplitudes are obtained by integrating over the position of the punctures on the Riemann sphere, while removing a redundancy coming from Möbius transformations, and imposing the solution of the SE. Alternatively, this integral can be recast as a contour integral around the punctures of the Riemann sphere. The rest of the integrand (called the CHY-integrand) depends on the chosen theory and it has the nice feature of making manifest relations that are hidden in a standard Lagrangian formulation. For instance, the CHY-integrands for Yang-Mills, Einstein gravity and biadjoint scalar theory closely match the KLT relations [12; 13].
The main bottleneck for the study of QFTs following this approach is the factorial growth of the number of solutions to the SE. In general, after accounting for the Möbius redundancy, the CHY formulae are supported on \((n-3)!\) solutions of the SE. More specifically, at three-point there are no free punctures, at four-point the SE have a single rational solution, and at five-point there are two irrational solutions. At six-point there are six irrational solutions, which have been shown to be still algebraic in \(d=4\) [14]. Starting at seven-point in \(d=4\), and at six-point for general \(d\) dimensions, the solutions cannot be expressed in terms of radicals. At the same time, tree-level amplitudes are rational functions of the external kinematics for any phase-space multiplicity. Clearly some non-trivial simplification has to occur.
There also exist formulae specific to \(d=4\) based on the scattering equations refined by MHV degree [15; 16; 17]. In this case the counting is different and the number of solutions corresponds to the Eulerian numbers.
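As a quick standalone check of this counting (our own snippet, not part of the packages discussed below), the Eulerian numbers of order \(n-3\) indeed sum to \((n-3)!\), the total number of solutions quoted above; the sector-by-sector identification is as in Refs. [15; 16; 17]:

    from math import factorial

    def eulerian(n, k):
        """Eulerian number <n, k> from the standard recurrence."""
        if k < 0 or k > max(n - 1, 0):
            return 0
        if n == 0:
            return 1
        return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

    for n in range(4, 10):
        counts = [eulerian(n - 3, k) for k in range(n - 3)]
        assert sum(counts) == factorial(n - 3)
        print(n, counts)   # e.g. n = 6 gives [1, 4, 1], summing to 3! = 6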
An intriguing solution found in the literature [18; 19] to this factorial growth problem is to obtain the sum of residues from the integral yielding the amplitude without explicitly finding the position of the poles. This powerful approach makes the rationality of the amplitude manifest even when the punctures are irrational. However, as the analytical complexity grows with the multiplicity of the scattering process, even this approach seems to require some form of numerical or semi-numerical reconstruction in order to achieve an analytical expression for the amplitude.
In this paper we develop a purely numerical approach, followed by an analytical reconstruction with the strategy of Ref. [20]. To perform this reconstruction we need an implementation of the CHY formulae which is both sufficiently stable in singular limits and that yields amplitudes with enough numerical precision. We provide code that satisfies these criteria in a Python package which we called seampy (from “Scattering equations and amplitudes with Python”).
A publicly available package to compute amplitudes within the CHY framework had already been presented in Ref. [21]. It is based on the scattering equations refined by MHV degree. However, it was not designed to provide amplitudes with the high precision needed by our reconstruction strategy. Furthermore, although the reconstructed analytical expressions we present in section 4 are specific to \(d=4\), our package provides numerical solutions to the SE in general \(d\) dimensions.
This article is organised as follows. In section 2 we review parts of the CHY formalism, in particular the polynomial form of the scattering equations [22], their solutions by means of elimination theory [23; 24], and a variety of CHY-integrands [25; 26]. The algorithm for solving the SE and the CHY-integrands are implemented in the Python package provided, which is presented in section 3. It provides high-precision floating-point solutions to the SE and numerical amplitudes. In section 4 we make use of this technology to explore the singularity structure of amplitudes and reconstruct explicit expressions in the \(d=4\) spinor helicity language. Finally, in section 5 we give our conclusions and outlook.
## 2 The CHY formalism
In this section we briefly review the theory underlying the CHY formalism, and in the next section we present its implementation in a Python library. For a more thorough introduction to the subject, with explicit step-by-step derivations, please consider Ref. [27] and the references therein.
Let us consider the tree-level scattering of \(n\) massless particles in \(d\) dimensions. We denote with \(A\) the set \(\{1,\dots,n\}\), with \(k^{\mu}_{a}\) (\(a\in A\)) the \(n\) momenta, and with \(z_{a}\) the \(n\) special points of the Riemann sphere called punctures. The map from momentum space to the Riemann sphere, as defined in Ref. [5], is given by
\[k^{\mu}_{a}=\frac{1}{2\pi i}\oint_{|z-z_{a}|=\epsilon}dz\frac{p^{\mu}(k,z)}{ \prod_{b\in A}(z-z_{b})}\;,\] (1)
where \(p^{\mu}(z)\) are \(d\) polynomials with coefficients depending on the momenta and the punctures. The contour is taken to encircle the punctures.
From Eq. (1) it can be shown, as a consistency condition, that the following equations have to be satisfied
\[f_{a}(z,k)\equiv\sum_{b\in A\backslash\{a\}}\frac{k_{a}\cdot k_{b}}{z_{a}-z_{b }}=0,\qquad\forall a\in A\;,\] (2)
these are the so-called _scattering equations_. As previously mentioned, the SE are invariant under Möbius transformations \(\text{SL}(2,\mathbb{C})\), that is under the following mapping
\[z\rightarrow\zeta=\frac{\alpha z+\beta}{\gamma z+\delta}\;.\] (3)
Because Eq. (3) has effectively three free complex parameters, we can fix the position of three of the \(n\) punctures. A common choice in the literature, which we follow throughout this work, is given by
\[z_{1}=\infty,\quad z_{2}=1,\quad z_{n}=0\;.\] (4)
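As a small self-contained illustration (a sympy sketch of ours, independent of the package introduced later), one can solve explicitly for the map of Eq. (3) that sends three chosen punctures to the values of Eq. (4); the overall parameter \(\alpha\) cancels out, which is the statement that only three of the four parameters in Eq. (3) are independent:

    import sympy as sp

    a, b, c, d, z = sp.symbols('alpha beta gamma delta z')
    za, zb, zc = sp.symbols('z_a z_b z_c')   # the punctures to be sent to oo, 1, 0

    mobius = (a*z + b) / (c*z + d)
    # zeta(za) = oo  <=>  c*za + d = 0;  zeta(zb) = 1;  zeta(zc) = 0  <=>  a*zc + b = 0
    fix = sp.solve([c*za + d, sp.Eq(a*zb + b, c*zb + d), a*zc + b],
                   [b, c, d], dict=True)[0]
    print(sp.simplify(mobius.subs(fix)))
    # alpha has cancelled, e.g. (z - z_c)*(z_a - z_b)/((z - z_a)*(z_c - z_b))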
Scattering amplitudes for \(n\) massless particles \(A_{n}\) are then obtained by integrating a CHY-integrand \(I_{ CHY}\) over the solutions of the SE. This can be achieved either with a normal integral over delta functions, or as a contour integral over the SE. The prescription reads
\[\textit{A}_{n} =\,i\int\frac{d^{n}z}{d^{3}\omega}\;\;I_{ CHY}( z;k;\epsilon)\prod_{a\in A}\,\mskip-8.0mu ^{{}^{\prime}}\delta(f_{a}(z,k))\] (5)
\[=\,i\oint_{\textit{O}}\frac{d^{n}z}{d^{3}\omega}\;\;I_{ CHY}(z;k;\epsilon)\prod_{a\in A}\,\mskip-8.0mu ^{{}^{\prime }}\frac{1}{f_{a}(z,k)}\;,\] (6)
where the Möbius measure \(d\omega\) and the modified product symbol \(\prod\,\mskip-8.0mu ^{\prime}\) are defined as
\[d^{3}\omega=\frac{dz_{r}\,dz_{s}\,dz_{t}}{(z_{r}-z_{s})(z_{s}-z_{t})(z_{t}-z_{r})}\;,\] (7)
\[\prod_{a\in A}\,\mskip-8.0mu ^{{}^{\prime}}=(z_{i}-z_{j})(z_{j}-z_{k})(z_{k}-z _{i})\prod_{a\in A\backslash\{i,j,k\}}\;.\] (8)
By substituting Eq. (7) and Eq. (8) back into Eq. (5) or Eq. (6) it can be shown that the amplitude \(A_{n}\) is invariant under Möbius transformations. Note that in principle the sets \(\{i,j,k\}\) and \(\{r,s,t\}\) are independent, but in practice they are often taken to be the same for convenience's sake. Clearly the requirement of Möbius invariance also imposes a restriction on the valid CHY-integrands \(I_{ CHY}\), as we will see shortly.
We would like to use a purely algebraic approach, as it is more amenable to implementation as computer code. To achieve this we can recast Eq. (5) from an integral to a summation by changing variables from the punctures \(z_{a}\) to the scattering equations \(f_{a}\). This introduces a Jacobian factor, i.e. the determinant of the Jacobian matrix defined as
\[ϕ_{ab}\,=\frac{∂f_{a}}{∂z_{b}}=\begin{cases}\frac{2k_{a}\cdot k_{b}}{(z_{a}-z_ {b})^{2}}&a\neq b\;,\\ -\sum\limits_{j\in A\backslash\{a\}}\frac{2k_{a}\cdot k_{j}}{(z_{a}-z_{j})^{2} }&a=b\;.\end{cases}\] (9)
Again, in the spirit of preserving Möbius invariance, since we have removed punctures \(i\), \(j\), and \(k\) from the above \(\delta\)-function, we also have to remove the corresponding rows from the Jacobian. Similarly, we are not integrating over \(r\), \(s\) and \(t\), and therefore those columns have to be removed as well. The matrix of Eq. (9) with rows \(i\), \(j\), \(k\) and columns \(r\), \(s\), \(t\) removed is denoted by \(ϕ_{rst}^{ijk}\). In the end, the relevant Jacobian for the change of variables, which is independent of the Möbius fixing choice, is given by
\[\mathit{J}=\frac{(z_{i}-z_{j})(z_{j}-z_{k})(z_{k}-z_{i})(z_{r}-z_{s})(z_{s}-z_ {t})(z_{t}-z_{r})}{\det(ϕ_{rst}^{ijk})}\;.\] (10)
If we impose the choice made in Eq. (4), we have
\[\{i,j,k\}=\{r,s,t\}=\{z_{1},z_{2},z_{n}\}=\{\infty,1,0\}\;.\] (11)
We now write Eq. (5) for the scattering amplitudes as
\[\textit{A}_{n}\,=\,z_{1}^{4}\cdot i\sum_{j=1}^{(n-3)!}\frac{I_{ CHY}(z^{(j)}(k);k;ϵ)}{\det(ϕ_{rst}^{ijk})(z^{(j)}(k);k)}\;,\] (12)
where \(j\) labels the solutions to the SE, given by the sets of punctures \(z^{(j)}\), which are themselves functions of the momenta \(k\). Note that, because of Eq. (10) and our choice Eq. (11), the Jacobian \(\mathit{J}\) introduces the four powers of \(z_{1}=\infty\) in the numerator. Therefore, \(I_{ CHY}\) must come with four powers of \(z_{1}\) in the denominator for Eq. (12) to be sensible. This is a check of Möbius invariance.
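To make the ingredients of Eq. (9), Eq. (10) and Eq. (12) concrete, the following minimal numpy sketch of ours (not part of the package presented below) builds \(ϕ_{ab}\) from numerical punctures and dot products and deletes the rows and columns \(\{i,j,k\}=\{r,s,t\}\). With the choice of Eq. (11) the retained entries depend on \(z_{1}\) only through the diagonal sums, where the corresponding term vanishes as \(z_{1}\to\infty\), so passing a large finite \(z_{1}\) reproduces the limit.

    import numpy as np

    def phi_reduced(z, s, removed=(0, 1, -1)):
        """phi_ab of Eq. (9) with the rows/columns in `removed` deleted;
        z[a] are the punctures and s[a][b] = 2 k_a . k_b.  The determinant
        of the returned matrix is the one entering Eq. (12)."""
        n = len(z)
        phi = np.zeros((n, n), dtype=complex)
        for a in range(n):
            for b in range(n):
                if a != b:
                    phi[a, b] = s[a][b] / (z[a] - z[b]) ** 2
            phi[a, a] = -sum(s[a][j] / (z[a] - z[j]) ** 2
                             for j in range(n) if j != a)
        keep = [a for a in range(n) if a not in {r % n for r in removed}]
        return phi[np.ix_(keep, keep)]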
### Polynomial form of the SE and their solutions
We now turn to the problem of actually finding the solutions to the SE. It is easiest to consider the SE in the form found in Ref. [22], where the SE are reformulated as \(n-3\) polynomial equations. We can then follow Ref. [23; 24] in using an elimination theory algorithm to find the solutions.
The SE in polynomial form, which are equivalent to the original SE of Eq. (2), are given by
\[h_{m}=\sum_{S\subset A^{\prime},\,|S|=m}k^{2}_{S_{1}}z_{S}=0\,,\:\:\mbox{with} \:\:1\leq m\leq n-3\,,\] (13)
where the sets \(A^{\prime}\) and \(S_{1}\) are defined as
\[A^{\prime}=A\backslash\{1,n\}\,,\:\:S_{1}=S\cup\{1\}\] (14)
and where \(k_{S}\) and \(z_{S}\) are defined as
\[\quad k_{S}=\sum_{b\in S}k_{b}\:\:\:\mbox{and}\:\:\:z_{S}=\prod_{b\in S}z_{b}\;.\] (15)
In the above \(z_{1}\) and \(z_{n}\) have already been set to \(\infty\) and \(0\) respectively, but \(z_{2}\) is still kept free.
This is a system of \(n-3\) polynomial equations (\(h_{1\leq m\leq n-3}\)) in \(n-2\) variables (\(z_{2\leq i\leq n-1}\)). As such it can be solved by using an elimination theory algorithm. The idea underpinning elimination theory is to express the system of equations in matrix form and to introduce more variables and equations until the system is over-specified and yields a consistency condition in the form of \(\det(M_{n})=0\). Here we are going to discuss directly the general \(n\) case. A more detailed discussion can be found in the original papers of Ref. [23; 24] or in Ref. [27].
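Before turning to the general algorithm, it is instructive to work out the smallest non-trivial case explicitly. The following sympy sketch of ours (independent of the package presented below) writes down \(h_{1}\) and \(h_{2}\) for \(n=5\) with \(z_{2}=1\), using \(k_{123}^{2}=s_{45}\), \(k_{124}^{2}=s_{35}\) and \(k_{134}^{2}=s_{25}\) from momentum conservation, and solves the system directly, finding the expected \((5-3)!=2\) solutions:

    import sympy as sp

    z3, z4 = sp.symbols('z3 z4')
    s12, s13, s14, s45, s35, s25 = sp.symbols('s12 s13 s14 s45 s35 s25')

    # Eq. (13) at n = 5 with z1 = oo, z2 = 1, z5 = 0
    h1 = s12 + s13*z3 + s14*z4
    h2 = s45*z3 + s35*z4 + s25*z3*z4

    sols = sp.solve([h1, h2], [z3, z4], dict=True)
    print(len(sols))   # 2 = (5-3)!; the two solutions involve a square root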
In general, the aim is to obtain an equation of order \((n-3)!\) in the ratio \(z_{n-1}/z_{n-2}\). The original set of \(2^{n-4}\) monomials we wish to eliminate is given by
\[V^{T}=\{1,z_{2}\}\times\{1,z_{3}\}\times\,...\,\times\{1,z_{n-3}\}\;.\] (16)
We introduce an auxiliary set
\[W^{T}=\{1\}\times\{1,z_{3}\}\times\{1,z_{4},z_{4}^{2}\}\times\,...\,\times\{1, z_{n-3},...,z_{n-3}^{n-5}\}\;,\] (17)
which contains \((n-4)!\) terms. The new set of monomials is then given by
\[V^{T}\to V^{T}\times W^{T}=\{1,z_{2}\}\times\{1,z_{3},z_{3}^{2}\} \times\,...\,\times\{1,z_{n-3},...,z_{n-3}^{n-4}\}\;,\] (18)
which is of length \((n-3)!\). Similarly, the new \((n-3)!\) equations are given by
\[H^{T}\to H^{T}\times W^{T}\;,\] (19)
where \(H^{T}\) denotes the vector of polynomial scattering equations \(h_{1\leq m\leq n-3}\).
This procedure ensures that the number of monomials matches the number of equations, thus allowing the system to be expressed in matrix form.
Then, by taking partial derivatives of the entries of the extended \(H\) of Eq. (19) w.r.t. those of the extended \(V\) of Eq. (18), we could construct the \((n-3)!\times(n-3)!\) matrix \(M_{n}\) whose determinant is the required equation. However, this is not necessary in practice since the matrix \(M_{n}\) can be built recursively in a block-matrix format starting directly from the original set \(h_{1\leq m\leq n-3}\) and their derivatives w.r.t. \(z_{2\leq i\leq n-3}\). We denote the derivatives with superscripts (\(M^{z}=\partial_{z}M\)) and we have
\[M_{i}=\left(\begin{array}[]{ccccccc}M_{i-1}&M_{i-1}^{z_{i-3}}&0&\dots&0&0\\ 0&M_{i-1}&M_{i-1}^{z_{i-3}}&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\dots&M_{i-1}&M_{i-1}^{z_{i-3}}\\ \end{array}\right),\quad M_{4}=H,\quad H=\left(\begin{array}[]{c}h_{1}\\ h_{2}\\ \vdots\\ h_{n-3}\\ \end{array}\right)\;,\] (20)
with \(M_{i}\) of dimensions \((i-4)\times(i-3)\) when written in terms of \(M_{i-1}\). After the derivative is taken \(z_{i-3}\) is set to zero. \(M_{n}\) is then a function of \(z_{n-1}\) and \(z_{n-2}\) only, the required equation of order \((n-3)!\) in \(z_{n-1}/z_{n-2}\) is simply \(\det(M_{n})=0\), and its roots are the solutions we seek. Note that, as discussed in the introduction, it is feasible to perform this root-finding step analytically only for low phase space multiplicities.
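Continuing the \(n=5\) example from above, the elimination step of Eq. (20) amounts to \(M_{5}=(M_{4}\,|\,M_{4}^{z_{2}})\) with \(z_{2}\to 0\), and \(\det(M_{5})=0\) is indeed of order \((5-3)!=2\) in the ratio \(z_{4}/z_{3}\). A few lines of sympy (again a sketch of ours) make this explicit:

    import sympy as sp

    z2, z3, z4 = sp.symbols('z2 z3 z4')
    s12, s13, s14, s45, s35, s25 = sp.symbols('s12 s13 s14 s45 s35 s25')

    h1 = s12*z2 + s13*z3 + s14*z4
    h2 = s45*z2*z3 + s35*z2*z4 + s25*z3*z4
    H = sp.Matrix([h1, h2])                   # M_4 = H

    M5 = sp.Matrix.hstack(H, H.diff(z2)).subs(z2, 0)
    eq = sp.expand(M5.det())                  # homogeneous of degree 2 in z3, z4
    print(sp.degree(eq.subs(z3, 1), z4))      # 2 = (5-3)!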
Clearly we are not at the end of the calculation yet, because we want values or expressions for the punctures themselves, not for ratios. This is achieved by reintroducing one variable at a time in \(M\). More explicitly, we first check with Eq. (18) the position in the vector of the variable \(\tilde{z}\) we want to reintroduce (say it is the \(j^{th}\) entry), then we add \(\tilde{z}\) times the \(j^{th}\) column of \(M\) to its first column, and eventually remove the \(j^{th}\) column and the last row. This leads to a matrix of size \(((n-3)!-1)\times((n-3)!-1)\) whose determinant will be a linear equation for \(\tilde{z}\). There is one notable exception to this procedure, namely when \(\tilde{z}=z_{2}\) we set \(z_{2}=1\) and get a linear equation for \(z_{n-2}\) instead.
Finally, we are left with \((n-3)!\) sets of punctures \(\{z_{1}=\infty,\,z_{2}=1,\,z_{3},\,\dots\,z_{n-1},\,z_{n}=0\}\) that solve the scattering equations.
### CHY-integrands
So far we have treated the theory-independent part of Eq. (12). Now we consider the theory-dependent term \(I_{ CHY}\). It can be built in a modular way from various building blocks. Here we review the definition of some of those building blocks found in Ref. [25] and in Ref. [26] which we have implemented in the Python package presented in the next section.
Starting from the building blocks that are matrices, we have the \(2n\times 2n\) anti-symmetric matrix \(\Psi\) which is defined block-wise in terms of two \(n\times n\) anti-symmetric matrices \(A\) and \(B\) and in terms of a third \(n\times n\) matrix \(C\). The definitions follow.
\[\Psi=\left(\begin{array}[]{cc}A&-C^{T}\\ C&B\end{array}\right),\quad A_{ab}=\begin{cases}\frac{2k_{a}\cdot k_{b}}{(z_{a }-z_{b})}&a\neq b\;,\\ 0&a=b\;,\end{cases}\] (21)
\[B_{ab}=\begin{cases}\frac{2ϵ_{a}\cdot ϵ_{b}}{(z_{a}-z_{b})}&a\neq b\;,\\ 0&a=b\;,\end{cases}\quad C_{ab}=\begin{cases}\frac{2ϵ_{a}\cdot k_{b}}{(z_{a}-z _{b})}&a\neq b\;,\\ -\sum\limits_{j\in A\backslash\{a\}}\frac{2ϵ_{a}\cdot k_{j}}{(z_{a}-z_{j})}&a= b\;.\end{cases}\] (22)
Since these are matrices we have to define an operation which converts them to a rank-one object before we can use them to construct \(I_{ CHY}\). In the case of anti-symmetric matrices the determinant can be written as a square of a polynomial in the matrix entries. This polynomial is called the Pfaffian and it was shown to be the correct operation to perform. More specifically, since the matrix \(\Psi\) has two null vectors and its Pfaffian would be zero, it is necessary to define a reduced Pfaffian \(\text{PF}^{\prime}\) as
\[\text{PF}^{\prime}(\Psi)=\frac{(-1)^{i+j}}{z_{i}-z_{j}}\text{PF}(\Psi_{ij}^{ij })\;,\] (23)
where \(\Psi_{ij}^{ij}\) again denotes deletion of rows and columns \(i\) and \(j\). The same reduction applies also to different arguments, such as the matrix \(A\).
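Since the Pfaffian may be less familiar than the determinant, the following small numpy sketch of ours (not the implementation used in the package) computes it by expansion along the first row and checks the defining property \(\text{PF}(A)^{2}=\det(A)\) on a random anti-symmetric matrix:

    import numpy as np

    def pfaffian(A):
        """Pfaffian of an even-dimensional anti-symmetric matrix, by Laplace-type
        expansion along the first row (fine for the small matrices relevant here)."""
        n = A.shape[0]
        if n == 0:
            return 1.0
        if n % 2:
            return 0.0
        total = 0.0
        for j in range(1, n):
            keep = [k for k in range(n) if k not in (0, j)]
            total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
        return total

    B = np.random.rand(6, 6)
    A = B - B.T                      # random 6x6 anti-symmetric matrix
    assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))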
We also consider two scalar building blocks \(C_{n}\) and \(W_{1}\). \(C_{n}\) is a cyclic Parke-Taylor-like factor simply defined as
\[C_{n}=\frac{1}{(z_{1}-z_{2})\dots(z_{n}-z_{1})}\;,\] (24)
and the \(W_{1}\) function is defined as
\[W_{1}=∏_{i\in A}ω_{i}\;,\quad\text{with}\quad ω_{i}=\sum\limits_{j\in A \backslash\{i\}}\frac{ϵ_{i}\cdot k_{j}\,(z_{j}-z_{r})}{(z_{r}-z_{i})(z_{i}-z_{ j})}\;,\;\;r≠i.\] (25)
\(I_{ CHY}\) is built from products of pairs of these building blocks. A more detailed analysis reveals that \(\text{PF}^{\prime}(\Psi)\), \(C_{n}\) and \(W_{1}\) come with a factor of \(z_{1}^{-2}\), while \(\text{PF}^{\prime}(A)\) comes with a factor of \(z_{1}^{-1}\). This dictates which combinations are allowed by Möbius invariance (recall that overall we need four powers of \(z_{1}\) to balance out those in Eq. (12)).
Table 1 summarises the theories that can be built out of \(\text{PF}^{\prime}(\Psi)\), \(C_{n}\), \(\text{PF}^{\prime}(A)^{2}\) and \(W_{1}\): EG stands for Einstein Gravity, YM for Yang-Mills, BS for Biadjoint Scalar, BI for Born-Infeld, NLSM for Non Linear Sigma Model and CG for Conformal Gravity. The theories labelled with a question mark do not seem to have an agreed upon name, but they are discussed in the reference from which the \(W_{1}\) function is taken.
× | \(\text{PF}^{\prime}(\Psi)\) | \(C_{n}\) | \(\text{PF}^{\prime}(A)^{2}\) | \(W_{1}\)
---|---|---|---|---
\(\text{PF}^{\prime}(\Psi)\) | EG | YM | BI | CG
\(C_{n}\) | YM | BS | NLSM | \((\text{DF})^{2}\)
\(\text{PF}^{\prime}(A)^{2}\) | BI | NLSM | Galileon | ?
\(W_{1}\) | CG | \((\text{DF})^{2}\) | ? | ?
Table 1: Possible QFTs built out of \(\text{PF}^{\prime}(\Psi)\), \(C_{n}\), \(\text{PF}^{\prime}(A)^{2}\) and \(W_{1}\). A product of the row and column entries is implied, e.g. \(I_{\text{CHY},\,\text{EG}}=\text{PF}^{\prime}(\Psi)\times\text{PF}^{\prime}(\Psi)\).
This is by no means a complete account of all possible integrands \(I_{ CHY}\), but it is sufficient to illustrate the framework. Also note that, as anticipated in the introduction, relations among theories through double copies are now manifest in the structure of the integrands.
Finally, remember that the CHY-integrands are not unique. For instance, a different integrand for conformal gravity is given in Ref. [28].
## 3 Python libraries
In this section we introduce two new packages developed in Python 2.7:
* seampy (Scattering equations and amplitudes with Python),
* lips (\(d=4\) Lorentz invariant phase space).
The former provides high-precision floating-point solutions to the scattering equations in \(d\) dimensions and a variety of numerical scattering amplitudes built from their solutions. The latter is used to manipulate and pass a high-precision phase space point as input to the numerical amplitude.
Both packages are available on the Python Package Index. The source code is available on github and the documentation on the associated github pages for seampy and lips. Their installation is straightforward thanks to pip:

    pip install --upgrade seampy  # this installs lips as well
    pip install --upgrade lips    # but it can be installed separately
The same commands can be used to update the libraries. The --upgrade option ensures that the latest version is always used. A review of the key features of these packages is now provided. Further examples are given in section 4 and in the appendices A and B.
#### Solving the scattering equations
In this section we show how to easily obtain solutions for the scattering equations. All the following examples have \(n=6\).
The SE in polynomial form as in Eq. (13) can be accessed as follows:

    >>> hms(6)

This returns the three symbolic polynomials \(h_{1}\), \(h_{2}\) and \(h_{3}\) of Eq. (13).
They are functions of the punctures and of the Mandelstam invariants, which are given here as they appear in the SE:

    >>> punctures(6)
    (z₁, z₂, z₃, z₄, z₅, z₆)
    >>> mandelstams(6)
    (s₁₂, s₁₃, s₁₄, s₁₅, s₁₂₃, s₁₂₄, s₁₂₅, s₁₃₄, s₁₃₅, s₁₄₅, s₁₂₃₄, …)
The SE can be solved by calling the function solve_scattering_equations. It requires two inputs: the multiplicity of the phase space, n, and a Python dictionary with the numerical values for the Mandelstam invariants, num_ss. We therefore need a phase space point. This is easily done through the lips toolkit object Particles, which generates a random phase space point:

    >>> oPs = Particles(6)  # arg. is multiplicity of phase space
    >>> num_ss = {str(s): oPs.compute(str(s)) for s in mandelstams(6)}
Alternatively, it is possible to set the momenta from a list, by modifying the four_mom attribute of each Particle in the list subclass Particles, or to provide an independently constructed set of Mandelstam invariants. More on this in appendix A.
We can then solve the scattering equations by calling:

    >>> sols = solve_scattering_equations(6, num_ss)
The output, sols, is a list of length \((n-3)!\), in this case 6. Each solution in the list is a dictionary for the punctures that are not arbitrarily fixed, in this case of the form:

    >>> sols[0]
    {'z3': mpc(real='#nbr', imag='#nbr'),
     'z4': mpc(real='#nbr', imag='#nbr'),
     'z5': mpc(real='#nbr', imag='#nbr')}
where each '#nbr' has by default 300 digits of precision.
#### Computing scattering amplitudes
First of all we can list the theories directly available for computation:

    >>> theories
    [YM, EG, BS, BI, NLSM, Galileon, CG, DF2]
To calculate an amplitude we need to generate a phase space point, as in the example for the solutions of the scattering equations:

    >>> oParticles = Particles(6)  # arg. is multiplicity of phase space
We then need to declare what quantity we want to compute. This requires us to specify a theory and a multiplicity. For example, biadjoint scalar theory (BS) amplitudes or non-linear sigma model (NLSM) amplitudes can be accessed as follows:

    >>> oBSAmp = NumericalAmplitude(theory='BS', multiplicity=6)
    >>> oNLSMAmp = NumericalAmplitude(theory='NLSM', multiplicity=6)
Gauge and gravity theories also require a helicity configuration to be specified (the multiplicity is then deduced from it). Note that for gravity theories we suppress the repeated helicity sign, since we don't have mixed cases such as dilatons. This means that in the following code snippet for conformal gravity (CG) helconf='pmpmpm' stands for \(1^{++}2^{--}3^{++}4^{--}5^{++}6^{--}\).

    >>> oDFAmp = NumericalAmplitude(theory='DF2', helconf='pmpmpm')
    >>> oCGAmp = NumericalAmplitude(theory='CG', helconf='pmpmpm')
It is then simply a matter of evaluating any amplitude at the phase space point:

    >>> oBSAmp(oParticles)
    mpc(real='#nbr', imag='#nbr')
Since most of these helicity amplitudes come with pre-factors of \(\sqrt{2}\), we decided to normalise them in such a way that numerical coefficients in analytical expressions are rational fractions and often simply the imaginary unit. This also allows for easier comparison to other codes, which usually adopt such normalisations. For instance, in the case of Yang-Mills amplitudes the right hand side of Eq. (12) is multiplied by \(1/(\sqrt{2})^{n-2}\), so that the numerical coefficient in the Parke-Taylor expression for MHV amplitudes is \(i\) instead of \((\sqrt{2})^{n-2}i\), where \(n\) is the multiplicity of the process.
#### Validations
A first validation of the code is to check the solutions of the scattering equations. This is simply a matter of inserting each of the solutions back into the polynomial SE and checking that they vanish to working precision. This can easily be done in practice:

    >>> sol = solve_scattering_equations(n, num_ss)[0]
    >>> simplify(hms(n).subs(sol).subs(num_ss).subs({punctures(n)[1]: 1}))
    [10 ** -290, 10 ** -290, 10 ** -290]  # for n = 6 there are 3 SE
Additional checks that don't require independent implementations of amplitudes include checking the little group scalings, mass dimensions, pole structure (more on this in section 4) or properties such as color ordering. For instance, as a sanity check, we can see that \((\text{DF})^{2}\) is color ordered whereas conformal gravity is not. This is shown in the following snippet (we are still using the helconf='pmpmpm' amplitudes declared above):

    >>> oNewParticles = oParticles.image("321456")  # swap momenta 1 & 3
    >>> abs(oCGAmp(oParticles) - oCGAmp(oNewParticles)) < 10 ** -270
    True
    >>> abs(oDFAmp(oParticles) - oDFAmp(oNewParticles)) < 10 ** -270
    False
However, picking the correct cyclic permutation of the external legs leaves the \((\text{DF})^{2}\) amplitude unchanged as well.
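As indicated above, the cyclic property can be probed in the same way. A minimal check, assuming that image accepts a cyclic relabelling string in the same format as the transposition used above (a shift by two legs preserves the pmpmpm helicity assignment), would read:

    >>> oCyclicParticles = oParticles.image("345612")  # cyclic shift by two legs
    >>> abs(oDFAmp(oParticles) - oDFAmp(oCyclicParticles)) < 10 ** -270  # expected: True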
Finally, the most stringent tests come from comparing to independent libraries. We have checked all pure gluon (Yang-Mills) tree amplitudes at 3, 4, 5, 6, and 7 point against BlackHat [29], and Yang-Mills, Einstein and conformal gravity against the code of Ref. [21]. They all match, that is, they agree up to at most a normalisation factor fixed by convention.
## 4 Analytical reconstruction
We now consider how to recover analytical expressions for the tree-level scattering amplitudes discussed so far. There are several reasons why analytical expressions are preferable to numerical ones, such as execution speed, numerical stability and general understanding of their analytical structure. The same reconstruction technique can be applied to all the theories from Table 1. In the accompanying files we provide sample analytical amplitudes for all these theories up to six point. The results are given both in human readable format and as expressions readable by the S@M Mathematica package [30].
In this section, we are going to explicitly discuss only the reconstruction of \((\text{DF})^{2}\) and conformal gravity amplitudes, since they are the ones with a less well known analytical structure and therefore the most interesting to analyse. These theories are related by a double copy relation, similar to that between Yang-Mills and Einstein gravity, namely: \((\text{DF})^{2}\times\text{YM}∼CG\). \((\text{DF})^{2}\) and conformal gravity present issues with renormalisability and unitarity, since for instance \((\text{DF})^{2}\) is built out of dimension-six operators, as implied by the name. Despite this, they are of interest for a few reasons. Namely, one type of conformal gravity arises in the Berkovits-Witten twistor string [31], it is the zero-mass limit of a mass-deformed theory that reproduces Einstein gravity in the infinite-mass limit [32], and it may be useful for computing Einstein gravity amplitudes in curved backgrounds for cosmological applications [33; 34].
More specifically, in the following paragraphs we are going to provide: a) the first complete set of five-point \((\text{DF})^{2}\) amplitudes (one of which we could confirm numerically against that found in Ref. [35]); b) an alternative expression to that of Ref. [31] for the five-point MHV conformal gravity amplitude; c) results for the leading three-particle singularities of the six-point amplitudes in the MHV and NMHV helicity sectors. All the amplitudes we present are written in the spinor helicity language and are free from spurious singularities, unless explicitly stated. We think that, in order to obtain similar complete results at six point, it could be necessary to use spurious singularities, which introduces a further complication in the analysis.
We make use of the high floating-point precision provided by seampy and follow the strategy introduced in Ref. [20]. Briefly summarised, we study the behaviour of amplitudes in singular limits of complex phase space to obtain the poles and their degree. We then study the amplitudes in doubly singular regions to obtain information about the structure of the denominators of the amplitude. Using this information we generate ansätze for the residues of different poles and solve linear systems for the coefficients of bases of spinor expressions in the numerators. If a reconstructed ansatz is correct, once subtracted from the numerical amplitude, it removes a singularity. We repeat the procedure until the amplitude is fully reconstructed.
Explicit examples are discussed in the following subsections.
### Five-point amplitudes
#### \((\text{DF})^{2}\): five-point all-plus (explained example)
In contrast to QCD amplitudes, five-point \((\text{DF})^{2}\) amplitudes are non zero for all helicity configurations even at tree level. They are color ordered, like QCD, because their CHY-integrand contains the Parke-Taylor-like cyclic factor \(C_{n}\) of Eq. (24). Therefore, the symmetry group is restricted to cyclic and anti-cyclic permutations. It can be generated from two operations, which can be thought of as the rotations and reflections of a pentagon (i.e. the dihedral group \(D_{5}\)):
\[(12345→23451)\quad\text{and}\quad(12345→-15432)\;.\] (26)
The minus sign in the reflection comes from the parity operation applied to vector particles (\(J^{P}=1^{-}\)). In total the group contains 10 elements (including the identity).
The poles and their order, as well as any common factor in the numerator, can be obtained by studying the behaviour of the amplitude in singular limits. A singular limit is intended as a region of phase space where a single spinor helicity invariant vanishes (\(∼O(ϵ≪1)\)). We can see how this procedure works in practice in the case of angle and square spinor brackets with the following snippet, which can be run with the provided packages:

    >>> from __future__ import unicode_literals
    >>> from lips import Particles
    >>> from seampy import NumericalAmplitude
    >>> import mpmath

    >>> oDF2Amp = NumericalAmplitude("DF2", helconf="+++++")
    >>> oParticles = Particles(oDF2Amp.multiplicity)
    >>> oParticles.set("⟨1|2⟩", 10 ** -30)
    >>> a = oDF2Amp(oParticles)
    >>> oParticles.set("⟨1|2⟩", 10 ** -31)
    >>> b = oDF2Amp(oParticles)
    >>> round(mpmath.log(abs(b)/abs(a))/mpmath.log(10))
    2.0  # this is the order of the pole in ⟨1|2⟩
What the above code does is to compute the amplitude at two phase space points and to calculate the slope of the line going through the two points in a log-log plot (Amplitude vs. spinor invariant).
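For repeated use it is convenient to wrap this two-point slope estimate into a small helper. The following is our own convenience function, built only on the set method used above, and is not part of seampy itself:

    import mpmath
    from lips import Particles

    def pole_order(oAmp, invariant, eps1=10 ** -30, eps2=10 ** -31):
        """Estimate the order of the pole of oAmp in `invariant` from the slope
        of a log-log plot, exactly as in the snippet above."""
        oPs = Particles(oAmp.multiplicity)
        oPs.set(invariant, eps1)
        a = abs(oAmp(oPs))
        oPs.set(invariant, eps2)
        b = abs(oAmp(oPs))
        return round(mpmath.log(b / a) / mpmath.log(10))

    # e.g. pole_order(oDF2Amp, "⟨1|2⟩") should reproduce the order 2 found above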
Following this same procedure with the rest of the spinor invariants we obtain a first look at the analytical structure of the all plus amplitude:
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{+})=\frac{\mathcal{N}}{ ⟨12⟩²⟨13⟩⟨14⟩⟨15⟩²⟨23⟩²⟨24⟩⟨25⟩⟨34⟩²⟨35⟩⟨45⟩²}\,,\] (27)
where \(\mathcal{N}\) is some numerator structure.
Two comments are now in order. Firstly, note that the adjacent particle singularities are of second order. This reflects the fact that this theory has a quartic propagator instead of the usual quadratic one. Secondly, although in this case it is possible to obtain an expression for the numerator \(\mathcal{N}\), it is often not feasible to do so in this single fraction representation, especially with higher point amplitudes; and even when it is possible, the result is complicated and obscures the structure of the amplitude.
In order to obtain a compact representation, we want to write the amplitude as a sum of fractions, each of which should have a simpler denominator structure than the expression above. It is generally convenient to start by considering the double poles, since they make it difficult to numerically access the corresponding simple poles. We study doubly singular limits, that is regions of phase space where pairs of spinor invariants vanish. In practice, this can be done with the same code snippet as above, by replacing the oParticles.set function with the oParticles.set_pair one. For example, for the pair \(⟨12⟩,\,⟨23⟩\) we have:

    >>> oParticles.set_pair("⟨1|2⟩", 10 ** -30, "⟨2|3⟩", 10 ** -30)
| ⟨13⟩ | ⟨14⟩ | ⟨15⟩ | ⟨23⟩ | ⟨24⟩ | ⟨25⟩ | ⟨34⟩ | ⟨35⟩ | ⟨45⟩
---|---|---|---|---|---|---|---|---|---
⟨12⟩ | 2 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 3
Table 2: Doubly singular limits for ⟨12⟩ in \(A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{+})\).
By repeating the same procedure with all pairs involving \(⟨12⟩\) and recording the behaviour of the amplitude in the corresponding doubly singular limit we can generate Table 2. Since \(⟨12⟩\) is already a double pole, it is not likely for any other invariant appearing with a \(2\) in the table to be in the same denominator as \(⟨12⟩^{2}\). Therefore, we make an ansatz where only \(⟨34⟩\) and \(⟨45⟩\) (as simple poles) appear together with \(⟨12⟩^{2}\). More rigorously, we conjecture that:
\[\lim_{⟨12⟩→0}A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{+})=\frac{ \mathcal{N}_{12}}{⟨12⟩²⟨34⟩⟨45⟩}+O(⟨12⟩^{-1})\,.\] (28)
To check whether the above is true or not, we start by noting that the amplitude has mass dimension \(1\) and little group weights \([-2,\,-2,\,-2,\,-2,\,-2]\). Therefore, the numerator in the RHS must have mass dimension \(5\) and little group weights \([0,\,0,\,-1,\,0,\,-1]\) in order to match the LHS. We then generate a complete set of linearly independent products of spinor invariants consistent with these constraints. In this specific case the basis contains 20 independent entries:
\[⟨12⟩⟨13⟩[13][13][25],\quad⟨12⟩⟨15⟩[13][15][25],\quad⟨12⟩⟨23⟩[13][ 23][25],\quad⟨12⟩⟨25⟩[13][25][25],\]
\[⟨12⟩⟨35⟩[13][25][35],\quad⟨13⟩⟨13⟩[13][13][35],\quad⟨13⟩⟨15⟩[13][ 15][35],\quad⟨13⟩⟨23⟩[13][23][35],\]
\[⟨13⟩⟨25⟩[12][35][35],\quad⟨13⟩⟨25⟩[13][25][35],\quad⟨13⟩⟨35⟩[13][ 35][35],\quad⟨15⟩⟨15⟩[15][15][35],\]
\[⟨15⟩⟨25⟩[15][25][35],\quad⟨15⟩⟨35⟩[15][35][35],\quad⟨23⟩⟨23⟩[23][ 23][35],\quad⟨23⟩⟨25⟩[23][25][35],\]
\[⟨23⟩⟨35⟩[23][35][35],\quad⟨25⟩⟨25⟩[25][25][35],\quad⟨25⟩⟨35⟩[25][ 35][35],\quad⟨35⟩⟨35⟩[35][35][35].\]
Note that the basis would have 290 entries if we were to generate it for the numerator of Eq. 27. Moreover, since we are not working in a generic phase space region but in the limit of small \(⟨12⟩\), it turns out that 10 of the 20 basis elements only contribute to the \(O(⟨12⟩^{-1})\) part of Eq. 28 and thus can be ignored. We can now generate 10 random phase space points in the \(⟨12⟩→ϵ\ll 1\) region and solve for the coefficients of the 10 elements. The solution has only one non zero coefficient:
\[\mathcal{N}_{12}=i[12]⟨13⟩⟨25⟩[35]^{2}\] (29)
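The linear-solve step just described is straightforward to set up with mpmath. The sketch below is ours and deliberately generic: basis_evals and prefactor_eval stand for user-supplied callables that evaluate, at a given phase space point, the spinor monomials of the ansatz and the conjectured pole prefactor of Eq. (28) (here \(1/(⟨12⟩^{2}⟨34⟩⟨45⟩)\)), respectively; neither is provided by the packages.

    import mpmath

    def fit_ansatz(oAmp, basis_evals, prefactor_eval, phase_space_points):
        """Solve M c = b, where M[i][j] is the j-th basis monomial at the i-th
        (singular-region) phase space point and b[i] is the amplitude divided by
        the conjectured pole prefactor; needs as many points as monomials."""
        M = mpmath.matrix([[f(oPs) for f in basis_evals]
                           for oPs in phase_space_points])
        b = mpmath.matrix([oAmp(oPs) / prefactor_eval(oPs)
                           for oPs in phase_space_points])
        return mpmath.lu_solve(M, b)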
To obtain the remaining four double poles, we can simply symmetrise the expression for the \(⟨12⟩\) double pole by applying the following cyclic permutations:
\[(12345→23451),\quad(12345→34512),\quad(12345→45123),\quad(12345→51234).\] (30)
Once an expression for a particular pole has been reconstructed, it can be numerically subtracted from the amplitude, and the leftover quantity will not contain that particular singularity anymore. Its singular limits can then be studied, ansätze made and reconstructions performed until all the poles have been successfully obtained and the amplitude fully reconstructed.
The final result for the all plus \((\text{DF})^{2}\) amplitude follows. We first give the amplitude written using the symmetries discussed above; this is the format used throughout the rest of the article. For the sake of clarity, we then reproduce the same expression with the meaning of the symmetries made explicit.
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{+})=\]
\[\frac{i[12]⟨13⟩⟨25⟩[35]^{2}}{⟨12⟩^{2}⟨34⟩⟨45⟩}+\frac{i[14][24][35 ]}{⟨12⟩⟨35⟩}+\]
\[(12345→23451)\,+\]
\[(12345→34512)\,+\]
\[(12345→45123)\,+\]
\[(12345→51234)\,+\]
\[\frac{2i[15][23]⟨4|1+2|4]}{⟨12⟩⟨34⟩⟨45⟩}+\]
\[\frac{2i[12][45]⟨3|1+5|3]}{⟨15⟩⟨23⟩⟨34⟩}+\]
\[\frac{2i[12][15][34]}{⟨23⟩⟨45⟩}\]
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{+})=\]
\[\frac{i[12]⟨13⟩⟨25⟩[35]^{2}}{⟨12⟩^{2}⟨34⟩⟨45⟩}+\frac{i[14][24][35 ]}{⟨12⟩⟨35⟩}+\]
\[\frac{i⟨13⟩[14]^{2}[23]⟨24⟩}{⟨15⟩⟨23⟩^{2}⟨45⟩}+\frac{i[14][25][35 ]}{⟨14⟩⟨23⟩}+\]
\[\frac{i⟨24⟩[25]^{2}[34]⟨35⟩}{⟨12⟩⟨15⟩⟨34⟩^{2}}+\frac{i[13][14][25 ]}{⟨25⟩⟨34⟩}+\]
\[\frac{i[13]^{2}⟨14⟩⟨35⟩[45]}{⟨12⟩⟨23⟩⟨45⟩^{2}}+\frac{i[13][24][25 ]}{⟨13⟩⟨45⟩}+\]
\[\frac{i⟨14⟩[15][24]^{2}⟨25⟩}{⟨15⟩^{2}⟨23⟩⟨34⟩}+\frac{i[13][24][35 ]}{⟨15⟩⟨24⟩}+\]
\[\frac{2i[15][23]⟨4|1+2|4]}{⟨12⟩⟨34⟩⟨45⟩}+\]
\[\frac{2i[12][45]⟨3|1+5|3]}{⟨15⟩⟨23⟩⟨34⟩}+\]
\[\frac{2i[12][15][34]}{⟨23⟩⟨45⟩}\]
#### \((\text{DF})^{2}\): five-point single-minus
The single minus amplitude has a single element in its symmetry group besides the identity, namely \((12345→-43215)\), and is slightly more complicated than the all plus one.
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{-})=\]
\[\frac{i/2[23]⟨25⟩^{3}⟨34⟩[45]}{⟨12⟩⟨14⟩⟨23⟩^{2}⟨24⟩}+\frac{[23]⟨3 5⟩(-i/2[12]⟨13⟩⟨25⟩+i/2⟨15⟩[15]⟨35⟩)}{⟨13⟩⟨14⟩⟨23⟩⟨34⟩}+\]
\[(12345→-43215)\,+\]
\[\frac{i[12]⟨14⟩⟨15⟩⟨25⟩⟨35⟩[45]}{⟨12⟩^{2}⟨13⟩⟨34⟩^{2}}+\frac{i⟨35 ⟩\mathcal{N}}{⟨12⟩⟨15⟩⟨23⟩⟨34⟩⟨45⟩}+\]
\[\frac{-i[12]⟨14⟩⟨23⟩[24]⟨25⟩⟨45⟩}{⟨12⟩⟨13⟩⟨24⟩⟨34⟩^{2}}+\frac{-i[ 14][24]⟨25⟩⟨45⟩}{⟨13⟩⟨23⟩⟨24⟩}+\frac{-i[13][14]^{2}[24]}{[15]⟨23⟩[45]}\phantom {+}\;,\]
In the above \(\mathcal{N}\) is given by
\[\mathcal{N}= ([12][13]⟨15⟩^{2}⟨25⟩+[13]^{2}⟨15⟩^{2}⟨35⟩+[12]⟨15⟩[23]⟨25⟩^{2}\]
\[+[13]⟨15⟩[23]⟨25⟩⟨35⟩+[23]^{2}⟨25⟩^{2}⟨35⟩)\;.\]
#### \((\text{DF})^{2}\): five-point MHV (adjacent)
This MHV amplitude is the only one we could already find in the literature, specifically in Ref. [35], where it was written in terms of Mandelstam invariants. The expression we provide is more concise, makes its symmetry explicit and is free from spurious singularities. We have numerically checked that the two expressions agree. The one we found follows.
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{-},\,5^{-})=\]
\[\frac{i[12]⟨14⟩^{2}⟨25⟩^{2}⟨45⟩}{⟨12⟩^{2}⟨15⟩⟨23⟩⟨34⟩}+\frac{[13] ⟨45⟩(i⟨12⟩[12]+i/2⟨13⟩[13]+i⟨14⟩[14])}{⟨12⟩⟨23⟩[45]}+\]
\[\frac{i[13]^{2}⟨14⟩⟨35⟩}{⟨12⟩⟨23⟩[45]}+\frac{-i[12]⟨14⟩⟨25⟩⟨45⟩^{ 2}}{⟨12⟩⟨15⟩⟨23⟩⟨34⟩}+\frac{i[12][13]⟨15⟩⟨34⟩}{⟨13⟩⟨23⟩[45]}+\]
\[(12345→-32154)\,+\]
\[\frac{i⟨13⟩[13][15][34]⟨45⟩}{⟨12⟩⟨23⟩[45]^{2}}+\frac{-i[12][13]^{ 2}[23]}{[15][34][45]}\phantom{+}\;.\]
#### \((\text{DF})^{2}\): five-point MHV (non-adjacent)
The following is the last independent five-point amplitude. All others can be obtained by permutations and/or conjugation of the amplitudes presented here.
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{-},\,4^{+},\,5^{-})=\]
\[\frac{i[12]⟨15⟩^{2}⟨23⟩⟨35⟩}{⟨12⟩^{2}⟨14⟩⟨45⟩}+\frac{i[34]⟨35⟩^{3 }}{⟨12⟩⟨15⟩⟨24⟩}+\frac{i[12]⟨23⟩⟨35⟩^{2}}{⟨12⟩⟨24⟩⟨34⟩}+\]
\[(12345→-21543)\,+\]
\[\frac{i⟨35⟩\mathcal{N}}{⟨12⟩⟨14⟩⟨24⟩}+\]
\[\frac{i[14][24]⟨35⟩}{⟨12⟩[35]}+\frac{-i[12][14]^{2}[24]^{2}}{[15] [23][34][45]}\]
In the above \(\mathcal{N}\) is given by
\[\mathcal{N}= ([12]⟨13⟩⟨25⟩+⟨13⟩[13]⟨35⟩+⟨15⟩[15]⟨35⟩\]
\[+⟨23⟩[23]⟨35⟩+⟨25⟩[25]⟨35⟩+2⟨35⟩^{2}[35])\]
#### Conformal gravity: five-point MHV
An all-multiplicities expression for MHV conformal gravity amplitudes exists thanks to work by Berkovits and Witten [31]. Here we present an expression specific to five point which makes manifest the absence of terms with pairs of double poles.
\[A_{CG}(1^{++},\,2^{++},\,3^{++},\,4^{--},\,5^{--})=\]
\[\frac{-i[12]^{2}⟨24⟩[34]⟨45⟩^{5}}{⟨12⟩^{2}⟨23⟩⟨34⟩⟨35⟩}+\frac{i[1 2]^{2}[13]⟨15⟩⟨45⟩^{4}}{⟨12⟩⟨13⟩⟨23⟩⟨35⟩}+\]
\[(12345→23145)\,+\,(12345→31245)\,+\]
\[\frac{-2i[12][13][23]⟨45⟩^{4}}{⟨12⟩⟨13⟩⟨23⟩}\]
### Six-point partial results
#### \((\text{DF})^{2}\): six-point MHV (adjacent) (partial)
In order to convey the increase in complexity that a six-point amplitude entails, here we present an expression for the three-particle double poles, as well as for the simple poles of the non-adjacent three-particle singularities, in a six-point MHV \((\text{DF})^{2}\) amplitude.
\[A_{(\text{DF})^{2}}(1^{+},\,2^{+},\,3^{+},\,4^{+},\,5^{-},\,6^{- })=\]
\[\footnotesize\frac{i[13][46]⟨56⟩\mathcal{N}_{1}}{⟨12⟩⟨23⟩⟨45⟩[56] ^{2}s_{123}^{2}}+\footnotesize\frac{i[12][34]⟨26⟩⟨35⟩\mathcal{N}_{2}}{⟨12⟩^{2} [16]⟨34⟩[45]s_{345}^{2}}+\]
\[\frac{-i[14][24][35][36]⟨56⟩}{⟨12⟩[56]^{2}s_{124}}+\frac{-i[12]⟨1 5⟩⟨25⟩[34][45]⟨46⟩}{⟨12⟩^{2}⟨34⟩[56]s_{125}}+\]
\[\frac{i[14]^{2}[23]⟨26⟩⟨36⟩[46]}{⟨23⟩^{2}[45][56]s_{145}}+\frac{- i[14][23]⟨26⟩⟨5|1+4|2]}{⟨14⟩⟨23⟩[56]s_{145}}+\]
\[(123456→432165)\,+\]
\[\frac{-i[12][14]⟨15⟩[34]⟨46⟩}{⟨12⟩⟨34⟩[56]s_{125}}+\frac{-i[13]^{ 2}[24]^{2}⟨25⟩⟨36⟩}{⟨13⟩[16]⟨24⟩[45]s_{245}}+\]
\[\frac{\mathcal{N}}{⟨12⟩²⟨13⟩⟨14⟩⟨16⟩[16]⟨23⟩²⟨24⟩⟨34⟩²⟨45⟩[45][56 ]²s_{123}s_{234}s_{345}}\]
where \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) are given by:
\[\mathcal{N}_{1}\,=\,( -2⟨12⟩^{2}[12]^{2}⟨24⟩[24]-2⟨12⟩^{2}[12]^{2}⟨25⟩[25]-2⟨12⟩^{2}[12 ][13][24]⟨34⟩\]
\[-2⟨12⟩^{2}[12][13][25]⟨35⟩-⟨12⟩^{2}[12][14][25]⟨45⟩-2⟨12⟩[12]^{2} ⟨13⟩⟨24⟩[34]\]
\[-2⟨12⟩[12]^{2}⟨13⟩⟨25⟩[35]-2⟨12⟩[12]⟨13⟩[13]⟨34⟩[34]-2⟨12⟩[12]⟨13 ⟩[13]⟨35⟩[35]\]
\[-⟨12⟩[12]⟨13⟩[14][35]⟨45⟩-⟨12⟩[12]^{2}⟨14⟩⟨25⟩[45]-⟨12⟩[12][13]⟨1 4⟩⟨35⟩[45]\]
\[+⟨12⟩[12]⟨23⟩[24][35]⟨45⟩+⟨12⟩[12][23]⟨24⟩⟨35⟩[45]+⟨12⟩[13][23]⟨3 4⟩⟨35⟩[45]\]
\[+[12]⟨13⟩⟨23⟩[34][35]⟨45⟩)\]
\[\mathcal{N}_{2}\,=\,( +3⟨12⟩[12]⟨13⟩[13][34]-2⟨12⟩⟨13⟩[13]^{2}[24]-⟨12⟩[13]⟨14⟩[14][24]\]
\[+⟨12⟩[12]⟨15⟩[15][34]-⟨12⟩[13][14]⟨15⟩[25]-⟨12⟩[13]⟨23⟩[23][24]\]
\[-⟨12⟩[15][23][24]⟨25⟩+⟨13⟩^{2}[13]^{2}[34]-⟨13⟩[13]^{2}⟨15⟩[45]\]
\[+⟨13⟩[13]⟨15⟩[15][34]+⟨13⟩[13]⟨23⟩[23][34]-⟨13⟩[13][23]⟨25⟩[45]\]
\[+⟨13⟩[15][23]⟨25⟩[34]-⟨14⟩^{2}[14]^{2}[34]-[13]⟨14⟩[14]⟨15⟩[45]\]
\[-⟨14⟩[14]⟨15⟩[15][34]-⟨14⟩[14]⟨24⟩[24][34]-[13]⟨14⟩[24]⟨25⟩[45]\]
\[-⟨14⟩[15][24]⟨25⟩[34]-[13]⟨15⟩^{2}[15][45]-⟨15⟩[15][23]⟨25⟩[45])\]
In the above expression \(\mathcal{N}\) would contain several thousand terms. It is therefore crucial to identify appropriate ways to perform a partial fraction decomposition, since smaller denominators would in turn imply smaller numerators and thus easier systems of linear equations to generate and solve. However, further studies will be necessary to check whether such a decomposition requires the introduction of spurious singularities, like for NMHV amplitudes in Yang-Mills, and if so what form these spurious poles would take.
#### Conformal gravity: NMHV (partial)
To conclude, we present an expression for the three-particle double poles in the six-point NMHV conformal gravity amplitude. To the best of our knowledge this is the first analytical result, albeit a partial one, for NMHV conformal gravity amplitudes.
\[A_{CG}(1^{++},\,2^{++},\,3^{++},\,4^{--},\,5^{--},\,6^{--})=\]
\[\footnotesize\frac{i[23]^{4}⟨56⟩^{4}\mathcal{N}_{1}}{⟨15⟩⟨16⟩⟨23⟩ ^{2}[24][34][56]^{2}s_{234}^{2}}+\]
\[(123456→312645)\,+(123456→231564)\,+(123456→312564)\,+\]
\[(123456→231645)\,+(123456→312456)\,+(123456→231456)\,+\]
\[(123456→123645)\,+(123456→123564)\,+\]
\[\frac{\mathcal{N}}{\begin{gathered}(⟨12⟩²⟨13⟩²⟨14⟩[1 4]⟨15⟩[15]⟨16⟩[16]⟨23⟩²⟨24⟩[24]⟨25⟩[25]⟨26⟩[26]⟨34⟩[34]\\ ×⟨35⟩[35]⟨36⟩[36][45]²[46]²[56]²s_{124}s_{125}s_{134}s_{135}s_{14 5}s_{234}s_{235}s_{245}s_{345})\end{gathered}}\]
In the above \(\mathcal{N}_{1}\) is given by
\[\mathcal{N}_{1}\,=\,( -[12]^{2}⟨13⟩[15]⟨23⟩⟨24⟩^{2}[36]+[12]⟨13⟩[13][15]⟨23⟩⟨24⟩^{2}[26 ]-[12]⟨13⟩[13][15]⟨23⟩⟨24⟩⟨34⟩[36]\]
\[+⟨13⟩[13]^{2}[15]⟨23⟩⟨24⟩[26]⟨34⟩+[12]^{2}⟨14⟩[15]⟨23⟩^{2}⟨24⟩[36 ]-[12][13]⟨14⟩[15]⟨23⟩^{2}⟨24⟩[26]\]
\[-[12]⟨14⟩[14][15]⟨23⟩⟨24⟩⟨34⟩[36]+[13]⟨14⟩[14][15]⟨23⟩⟨24⟩[26]⟨34 ⟩-[12][13]⟨23⟩^{2}⟨24⟩^{2}[25][26]\]
\[-2[12][13]⟨23⟩^{2}⟨24⟩[25]⟨34⟩[36]-[12][13]⟨23⟩^{2}⟨34⟩^{2}[35][3 6]-[12][14]⟨23⟩⟨24⟩^{3}[25][26]\]
\[-[12][13]⟨23⟩⟨24⟩^{2}[24]⟨34⟩[56]-2[12][14]⟨23⟩⟨24⟩^{2}[25]⟨34⟩[3 6]+[13][14]⟨23⟩⟨24⟩^{2}[25][26]⟨34⟩\]
\[-[12][14]⟨23⟩⟨24⟩⟨34⟩^{2}[35][36]-[13]^{2}⟨23⟩⟨24⟩[24]⟨34⟩^{2}[56 ]+2[13][14]⟨23⟩⟨24⟩[25]⟨34⟩^{2}[36]\]
\[+[13][14]⟨23⟩⟨34⟩^{3}[35][36]-[12][14]⟨24⟩^{3}[24]⟨34⟩[56]+[14]^{ 2}⟨24⟩^{3}[25][26]⟨34⟩\]
\[-[13][14]⟨24⟩^{2}[24]⟨34⟩^{2}[56]+2[14]^{2}⟨24⟩^{2}[25]⟨34⟩^{2}[3 6]+[14]^{2}⟨24⟩⟨34⟩^{3}[35][36])\,.\]
Here \(\mathcal{N}\) would contain even more terms than in the six-point \((\text{DF})^{2}\) example. Similar expressions where the symmetries of the poles are made manifest are also possible for Einstein gravity amplitudes; for example, the following represents the three-particle simple poles in the six-point NMHV sector.
\[A_{EG}(1^{++},\,2^{++},\,3^{++},\,4^{--},\,5^{--},\,6^{--})=\]
\[\frac{-i[12]^{3}⟨56⟩^{3}⟨4|1+2|3]^{4}}{⟨12⟩⟨14⟩[14]⟨24⟩[24]⟨35⟩[3 5]⟨36⟩[36][56]s_{124}}+\]
\[(123456→132456)+(123456→123546)+(123456→132546)\,+\]
\[(123456→321456)+(123456→123654)+(123456→321654)\,+\]
\[(123456→231546)+(123456→132645)+\]
\[\frac{\mathcal{N}}{⟨12⟩⟨13⟩⟨14⟩[14]⟨15⟩[15]⟨16⟩[16]⟨23⟩⟨24⟩[24]⟨2 5⟩[25]⟨26⟩[26]⟨34⟩[34]⟨35⟩[35]⟨36⟩[36][45][46][56]}\]
However, this symmetric approach, which is also free from spurious singularities, makes it highly non trivial to obtain the rest of the amplitude (i.e. the numerator \(\mathcal{N}\)). Indeed, the compact expressions that we are aware of come from BCFW recursions and have a quite different structure:
\[A_{EG}(1^{++},\,2^{++},\,3^{++},\,4^{--},\,5^{--},\,6^{--})=\]
\[\frac{-i[23]^{7}⟨34⟩⟨56⟩^{7}[56]}{⟨15⟩⟨16⟩[24][34]⟨1|2+4|3]⟨1|2+3 |4]⟨5|1+6|2]⟨6|1+5|2]s_{234}}+\]
\[\footnotesize\frac{i[24]⟨4|1+2|3]^{7}\left(\begin{gathered} -⟨12⟩[12]⟨13⟩[35]⟨45⟩+⟨12⟩[13]⟨14⟩[25]⟨35⟩+⟨12⟩[23]⟨24⟩[25]⟨35⟩-⟨ 12⟩[24]⟨34⟩[35]⟨45⟩\\ -⟨13⟩⟨14⟩[14][35]⟨45⟩+[13]⟨14⟩^{2}⟨35⟩[45]-⟨14⟩⟨24⟩[25][34]⟨35⟩+⟨ 14⟩[24]⟨25⟩⟨34⟩[35]\end{gathered}\right)}{⟨12⟩^{2}⟨24⟩[35][36][56]⟨1|2+4|3]⟨1| 2+4|5]⟨1|2+4|6]⟨4|1+2|5]⟨4|1+2|6]s_{124}}+\]
\[\footnotesize\frac{i[12]^{6}⟨14⟩⟨56⟩^{7}\left(\begin{gathered} -⟨12⟩[12][23]⟨35⟩[45]-[12]⟨13⟩[14]⟨15⟩[35]+[12]⟨14⟩[34]⟨35⟩[45]+⟨ 14⟩[15][24][34]⟨35⟩\\ -[12]⟨15⟩⟨23⟩[24][35]-[14]⟨15⟩[24]⟨34⟩[35]+⟨23⟩[24]^{2}[35]⟨45⟩-[ 23]⟨24⟩[24]⟨35⟩[45]\end{gathered}\right)}{[14]⟨35⟩⟨36⟩⟨3|1+4|2]⟨3|1+2|4]⟨5|1+4 |2]⟨5|1+2|4]⟨6|1+4|2]⟨6|1+2|4]s_{124}}+\]
\[\frac{-i[34]⟨56⟩⟨4|1+3|2]^{7}}{⟨13⟩⟨14⟩[25][26]⟨34⟩[56]⟨1|2+6|5]⟨ 1|2+5|6]⟨3|1+4|2]s_{134}}+\]
\[(123456→123546)+(123456→123654)\,+\]
\[\footnotesize\frac{i[23]s_{123}^{7}\left(\begin{gathered} ⟨12⟩⟨13⟩[14][25]⟨45⟩-⟨12⟩[12]⟨14⟩⟨35⟩[45]+⟨12⟩⟨23⟩[24][25]⟨45⟩+⟨1 2⟩[23]⟨34⟩⟨35⟩[45]\\ +⟨13⟩^{2}[14][35]⟨45⟩-⟨13⟩[13]⟨14⟩⟨35⟩[45]+⟨13⟩⟨23⟩[25][34]⟨45⟩-⟨ 13⟩[23]⟨25⟩⟨34⟩[45]\end{gathered}\right)}{⟨12⟩^{2}⟨23⟩[45][46][56]⟨1|2+3|4]⟨1| 2+3|5]⟨1|2+3|6]⟨3|1+2|4]⟨3|1+2|5]⟨3|1+2|6]}\]
We have reproduced this result, already known in the literature, by applying our analytical reconstruction strategy to a single BCFW factorisation channel at a time, which is significantly simpler than the full amplitude. Compared to the previous partial result, we note that this representation manifestly does not contain two-particle Mandelstam invariants, but it introduces many spurious singularities and hides the symmetries which were manifest in the above partial result.
The strategy of studying a factorisation channel at a time could prove fruitful also in the case of conformal gravity and \((\text{DF})^{2}\) amplitudes, but the quartic propagator introduces a significant complication in the BCFW recursion.
In fact, the usual \(\textit{A}_{L}\textit{A}_{R}/p^{2}\) factorisation is broken by the presence of higher order poles in the Laurent expansion in the shift parameter. We attempted to achieve such a factorisation by means of a Taylor expansion of the numerator \(\textit{A}_{L}\textit{A}_{R}\) around the pole. However, this involves taking a derivative with respect to the shift parameter, which in turn requires the amplitudes to be well defined in the neighbourhood of the factorisation point. This seems to be equivalent to the factorisation formula (Eq. 2.18) given in Ref. [32], where the derivative is implicit in the fact that we have to take the zero-mass limit of expressions like \((\textit{A}_{L}(m^{2})-\textit{A}_{L}(0))/m^{2}\). This would also explain why our approach fails: the amplitudes we use are well defined only exactly at the factorisation point, where the legs are on-shell and massless.
However, we do have the six-point amplitude through the CHY formula, and there is no need to generate it recursively from lower point amplitudes. At the same time, we expect single factorisation channels to have a simpler analytical structure than the full amplitude. This suggests still looking at the amplitude via the residue theorem:
\[\frac{1}{2\pi i}\oint\frac{\hat{A}(z)}{z}dz=\hat{A}(0)+\sum_{i}\frac{\text{Res }\hat{A}(z)|_{z=z_{i}}}{z_{i}}.\] (31)
We can then study one term in the sum on the RHS at a time. Note that the simultaneous need to generate singular phase space limits and to numerically extract the residue from a Laurent expansion in some cases requires increasing the working numerical precision.
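For definiteness, the residue extraction we have in mind can be sketched as follows (our own helper, not part of seampy): sample the shifted amplitude on a small circle around \(z_{i}\) and apply the trapezoidal rule, which converges rapidly provided the radius stays inside the nearest other singularity and the working precision of mpmath is set high enough.

    import mpmath

    def residue(f, z0, radius=mpmath.mpf('1e-3'), N=64):
        """(1/2 pi i) times the contour integral of f around z0 on |z - z0| = radius,
        via the trapezoidal rule; here f would be the shifted amplitude Ahat(z),
        evaluated through seampy at the correspondingly shifted momenta."""
        total = mpmath.mpc(0)
        for k in range(N):
            w = mpmath.exp(2j * mpmath.pi * k / N)
            total += f(z0 + radius * w) * w
        return radius * total / N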
As an example, let us consider the same \(⟨21]\) shift as before, and more specifically the \((2,3,4)_{L}\), \((1,5,6)_{R}\) channel, which for Einstein gravity yields the first term from the previous expression, i.e.:
\[\frac{\text{Res}\,\hat{A}_{EG}^{\textit{NMHV}}(z)}{z}\Big{|}_{z=z_{\tiny(2,3,4 )_{L},(1,5,6)_{R}}}=\frac{i[23]^{7}⟨34⟩⟨56⟩^{7}[56]}{⟨15⟩⟨16⟩[24][34]⟨1|2+4|3] ⟨1|2+3|4]⟨5|1+6|2]⟨6|1+5|2]s_{234}}.\]
The same shift in the same channel in the case of conformal gravity instead yields:
\[\frac{\text{Res}\,\hat{A}_{CG}^{\textit{NMHV}}(z)}{z}\Big{|}_{z=z_{(2,3,4)_{L} ,(1,5,6)_{R}}}=\frac{\mathcal{N}}{\begin{gathered}(⟨12⟩^{2}⟨13⟩^{ 2}⟨15⟩⟨16⟩[24]⟨34⟩^{2}[34][46]^{2}[56]^{2}⟨1|3+4|2]^{2}⟨1|2+4|3]\\ ×⟨1|2+3|4]^{3}⟨5|1+6|2]⟨6|1+5|2]s_{124}^{2}s_{125}^{2}s_{234}^{2} )\end{gathered}}.\]
The numerator \(\mathcal{N}\), having mass dimension 46, is unfortunately still too complicated to be determined. We see that the conformal gravity residue has more poles, and poles of higher order, compared to the Einstein gravity one, as well as some spurious singularities of order higher than one. Furthermore, note that for this shift the contour integral vanishes for Einstein gravity but not for conformal gravity. Therefore, in the latter case we would have to include a boundary term coming from the residue at infinity. Some of the other possible shifts have the advantage of vanishing on the contour, but the structure of the residues remains similarly complicated. Further work will be required to see whether a reasonably compact analytical expression can be obtained for these residues.
## 5 Conclusion and outlook
In this article we have briefly reviewed the CHY formalism for massless tree-level scattering, and more specifically the problem of solving the scattering equations and applying their solutions to CHY-integrands.
In order to overcome the analytical complexity of the computation, we have developed a Python package (seampy) which allows one to numerically solve the scattering equations and to compute tree amplitudes with high floating-point precision for the following theories: Yang-Mills, Einstein gravity, biadjoint scalar, Born-Infeld, non-linear sigma model, Galileon, conformal gravity and \((\text{DF})^{2}\).
Finally, we have discussed how to recover analytical expressions in the spinor helicity language from numerical evaluations. In particular, we have presented the first complete set of five-point \((\text{DF})^{2}\) amplitudes, a new form for the five-point MHV conformal gravity amplitude and a discussion with partial results for six-point amplitudes in both \((\text{DF})^{2}\) and conformal gravity.
In the accompanying files we have provided sample analytical amplitudes for all mentioned theories up to six point. The results are given both in human readable format and as expressions readable by the S@M Mathematica package.
Let us remark that, although not all the solutions to the scattering equations are rational (they are only at three and four point), and in some cases they are not even expressible in terms of radicals (beyond six point), the tree-level amplitudes built from them are purely rational functions. This is made clear by reconstructing explicit rational analytical expressions from numerical evaluations. The expressions we obtain are usually compact, with a clear symmetry structure when available, and free from spurious singularities, unless explicitly stated.
We have observed that complexity increases significantly from five-point to six-point amplitudes and have given explicit examples. In the previous section we discussed ways to look at simpler building blocks rather than the full amplitude all at once, such as a modified BCFW recursion for the quartic propagators of conformal gravity and \((\text{DF})^{2}\), or a more naive application of the residue theorem. These approaches seem promising, since they can still be carried out numerically while resulting in simpler structures to which the analytical reconstruction can be applied. However, more remains to be done to make this feasible in practice for the more complicated theories.
Finally, going forward it might be interesting to use this numerical approach to the CHY formalism together with the analytical reconstruction tools to look at other interesting quantities such as double copy structures, BCJ numerators, amplitudes with mixed particle content, and loop-level amplitudes.
###### Acknowledgements.
I would like to thank Yang-Hui He for introducing me to the subject of the scattering equations; Arthur Lipstein and Joseph Farrow for useful discussion on formal aspects of the work; and Daniel Maitre for technical suggestions as well as for the use of his code to generate spinor helicity ansätze. I am funded by an STFC PhD scholarship.
## Appendix A lips (phase space generator)
In this first appendix we present the lips Python package in more detail. It is an object-oriented, high-precision floating-point phase space generator. It is not the focus of this work, but it is needed in order to pass a sufficiently precise phase space point to the scattering equations solver function or to a numerical amplitude object.
The lips phase space generator is built on two layers. The lower one, called Particle, describes the kinematics of a single particle. Through setters and getters, it provides self-updating numerical tensors for the left and right spinors, four-vectors and rank-two spinors. This means that if, say, the value of the four-momentum is changed, then the values of the spinor attributes are immediately recalculated to reflect the change. We can see the naming conventions in the following code snippet:

```python
>>> oParticle = Particle()
>>> oParticle.l_sp_u    # left spinor with index up (\(\bar{\lambda}^{\dot{\alpha}}\))
>>> oParticle.r_sp_d    # right spinor with index down (\(\lambda_{\alpha}\))
>>> oParticle.four_mom  # four momentum with index up (\(P^{\mu}\))
>>> oParticle.r2_sp     # rank two spinor (\(P^{\dot{\alpha}\alpha}\))
```
By default the Particle object is initialised with random complex momenta. However, this can be overruled by specifying the optional parameter real_momentum=True. A custom value for any of these attributes can also be passed. For instance, we can set the momentum to be along the x axis:

```python
>>> oParticle.four_mom = [1, 1, 0, 0]
```
The second layer is a list subclass, called Particles. It is a base-one list of Particle objects with several methods attached to it. The reason why the list is rebased to start from 1 instead of 0 is simply to match the notation in the amplitudes community. As we have observed, it is initialised as follows:

```python
>>> oParticles = Particles(6)  # argument is the multiplicity
```
It also accepts an optional parameter, now called real_momenta, which is by default set to False, and which gets automatically passed down to all the Particle objects in the Particles list, thus generating a complex or real phase space point.
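For example (a minimal usage sketch, assuming only that the package exposes Particles at the top level and using the constructor option and attribute setters described above):

```python
>>> from lips import Particles
>>> oParticles = Particles(6, real_momenta=True)  # real phase space point
>>> oParticles[1].four_mom                        # base-one access to the first particle
>>> oParticles[1].four_mom = [1, 0, 0, 1]         # overwrite a single (null) four momentum
```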
Furthermore, as discussed in conjunction with the analytical reconstruction, the Particles phase space can be manipulated to generate specific configurations. For instance, we can generate a phase space point with vanishing angle bracket \(⟨12⟩\) by calling:

```python
>>> oParticles.set("⟨1|2⟩", 10 ** -30)
```
Doubly singular limits for pairs of invariants can be similarly generated. For instance, we can make both \(⟨12⟩\) and \(⟨23⟩\) small:

```python
>>> oParticles.set_pair("⟨1|2⟩", 10 ** -30, "⟨2|3⟩", 10 ** -30)
```
At present these functions only work with complex momenta, because with complex momenta it is possible to construct phase space points where, say, \(⟨12⟩\) is small but \([12]\) is not, while with real momenta this is not possible (\([12]\sim⟨12⟩^{*}\)).
Other notable functions are:

```python
>>> oParticles.randomise_all()       # randomises all momenta
>>> oParticles.angles_for_squares()  # swaps right/left spinors (C-sym.)
>>> oParticles.image("234561")       # argument is a permutation of 123…n
```
For more details we refer the reader to the package documentation on the github pages at lips.
## Appendix B seampy (further details)
In this appendix we provide more details on the seampy package. Although not crucial from a user point of view, these may be of interest if one wants to study the internal behaviour of the program in more detail or perform modifications, such as adding new theories to the list of those available for computation.
Still using \(n=6\) for our examples, we can see two important elements of the elimination theory algorithm:
\(\circ\) the vector of variables to be removed via elimination theory from Eq. (18):

```python
>>> V(6)
[1, z₂, z₃, z₂⋅z₃, z₃², z₂⋅z₃²]
```
\(\circ\) the elimination theory matrix obtained with the recursion algorithm of Eq. (20):

```python
>>> M(6)
# (the printed symbolic matrix is large and is not reproduced here)
```
These are the basis for the solve_scattering_equations function, which involves taking the determinant of M and finding its roots.
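This step can be illustrated schematically with sympy and numpy; the matrix below is a hypothetical 2×2 stand-in (not the actual M(6)) and the snippet is not the internal implementation of solve_scattering_equations:

```python
import numpy as np
import sympy as sp

# Toy stand-in for the elimination-theory matrix: its determinant is a
# polynomial in the single remaining variable z.
z, s12, s13 = sp.symbols('z s12 s13')
M_toy = sp.Matrix([[s12, z], [z, s13 + z]])

det_poly = sp.expand(M_toy.det())                 # polynomial in z
numeric  = det_poly.subs({s12: 1.3, s13: -0.7})   # hypothetical kinematic values
coeffs   = [complex(c) for c in sp.Poly(numeric, z).all_coeffs()]
print(np.roots(coeffs))                           # roots of det(M) = 0
```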
We can also consider the CHY integrands and the Jacobian for the change of variables. We use the term _reduced_ to denote the following sequence of operations: a) removing rows and columns (two of them for the arguments of Pfaffians and three of them for the Jacobian); b) imposing the Möbius fixing choice of Eq. (4); c) removing any factor of \(z_{1}=\infty\) that factorises out. In the following code snippets we reproduce some examples:
\(\circ\) the reduced Jacobian matrix \(\phi\) of Eq. (9):

```python
>>> Phi(6)
# (the printed symbolic matrix is large and is not reproduced here)
```
\(\circ\) the reduced matrix A of Eq. (21):

```python
>>> A(6)
# (the printed symbolic matrix is large and is not reproduced here)
```
\(\circ\) the reduced cyclic Parke-Taylor-like factor \(C_{n}\) of Eq. (24):

```python
>>> Cyc(6)
               -1
────────────────────────────────
z₅⋅(-z₃ + 1)⋅(z₃ - z₄)⋅(z₄ - z₅)
```
All these symbolic quantities are built with sympy. However, the symbolic substitution function from sympy is very slow, so we use regular expressions from the re library to perform the substitutions in the conversion from symbolic to numeric expressions.
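A minimal sketch of this substitution strategy, reusing the reduced cyclic factor printed above (the numerical puncture values are hypothetical, and this is an illustration rather than the actual seampy code):

```python
import re
import sympy as sp

z3, z4, z5 = sp.symbols('z3 z4 z5')
expr = -1 / (z5 * (1 - z3) * (z3 - z4) * (z4 - z5))

# Hypothetical numerical values for the punctures.
values = {'z3': '(0.25+0.1j)', 'z4': '(1.7-0.3j)', 'z5': '(-0.6+0.9j)'}

# Substitute on the string representation with a regular expression and
# evaluate, instead of calling the much slower expr.subs(...).
numeric_str = re.sub(r'\b(z[0-9]+)\b', lambda m: values[m.group(1)], str(expr))
print(eval(numeric_str))
print(complex(expr.subs({z3: 0.25 + 0.1j, z4: 1.7 - 0.3j, z5: -0.6 + 0.9j})))  # cross-check
```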
For more details we refer the reader to the package documentation on the github pages at seampy.
# Magnetic vortices as localized mesoscopic domain wall pinning sites
R. L. Novak
rafael.novak@ufsc.br
Universidade Federal de Santa Catarina – Campus Blumenau, Rua Pomerode, 710, 89065-300 Blumenau (SC), Brazil
L. C. Sampaio
Centro Brasileiro de Pesquisas Físicas – Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro (RJ), Brazil
###### Abstract
We report on the controllable pinning of domain walls in stripes with perpendicular magnetic anisotropy by magnetostatic coupling to magnetic vortices in disks located above the stripe. Pinning mechanisms and depinning fields are reported. This novel pinning strategy, which can be realized by current nanofabrication techniques, opens up new possibilities for the non-destructive control of domain wall mobility in domain wall based spintronic devices.
pacs: 75.60.Ch, 75.60.-d, 75.78.Cd
## I Introduction
Magnetic domain wall-based microelectronic devices are promising candidates for future data storage and spintronic technologies Parkin _et al._ (2008); Allwood _et al._ (2005); Hrkac _et al._ (2011). In these devices, the motion and careful positioning of domain walls (DWs) in thin film-based patterned sub-micron stripes underlie the functionalities of the device, and the precise control of DW motion evidently becomes one of the most important goals in device design Parkin _et al._ (2008); Allwood _et al._ (2005). The most common approaches rely either on localized modifications of material parameters such as the anisotropy constant, which can be modified by ion bombardment Vieu _et al._ (2002); Repain _et al._ (2004); Franken _et al._ (2011), or on structural modifications of the stripe such as holes Adeyeye _et al._ (1997), lateral extensions along the stripe Lewis _et al._ (2010) or notches cut into its long edge Parkin _et al._ (2008); Hayashi _et al._ (2006, 2007); Bogart _et al._ (2009); Brandão _et al._ (2014). In all these cases, the goal is to induce strong DW pinning at certain positions. Recently, even standing acoustic waves have been proposed as pinning sites Dean _et al._ (2015) along magnetic stripes.
All these approaches rely on structural modifications that lead to different local distributions of demagnetizing fields that change either the internal DW structure and dynamics, or the local fields acting on the DW itself. Although very effective, these alternatives are destructive, since they introduce modifications on the medium where DW motion is occurring. As this can lead to undesired side effects on the magnetization processes, non-destructive methods, based on the magnetostatic coupling between DWs and magnetic structures external to the medium, offer an attractive alternative way to control the DW positioning through pinning.
One approach to non-destructive DW pinning is the introduction of localized magnetic fields generated by overlying nanomagnet arrays Metaxas _et al._ (2013); Novak _et al._ (2015) separated from the stripe by non-magnetic spacer layers thick enough to ensure that the coupling is primarily magnetostatic. The asymmetric pinning generated by these nanomagnets provides a way to locally pin DWs in a controllable and potentially reprogrammable fashion. This method, furthermore, introduces a new degree of freedom to control the pinning of DWs because different DW mobilities are possible by controlling the relative alignment of the applied field and the orientation of the magnetization of the nanomagnets Metaxas _et al._ (2013). Even though effective, this method relies on large \((50\,\mathrm{x}\,50\mu\mathrm{m}^{2})\) arrays, which are evidently not suitable for very small/narrow stripes. In this case, single nanomagnets acting as pinning sites are desired, but their fabrication still presents a challenge. Recently, Franken et al. Franken _et al._ (2014) succeeded in growing a single magnetic nanopillar on top of a stripe and showed that the perpendicular components of its stray fields effectively work as a source of pinning for a magnetic DW moving along an underlying stripe with perpendicular magnetic anisotropy (PMA), and that the pinning can be tuned by the height of the pillar as well as its magnetic state. In another work, van Mourik et al. van Mourik _et al._ (2014) demonstrated the feasibility of using a single overlying nanomagnet with in-plane magnetization as source of switchable pinning of DWs in magnetic stripes with PMA. As in Franken et al.Franken _et al._ (2014), the magnetic state of the nanomagnet, and the thickness of the spacer layer, determine the pinning strength at the site. Nanomagnets, however, may present many ground states that depend on the material parameters and geometry, and consequently maintaining a nanomagnet in single domain state may impose restrictions to device geometry. In many nanomagnet geometries, the ground state exhibits a magnetic vortex Cowburn _et al._ (1999); Shinjo _et al._ (2000); Wachowiak _et al._ (2002); Alex Hubert and Rudolf Schäfer (1998); Feldtkeller and Thomas (1965) characterized by an in-plane magnetization that curls around the center of the nanomagnet and a strong and spatially localized out-of-plane component, known as the vortex core (VC), at its center. The combination of these in-plane systems with PMA films in a multilayer device could lead to new functionalities, because the strong stray fields emanating from the VCs could act as sources of localized, non-destructive pinning sites for DWs in, for example, an underlying stripe with PMA. Further pinning could be achieved by coupling of the in-plane components of the vortex magnetization and of the DW stray field, effectively presenting two sources of DW pinning in these hybrid in-plane/out-of-plane magnetic systems. Similar systems have already been investigated in extended PMA films coupled to a single Permalloy nanomagnet Heldt _et al._ (2014); Wohlhüter _et al._ (2015) with a vortex ground state. In these studies, the coupling between the VC and the underlying out-of-plane domain structure was demonstrated, but since there was no spacer layer separating the different materials, both magnetostatic and exchange interactions contributed to the observed coupling. 
The authors did not investigate a scenario where only the magnetostatic fields contributed to the coupling, as was done in Metaxas _et al._ (2013) and Novak _et al._ (2015). The understanding of how the magnetostatic or the exchange interaction contribute individually to the coupling between DWs and vortices is fundamental to further developments of these hybrid systems, especially since magnetostatic fields play a major role in the statics and dynamics of magnetic nanostructures.
In this work, we report the results of micromagnetic simulations demonstrating the feasibility of using a soft magnetic disk-shaped nanomagnet with in-plane magnetization and a vortex ground state as a source of purely magnetostatic pinning for Bloch DWs in underlying magnetic stripes with PMA. The purely magnetostatic coupling between the vortex and the DW gives rise to strong and asymmetric pinning which could be exploited in spintronic devices. Simulated hysteresis loops show that the pinning involves coupling of both the in-plane and out-of-plane components of the DW and vortex magnetizations, indicating a complex pinning scenario, arising purely from magnetostatic interactions, highlighting the major role this interaction plays in pinning site engineering in magnetic DW-based devices.
## II Micromagnetic simulations
The simulations were performed with the Mumax3 package Vansteenkiste _et al._ (2014). The simulated system consisted of a \(2000\) nm long, \(512\) nm wide Co-like stripe with \(M_{s}=1135\) kA/m, \(2\) nm thickness, exchange stiffness \(A_{ex}=17\) pJ/m, damping constant \(\alpha=0.5\) and perpendicular uniaxial anisotropy constant \(K_{u}=1240\) kJ/m\({}^{3}\). This stripe was capped by a variable thickness empty “spacer layer”. On top of this layer, a \(512\) nm wide and \(24\) nm thick NiFe-like disk was placed at the midpoint of the Co stripe (Fig. 1, inset). The disk has the following material parameters: \(M_{s}=796\) kA/m, \(A_{ex}=13\) pJ/m, \(\alpha=0.5\) (for faster relaxation) and no intrinsic magnetic anisotropy. These regions are divided into \(3.9\times 4\times 2\) nm\({}^{3}\) cells. The stripe is always initialized with a Bloch DW near the left end of the stripe, separating an “up” domain (\(m_{z}^{stripe}=+1\)) on the left from a “down” domain on the right (\(m_{z}^{stripe}=-1\)). This domain structure will be present in the stripe in all simulations. The disk is initialized in a vortex state with definite circulation (\(c=+1\) for counterclockwise and \(-1\) for clockwise circulation) and core polarity (\(p=+1\) for an “up” core magnetization and \(-1\) for a “down” core magnetization). The system is then allowed to relax using Mumax’s relaxation routine (“minimize”) which uses a conjugate gradient method to evolve the magnetization until the ground state configuration is reached. Following each relaxation step, a magnetic field is applied along the \(z\) axis (perpendicular to the stripe plane) in \(10\) Oe steps, driving the magnetization reversal of the stripe through DW displacement. This way, hysteresis loops of the stripe can be obtained. The perpendicular field does not significantly affect the magnetic state of the disk, which stays in its initial vortex state until interacting with the DW. Time-driven, torque minimization dynamical simulations were also performed, yielding the same hysteresis loops as the relaxation routine outlined above.
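Since Mumax3 is driven by its own input scripts, the field-stepping protocol is only sketched here schematically in Python; relax_state and average_mz are hypothetical stand-ins for the Mumax3 relaxation routine and the averaged magnetization output:

```python
import numpy as np

def relax_state(Hz_oe):
    """Hypothetical stub: relax the magnetization at a fixed out-of-plane field (in Oe)."""
    pass

def average_mz():
    """Hypothetical stub: return the stripe-averaged reduced magnetization m_z."""
    return 0.0

field_step = 10                                        # Oe, as in the simulations
fields = np.arange(0, 200 + field_step, field_step)    # one field sweep

branch = []                                            # one branch of the hysteresis loop
for Hz in fields:
    relax_state(Hz)                                    # conjugate-gradient relaxation at fixed field
    branch.append((Hz, average_mz()))
```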
## III Results and discussion
We will first consider the cases of a disk with \(c=+1\) and \(p=+1\) or \(-1\), and a \(6\) nm thick spacer layer. Hysteresis loops of the Co stripe obtained in these two cases are shown in Fig. 1 (blue and red symbols) and snapshots of the magnetization state of the stripe and the Permalloy disk corresponding to selected points along the loop are shown in Fig. 2. The loops have a different shape when compared to a free Co stripe with the same geometry (black line), indicating the complex magnetization reversal process taking place.
In the case of negative core polarity (\(p=-1\), blue circles and curve) the magnetization process is characterized by the “free” propagation of a DW under an external magnetic field \(H_{z}=+20\) Oe, starting close to the left edge of the stripe (Figs. 1 and 2, A) and moving towards the right until it reaches the edge of the disk (Figs. 1 and 2, B). At this point, the DW is “pulled” towards the region underneath the disk and continues to move slowly towards the disk center, causing the VC to move upwards (Figs. 1 and 2, C). This happens because of the strong in-plane component of the DW stray field acting along the \(+x\) direction (Fig. 2, L). Since the vortex has positive circulation (\(c=+1\)), the application of an in-plane field along \(+x\) pushes the VC upwards, as the results clearly show. Eventually, the DW and the VC both reach equilibrium positions (Figs. 1 and 2, C) under \(H_{z}=+20\) Oe. Then, \(H_{z}\) is increased in \(10\) Oe steps, causing the DW to move further to the right (Figs 1 and 2, B - D). As it approaches the VC, the DW starts to display an increasing bowing around it, caused by magnetostatic coupling to out-of-plane components of the VC stray field (Figs. 1 and 2, D). This coupling is the main source of this localized DW pinning and the DW bowing, since due to the large width of the stripe (\(512\) nm) compared to the lateral dimensions of the VC (\(\sim 20\) nm), DW sections far from the VC position will tend to move away under the applied \(H_{z}\), while the DW section close to the VC will stay strongly pinned at its position, causing the significant bowing observed. Further increasing \(H_{z}\) causes the bowing to get stronger and the core to move further up. When \(H_{z}\) reaches \(130\) Oe, the DW depinning is observed (Figs. 1 and 2, D – E). The DW moves past the VC and quickly jumps to the right edge of the disk (Figs. 1 and 2, E) and eventually reaches the right edge of the stripe, leading to its magnetic saturation (Figs 1 and 2, F). Thus, the first half of the stripe hysteresis loop is obtained.
Now, if a negative field is applied (\(H_{z}<0\)), the DW sense of motion will be reversed. Under \(H_{z}=-20\) Oe, the DW will move from the right edge of the stripe towards the right edge of the disk (Figs. 1 and 2, F – H). The motion is similar to what has been previously observed: upon reaching the disk edge, the DW slowly moves underneath it, while the VC moves up under the effect of the DW in-plane stray field. However, as the DW approaches the VC position (Figs. 1 and 2, I), it jumps towards its left side without any additional increase in applied field (Figs. 1 and 2, J), reaching its equilibrium position on the left side of the VC. Similar situations were reported in Heldt _et al._ (2014); Wohlhüter _et al._ (2015), where it was shown that a VC tends to stay in equilibrium near DWs in an underlying magnetic film. Given the domain structure in the stripe, and the negative polarity of the VC, the magnetization of the propagating domain is parallel to the VC magnetization, effectively making it easy for the DW to cross the VC position at a relatively small applied field strength (\(H_{z}=-40\) Oe). The opposite phenomenon is observed when \(H_{z}>0\) (“A” and “B” points in Fig. 1): since the vortex core magnetization, pointing “down” (\(p=-1\)), is antiparallel to the propagating domain magnetization, the core acts as an effective strong pinning barrier for DW propagation, causing the strong DW bowing observed (Fig. 2, D) and the high applied field necessary to precipitate the depinning process (\(+130\) Oe). Finally, increasing the applied field value from \(-40\) Oe to \(-100\) Oe causes the DW to drag towards the left edge of the disk (Figs. 1 and 2, J – K), again with its in-plane stray field component coupled to the in-plane magnetization of the disk (Fig. 2, L). The DW depins from the left edge of the disk at \(-100\) Oe (Figs. 1 and 2, K), quickly moving towards the left edge of the stripe, leading to its magnetic saturation. It is important to note that no VC reversal is induced by interaction with the underlying DW at any point of the hysteresis loop thus obtained.
If the VC polarity is positive (\(p=+1\)), with the circulation still positive (\(c=+1\)), the simulated hysteresis loop (Fig. 1, red circles and curve) is symmetric to the previous one, with the strong DW-VC pinning occurring at negative fields (depinning at \(-130\) Oe, Fig. 1, B’) and the weak pinning occurring in the positive field region of the loop (depinning at \(+40\) Oe, Fig. 1, C’). Again, the DW drags along the regions underneath the disk, depinning from its edges at the same applied fields, regardless of vortex core polarity.
These results indicate that two coupling mechanisms between the DW and the vortex in the disk are present and contribute to the observed behavior: _(i)_ the coupling between the out-of-plane components of the DW stray field (Fig. 3) and the out-of-plane VC magnetization; _(ii)_ the coupling between the in-plane components of the DW stray field (Fig. 3) and the in-plane components of the vortex magnetization.
Since the couplings leading to the observed DW asymmetric pinning are magnetostatic, increasing the distance between the stripe and the disk should decrease the coupling strength in both cases. This coupling strength can be inferred from the strength of the depinning fields extracted from the hysteresis loops. Simulated hysteresis loops with increasingly thicker spacer layers, but the same domain structure in the stripe and in the disk, are shown in Fig. 4. For \(10\) nm, \(14\) nm and \(22\) nm thick spacers, the hysteresis loops obtained are always similar to the one obtained with a \(6\) nm spacer (Figs. 1 and 2), with the DW coupling to both the VC and to the in-plane components of the disk magnetization. On the other hand, the coupling strength decreases as we increase the separation between the stripe and the disk, as the depinning fields from the VC position (right side, Fig. 4) and from the disk left edge (left side, Fig. 4) show. The VC depinning fields are plotted against the spacer thickness in the inset. The strength of this depinning field decreases as an inverse cube law with the spacer thickness, further evidencing the dipolar magnetostatic nature of the coupling.
Further confirmation of the non-trivial nature of the DW-vortex coupling is obtained from simulations where, instead of a disk in a vortex ground state, a static localized out-of-plane magnetic field mimicking the vortex core out-of-plane stray field is applied above the stripe. The hysteresis loops shown in Fig. 5 correspond to simulations where this static field has negative polarity (equivalent to a \(p=-1\) vortex). They show asymmetric reversal, with strong depinning fields for positive applied field (antiparallel to the static field) while for negative applied fields the loops have no extraordinary features. By reversing the polarity of this static field, the depinning processes will appear on the negative side of the loop, with the positive side not showing any interesting features (data not shown). These polarity-dependent depinning fields, and the consequent asymmetries in the magnetization reversal, are reminiscent of situations where DWs are magnetostatically coupled to static pinning fields Metaxas _et al._ (2013); Novak _et al._ (2015). However, the loops in Fig. 5 clearly show that it is not sufficient to consider only the coupling between the DW and the out-of-plane component of the VC stray field as source of DW pinning. When only this static out-of-plane field is present, the DW depinning fields are always weaker than the depinning fields of the DW coupled to the full vortex, showing that the in-plane magnetostatic coupling also plays a significant role in the overall DW pinning process, and that the evolution of the magnetization of the disk while interacting with the DW cannot be neglected.
These two contributions to the magnetostatic DW-vortex coupling behind the observed DW pinning may be better understood with the aid of the energy landscape of the system. In Fig. 6, the sum of the exchange, magnetostatic and anisotropy energies is plotted against the DW position along the stripe for \(6\), \(10\), \(14\) and \(22\) nm thick spacers and both positive and negative out-of-plane applied fields. As the DW approaches the edge of the disk (for \(H_{z}>0\)) the energy decreases, forming a potential well which confirms the energetically favorable in-plane magnetostatic coupling between the DW and the vortex. In these regions close to the disk edges the potential well is symmetric, indicating that the in-plane coupling does not depend on the sign of the applied field (consequently, on the sense of DW motion). As the DW moves further left and reaches the core position (near the middle of the stripe), the energy increases, indicating that the VC effectively acts as an energy barrier for DW propagation. This energy increase is very sharp for the \(6\) nm spacer, but gets weaker for thicker spacers, becoming barely visible when the spacer thickness is \(22\) nm. Furthermore, when the spacer thickness increases, the symmetric potential well becomes less deep, consequence of the thickness-dependence of the magnetostatic coupling.
Under a negative applied field (\(H_{z}<0\)), a DW located at the right edge of the stripe will move towards the left, and the symmetric potential well is still present. However, the VC-induced energy barrier is smaller in this case, making it easier for the DW to move past the VC position. This is the origin of the asymmetric reversal evidenced by the hysteresis loops shown in Figs. 1 and 4. Notice that as the spacer layer thickness increases, not only the vortex-core energy barrier becomes less pronounced, but the energy landscape becomes nearly independent of the applied field polarity (\(22\) nm curves in Fig. 6), leading to a more symmetric magnetization reversal process, as evidenced by the \(22\) nm hysteresis loop in Fig. 4. The analysis of the energy landscapes thus explains the main characteristics of the simulated hysteresis loops, namely: a symmetric broadening caused by the in-plane coupling, and an asymmetric reversal caused by out-of-plane coupling to the VC.
The energy landscapes in Fig. 6 allowed us to develop a 1D model for the propagation of the DW along the stripe Slonczewski, J. C. and Malozemoff, A. P. (1979); Thomas _et al._ (2006); Emori (2013); Lo Conte _et al._ (2015). In this model, the DW dynamics is described in terms of DW position \(q\) and the DW angle \(\psi\) by the following equations:
\[(1+\alpha^{2})\frac{1}{\Delta}\frac{dq}{dt}=\alpha\gamma H_{z}+\frac{1}{2}\gamma H_{k}\sin{2\psi}-\frac{\alpha\gamma}{2M_{s}L_{y}L_{z}}\Big{(}\frac{dV}{dq}\Big{)}\] (1)
\[(1+\alpha^{2})\frac{d\psi}{dt}=\gamma H_{z}-\frac{1}{2}\alpha \gamma H_{k}\sin{2\psi}-\frac{\gamma}{2M_{s}L_{y}L_{z}}\Big{(}\frac{dV}{dq} \Big{)}\] (2)
where \(\Delta=(A/K_{eff})^{1/2}=6.3\) nm is the DW width, with \(K_{eff}=K_{u}-2\pi M_{s}^{2}\), \(K_{u}\) is the perpendicular uniaxial anisotropy constant, \(M_{s}\) is the saturation magnetization, \(A\) is the exchange stiffness constant, \(\gamma\) is the gyromagnetic ratio, \(H_{k}=N_{x}M_{s}\) is the shape anisotropy field with \(N_{x}=L_{z}\ln(2)/(\pi\Delta)\) being the demagnetizing factor, \(\alpha\) is the Gilbert damping parameter, \(L_{y}\) and \(L_{z}\) are the width and the thickness of the stripe and \(V\) is the pinning potential from Fig. 6, approximated by a superposition of elementary functions. All these values were taken from the micromagnetic model defined in Sec. II. The model was used to simulate DW propagation for \(6\), \(14\) and \(22\) nm spacer layers. The resulting hysteresis loop for a \(6\) nm spacer is shown in Fig. 7, along with a hysteresis loop from a full 3D micromagnetic simulation. Despite the crudeness of this 1D model, which ignores the 3D character of the DW-vortex coupling unveiled by the micromagnetic simulations, the main features of the magnetization reversal of the stripe are reproduced, namely: the free propagation under a low field, the pinning under the dot edge, the higher fields necessary to propagate the DW under the disk and the pinning caused by the VC stray field.
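As a worked illustration of Eqs. (1)-(2), the sketch below evaluates the derived quantities from the stated stripe parameters (translating the CGS-style expressions to SI, with fields expressed in tesla) and integrates the two coupled equations with scipy; the pinning potential is a hypothetical Gaussian well plus a narrow Gaussian barrier, standing in for (but not equal to) the landscape of Fig. 6:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu0    = 4e-7 * np.pi         # vacuum permeability [T m/A]
gammaT = 1.760859e11          # gyromagnetic ratio [rad/(s T)]
alpha  = 0.5                  # Gilbert damping
Ms     = 1135e3               # saturation magnetization [A/m]
Ku     = 1240e3               # uniaxial anisotropy [J/m^3]
Aex    = 17e-12               # exchange stiffness [J/m]
Ly, Lz = 512e-9, 2e-9         # stripe width and thickness [m]
Oe     = 1e-4                 # 1 Oe expressed as mu0*H in tesla

Keff  = Ku - 0.5 * mu0 * Ms**2            # SI equivalent of K_u - 2*pi*Ms^2
Delta = np.sqrt(Aex / Keff)               # DW width, approx. 6.3 nm
Nx    = Lz * np.log(2) / (np.pi * Delta)  # demagnetizing factor
Bk    = mu0 * Nx * Ms                     # shape anisotropy field, in T

# Hypothetical pinning potential V(q) [J]: a wide Gaussian well plus a
# narrow Gaussian barrier; only its derivative enters the equations.
V0, w, Vb, wb = 2e-18, 100e-9, 8e-19, 20e-9

def dV_dq(q):
    return (V0 * q / w**2) * np.exp(-q**2 / (2 * w**2)) \
         - (Vb * q / wb**2) * np.exp(-q**2 / (2 * wb**2))

def rhs(t, y, Bz):
    q, psi = y
    Beff = Bz - dV_dq(q) / (2 * Ms * Ly * Lz)   # applied plus pinning field [T]
    pref = gammaT / (1 + alpha**2)
    dpsi = pref * (Beff - 0.5 * alpha * Bk * np.sin(2 * psi))
    dq   = Delta * pref * (alpha * Beff + 0.5 * Bk * np.sin(2 * psi))
    return [dq, dpsi]

Bz  = 130 * Oe                                  # driving field of 130 Oe
sol = solve_ivp(rhs, (0.0, 20e-9), [-500e-9, 0.0], args=(Bz,), rtol=1e-8, atol=1e-12)
print(f"Delta = {Delta*1e9:.2f} nm, Bk = {Bk*1e3:.1f} mT")
print(f"final DW position: {sol.y[0, -1]*1e9:.1f} nm")
```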
## IV Conclusion
It has been demonstrated that magnetic vortices in nanosized disks can effectively pin DWs moving along magnetic stripes with perpendicular anisotropy. Pinning can be achieved entirely by means of magnetostatic coupling between the two structures. The coupling is dependent on both the out-of-plane and the in-plane components of the stray fields and magnetizations of the DW and the vortex, giving rise to a complex pinning scenario which leads to simple phenomenology: asymmetric, broadened hysteresis loops of the magnetic stripes, with their magnetization reversal being easier when the out-of-plane applied field and the VC magnetization are parallel, and harder when they are antiparallel; and the broadening arising from the symmetric in-plane coupling between the DW and the vortex, which is independent of the DW sense of motion. The observed behavior cannot be mimicked by a DW simply coupled to a static out-of-plane field emulating the VC stray field, emphasizing the role of the in-plane coupling between DW stray fields and the vortex. A simplified 1D model of DW propagation where an approximation of the micromagnetic energy landscape, taking into account both in-plane and out-of-plane magnetostatic couplings, is able to qualitatively reproduce the main features of the magnetization process unveiled by the micromagnetic simulations. It is important to stress that in previous works Metaxas _et al._ (2013); Novak _et al._ (2015); Franken _et al._ (2014); van Mourik _et al._ (2014), the DW was in general coupled to nanostructures in a single domain state, from which trivial in-plane van Mourik _et al._ (2014) or out-of-plane Franken _et al._ (2014) stray fields emanated. This lies in contrast to the present study, where a magnetic structure (vortex) from which no in-plane stray field emanates is used to influence the DW motion. These results may be useful to introduce a new way to achieve control of DW or vortex dynamics in spintronic devices. Furthermore, they can serve as a proof of principle and guide experimental efforts to fabricate similar structures and study their magnetization process.
###### Acknowledgements.
RLN acknowledges the Brazilian agencies CNPq and FAPERJ for the postdoctoral grant during part of this work. The authors thank Dr. P. J. Metaxas for useful discussions, and Dr. A. Torres for granting access to the UFSC Physics Department computational facilities.
## References
* Parkin _et al._ (2008)S. S. P. Parkin, M. Hayashi, and L. Thomas, Science **320**, 190 (2008).
* Allwood _et al._ (2005)D. Allwood, G. Xiong, C. Faulkner, D. Atkinson, D. Petit, and R. Cowburn, Science **309**, 1688 (2005).
* Hrkac _et al._ (2011)G. Hrkac, J. Dean, and D. A. Allwood, Phil. Trans. R. Soc. A **369**, 3214 (2011).
* Vieu _et al._ (2002)C. Vieu, J. Gierak, H. Launois, T. Aign, P. Meyer, J.-P. Jamet, J. Ferré, C. Chappert, T. Devolder, V. Mathet, and H. Bernas, J. Appl. Phys. **91**, 3103 (2002).
* Repain _et al._ (2004)V. Repain, J. Jamet, N. Vernier, M. Bauer, J. Ferre, C. Chappert, J. Gierak, and D. Mailly, J. Appl. Phys. **95**, 2614 (2004).
* Franken _et al._ (2011)J. H. Franken, M. Hoeijmakers, R. Lavrijsen, and H. J. M. Swagten, J. Phys.: Condens. Matter **24**, 024216 (2011).
* Adeyeye _et al._ (1997)A. O. Adeyeye, J. A. C. Bland, and C. Daboo, Appl. Phys. Lett. **70**, 3164 (1997).
* Lewis _et al._ (2010)E. R. Lewis, D. Petit, L. O’Brien, A. Fernandez-Pacheco, J. Sampaio, A.-V. Jausovec, H. T. Zeng, D. E. Read, and R. P. Cowburn, Nat. Mater. **9**, 980 (2010).
* Hayashi _et al._ (2006)M. Hayashi, L. Thomas, C. Rettner, R. Moriya, X. Jiang, and S. S. P. Parkin, Phys. Rev. Lett. **97**, 207205 (2006).
* Hayashi _et al._ (2007)M. Hayashi, L. Thomas, C. Rettner, R. Moriya, and S. S. P. Parkin, Nat. Phys. **3**, 21 (2007).
* Bogart _et al._ (2009)L. K. Bogart, D. Atkinson, K. O’Shea, D. McGrouther, and S. McVitie, Phys. Rev. B **79**, 054414 (2009).
* Brandão _et al._ (2014)J. Brandão, R. L. Novak, H. Lozano, P. R. Soledade, A. Mello, F. Garcia, and L. C. Sampaio, J. Appl. Phys. **116**, 193902 (2014).
* Dean _et al._ (2015)J. Dean, M. T. Bryan, J. D. Cooper, A. Virbule, J. E. Cunningham, and T. J. Hayward, Appl. Phys. Lett. **107**, 142405 (2015).
* Metaxas _et al._ (2013)P. J. Metaxas, P.-J. Zermatten, R. L. Novak, S. Rohart, J.-P. Jamet, R. Weil, J. Ferré, A. Mougin, R. L. Stamps, G. Gaudin, V. Baltz, and B. Rodmacq, J. Appl. Phys. **113**, 073906 (2013).
* Novak _et al._ (2015)R. L. Novak, P. J. Metaxas, J.-P. Jamet, R. Weil, J. Ferré, A. Mougin, S. Rohart, R. L. Stamps, P.-J. Zermatten, G. Gaudin, V. Baltz, and B. Rodmacq, J. Phys. D: Appl. Phys. **48**, 235004 (2015).
* Franken _et al._ (2014)J. H. Franken, M. A. J. van der Heijden, T. H. Ellis, R. Lavrijsen, C. Daniels, D. McGrouther, H. J. M. Swagten, and B. Koopmans, Adv. Funct. Mater. **24**, 3508 (2014).
* van Mourik _et al._ (2014)R. A. van Mourik, C. T. Rettner, B. Koopmans, and S. S. P. Parkin, J. Appl. Phys. **115**, 17D503 (2014).
* Cowburn _et al._ (1999)R. Cowburn, D. Koltsov, A. Adeyeye, M. Welland, and D. Tricker, Phys. Rev. Lett. **83**, 1042 (1999).
* Shinjo _et al._ (2000)T. Shinjo, T. Okuno, R. Hassdorf, K. Shigeto, and T. Ono, Science **289**, 930 (2000).
* Wachowiak _et al._ (2002)A. Wachowiak, J. Wiebe, M. Bode, O. Pietzsch, M. Morgenstern, and R. Wiesendanger, Science **298**, 577 (2002).
* Alex Hubert and Rudolf Schäfer (1998)Alex Hubert and Rudolf Schäfer, _Magnetic Domains: The Analysis of Magnetic Microstructures_ (Springer, 1998).
* Feldtkeller and Thomas (1965)E. Feldtkeller and H. Thomas, Phys. kondens. Materie **4**, 8 (1965).
* Heldt _et al._ (2014)G. Heldt, M. T. Bryan, G. Hrkac, S. E. Stevenson, R. V. Chopdekar, J. Raabe, T. Thomson, and L. J. Heyderman, Appl. Phys. Lett. **104**, 182401 (2014).
* Wohlhüter _et al._ (2015)P. Wohlhüter, M. T. Bryan, P. Warnicke, S. Gliga, S. E. Stevenson, G. Heldt, L. Saharan, A. K. Suszka, C. Moutafis, R. V. Chopdekar, J. Raabe, T. Thomson, G. Hrkac, and L. J. Heyderman, Nat. Comm. **6**, 7836 (2015).
* Vansteenkiste _et al._ (2014)A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. Van Waeyenberge, AIP Advances **4**, 107133 (2014).
* Slonczewski, J. C. and Malozemoff, A. P. (1979)Slonczewski, J. C. and Malozemoff, A. P., _Magnetic Domain Walls in Bubble Materials_ (Academic Press, 1979).
* Thomas _et al._ (2006)L. Thomas, M. Hayashi, X. Jiang, R. Moriya, C. Rettner, and S. Parkin, Nature **443**, 197 (2006).
* Emori (2013)S. Emori, Nat. Mater. **12**, 611 (2013).
* Lo Conte _et al._ (2015)R. Lo Conte, E. Martinez, A. Hrabec, A. Lamperti, T. Schulz, L. Nasi, L. Lazzarini, R. Mantovan, F. Maccherozzi, S. S. Dhesi, B. Ocker, C. H. Marrows, T. A. Moore, and M. Klaui, Phys. Rev. B **91**, 014433 (2015).
<figure><img src="content_image/1702.02451/x1.png"><figcaption>Figure 1: Hysteresis loops of the Co stripe coupled to a vortex with p=+1 (red symbols and line) and p=−1 (blue symbols and line). The spacer layer is 6 nm thick. The letters indicate points along the loops that are discussed in the text and depicted in Fig. 2. The black continuous line corresponds to a hysteresis loop of a free Co stripe. The inset shows the geometry of the system investigated. (Color online).</figcaption></figure>
<figure><img src="content_image/1702.02451/x2.png"><figcaption>Figure 2: (A)-(K) Snapshots of the Co stripe (left column) and the Py disk (right column) during the hysteresis loop simulation shown in Fig. 1. The dashed circles indicate the position of the disk. The out-of-plane component of the reduced magnetization (mz) is shown in red (mz>0) and blue (mz<0). The in-plane magnetization of the disk is represented by the arrows. (L) Lateral view of the DW during the stripe magnetization reversal. (Color online).</figcaption></figure>
<figure><img src="content_image/1702.02451/x3.png"><figcaption>Figure 3: Magnetostatic stray field profiles of a Bloch DW. (a) Out-of-plane component. (b) In-plane component along stripe length. (Color online).</figcaption></figure>
<figure><img src="content_image/1702.02451/x4.png"><figcaption>Figure 4: Simulated hysteresis loops of the Co stripe coupled to the Py disk for several thicknesses of the spacer layer. The vortex has c=+1 and p=−1. The inset shows the fields where depinning of the DW from the vortex core is observed (filled circles) together with depinning fields obtained from simulations where only a fixed external out-of-plane field, mimicking the vortex core field, was applied to the Co stripe (open circles). (Color online).</figcaption></figure>
<figure><img src="content_image/1702.02451/x5.png"><figcaption>Figure 5: Simulated hysteresis loops of the Co stripe under the influence of a fixed, out-of-plane field acting on the center of the stripe. (Color online).</figcaption></figure>
<figure><img src="content_image/1702.02451/x6.png"><figcaption>Figure 6: Energy landscapes of a DW moving under the influence of a vortex with c=+1 and p=−1 from (a) left to right, with positive out-of-plane applied field, and (b) right to left, with negative out-of-plane applied field. The vertical dashed lines correspond to the disk edges. (Color online).</figcaption></figure>
<figure><img src="content_image/1702.02451/x7.png"><figcaption>Figure 7: Hysteresis loops for a 6 nm spacer obtained from the 1D model (line) and from a full micromagnetic simulation (red circles). Most features of the full 3D micromagnetic simulation are reproduced by a simplified 1D model, notably the two couplings (in-plane and out-of-plane) leading to DW pinning. (Color online).</figcaption></figure>
# Fighting Decoherence by Feedback-controlled Dissipation
Gernot Schaller
gernot.schaller@tu-berlin.de
Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstr. 36, 10623 Berlin, Germany
###### Abstract
Repeated closed-loop control operations acting as piecewise-constant Liouville superoperators conditioned on the outcomes of regularly performed measurements may effectively be described by a fixed-point iteration for the density matrix. Even when all Liouville superoperators point to the completely mixed state, feedback of the measurement result may lead to a pure state, which can be interpreted as selective dampening of undesired states. Using a microscopic model, we exemplify this for a single qubit, which can be purified in an arbitrary single-qubit state by tuning the measurement direction and two qubits that may be purified towards a Bell state by applying a special continuous two-local measurement. The method does not require precise knowledge of decoherence channels and works for large reservoir temperatures provided measurement, processing, and control can be implemented in a continuous fashion.
pacs: 03.65.Ta 03.65.Yz 03.67.Bg 03.67.Pp
In quantum systems, one typically aims at avoiding decoherence, which is often seen as the arch-enemy of quantum computation [1]. Simply performing the computation fast enough will fail for most applications, as a size-scalable quantum system with large coherence times [2] is yet to be found. Therefore, advanced schemes have been proposed to reduce or inhibit decoherence in quantum computers, such as quantum error correction [3], the use of decoherence-free subspaces [4], or open-loop control [5; 6; 7; 8]. These schemes require quite sophisticated techniques which themselves may not be entirely robust against decoherence and/or control errors.
The idea to reverse the perspective by using decoherence constructively is not new [9; 10; 11; 12] but has recently again gained a lot of attention [13; 14; 15; 16; 17]: The simplest example is to relax into the (possibly entangled) ground state of a defined system Hamiltonian, which only requires weak, unspecific couplings to a low-temperature reservoir with a sufficiently large energy gap above the ground state, \(\Delta E\gg k_{\rm B}T\). The limitation to the ground state can be overcome by using multiple reservoirs at different thermal equilibria and different chemical potentials. Then, however, the general dissipative-engineering paradigm requires one to design the interactions such that the system is driven to the desired target state. For a general pure state to be stabilized, this problem is hard to solve; the solution is specific to the target state and may prove difficult to implement experimentally.
Since the success of the centrifugal governor used by James Watt to regulate the speed of steam engines, _feedback_ (closed-loop) control has found widespread application, with today's standard examples including e.g. thermostats and automatic speed control in cars. Nowadays, the built-in mechanical self-regulation of the centrifugal governor is often replaced by electronic signal processing, which enables one to change the feedback protocol easily. The implementation of quantum feedback control requires the inclusion of the quantum-mechanical measurement process and making the quantum-mechanical evolution after each measurement subject to control. In the conventional scheme [18; 19], quantum jumps detected during the time interval \(\Delta t\) are fed back as an instantaneous \(\delta\)-pulse into the system Hamiltonian. In the continuum measurement limit \(\Delta t\to 0\), an effective master equation emerges, where the control operation appears as an instantaneous unitary rotation of the density matrix [20; 21; 22].
In contrast, here we consider a general scheme of periodic measurements at intervals \(\Delta t\) and discuss in detail the opposite case where the collapse due to projective measurements is assumed to be much faster than \(\Delta t\) throughout. The scheme allows for weak measurements and in principle arbitrary control pulse sequences, but we will, for simplicity of interpretation, specialize to a piecewise-constant time-dependence of the control parameters. In this scenario, even for \(\Delta t\to 0\) a master equation description does not typically emerge. The purpose of this paper is to show that, provided with dissipation that drags the system only to the completely mixed state but allows for strength-tunable system-reservoir interactions, feedback of information obtained by measurements can be used to achieve purification, thereby putting even “clueless” dissipation to productive use.
## I Propagation under Feedback
A general quantum measurement is described by a set of measurement operators \(\{M_{m}\}\) satisfying the completeness relation \(\sum_{m}M_{m}^{\dagger}M_{m}=\mbox{\boldmath$1$}\) [1]. Orthogonal projection operators \(M_{n}M_{m}=\delta_{nm}M_{n}\) are a well-known standard case. Under measurement outcome \(m\), the density matrix becomes (we assume an instantaneous collapse of the wave function)
\[\rho\stackrel{{ m}}{{\to}}\frac{M_{m}\rho M_{m}^{ \dagger}}{P(m)}\,,\] (1)
and the outcome probability is given by \(P(m)={\rm Tr}\left\{M_{m}^{\dagger}M_{m}\rho\right\}\). Arranging the \(N^{2}\) elements of the density matrix in a vector, this becomes
\[\rho\stackrel{{ m}}{{\to}}\frac{1}{P(m)}{\cal M}_{m} \rho\,,\] (2)
where \({\cal M}_{m}\) is a superoperator having an \(N^{2}\times N^{2}\) matrix representation. In the following, we will use calligraphic symbols to denote superoperators. In principle, the measurement operators may depend also on the time interval \(\Delta t\) between two measurements, see appendix A.
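As a minimal numerical sketch of such a superoperator representation (using row-major vectorisation of the density matrix, an implementation choice that differs from the element ordering adopted later in Eq. (8)):

```python
import numpy as np

# Superoperator matrix of rho -> M rho M^dagger acting on the row-major
# vectorised density matrix (ordering rho00, rho01, rho10, rho11).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
M = 0.5 * (I2 + sx)                       # example measurement operator

calM = np.kron(M, M.conj())               # N^2 x N^2 matrix representation

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
lhs = calM @ rho.reshape(-1)              # superoperator acting on vec(rho)
rhs = (M @ rho @ M.conj().T).reshape(-1)  # direct evaluation of M rho M^dagger
print(np.allclose(lhs, rhs))              # True
```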
The most general evolution of the density matrix (including measurements, unitary and dissipative evolution) is governed by a trace- and positivity preserving map
\[\rho(t+\Delta t)=\sum_{\alpha}B_{\alpha}(\Delta t)\rho(t)B_{\alpha}^{\dagger}(\Delta t)=\sum_{\alpha}{\cal B}_{\alpha}(\Delta t)\rho(t)\,\hat{=}\,{\cal B}(\Delta t)\rho(t)\] (3)
with Kraus operators obeying \(\sum_{\alpha}B_{\alpha}^{\dagger}(\Delta t)B_{\alpha}(\Delta t)=\mbox{ \boldmath$1$}\) and the corresponding superoperator \({\cal B}(\Delta t)\). Formally, feedback can be conveniently described by making the general evolution between two measurements dependent on the outcome of the previous measurement. For example, given density matrix \(\rho(t)\) right before one measurement, the density matrix conditioned on outcome \(m\) at the first measurement becomes right before the next measurement
\[\rho(t+\Delta t)\stackrel{m}{\to}\sum_{\alpha}B_{\alpha,m}(\Delta t)\frac{M_{m}\rho(t)M_{m}^{\dagger}}{P(m)}B_{\alpha,m}^{\dagger}(\Delta t)\,\hat{=}\,\frac{{\cal B}_{m}(\Delta t){\cal M}_{m}\rho(t)}{P(m)}\,.\] (4)
For an arbitrarily chosen observable, we perform a weighted average for its expectation value at time \(t+\Delta t\) over all measurement outcomes and conditioned evolutions
\[\left<\bar{A}_{t+\Delta t}\right>={\rm Tr}\left\{{\cal A}\left[ \sum_{m}{\cal B}_{m}(\Delta t){\cal M}_{m}\right]\rho(t)\right\}\,,\] (5)
where we have used superoperators for ease of notation with \({\cal A}\rho\hat{=}A\rho\), and the trace is generally mapped to multiplication with a row vector containing \(N\) entries of value \(1\) at positions containing populations and \(N^{2}-N\) entries of value \(0\) elsewhere. Demanding \(\left<\bar{A}_{t+\Delta t}\right>={\rm Tr}\left\{{\cal A}\,{\cal P}_{\rm eff}(\Delta t)\rho(t)\right\}\) for arbitrary observables defines an effective propagator for repeated measurements and feedback
\[{\cal P}_{\rm eff}(\Delta t)=\sum_{m}{\cal B}_{m}(\Delta t){\cal M }_{m}\,.\] (6)
The above propagator may prove useful once microscopic parameters are linked to the superoperators \({\cal B}_{m}(\Delta t)\), which is however in principle possible also for non-Markovian systems [23; 24; 25]. In appendix A we show that it yields an effective master equation as in the conventional Wiseman-Milburn scheme for non-projective quantum jump detection, unitary control and in the continuum limit \(\Delta t\to 0\).
In the following however, we will consider a Lindblad evolution with a single constant Lindblad superoperator \({\cal L}_{m}\) (which may however also include Hamiltonian control [26]) between the measurements \({\cal B}_{m}(\Delta t)=e^{{\cal L}_{m}\Delta t}\) and time-independent measurement superoperators
\[{\cal P}_{\rm eff}(\Delta t)=\sum_{m}e^{{\cal L}_{m}\Delta t}{ \cal M}_{m}\,.\] (7)
Note that in contrast to the Wiseman-Milburn scheme (see also appendix A), the time-independence of the measurement superoperators requires the measurement to be much faster than \(\Delta t\) throughout. The conditioning of the \({\cal L}_{m}\) on the previous measurement result \(m\) defines the feedback protocol. Microscopically, this dependence on the measurement result can be implemented by triggering switches of parameters in the Hamiltonian. We explicitly include the possibility that the stationary state (defined by \({\cal L}_{m}\bar{\rho}=0\)) of each Liouvillian may be the completely mixed state with \({\rm Tr}\left\{\bar{\rho}^{2}\right\}=1/N\). In the following, we will discuss the implications of this effective propagator for the control of qubits.
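To make Eq. (7) concrete, the following self-contained sketch iterates the propagator for a single qubit with a \(\sigma^{x}\) measurement and toy, outcome-dependent depolarising Liouvillians (chosen purely for illustration; they are not the microscopically derived Liouvillians used below) and prints the purity of the resulting fixed point:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sup_sandwich(A, B):
    """Superoperator of rho -> A rho B for row-major vectorisation."""
    return np.kron(A, B.T)

def liouvillian(Omega, Gamma):
    """Toy Liouvillian: -i[H, rho] + Gamma*(Tr(rho) I/2 - rho) as a 4x4 matrix."""
    H = 0.5 * Omega * sz
    comm = -1j * (sup_sandwich(H, I2) - sup_sandwich(I2, H))
    vecI = I2.reshape(-1)
    depol = Gamma * (0.5 * np.outer(vecI, vecI) - np.eye(4))
    return comm + depol

def meas_superop(M):
    """Superoperator of the (unnormalised) measurement map rho -> M rho M^dagger."""
    return sup_sandwich(M, M.conj().T)

Omega, dt = 1.0, 0.05
M = {+1: 0.5 * (I2 + sx), -1: 0.5 * (I2 - sx)}   # sigma^x measurement projectors
Gamma = {+1: 0.1, -1: 2.0}                       # outcome-dependent dissipation strength

P_eff = sum(expm(liouvillian(Omega, Gamma[m]) * dt) @ meas_superop(M[m])
            for m in (+1, -1))                   # Eq. (7)

v = (0.5 * I2).reshape(-1)                       # start from the completely mixed state
for _ in range(2000):                            # fixed-point iteration rho -> P_eff rho
    v = P_eff @ v
rho_bar = v.reshape(2, 2)
print("purity   :", np.real(np.trace(rho_bar @ rho_bar)))
print("<sigma_x>:", np.real(np.trace(rho_bar @ sx)))
```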
Typically, decoherence in an open quantum system is modeled by the decay of off-diagonal matrix elements of the density matrix (coherences). In an appropriate basis, this is formally represented by a block form of the Liouvillians \({\cal L}_{m}\), which leads to a decoupling of coherences and diagonal entries (populations). A prototypical example for this is the Born-Markov-secular approximation master equation in the case of a non-degenerate system Hamiltonian [27], where the block structure becomes explicit in the system energy eigenbasis. However, for quantum measurements the measurement superoperators \({\cal M}_{m}\) will not generally have the same block structure as the Liouvillian. It may therefore be expected that the effective propagator in Eq. (7) couples coherences and populations and thereby may lead to nontrivial effects.
In spite of the completeness relation in the original Hilbert space, the measurement superoperators do not sum up to the identity \(\sum_{m}{\cal M}_{m}\neq\mbox{\boldmath$1$}\). This implies that even in the continuum limit, where measurement, processing, and control are repeated at infinitesimally small time intervals \(\Delta t\to 0\), the iteration equation \(\rho(t+\Delta t)={\cal P}_{\rm eff}(\Delta t)\rho(t)\) with (7) does not converge to an effective master equation \(\dot{\rho}\neq{\cal L}_{\rm eff}\rho\). A prominent example of this is the quantum Zeno effect: For a closed system without measurements and without feedback, the action of the Liouvillian without measurement simply becomes \(\rho(t+\Delta t)=e^{{\cal L}_{0}\Delta t}\rho(t)\hat{=}e^{-{\rm i}H\Delta t}\rho e^{+{\rm i}H\Delta t}\), which just encodes the usual unitary evolution. With measurements and without feedback, the effective propagator in Eq. (7) leads to \(\rho(t+\Delta t)=e^{{\cal L}_{0}\Delta t}\sum_{m}{\cal M}_{m}\rho(t)\), where the measurement (super-)operators may account for the freezing of the observed quantum state when \(\Delta t\to 0\), as will be discussed in the next section in greater detail.
A continuous description by means of a differential equation for expectation values may however still be possible when the expectation values are consistent with the measurement operators, as will be shown below.
## II Dissipative Purification of a Qubit
As a first example we consider a single qubit parameterized by Pauli matrices. At time intervals \(\Delta t\), we perform strong projective measurements of \(\sigma^{x}\) described by the measurement operators \(M_{\pm}=\frac{1}{2}\left[\mbox{\boldmath$1$}\pm\sigma^{x}\right]\) with superoperator equivalents acting on the vector \(\left(\rho_{00},\rho_{11},\rho_{01},\rho_{10}\right)^{\rm T}\) having the representation
\[{\cal M}_{\pm}=\frac{1}{4}\left(\begin{array}[]{cccc}1&1&\pm 1& \pm 1\\ 1&1&\pm 1&\pm 1\\ \pm 1&\pm 1&1&1\\ \pm 1&\pm 1&1&1\end{array}\right)\,.\] (8)
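The representation (8) can be checked numerically; the sketch below builds the superoperators from the projectors and reorders the row-major vectorisation into the \((\rho_{00},\rho_{11},\rho_{01},\rho_{10})\) ordering used in the text:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
perm = [0, 3, 1, 2]   # row-major (rho00, rho01, rho10, rho11) -> (rho00, rho11, rho01, rho10)

for sign in (+1, -1):
    M = 0.5 * (I2 + sign * sx)
    calM = np.kron(M, M.conj())             # acts on the row-major vec(rho)
    calM_text = calM[np.ix_(perm, perm)]    # reorder to the basis of the text
    print(np.real_if_close(4 * calM_text))  # entries +/-1 as in Eq. (8)
```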
The feedback protocol is defined by applying an outcome-dependent Liouvillian derived microscopically from the system (S), bath (B) and interaction (SB) Hamiltonians
\[H_{\rm S}^{(\pm)} = \frac{\Omega}{2}\sigma^{z}\,,\qquad H_{\rm B}^{(\pm)}=\sum_{k} \omega_{k}b_{k}^{\dagger}b_{k}\,,\]
\[H_{\rm SB}^{(\pm)} = \lambda_{\pm}\left(\mbox{\boldmath$n$}_{\pm}\cdot\mbox{\boldmath$ \sigma$}\right)\otimes\sum_{k}\left(h_{k}b_{k}+h_{k}^{*}b_{k}^{\dagger}\right)\,,\] (9)
where \(\lambda_{\pm}\geq 0\) and the unit vectors \(\mbox{\boldmath$n$}_{\pm}=\left(\sin(\theta_{\pm})\cos(\phi_{\pm}),\sin(\theta _{\pm})\sin(\phi_{\pm}),\cos(\theta_{\pm})\right)\) characterize strength and direction (e.g., purely dephasing for \(\mbox{\boldmath$n$}=\mbox{\boldmath$e$}_{z}\)) of the dissipation, respectively. The \(b_{k}\) are bosonic annihilation operators of a bath assumed at thermal equilibrium throughout. The case \(\lambda_{\pm}=\lambda\) and \(\mbox{\boldmath$n$}_{\pm}=\mbox{\boldmath$n$}\) corresponds to repeated measurements without feedback applied to an open quantum system. Applying the Born, Markov and secular approximations, the corresponding conditioned Liouville superoperators \({\cal L}_{\pm}\) do not depend on \(\phi_{\pm}\) and both have block structure in the system energy eigenbasis \(\sigma^{z}\left|0/1\right>=(-1)^{(0/1)}\left|0/1\right>\), leading to a decoupling of coherences and populations, see appendix B.
The ability to change the system-reservoir coupling will of course depend on the physical implementation of the qubit. In electronic setups for example (charge qubits), nearby quantum point contacts may not only function as detector: Changing the bias voltage for circuits in the vicinity of the charge qubit will induce a changed system-reservoir coupling. If in contrast the qubit is realized as the lowest two modes of a cavity, tuning the permeability of the cavity walls may yield a similar effect.
### Thermal reservoirs
Let us first consider the simplest case where neither measurement nor control are applied: The continuous action of either Liouvillian would lead to an exponential decay of coherences \({\left|\rho_{01}\right|}^{2}(t)=e^{-\Gamma_{\pm}t}{\left|\rho_{01}\right|}^{2} (0)\) with the dephasing rate
\[\Gamma_{\pm}=\lambda_{\pm}^{2}\left[4\cos^{2}(\theta_{\pm})\gamma_{0}+\sin^{2}(\theta_{\pm})\left(\gamma_{+\Omega}+\gamma_{-\Omega}\right)\right]\,.\] (10)
Here, \(\gamma_{\omega}\equiv\int\left<e^{+{\rm i}H_{\rm B}\tau}Be^{-{\rm i}H_{\rm B} \tau}B\right>e^{+{\rm i}\omega\tau}d\tau\) is the Fourier transform of the bath correlation function with the coupling operator \(B\equiv\sum_{k}\left(h_{k}b_{k}+h_{k}^{*}b_{k}^{\dagger}\right)\) when a thermal state at inverse temperature \(\beta\) is assumed. Analytically continuing the spectral coupling density \(J(\omega)\equiv 2\pi\sum_{k}{\left|h_{k}\right|}^{2}\delta(\omega-\omega_{k})\) to negative \(\omega\) via \(J(-\omega)\equiv-J(+\omega)\) one obtains with the Bose distribution \(n_{\rm B}(\omega)=\left[e^{+\beta\omega}-1\right]^{-1}\) the simple expression \(\gamma_{\omega}=J(\omega)\left[1+n_{\rm B}(\omega)\right]\). For both Liouvillians, the populations will approach the thermalized stationary state characterized by \(\bar{\sigma}^{z}=(\gamma_{-\Omega}-\gamma_{+\Omega})/(\gamma_{-\Omega}+\gamma_ {+\Omega})\). Due to micro-reversibility, the correlation function obeys the Kubo-Martin-Schwinger [27] condition \(\gamma_{-\Omega}=e^{-\beta\Omega}\gamma_{+\Omega}\).
Therefore, for low temperatures \(k_{\rm B}T\ll\Omega\), the qubit will simply decay towards its ground state, which is a trivial example of dissipation-induced purification in the eigenbasis of the system Hamiltonian. Note however that the validity of the Markovian approximation is only expected when the coupling strength is smaller than the temperature, such that the relaxation speed may be slowed by scaling the coupling strength appropriately.
In contrast, for high temperatures and an Ohmic spectral density \(J(\omega)=2\alpha\omega e^{-\omega/\omega_{\rm c}}\) [28], the Fourier transform of the correlation function becomes essentially flat for large cutoff frequencies \(\omega_{\rm c}\), with the limiting value \(\gamma\equiv 2\alpha k_{\rm B}T\approx\gamma_{0}\approx\gamma_{\pm\Omega}\). Here, the Markovian approximation becomes exact, since the bath correlation function approaches a Dirac-\(\delta\) distribution. The high-temperature stationary state is then just the completely mixed state with \(\left<\bar{\sigma}^{x/y/z}\right>=0\); in essence, decoherence occurs more rapidly at higher temperatures. In this limit, the additional terms corresponding to the Lamb shift are also negligible. For simplicity, and to stress the drastic merit of feedback control, we will in the following focus on this worst-case scenario for decoherence (i.e., the high-temperature and wide-band limit), where the Fourier transform of the bath correlation function is characterized by the single parameter \(\gamma\).
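As a quick numerical illustration of these statements (not part of the original analysis), the following minimal Python sketch evaluates \(\gamma_{\omega}=J(\omega)\left[1+n_{\rm B}(\omega)\right]\) for the Ohmic spectral density, confirms the Kubo-Martin-Schwinger relation \(\gamma_{-\Omega}=e^{-\beta\Omega}\gamma_{+\Omega}\), and shows that \(\gamma_{\omega}\) flattens towards \(2\alpha k_{\rm B}T\) in the high-temperature, wide-band regime. All parameter values (\(\alpha\), \(\omega_{\rm c}\), \(T\), \(\Omega\)) are illustrative assumptions.

```python
import numpy as np

# Ohmic spectral density with exponential cutoff, J(w) = 2*alpha*w*exp(-|w|/wc),
# analytically continued to negative frequencies via J(-w) = -J(w).
alpha, wc, T = 0.01, 100.0, 50.0      # illustrative values (hbar = k_B = 1)
beta = 1.0 / T

def J(w):
    return 2.0 * alpha * w * np.exp(-abs(w) / wc)   # odd in w by construction

def n_B(w):
    return 1.0 / np.expm1(beta * w)                 # Bose distribution

def gamma(w):
    # Fourier transform of the bath correlation function, gamma_w = J(w)[1 + n_B(w)]
    return J(w) * (1.0 + n_B(w))

Omega = 1.0
# Kubo-Martin-Schwinger condition: gamma(-Omega) = exp(-beta*Omega) * gamma(+Omega)
print(gamma(-Omega), np.exp(-beta * Omega) * gamma(Omega))
# High-temperature, wide-band limit: gamma_w is essentially flat, gamma_w ~ 2*alpha*k_B*T
for w in [0.01, 0.1, 1.0, 5.0]:
    print(w, gamma(w) / (2.0 * alpha * T))          # ratios close to 1
```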
### Quantum Zeno Limit
Performing repeated measurements but neither allowing for feedback nor dissipation (\(\lambda_{\pm}=0\)), the effective propagator (7) becomes
\[{\cal P}_{\rm Zeno}(\Delta t)=\frac{1}{2}\left(\begin{array}[]{ cccc}1&1&0&0\\ 1&1&0&0\\ 0&0&e^{-{\rm i}\Omega\Delta t}&e^{-{\rm i}\Omega\Delta t}\\ 0&0&e^{+{\rm i}\Omega\Delta t}&e^{+{\rm i}\Omega\Delta t}\end{array}\right)\,,\] (11)
which simply leads to the Quantum Zeno effect [29; 30]. Repeated application of the above propagator yields
\[{\cal P}_{\rm Zeno}^{n}(\Delta t)=\left(\begin{array}[]{cc}\mbox{ \boldmath$1$}&\mbox{\boldmath$0$}\\ \mbox{\boldmath$0$}&\cos^{n-1}(\Omega\Delta t)\mbox{\boldmath$1$}\end{array} \right){\cal P}_{\rm Zeno}(\Delta t)\,,\] (12)
which when \(t=n\Delta t\) is kept constant while \(\Delta t\to 0\) and \(n\to\infty\) reduces further to \({\cal P}_{\rm Zeno}^{n}(\Delta t)\to{\cal P}_{\rm Zeno}(0)\). Obviously, this operator preserves the pure eigenstates of the measurement operators \(\left|\Psi\right>=\frac{1}{\sqrt{2}}\left[\left|0\right>\pm\left|1\right>\right]\), such that the system when initialized in these states will remain frozen.
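The algebra behind Eqs. (11) and (12) can be checked directly by matrix multiplication. The short sketch below (our own illustration, with arbitrary parameter values) builds the propagator in the ordering \((\rho_{00},\rho_{11},\rho_{01},\rho_{10})\), verifies the closed form of Eq. (12), and shows that a \(\sigma^{x}\) eigenstate is approximately frozen for small \(\Delta t\) at fixed \(t=n\Delta t\).

```python
import numpy as np

def P_zeno(Omega, dt):
    """Effective propagator of Eq. (11) in the ordering (rho00, rho11, rho01, rho10)."""
    e_m, e_p = np.exp(-1j * Omega * dt), np.exp(+1j * Omega * dt)
    return 0.5 * np.array([[1, 1, 0, 0],
                           [1, 1, 0, 0],
                           [0, 0, e_m, e_m],
                           [0, 0, e_p, e_p]], dtype=complex)

Omega, t, n = 1.0, 10.0, 20000
dt = t / n
P = P_zeno(Omega, dt)
Pn = np.linalg.matrix_power(P, n)

# Eq. (12): P^n = diag(1, 1, cos^{n-1}(Omega dt), cos^{n-1}(Omega dt)) * P
c = np.cos(Omega * dt) ** (n - 1)
block = np.diag([1.0, 1.0, c, c])
print(np.allclose(Pn, block @ P))          # True

# |Psi> = (|0> + |1>)/sqrt(2) corresponds to the vector (1/2, 1/2, 1/2, 1/2);
# for n -> infinity at fixed t it stays (approximately) frozen.
rho_plus = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)
print(np.round(Pn @ rho_plus, 3))          # close to the initial vector for large n
```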
### Zeno and dissipation
Adding dissipation but without feedback (\(\lambda_{\pm}=\lambda>0\) and \(\mbox{\boldmath$n$}_{\pm}=\mbox{\boldmath$n$}\)), the Quantum Zeno effect does not stabilize the coherences: According to Eq. (7), these decay between the measurements as exemplified by the decaying expectation values
\[\left<\sigma^{x}_{t+\Delta t}\right> = \cos(\Omega\Delta t)e^{-\gamma\Delta t\lambda^{2}[3+\cos(2\theta) ]/2}\left<\sigma^{x}_{t}\right>\,,\]
\[\left<\sigma^{y}_{t+\Delta t}\right> = \sin(\Omega\Delta t)e^{-\gamma\Delta t\lambda^{2}[3+\cos(2\theta) ]/2}\left<\sigma^{x}_{t}\right>\,,\] (13)
whilst \(\left<\sigma^{z}_{t+\Delta t}\right>=0\). Note that the chosen measurement of \(\sigma^{x}\) at time \(t\) projects \(\left<\sigma^{y/z}_{t}\right>\) to zero, which eventually implies that only \(\left<\sigma^{x}_{t}\right>\) possesses a differential-equation limit as \(\Delta t\to 0\).
### Feedback
Finally, with both dissipation and feedback, different conditioned Liouvillians are applied between the measurements. Using the Bloch sphere representation \(\rho=\frac{1}{2}\left[\mbox{\boldmath$1$}+\mbox{\boldmath$r$}\cdot\mbox{ \boldmath$\sigma$}\right]\) with \({\left|\mbox{\boldmath$r$}\right|}\leq 1\) we can rephrase the fixed-point iteration for the density matrix as a fixed-point iteration for the expectation values of the Pauli matrices, see appendix C. Of these, only one has a continuum limit when \(\Delta t\to 0\)
\[\left<\dot{\sigma}^{x}_{t}\right> = -\frac{\gamma}{4}\left\{\lambda_{-}^{2}\left[3+\cos(2\theta_{-}) \right]+\lambda_{+}^{2}\left[3+\cos(2\theta_{+})\right]\right\}\left<\sigma^{x }_{t}\right>\] (14)
\[+\frac{\gamma}{4}\left\{\lambda_{-}^{2}\left[3+\cos(2\theta_{-}) \right]-\lambda_{+}^{2}\left[3+\cos(2\theta_{+})\right]\right\}\,,\]
whilst the other expectation values are continuously projected to zero. First we see that under feedback, the stationary solution \(\left<\bar{\sigma}^{x}\right>\) does not vanish despite the high-temperature limit. Remarkably, it is only weakly dependent on the interaction angles \(\theta_{\pm}\) but is much more sensitive to the ratio of the dampening constants \(\lambda_{\pm}\): The stationary state is purified when \(\lambda_{+}\gg\lambda_{-}\) (where \(\left<\sigma^{x}_{t}\right>\to-1\)) or \(\lambda_{+}\ll\lambda_{-}\) (where \(\left<\sigma^{x}_{t}\right>\to+1\)). The weak dependence on the dissipation direction demonstrates that small control errors (e.g. caused by further decoherence channels) have only small effects on purification efficiency. When the dampening rates differ strongly (strong feedback), the eigenstate of the measurement operator that has the smaller dampening rate becomes purified. The applicability of the effective description of measurement and feedback control by a fixed-point iteration can also be checked by comparing with averaging over many different trajectories, see Fig. 1.
<figure><img src="content_image/1203.4977/x1.png"><figcaption>Figure 1: (Color Online) Comparison of the effective evolution of the ⟨σxt⟩ expectation value under feedback control for finite measurement intervals (symbols) with an average of 10², 10³, and 10⁴ trajectories (thin dotted, dashed, and solid curves in lighter colors, respectively; single trajectories would decay at a different pace from ±1 towards 0 between measurements). In the continuous measurement limit (top bold curve), the final purity is maximal and would vanish without feedback. Parameters: λ+=1.0, λ−=5.0, Ω/γ=5.0, θ±=π/2.</figcaption></figure>
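For readers who want to reproduce the qualitative behavior of Fig. 1, the following sketch iterates the finite-\(\Delta t\) map for \(\left<\sigma^{x}\right>\) quoted in appendix C and compares its fixed point with the continuum-limit stationary value of Eq. (14). It is a minimal illustration, not the code used for the figure; the parameters follow the figure caption.

```python
import numpy as np

# Finite-measurement-interval iteration for <sigma^x> under feedback
# (high-temperature/wide-band limit; cf. Eq. (14) and appendix C).
gam, Omega = 1.0, 5.0          # gamma and Omega/gamma = 5 as in Fig. 1
lam_p, lam_m = 1.0, 5.0        # lambda_+ and lambda_-
th_p = th_m = np.pi / 2        # theta_+- = pi/2

def f(lam, th):
    return lam**2 * (3.0 + np.cos(2.0 * th))

def iterate(dt, steps):
    sx = 0.0                   # completely mixed initial state
    for _ in range(steps):
        d_p = np.exp(-gam * dt * f(lam_p, th_p) / 2.0)
        d_m = np.exp(-gam * dt * f(lam_m, th_m) / 2.0)
        sx = 0.5 * np.cos(Omega * dt) * ((d_p - d_m) + (d_p + d_m) * sx)
    return sx

# Continuum-limit stationary value from Eq. (14):
sx_bar = (f(lam_m, th_m) - f(lam_p, th_p)) / (f(lam_m, th_m) + f(lam_p, th_p))
for dt in [0.2, 0.05, 0.01]:
    print(dt, iterate(dt, int(50 / dt)), "   continuum limit:", sx_bar)
```

For coarse measurement intervals the fixed point lies well below the continuum value, which is the degradation visible in Fig. 1.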
Unless either \(\lambda_{-}\) or \(\lambda_{+}\) vanishes (which effectively introduces a decoherence-free subspace), the continuous measurement limit \(\Delta t\to 0\) is a necessary ingredient for purification. In that limit, however, purification is not specific to the chosen measurement direction.
A similar calculation can be performed with the Hamiltonian (II) for a general measurement direction \(M_{\pm}=\frac{1}{2}\left[\mbox{\boldmath$1$}\pm\mbox{\boldmath$n$}\cdot\mbox{\boldmath$\sigma$}\right]\) defined by \(\mbox{\boldmath$n$}=(\sin(\theta)\cos(\phi),\sin(\theta)\sin(\phi),\cos(\theta))\), see appendix D. Note that in this light, the Maxwell-demon type classical feedback in Ref. [31] appears as a special case in which the system Hamiltonian of a single-level quantum dot \(H_{\rm S}=\epsilon d^{\dagger}d\) (with fermionic operators) and the measurement operators for the empty, \(M_{\rm E}=\mbox{\boldmath$1$}-d^{\dagger}d\), and filled, \(M_{\rm F}=d^{\dagger}d\), dot states commute. For a general measurement direction one finds for our model that the purification direction coincides with the measurement direction. The modulus of the Bloch vector of the stationary state does not depend on \(\phi\) and \(\phi_{i}\)
\[\sum_{\alpha=1}^{3}\left<\bar{\sigma}^{\alpha}\right>^{2} = \frac{\left[\lambda_{+}^{2}f(\theta,\theta_{+})-\lambda_{-}^{2}f( \theta,\theta_{-})\right]^{2}}{\left[\lambda_{+}^{2}f(\theta,\theta_{+})+ \lambda_{-}^{2}f(\theta,\theta_{-})\right]^{2}}\] (15)
and only weakly on the remaining angles \(f(\theta,\theta_{i})=5-\cos(2\theta_{i})-\cos(2\theta)\left[1+3\cos(2\theta_{i })\right]\) (the special case \(\theta=\theta_{i}=0\) corresponds to trivial purification in the direction of \(H_{\rm S}\)). It is immediately evident that it approaches one (pure state) when one of the dampening coefficients \(\lambda_{\pm}\) vanishes or is much larger than the other.
## III Dissipative Entangling
For two qubits, one would be interested in a measurement scheme that leads to a purified entangled state. A projective (non-demolition) measurement of the Bell state \(\left|\Psi_{\rm Bell}\right>=\left[\left|00\right>+\left|11\right>\right]/\sqrt{2}\) corresponds to the two-local Bell (B) projection operator
\[M_{\rm B}=\left|\Psi_{\rm Bell}\right>\left<\Psi_{\rm Bell} \right|=\frac{1}{4}\left[\mbox{\boldmath$1$}+\sigma^{x}_{1}\sigma^{x}_{2}- \sigma^{y}_{1}\sigma^{y}_{2}+\sigma^{z}_{1}\sigma^{z}_{2}\right]\,.\] (16)
Assuming the measurement only to recognize the Bell state (two outcomes), the remaining (R) projector follows from the completeness relation as \(M_{\rm R}=\mbox{\boldmath$1$}-M_{\rm B}\). In the energy eigenbasis defined by the local system Hamiltonian
\[H_{\rm S}=\frac{\omega_{1}}{2}\sigma^{z}_{1}+\frac{\omega_{2}}{2 }\sigma^{z}_{2}\,,\] (17)
the \(16\times 16\) superoperator equivalents of \(M_{\rm B}\rho M_{\rm B}^{\dagger}\) and \(M_{\rm R}\rho M_{\rm R}^{\dagger}\) will not have a block structure between coherences and populations, see appendix E. For simplicity, we choose to change only the strength of the interaction, \(H_{\rm SB}=\lambda_{\rm B/R}\left(\sigma^{x}_{1}+\sigma^{x}_{2}\right)\otimes\sum_{k}\left(h_{k}b_{k}+h_{k}^{*}b_{k}^{\dagger}\right)\), and to keep the bath invariant as in Eq. (II). The resulting Liouvillians in Born-Markov-secular approximation decouple populations and coherences in the system energy eigenbasis and simplify strongly in the high-temperature limit \(\gamma_{\omega}=\gamma\) (where also the Lamb-shift vanishes), see appendix E. According to Eq. (7), the effective propagator becomes \({\cal P}_{\rm eff}(\Delta t)=e^{{\cal L}_{\rm B}\Delta t}{\cal M}_{\rm B}+e^{{\cal L}_{\rm R}\Delta t}{\cal M}_{\rm R}\) and defines a fixed-point iteration for the density matrix. Just as for a single qubit, one may parameterize the density matrix by Pauli matrices [32], \(\rho(t)=\sum_{\alpha\beta\in\{0,x,y,z\}}r^{\alpha\beta}(t)\sigma^{\alpha}_{1}\sigma^{\beta}_{2}\) with \(\sigma^{0}_{1/2}\equiv\mbox{\boldmath$1$}_{1/2}\) and \(r^{00}=1/4\). The generators \(\Sigma^{\alpha\beta}\equiv\sigma^{\alpha}_{1}\sigma^{\beta}_{2}\) of the group \(SU(4)\) are trace orthogonal, \({\rm Tr}\left\{\Sigma^{\alpha\beta}\Sigma^{\alpha^{\prime}\beta^{\prime}}\right\}=4\delta_{\alpha\alpha^{\prime}}\delta_{\beta\beta^{\prime}}\), which allows one to obtain a fixed-point iteration for the 15 nontrivial expectation values of generalized Pauli matrices, \(\left<\Sigma^{\alpha\beta}_{t}\right>=4r^{\alpha\beta}(t)\). The stationary state of the fixed-point iteration (also identifiable as the normalized eigenvector of the effective propagator in Eq. (7) with eigenvalue one) has, in the continuum limit \(\Delta t\to 0\), the only non-vanishing stationary values
\[\left<\bar{\Sigma}^{xx}\right>=-\left<\bar{\Sigma}^{yy}\right>=+ \left<\bar{\Sigma}^{zz}\right>=\frac{\lambda_{\rm R}^{2}-\lambda_{\rm B}^{2}}{ \lambda^{2}_{\rm R}+3\lambda^{2}_{\rm B}}\,,\] (18)
see also appendix F. Similarly, the pure Bell state is fully characterized by the only non-vanishing expectation values \(\left<\Sigma^{xx}\right>=-\left<\Sigma^{yy}\right>=\left<\Sigma^{zz}\right>=+1\). Eq. (18) corresponds to a stationary concurrence [33] and purity \(P={\rm Tr}\left\{\rho^{2}\right\}\) of
\[\bar{C} = \frac{\lambda^{2}_{\rm R}-3\lambda^{2}_{\rm B}}{\lambda^{2}_{\rm R }+3\lambda^{2}_{\rm B}}\Theta(\lambda^{2}_{\rm R}-3\lambda^{2}_{\rm B})\,,\;\; \bar{P}=\frac{\lambda_{\rm R}^{4}+3\lambda_{\rm B}^{4}}{\left(\lambda^{2}_{\rm R }+3\lambda^{2}_{\rm B}\right)^{2}}\] (19)
with \(\Theta(x)\) denoting the Heaviside step function. Thus, when the undesired parts of the density matrix are damped much more strongly than the entangled parts, \(\lambda_{\rm R}\gg\lambda_{\rm B}\), both concurrence and purity approach one, which demonstrates that the method may in principle create pure maximally entangled states. In contrast, without feedback (\(\lambda_{\rm R}=\lambda_{\rm B}\)), the stationary concurrence vanishes and the purity becomes that of a completely mixed two-qubit state. For finite measurement intervals \(\Delta t\), the fidelity of the state preparation is reduced compared to Eq. (19), see Fig. 2.
<figure><img src="content_image/1203.4977/x2.png"><figcaption>Figure 2: (Color Online) Effective evolution of the non-vanishing ⟨Σαβt⟩ expectation values under feedback control for finite measurement intervals (symbols) and in the continuum limit (solid curves). Parameters: λB=1, λR=5, ωAΔt=ωBΔt=0. A non-vanishing stationary concurrence (see inset extending Eq. (19) to finite measurement intervals, with contour lines ranging from 0.1 to 0.9 in steps of 0.1) requires short measurement intervals.</figcaption></figure>
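The stationary concurrence and purity of Eq. (19) can be cross-checked by reconstructing the Bell-diagonal stationary state from the expectation values of Eq. (18) and applying the Wootters formula [33] directly. The sketch below does this for illustrative values of \(\lambda_{\rm B}\) and \(\lambda_{\rm R}\); it is our own consistency check rather than part of the original derivation.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
id2 = np.eye(2, dtype=complex)

def stationary_state(lam_B, lam_R):
    """Two-qubit stationary state with the nonvanishing expectation values of Eq. (18)."""
    a = (lam_R**2 - lam_B**2) / (lam_R**2 + 3 * lam_B**2)
    return 0.25 * (np.kron(id2, id2)
                   + a * (np.kron(sx, sx) - np.kron(sy, sy) + np.kron(sz, sz)))

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    Y = np.kron(sy, sy)
    R = rho @ Y @ rho.conj() @ Y
    ev = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, ev[0] - ev[1] - ev[2] - ev[3])

lam_B, lam_R = 1.0, 5.0
rho = stationary_state(lam_B, lam_R)
C_eq19 = max(0.0, (lam_R**2 - 3 * lam_B**2) / (lam_R**2 + 3 * lam_B**2))
P_eq19 = (lam_R**4 + 3 * lam_B**4) / (lam_R**2 + 3 * lam_B**2)**2
print("concurrence:", concurrence(rho), "vs Eq. (19):", C_eq19)
print("purity     :", np.trace(rho @ rho).real, "vs Eq. (19):", P_eq19)
```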
A perfect stabilization of the Bell state therefore requires both the continuum limit where measurement, processing, and control are performed much faster than decoherence, and strong control \(\lambda_{\rm R}/\lambda_{\rm B}\to\infty\).
## IV Summary
For a piecewise-constant and instantaneous control, it is possible to explicitly include the measurement process in a quantum feedback scheme for the evolution of an open quantum system. However, unlike standard quantum feedback control, an effective description by a master equation is not possible anymore. Instead, the evolution may be described by a fixed-point iteration of the density matrix. As a very intuitive outcome, purification of eigenstates of the measurement operators can be achieved provided that unwanted parts of the density matrix are damped with a significantly larger rate than the desired ones. By construction, further decoherence channels simply enter the scheme as control errors, for which we have found only a mild effect on the purification efficiency. The scheme is suitable for high temperatures, but generally the time interval at which measurements are performed has to be much smaller than the temperature-dependent decoherence time corresponding to the conditioned Lindblad evolutions. Therefore, even when measurement-induced collapse and control are instantaneous, the purification efficiency will eventually be limited by the finite signal-processing speed, which for classical electronic processing requires decoherence times on the order of milliseconds.
To achieve significant purification without a decoherence-free subspace, it is important to note that the limitation to Markovian and weakly-coupled systems used in the derivation of the Lindblad dissipators can, in the continuous measurement limit, be overcome by deriving the trace- and positivity-preserving map via coarse-graining [24; 25], where the coarse-graining timescale is naturally set by the detector sampling interval \(\Delta t\). A further interesting avenue of research is how the quantum version of the Jarzynski equality [34; 35] is modified by feedback and non-projective measurements that do not commute with the system Hamiltonian.
## V Acknowledgments
Financial support by the DFG (SCHA 1646/2-1, SFB 910) and stimulating discussions with T. Brandes, C. Emary, G. Kiesslich, and H. Wiseman are gratefully acknowledged.
## References
* (1) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information_, Cambridge University Press, Cambridge (2000).
* (2) D. P. DiVincenzo, Fortschr. Phys. **48**, 771 (2000).
* (3) P. W. Shor, Phys. Rev. A **52**, 2493(R) (1995).
* (4) P. Zanardi and M. Rasetti, Phys. Rev. Lett. **79**, 3306 (1997).
* (5) L. Viola, E. Knill, and S. Lloyd, Phys. Rev. Lett. **82**, 2417 (1999).
* (6) F. Platzer, F. Mintert, and A. Buchleitner, Phys. Rev. Lett. **105**, 020501 (2010).
* (7) G. A. Paz-Silva, A. T. Rezakhani, J. M. Dominy, and D. A. Lidar, Phys. Rev. Lett. **108**, 080501 (2012).
* (8) B. Hwang, and H.-S. Goan, Phys. Rev. A **85**, 032321 (2012).
* (9) M. H. Rubin, J. Stat. Phys. **28**, 177 (1982).
* (10) J. F. Poyatos, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. **77**, 4728 (1996).
* (11) A. Beige, D. Braun, B. Tregenna, and P. L. Knight, Phys. Rev. Lett. **85**, 1762 (2000).
* (12) S. Lloyd and L. Viola, Phys. Rev. A **65**, 010101 (2001).
* (13) S. Diehl and A. Micheli and A. Kantian and B. Kraus and H. P. Büchler and P. Zoller, Nat. Phys. **4**, 878 (2008).
* (14) F. Verstraete, M. M. Wolf, and J. I. Cirac, Nat. Phys. **5**, 633 (2009).
* (15) K. G. H. Vollbrecht, C. Muschik and J. I. Cirac, Phys. Rev. Lett. **107**, 120502 (2011).
* (16) F. Ticozzi and L. Viola, Proc. Roy. Soc. A **in press**; arXiv:1112.4860 (2012).
* (17) K. Koga, and N. Yamamoto, Phys. Rev. A **85**, 022103 (2012).
* (18) H. M. Wiseman and G. J. Milburn, Phys. Rev. Lett. **70**, 548 (1993).
* (19) H. M. Wiseman, Phys. Rev. A **49**, 2133 (1994).
* (20) H. M. Wiseman and G. J. Milburn, _Quantum Measurement and Control_, Cambridge University Press (2010).
* (21) G. Kiesslich, G. Schaller, C. Emary, and T. Brandes, Phys. Rev. Lett. **107**, 050501 (2011).
* (22) C. Pöltl, C. Emary, and T. Brandes, Phys. Rev. B **84**, 085302 (2011).
* (23) D. A. Lidar and Z. Bihary and K. B. Whaley, Chem. Phys. **268**, 35 (2001).
* (24) G. Schaller and T. Brandes, Phys. Rev. A **78**, 022106 (2008).
* (25) G. Schaller, P. Zedler, and T. Brandes, Phys. Rev. A **79**, 032110 (2009).
* (26) S. Bolognani and F. Ticozzi, IEEE T. Automat. Contr. **55**, 2721 (2010).
* (27) H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_, Oxford University Press (2002).
* (28) U. Weiss, _Quantum Dissipative Systems_, World Scientific, Singapore (1993).
* (29) B. Misra and E. C. G. Sudarshan, J. Math. Phys. **18**, 756 (1977).
* (30) P. Facchi and S. Pascazio, Phys. Rev. Lett. **89**, 080401 (2002).
* (31) G. Schaller, C. Emary, G. Kiesslich, and T. Brandes, Phys. Rev. B **84**, 085418 (2011).
* (32) E. Brüning, H. Mäkelä, A. Messina, and F. Petruccione, J. Mod. Opt. **59**, 1 (2012).
* (33) W. K. Wootters, Phys. Rev. Lett. **80**, 2245 (1998).
* (34) T. Sagawa and M. Ueda, Phys. Rev. Lett. **104**, 090602 (2010).
* (35) V. Vedral, arXiv:1204.6168 (2012).
## Appendix A Jump measurement limit
Suppose we consider a two-level system under the continuous action of a vacuum reservoir and a measurement device that detects whether a quantum jump has occurred in the system during a time interval \(\Delta t\) (e.g. decay to the ground state via the emission of a photon). We assume that the decay of the two-level system from its excited state \(\left|1\right>\) to its ground state \(\left|0\right>\) is detected with unit efficiency and occurs with probability \(\gamma\Delta t\). A measurement of such quantum jumps requires a POVM description, since these measurements are non-projective. In addition, the measurement operators for the outcomes click (c) and no-click (nc) will depend on \(\Delta t\)
\[M_{\rm c}(\Delta t)=\sqrt{\gamma\Delta t}\left|0\right>\left<1 \right|\,,\qquad M_{\rm nc}(\Delta t)=\sqrt{1-\gamma\Delta t}\left|1\right> \left<1\right|+\left|0\right>\left<0\right|\,,\] (20)
which automatically obey the completeness relation \(M_{\rm c}^{\dagger}(\Delta t)M_{\rm c}(\Delta t)+M_{\rm nc}^{\dagger}(\Delta t)M_{\rm nc}(\Delta t)=\mbox{\boldmath$1$}\). They act on the density matrix as
\[M_{\rm c}(\Delta t)\rho M_{\rm c}^{\dagger}(\Delta t) = \gamma\Delta t\rho_{11}\left|0\right>\left<0\right|\,,\]
\[M_{\rm nc}(\Delta t)\rho M_{\rm nc}^{\dagger}(\Delta t) = \rho_{00}\left|0\right>\left<0\right|+(1-\gamma\Delta t)\rho_{11} \left|1\right>\left<1\right|+\sqrt{1-\gamma\Delta t}\rho_{01}\left|0\right> \left<1\right|+\sqrt{1-\gamma\Delta t}\rho_{10}\left|1\right>\left<0\right|\,.\] (21)
In the following, we will need their action for small \(\Delta t\) and therefore state
\[{\cal M}_{\rm c}(0)\rho \hat{=} M_{\rm c}(0)\rho M_{\rm c}^{\dagger}(0)=\mbox{\boldmath$0$}\,,\]
\[{\cal M}_{\rm nc}(0)\rho \hat{=} M_{\rm nc}(0)\rho M_{\rm nc}^{\dagger}(0)=\rho\,,\]
\[{\cal M}_{\rm c}^{\prime}(0)\rho \hat{=} \frac{d}{d\Delta t}\left[M_{\rm c}(\Delta t)\rho M_{\rm c}^{ \dagger}(\Delta t)\right]_{\Delta t=0}=\gamma\rho_{11}\left|0\right>\left<0 \right|\,,\]
\[{\cal M}_{\rm nc}^{\prime}(0)\rho \hat{=} \frac{d}{d\Delta t}\left[M_{\rm nc}(\Delta t)\rho M_{\rm nc}^{\dagger}(\Delta t)\right]_{\Delta t=0}=-\gamma\rho_{11}\left|1\right>\left<1\right|-\frac{\gamma}{2}\rho_{01}\left|0\right>\left<1\right|-\frac{\gamma}{2}\rho_{10}\left|1\right>\left<0\right|\,.\] (22)
The feedback scheme can now be defined by performing an instantaneous unitary transformation \(U_{\rm c}\) (experimentally approximated by a \(\delta\)-kick-like pulse on the system Hamiltonian) of the density matrix whenever a detector click has occurred. In addition, the system Hamiltonian (and possibly further unmonitored reservoirs) is included in the Lindblad superoperator \({\cal L}_{0}\). Then, the effective propagator becomes
\[{\cal P}(\Delta t)=e^{{\cal L}_{0}\Delta t}e^{\kappa_{\rm c}}{ \cal M}_{\rm c}(\Delta t)+e^{{\cal L}_{0}\Delta t}{\cal M}_{\rm nc}(\Delta t)\,,\] (23)
where \(e^{\kappa_{\rm c}}\rho\equiv U_{\rm c}\rho U_{\rm c}^{\dagger}\) and correspondence to Eq. (6) is established by putting \({\cal B}_{\rm c}(\Delta t)=e^{{\cal L}_{0}\Delta t}e^{\kappa_{\rm c}}\) and \({\cal B}_{\rm nc}(\Delta t)=e^{{\cal L}_{0}\Delta t}\). For small \(\Delta t\) (continuum limit), this allows one to define an effective master equation under jump detection and unitary control
\[\dot{\rho} = \lim\limits_{\Delta t\to 0}\frac{\rho(t+\Delta t)-\rho(t)}{\Delta t }=\lim\limits_{\Delta t\to 0}\frac{1}{\Delta t}\left[{\cal P}(\Delta t)-\mbox{ \boldmath$1$}\right]\rho(t)\] (24)
\[= \lim\limits_{\Delta t\to 0}\frac{1}{\Delta t}\left\{\left[e^{ \kappa_{\rm c}}{\cal M}_{\rm c}(0)+{\cal M}_{\rm nc}(0)-\mbox{\boldmath$1$} \right]+\Delta t\left[{\cal L}_{0}e^{\kappa_{\rm c}}{\cal M}_{\rm c}(0)+{\cal L }_{0}{\cal M}_{\rm nc}(0)+e^{\kappa_{\rm c}}{\cal M}_{\rm c}^{\prime}(0)+{\cal M }_{\rm nc}^{\prime}(0)\right]\right\}\rho(t)\]
\[= \left[+{\cal L}_{0}+e^{\kappa_{\rm c}}{\cal M}_{\rm c}^{\prime}(0 )+{\cal M}_{\rm nc}^{\prime}(0)\right]\rho(t)={\cal L}_{\rm fb}\rho(t)\hat{=}{ \cal L}_{\rm fb}[\rho(t)]\,,\]
where we have already used the superoperator equivalents given in Eq. (22). In the second line, a necessary ingredient for an effective master equation description becomes obvious: to remove the first term in square brackets, it is formally required that \(\sum_{m}{\cal B}_{m}(0){\cal M}_{m}(0)=\mbox{\boldmath$1$}\). Here, this is fulfilled since infinitely short measurements do not affect the density matrix. Finally, inserting also the derivatives of the measurement superoperators makes the Liouvillian of the effective master equation explicit
\[{\cal L}_{\rm fb}[\rho] = {\cal L}_{0}[\rho]+\gamma\left[U_{\rm c}\left|0\right>\left<1 \right|\rho\left|1\right>\left<0\right|U_{\rm c}^{\dagger}-\frac{1}{2}\left\{ \left|1\right>\left<1\right|,\rho\right\}\right]\] (25)
\[= {\cal L}_{0}[\rho]+\gamma\left[L_{\rm c}\rho L_{\rm c}^{\dagger}- \frac{1}{2}\left\{L_{\rm c}^{\dagger}L_{\rm c},\rho\right\}\right]\]
with \(L_{\rm c}=U_{\rm c}\left|0\right>\left<1\right|\), which obviously is a master equation of Lindblad type.
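To make the structure of Eq. (25) concrete, the following sketch assembles the feedback Liouvillian as a matrix on vectorized density matrices for the illustrative choice \(U_{\rm c}=\sigma^{x}\), with \({\cal L}_{0}\) taken to contain only the free Hamiltonian evolution. It checks that the generator is trace preserving and that, with this control unitary, the excited state becomes a fixed point (every detected decay is immediately undone), whereas without feedback it decays. This is a minimal sketch under the stated assumptions, not an implementation from the paper.

```python
import numpy as np

def lindblad_superop(H, jump_ops, rates):
    """Matrix of rho -> -i[H,rho] + sum_k r_k (A rho A^+ - {A^+A, rho}/2) on vec(rho) (column stacking)."""
    d = H.shape[0]
    I = np.eye(d, dtype=complex)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for r, A in zip(rates, jump_ops):
        AdA = A.conj().T @ A
        L += r * (np.kron(A.conj(), A) - 0.5 * (np.kron(I, AdA) + np.kron(AdA.T, I)))
    return L

Omega, gamma = 1.0, 0.2
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * Omega * sz
ket0 = np.array([[1.0], [0.0]], dtype=complex)
ket1 = np.array([[0.0], [1.0]], dtype=complex)

U_c = sx                                        # illustrative control unitary applied after each click
L_c = U_c @ ket0 @ ket1.conj().T                # L_c = U_c |0><1|, cf. Eq. (25)
L_fb = lindblad_superop(H, [L_c], [gamma])                      # with feedback
L_bare = lindblad_superop(H, [ket0 @ ket1.conj().T], [gamma])   # U_c = identity: plain decay

# Trace preservation: vec(identity) must be a left null vector of the generator.
print(np.allclose(np.eye(2).reshape(-1, order="F") @ L_fb, 0))            # True
# With U_c = sigma^x every detected decay is undone, so |1><1| is a fixed point;
# without feedback it is not (it decays).
rho_exc = (ket1 @ ket1.conj().T).reshape(-1, order="F")
print(np.allclose(L_fb @ rho_exc, 0), np.allclose(L_bare @ rho_exc, 0))   # True False
```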
## Appendix B Single-qubit Liouvillian
For a non-degenerate system Hamiltonian, the Born, Markov, and secular approximations generally lead to a decoupling of the evolution equations for populations and coherences in the system energy eigenbasis. For the single qubit with \(\Omega>0\), non-degeneracy is obviously fulfilled. Starting from Eq. (II) in the main text, the Liouvillians act on the density matrix elements \(\rho_{ij}=\left<i\right|\rho\left|j\right>\) as
\[\dot{\rho}_{00} = -\lambda_{\pm}^{2}\sin^{2}(\theta_{\pm})\gamma_{+\Omega}\rho_{00} +\lambda_{\pm}^{2}\sin^{2}(\theta_{\pm})\gamma_{-\Omega}\rho_{11}\,,\]
\[\dot{\rho}_{11} = +\lambda_{\pm}^{2}\sin^{2}(\theta_{\pm})\gamma_{+\Omega}\rho_{00} -\lambda_{\pm}^{2}\sin^{2}(\theta_{\pm})\gamma_{-\Omega}\rho_{11}\,,\]
\[\dot{\rho}_{01} = \left[-{\rm i}\Omega-\frac{\lambda_{\pm}^{2}}{2}\sin^{2}(\theta_{\pm})\left(\gamma_{+\Omega}+\gamma_{-\Omega}+\sigma_{+\Omega}-\sigma_{-\Omega}\right)-2\lambda_{\pm}^{2}\cos^{2}(\theta_{\pm})\gamma_{0}\right]\rho_{01}\,,\]
\[\dot{\rho}_{10} = \left[+{\rm i}\Omega-\frac{\lambda_{\pm}^{2}}{2}\sin^{2}(\theta_{\pm})\left(\gamma_{+\Omega}+\gamma_{-\Omega}-\sigma_{+\Omega}+\sigma_{-\Omega}\right)-2\lambda_{\pm}^{2}\cos^{2}(\theta_{\pm})\gamma_{0}\right]\rho_{10}\,,\] (26)
where \(\gamma_{\omega}\) is the even (real-valued) Fourier transform of the bath correlation function
\[C(\tau) = \left<e^{+{\rm i}H_{\rm B}\tau}Be^{-{\rm i}H_{\rm B}\tau}B\right> =\sum_{kk^{\prime}}\left<\left[h_{k}b_{k}e^{-{\rm i}\omega_{k}\tau}+h_{k}^{*}b _{k}^{\dagger}e^{+{\rm i}\omega_{k}\tau}\right]\left[h_{k^{\prime}}b_{k^{ \prime}}+h_{k^{\prime}}^{*}b_{k^{\prime}}^{\dagger}\right]\right>\] (27)
\[= \sum_{k}{\left|h_{k}\right|}^{2}\left[e^{-{\rm i}\omega_{k}\tau} \left[1+n_{\rm B}(\omega_{k})\right]+e^{+{\rm i}\omega_{k}\tau}n_{\rm B}( \omega_{k})\right]=\frac{1}{2\pi}\int\limits_{0}^{\infty}J(\omega)\left[e^{-{ \rm i}\omega\tau}\left[1+n_{\rm B}(\omega)\right]+e^{+{\rm i}\omega\tau}n_{\rm B }(\omega)\right]d\omega\]
\[= \frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}J(\omega)\left[1+n_{ \rm B}(\omega)\right]e^{-{\rm i}\omega\tau}d\omega\]
as defined in the main text and similarly the imaginary quantity
\[\sigma_{\omega}=\int\left<e^{+{\rm i}H_{\rm B}\tau}Be^{-{\rm i}H_ {\rm B}\tau}B\right>e^{+{\rm i}\omega\tau}{\rm sgn}\left(\tau\right)d\tau= \frac{{\rm i}}{\pi}{\cal P}\int\frac{\gamma_{\bar{\omega}}}{\omega-\bar{\omega }}d\bar{\omega}\] (28)
denotes its odd Fourier transform, which may also be obtained from the even one via a Cauchy principal value integral. The odd Fourier transform is associated with the Lamb-shift terms that account for an energy renormalization of the qubit. Note that in the high-temperature and wide-band limits that we focus on, \(\gamma_{\omega}\) is approximately flat, such that \(\sigma_{\omega}\approx 0\). Then, the stationary state of both Liouvillians is the completely mixed one with \(\bar{\rho}_{00}=\bar{\rho}_{11}=1/2\) and \(\bar{\rho}_{01}=\bar{\rho}_{10}=0\). However, \(\sigma_{\omega}\) is purely imaginary in any case, which for the evolution of the absolute square of coherences implies Eq. (10) in the main text. Similarly, we obtain thermalization from the evolution of populations by invoking the KMS condition.
## Appendix C Bloch sphere iteration
The Bloch sphere representation of the density matrix \(\rho=\frac{1}{2}\left[\mbox{\boldmath$1$}+\mbox{\boldmath$r$}\cdot\mbox{ \boldmath$\sigma$}\right]\) implies \(\left<\sigma^{\alpha}\right>=r^{\alpha}\) or alternatively for the density vector the representation
\[\rho\hat{=}\left(\begin{array}{c}\rho_{00}\\ \rho_{11}\\ \rho_{01}\\ \rho_{10}\end{array}\right)=\frac{1}{2}\left(\begin{array}{c}1+\left<\sigma^{z}\right>\\ 1-\left<\sigma^{z}\right>\\ \left<\sigma^{x}\right>-{\rm i}\left<\sigma^{y}\right>\\ \left<\sigma^{x}\right>+{\rm i}\left<\sigma^{y}\right>\end{array}\right)\,,\] (29)
which can be inserted in the fixed-point iteration for the density matrix \(\rho(t+\Delta t)={\cal P}_{\rm eff}(\Delta t)\rho(t)\) to yield iteration equations for the expectation values of Pauli matrices
\[\left<\sigma^{x}_{t+\Delta t}\right> = \frac{\cos(\Omega\Delta t)}{2}\left(e^{-\gamma\Delta t\lambda_{+} ^{2}[3+\cos(2\theta_{+})]/2}-e^{-\gamma\Delta t\lambda_{-}^{2}[3+\cos(2\theta_ {-})]/2}\right)\]
\[+\frac{\cos(\Omega\Delta t)}{2}\left(e^{-\gamma\Delta t\lambda_{+ }^{2}[3+\cos(2\theta_{+})]/2}+e^{-\gamma\Delta t\lambda_{-}^{2}[3+\cos(2\theta _{-})]/2}\right)\left<\sigma^{x}_{t}\right>\,,\]
\[\left<\sigma^{y}_{t+\Delta t}\right> = \frac{\sin(\Omega\Delta t)}{2}\left(e^{-\gamma\Delta t\lambda_{+} ^{2}[3+\cos(2\theta_{+})]/2}-e^{-\gamma\Delta t\lambda_{-}^{2}[3+\cos(2\theta_ {-})]/2}\right)\]
\[+\frac{\sin(\Omega\Delta t)}{2}\left(e^{-\gamma\Delta t\lambda_{+ }^{2}[3+\cos(2\theta_{+})]/2}+e^{-\gamma\Delta t\lambda_{-}^{2}[3+\cos(2\theta _{-})]/2}\right)\left<\sigma^{x}_{t}\right>\,,\]
\[\left<\sigma^{z}_{t+\Delta t}\right> = 0\,.\] (30)
The \(\left<\sigma^{z}\right>\) expectation value does not recover from zero after the first projection, since the chosen system Hamiltonian is proportional to \(\sigma^{z}\), which only generates rotations between \(\sigma^{x}\) and \(\sigma^{y}\). These equations recover the no-feedback limit (\(\lambda_{\pm}=\lambda\) and \(\theta_{\pm}=\theta\)) discussed in the main text and, in the continuum limit, Eq. (14) of the main text.
## Appendix D Most general measurement direction
For more general projective measurements \(M_{\pm}=\frac{1}{2}\left[\mbox{\boldmath$1$}\pm\mbox{\boldmath$n$}\cdot\mbox{ \boldmath$\sigma$}\right]=M_{\pm}^{\dagger}\), their action on the density vector in ordering \(\rho=(\rho_{00},\rho_{11},\rho_{01},\rho_{10})^{\rm T}\) is given by
\[{\cal M}_{\pm} = \frac{1}{4}\left(\begin{array}{cccc}(1\pm\cos\theta)^{2}&\sin^{2}\theta&\pm(1\pm\cos\theta)\sin\theta\,e^{+{\rm i}\phi}&\pm(1\pm\cos\theta)\sin\theta\,e^{-{\rm i}\phi}\\ \sin^{2}\theta&(1\mp\cos\theta)^{2}&\pm(1\mp\cos\theta)\sin\theta\,e^{+{\rm i}\phi}&\pm(1\mp\cos\theta)\sin\theta\,e^{-{\rm i}\phi}\\ \pm(1\pm\cos\theta)\sin\theta\,e^{-{\rm i}\phi}&\pm(1\mp\cos\theta)\sin\theta\,e^{-{\rm i}\phi}&\sin^{2}\theta&\sin^{2}\theta\,e^{-2{\rm i}\phi}\\ \pm(1\pm\cos\theta)\sin\theta\,e^{+{\rm i}\phi}&\pm(1\mp\cos\theta)\sin\theta\,e^{+{\rm i}\phi}&\sin^{2}\theta\,e^{+2{\rm i}\phi}&\sin^{2}\theta\end{array}\right)\,.\] (31)
One can easily check the orthogonal projector properties \({\cal M}_{\pm}^{2}={\cal M}_{\pm}\) and \({\cal M}_{\pm}{\cal M}_{\mp}=\mbox{\boldmath$0$}\). In addition, the special case of Eq. (4) in the main text is obtained by putting \(\theta=\pi/2\) and \(\phi=0\). Inserting the corresponding effective propagator into the fixed-point iteration for the Pauli matrix expectation values one now observes that all expectation values couple to each other. Solving for the stationary state yields Eq. (15) of the main text. Naturally, Eq. (15) is also compatible with the stationary state of Eq. (14) when \(\theta=\pi/2\) and \(\phi=0\).
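The projector properties quoted above are easy to verify numerically. The sketch below (our own illustration, with arbitrary test angles) constructs \(M_{\pm}=\frac{1}{2}\left[\mbox{\boldmath$1$}\pm\mbox{\boldmath$n$}\cdot\mbox{\boldmath$\sigma$}\right]\), builds the corresponding superoperators in the ordering \((\rho_{00},\rho_{11},\rho_{01},\rho_{10})\), and checks \({\cal M}_{\pm}^{2}={\cal M}_{\pm}\), \({\cal M}_{\pm}{\cal M}_{\mp}=\mbox{\boldmath$0$}\) as well as the reduction to the \(\sigma^{x}\) measurement for \(\theta=\pi/2\), \(\phi=0\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def M(sign, theta, phi):
    n_sigma = (np.sin(theta) * np.cos(phi) * sx
               + np.sin(theta) * np.sin(phi) * sy
               + np.cos(theta) * sz)
    return 0.5 * (np.eye(2, dtype=complex) + sign * n_sigma)

def superop(M_op):
    """4x4 matrix of rho -> M rho M^+ in the ordering (rho00, rho11, rho01, rho10)."""
    idx = [(0, 0), (1, 1), (0, 1), (1, 0)]
    S = np.zeros((4, 4), dtype=complex)
    for col, (k, l) in enumerate(idx):          # feed in the basis matrix |k><l|
        E = np.zeros((2, 2), dtype=complex)
        E[k, l] = 1.0
        out = M_op @ E @ M_op.conj().T
        for row, (i, j) in enumerate(idx):
            S[row, col] = out[i, j]
    return S

theta, phi = 0.7, 1.3                            # arbitrary test angles
Mp, Mm = superop(M(+1, theta, phi)), superop(M(-1, theta, phi))
print(np.allclose(Mp @ Mp, Mp), np.allclose(Mm @ Mm, Mm), np.allclose(Mp @ Mm, 0))   # True True True
# Special case theta = pi/2, phi = 0 (measurement of sigma^x): every entry of M_+ equals 1/4.
print(np.round(superop(M(+1, np.pi / 2, 0.0)).real, 3))
```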
## Appendix E Two-qubit Liouvillian
In the energy eigenbasis of the two qubits \(\left|00\right>\), \(\left|01\right>\), \(\left|10\right>\), \(\left|11\right>\), the Born-, Markov-, and secular approximations again lead to a decoupled evolution of populations and coherences. The populations (\(\rho_{ij,k\ell}=\left<ij\right|\rho\left|k\ell\right>\)) evolve according to
\[\dot{\rho}_{00,00} = -(\lambda_{\rm B/R}^{2}\gamma_{+\omega_{1}}+\lambda_{\rm B/R}^{2} \gamma_{+\omega_{2}})\rho_{00,00}+\lambda_{\rm B/R}^{2}\gamma_{-\omega_{2}} \rho_{01,01}+\lambda_{\rm B/R}^{2}\gamma_{-\omega_{1}}\rho_{10,10}\,,\]
\[\dot{\rho}_{01,01} = +\lambda_{\rm B/R}^{2}\gamma_{+\omega_{2}}\rho_{00,00}-(\lambda_{ \rm B/R}^{2}\gamma_{+\omega_{1}}+\lambda_{\rm B/R}^{2}\gamma_{-\omega_{2}}) \rho_{01,01}+\lambda_{\rm B/R}^{2}\gamma_{-\omega_{1}}\rho_{11,11}\,,\]
\[\dot{\rho}_{10,10} = +\lambda_{\rm B/R}^{2}\gamma_{+\omega_{1}}\rho_{00,00}-(\lambda_{ \rm B/R}^{2}\gamma_{-\omega_{1}}+\lambda_{\rm B/R}^{2}\gamma_{+\omega_{2}}) \rho_{10,10}+\lambda_{\rm B/R}^{2}\gamma_{-\omega_{2}}\rho_{11,11}\,,\]
\[\dot{\rho}_{11,11} = +\lambda_{\rm B/R}^{2}\gamma_{+\omega_{1}}\rho_{01,01}+\lambda_{ \rm B/R}^{2}\gamma_{+\omega_{2}}\rho_{10,10}-(\lambda_{\rm B/R}^{2}\gamma_{- \omega_{1}}+\lambda_{\rm B/R}^{2}\gamma_{-\omega_{2}})\rho_{11,11}\] (32)
and the coherences decay independently (omitted for brevity). By invoking the KMS condition \(\gamma_{-\omega}=e^{-\beta\omega}\gamma_{+\omega}\), it is immediately obvious that the stationary state is the thermalized one. In the high-temperature limit considered in the paper, the stationary state of either conditioned Liouvillian is just the completely mixed state. In this limit, the coherences decay according to
\[\dot{\rho}_{00,01} = \left(-{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {00,01}+\lambda_{\rm B/R}^{2}\gamma\rho_{10,11}\,,\]
\[\dot{\rho}_{00,10} = \left(-{\rm i}\omega_{1}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {00,10}+\lambda_{\rm B/R}^{2}\gamma\rho_{01,11}\,,\]
\[\dot{\rho}_{00,11} = \left(-{\rm i}\omega_{1}-{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2} \gamma\right)\rho_{00,11}\,,\]
\[\dot{\rho}_{01,00} = \left(+{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {01,00}+\lambda_{\rm B/R}^{2}\gamma\rho_{11,10}\,,\]
\[\dot{\rho}_{01,10} = \left(-{\rm i}\omega_{1}+{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2} \gamma\right)\rho_{01,10}\,,\]
\[\dot{\rho}_{01,11} = \left(-{\rm i}\omega_{1}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {01,11}+\lambda_{\rm B/R}^{2}\gamma\rho_{00,10}\,,\]
\[\dot{\rho}_{10,00} = \left(+{\rm i}\omega_{1}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {10,00}+\lambda_{\rm B/R}^{2}\gamma\rho_{11,01}\,,\]
\[\dot{\rho}_{10,01} = \left(+{\rm i}\omega_{1}-{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2} \gamma\right)\rho_{10,01}\,,\]
\[\dot{\rho}_{10,11} = \left(-{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {10,11}+\lambda_{\rm B/R}^{2}\gamma\rho_{00,01}\,,\]
\[\dot{\rho}_{11,00} = \left(+{\rm i}\omega_{1}+{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2} \gamma\right)\rho_{11,00}\,,\]
\[\dot{\rho}_{11,01} = \left(+{\rm i}\omega_{1}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {11,01}+\lambda_{\rm B/R}^{2}\gamma\rho_{10,00}\,,\]
\[\dot{\rho}_{11,10} = \left(+{\rm i}\omega_{2}-2\lambda_{\rm B/R}^{2}\gamma\right)\rho_ {11,10}+\lambda_{\rm B/R}^{2}\gamma\rho_{01,00}\,,\] (33)
where coherences may couple among themselves provided they belong to superpositions of states with equal energy differences. Inserting \(\gamma_{\omega}\to\gamma\) also in the equations for the populations yields together with the sparse \(16\times 16\) measurement superoperators
\[{\cal M}_{B} = \frac{1}{4}\left(\begin{array}[]{cccccccccccccccc}1&0&0&1&0&0&1&0 &0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&0&1&0&0&1&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&0&1&0&0&1&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&0&1&0&0&1&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\end{array}\right)\] (34)
and
\[{\cal M}_{R} = \frac{1}{4}\left(\begin{array}[]{cccccccccccccccc}1&0&0&1&0&0&-1& 0&0&0&0&0&0&-1&0&0\\ 0&4&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&4&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&0&1&0&0&-1&0&0&0&0&0&0&-1&0&0\\ 0&0&0&0&2&0&0&0&0&0&0&0&0&0&-2&0\\ 0&0&0&0&0&2&0&0&0&0&0&0&0&0&0&-2\\ -1&0&0&-1&0&0&1&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&2&0&-2&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&4&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&-2&0&2&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&2&0&-2&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&4&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&-2&0&2&0&0&0\\ -1&0&0&-1&0&0&1&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&-2&0&0&0&0&0&0&0&0&0&2&0\\ 0&0&0&0&0&-2&0&0&0&0&0&0&0&0&0&2\end{array}\right)\] (35)
acting from the left on density vectors arranged as
\(\Big{(}\rho_{00,00},\rho_{01,01},\rho_{10,10},\rho_{11,11},\rho_{00,01},\rho_{ 00,10},\rho_{00,11},\rho_{01,00},\rho_{01,10},\rho_{01,11},\rho_{10,00},\rho_{ 10,01},\rho_{10,11},\rho_{11,00},\rho_{11,01},\rho_{11,10}\Big{)}^{\rm T}\), the effective propagator for the fixed-point iteration of the density matrix.
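The sparse structure of Eqs. (34) and (35) can be reproduced directly from the definitions of \(M_{\rm B}\) and \(M_{\rm R}\). The following sketch (our own cross-check, using the basis ordering \(\left|00\right>,\left|01\right>,\left|10\right>,\left|11\right>\)) builds both superoperators in the density-vector ordering given above, verifies idempotency and orthogonality, and prints the first row of \({\cal M}_{\rm B}\), which coincides with the first row of Eq. (34).

```python
import numpy as np

kets = np.eye(4, dtype=complex)                     # |00>, |01>, |10>, |11>
bell = (kets[0] + kets[3]) / np.sqrt(2.0)
M_B = np.outer(bell, bell.conj())                   # Eq. (16)
M_R = np.eye(4) - M_B

# density-matrix index ordering used in appendix E
order = [(0, 0), (1, 1), (2, 2), (3, 3),
         (0, 1), (0, 2), (0, 3), (1, 0), (1, 2), (1, 3),
         (2, 0), (2, 1), (2, 3), (3, 0), (3, 1), (3, 2)]

def superop(M):
    S = np.zeros((16, 16), dtype=complex)
    for col, (k, l) in enumerate(order):            # feed in the basis matrix |k><l|
        E = np.zeros((4, 4), dtype=complex)
        E[k, l] = 1.0
        out = M @ E @ M.conj().T
        for row, (i, j) in enumerate(order):
            S[row, col] = out[i, j]
    return S

SB, SR = superop(M_B), superop(M_R)
print(np.allclose(SB @ SB, SB), np.allclose(SR @ SR, SR), np.allclose(SB @ SR, 0))   # True True True
# First row: 1/4 at the columns of rho_{00,00}, rho_{11,11}, rho_{00,11}, rho_{11,00}, zero elsewhere.
print(np.round(SB[0].real, 2))
```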
## Appendix F Stationary state of two-qubit fixed-point iteration
Mapping the fixed-point iteration for the density matrix to the expectation values of \(\left<\Sigma^{\alpha\beta}\right>\), one obtains lengthy expressions for the 15 nontrivial expectation values: \(\left<\sigma^{x}_{2}\right>\), \(\left<\sigma^{y}_{2}\right>\), \(\left<\sigma^{z}_{2}\right>\), \(\left<\sigma^{x}_{1}\right>\), \(\left<\sigma^{y}_{1}\right>\), \(\left<\sigma^{z}_{1}\right>\), \(\left<\sigma^{x}_{1}\sigma^{x}_{2}\right>\), \(\left<\sigma^{x}_{1}\sigma^{y}_{2}\right>\), \(\left<\sigma^{x}_{1}\sigma^{z}_{2}\right>\), \(\left<\sigma^{y}_{1}\sigma^{x}_{2}\right>\), \(\left<\sigma^{y}_{1}\sigma^{y}_{2}\right>\), \(\left<\sigma^{y}_{1}\sigma^{z}_{2}\right>\), \(\left<\sigma^{z}_{1}\sigma^{x}_{2}\right>\), \(\left<\sigma^{z}_{1}\sigma^{y}_{2}\right>\), and \(\left<\sigma^{z}_{1}\sigma^{z}_{2}\right>\). These can be solved for the stationary state. Alternatively, the stationary state of the iteration can be obtained directly by looking at the trace-normalized eigenvector of the effective propagator with eigenvalue one. For finite \(\Delta t\), it is characterized by vanishing expectation values of all local operators. The non-vanishing stationary expectation values read
\[\left<\sigma^{x}_{1}\sigma^{y}_{2}\right> = \frac{4e^{3(\Lambda_{\rm B}+\Lambda_{\rm R})}\sinh\left[\Lambda_{ \rm B}-\Lambda_{\rm R}\right]\sinh\left[2\Lambda_{\rm R}\right]\sin(\Omega)}{e ^{6\Lambda_{\rm R}}+e^{4\Lambda_{\rm B}+2\Lambda_{\rm R}}\left(3-4e^{4\Lambda_ {\rm R}}\right)+\left(e^{2\Lambda_{\rm B}}+e^{2\Lambda_{\rm R}}\right)\left(2e ^{2\Lambda_{\rm B}+4\Lambda_{\rm R}}-e^{2\Lambda_{\rm B}}-e^{2\Lambda_{\rm R}} \right)\cos(\Omega)}\,,\]
\[\left<\sigma^{x}_{1}\sigma^{x}_{2}\right> = \frac{4e^{3(\Lambda_{\rm B}+\Lambda_{\rm R})}\sinh\left[\Lambda_{ \rm B}-\Lambda_{\rm R}\right]\sinh\left[2\Lambda_{\rm R}\right]\cos(\Omega)}{e ^{6\Lambda_{\rm R}}+e^{4\Lambda_{\rm B}+2\Lambda_{\rm R}}\left(3-4e^{4\Lambda_ {\rm R}}\right)+\left(e^{2\Lambda_{\rm B}}+e^{2\Lambda_{\rm R}}\right)\left(2e ^{2\Lambda_{\rm B}+4\Lambda_{\rm R}}-e^{2\Lambda_{\rm B}}-e^{2\Lambda_{\rm R}} \right)\cos(\Omega)}\,,\]
\[\left<\sigma^{z}_{1}\sigma^{z}_{2}\right> =\]
\[\left<\sigma^{y}_{1}\sigma^{x}_{2}\right> = \left<\sigma^{x}_{1}\sigma^{y}_{2}\right>\,,\qquad\left<\sigma^{y }_{1}\sigma^{y}_{2}\right>=-\left<\sigma^{x}_{1}\sigma^{x}_{2}\right>\,,\] (36)
where we have used the dimensionless quantities \(\Omega\equiv(\omega_{1}+\omega_{2})\Delta t\), \(\Lambda_{\rm B}\equiv\gamma\Delta t\lambda_{\rm B}^{2}\), and \(\Lambda_{\rm R}\equiv\gamma\Delta t\lambda_{\rm R}^{2}\). In the limit \(\Delta t\to 0\), this recovers Eq. (18) in the main text. Even for finite \(\Delta t\), however, the stationary density matrix is given by just three independent parameters, \(\alpha_{1}=\left<\sigma^{x}_{1}\sigma^{y}_{2}\right>=\left<\sigma^{y}_{1}\sigma^{x}_{2}\right>\), \(\alpha_{2}=\left<\sigma^{x}_{1}\sigma^{x}_{2}\right>=-\left<\sigma^{y}_{1}\sigma^{y}_{2}\right>\), and \(\alpha_{3}=\left<\sigma^{z}_{1}\sigma^{z}_{2}\right>\), for which one may also express concurrence and purity analytically, generalizing Eq. (19) in the main text (not shown).
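As a consistency check of Eq. (36), the sketch below evaluates the closed-form \(\left<\sigma^{x}_{1}\sigma^{x}_{2}\right>\) for decreasing measurement intervals and compares it with the continuum-limit value of Eq. (18); the couplings and frequencies are illustrative choices of our own.

```python
import numpy as np

def sigxx(dt, lam_B, lam_R, gamma, w1, w2):
    """<sigma^x_1 sigma^x_2> from Eq. (36) for a finite measurement interval dt."""
    O = (w1 + w2) * dt
    LB, LR = gamma * dt * lam_B**2, gamma * dt * lam_R**2
    num = 4 * np.exp(3 * (LB + LR)) * np.sinh(LB - LR) * np.sinh(2 * LR) * np.cos(O)
    den = (np.exp(6 * LR) + np.exp(4 * LB + 2 * LR) * (3 - 4 * np.exp(4 * LR))
           + (np.exp(2 * LB) + np.exp(2 * LR))
             * (2 * np.exp(2 * LB + 4 * LR) - np.exp(2 * LB) - np.exp(2 * LR)) * np.cos(O))
    return num / den

lam_B, lam_R, gamma, w1, w2 = 1.0, 5.0, 1.0, 1.0, 1.5
limit_eq18 = (lam_R**2 - lam_B**2) / (lam_R**2 + 3 * lam_B**2)
for dt in [1e-1, 1e-2, 1e-3]:
    print(dt, sigxx(dt, lam_B, lam_R, gamma, w1, w2), "   Eq. (18):", limit_eq18)
```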
|
1008.3123 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 129428,
"num_imgs": 0,
"llama3_tokens_count": 36561
} | [] | Much of the literature on rational cryptography focuses on analyzing the strategic properties of cryptographic protocols. However, due to the presence of computationally-bounded players and the asymptotic nature of cryptographic security, a definition of sequential rationality for this setting has thus far eluded researchers.
We propose a new framework for overcoming these obstacles, and provide the first definitions of computational solution concepts that guarantee sequential rationality. We argue that natural computational variants of subgame perfection are too strong for cryptographic protocols. As an alternative, we introduce a weakening called threat-free Nash equilibrium that is more permissive but still eliminates the undesirable “empty threats” of non-sequential solution concepts.
To demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria (Dodis-Halevi-Rabin, Crypto’00), and propose a variant of their protocol that is sequentially rational for a non-trivial class of correlated equilibria. Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.
**Keywords:** rational cryptography, Nash equilibrium, subgame perfect equilibrium, sequential rationality, cryptographic protocols, correlated equilibrium
###### Contents
* 1 Introduction
  * 1.1 Computational Nash Equilibrium
  * 1.2 Computational Subgame Perfection
* 2 Our Results
  * 2.1 Threat-Free Nash Equilibria
  * 2.2 Strategy-Filters and Tractable Strategies
  * 2.3 Applications
  * 2.4 Related Work
  * 2.5 Future Work
* 3 Game Theory Definitions
  * 3.1 Extensive Games
  * 3.2 Nash Equilibrium
  * 3.3 Subgame Perfect Equilibrium
  * 3.4 Constrained Games
* 4 Threat-Free Nash Equilibrium
  * 4.1 A New Solution Concept
  * 4.2 Vanilla Version
  * 4.3 Round-Parameterized Version
* 5 The Computational Setting
  * 5.1 Protocols as Sequences of Games
  * 5.2 Strategic Representation of Interactive Machines
    * 5.2.1 \(\varepsilon\)-TFNE for Reduced Strategies
  * 5.3 Computational Hardness in the Game-Theoretic Setting
    * 5.3.1 Strategy-filters
    * 5.3.2 Tractable Reduced Strategies
  * 5.4 Computational TFNE
* 6 The Coin-Flipping Game
* 7 Correlated Equilibria Without a Mediator
  * 7.1 The Dodis-Halevi-Rabin Protocol
  * 7.2 TFNE for Games with Simultaneous Moves at the Leaves
  * 7.3 Our Protocol
* 8 A General Theorem
* Acknowledgments
* A One-way Functions and Commitment Schemes
## 1 Introduction
A recent line of research has considered replacing the traditional cryptographic modeling of adversaries with a game-theoretic one. Rather than assuming arbitrary _malicious_ behavior, participants are viewed as being self-interested, _rational_ entities that wish to maximize their own profit, and that would deviate from a protocol’s prescribed instructions if and only if it is in their best interest to do so.
Such game theoretic modeling is expected to facilitate the task of protocol design, since rational behavior may be easier to handle than malicious behavior. It also has the advantage of being more realistic in that it does not assume that some of the parties honestly follow the protocol’s instructions, as is frequently done in cryptography.
The interplay between cryptography and game theory can also be beneficial to the latter. For instance, using tools from secure computation, it has been shown how to transform games in the mediated model into games in the unmediated model.
But regardless of whether one analyzes cryptographic protocols from a game theoretic perspective or whether one uses protocols to enhance game theory, it is clear that the results are meaningful only if one provides an adequate framework for such analyses.
### Computational Nash Equilibrium
Applying game-theoretic reasoning in a cryptographic context consists of modeling interaction as a _game_, and designing a protocol that is in _equilibrium_. The game specifies the model of interaction, as well as the utilities of the various players as a function of the game’s outcome. The protocol lays out a specific plan of action for each player, with the goal of realizing some pre-specified task. Once a protocol has been shown to be in equilibrium, rational players are expected to follow it, thus reaching the desired outcome.
A key difficulty in applying game-theoretic reasoning to the analysis of cryptographic protocols stems from the latter’s use of computational infeasibility. Whereas game theory places no bounds on the computational ability of players, in cryptography it is typically assumed that players are computationally bounded. Thus, in order to retain the meaningfulness of cryptographic protocols, it is imperative to restrict the set of strategies that are available to protocol participants. This gives rise to a natural analog of Nash equilibrium (NE), referred to as _computational Nash equilibrium_ (CNE): any polynomial-time computable deviation of a player from the specified protocol can improve her utility by only a negligible amount (assuming other players stick to the prescribed strategy).
Consider, for example, the following (two-stage, zero-sum) game (related to a game studied by Ben-Sasson et al. [4] and Fortnow and Santhanam [7]), which postulates the existence of a one-way permutation \(f:\{0,1\}^{n}\mapsto\{0,1\}^{n}\).
**Example 1.1**: **(One-way permutation game):**
1. \(P_{1}\) chooses some \(x\in\{0,1\}^{n}\), and sends \(f(x)\).
2. \(P_{2}\) sends a message \(z\in\{0,1\}^{n}\).
3. \(P_{2}\) wins (gets payoff 1) if \(z=x\) (and gets -1 otherwise).
In classical game theory, in all NE of this game \(P_{2}\) wins, since there always exists some \(z\) such that \(z=x\). However, in the computational setting, the following is a CNE: both players choose their messages uniformly at random (resulting in an expected loss for \(P_{2}\)). This is true because if \(P_{2}\) chooses \(z\) at random, then \(P_{1}\) can never improve his payoff by not choosing at random. If \(P_{1}\) chooses \(x\) at random, then by the definition of a one-way permutation, any computationally-bounded strategy \(\sigma_{2}\) of \(P_{2}\) will be able to guess the value of \(x\) with at most negligible (in \(n\)) probability. Thus, the expected utility of \(P_{2}\) using \(\sigma_{2}\) is negligible, and so he loses at most that much by sticking to his CNE strategy (i.e. picking some \(z\) at random).
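The payoff estimate underlying this computational Nash equilibrium can be illustrated with a toy simulation. In the sketch below, a simple invertible affine map stands in for the one-way permutation \(f\) (it is, of course, not one-way; it only fixes the protocol messages), and both players follow the prescribed random strategies. The toy parameter \(n\) and the particular map are our own illustrative choices.

```python
import random

n = 16                                    # toy security parameter (real instances use much larger n)
N = 1 << n

def f(x):
    # Affine map with an odd multiplier: a bijection on {0, ..., 2^n - 1}.
    # It is trivially invertible, hence NOT one-way; it only plays the role of f
    # so that the protocol messages are well defined in the simulation.
    return (40503 * x + 12345) % N

random.seed(0)
rounds, u1, u2 = 200_000, 0, 0
for _ in range(rounds):
    x = random.randrange(N)               # P1 picks x uniformly and sends f(x)
    _message = f(x)                       # ignored by a P2 that cannot invert f
    z = random.randrange(N)               # P2 answers uniformly at random
    if z == x:
        u1, u2 = u1 - 1, u2 + 1
    else:
        u1, u2 = u1 + 1, u2 - 1
print(u1 / rounds, u2 / rounds)           # approx +1 and -1: P2 guesses x with probability 2^{-n}
```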
### Computational Subgame Perfection
The notion of CNE serves as a first stepping stone towards a game-theoretic treatment of cryptographic protocols. However, protocols are typically _interactive_, and CNE does not take their sequential nature into consideration.
In traditional game theory interaction is modeled via extensive games. The most basic equilibrium notion in this setting is _subgame perfect equilibrium_ (SPE), which requires players’ strategies to be in NE at any point of the interaction, regardless of the history of prior actions taken by other players. Basically, this ensures that players will not reconsider their actions as a result of reaching certain histories (a.k.a. “empty threats”).
As already noted in previous works (cf. [16, 19, 26]), it is not at all clear how to adapt SPE to the computational setting. A natural approach would be to require the strategies to be CNE at every possible history. However, if we condition on the history, then this means that _different_ machines can and will do much better than the prescribed equilibrium strategy. For example, in the one-way permutation game of Example 1.1, given any message history, a machine \(M\) can simply have the correct inverse hardwired.
Although this requirement can be relaxed to ask that the prescribed strategy should be better than any other fixed machine on all inputs, this again may be too strong, since a fixed machine can always do better on some histories. Therefore, it seems that we must accept the following: for any machine \(M\), with _high probability_ over possible message histories, the prescribed strategy does at least as well as \(M\). However, it turns out that this approach also fails to capture our intuitive understanding of a computational SPE (CSPE). Consider the following (two-stage) variant of the one-way permutation game from Example 1.1:
**Example 1.2**: **(Modified one-way permutation game):**
1. \(P_{1}\) chooses some \(x\in\{0,1\}^{n}\), and sends \(f(x)\).
2. \(P_{2}\) sends a message \(z\in\{0,1\}^{n}\).
3. If exactly one of \(P_{1}\) and \(P_{2}\) send message 0, both players get payoff \(-2\). If both players send message 0, both players get payoff \(+2\). Otherwise, \(P_{2}\) wins (with payoff \(+1\)) if and only if \(z=x\), and the non-winning player loses (with payoff \(-1\)).
Using a similar argument to the one applied in Section 1.1, it can be shown that the strategies in which both players choose a message uniformly at random from \(\{0,1\}^{n}\setminus\{0\}\) satisfy the above “probabilistic” variant of CSPE. However, this equilibrium does not match our intuitive understanding of SPE: \(P_{1}\) will prefer to send message 0 regardless of \(P_{2}\)’s strategy, knowing that \(P_{2}\) will then respond with 0 as well. The threat of playing uniformly from all other messages is empty, and hence should not be admitted by the definition.¹
[FOOTNOTE:1][ENDFOOTNOTE]
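The empty-threat argument can be made quantitative with a short calculation: under the profile in which both players randomize over \(\{0,1\}^{n}\setminus\{0\}\), \(P_{1}\) expects roughly \(+1\) and \(P_{2}\) roughly \(-1\), whereas the deviation to message 0 followed by \(P_{2}\)'s best response 0 yields \((+2,+2)\). The sketch below is our own illustration of the example's payoff rule (neglecting the negligible-probability event \(f(x)=0\)) and prints both values.

```python
n = 16
N = 1 << n                                 # size of the message space {0,1}^n

def payoffs(p1_msg_zero, p2_msg_zero, z_equals_x):
    """Payoff rule of Example 1.2, returning (u1, u2)."""
    if p1_msg_zero != p2_msg_zero:         # exactly one player sent 0
        return (-2, -2)
    if p1_msg_zero and p2_msg_zero:        # both sent 0
        return (+2, +2)
    return (-1, +1) if z_equals_x else (+1, -1)

# (a) prescribed profile: both players randomize uniformly over {0,1}^n \ {0}
p = 1.0 / (N - 1)                          # probability that P2 guesses x correctly
u1_a = p * (-1) + (1 - p) * (+1)
u2_a = p * (+1) + (1 - p) * (-1)

# (b) P1 deviates and sends message 0; P2's best response is to answer 0 as well
u1_b, u2_b = payoffs(True, True, False)

print("prescribed profile  :", round(u1_a, 4), round(u2_a, 4))   # ~ (+1, -1)
print("deviation + response:", u1_b, u2_b)                       # (+2, +2): the threat is empty
```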
The examples above are rather simple, so it is reasonable to expect that issues arising in their analyses are inherent in many other cryptographic protocols. This raises the question of whether a computational variant of SPE is at all attainable in a cryptographic setting.
At the heart of this question is the fact that essentially any cryptographic protocol carries some small (but positive) probability of being broken. This means that, while there may be a polynomial-time TM that can “perform well” on the _average_ message history, there is no single TM that will do better than _all_ other TMs on every history (as for any history there exists some TM that has the corresponding “secret information” hardwired).
This state of affairs calls for an alternative approach. While such an approach should be meaningful enough to express strategic considerations in an interactive setting, it should also be sufficiently weak to be realizable. As demonstrated above, any approach for tackling this challenge should explicitly address the associated probability of error. It should also take asymptotics into consideration.
## 2 Our Results
We propose a new framework for guaranteeing sequential rationality in a computational setting. Our starting point is a weakening of subgame perfection, called _threat-free Nash equilibrium_, that is more permissive, but still eliminates the undesirable empty threats of non-sequential solution concepts.
To cast our new solution concept into the computational setting, we develop a methodology that enables us to “translate” arguments that involve computational infeasibility into a purely game theoretic language. This translation enables us to argue about game theoretic concepts directly, abstracting away complications that are related to computation.
In order to demonstrate the applicability of our framework, we revisit the problem of implementing a mediator for correlated equilibria [6], and propose a protocol that is sequentially rational for a non-trivial class of correlated equilibria (see Section 2.3 for details). Our treatment provides a better understanding of the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol.
### Threat-Free Nash Equilibria
We introduce _threat-free Nash equilibria_ (TFNE), a weakening of subgame perfection whose objective is to capture strategic considerations in an interactive setting. Loosely speaking, a pair of strategies in an extensive game is a TFNE if it is a NE, and if in addition no player is facing an empty threat at any history.
The problem of empty threats is the following: in a NE of an extensive game, it is possible that a player plays sub-optimally at a history that is reached with probability 0. The other player may strategically choose to deviate from his prescribed strategy and arrive at that history, knowing that this will cause the first player to play an optimal response rather than the prescribed one. In an SPE this problem is eliminated by requiring that no player can play sub-optimally at any history, and so no other player will strategically deviate and take advantage of this.
The main observation leading to the definition of TFNE is that the above requirement may be too strong a condition to eliminate such instability: if an optimal response of a player _decreases_ the utility of the other, then this other player would not want to strategically deviate. By explicitly ruling out this possibility, the instability caused by empty threats is eliminated, despite the equilibrium notion being more permissive than subgame perfection.
To make this precise, we give the first formal definition of an empty threat in extensive games. The definition is regressive: Roughly speaking, a player \(i\) is facing a threat at a history if there is some deviation at that history, along with a threat-free continuation from that history onwards, so that \(i\) increases his overall expected payoff when the players play this new deviation and continuation.
We note that the notion of TFNE is strong enough to eliminate the undesirable strategy of playing randomly in the modified OWP game from Example 1.2 – Claim 5.13 shows that in any computational TFNE of this game the second player outputs 0 after history 0.
### Strategy-Filters and Tractable Strategies
To cast the definition of TFNE into a computational setting, we map the given protocol into a sequence of extensive games using _strategy-filters_ that map computable strategies into their “strategic representation” (the strategic representation corresponds to the strategy effectively played by a given interactive Turing machine). We can then apply pure game theoretic solution concepts, and in particular our newly introduced concept of TFNE, to understand the strategic behavior of players.
Similarly to the definition of CNE, the computational treatment departs from the traditional game theoretic treatment in two crucial ways. First of all, our definition is framed _asymptotically_ (in order to capture computational infeasibility), whereas traditional game-theory is framed for finitely sized games. Second, it allows for a certain _error probability_. This is an artifact of the (typically negligible) probability with which the security of essentially any cryptographic scheme can be broken.
Given a cryptographic protocol, we consider a corresponding sequence of extensive games. The sequence is indexed by a security parameter \(k\) and an error parameter \(\varepsilon\). For each game, we “constrain” the strategies available to players to be a subset of those that can be generated by PPT players in the protocol. Intuitively, the game indexed by \((k,\varepsilon)\) contains those strategies that run in time polynomial in \(k\) and “break crypto” with probability at most \(\varepsilon\). We also require that strategy-filters be _PPT-covering_: that for any polynomially-small \(\varepsilon\), every PPT is eventually a legal strategy, far enough into the sequence of extensive games.
Using this framework we formalize the notion of a computational threat-free Nash equilibrium (CTFNE). To the best of our knowledge this is the first attempt at analyzing sequential strategic reasoning in the presence of computational infeasibility.
### Applications
Our treatment provides a powerful tool for arguing about the strategic behavior of players in a cryptographic protocol. It also enables us to isolate sequential strategic considerations that are suitable for use in cryptographic protocols (so that the solution concept is not too weak and not too strong).
As a warm up, we demonstrate the applicability of our framework and solution concept to the “coin-flipping game” that corresponds to Blum’s coin-flipping protocol [5]. One may view this as playing the classic game of match pennies without simultaneity (but with cryptography). We show that it is possible to exploit the specific structure of the game to implement a correlating device resulting in a CTFNE. This is in contrast to the general approach of [6] that only enables one to argue CNE. This result already demonstrates the added strength of our framework and definition.
We then revisit the general problem of implementing a mediator for correlated equilibria [6], and propose a protocol that is sequentially rational for a non-trivial class of correlated equilibria. In particular, our protocol is in a CTFNE for correlated equilibria that are convex combinations of Nash equilibria and that are “undominated”: There does not exist any convex combination of Nash equilibria for which both players get a strictly higher expected payoff.
Our treatment explores the conditions under which mediators in a correlated equilibrium can be replaced by a stable protocol, and sheds light on some structural properties of such equilibria.
Finally, we prove a general theorem that identifies sufficient conditions for a TFNE in extensive games. Namely, we show that if an undominated NE has the additional property that no player can harm the other by a unilateral deviation, then that NE must also be threat-free.
### Related Work
This paper contributes to the growing literature on rational cryptography. Many of the papers in this line of research, such as [6, 13, 15, 1, 9, 20, 22, 16, 18, 19, 17, 26, 23, 2, 10], explore various solution concepts for cryptographic protocols viewed as games (often in the context of rational secret-sharing). Aside from the works of Lepinski et al. [15, 20], Ong et al. [26], and Gradwohl [10], who work in a different model², all prior literature has considered solution concepts that are non-sequential. More specifically, they all use variants of NE such as strict NE, NE with stability to trembles, and everlasting equilibrium.
[FOOTNOTE:2][ENDFOOTNOTE]
An additional related work is that of Halpern and Pass [14], in which the authors present a general framework for game theory in a setting with computational cost. While their approach to computational limitations is more general than ours, they only address NE. Finally, Fortnow and Santhanam [7] study a different framework for games with computational limits, but also only in the context of NE.
### Future Work
One potential application of our new definition is an analysis of rational secret-sharing protocols. While the design of such a protocol that is in a CTFNE is not within the scope of the current paper, we do provide some intuition about why known gradual release protocols satisfy a slightly weaker solution concept. Consider the following simple setting: each of two players knows a bit, and the XOR of the two bits is the secret. Secret exchange protocols, for example [21], allow the players to exchange their respective bits and thus learn the secret in such a way that even if one of the players cheats, he can reconstruct the secret with probability at most \(\varepsilon\) more than the other player. Then under the assumptions on players’ utilities used by [17], any unilateral deviation from this protocol can get the deviating player an increase of only \(O(\varepsilon)\) in utility. However, since the other player can always correctly guess the secret with almost the same probability (up to the additive \(\varepsilon\)), the potential benefit to a player of deviating, causing the other to deviate, and so on, is also at most \(O(\varepsilon)\). Thus, this protocol is in a computational variant of \(\varepsilon\)-NE and is also \(\varepsilon\)-threat-free. The reason this is weaker than our current solution concept is that we require the benefit from a threat or a deviation to be negligible, whereas in [21] the \(\varepsilon\) is polynomially-small (in the number of rounds of the protocol).
There are numerous other compelling problems left for future work. The first problem is to extend our definition to games with simultaneous moves. While we do offer a partial extension tailored to the problem of implementing a mediator, the problem of defining CTFNE for general games with simultaneous moves is open. Such a definition would be particularly useful for a sequential analysis of protocols with a simultaneous channel. Another natural extension of the definition is to multiple players, as opposed to 2. Such an extension comes with its own challenges, particularly with regard to the possibility of collusion. A third extension is to incorporate the threat-freeness property with stronger variants of NE, such as stability with respect to trembles, strict NE, or survival of iterated elimination of dominated strategies. Finally, we would like to find more applications for our definition. One particularly interesting problem is to extend our results on the implementation of mediators to a larger class of correlated equilibria.
## 3 Game Theory Definitions
### Extensive Games
Informally, a game in extensive form can be described as a game tree in which each node is owned by some player and edges are labeled by legal actions. The game begins at the root, and at each step follows the edge labeled by the action chosen by the current node’s owner. Utilities of players are given at the leaves of the tree. More formally, we have the following standard definition of extensive games (see, for example, Osborne and Rubinstein [27]):
**Definition 3.1** (Extensive game): _A 2-person extensive game is a tuple \(\Gamma=(H,P,A,u)\) where_
* \(H\) _is a set of (finite)_ history _sequences such that the empty word_ \(\epsilon\in H\)_. A history_ \(h\in H\) _is_ terminal _if_ \(\{a\,:\,(h,a)\in H\}=\emptyset\)_. The set of terminal histories is denoted_ \(Z\)_._
* \(P:(H\setminus Z)\rightarrow\{1,2\}\) _is a function that assigns a “next” player to every non-terminal history._
* \(A\) _is a function that, for every non-terminal history_ \(h\in H\setminus Z\)_, assigns a finite set_ \(A(h)=\{a\,:\,(h,a)\in H\}\) _of available actions to player_ \(P(h)\)_._
* \(u=(u_{1},u_{2})\) _is a pair of payoff functions_ \(u_{i}:Z\rightarrow\mathbb{R}\)_._
We will denote the two players by \(P_{1}\) and \(P_{2}\) and by \(P_{i}\) and \(P_{-i}\), where \(i\in\{1,2\}\) and \(-i\) is shorthand for \(2-i\).
**Definition 3.2** (Behavioral strategy): Behavioral strategies _of players in an extensive game are collections \(\sigma_{i}=\left(\sigma_{i}(h)\right)_{h:P(h)=i}\) of independent probability measures, where \(\sigma_{i}(h)\) is a probability measure over \(A(h)\)._
For any extensive game \(\Gamma=(H,P,A,u)\), any player \(i\), and any history \(h\) satisfying \(P(h)=i\), we denote by \(\Sigma_{i}(h)\) the set of all probability measures over \(A(h)\). We denote by \(\Sigma_{i}\) the set of all strategies \(\sigma_{i}\) of player \(i\) in \(\Gamma\). For each _profile_\(\sigma=(\sigma_{1},\sigma_{2})\) of strategies, define the _outcome_\(O(\sigma)\) to be the probability distribution over terminal histories that results when each player \(i\) follows strategy \(\sigma_{i}\). Note that if both \(\sigma_{1}\) and \(\sigma_{2}\) are deterministic (i.e. deterministic on every history), then so is the outcome \(O(\sigma)\).
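To fix ideas, the following sketch shows one possible concrete encoding of these objects for a tiny finite game: histories are tuples of actions, and \(P\), \(A\), \(u\) and the behavioral strategies are plain dictionaries. The encoding, the example game, and all function names are our own illustrative choices rather than anything prescribed in the paper.

```python
# A tiny 2-player extensive game. Histories are tuples of actions; () is the root.
P = {(): 1, ('L',): 2, ('R',): 2}                              # player to move at each non-terminal history
A = {(): ['L', 'R'], ('L',): ['a', 'b'], ('R',): ['a', 'b']}   # available actions A(h)
u = {('L', 'a'): (3, 1), ('L', 'b'): (0, 0),                   # payoff pairs (u1, u2) at terminal histories
     ('R', 'a'): (1, 3), ('R', 'b'): (2, 2)}

# A behavioral strategy profile: sigma[i][h] is a probability measure over A(h).
sigma = {1: {(): {'L': 0.5, 'R': 0.5}},
         2: {('L',): {'a': 1.0, 'b': 0.0}, ('R',): {'a': 0.0, 'b': 1.0}}}

def outcome(h=(), prob=1.0):
    """The distribution O(sigma) over terminal histories, as a dict."""
    if h in u:                                  # terminal history
        return {h: prob}
    dist = {}
    for action, p in sigma[P[h]][h].items():
        if p == 0.0:
            continue
        for z, q in outcome(h + (action,), prob * p).items():
            dist[z] = dist.get(z, 0.0) + q
    return dist

def expected_payoffs(dist):
    """The expected payoff pair E[u(O(sigma))]."""
    return tuple(sum(q * u[z][i] for z, q in dist.items()) for i in (0, 1))

print(expected_payoffs(outcome()))   # (2.5, 1.5) for the toy game above
```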
### Nash Equilibrium
Each profile of strategies yields a distribution over outcomes, and we are interested in profiles that guarantee the players some sort of optimal outcomes. There are many solution concepts that capture various meanings of “optimal,” and one of the most basic is the Nash equilibrium (NE).
**Definition 3.3** (Nash equilibrium (NE)): _An \(\varepsilon\)-Nash equilibrium of an extensive game \(\Gamma=(H,P,A,u)\) is a profile \(\sigma^{*}\) of strategies such that for each player \(i\),_
\[\mathrm{E}\left[u_{i}\left(O(\sigma^{*})\right)\right]\geq\mathrm{E}\left[u_{i}\left(O(\sigma^{*}_{-i},\sigma_{i})\right)\right]-\varepsilon\]
_for every strategy \(\sigma_{i}\) of player \(i\). It is a NE if the above holds for \(\varepsilon=0\) and a strict NE if it holds for some \(\varepsilon<0\)._
One of the premises behind the stability of profiles that are in an \(\varepsilon\)-NE is that players will not bother to deviate for a mere gain of \(\varepsilon\). For applications in cryptography we will generally have \(\varepsilon\) be some negligible function, and this corresponds to our understanding that we do not care about negligible gains.
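For small finite games in the dictionary encoding sketched above, Definition 3.3 can be checked mechanically: hold one player’s behavioral strategy fixed, compute the other’s best attainable expected payoff by a recursion over the tree, and compare it with the equilibrium payoff up to \(\varepsilon\). The sketch below is ours; it assumes perfect information and the hypothetical `P`, `A`, `u`, `sigma` encoding from the previous sketch, and is meant only to illustrate the definition.

```python
def expected_value(i, sigma, P, A, u, h=()):
    """E[u_i(O(sigma))] from history h onward when both players follow sigma."""
    if h in u:
        return u[h][i - 1]
    return sum(p * expected_value(i, sigma, P, A, u, h + (a,))
               for a, p in sigma[P[h]][h].items())

def best_response_value(i, sigma_opp, P, A, u, h=()):
    """The best expected payoff player i can secure against the fixed
    behavioral strategy sigma_opp of the other player (perfect information)."""
    if h in u:
        return u[h][i - 1]
    if P[h] == i:       # player i moves: pick the most profitable action
        return max(best_response_value(i, sigma_opp, P, A, u, h + (a,)) for a in A[h])
    # the opponent moves: average over its prescribed mixture
    return sum(p * best_response_value(i, sigma_opp, P, A, u, h + (a,))
               for a, p in sigma_opp[h].items())

def is_eps_nash(sigma, P, A, u, eps=0.0):
    """Definition 3.3: no player gains more than eps by any unilateral deviation."""
    for i in (1, 2):
        opp = 2 if i == 1 else 1
        if best_response_value(i, sigma[opp], P, A, u) > expected_value(i, sigma, P, A, u) + eps:
            return False
    return True
```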
### Subgame Perfect Equilibrium
One of the problems with NE in extensive games is the presence of empty threats: a player’s equilibrium strategy may specify a sub-optimal strategy at a history that is reached with probability 0. The other player, knowing this, may strategically deviate to reach that history, predicting that the first player will also deviate. For more details and explicit examples see any textbook on game theory, such as [27].
The most basic solution to the problem of empty threats is to refine the NE solution, and require a strategy profile to be in a NE at every history in the game. This results in a profile that is in _subgame perfect equilibrium_ (SPE).
**Definition 3.4** (Subgames of extensive game): _For any 2-person extensive game \(\Gamma=(H,P,A,u)\) and any non-terminal history \(h\in H\), the subgame \(\Gamma|_{h}\) is the 2-person extensive game \(\Gamma|_{h}=(H|_{h},P|_{h},A|_{h},u|_{h})\), where_
* \(h^{\prime}\in H|_{h}\) _if and only if_ \(h\circ h^{\prime}\in H\)_,_
* \(P|_{h}(h^{\prime})=P(h\circ h^{\prime})\)_,_
* \(A|_{h}(h^{\prime})=A(h\circ h^{\prime})\)_, and_
* \(u_{i}|_{h}(h^{\prime})=u_{i}(h\circ h^{\prime})\)_._
For each profile \(\sigma=(\sigma_{1},\sigma_{2})\) of strategies and history \(h\in H\), define the _conditional outcome_\(O(\sigma)|_{h}\) to be the probability distribution over terminal histories that results when the game starts at a history \(h\), and from that point onwards each player \(i\) follows strategy \(\sigma_{i}\).
**Definition 3.5** (Subgame perfect equilibrium (SPE)): _An \(\varepsilon\)-subgame perfect equilibrium of an extensive game \(\Gamma=(H,P,A,u)\) is a profile \(\sigma^{*}\) of strategies such that for each player \(i\) and each non-terminal history \(h\in H\),_
\[\mathrm{E}\left[u_{i}\left(O(\sigma^{*})|_{h}\right)\right]\geq\mathrm{E}\left[u_{i}\left(O(\sigma^{*}_{-i},\sigma_{i})|_{h}\right)\right]-\varepsilon\]
_for every strategy \(\sigma_{i}\) of player \(i\). It is an SPE if the above holds for \(\varepsilon=0\) and a strict SPE if it holds for some \(\varepsilon<0\)._
### Constrained Games
In the standard game theory literature, where there are no computational constraints on the players, the available strategies \(\sigma_{i}\) of player \(i\) are all possible collections \(\left(\sigma_{i}(h)\right)_{h:P(h)=i}\), where \(\sigma_{i}(h)\) is an arbitrary distribution over \(A(h)\). In our setting, however, we will only consider strategies that can be implemented by computationally bounded ITMs. This requires being able to constrain players’ strategies to a strict subset of the possible strategies. One natural way to restrict the strategies is to allow only a subset of all distributions over \(A(h)\) at each history \(h\). However, this does not enable us to capture more elaborate restrictions, and specifically ones that might result from requiring strategies to be implementable by polynomial time ITMs. (For example, a player might have for every possible history a strategy that plays best response on that history, but no strategy that plays best response on _all_ histories.) To capture these more elaborate restrictions, we consider player \(i\) strategies that are restricted to an arbitrary subset \(T_{i}\) of all possible (mixed) strategies.
Given a pair \(T=(T_{1},T_{2})\) of such sets we can then define a constrained version of a game, in which only strategies that belong to these sets are considered.
**Definition 3.6** (Constrained game): _Let \(\Gamma=(H,P,A,u)\) be an extensive game and let \(T=(T_{1},T_{2})\), where \(T_{i}\subseteq\bigotimes_{h:P(h)=i}\Sigma_{i}(h)\) for each \(i\in\{1,2\}\). The \(T\)-constrained version of \(\Gamma\) is the game in which the only allowed strategies for player \(i\) belong to \(T_{i}\)._
NE of constrained games are defined similarly to regular NE, except that players’ strategies and deviations must be from the constraint sets.
**Definition 3.7** (NE in constrained games): _An \(\varepsilon\)-Nash equilibrium of a \((T_{1},T_{2})\)-constrained version of an extensive game \(\Gamma=(H,P,A,u)\) is a profile \(\sigma^{*}\in(T_{1},T_{2})\) of strategies such that for each player \(i\),_
\[\mathrm{E}\left[u_{i}\left(O(\sigma^{*})\right)\right]\geq\mathrm{E}\left[u_{i}\left(O(\sigma^{*}_{-i},\sigma_{i})\right)\right]-\varepsilon\]
_for every strategy \(\sigma_{i}\in T_{i}\) of player \(i\). It is a NE if the above holds for \(\varepsilon=0\) and a strict NE if it holds for some \(\varepsilon<0\)._
## 4 Threat-Free Nash Equilibrium
Our starting point is the inadequacy of subgame perfection in capturing sequential rationality in a computational context. As argued in Section 1.2, it is unreasonable to require computationally-bounded players to play optimally at every node of a game. In particular, in cryptographic settings this requires breaking the security of the protocol, which is assumed impossible under the computational constraints.
A possible idea might be to require that players “play optimally at every node of the game, under their computational constraints.” However, this idea cannot be interpreted in a sensible way. Computational constraints must be defined “globally,” and thus the notion of playing optimally under some computational constraint on a particular history is senseless. In particular, for any history of some cryptographic protocol, there is a small machine that plays optimally on this specific history _unconditionally_ (and breaks “cryptographic challenges” appearing in this history, by having the solutions hardwired). This machine is efficient, and so meets essentially any computational constraint. So, while under computational constraints every machine fails on cryptographic challenges in most histories, for every history there is a machine that succeeds. We thus assume that a player chooses his machine before the game starts, and cannot change his machine later.
### A New Solution Concept
In light of the above discussion, it seems like the solution concept we are looking for has to reconcile the following seemingly conflicting properties:
1. It implies an optimal strategy for the players _under their computational constraints_, which implies _non-optimal_ play on certain histories.
2. It does not allow empty threats, thus implying “sequential rationality.”
The crucial observation behind our definition is that in order to rule out empty threats, one does not necessarily need to require that players play optimally at _every_ node, because not every non-optimal play carries a threat to other players. In fact, in a typical cryptographic protocol, the security of each player is _building_ on other players not playing optimally (because playing optimally would mean breaking the security of the protocol). Thus, a player’s “declaration” to play non-optimally does not necessarily carry a threat: the other players may even gain from it. More generally, even in non-cryptographic protocols, at least in 2-player perfect information games, we can use the following observation: in any computational challenge, either a player gains from the other not playing optimally, or, if he does not gain, he can avoid introducing that computational challenge to the other player.³
Following the above observation, we introduce a new solution concept for extensive games. The new solution concept requires that players be in NE, and moreover, that no player impose an empty threat on the other. At the same time, it does not require players to play optimally at every node. In other words, players may (declare to) play non-optimally on non-equilibrium support, yet this declaration of non-optimal play does not carry an empty threat. We call our new solution concept TFNE, for threat-free Nash equilibrium.
To make the above precise, we introduce a formal definition of an empty threat. An empty threat occurs when a player threatens to play “non-rationally” on some history in order to coerce the other player to avoid this history. Crucially, empty threats are such that, had the threatened _not_ believed the threat, had he deviated accordingly, and had the threatening player played “rationally,” the threatened player would have benefitted. To rephrase our intuition: a player faces an empty threat with respect to some strategy profile if by deviating from his prescribed strategy, and having the other player react “rationally,” he improves his payoff (in comparison with sticking to the prescribed strategy and having the other player react “rationally” from then on).
But what does it mean for the other player to react “rationally”? The other player may assume, recursively, that the first player will play a best response, and will not carry out empty threats against him, and so on, leading to a regressive definition.
### Vanilla Version
Before giving the general definition of TFNE that we will use, we present a simpler version that has no slackness parameter and that works for games without constrained strategies.
For a player \(i\) and a history \(h\), two strategies \(\sigma_{i}\) and \(\pi_{i}\) are _equivalent for player \(i\) on \(h\)_ if \(P(h)=i\) and \(\sigma_{i}(h)=\pi_{i}(h)\), or \(P(h)\neq i\). Two strategies _differ only on the subgame \(h\)_ if they are equivalent on every non-terminal history that does not have \(h\) as a prefix. Formally, they are equivalent on every history in \(H\setminus\{h^{\prime}\in H:h^{\prime}=h\circ h^{\prime\prime}\mbox{ for some }h^{\prime\prime}\}\). For a history \(h\in H\), a strategy \(\sigma\), and a distribution \(\tau=\tau(h)\) on \(A(h)\), let
\[\mathrm{Cont}(h,\sigma,\tau)\stackrel{\mathrm{def}}{=}\Big\{\pi:(\pi\text{ differs from }\sigma\text{ only on the subgame }h)\;\&\;(\pi(h)=\tau(h))\Big\}.\]
We now proceed to define a threat. For simplicity, we will do so for generic games, in which each player’s possible payoffs are distinct. For such games, the set \(\mathrm{Cont}(h,\sigma,\tau)\) always contains exactly one “threat-free” element (defined below).
**Definition 4.1** (Threat): _Let \(\Gamma=(H,P,A,u)\) be an extensive game with distinct payoffs. Let \(\sigma\) be a strategy profile, and let \(h\in H\). Player \(i=P(h)\) is facing a threat at history \(h\) with respect to \(\sigma\) if there exists a distribution \(\tau=\tau(h)\) over \(A(h)\) such that the unique \(\pi\in\mathrm{Cont}(h,\sigma,\tau)\) and \(\pi^{\prime}\in\mathrm{Cont}(h,\sigma,\sigma)\) that are threat-free on \(h\) satisfy_
\[\mathrm{E}\left[u_{i}\left(O(\pi)\right)\right]>\mathrm{E}\left[u_{i}\left(O(\pi^{\prime})\right)\right],\]
_where strategy \(\pi\) is_ threat-free _on \(h\) if for all \(h^{\prime}\not=\epsilon\) satisfying \(h\circ h^{\prime}\in H\), player \(P(h\circ h^{\prime})\) is not facing a threat at \(h\circ h^{\prime}\) with respect to \(\pi\)._
Note that if \(h\) is such that for all \(a\in A(h)\) it holds that \(h\circ a\in Z\), then any profile \(\pi\) is threat-free on \(h\).
**Definition 4.2** (Threat-free Nash equilibrium): _Let \(\Gamma=(H,P,A,u)\) be an extensive game. A strategy profile \(\sigma^{*}\) is a threat-free Nash equilibrium (TFNE) if:_
1. \(\sigma^{*}\) _is a_ \(NE\) _of_ \(\Gamma\)_, and_
2. _for any_ \(h\in H\)_, player_ \(P(h)\) _is not facing a threat at history_ \(h\) _with respect to_ \(\sigma^{*}\)_._
Note that in every profile that is in a TFNE, the effective play matches some SPE profile (more precisely, there is an SPE profile that yields exactly the same distribution on outcomes). This and other properties of threats and TFNE are formalized in the companion paper to this work [11].
In the definition of a threat we used the fact that \(\mathrm{Cont}(h,\sigma,\tau)\) and \(\mathrm{Cont}(h,\sigma,\sigma)\) each contain exactly one profile that is threat-free on \(h\). To show that this must be the case, we have the following proposition, which is not unlike the fact that generic games have unique subgame perfect equilibria.
**Proposition 4.3**: _For any extensive game \(\Gamma=(H,P,A,u)\), strategy profile \(\sigma\), player \(i\), history \(h\in H\setminus Z\) with \(P(h)=i\), and distribution \(\tau\) over \(A(h)\), the set \(\mathrm{Cont}(h,\sigma,\tau)\) contains exactly one profile that is threat-free on \(h\)._
* **Proof:** For any history \(h\in H\setminus Z\), let \(\mathrm{height}(h)\) be the maximal distance between \(h\) and a descendant of \(h\) (i.e., the leaf that is furthest away from \(h\) but lies in the subtree rooted at \(h\)). The proof of the proposition is by induction on \(\mathrm{height}(h)\). For the base case \(\mathrm{height}(h)=1\), note that there is exactly one element in \(\mathrm{Cont}(h,\sigma,\tau)\) and that this profile is threat-free on \(h\) (since \(h\) is a last move of the game).

Next, suppose the claim of the proposition holds for all histories \(h\) with \(\mathrm{height}(h)<k\). We will prove that it holds for histories \(h\) with \(\mathrm{height}(h)=k\). To this end, fix such a history \(h^{0}\), and suppose the children of \(h^{0}\) in the game tree are \(h^{1},\ldots,h^{t}\). Suppose also that \(P(h^{0})=i\) and \(P(h^{1})=\ldots=P(h^{t})=-i\), and note that this is without loss of generality. Consider the profile \(\pi^{0}\) that is identical to \(\sigma\) except at history \(h^{0}\), and fix \(\pi^{0}(h^{0})=\tau(h^{0})\).

We now repeat the following process in succession for each \(j\in\{1,\ldots,t\}\): For any such \(j\), let \[\mathrm{TF}(h^{j})\stackrel{\mathrm{def}}{=}\left\{\pi\in\bigcup_{\tau^{j}}\mathrm{Cont}(h^{j},\pi^{j-1},\tau^{j}):\pi\mbox{ is threat-free on }h^{j}\right\}.\] We then choose a profile \(\pi^{j}\in\mathrm{TF}(h^{j})\) that satisfies \[u_{-i}\left(\pi^{j}\right)\geq u_{-i}\left(\pi^{\prime\prime}\right)\] for all \(\pi^{\prime\prime}\in\mathrm{TF}(h^{j})\). Because payoffs for player \(-i\) are distinct, there exists a unique such maximal \(\pi^{j}\): there can be no \(\pi^{\prime\prime}\) that is different from \(\pi^{j}\) and has the same payoff for player \(-i\).

After doing this for all \(h^{j}\in\{h^{1},\ldots,h^{t}\}\) we have a profile \(\pi^{t}\) that we claim is threat-free on \(h^{0}\). To see this, observe that for all \(j\in\{1,\ldots,t\}\), \(\pi^{j}\) is threat-free on \(h^{j}\) because we chose it to be a threat-free profile from \(\mathrm{Cont}(h^{j},\pi^{j-1},\tau^{j})\). Moreover, since for each \(j\) we chose a _maximal_ \(\tau^{j}\), there are no threats at the histories \(h^{j}\) either. Finally, uniqueness of \(\pi^{t}\) is guaranteed by the fact that for each \(j\), our choice of a maximal \(\tau^{j}\) was unique.
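For generic, finite, perfect-information games and deterministic profiles, the recursive construction above boils down to backward induction inside the subgame at \(h\): the unique threat-free continuation plays \(\tau\) at \(h\) and backward-induction moves below it. Under these simplifying assumptions, Definition 4.1 can be checked by comparing backward-induction values of the children of \(h\), and condition 2 of Definition 4.2 only needs to be checked along the path actually played (off the path the outcome is unaffected, so no threat can arise). The sketch below, which reuses the hypothetical `P`, `A`, `u` encoding of the earlier sketches and compares payoffs conditional on reaching \(h\), is our own illustration, not a general implementation.

```python
def bi_value(P, A, u, h):
    """Backward-induction payoff vector of the subtree rooted at h
    (generic game: the mover's best continuation is unique)."""
    if h in u:                                      # terminal history
        return u[h]
    i = P[h]
    return max((bi_value(P, A, u, h + (a,)) for a in A[h]),
               key=lambda payoff: payoff[i - 1])

def facing_threat(P, A, u, sigma_pure, h):
    """Definition 4.1 for a deterministic profile sigma_pure (dict: history -> action),
    comparing payoffs conditional on reaching h: the mover faces a threat iff some
    deviation, followed by threat-free (= backward-induction) play, strictly beats
    sticking to sigma_pure[h] followed by threat-free play."""
    i = P[h]
    stick = bi_value(P, A, u, h + (sigma_pure[h],))[i - 1]
    best_deviation = max(bi_value(P, A, u, h + (a,))[i - 1] for a in A[h])
    return best_deviation > stick

def no_threats_pure(P, A, u, sigma_pure):
    """Condition 2 of Definition 4.2 for a deterministic profile: walk the unique
    sigma-path and check for threats there (condition 1, the NE property, is separate)."""
    h = ()
    while h not in u:
        if facing_threat(P, A, u, sigma_pure, h):
            return False
        h = h + (sigma_pure[h],)
    return True
```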
### Round-Parameterized Version
For games induced by cryptographic protocols we will need a more general definition of TFNE. We assume that in these games players alternate moves, and thus there is a natural notion of the “rounds” in the game: Player \(i\) makes a move in round 1, then player \(-i\) makes a move in round 2, and so on until the end of the game.
For the general definition, we introduce a few modifications to the vanilla version:
* We add a slackness parameter \(\varepsilon\). This is necessary for our applications in order to handle the probability of error inherent in almost all cryptographic protocols.
* We allow players to be threatened at rounds, rather than just specific histories. This is needed because when we add the slackness parameter, a player might be threatened at a set of histories, where the weight of each individual threat does not exceed the slackness parameter, but the overall weight does.
* Finally, for a player to be threatened, we require that he improve on _all_ threat-free continuations \(\pi\). The reason we need this is that in the general case, there may be more than one \(\pi\) that is threat-free. If a player deviates from his prescribed behavior, he cannot choose _which_ (threat-free) continuation will be played.
The definitions below make use of the notion of a round \(R\) strategy of player \(i\): This is simply a function mapping every history \(h\) that reaches round \(R\) to a distribution over \(A(h)\). For a round \(R\in\mathbb{N}\) we let \(\sigma_{i}(R)\) represent player \(i\)’s round \(R\) strategy implied by \(\sigma\). Let \(\sigma(R)=(\sigma_{1}(R),\sigma_{2}(R))\), and let
\[\mathrm{Cont}(\sigma(1),\ldots,\sigma(R))\stackrel{\mathrm{def}}{=}\Big\{\pi\in T:\pi(S)=\sigma(S)\ \forall S\leq R\Big\},\]
where \(T=(T_{1},T_{2})\) consists of constraints for players’ strategies.
**Definition 4.4** (\(\varepsilon\)-threat): _Let \(\Gamma=(H,P,A,u)\) be an extensive game with constraints \(T=(T_{1},T_{2})\). Let \(\varepsilon\geq 0\), let \(\sigma\in T\) be a strategy profile, and let \(R\in\mathbb{N}\) be a round of \(\Gamma\). Player \(i=P(R)\) is facing an \(\varepsilon\)-threat at round \(R\) with respect to \(\sigma\) if there exists a round \(R\) strategy \(\tau=\tau(R)\) for player \(i\) such that_
* _the set_ \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\) _is nonempty, and_
* _for all_ \(\pi\in\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\) _and_ \(\pi^{\prime}\in\mathrm{Cont}(\sigma(1),\ldots,\sigma(R))\) _that are_ \(\varepsilon\)_-threat-free on_ \(R\)_,_ \[\mathrm{E}\left[u_{i}\left(O(\pi)\right)\right]>\mathrm{E}\left[u_{i}\left(O(\pi^{\prime})\right)\right]+\varepsilon,\]
_where strategy \(\pi\) is \(\varepsilon\)-threat-free on \(R\) if for all rounds \(S>R\) it holds that player \(P(S)\) is not facing an \(\varepsilon\)-threat at round \(S\) with respect to \(\pi\)._
Note that if \(R\) is the last round of the game, then any profile \(\pi\in T\) is \(\varepsilon\)-threat-free on \(R\). Using Definition 4.4, we can now define an \(\varepsilon\)-TFNE.
**Definition 4.5** (\(\varepsilon\)-threat-free Nash equilibrium): _Let \(\Gamma=(H,P,A,u)\) be an extensive game with constraints \(T=(T_{1},T_{2})\). A strategy profile \(\sigma^{*}\in T\) is an \(\varepsilon\)-threat-free Nash equilibrium (\(\varepsilon\)-TFNE) if:_
1. \(\sigma^{*}\) _is an_ \(\varepsilon\)_-NE of_ \(\Gamma\)_, and_
2. _for any round_ \(R\) _of_ \(\Gamma\)_, player_ \(P(R)\) _is not facing an_ \(\varepsilon\)_-threat at round_ \(R\) _with respect to_ \(\sigma^{*}\)_._
As is the case for Definition 4.1, Definition 4.4 (and hence Definition 4.5) would not be (semantically) well-defined if either of the sets \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\) or \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R))\) failed to contain at least one profile \(\pi\) that is \(\varepsilon\)-threat-free on \(R\). The following proposition shows that this can never be the case.
**Proposition 4.6**: _Let \(\Gamma=(H,P,A,u)\) be an extensive game with constraints \(T=(T_{1},T_{2})\). Let \(\varepsilon\geq 0\), let \(\sigma\in T\) be a strategy profile, and let \(R\) be a round of \(\Gamma\). For any round \(R\) strategy \(\tau=\tau(R)\) for player \(i=P(R)\), if the set \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\) is nonempty then it contains at least one profile \(\pi\) that is \(\varepsilon\)-threat-free on \(R\)._
* **Proof:** For any round \(R\) of \(\Gamma\), let \(\mathrm{height}(R)\) be the distance between \(R\) and the last round of \(\Gamma\). The proof of the proposition is by induction on \(\mathrm{height}(R)\). For the base case \(\mathrm{height}(R)=0\), note that, by the hypothesis of the proposition, the set \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\) is nonempty. Since \(R\) is the last round of the game, the set contains exactly one profile, \((\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\), and this profile is vacuously \(\varepsilon\)-threat-free on \(R\).

Next, suppose the claim of the proposition holds for all rounds \(R\) with \(\mathrm{height}(R)<k\). We will prove that it holds for a round \(R\) satisfying \(\mathrm{height}(R)=k\). Let \(i=P(R)\), and assume that there exists some \(\pi^{\prime}\in\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\). We would like to show that \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\) contains at least one profile \(\pi\) that is \(\varepsilon\)-threat-free on \(R\). By the inductive hypothesis we have that, for any round \(R+1\) strategy \(\tau^{\prime}\) of player \(-i\), if the set \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R-1),\tau(R),\tau^{\prime}(R+1))\) is nonempty then it contains at least one profile that is \(\varepsilon\)-threat-free on \(R+1\) (since \(\mathrm{height}(R+1)<k\)).

We will choose a profile that has a _maximal_ \(\tau^{\prime}\) as follows. Let \[\mathrm{TF}(R+1)\stackrel{\mathrm{def}}{=}\left\{\pi\in\bigcup_{\tau^{\prime}}\mathrm{Cont}(\sigma(1),\ldots,\sigma(R-1),\tau(R),\tau^{\prime}(R+1)):\pi\mbox{ is }\varepsilon\mbox{-threat-free on }R+1\right\},\] and note that \(\mathrm{TF}(R+1)\) must be nonempty. This is because there always exists at least one \(\tau^{\prime}\) for which \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R-1),\tau(R),\tau^{\prime}(R+1))\) is nonempty: namely, we could have \(\tau^{\prime}(R+1)=\pi^{\prime}(R+1)\). Since \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R-1),\tau(R),\pi^{\prime}(R+1))\) is nonempty by assumption, it must contain a profile that is \(\varepsilon\)-threat-free on \(R+1\) (by the inductive hypothesis). We now choose a profile \(\pi\in\mathrm{TF}(R+1)\) that satisfies \[u_{-i}\left(\pi\right)\geq u_{-i}\left(\pi^{\prime\prime}\right)-\varepsilon\] for all \(\pi^{\prime\prime}\in\mathrm{TF}(R+1)\).

So now we have a profile \(\pi\in\mathrm{Cont}(\sigma(1),\ldots,\sigma(R\!-\!1),\tau(R))\), which we claim is \(\varepsilon\)-threat-free on round \(R\). To see this, note that \(\pi\) is \(\varepsilon\)-threat-free on \(R+1\) by the way we chose it (i.e., a profile from \(\mathrm{Cont}(\sigma(1),\ldots,\sigma(R-1),\tau(R),\tau^{\prime}(R+1))\) that is \(\varepsilon\)-threat-free on \(R+1\)). Moreover, since we chose a _maximal_ \(\tau^{\prime}\) (up to \(\varepsilon\)), there is no \(\varepsilon\)-threat at round \(R+1\) either. Thus \(\pi\) is \(\varepsilon\)-threat-free on \(R\).
## 5 The Computational Setting
In the following we explain how to use the notion of TFNE for cryptographic protocols. In Section 5.1 we describe how to view a cryptographic protocol as a sequence of extensive games. In Section 5.2 we show how to translate the behavior of an interactive TM to a sequence of strategies. In Section 5.3 we show how to express computational hardness in a game-theoretic setting. Finally, in Section 5.4 we give our definition of computational TFNE.
### Protocols as Sequences of Games
When placing cryptographic protocols in the framework of extensive games, the possible messages of players in a protocol correspond to the available actions in the game tree, and the prescribed instructions correspond to a strategy in the game.
The protocol is parameterized by a security parameter \(k\in\mathbb{N}\). The set of possible messages in the protocol, as well as its prescribed instructions, typically depend on this \(k\). Assigning for each \(k\) and each party a payoff for every outcome, a protocol naturally induces a sequence \(\Gamma^{(k)}=(H^{(k)},P^{(k)},A^{(k)},u^{(k)})\) of extensive games, where:
* \(H^{(k)}\) is the set of possible _transcripts_ of the protocol (sequences of messages exchanged between the parties). A history \(h\in H^{(k)}\) is _terminal_ if the prescribed instructions of the protocol instruct the player whose turn it is to play next to halt on input \(h\).
* \(P^{(k)}:(H^{(k)}\setminus Z^{(k)})\rightarrow\{1,2\}\) is a function that assigns a “next” player to every non-terminal history.
* \(A^{(k)}\) is a function that assigns to every non-terminal history \(h\in H^{(k)}\setminus Z^{(k)}\) a set \(A^{(k)}(h)=\{m\,:\,(h\circ m)\in H^{(k)}\}\) of possible protocol messages to player \(P^{(k)}(h)\).⁴
* \(u^{(k)}=(u_{1}^{(k)},u^{(k)}_{2})\) is a vector of payoff functions \(u^{(k)}_{i}:Z^{(k)}\rightarrow\mathbb{R}\).
A sequence \(\Gamma=\{\Gamma^{(k)}\}_{k\in\mathbb{N}}\) of games defined as above is referred to as a _computational game_.
**Remark 5.1**: In the following we will consider games played by Turing machines. Thus, actions will be represented by strings. As opposed to traditional game theory, where players are computationally unbounded, in our case the names of the actions will be significant. For example, in the One-way Permutation Game, if we encode player 1’s action \(f(x)\) by the string \(x\) for every \(x\in\{0,1\}^{k}\), then inverting the one-way permutation becomes easy for player 2. However, to avoid too much notation, we will identify actions with their string representation. The reader should keep in mind, however, that actions are always strings, and that changing the string representation of actions might be _with_ loss of generality.
### Strategic Representation of Interactive Machines
Protocols are defined in terms of _interactive Turing machines_ (ITMs) – see [8] for a formal definition. More specifically, the prescribed behavior for each player is defined via an ITM, and any possible deviation of this player corresponds to choosing a different ITM. In order to argue about the protocol in a game-theoretic manner we formalize, using game-theoretic notions, the strategic behavior implied by ITMs. We believe this formalization is necessary for our treatment or any game-theoretic analysis of ITMs, in particular because, to the best of our knowledge, it has never been done before. However, because this section somewhat departs from the main thrust of the paper, the reader may skip to Section 5.3, keeping the following (informally stated) conclusion in mind: The strategic behavior of an ITM for player \(i\) in a protocol may be seen as a collection of independent distributions on actions, one for each of player \(i\)’s histories that are reached with positive probability given the ITM of player \(i\) and some strategy profile of the other players. We refer to this collection as the behavioral reduced strategy induced by the ITM.
When considering some computational game \(\Gamma^{(k)}\) in a sequence \(\Gamma=\{\Gamma^{(k)}\}_{k\in\mathbb{N}}\) and an ITM “playing” this game (with input \(1^{k}\)), the machine does not, strictly speaking, define a strategy. Informally, the machine specifies how to play _only on histories that are not inconsistent with the specification on earlier histories in the game_. That is, an ITM for player \(i\) specifies distributions on actions for all histories on which it is player \(i\)’s turn, except those it cannot reach based on its own specification on earlier histories. This is the case, because when fixing the other player’s moves, the distribution on actions the machine plays on a history that cannot be reached is simply undefined, as we are conditioning on an event with probability \(0\). In the following, we show that the prescribed behavior of an ITM can be seen as a convex combination of _reduced strategies_ (which we call _mixed reduced strategy_), to be defined next. We then define the natural analogue of _behavioral reduced strategy_, and argue that for every mixed reduced strategy there exists a behavioral reduced strategy that is outcome-equivalent. We will eventually use behavioral reduced strategies to describe the behavior induced by ITMs.
**Definition 5.2** (Reduced strategy (adapted from [27])): _Given a game \(\Gamma=(H,P,A,u)\), a (pure) reduced strategy for player \(i\) is a function \(\sigma_{i}\) whose domain is a subset of \(\{h\in H|P(h)=i\}\) with the following properties:_
* _For every_ \(h\) _in the domain of_ \(\sigma_{i}\) _it holds that_ \(\sigma_{i}(h)\in A(h)\)_._
* \(h=(a_{1},\dots,a_{m})\) _is in the domain of_ \(\sigma_{i}\) _if and only if for any_ \(1\leq\ell\leq m-1\) _such that_ \(P(a_{1},\dots,a_{\ell})=i\) _it holds that_ \((a_{1},\dots,a_{\ell})\) _is in the domain of_ \(\sigma_{i}\) _and_ \(\sigma_{i}(a_{1},\dots,a_{\ell})=a_{\ell+1}\)_._
**Definition 5.3** (Mixed reduced strategy): _A mixed reduced strategy for player \(i\) is a distribution over reduced strategies for player \(i\)._
Given an ITM for \(\Gamma^{(k)}\), for every instance of internal randomness for that machine (i.e., a vector of coins), the induced behavior of that ITM is exactly a reduced strategy. This is the case because for every profile of pure strategies (or reduced pure strategies) of the other players, the randomness naturally defines an action for every history that is consistent with its previous actions (the sequence of these actions, together with the profile, defines the outcome of the game), and on the other hand, naturally the randomness does not define an action for histories that are not consistent with that randomness (as with that randomness the machine will never reach these histories). It follows that an ITM defines a distribution over reduced (pure) strategies, i.e., a mixed reduced strategy. We now formalize this claim.
**Definition 5.4** (Induced mixed reduced strategy of an ITM): _Let \(M\) be a probabilistic ITM for player \(i\) in the extensive game \(\Gamma\). Assume that \(M\) halts for any infinite vector of coins and any sequence of messages sent by the other players, and let \(t\) be a bound on the number of coins it reads. Let \(r\) be a (sufficiently long) coin vector for \(M\). Then the induced pure reduced strategy \(\sigma^{(r)}_{i}\) of \(M\) with randomness \(r\) is defined as follows:_
* \(h=(a_{1},\dots,a_{m})\) _is in the domain of_ \(\sigma^{(r)}_{i}\) _if and only if:_
  * \(P(a_{1},\dots,a_{m})=i\)_;_
  * _For any_ \(1\leq\ell\leq m-1\) _such that_ \(P(a_{1},\dots,a_{\ell})=i\) _it holds that_ \((a_{1},\dots,a_{\ell})\) _is in the domain of_ \(\sigma^{(r)}_{i}\) _and when_ \(M\) _with randomness_ \(r\) _participates in an interaction, conditioned on the sequence of sent messages being_ \((a_{1},\dots,a_{\ell})\) _(where_ \(a_{\ell+1}\) _is a message sent by the ITM representing player_ \(P(a_{1},\dots,a_{\ell})\) _for any_ \(1\leq\ell\leq m-1\)_), the message sent by_ \(M\) _is_ \(a_{\ell+1}\)_._⁵
* _For any_ \(h=(a_{1},\dots,a_{m})\) _in the domain of_ \(\sigma^{(r)}_{i}\)_, the action_ \(\sigma^{(r)}_{i}(a_{1},\cdots,a_{m})\) _is the message sent by_ \(M\) _with randomness_ \(r\) _conditioned on the sequence of sent messages being_ \((a_{1},\dots,a_{m})\)_._
_The mixed reduced strategy induced by \(M\) is now defined as follows: the probability assigned to any pure reduced strategy \(\sigma\) is the probability that the induced reduced strategy of \(M\) with randomness \(r\) is \(\sigma\), where \(r\) is uniformly chosen from \(U_{t}\)._
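For a toy machine whose randomness is a short coin vector, Definition 5.4 can be carried out by brute force: enumerate the coin vectors, record for each one the pure reduced strategy it induces (defined only on histories consistent with the machine’s own earlier messages), and take the uniform mixture. The machine, the history encoding, and all names below are hypothetical and purely illustrative.

```python
from collections import Counter
from itertools import product

# Toy alternating game: player 1 moves, then player 2, then player 1 again.
P2_MESSAGES = ['x', 'y']          # hypothetical messages available to player 2

def machine_M1(coins, history):
    """A toy ITM for player 1: two coins, one per move it makes."""
    if history == ():                         # round 1
        return 'A' if coins[0] == 0 else 'B'
    own_move, msg = history                   # round 3: (own round-1 move, player 2's message)
    return msg.upper() if coins[1] == 0 else own_move

def induced_pure_reduced_strategy(coins):
    """The pure reduced strategy of machine_M1 with fixed randomness `coins`:
    defined only on player-1 histories consistent with its own earlier moves."""
    strat = {(): machine_M1(coins, ())}
    first = strat[()]
    for msg in P2_MESSAGES:                   # every opponent reply is possible
        h = (first, msg)
        strat[h] = machine_M1(coins, h)
    return strat

def induced_mixed_reduced_strategy(num_coins=2):
    """Uniform mixture over pure reduced strategies, one per coin vector."""
    counts = Counter()
    vectors = list(product([0, 1], repeat=num_coins))
    for coins in vectors:
        # freeze the dict so it can be counted
        strat = tuple(sorted(induced_pure_reduced_strategy(coins).items()))
        counts[strat] += 1
    return {strat: c / len(vectors) for strat, c in counts.items()}

if __name__ == "__main__":
    for strat, prob in induced_mixed_reduced_strategy().items():
        print(prob, dict(strat))
```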
In [27] it is shown that for perfect-recall extensive games (which are the only games we will consider here), every mixed strategy has a behavioral strategy that is outcome-equivalent. (Two strategies are outcome-equivalent if for every profile of pure strategies of the other players the two strategies induce the same distribution on outcomes; a mixed strategy is a distribution on pure strategies.) Next, we define the behavioral analogue of a mixed reduced strategy, and argue that the same holds for mixed and behavioral _reduced_ strategies: for perfect-recall extensive games, every mixed reduced strategy has a behavioral reduced strategy that is outcome-equivalent.
**Definition 5.5** (Behavioral reduced strategy): _Given a game \(\Gamma=(H,P,A,u)\), a behavioral reduced strategy for player \(i\) is a collection \(\sigma_{i}=\left(\sigma_{i}(h)\right)_{h\in{\cal H}}\) of independent probability measures, where \({\cal H}\) is a subset of \(\{h\in H|P(h)=i\}\), with the following properties:_
* \(\sigma_{i}(h)\) _is a probability measure over_ \(A(h)\) _for every_ \(h\) _in_ \({\cal H}\)_._
* \(h=(a_{1},\dots,a_{m})\) _is in_ \({\cal H}\) _if and only if for any_ \(1\leq\ell\leq m-1\) _such that_ \(P(a_{1},\dots,a_{\ell})=i\) _it holds that_ \((a_{1},\dots,a_{\ell})\in{\cal H}\) _and_ \(\sigma_{i}(a_{1},\dots,a_{\ell})(a_{\ell+1})>0\)_._
**Claim 5.6**: _Every mixed reduced strategy has a behavioral reduced strategy that is outcome equivalent._
* **Proof Sketch:** Every pure reduced strategy \(\sigma_{i}\) for player \(i\) can be extended to a (full) pure strategy by assigning arbitrary values to all histories in \(\{h:P(h)=i\}\) for which \(\sigma_{i}\) is undefined. The two strategies will be outcome-equivalent, as the outcome is only affected by the consistent histories of \(\sigma_{i}\). It follows that every mixed reduced strategy can be extended to a mixed (full) strategy that is outcome-equivalent. On the other hand, every behavioral strategy \(\sigma_{i}=\left(\sigma_{i}(h)\right)_{h:P(h)=i}\) can be restricted to a behavioral reduced strategy by restricting the collection of probability measures accordingly. Again, the two strategies will be outcome-equivalent, as the distribution on outcomes is only affected by the consistent histories of \(\sigma_{i}\). Finally, as mentioned above, in [27] it is shown that for perfect-recall extensive games, every mixed strategy has a behavioral strategy that is outcome-equivalent. Thus, given some mixed reduced strategy, we extend it to a mixed strategy that is outcome-equivalent, then transform it to a behavioral strategy that is outcome-equivalent, and finally restrict the resulting behavioral strategy to an outcome-equivalent behavioral reduced strategy. \(\Box\)
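Written out for reduced strategies in the dictionary encoding of the previous sketch, the conversion assigns to each history \(h\) in the player’s domain, and each action \(a\), the total weight of pure reduced strategies whose domain contains \(h\) and that play \(a\) there, normalized by the total weight of pure reduced strategies whose domain contains \(h\). This is a minimal sketch of that construction under the stated (hypothetical) encoding, not code from the paper.

```python
from collections import defaultdict

def to_behavioral_reduced(mixed_reduced):
    """Convert a mixed reduced strategy (dict: frozen pure reduced strategy,
    given as a tuple of (history, action) pairs -> probability) into an
    outcome-equivalent behavioral reduced strategy
    (dict: history -> dict action -> probability)."""
    reach_weight = defaultdict(float)                     # weight of pure strategies whose domain contains h
    action_weight = defaultdict(lambda: defaultdict(float))
    for strat, prob in mixed_reduced.items():
        for h, a in strat:
            reach_weight[h] += prob
            action_weight[h][a] += prob
    return {h: {a: w / reach_weight[h] for a, w in action_weight[h].items()}
            for h in reach_weight}

# Feeding in the mixed reduced strategy induced by the toy machine from the
# previous sketch would give, e.g., sigma(()) = {'A': 0.5, 'B': 0.5} and
# sigma(('A', 'x')) = {'X': 0.5, 'A': 0.5}.
```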
As argued above, ITMs induce mixed reduced strategies, and by Claim 5.6, these induce behavioral reduced strategies. Thus, in the following we will model ITMs by behavioral reduced strategies. This is captured by the notion of _strategic representation_.
**Definition 5.7** (Strategic representation of an ITM): _Let \(\Gamma\) be a game and let \(i\in\{1,2\}\). Let \(M\) be an ITM for player \(i\). Assume that \(M\) halts for any infinite vector of coins and any sequence of messages sent by the other players. Let \(\sigma\) be the mixed reduced strategy induced by \(M\). Then the_ strategic representation _of \(M\) is the behavioral reduced strategy that is outcome-equivalent to \(\sigma\).⁶_
_Similarly, for a sequence of games \(\{\Gamma^{(k)}\}_{k\in\mathbb{N}}\) and an ITM \(M\) that takes a security parameter \(1^{k}\), the strategic representation of \(M\) is the sequence of strategic representations of \(M(1),M(1^{2}),M(1^{3}),\dots\)._
#### 5.2.1 \(\varepsilon\)-TFNE for Reduced Strategies
In Section 4.3 we presented our general definition of TFNE. However, that definition was framed for strategies and, following the conclusion of the previous section, we actually care about reduced strategies. To make Definition 4.5 work for reduced strategies we notice that only two small changes need to be made: We need to define the notion of a round \(R\) reduced strategy, and we need to allow the constraint sets \(T_{1}\) and \(T_{2}\) to include behavioral reduced strategies.
**Definition 5.8** (Round \(R\) reduced strategy): _Let \(\Gamma=(H,P,A,u)\) be an extensive game, let \(R\) be a round of \(\Gamma\), and let \(\sigma_{i}\) be a behavioral reduced strategy of player \(i=P(R)\). Then \(\tau=\tau(R)\) is a round \(R\) reduced strategy of player \(i\) consistent with \(\sigma_{i}\) if the following hold:_
* _When_ \(R=1\)_,_ \(\tau(1)\) _is a distribution over_ \(A(\epsilon)\)_._
* _Otherwise, there exists some behavioral reduced strategy_ \(\pi_{i}\) _of player_ \(i\) _for which_ \(\pi_{i}(j)=\sigma_{i}(j)\) _for all_ \(j\in\{1,\ldots,R\!-\!1\}\)_, and such that_ \(\pi_{i}(R)=\tau_{i}(R)\)_._
Throughout the paper, the behavioral reduced strategy \(\sigma_{i}\) with which \(\tau(R)\) is consistent will be evident from the context, and so we omit reference to this consistency requirement.
Next, we modify the definition of constraints (Definition 3.6) by allowing each constraint set \(T_{i}\) to be a subset of \(\bigotimes_{h:P(h)=i}(\Sigma_{i}(h)\cup\perp)\), where \(\sigma_{i}(h)=\perp\) if the history \(h\) is not in the domain of the reduced strategy \(\sigma_{i}\).
Finally, we observe that, following the two modifications above, Definitions 4.4 and 4.5 work for behavioral reduced strategies as well (replacing “strategy” by “behavioral reduced strategy” and “round \(R\) strategy” by “round \(R\) reduced strategy”).
### Computational Hardness in the Game-Theoretic Setting
The security of cryptographic protocols stems from the assumption on the limitation of the computational power of the players. In our strategic analysis of games, we also expect to deduce the (sequential) equilibrium from this limitation. However, because protocols are parameterized by a security parameter, a strategic analysis of protocols requires dealing with a _sequence_ of games rather than a single game. While relating to the sequence of games is crucial in order to express computational hardness (as this hardness is defined in an asymptotic manner), this raises a new difficulty: How do we extend the definition of TFNE to sequences of games?
An appealing approach might be to try to define empty threats for sequences of games. That is, one might consider the effect of deviations on the expected payoff as \(k\) goes to infinity (much like the derivation of CNE from NE). However, to the best of our understanding this approach cannot work. Loosely speaking, this is because in order to relate to empty threats one has to consider deviations in internal nodes of the game tree, and it is not clear how to define such deviations for sequences of games. Typically, the structure of the game tree changes with \(k\), so it is not clear even how to define an “internal node” in a _sequence_ of games.
Instead, our approach insists on analyzing empty threats for _individual_ games. Thus, our solution concept reflects a hybrid approach that relates to a protocol both as a family of _individual, extensive games_ and as a _sequence_ of _normal-form games_. To eliminate empty threats one must relate to the _interactive_ aspect of each _individual_ game (as this is the setting where threats are defined). In order to claim players are playing optimally under their computational constraints, one must think of the protocol as a _sequence_ of _one-shot_ games (because computational hardness is meaningful only when players are required to choose their machines in advance, and as the traditional notion of hardness is stated asymptotically).
#### 5.3.1 Strategy-filters
When considering computational games \(\Gamma=\{\Gamma^{(k)}\}_{k\in\mathbb{N}}\), the computational bounds on the players will be expressed by restricting the space of available strategies for the players. The available sequences of reduced strategies for the players will be exactly those that can be played by the ITMs that meet the computational bound on the players. In our case we will consider PPT ITMs.
While on the one hand every PPT ITM fails on cryptographic challenges for large enough values of the security parameter \(k\) (under appropriate assumptions), on the other hand, PPT ITMs can have arbitrarily large size and thus arbitrarily much information hardwired, and so for every \(k\) there is a PPT ITM that breaks the cryptographic challenges with security parameter \(k\). In our analysis, we would like to “filter” machines according to their ability to break cryptographic challenges for specific \(k\)’s, and allow using them only in games that correspond to large enough \(k\)’s, where these machines fail (and in particular, cannot use hard-wiring to solve the cryptographic challenges).
To this end, we define the notion of a _strategy-filter_. For each value \(k\) of the security parameter and each value \(\varepsilon\), a strategy-filter maps an ITM \(M\) to \(\bot\) if \(M(1^{k})\) violates level of security \(\varepsilon\), and to its strategic representation otherwise.
**Definition 5.9** (Strategy-filter): _Let \(\Gamma=\{\Gamma^{(k)}\}_{k\in\mathbb{N}}\) be a computational game and let \(i\) be a player. A strategy-filter is a sequence \(F_{i}=\{F_{i}^{(k)}:{\cal M}\times[0,1]\rightarrow\Sigma^{(k)}_{i}\cup\{\bot\} \}_{k\in\mathbb{N}}\) such that for every ITM \(M\), every \(k\in\mathbb{N}\) and every \(\varepsilon\in[0,1]\), it holds that either \(F_{i}^{(k)}(M,\varepsilon)=\bot\), or \(F_{i}^{(k)}(M,\varepsilon)=\sigma_{i}^{(k)}\), where \(\sigma_{i}^{(k)}\) is the strategic representation of the machine \(M(1^{k},\cdot)\)._
A strategy-filter is meaningful if it allows us to reason about all reduced strategies that are considered to be feasible, in our case PPT implementable reduced strategies, and in particular does not filter them out. This is captured in the following definition.
**Definition 5.10** (PPT-covering filter): _A strategy-filter \(F_{i}\) is said to be PPT-covering if for every PPT ITM \(M\) and any positive polynomial \(p(\cdot)\) there exists \(k_{0}\) such that for all \(k\geq k_{0}\), it holds that \(F_{i}^{(k)}(M,1/p(k))\neq\bot\)._
Typically, protocols have the following security guarantee (under computational assumptions): for every \(i\), every PPT ITM \(M\) of \(P_{i}\) and every polynomial \(p(\cdot)\), there exists \(k_{0}\) such that for any \(k\geq k_{0}\), the ITM \(M\) does not break level of security \(1/p(k)\) in the protocol with security parameter \(k\). Such a protocol will naturally have a PPT-covering filter, where if \(F_{i}^{(k)}(M,\varepsilon)\neq\bot\) then the reduced strategy \(F_{i}^{(k)}(M,\varepsilon)\) “does not break level of security \(\varepsilon\) in the game \(\Gamma^{(k)}\).”
#### 5.3.2 Tractable Reduced Strategies
As reflected above, the asymptotic nature of defining security does not determine any level of security for any \(k\). Rather, it dictates that any PPT ITM “eventually fails in violating \(1/p(k)\) security” for any \(p(\cdot)\) (where “eventually” means for large enough \(k\)). Thus, we follow the same approach in our game theoretic analysis: roughly speaking, our solution concept requires that \(\varepsilon\)-security will imply \(\varepsilon\)-stability for any \(k\) (rather than requiring a particular level of stability for each \(k\)). More formally, we require that for any \(k\) and any \(\varepsilon\), the game induced by the protocol with security parameter \(k\) be in \(\varepsilon\)-TFNE, given that the available strategies for the players are those that do not break level of security \(\varepsilon\). Thus, for any pair \((k,\varepsilon)\) we will consider the game \(\Gamma^{(k)}\) with available reduced strategies restricted to those that guarantee \(\varepsilon\)-security. The following definition derives from a PPT-covering filter, for each such game, the set of available reduced strategies for each player.
**Definition 5.11** (Tractable reduced strategies): _Let \(F_{i}\) be a PPT-covering filter. For every \(k\in\mathbb{N}\) and \(\varepsilon\in[0,1]\) we define the set \(T^{(k)}_{i,\varepsilon}(F_{i})\) of \((k,\varepsilon)\)-tractable reduced strategies for player \(i\in\{1,2\}\) as_
\[\{F_{i}^{(k)}(M,\varepsilon)\,|\,M\text{ is a PPT ITM and }F_{i}^{(k)}(M,\varepsilon)\neq\bot\}.\]
Whenever \(F_{i}\) will be understood from the context, we will write \(T^{(k)}_{i,\varepsilon}\) to mean \(T^{(k)}_{i,\varepsilon}(F_{i})\).
### Computational TFNE
We can now define our computational variant of TFNE. Roughly, the definition requires that there exist a family of PPT compatible constraints such that for any \(k\) and any \(\varepsilon\), the strategies played by the machines on input security parameter \(k\) are in \(\varepsilon\)-TFNE in the game indexed by \((k,\varepsilon)\).
**Definition 5.12** (Computational TFNE): _Let \(\Gamma\) be a computational game. A pair of PPT machines \((M_{1},M_{2})\) is said to be in a computational threat-free Nash equilibrium (CTFNE) of \(\Gamma\) if there exists a pair of PPT-covering filters \((F_{1},F_{2})\) such that for every \(k,\varepsilon\) for which \(F_{1}^{(k)}(M_{1},\varepsilon)\) and \(F_{2}^{(k)}(M_{2},\varepsilon)\) are tractable, the profile \((F_{1}^{(k)}(M_{1},\varepsilon),F_{2}^{(k)}(M_{2},\varepsilon))\) constitutes an \(\varepsilon\)-TFNE in the \((T^{(k)}_{1,\varepsilon},T^{(k)}_{2,\varepsilon})\)-constrained version of \(\Gamma^{(k)}\)._
The expressive power of Definition 5.12 is illustrated through the following claim, which refers to Example 1.2. We omit the proof, and proceed to more interesting applications in Sections 6 and 7.
**Claim 5.13**: _In the modified one-way permutation game,_
* _the strategy profile in which_ \(P_{1}\) _plays 0 and_ \(P_{2}\) _plays 0 after a history of 0 and randomly otherwise is a CTFNE, and_
* _any profile in which_ \(P_{2}\) _plays randomly after history 0 is not a CTFNE._
We note that part (ii) of the claim can easily be extended to profiles in which, after history 0, \(P_{2}\) plays 0 with probability at most \(1-1/p(k)\) for any polynomial \(p\).
## 6 The Coin-Flipping Game
In the following we describe a classic protocol for coin-flipping, formulated as a sequence of games (parameterized by a security parameter \(k\)). We then show that the prescribed behavior according to that protocol constitutes a CTFNE in the sequence of games.
Following is an informal description of the sequence of games. We assume some perfectly binding commitment scheme with the following properties (see Appendix A for a formal definition):
* For any security parameter \(k\) (which is a common input to the sender and receiver), the “commit” phase consists of one message from the sender to the receiver, denoted \({\sf com}^{(k)}\), which is of length bounded by \(p(k)\) for some polynomial \(p\).
* For any PPT ITM, the advantage in guessing the committed value given the aforementioned message is negligible in \(k\).
The description defines the legal messages in each game. Recall that at any phase where a player is supposed to send a message, the move “abort” is legal (and well-defined). Note also that any illegal message is interpreted as abort by the other player. The game \(\Gamma^{(k)}\) is defined as follows:
1. Player 1 chooses a string \(c\) of length at most \(p(k)\) and sends it to player 2.
2. Player 2 chooses a bit \(r_{2}\), and sends \(r_{2}\) to player 1.
3. Player 1 does one of the following: (1) sends to player 2 \({\sf decom}\), where \({\sf decom}\) is a legal decommitment to \(c\) revealing that the committed value was \(1-r_{2}\) (in that case the payoffs are (1,0)); or (2) aborts (in that case the payoffs are (0,1)).
Any other abort results in the aborting player receiving payoff 0, and the other player receiving 1.
We now describe a pair of interactive ITMs for the game \(\Gamma^{(k)}\) that form a CTFNE. We describe them interleaved, in the form of a protocol. We denote the ITMs playing the strategies of \(P_{1},P_{2}\) by \(M_{1},M_{2}\), respectively.
1. Player 1 chooses a random bit \(r_{1}\), and sends \(c={\sf com}^{(k)}(r_{1})\) to player 2 (player 1 also obtains \({\sf decom}\), which is a legal decommitment to \(c\)).
2. Player 2 chooses a random bit \(r_{2}\), and sends \(r_{2}\) to player 1.
3. If \(r_{1}\neq r_{2}\), player 1 sends \({\sf decom}\) to player 2. Else, player 1 aborts.
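A minimal executable sketch of the two machines is given below. It substitutes a hash-based commitment (SHA-256 of a random nonce concatenated with the bit) for the perfectly binding scheme assumed above; such a commitment is only computationally binding and hiding, so the sketch illustrates the message flow and payoffs rather than the exact cryptographic assumptions, and all function names are ours.

```python
import hashlib
import secrets

def commit(bit: int):
    """Toy commitment: com = SHA-256(nonce || bit); decom = (nonce, bit).
    Stands in for the perfectly binding scheme assumed in the text."""
    nonce = secrets.token_bytes(16)
    com = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return com, (nonce, bit)

def verify(com: str, decom) -> bool:
    nonce, bit = decom
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == com

def coin_flip_round():
    """One execution with both players following M1 and M2. Returns the payoff
    pair: (1, 0) if player 1 decommits to 1 - r2, and (0, 1) if it aborts."""
    # Step 1: player 1 commits to a random bit r1.
    r1 = secrets.randbelow(2)
    com, decom = commit(r1)
    # Step 2: player 2, seeing only com, sends a random bit r2.
    r2 = secrets.randbelow(2)
    # Step 3: player 1 decommits iff r1 != r2, otherwise aborts.
    if r1 != r2 and verify(com, decom) and decom[1] == 1 - r2:
        return (1, 0)
    return (0, 1)

if __name__ == "__main__":
    n = 10000
    wins = sum(coin_flip_round()[0] for _ in range(n))
    print("player 1 win frequency:", wins / n)   # ~0.5, the equilibrium payoff
```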
**Theorem 6.1**: _The pair \((M_{1},M_{2})\) forms a CTFNE for the protocol above._
* **Proof:** First we define the functions \(F_{1}^{(k)}\) and \(F_{2}^{(k)}\). For any \(k\), the function \(F_{1}^{(k)}\) never maps to \(\bot\) (this, roughly speaking, reflects the fact that the protocol is secure against an all-powerful player 1). For \(F_{2}\) we use the following rule: \(F_{2}^{(k)}(M,\varepsilon)=\bot\) if and only if “for security parameter \(k\), the PPT ITM \(M\) guesses the committed value with advantage greater than \(\varepsilon\).” More formally, \(F_{2}^{(k)}(M,\varepsilon)=\bot\) if and only if, when player 1 sends as the first message a random commitment of a random bit (i.e., chooses a random bit and then uses the aforementioned commitment scheme with uniformly random coins), the message with which \(M\) reacts is the committed value of player 1 with probability greater than \(1/2+\varepsilon\). The fact that \(F_{1}\) is PPT-covering is straightforward. The fact that \(F_{2}\) is PPT-covering follows directly from the security of the commitment scheme: for any positive polynomial \(p\), every PPT ITM has advantage smaller than \(1/p(k)\) in guessing the committed value with security parameter \(k\), for large enough \(k\).

Next, we need to show that for every \(k,\varepsilon\) for which \(F_{1}^{(k)}(M_{1},\varepsilon)\not=\perp\) and \(F_{2}^{(k)}(M_{2},\varepsilon)\not=\perp\), the profile \((F_{1}^{(k)}(M_{1},\varepsilon),F_{2}^{(k)}(M_{2},\varepsilon))\) constitutes an \(\varepsilon\)-TFNE in the \(T=(T^{(k)}_{1,\varepsilon},T^{(k)}_{2,\varepsilon})\)-constrained version of \(\Gamma^{(k)}\). Let \(k,\varepsilon\) be as above, and let \(\sigma=(\sigma_{1},\sigma_{2})=(F_{1}^{(k)}(M_{1},\varepsilon),F_{2}^{(k)}(M_{2},\varepsilon))\).

We first show that \(\sigma\) constitutes an \(\varepsilon\)-NE in the \(T\)-constrained version of \(\Gamma^{(k)}\). The strategy \(\sigma_{1}\) chooses a random commitment of a random bit in round 1, and in round 3 decommits whenever it can. It is easy to see that this is optimal, as player 2 always guesses the committed value with probability \(1/2\), and so there is no strategy for player 1 for which he can decommit with probability greater than \(1/2\) in round 3. It is also easy to see that player 2’s strategy is an \(\varepsilon\)-best response, as any PPT ITM \(M_{2}\) for player 2 for which \(F_{2}^{(k)}(M_{2},\varepsilon)\not=\perp\) does not guess with advantage more than \(\varepsilon\). We conclude that \(\sigma\) constitutes an \(\varepsilon\)-NE in the \(T\)-constrained version of the game \(\Gamma^{(k)}\).

Next, we show that no player is facing an \(\varepsilon\)-threat with respect to \(\sigma\) at any round of the \(T\)-constrained version of \(\Gamma^{(k)}\). Note that for both players, the expected payoff according to \(\sigma\) is \(1/2\). Suppose some player is facing an \(\varepsilon\)-threat with respect to \(\sigma\). We divide the proof into cases.

Case 1 – \(P_{1}\) is facing an \(\varepsilon\)-threat in round 3: In order for \(P_{1}\) to improve in Step 3 by more than \(\varepsilon\), it must play a round 3 strategy \(\tau(3)\) in which he sends \({\sf decom}\) that proves that \(r_{1}\neq r_{2}\) with larger probability than in \(\sigma\). However, since in \(\sigma\) player 1 sends \({\sf decom}\) whenever \(r_{1}\neq r_{2}\) (and otherwise no such \({\sf decom}\) exists, since the commitment is perfectly binding), we conclude that no such \(\tau(3)\) exists.

Case 2 – \(P_{2}\) is facing an \(\varepsilon\)-threat in round 2: According to the constraints, \(P_{2}\) cannot guess \(r_{1}\) with probability greater than \(1/2+\varepsilon\). So in order for him to improve by _more_ than \(\varepsilon\), it must be the case that he has some round 2 strategy \(\tau(2)\), such that in any \(\varepsilon\)-threat-free continuation in \(\mathrm{Cont}(\sigma(1),\tau(2))\) player 1 aborts with positive probability conditioned on \(r_{1}\not=r_{2}\). However, any continuation where \(P_{1}\) aborts with zero probability conditioned on \(r_{1}\not=r_{2}\) (and sends \({\sf decom}\)) is \(\varepsilon\)-threat-free, and so there is no deviation for \(P_{2}\) for which he improves on _all_ \(\varepsilon\)-threat-free continuations.

Case 3 – \(P_{1}\) is facing an \(\varepsilon\)-threat in round 1: Since \(\sigma\) is \(\varepsilon\)-threat-free on round 1, if \(P_{1}\) is threatened in round 1 then he has a round 1 strategy \(\tau(1)\) so that for all \(\varepsilon\)-threat-free profiles in \(\mathrm{Cont}(\tau(1))\) his expected payoff is greater than \(1/2+\varepsilon\). Consider the profile \(\sigma^{\prime}=(\tau(1),\sigma(2),\sigma(3))\). This profile gives both players an expected payoff of \(1/2\) (assuming \(\tau(1)\) aborts with probability 0, which is clearly optimal), and is \(\varepsilon\)-threat-free on round 2 (by the same argument as Case 1 above). If \(\sigma^{\prime}\) is \(\varepsilon\)-threat-free on round 1 as well, then \(P_{1}\) does not improve by more than \(\varepsilon\) using the deviation \(\tau(1)\). If \(\sigma^{\prime}\) is not \(\varepsilon\)-threat-free on round 1, then in any \(\varepsilon\)-threat-free profile in \(\mathrm{Cont}(\tau(1))\) player 2’s payoff must be greater than \(1/2+\varepsilon\). However, this means that \(P_{1}\)’s payoff is less than \(1/2\), and again he does not improve using the deviation \(\tau(1)\). Hence, the postulated \(\tau(1)\) does not exist, and so \(P_{1}\) is not facing an \(\varepsilon\)-threat in round 1.
## 7 Correlated Equilibria Without a Mediator
In one of the first papers to consider the intersection of game theory and cryptography, Dodis, Halevi and Rabin proposed an appealing methodology for implementing a correlated equilibrium in a 2-player normal-form game without making use of a mediator [6]. Under standard hardness assumptions, they showed that for any 2-player normal-form game \(\Gamma\) and any correlated equilibrium \(\sigma\) for \(\Gamma\), there exists a new 2-player extensive “extended game” \(\Gamma^{\prime}\) and a CNE \(\sigma^{\prime}\) for \(\Gamma^{\prime}\), such that \(\sigma\) and \(\sigma^{\prime}\) achieve the same payoffs for the players. (Strictly speaking \(\Gamma^{\prime}\) is a sequence of games indexed by a security parameter, and a CNE is defined for a sequence.) However, as already pointed out by Dodis et al., their protocol lacks a satisfactory analysis of its sequential nature – the resulting “extended game” is an extensive game, but the solution concept they use, CNE, is not strong enough for these games.
In the following, we extend the definition of CTFNE to allow handling this setting (that is, we define CTFNE for extensive games with simultaneous moves at the leaves), give some justification for our new definition, and then provide a new protocol for removing the mediator that achieves CTFNE in a wide class of correlated equilibria that are in the convex hull of Nash equilibria (see definition below).
### The Dodis-Halevi-Rabin Protocol
The “extended game” \(\Gamma^{\prime}\) consists of 2 phases. In the first phase (“preamble phase”), the players execute a protocol for sampling a pair under the distribution \(\sigma\), and in the second phase each player plays the action implied by the sampled pair, in the original normal-form game. The CNE of the extended game is the profile that consists of each player playing the protocol honestly in the first phase, and then in the second phase, if the other player did not abort, choosing the action implied by the protocol’s result, and otherwise “punishing” the other player by choosing a “min-max” action (i.e., choosing an action minimizing the utility resulting from the other player’s best response).
This profile is indeed a CNE because an efficient player can achieve only a negligible advantage by trying to break the cryptography in the first phase, cannot achieve any advantage by aborting in the first phase (as this minimizes its best possible payoff in the second phase), and cannot gain any advantage in the expectation of the payoff by deviating in the second phase, because the players are playing a pair of actions from a correlated equilibrium.
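The two-phase structure can be summarized in a short sketch. This is our illustration rather than the DHR protocol itself: `run_preamble` stands in for the cryptographic sampling protocol, and the payoff matrix, helper names, and abort handling are hypothetical placeholders.

```python
import random

# Hypothetical 2x2 bimatrix game: U[player][a1][a2] is that player's payoff.
U = [
    [[4, 1], [5, 2]],   # player 1 (made-up numbers)
    [[4, 5], [1, 2]],   # player 2
]

def best_response(player: int, other_action: int) -> int:
    if player == 1:
        return max(range(2), key=lambda a1: U[0][a1][other_action])
    return max(range(2), key=lambda a2: U[1][other_action][a2])

def min_max_action(punished: int) -> int:
    """Punisher's action minimizing the punished player's best-response payoff."""
    if punished == 1:   # chosen by player 2
        return min(range(2), key=lambda a2: max(U[0][a1][a2] for a1 in range(2)))
    return min(range(2), key=lambda a1: max(U[1][a1][a2] for a2 in range(2)))

def run_preamble(correlated_pairs):
    """Placeholder for the first ('preamble') phase: jointly sample a pair."""
    return random.choice(correlated_pairs)

def extended_game(correlated_pairs, aborter=None):
    pair = None if aborter else run_preamble(correlated_pairs)
    if pair is not None:
        a1, a2 = pair                                   # play the sampled pair
    elif aborter == 1:
        a2 = min_max_action(punished=1)                 # player 2 punishes player 1
        a1 = best_response(player=1, other_action=a2)
    else:
        a1 = min_max_action(punished=2)
        a2 = best_response(player=2, other_action=a1)
    return U[0][a1][a2], U[1][a1][a2]

print(extended_game([(0, 0), (1, 1)]))   # honest run of a CE supported on two profiles
```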
### TFNE for Games with Simultaneous Moves at the Leaves
The definition of an extensive game with simultaneous moves is similar to the definition of an ordinary extensive game. The main difference is that now the function \(P\) maps to (nonempty) sets of players rather than to single players. The definition of history is then changed to a sequence of sets of actions rather than a sequence of actions, and the definitions of a strategy and a payoff function are both also changed accordingly. For a formal definition see Osborne and Rubinstein [27].
In order to adjust our definition for extensive games with simultaneous moves, we notice that when a player deviates on a history with a simultaneous move, he cannot expect the other to react to this deviation (because they both play at the same time). However, in order to argue that a profile is rational, we still need to require that for every simultaneous move in the equilibrium support, each player is playing a “best response” given the other player’s prescribed behavior. This means the prescribed behavior for the players should form some kind of equilibrium for normal-form games. In our case, the prescribed behavior will form a NE.

The question of what a CTFNE profile should prescribe in off-equilibrium-support histories is more delicate: Clearly, in order to claim that the profile is “rational,” again we need some kind of equilibrium for normal-form games. In our case the only deviation will be prematurely aborting without completing the preamble phase, which leads to the original normal-form game without agreeing on a sampled pair. In this case one can argue that after one player has aborted, the other (non-aborting) player cannot assume the aborting player will play his prescribed behavior in the simultaneous move (as he is already not following his prescribed behavior). However, we argue that it is in fact still rational to assume the aborting player will play his prescribed behavior. The justification for this claim is essentially the same as the justification for the rationality of NE. Once there is a prescribed behavior that is a NE, each player knows the other has no incentive to deviate, and so he also has no incentive to deviate. The essential difference between a deviation in an extensive game and a deviation in a simultaneous move is that in the former, once a player has deviated, the other player is facing a fact. He now has to readjust his behavior according to this deviation. However, in the latter, there is no point for a player to deviate from the prescribed NE, because the other player will not know about this deviation prior to choosing his move (if at all). Thus, for terminal leaves that are off-equilibrium-support (i.e., in the original normal-form game that follows an abort of some player), we claim it is sufficient for a CTFNE to prescribe a NE as well.
The bottom line of this discussion is that players cannot assume other players will deviate from any prescribed NE in any terminal leaf. Thus, our new definition of TFNE for extensive games with simultaneous moves at the leaves (abbreviated GSML) is essentially the same as the original definition, except that (i) we require a profile in TFNE to prescribe a NE in any terminal leaf, and (ii) in the definition of a threat we do not allow a player to assume the other will deviate from his strategy in any NE at a terminal leaf. In order to formally modify our definition of TFNE to achieve (ii), essentially we would need to define the only threat-free continuation on a leaf to be the one that assigns to the players the actions in the prescribed NE (which expresses the idea that a player is not allowed to assume the other will deviate from his strategy in any NE).
However, we adopt an equivalent, simpler convention. Given a GSML \(\Gamma\) and a profile \(\sigma\) that assigns a NE at every simultaneous move, we look at a slightly modified game \(\Gamma^{\prime}\): All simultaneous moves are removed, and instead at each leaf where a simultaneous move was removed each player is assigned his expected payoff in the corresponding NE for that leaf. Note that the modified game is now a regular extensive game with _no_ simultaneous moves. We then “prune” the strategy profile to remove all the distributions on actions on all simultaneous leaves and denote the resulting profile \(\sigma^{\prime}\). We say that \(\sigma\) is a TFNE in \(\Gamma\) if \(\sigma^{\prime}\) is a TFNE in \(\Gamma^{\prime}\). We call \(\Gamma^{\prime}\) and \(\sigma^{\prime}\) the _pruned representation_ of \(\Gamma\) and \(\sigma\).
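As a small illustration (ours; the tree encoding and names are hypothetical), pruning amounts to one tree traversal that replaces every simultaneous-move leaf, together with its assigned NE, by a terminal node whose payoff is that NE's expected payoff pair.

```python
def ne_expected_payoffs(ne, utilities):
    """Expected payoffs of a (possibly mixed) NE given as two mixed strategies."""
    p, q = ne  # p mixes over player 1's actions, q over player 2's actions
    u1 = sum(p[i] * q[j] * utilities[0][i][j] for i in range(len(p)) for j in range(len(q)))
    u2 = sum(p[i] * q[j] * utilities[1][i][j] for i in range(len(p)) for j in range(len(q)))
    return (u1, u2)

def prune(node):
    """Collapse simultaneous-move leaves into terminal payoffs (the pruned representation)."""
    if node["kind"] == "terminal":
        return node
    if node["kind"] == "simultaneous_leaf":
        payoff = ne_expected_payoffs(node["assigned_ne"], node["utilities"])
        return {"kind": "terminal", "payoff": payoff}
    # internal node of the extensive game: recurse on the children
    return {"kind": "internal",
            "player": node["player"],
            "children": {a: prune(child) for a, child in node["children"].items()}}

# Example: a leaf whose assigned NE is the pure profile (action 0, action 0).
leaf = {"kind": "simultaneous_leaf",
        "assigned_ne": ([1.0, 0.0], [1.0, 0.0]),
        "utilities": [[[2, 0], [0, 1]], [[1, 0], [0, 2]]]}
print(prune(leaf))   # -> {'kind': 'terminal', 'payoff': (2.0, 1.0)}
```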
The definition of CTFNE for GSML is derived from the above definition of TFNE for GSML, similarly to the derivation of CTFNE from TFNE in the non-simultaneous case.
**A note on the strength of our definition.** It seems that for general GSMLs our definition is too strong. The reason is that in certain cases it is computationally intractable for the players to play the prescribed NE in every leaf (it is easy to construct simple sequences of games where one cannot assign tractable Nash equilibria at all leaves). While we do not yet know how to relax our definition to apply to these cases, we believe our definition, when met, is sufficient.
### Our Protocol
For a non-trivial class of correlated equilibria, we show how to modify the DHR protocol to achieve CTFNE. Our basic idea is to use Nash equilibria as “punishments” for aborting players. That is, if there is a NE that assigns to a player a payoff at most his expected payoff when not aborting, then assigning this NE in case he aborts serves as a punishment and yields that the player has no incentive to abort. In the following we characterize a family of correlated equilibria for which we can use the aforementioned punishing technique, and prove that for this family we can remove the mediator while achieving CTFNE.
We say that a correlated equilibrium \(\pi\) is a _convex combination of Nash equilibria_ if \(\pi\) is induced by a distribution on (possibly mixed) Nash equilibria. (The set of such distributions is sometimes referred to as the _convex hull of Nash equilibria_.) Note that any such distribution is a correlated equilibrium (CE), but the converse is not true.
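As a concrete (and entirely standard) example of ours, not taken from the paper: in the Battle of the Sexes, the distribution that puts probability \(1/2\) on each of the two pure Nash equilibria is a correlated equilibrium lying in the convex hull of Nash equilibria. The short check below verifies the CE obedience inequalities directly.

```python
# Battle of the Sexes: U[a1][a2] = (payoff to player 1, payoff to player 2).
U = [[(2, 1), (0, 0)],
     [(0, 0), (1, 2)]]

# Probability 1/2 on each of the two pure Nash equilibria (0,0) and (1,1).
pi = {(0, 0): 0.5, (1, 1): 0.5}

def is_correlated_equilibrium(pi, U, tol=1e-9):
    """Check the CE obedience constraints for every recommendation/deviation pair."""
    for player in (0, 1):
        for recommended in (0, 1):
            for deviation in (0, 1):
                gain = 0.0
                for (a1, a2), prob in pi.items():
                    mine = a1 if player == 0 else a2
                    if mine != recommended:
                        continue
                    if player == 0:
                        gain += prob * (U[deviation][a2][0] - U[a1][a2][0])
                    else:
                        gain += prob * (U[a1][deviation][1] - U[a1][a2][1])
                if gain > tol:          # deviating from the recommendation pays off
                    return False
    return True

print(is_correlated_equilibrium(pi, U))   # True: it is a convex combination of NEs
```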
Let \(\pi\) be a correlated equilibrium for a two-player game \(\Gamma\) that is a convex combination of a set \(N\) of NEs. We say that \(\pi\) is _weakly Pareto optimal_ if there does not exist a different CE \(\rho\) in the convex hull of \(N\) for which both \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\rho))]>\mathop{\mathrm{E}} \displaylimits[u_{1}(O(\pi))]\) and \(\mathop{\mathrm{E}}\displaylimits[u_{2}(O(\rho))]>\mathop{\mathrm{E}} \displaylimits[u_{2}(O(\pi))]\).
We say that a distribution is _samplable_ if there exists a probabilistic TM that halts on every infinite randomness vector, and can sample it. This is equivalent to requiring that all probabilities have a finite binary expansion, i.e., are dyadic rationals (assuming we work over \(\{0,1\}\)). Note that every distribution can be approximated arbitrarily accurately by a samplable distribution.
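For instance (our illustration), a distribution whose probabilities share the common denominator \(2^{\ell}\) can be sampled exactly with \(\ell\) unbiased coin flips, which is precisely the form exploited in the proof of Theorem 7.1 below.

```python
import secrets

def sample_dyadic(outcomes, weights, ell):
    """Sample from a distribution with probabilities weights[i] / 2**ell,
    using exactly `ell` fair coin flips (every probability is dyadic)."""
    assert sum(weights) == 2 ** ell
    r = secrets.randbits(ell)            # uniform in {0, ..., 2**ell - 1}
    for outcome, w in zip(outcomes, weights):
        if r < w:
            return outcome
        r -= w
    raise AssertionError("unreachable when the weights sum to 2**ell")

# Example: Pr[A] = 3/8, Pr[B] = 1/8, Pr[C] = 4/8, with ell = 3.
print(sample_dyadic(["A", "B", "C"], [3, 1, 4], ell=3))
```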
**Theorem 7.1**: _Assume there exists a non-interactive computationally binding commitment scheme. Let \(\pi\) be a weakly Pareto optimal correlated equilibrium for a two-player game \(\Gamma\) that is a samplable convex combination \(\Pi\) of some set of samplable Nash equilibria. Then there exists an extended extensive game and a profile that achieves the same expected payoffs as \(\pi\) and is a CTFNE._
* **Proof:** Since \(\Pi\) is samplable, the common denominator of all probabilities in \(\Pi\) is a power of two. Thus, we can assume \(\Pi\) is a _uniform_ distribution on a sequence of Nash equilibria that may contain repetitions, where the length of the sequence is a power of two. Let \(2^{\ell}\) be the length of that sequence, and let \((\pi_{0^{\ell}},\dots,\pi_{1^{\ell}})\) be that sequence. Note that the distribution \(\pi\) can now be generated by first choosing uniformly at random a string \(r\) in \(\{0,1\}^{\ell}\), and then choosing a pair of actions according to \(\pi_{r}\). Let \(\widehat{\sigma}^{i}\) be the NE that assigns the worst payoff for \(P_{i}\) (this value represents the “severest punishment” for player \(i\)). Our protocol embeds a 2-party string sampling protocol, which is a simple generalization of the Blum coin flipping protocol [5]: it simply runs the Blum protocol in parallel a fixed number of times. This protocol, in turn, relies on a perfectly binding commitment scheme as in Section 6, whose formal definition can be found in Appendix A. As in Section 6, we describe the two ITMs that form the protocol in an interleaved manner. We denote the ITMs playing the strategies of \(P_{1},P_{2}\) by \(M_{1},M_{2}\), respectively.

* Round 1: Player 1 chooses uniformly at random a string \(r=(r_{1},\dots,r_{\ell})\) from \(\{0,1\}^{\ell}\), and sends \(c=(c_{1}={\sf com}^{(k)}(r_{1}),\dots,c_{\ell}={\sf com}^{(k)}(r_{\ell}))\) to player 2 (player 1 also obtains \(({\sf decom}_{1},\dots,{\sf decom}_{\ell})\), where \({\sf decom}_{i}\) is a legal decommitment with respect to \(c_{i}\) and \(r_{i}\)).

* Round 2: If player 1 aborted, the assigned NE is \(\widehat{\sigma}^{1}\). Else, player 2 chooses a uniformly random string \(r^{\prime}=(r^{\prime}_{1},\dots,r^{\prime}_{\ell})\) from \(\{0,1\}^{\ell}\), and sends \(r^{\prime}\) to player 1.

* Round 3: If player 2 aborted, the assigned NE is \(\widehat{\sigma}^{2}\). Else, player 1 sends the message \(((r_{1},{\sf decom}_{1}),\dots,(r_{\ell},{\sf decom}_{\ell}))\).

* If player 1 aborted, the assigned NE is \(\widehat{\sigma}^{1}\). Else, player 2 verifies that \({\sf decom}_{i}\) is a legal decommitment with respect to \(c_{i}\) and \(r_{i}\) for \(1\leq i\leq\ell\). If the verification fails (which is equivalent to an abort of player 1, as it means player 1 sent an illegal message), the assigned NE is \(\widehat{\sigma}^{1}\). Else, the assigned NE is \(\pi_{r\oplus r^{\prime}}\) (where \(\oplus\) is bitwise exclusive-or).

**Lemma 7.2**: _The pair \((M_{1},M_{2})\) forms a CTFNE for the protocol above._

* **Proof:** Let \(\{\tilde{\Gamma}^{(k)}\}_{k\in\mathbb{N}}\) be the sequence of games induced by the protocol. Denote the pruned representation of \(\tilde{\Gamma}^{(k)}\) by \(\Gamma^{(k)}\). Let \(\tilde{\sigma}^{(k)}_{1},\tilde{\sigma}^{(k)}_{2}\) be the strategies of \(P_{1},P_{2}\) in the protocol with security parameter \(k\), and let \(\sigma^{(k)}_{1},\sigma^{(k)}_{2}\) be their pruned representations. Let \(\sigma^{(k)}=(\sigma^{(k)}_{1},\sigma^{(k)}_{2})\). We prove that \(\{\sigma^{(k)}\}\) is a CTFNE in \(\{\Gamma^{(k)}\}\), which, by the discussion of Section 7.2, implies that \(\{\tilde{\sigma}^{(k)}\}\) is a CTFNE in \(\{\tilde{\Gamma}^{(k)}\}\). First we define the functions \(F_{1}^{(k)}\) and \(F_{2}^{(k)}\).
For any \(k\), the function \(F_{1}^{(k)}\) never maps to \(\bot\) (this, roughly speaking, reflects the fact that the protocol is secure against an all-powerful player 1, which follows from the perfect binding property of the commitment scheme). For \(F_{2}\) we use the following rule: \(F_{2}^{(k)}(M,\varepsilon)=\bot\) if and only if \[\mathop{\mathrm{E}}\displaylimits[u_{2}^{(k)}(O(\sigma_{1}^{(k)},\sigma^{(k)}_{M}))]\geq\mathop{\mathrm{E}}\displaylimits[u_{2}^{(k)}(O(\sigma^{(k)}))]+\varepsilon,\] (1) where \(\sigma^{(k)}_{M}\) is the strategic representation of machine \(M\) and \(\sigma_{1}^{(k)}\) is the strategic representation of machine \(M_{1}\), both with security parameter \(k\). In other words, \(P_{2}\) cannot unilaterally \(\varepsilon\)-improve in the \((T^{(k)}_{1,\varepsilon},T^{(k)}_{2,\varepsilon})\)-constrained version of \(\Gamma^{(k)}\). The fact that \(F_{1}\) is PPT-covering is straightforward. The fact that \(F_{2}\) is PPT-covering follows from the security of the commitment scheme, as we prove next.

**Claim 7.3**: _The strategy-filter \(F_{2}\) is PPT-covering._

* * **Proof:** Suppose \(F_{2}\) is not PPT-covering. Then from (1) there is a PPT ITM \(M\) and a polynomial \(p\) such that \[\mathop{\mathrm{E}}\displaylimits[u_{2}^{(k)}(O(\sigma^{(k)}_{1},\sigma_{M}^{(k)}))]\geq\mathop{\mathrm{E}}\displaylimits[u_{2}^{(k)}(O(\sigma^{(k)}_{1},\sigma^{(k)}_{2}))]+1/p(k)\] (2) for infinitely many \(k\)’s, where \(\sigma_{M}^{(k)}\) is the strategic representation of the machine \(M\) with security parameter \(k\). First, we show that we can assume \(M\) does not abort in round 2. An abort of \(P_{2}\) leads to a leaf with \(\widehat{\sigma}^{2}\). But since \(\pi\) is a convex combination of NEs, following the protocol would mean playing a NE. Since by definition \(\widehat{\sigma}^{2}\) is the worst NE for player 2, it follows that the machine \(M^{\prime}\) that behaves the same as \(M\), but whenever \(M\) aborts, \(M^{\prime}\) instead follows the protocol (i.e., acts like \(M_{2}\)) does at least as well as \(M\). The machine \(M^{\prime}\) is well-defined, as the reduced strategy \(\sigma^{(k)}_{2}\) is in fact a full strategy, and is defined everywhere. Since the payoffs in \(\{\Gamma^{(k)}\}\) are bounded in \(k\) and the number of NEs in \(\pi\) is fixed in \(k\), by (2) there exists a polynomial \(p\) and (at least one) \(s\in\{0,1\}^{\ell}\) such that for infinitely many \(k\)’s \[\Pr[O(\sigma^{(k)}_{1},\sigma_{M}^{(k)})=\pi_{s}]-\Pr[O(\sigma^{(k)}_{1},\sigma_{2}^{(k)})=\pi_{s}]\geq 1/{p(k)}.\] It follows that for infinitely many \(k\)’s \[\Pr_{(\sigma^{(k)}_{1},\sigma_{M}^{(k)})}[r\oplus r^{\prime}=s]-\Pr_{(\sigma^{(k)}_{1},\sigma_{2}^{(k)})}[r\oplus r^{\prime}=s]\geq 1/{p(k)}.\] (3)

**Claim 7.4**: _There exists a polynomial \(q\) such that for each \(k\) satisfying (3) there exists some \(i\in\{1,\ldots,\ell\}\) for which_ \[\Pr_{(\sigma^{(k)}_{1},\sigma_{M}^{(k)})}[r_{i}\oplus r^{\prime}_{i}=s_{i}|r_{j}\oplus r^{\prime}_{j}=s_{j}\ \forall j<i]-1/2\geq 1/{q(k)}.\] (4)

* * **Proof:** We show that the claim holds with \(q(k)=2^{\ell}\cdot p(k)\). Let \(k\) be such that (3) holds, and suppose towards a contradiction that (4) does not hold for any \(i\in\{1,\ldots,\ell\}\).
Then \[\begin{aligned}&\Pr_{(\sigma^{(k)}_{1},\sigma_{M}^{(k)})}[r\oplus r^{\prime}=s]-\Pr_{(\sigma^{(k)}_{1},\sigma_{2}^{(k)})}[r\oplus r^{\prime}=s]\\ &\qquad=\Pr_{(\sigma^{(k)}_{1},\sigma_{M}^{(k)})}[r_{1}\oplus r^{\prime}_{1}=s_{1}]\cdot\Pr_{(\sigma^{(k)}_{1},\sigma_{M}^{(k)})}[r_{2}\oplus r^{\prime}_{2}=s_{2}\,|\,r_{1}\oplus r^{\prime}_{1}=s_{1}]\cdots\Pr_{(\sigma^{(k)}_{1},\sigma_{M}^{(k)})}[r_{\ell}\oplus r^{\prime}_{\ell}=s_{\ell}\,|\,r_{j}\oplus r^{\prime}_{j}=s_{j}\ \forall j<\ell]-\Pr_{(\sigma^{(k)}_{1},\sigma_{2}^{(k)})}[r\oplus r^{\prime}=s]\\ &\qquad<\left(\frac{1}{2}+\frac{1}{q(k)}\right)^{\ell}-\frac{1}{2^{\ell}}\\ &\qquad<\frac{2^{\ell}}{q(k)}=\frac{1}{p(k)}.\end{aligned}\] The first inequality holds since the distribution on \(r\oplus r^{\prime}\) in \((\sigma^{(k)}_{1},\sigma_{2}^{(k)})\) is uniform on \(\{0,1\}^{\ell}\). The second inequality follows from the observation that expanding \(\left(1/2+1/{q(k)}\right)^{\ell}\) gives a sum of \(2^{\ell}\) terms, one equal to \(1/2^{\ell}\) and the others strictly smaller than \(1/q(k)\). Thus, we get a contradiction to (3).

Since there are infinitely many \(k\)’s for which (4) holds, and because \(\ell\) is fixed, there must exist some \(i\in\{1,\ldots,\ell\}\) for which (4) holds infinitely often. This, however, yields a PPT machine \(A\) that breaks the hiding property of the commitment scheme: Given a commitment \(c={\sf com}^{(k)}(r)\) for a uniformly chosen random bit \(r\), the machine \(A\) chooses uniformly at random a string \((r_{1},\dots,r_{i-1},r_{i+1},\ldots,r_{\ell})\) from \(\{0,1\}^{\ell-1}\), and runs \(M\) on \[(c_{1}={\sf com}^{(k)}(r_{1}),\dots,c_{i-1}={\sf com}^{(k)}(r_{i-1}),c,c_{i+1}={\sf com}^{(k)}(r_{i+1}),\ldots,c_{\ell}={\sf com}^{(k)}(r_{\ell}))\] to get output \(r^{\prime}\). Then, if \(r_{j}\oplus r^{\prime}_{j}=s_{j}\ \forall j<i\), algorithm \(A\) outputs \(s_{i}\oplus r^{\prime}_{i}\), and otherwise \(A\) outputs a uniformly random bit. Clearly \(A\) is a PPT machine. From (3) it follows that infinitely often, with probability at least \(1/2^{\ell}\) it will be the case that \(r_{j}\oplus r^{\prime}_{j}=s_{j}\ \forall j<i\). Once \(r^{\prime}\) is such that \(r_{j}\oplus r^{\prime}_{j}=s_{j}\ \forall j<i\), (4) implies that \(\Pr[s_{i}\oplus r^{\prime}_{i}=r_{i}|r_{j}\oplus r^{\prime}_{j}=s_{j}\ \forall j<i]\geq 1/2+1/q(k)\). Thus, in total, for infinitely many \(k\)’s it holds that \[\Pr[s_{i}\oplus r^{\prime}_{i}=r_{i}]\geq\left(1-\frac{1}{2^{\ell}}\right)\cdot\frac{1}{2}+\frac{1}{2^{\ell}}\cdot\left(\frac{1}{2}+\frac{1}{q(k)}\right)=\frac{1}{2}+\frac{1}{2^{\ell}q(k)},\] which means that \(A\) breaks the hiding property of the commitment scheme. This is a contradiction.

Next, we show that for all \(k,\varepsilon\) for which \(F_{1}^{(k)}(M_{1},\varepsilon)\not=\bot\) and \(F_{2}^{(k)}(M_{2},\varepsilon)\not=\bot\) the profile \((F_{1}^{(k)}(M_{1},\varepsilon),F_{2}^{(k)}(M_{2},\varepsilon))\) constitutes an \(\varepsilon\)-TFNE in the \(T=(T^{(k)}_{1,\varepsilon},T^{(k)}_{2,\varepsilon})\)-constrained version of \(\Gamma^{(k)}\). Let \(k,\varepsilon\) be as above, and let \(\sigma=(\sigma_{1},\sigma_{2})=(F_{1}^{(k)}(M_{1},\varepsilon),F_{2}^{(k)}(M_{2},\varepsilon))\). We first show that \(\sigma\) constitutes an \(\varepsilon\)-NE in the \(T\)-constrained version of \(\Gamma^{(k)}\). Suppose \(P_{1}\) unilaterally \(\varepsilon\)-improves in the \(T\)-constrained version of \(\Gamma^{(k)}\). From similar arguments as above we can assume \(P_{1}\) never aborts.
But when \(P_{1}\) never aborts the outcome is exactly \(\pi\), as the players are playing \(\pi_{r\oplus r^{\prime}}\), and \(r^{\prime}\) is chosen uniformly at random. Suppose now that \(P_{2}\) unilaterally \(\varepsilon\)-improves in the \(T\)-constrained version of \(\Gamma^{(k)}\). However, this contradicts the constraints, which state that for any \(k\), \(P_{2}\) cannot unilaterally \(\varepsilon\)-improve in the \((T^{(k)}_{1,\varepsilon},T^{(k)}_{2,\varepsilon})\)-constrained version of \(\Gamma^{(k)}\).

Next, we show that no player is \(\varepsilon\)-threatened with respect to \(\sigma\) at any round of the \(T\)-constrained version of \(\Gamma^{(k)}\). To this end, suppose towards a contradiction that some player is \(\varepsilon\)-threatened with respect to \(\sigma\). We divide the proof into cases.

**Case 1 – \(P_{1}\) is facing an \(\varepsilon\)-threat in round 3:** In round 3 player 1 has exactly two options: he can (i) play honestly, send \(((r_{1},{\sf decom}_{1}),\dots,(r_{\ell},{\sf decom}_{\ell}))\) which he generated in round 1, and receive \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\sigma))]\), or he can (ii) abort and receive \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\widehat{\sigma}^{1}))]\). The value \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\widehat{\sigma}^{1}))]\) is at most \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\sigma))]\), and so \(P_{1}\) cannot improve over \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\sigma))]\). Hence player 1 is not facing an \(\varepsilon\)-threat at round 3.

**Case 2 – \(P_{2}\) is facing an \(\varepsilon\)-threat in round 2:** We first note that for any round 1 strategy for \(P_{1}\) and round 2 strategy for \(P_{2}\), the round 3 strategy of playing honestly is threat-free for \(P_{1}\), since he cannot improve over that strategy (again, since his only deviation is aborting, which gives him the worst possible NE). Thus, if \(P_{2}\) is \(\varepsilon\)-threatened at round 2, he has some round strategy that \(\varepsilon\)-improves over \(\mathop{\mathrm{E}}\displaylimits[u_{2}(O(\sigma))]\) when \(P_{1}\) plays in round 3 (and 1) according to the protocol. This means that \(P_{2}\) unilaterally \(\varepsilon\)-improves, which contradicts the constraints (as well as the \(\varepsilon\)-NE).

**Case 3 – \(P_{1}\) is facing an \(\varepsilon\)-threat in round 1:** If \(P_{1}\) is \(\varepsilon\)-threatened in round 1, he has some round 1 strategy \(\tau(1)\) for which every \(\varepsilon\)-threat-free continuation \(\varepsilon\)-improves over every \(\varepsilon\)-threat-free continuation of \(\sigma_{1}(1)\). We will describe an \(\varepsilon\)-threat-free continuation of \(\tau(1)\) and an \(\varepsilon\)-threat-free continuation of \(\sigma_{1}(1)\) that contradict this.

The \(\varepsilon\)-threat-free continuation of \(\sigma_{1}(1)\): We established in Case 2 that when \(P_{1}\) plays honestly in round 1, if \(P_{2}\) plays honestly in round 2 he is not \(\varepsilon\)-threatened. We also established there that \(P_{1}\) playing honestly in round 3 is always \(\varepsilon\)-threat-free. It follows that the continuation of both players playing honestly in rounds 2 and 3 is an \(\varepsilon\)-threat-free continuation of \(\sigma_{1}(1)\). On this profile \(P_{1}\) receives \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\sigma))]\).

The \(\varepsilon\)-threat-free continuation of \(\tau(1)\): As we established in Case 2, playing honestly in round 3 is always \(\varepsilon\)-threat-free for \(P_{1}\).
Now, note that there is no profile in which both players improve simultaneously – because all leaves are Nash equilibria, such a profile would be a distribution on Nash equilibria that contradicts the Pareto-optimality of \(\pi\). Note also that because \(P_{1}\) receives the worst possible payoff when he aborts, it follows that he improves also conditioned on not aborting (as this can only help him). Thus, in any threat-free continuation of \(\tau(1)\), conditioned on \(P_{1}\) not aborting in round 1, \(P_{2}\) again cannot improve over \(\mathop{\mathrm{E}}\displaylimits[u_{2}(O(\sigma))]\), as this again contradicts the Pareto-optimality of \(\pi\). However, if \(P_{2}\) plays honestly in round 2 and then \(P_{1}\) plays honestly in round 3, then \(P_{2}\) receives exactly \(\mathop{\mathrm{E}}\displaylimits[u_{2}(O(\sigma))]\) conditioned on \(P_{1}\) not aborting in round 1. It follows that this continuation is the best possible for \(P_{2}\), and thus \(P_{2}\) is not \(\varepsilon\)-threatened in round 2 of this continuation. It follows that this continuation is \(\varepsilon\)-threat-free. However, in this continuation \(P_{1}\) receives \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\sigma))]\) conditioned on not aborting, and thus receives at most \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\sigma))]\) without the conditioning. This completes the proof of the theorem.
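To make the preamble of the protocol concrete, here is a minimal sketch (ours) of the \(\ell\) parallel Blum coin flips that select the index \(r\oplus r^{\prime}\). The salted-hash commitment is a stand-in chosen only so the code runs; the proof above relies on a perfectly binding scheme, and the abort handling is reduced to reporting which punishment NE applies.

```python
import hashlib
import os
import secrets

def commit_bit(bit: int):
    """Toy commitment used only for illustration (not perfectly binding)."""
    salt = os.urandom(16)
    return hashlib.sha256(bytes([bit]) + salt).hexdigest(), (bit, salt)

def opens_to(com: str, decom) -> bool:
    bit, salt = decom
    return hashlib.sha256(bytes([bit]) + salt).hexdigest() == com

def sample_index(ell: int):
    """Both honest parties end up with r XOR r', indexing one of the 2**ell NEs."""
    # Round 1: player 1 commits, bit by bit, to a uniformly random r;
    # only the commitments are sent, the openings are kept for round 3.
    r = [secrets.randbelow(2) for _ in range(ell)]
    commitments = [commit_bit(b) for b in r]
    # Round 2: player 2 replies with a uniformly random r' in the clear.
    r_prime = [secrets.randbelow(2) for _ in range(ell)]
    # Round 3: player 1 opens every commitment; player 2 verifies the openings.
    if not all(opens_to(com, decom) for com, decom in commitments):
        return "assign the punishment NE for player 1"
    return [ri ^ rpi for ri, rpi in zip(r, r_prime)]  # index of pi_{r XOR r'}

print(sample_index(ell=3))   # e.g. [1, 0, 1]
```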
## 8 A General Theorem
In this section we prove a general theorem identifying sufficient conditions for a strategy profile to be a TFNE. The first condition is that the profile must be weakly Pareto optimal:
**Definition 8.1** (Weakly Pareto optimal): _A strategy profile \(\sigma\in T\) of an extensive game \(\Gamma=(H,P,A,u)\) with constraints \(T\) is weakly Pareto optimal if there does not exist a strategy profile \(\pi\in T\) for which both \(\mathop{\mathrm{E}}\displaylimits[u_{1}(O(\pi))]>\mathop{\mathrm{E}} \displaylimits[u_{1}(O(\sigma))]\) and \(\mathop{\mathrm{E}}\displaylimits[u_{2}(O(\pi))]>\mathop{\mathrm{E}} \displaylimits[u_{2}(O(\sigma))]\)._
Next, we require the profile to be \(\varepsilon\)-safe. Intuitively, this just means that a player cannot harm the other too much by a unilateral deviation (as opposed to not being able to gain too much, which is the NE condition).
**Definition 8.2** (\(\varepsilon\)-safe): _A strategy profile \(\sigma=(\sigma_{1},\sigma_{2})\in T\) of an extensive game \(\Gamma=(H,P,A,u)\) with constraints \(T=(T_{1},T_{2})\) is \(\varepsilon\)-safe if for each player \(i\),_
\[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\sigma^{\prime}_{i},\sigma_{-i})\right)\right]\geq\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\sigma)\right)\right]-\varepsilon\]
_for every strategy \(\sigma^{\prime}_{i}\in T_{i}\) of player \(i\)._
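Restricted to a finite bimatrix game with pure strategies and pure deviations (our simplification; the definitions are stated for constrained extensive games), both properties can be checked by direct enumeration:

```python
def payoffs(u, a1, a2):
    """u[a1][a2] is the pair (u1, u2) at the pure profile (a1, a2)."""
    return u[a1][a2]

def is_eps_safe(u, sigma, actions, eps):
    """No unilateral deviation lowers the *other* player's payoff by more than eps."""
    base1, base2 = payoffs(u, *sigma)
    for dev in actions:
        if payoffs(u, dev, sigma[1])[1] < base2 - eps:   # player 1 deviates; u2 drops too far
            return False
        if payoffs(u, sigma[0], dev)[0] < base1 - eps:   # player 2 deviates; u1 drops too far
            return False
    return True

def is_weakly_pareto_optimal(u, sigma, actions):
    """No pure profile gives *both* players strictly more (pure profiles only)."""
    base1, base2 = payoffs(u, *sigma)
    return not any(payoffs(u, a1, a2)[0] > base1 and payoffs(u, a1, a2)[1] > base2
                   for a1 in actions for a2 in actions)

# Made-up 2x2 game; the pure profile (1, 1) is checked against both conditions.
u = [[(1, 1), (3, 0)],
     [(0, 3), (2, 2)]]
print(is_eps_safe(u, (1, 1), range(2), eps=0.5),
      is_weakly_pareto_optimal(u, (1, 1), range(2)))
# -> False True: deviating to the first row drops the other player from 2 to 0.
```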
Finally, we have the following theorem. Note that we are implicitly assuming that the extensive games in the claim are derived from a cryptographic protocol or some other setting in which it is natural to discuss the “rounds” of a game.
**Theorem 8.3**: _Let \(\Gamma=(H,P,A,u)\) be an extensive game with constraints \(T=(T_{1},T_{2})\), and let \(\sigma=(\sigma_{1},\sigma_{2})\) be a weakly Pareto optimal \(\varepsilon\)-NE of \(\Gamma\) that is \(\varepsilon\)-safe. Then \(\sigma\) is an \(\varepsilon\)-TFNE of \(\Gamma\)._
We also have the following corollary.
**Corollary 8.4**: Let \(\Gamma=(H,P,A,u)\) be a _zero-sum_ extensive game with constraints \(T=(T_{1},T_{2})\), and let \(\sigma\) be an \(\varepsilon\)-NE of \(\Gamma\). Then \(\sigma\) is an \(\varepsilon\)-TFNE of \(\Gamma\).
The corollary follows from the observation that any \(\varepsilon\)-NE of a zero-sum game is both weakly Pareto optimal and \(\varepsilon\)-safe. Note that the corollary implies the threat-freeness part of Theorem 6.1.
We now prove Theorem 8.3.
* **Proof:** Suppose towards contradiction that at least one of the players is facing an \(\varepsilon\)-threat with respect to \(\sigma\) at some round. Let \(R\) be the latest such round: that is, player \(i\) is facing an \(\varepsilon\)-threat at round \(R\) with respect to \(\sigma\), and no player is facing an \(\varepsilon\)-threat at any round \(R^{\prime}\) that follows \(R\). By Definition 4.4 it follows that there exists a round \(R\) strategy \(\tau=\tau(R)\) for player \(i\) such that the set \(\mathrm{Cont}(\sigma(1,\ldots,R\!-\!1),\tau(R))\) is nonempty, and such that for all \(\pi\in\mathrm{Cont}(\sigma(1,\ldots,R\!-\!1),\tau(R))\) and \(\pi^{\prime}\in\mathrm{Cont}(\sigma(1,\ldots,R))\) that are \(\varepsilon\)-threat-free on \(R\) it holds that \[\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi)\right)\right]>\mathop {\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi^{\prime})\right)\right]+\varepsilon,\] (5) where \[\sigma(1,\ldots,S)\mathbin{\stackrel{{\rm def}}{{=}}} \sigma(1),\ldots,\sigma(S)\] and \[\mathrm{Cont}(\sigma(1,\ldots,R))\mathbin{\stackrel{{ \rm def}}{{=}}}\Big{\{}\pi\in T:\pi(S)=\sigma(S)\mbox{ for all }S\leq R\Big{\}}.\] Note that \(\sigma\in\mathrm{Cont}(\sigma(1,\ldots,R))\). Also note that, because \(R\) is the latest round on which an \(\varepsilon\)-threat occurs, the profile \(\sigma\) is \(\varepsilon\)-threat-free on \(R\). Using inequality (5) we can then infer that for any \(\pi\in\mathrm{Cont}(\sigma(1,\ldots,R\!-\!1),\tau(R))\) that is \(\varepsilon\)-threat-free on \(R\) it holds that \[\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi)\right)\right]>\mathop {\mathrm{E}}\displaylimits\left[u_{i}\left(O(\sigma)\right)\right]+\varepsilon.\] (6) Let \(\pi^{1}\in\mathrm{Cont}(\sigma(1,\ldots,R\!-\!1),\tau(R))\) be one such \(\varepsilon\)-threat-free profile, and let \({\sigma^{1}}=(\pi^{1}_{i},\sigma_{-i})\). Fix \(R^{1}=R\) and \(\tau^{1}=\tau\) for consistent notation. We next ask, is player \(i\) facing an \(\varepsilon\)-threat with respect to \({\sigma^{1}}\) at any round \(R^{\prime}\) that follows \(R^{1}\)? If yes, let \(R^{2}\) be the next such round: there is no \(R^{\prime}\) between \(R^{1}\) and \(R^{2}\) on which player \(i\) is facing an \(\varepsilon\)-threat with respect to \({\sigma^{1}}\). By Definition 4.4 it follows that there exists a round \(R^{2}\) strategy \(\tau^{2}\) for player \(i\) such that \(\mathrm{Cont}(\sigma^{1}(1,\ldots,R^{2}\!-\!1),\tau^{2}(R^{2}))\) is nonempty, and such that for all \(\pi\in\mathrm{Cont}(\sigma^{1}(1,\ldots,R^{2}\!-\!1),\tau^{2}(R^{2}))\) and \(\pi^{\prime}\in\mathrm{Cont}(\sigma^{1}(1,\ldots,R^{2}))\) that are \(\varepsilon\)-threat-free on \(R^{2}\) it holds that \[\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi)\right)\right]>\mathop {\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi^{\prime})\right)\right]+\varepsilon.\] Assume \(\tau^{2}\) is maximal, in the sense that for any \(\pi\in\mathrm{Cont}(\sigma^{1}(1,\ldots,R^{2}-1),\tau^{2}(R^{2}))\) that is \(\varepsilon\)-threat-free on \(R^{2}\), player \(i\) is _not_ facing an \(\varepsilon\)-threat at round \(R^{2}\) with respect to \(\pi\). Pick some arbitrary \(\pi^{2}\in\mathrm{Cont}(\sigma^{1}(1,\ldots,R^{2}\!-\!1),\tau^{2}(R^{2}))\), and fix \({\sigma^{2}}=(\pi^{2}_{i},\sigma_{-i})\). We now repeat the above procedure, finding the next threat to player \(i\) and letting him act on that threat, as follows. 
For \(t=3,4,\ldots\) we ask, is player \(i\) facing an \(\varepsilon\)-threat with respect to \({\sigma^{t-1}}\) at any round \(R^{\prime}\) that follows \(R^{t-1}\)? If yes, let \(R^{t}\) be the next such round: there is no \(R^{\prime}\) between \(R^{t-1}\) and \(R^{t}\) on which player \(i\) is facing an \(\varepsilon\)-threat with respect to \({\sigma^{t-1}}\). By Definition 4.4 it follows that there exists a round \(R^{t}\) strategy \(\tau^{t}\) for player \(i\) such that \(\mathrm{Cont}(\sigma^{t-1}(1,\ldots,R^{t}\!-\!1),\tau^{t}(R^{t}))\) is nonempty, and such that for all \(\pi\in\mathrm{Cont}(\sigma^{t-1}(1,\ldots,R^{t}\!-\!1),\tau^{t}(R^{t}))\) and \(\pi^{\prime}\in\mathrm{Cont}(\sigma^{t-1}(1,\ldots,R^{t}))\) that are \(\varepsilon\)-threat-free on \(R^{t}\) it holds that \[\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi)\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\pi^{\prime})\right)\right]+\varepsilon.\] Assume \(\tau^{t}\) is maximal, in the sense that for any \(\pi\in\mathrm{Cont}(\sigma^{t-1}(1,\ldots,R^{t}\!-\!1),\tau^{t}(R^{t}))\) that is \(\varepsilon\)-threat-free on \(R^{t}\), player \(i\) is _not_ facing an \(\varepsilon\)-threat at round \(R^{t}\) with respect to \(\pi\). Pick some arbitrary \(\pi^{t}\in\mathrm{Cont}(\sigma^{t-1}(1,\ldots,R^{t}\!-\!1),\tau^{t}(R^{t}))\), and fix \({\sigma^{t}}=(\pi^{t}_{i},\sigma_{-i})\).

Finally, after repeating this for all \(t\) until there are no more \(\varepsilon\)-threats to \(P_{i}\) on any round that follows \(R\), we are left with a profile \({\sigma^{C}}=(\pi^{C}_{i},\sigma_{-i})\) on which player \(i\) is not facing an \(\varepsilon\)-threat at any round below \(R\). Fix \(\rho={\sigma^{C}}\), and recall that, by construction, \(\rho_{-i}=\sigma_{-i}\). Because \(\sigma\) is \(\varepsilon\)-safe, it must be the case that \[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho)\right)\right]\geq\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\sigma)\right)\right]-\varepsilon.\] (7) We next ask, is player \(-i\) facing an \(\varepsilon\)-threat with respect to \(\rho\) at any round \(S\) that follows \(R\)? As the following claim shows, the answer is positive:

**Claim 8.5**: _Player \(-i\) is facing an \(\varepsilon\)-threat with respect to \(\rho\) at some round \(S\) that follows \(R\)._

* * **Proof:** Suppose not. By our construction of \(\rho\), player \(i\) is also not facing an \(\varepsilon\)-threat with respect to \(\rho\) at any round that follows \(R\). This means that the profile \(\rho\) is \(\varepsilon\)-threat-free on \(R\). Since \(\rho\in\mathrm{Cont}(\sigma(1,\ldots,R-1),\tau(R))\) and since \(\sigma\in\mathrm{Cont}(\sigma(1,\ldots,R))\) is \(\varepsilon\)-threat-free on \(R\), we can then use (6) to infer that \[\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\rho)\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\sigma)\right)\right]+\varepsilon.\] However, since \(\rho=(\pi^{C}_{i},\sigma_{-i})\) is a _unilateral_ deviation of player \(i\), this contradicts the fact that \(\sigma\) constitutes an \(\varepsilon\)-NE.

Let \(S^{1}\) be the latest round on which \(P_{-i}\) is facing an \(\varepsilon\)-threat with respect to \(\rho\).
By Definition 4.4 it follows that there exists a round \(S^{1}\) strategy \(\mu^{1}\) for player \(-i\) such that \(\mathrm{Cont}(\rho(1,\ldots,S^{1}\!-\!1),\mu^{1}(S^{1}))\) is nonempty, and such that for all \(\pi\in\mathrm{Cont}(\rho(1,\ldots,S^{1}\!-\!1),\mu^{1}(S^{1}))\) and \(\pi^{\prime}\in\mathrm{Cont}(\rho(1,\ldots,S^{1}))\) that are \(\varepsilon\)-threat-free on \(S^{1}\) it holds that \[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\pi)\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\pi^{\prime})\right)\right]+\varepsilon.\] Assume \(\mu^{1}\) is maximal, in the sense that for any \(\pi\in\mathrm{Cont}(\rho(1,\ldots,S^{1}\!-\!1),\mu^{1}(S^{1}))\) that is \(\varepsilon\)-threat-free on \(S^{1}\), player \(-i\) is _not_ facing an \(\varepsilon\)-threat at round \(S^{1}\) with respect to \(\pi\). Pick some \(\rho^{1}\in\mathrm{Cont}(\rho(1,\ldots,S^{1}\!-\!1),\mu^{1}(S^{1}))\) that is \(\varepsilon\)-threat-free on \(S^{1}\) – such a \(\rho^{1}\) must exist by Proposition 4.6. Now, note that because \(S^{1}\) was the last round on which \(P_{-i}\) is facing an \(\varepsilon\)-threat, and because \(P_{i}\) is not facing an \(\varepsilon\)-threat at any round following \(R\) with respect to \(\rho\), it must be the case that \(\rho\) is \(\varepsilon\)-threat-free on \(S^{1}\). Since \(\rho\in\mathrm{Cont}(\rho(1,\ldots,S^{1}))\) we then have that \[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho^{1})\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho)\right)\right]+\varepsilon\geq\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\sigma)\right)\right],\] where the second inequality follows from (7).

We now repeat the above procedure, finding the preceding threat to player \(-i\) (but that still follows \(R\)) and letting him act on that threat, as follows. For \(t=2,3,\ldots\) we ask, is \(P_{-i}\) facing an \(\varepsilon\)-threat with respect to \(\rho^{t-1}\) at any round \(S\) that follows \(R\)? If yes, let \(S^{t}\) be the latest such round. By Definition 4.4 it follows that there exists a round \(S^{t}\) strategy \(\mu^{t}\) for player \(-i\) such that \(\mathrm{Cont}(\rho^{t-1}(1,\ldots,S^{t}\!-\!1),\mu^{t}(S^{t}))\) is nonempty, and such that for all \(\pi\in\mathrm{Cont}(\rho^{t-1}(1,\ldots,S^{t}\!-\!1),\mu^{t}(S^{t}))\) and \(\pi^{\prime}\in\mathrm{Cont}(\rho^{t-1}(1,\ldots,S^{t}))\) that are \(\varepsilon\)-threat-free on \(S^{t}\) it holds that \[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\pi)\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\pi^{\prime})\right)\right]+\varepsilon.\] Assume \(\mu^{t}\) is maximal, in the sense that for any \(\pi\in\mathrm{Cont}(\rho^{t-1}(1,\ldots,S^{t}\!-\!1),\mu^{t}(S^{t}))\) that is \(\varepsilon\)-threat-free on \(S^{t}\), player \(-i\) is _not_ facing an \(\varepsilon\)-threat at round \(S^{t}\) with respect to \(\pi\). Pick some \(\rho^{t}\in\mathrm{Cont}(\rho^{t-1}(1,\ldots,S^{t}\!-\!1),\mu^{t}(S^{t}))\) that is \(\varepsilon\)-threat-free on \(S^{t}\) – again, such a \(\rho^{t}\) must exist by Proposition 4.6. Now, note that because \(S^{t}\) was the last round on which \(P_{-i}\) is facing an \(\varepsilon\)-threat, \(P_{-i}\) is not facing an \(\varepsilon\)-threat with respect to \(\rho^{t-1}\) at any round following \(S^{t}\). Since \(\rho^{t-1}\) was chosen to be \(\varepsilon\)-threat-free on \(S^{t-1}\), player \(i\) is not facing an \(\varepsilon\)-threat with respect to \(\rho^{t-1}\) at any round following \(S^{t-1}\).
Finally, by construction, \(P_{i}\) is not facing an \(\varepsilon\)-threat at any round following \(R\) with respect to \(\rho\). Since \(\rho\) and \(\rho^{t-1}\) are equivalent up to round \(S^{t-1}\), it must be the case that \(P_{i}\) is not facing an \(\varepsilon\)-threat with respect to \(\rho^{t-1}\) at any round between \(S^{t}\) and \(S^{t-1}\) either. Thus, \(\rho^{t-1}\) is \(\varepsilon\)-threat-free on \(S^{t}\). Since \(\rho^{t-1}\in\mathrm{Cont}(\rho^{t-1}(1,\ldots,S^{t}))\), we then have that \[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho^{t})\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho^{t-1})\right)\right]+\varepsilon\geq\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\sigma)\right)\right]+(t-1)\cdot\varepsilon.\]

Finally, after repeating this for all \(t\) until there are no more \(\varepsilon\)-threats to \(P_{-i}\) at any round that follows \(R\), we are left with a profile \(\rho^{D}\in\mathrm{Cont}(\sigma(1,\ldots,R\!-\!1),\tau(R))\) on which both \(P_{i}\) and \(P_{-i}\) are not facing an \(\varepsilon\)-threat at any round that follows \(R\). We can then use (6) to infer that \[\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\rho^{D})\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{i}\left(O(\sigma)\right)\right]+\varepsilon.\] Furthermore, \(\rho^{D}\) satisfies \[\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho^{D})\right)\right]>\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\rho)\right)\right]+D\cdot\varepsilon\geq\mathop{\mathrm{E}}\displaylimits\left[u_{-i}\left(O(\sigma)\right)\right],\] where the second inequality uses (7) and \(D\geq 1\). We conclude that on the profile \(\rho^{D}\) both players strictly improve over \(\sigma\), contradicting the weak Pareto optimality of \(\sigma\). Hence no player is facing an \(\varepsilon\)-threat with respect to \(\sigma\) at any round, and this, coupled with the fact that \(\sigma\) is an \(\varepsilon\)-NE, yields that \(\sigma\) is an \(\varepsilon\)-TFNE.
## Acknowledgments
We thank Eddie Dekel, Oded Goldreich, Ehud Kalai, Eran Omri, and Gil Segev for helpful conversations, and the anonymous referees for careful reading and insightful comments.
## References
* [1] I. Abraham, D. Dolev, R. Gonen, and J. Halpern. Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation. In _25th Annual ACM Symposium on Principles of Distributed Computing_, pages 53–62, 2006.
* [2] G. Asharov and Y. Lindell. Utility dependence in correct and fair rational secret sharing. In _Advances in Cryptology – CRYPTO_, pages 559–576, 2009. A full version, containing additional results, is available at http://eprint.iacr.org/2009/373.
* [3] Y. Aumann and Y. Lindell. Security against covert adversaries: Efficient protocols for realistic adversaries. To appear in _Journal of Cryptology_. An extended abstract appeared in TCC 2007. Full version can be found at http://u.cs.biu.ac.il/ lindell/PAPERS/covert.pdf.
* [4] E. Ben-Sasson, A. Tauman-Kalai, and E. Kalai. An approach to bounded rationality. In _Advances in Neural Information Processing Systems_, 2007.
* [5] M. Blum. Coin flipping by telephone. In _CRYPTO_, pages 11–15, 1981.
* [6] Y. Dodis, S. Halevi, and T. Rabin. A cryptographic solution to a game theoretic problem. In _Advances in Cryptology – CRYPTO_, pages 11–15, 2000.
* [7] L. Fortnow and R. Santhanam. Bounding rationality by discounting time. In _Proceedings of the First Symposium on Innovations in Computer Science_, 2010.
* [8] O. Goldreich. _Foundation of Cryptography – Basic Tools_. Cambridge University Press, 2001.
* [9] S. D. Gordon and J. Katz. Rational secret sharing, revisited. In _5th Intl. Conf. on Security and Cryptography for Networks (SCN)_, pages 229–241, 2006.
* [10] R. Gradwohl. Rationality in the full-information model. In _TCC_, 2010.
* [11] R. Gradwohl, N. Livne, and A. Rosen. Incredible threats. In preparation.
* [12] I. Haitner and O. Reingold. Statistically-hiding commitment from any one-way function. In _STOC 2007_, pages 1 – 10, 2007.
* [13] J. Halpern and V. Teague. Rational secret sharing and multiparty computation: Extended abstract. In _36th Annual ACM Symposium on Theory of Computing (STOC)_, pages 623–632, 2004.
* [14] J. Y. Halpern and R. Pass. Game theory with costly computation. In _First Symposium on Innovations in Computer Science_, 2010.
* [15] S. Izmalkov, S. Micali, and M. Lepinski. Rational secure computation and ideal mechanism design. In _FOCS_, 2005.
* [16] J. Katz. Bridging game theory and cryptography: Recent results and future directions. In _5th Theory of Cryptography Conference TCC_, pages 251–272, 2008.
* [17] J. Katz, G. Fuchsbauer, and D. Naccache. Efficient rational secret sharing in the standard communication model. In _TCC_, 2010.
* [18] G. Kol and M. Naor. Cryptography and game theory: Designing protocols for exchanging information. In _5th Theory of Cryptography Conference TCC_, pages 320–339, 2008.
* [19] G. Kol and M. Naor. Games for exchanging information. In _40th Annual ACM Symposium on Theory of Computing (STOC)_, pages 423–432, 2008.
* [20] M. Lepinski, S. Micali, and A. shelat. Collusion-free protocols. In _STOC_, 2005.
* [21] M. Luby, S. Micali, and C. Rackoff. How to simultaneously exchange a secret bit by flipping a symmetrically-biased coin. In _FOCS_, pages 11–21, 1983.
* [22] A. Lysyanskaya and N. Triandopoulos. Rationality and adversarial behavior in multi-party computation. In _Advances in Cryptology – CRYPTO_, pages 180–197, 2006.
* [23] S. Micali and A. Shelat. Truly rational secret sharing. In _6th Theory of Cryptography Conference TCC_, pages 54–71, 2009.
* [24] M. Naor, R. Ostrovsky, R. Venkatesan, and M. Yung. Perfect zero-knowledge arguments for NP using any one-way permutation. _Journal of Cryptology_, 11:87–108, 1998.
* [25] M. Naor and M. Yung. Universal one-way hash functions and their cryptographic applications. In _21st STOC_, pages 33–43, 1989.
* [26] S. J. Ong, D. Parkes, A. Rosen, and S. Vadhan. Fairness with an honest minority and a rational majority. In _Theory of Cryptography Conference TCC_, pages 36–53, 2009.
* [27] M. J. Osborne and A. Rubinstein. _A Course in Game Theory_. MIT Press, 1994.
* [28] I. Damgård, T. Pedersen, and B. Pfitzmann. On the existence of statistically hiding bit commitment schemes and fail-stop signatures. In _Crypto93_, pages 250–265, 1993.
## Appendix A One-way Functions and Commitment Schemes
A function \(f\) is one-way if it is easy to compute but hard to invert given the image of a random input. More formally,
**Definition A.1** (One-way functions): _A function \(f:\{0,1\}^{*}\rightarrow\{0,1\}^{*}\) is said to be one-way if the following two conditions hold:_
1. _There exists a polynomial-time algorithm that on input_ \(x\) _outputs_ \(f(x)\)_._
2. _For every probabilistic polynomial-time algorithm_ \(\mathcal{A}\)_, every polynomial_ \(p(\cdot)\)_, and all sufficiently large_ \(n\)_’s_ \[\Pr\left[\mathcal{A}(1^{n},f(U_{n}))\in f^{-1}(f(U_{n}))\right]<\frac{1}{p(n)}\enspace,\] _where_ \(U_{n}\) _denotes the uniform distribution over_ \(\{0,1\}^{n}\)_._
In this paper we also deal with one-way permutations, and we note that the above definition naturally extends to consider permutations.
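As a standard illustration (ours, not tied to this paper), modular exponentiation is a widely used candidate: computing \(g^{x}\bmod p\) is easy, while recovering \(x\) is the discrete-logarithm problem, believed hard for suitably large parameters. The toy values below offer no security whatsoever.

```python
# Candidate one-way function f(x) = g**x mod p (discrete-log assumption).
# Real deployments use 2048-bit-plus parameters; these are toy values.
p = 2_147_483_647          # the Mersenne prime 2**31 - 1, far too small for security
g = 7                      # a primitive root modulo p

def f(x: int) -> int:
    return pow(g, x, p)    # easy to compute

def invert_by_brute_force(y: int) -> int:
    # Inverting means solving g**x = y (mod p); feasible here only because p is tiny.
    x, acc = 0, 1
    while acc != y:
        acc = (acc * g) % p
        x += 1
    return x

y = f(123_456)
print(invert_by_brute_force(y))   # -> 123456
```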
A commitment scheme is a two-stage interactive protocol between a sender and a receiver. After the first stage of the protocol, which is referred to as the _commit stage_, the sender is bound to at most one value, not yet revealed to the receiver. In the second stage, which is referred to as the _reveal stage_, the sender reveals its committed value to the receiver. For simplicity of exposition, we will focus on bit-commitment schemes, i.e., commitment schemes in which the committed value is only one bit. A bit-commitment scheme is defined via a triplet of probabilistic polynomial-time Turing-machines \((\mathcal{S},\mathcal{R},\mathcal{V})\) such that:
* \(\mathcal{S}\) receives as input the security parameter \(1^{n}\) and a bit \(b\). Following its interaction, it outputs some information \({\sf decom}\) (the decommitment).
* \(\mathcal{R}\) receives as input the security parameter \(1^{n}\). Following its interaction, it outputs state information \({\sf com}\) (the commitment).
* \(\mathcal{V}\) (acting as the receiver in the reveal stage) receives as input the security parameter \(1^{n}\), a commitment \({\sf com}\) and a decommitment \({\sf decom}\). It outputs either a bit \(b^{\prime}\) or \(\bot\).
Denote by \(({\sf decom}|{\sf com})\leftarrow\langle\mathcal{S}(1^{n},b),\mathcal{R}(1^{n})\rangle\) the experiment in which \(\mathcal{S}\) and \(\mathcal{R}\) interact (using the given inputs and uniformly chosen random coins), and then \(\mathcal{S}\) outputs \({\sf decom}\) while \(\mathcal{R}\) outputs \({\sf com}\). It is required that for all \(n\), every bit \(b\), and every pair \(({\sf decom}|{\sf com})\) that may be output by \(\langle\mathcal{S}(1^{n},b),\mathcal{R}(1^{n})\rangle\), it holds that \(\mathcal{V}({\sf com},{\sf decom})=b\).
The security of a commitment scheme can be defined in two complementary ways, protecting against either an all-powerful sender or an all-powerful receiver. The former are referred to as _statistically-binding_ commitment schemes, whereas the latter are referred to as _statistically-hiding_ commitment schemes. For simplicity, we assume that the associated “error” is zero, resulting in _perfectly-binding_ and _perfectly-hiding_ commitment schemes.
In order to define the security properties of such schemes, we first introduce the following notation. Given a commitment scheme \((\mathcal{S},\mathcal{R},\mathcal{V})\) and a Turing machine \(\mathcal{R}^{*}\), we denote by \({\sf view}_{\langle\mathcal{S}(b),\mathcal{R}^{*}\rangle}(1^{n})\) the distribution of the view of \(\mathcal{R}^{*}\) when interacting with \(\mathcal{S}(1^{n},b)\). This view consists of \(\mathcal{R}^{*}\)’s random coins and of the sequence of messages it receives from \(\mathcal{S}\). The distribution is taken over the random coins of both \(\mathcal{S}\) and \(\mathcal{R}\). Similarly, given a Turing machine \(\mathcal{S}^{*}\) we denote by \({\sf view}_{\langle\mathcal{S^{*}}(1^{n}),\mathcal{R}\rangle}(1^{n})\) the view of \(\mathcal{S}^{*}\) when interacting with \(\mathcal{R}(1^{n})\). Note that whenever no computational restrictions are assumed on \(\mathcal{S}^{*}\) or \(\mathcal{R}^{*}\), then without loss of generality they can be assumed to be deterministic.
**Definition A.2** (Perfectly-binding commitment): _A bit-commitment scheme \((\mathcal{S},\mathcal{R},\mathcal{V})\) is said to be perfectly-binding if it satisfies the following two properties:_
* **Computational hiding:** _for every probabilistic polynomial-time Turing machine_ \(\mathcal{R}^{*}\) _the ensembles_ \(\{{\sf view}_{\langle\mathcal{S}(0),\mathcal{R}^{*}\rangle}(1^{n})\}_{n\in \mathbb{N}}\) _and_ \(\{{\sf view}_{\langle\mathcal{S}(1),\mathcal{R}^{*}\rangle}(1^{n})\}_{n\in \mathbb{N}}\) _are computationally indistinguishable._
* **Perfect binding:** _for every Turing machine_ \(\mathcal{S}^{*}\)__ \[\Pr\left[({\sf(decom,decom^{\prime})}|{\sf com})\leftarrow\langle\mathcal{S}^{ *}(1^{n}),\mathcal{R}(1^{n})\rangle:\genfrac{}{}{0.0pt}{}{\mathcal{V}({\sf com },{\sf decom})=0}{\mathcal{V}({\sf com},{\sf decom^{\prime}})=1}\right]=0\enspace,\] _for all sufficiently large_ \(n\)_, where the probability is taken over the random coins of_ \(\mathcal{R}\)_._
Perfectly-binding commitments can be constructed assuming the existence of any one-way permutation [5]. The construction is “non-interactive,” meaning that the commitment phase consists of a single message sent from the sender \(\mathcal{S}\) to the receiver \(\mathcal{R}\).
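The classical construction behind this statement commits to a bit \(b\) by publishing \(f(x)\) for a random \(x\) together with \(b\) masked by a hard-core predicate of \(x\) (e.g., the Goldreich–Levin inner product). The sketch below follows that blueprint, but replaces the one-way permutation by an insecure toy bijection purely so the code runs; it is in no way a secure scheme.

```python
import secrets

N_BITS = 32

def toy_permutation(x: int) -> int:
    """Stand-in for a one-way permutation on N_BITS-bit strings.
    An affine map with an odd multiplier is a bijection mod 2**N_BITS,
    but it is trivially invertible -- illustration only."""
    return (x * 0x9E3779B1 + 0x7F4A7C15) % (1 << N_BITS)

def inner_product_bit(x: int, r: int) -> int:
    """Goldreich-Levin hard-core predicate <x, r> over GF(2)."""
    return bin(x & r).count("1") % 2

def commit(bit: int):
    x = secrets.randbits(N_BITS)
    r = secrets.randbits(N_BITS)
    com = (toy_permutation(x), r, bit ^ inner_product_bit(x, r))
    return com, x            # x serves as the decommitment

def reveal(com, x):
    """Returns the committed bit, or None if the opening is invalid."""
    y, r, masked = com
    if toy_permutation(x) != y:
        return None
    return masked ^ inner_product_bit(x, r)

com, decom = commit(1)
print(reveal(com, decom))    # -> 1; binding holds because f is a permutation
```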
**Definition A.3** (Perfectly-hiding commitment): _A bit-commitment scheme \((\mathcal{S},\mathcal{R},\mathcal{V})\) is said to be perfectly-hiding if it satisfies the following two properties:_
* **Perfect hiding:** _for every Turing machine_ \(\mathcal{R}^{*}\) _the ensembles_ \(\{{\sf view}_{\langle\mathcal{S}(0),\mathcal{R}^{*}\rangle}(1^{n})\}_{n\in \mathbb{N}}\) _and_ \(\{{\sf view}_{\langle\mathcal{S}(1),\mathcal{R}^{*}\rangle}(1^{n})\}_{n\in \mathbb{N}}\) _are identically distributed._
* **Computational binding:** _for every probabilistic polynomial-time Turing machine_ \(\mathcal{S}^{*}\) _there exists a negligible function_ \(\mu(n)\) _so that_ \[\Pr\left[({\sf(decom,decom^{\prime})}|{\sf com})\leftarrow\langle\mathcal{S}^{*}(1^{n}),\mathcal{R}(1^{n})\rangle:\genfrac{}{}{0.0pt}{}{\mathcal{V}({\sf com},{\sf decom})=0}{\mathcal{V}({\sf com},{\sf decom^{\prime}})=1}\right]<\mu(n)\enspace,\] _for all sufficiently large_ \(n\)_, where the probability is taken over the random coins of both_ \(\mathcal{S}^{*}\) _and_ \(\mathcal{R}\)_._
Perfectly-hiding commitments can be constructed assuming the existence of any one-way permutation [24]. This construction is “highly-interactive,” in that the commitment phase requires the exchange of \(n-1\) messages between the sender and the receiver, where \(n\) is the security parameter. By relaxing the hiding condition to be only “statistical” it is possible to weaken the underlying assumption to the existence of one-way functions [12]. Assuming the existence of collision resistant hash functions, it is possible to construct two-message statistically-hiding commitments [25, 28].
1201.4376 | {"language": "en", "source": "Arxiv", "date_download": "2024-12-03T00:00:00"} | {"doc_length": 32285, "num_imgs": 2, "llama3_tokens_count": 6724} | ["content_image/1201.4376/x1.png", "content_image/1201.4376/x2.png"] |

# Participatory Privacy: Enabling Privacy in Participatory Sensing
Emiliano De Cristofaro
Palo Alto Research Center (PARC)
edc@parc.com
Claudio Soriente
ETH Zurich, Switzerland
claudio.soriente@inf.ethz.ch
###### Abstract
Participatory Sensing is an emerging computing paradigm that enables the distributed collection of data by self-selected participants. It allows the increasing number of mobile phone users to share local knowledge acquired by their sensor-equipped devices, e.g., to monitor temperature, pollution level or consumer pricing information. While research initiatives and prototypes proliferate, their real-world impact often hinges on comprehensive user participation. If users have no incentive, or feel that their privacy might be endangered, it is likely that they will not participate.
In this article, we focus on privacy protection in Participatory Sensing and introduce a suitable privacy-enhanced infrastructure. First, we provide a set of definitions of privacy requirements for both data producers (i.e., users providing sensed information) and consumers (i.e., applications accessing the data). Then, we propose an efficient solution designed for mobile phone users, which incurs very low overhead. Finally, we discuss a number of open problems and possible research directions.
## 1 Introduction
In the last decade, researchers have envisioned the outbreak of Wireless Sensor Networks (WSNs) and predicted the widespread installation of sensors, e.g., in infrastructures, buildings, woods, rivers, or even the atmosphere. This has triggered a lot of interest in many different WSN topics, including identifying and addressing security issues, such as data integrity, node capture, secure routing, etc. On the contrary, privacy has not really been a concern in WSNs, as sensors are usually owned, operated, and queried by the same entity. (For instance, the National Department of Transportation deploys sensors and collects traffic information related to national highways.)
On the other hand, the proliferation of mobile phones, along with their pervasive connectivity, has propelled the amount of digital data produced and processed every day. This has driven researchers and IT professionals to discuss and develop a novel sensing paradigm, where sensors are not deployed in specific locations, but are _carried_ around by people. Today, many different sensors are already deployed in our mobile phones, and soon all our gadgets (e.g., even our clothes or cars) will embed a multitude of sensors (e.g., GPS, digital imagers, accelerometers, etc.). As a result, data collected by sensor-equipped devices becomes of extreme interest to other users and applications. For instance, mobile phones may report (in real time) temperature or noise level; similarly, cars may report on traffic conditions.
This paradigm is called _Participatory Sensing_ (PS) – sometimes also referred to as _opportunistic_ or _urban_ sensing [3]. It combines the ubiquity of personal devices with sensing capabilities typical of WSN. As the number of mobile phone subscriptions exceeds 5 billion, PS becomes a cutting-edge and effective distributed-computing (as well as business) model. We argue that PS appreciably expands the capabilities of WSN applications, e.g., allowing effective monitoring in scenarios where the setup of a WSN is either not economical or not feasible.
However, its success is strongly related to the number of users actually willing to commit personal device resources to sensing applications, and thus, to associated privacy concerns. Observe that sensing devices are no longer “dull” gadgets, owned by the entity querying them. They are personal devices that follow users at all times, and their reports often expose personal and sensitive information. Consider, for instance, a PS application like http://www.gasbuddy.com/ where gas prices are monitored via user reports, and information announced by participants inevitably exposes their current and past locations, hence, their movements. If users have no incentive to contribute sensed data or feel that their privacy might be violated, they will (most likely) refuse to participate. Thus, not only traditional security but also privacy issues must be taken into account.
In this article, we focus on privacy protection in PS. We define privacy in this new context, present a privacy-enhanced PS infrastructure, and elaborate on a number of desirable features which constitute challenging research problems. The proposed privacy-protecting layer can be easily adopted by available PS applications to enforce privacy and enhance user participation.
## 2 Participatory Sensing
**What is Participatory Sensing?** PS is an emerging paradigm that focuses on the seamless collection of information from a large number of connected, always-on, always-carried devices, such as mobile phones. PS leverages the wide proliferation of commodity sensor-equipped devices and the ubiquity of broadband network infrastructure to provide sensing applications where deployment of a WSN infrastructure is not economical or not feasible. PS provides fine-grained monitoring of environmental trends without the need to set up a sensing infrastructure. Our mobile phones _are_ the sensing infrastructure and the number and variety of applications are potentially unlimited. Users can monitor gas prices (http://www.gasbuddy.com/), traffic information (http://www.waze.com/), available parking spots (http://spotswitch.com/), just to cite a few. We refer readers to [4] for an updated list of papers and projects related to PS.
**What _isn’t_ Participatory Sensing?** PS is not a mere evolution of WSN, where motes are replaced by mobile phones. Sensors are now relatively powerful devices, such as mobile phones, with much greater resources than WSN motes. Their batteries can be easily recharged and production cost constraints are not as tight. They are extremely _mobile_, as they leverage the ambulation of their carriers. Moreover, in traditional WSNs, the network operator is always assumed to manage and own the sensors. On the contrary, this assumption does not fit most PS scenarios, where mobile devices are _tasked_ with gathering and sharing local knowledge. Hence, a sensor (or its owner) might choose whether to participate or not. As a result, in PS applications, different entities co-exist and might not trust each other.
**Participatory Sensing Components.** A typical PS infrastructure involves (at least) the following parties:
1. **Mobile Nodes** are the union of a carrier (i.e., a user) with a sensor installed on a mobile phone or other portable, wireless-enabled device. They provide reports and form the basis of any PS application.
2. **Queriers** subscribe to information collected in a PS application (e.g., “temperature in Irvine, CA”) and obtain corresponding reports.
3. **Network Operators** manage the network used to collect and deliver sensor measurements, e.g., they maintain GSM and/or 3G/4G networks.
4. **Service Providers** act as intermediaries between Queriers and Mobile Nodes, in order to deliver reports of interest to Queriers.
Queriers can subscribe to the appropriate Service Provider for one or more types of measurements. For example, assume that Alice subscribes to “available parking spots on W 16th Street, New York”, or Bob is interested in the “temperature in Central Park, New York”. In turn, Mobile Nodes share local knowledge—either voluntarily or in return for some profit—with one or more Service Providers, which make the information available to Queriers. For example, assume Carol’s mobile phone sends the report “3 available parking spots on E 56th, New York”, while John’s device sends “\(74^{o}F\) in Central Park, New York”.
As Mobile Nodes and Queriers have neither direct communication nor mutual knowledge, Service Providers route reports matching specific subscriptions to the corresponding Queriers. In fact, Mobile Nodes do not know which Queriers (if any) are interested in their reports. For example, the Service Provider forwards John’s temperature report to Bob; Carol’s parking report is not sent to Alice as it refers to a different location.
## 3 Privacy Concerns
PS provides an effective solution for a wide range of applications; however, it prompts several security and privacy concerns that need to be carefully addressed.
On the one hand, issues such as confidentiality or integrity can be mitigated using state-of-the-art techniques. For instance, all parties can be protected from external eavesdroppers using SSL/TLS. The latter provides a secure channel between any two parties, so that communications between Mobile Nodes and Service Providers or between Service Providers and Queriers are kept confidential.
On the other hand, the need for privacy protection stems from the potential leakage of personal information to _internal adversaries_. Indeed, as the Service Provider collects _all_ data (i.e., reports and queries), it might learn a considerable amount of sensitive information about both Mobile Nodes and Queriers, and violate the privacy of their movements, interests, habits, and more. For instance, the Service Provider learns that both Bob and John are located in Central Park, New York. It also learns that Alice is driving on W 16th Street, looking for parking. The continuous collection of information over long periods allows the Service Provider to meticulously profile users.
Further, as data collected through PS applications becomes available to external entities and organizations (i.e., the Queriers), query interests also become sensitive and need to be hidden. For instance, Service Providers should not learn which interests are “hot”.
Finally, there is a tension between privacy and accountability as PS business models may require, at the very least, that reports are available only to entitled (e.g., authorized or paying) members.
However, we claim there is one main reason to protect privacy. If users feel that their privacy is endangered, they will refuse to share their reports. Specifically, it is required that the Service Provider performs report/query matching but learns no information about query interests. Also, data reports should not reveal to the Service Provider, the Network Operator, or unauthorized Queriers, any information about a Mobile Node’s identity, its location, the type of measurement (e.g., temperature) or the quantitative information (e.g., \(74^{o}F\)).
## 4 A Novel Privacy-Enhanced Participatory Sensing Infrastructure
We now present our innovative solution for a Privacy-Enhanced Participatory Sensing Infrastructure (PEPSI). We describe its architecture and privacy desiderata, and overview our instantiation. Finally, we discuss efficiency costs introduced by the privacy-protecting layer.
### PEPSI Architecture
PEPSI protects privacy using efficient cryptographic tools. Similar to other cryptographic solutions, it introduces an additional (offline) entity, namely the Registration Authority. It sets up system parameters and manages the registration of Mobile Nodes and Queriers. However, the Registration Authority is not involved in real-time operations (e.g., query/report matching) nor is it trusted to intervene for protecting participants’ privacy.
<figure><img src="content_image/1201.4376/x1.png"><figcaption>Figure 1: Privacy-Enhanced Participatory Sensing Infrastructure.</figcaption></figure>
Figure 1 illustrates the PEPSI architecture. The Registration Authority can be instantiated by any entity in charge of managing participant registration (e.g., a phone manufacturer). A Service Provider offers PS applications (used, for instance, to report and access pollution data) and acts as an intermediary between Queriers and Mobile Nodes. Finally, Mobile Nodes send measurements acquired via their sensors using the network infrastructure and Queriers are users or organizations (e.g., bikers) interested in obtaining reports (e.g., pollution levels).
PEPSI allows the Service Provider to perform report/query matching while guaranteeing the privacy of both Mobile Nodes and Queriers. It aims at providing (provable) privacy _by design_, and starts off by defining a clear set of privacy properties.
### Privacy Desiderata
The _privacy desiderata_ of PS applications can be formalized as follows:
* _Soundness:_ Upon subscribing to a query, Queriers in possession of the appropriate authorization always obtain the desired query results.
* _Node Privacy:_ Neither the Network Operator, the Service Provider, nor any unauthorized Querier learns any information about the type of measurement or the data reported by a Mobile Node. Also, Mobile Nodes should not learn any information about other nodes’ reports. Only Queriers in possession of the corresponding authorization obtain reported measurements.
* _Query Privacy:_ Neither the Network Operator, the Service Provider, nor any Mobile Node or any other Querier learns any information about Queriers’ subscriptions.
* _Report Unlinkability:_ No entity can successfully link two or more reports as originating from the same Mobile Node. However, as we discuss below, we do not pursue Report Unlinkability with respect to the Network Operator.
* _Location Privacy:_ No entity can learn the current location of a Mobile Node. (Again, excluding the Network Operator).
In realistic scenarios, it appears unlikely – if not impossible – to guarantee Report Unlinkability and Location Privacy with respect to the Network Operator. In fact, PS strongly relies on the increasing use of broadband 3G/4G connectivity. In these networks, current technology does not make it possible to provide user anonymity with respect to the Network Operator. Mobile Nodes are identified through their International Mobile Subscriber Identity, and any technique for identifier obfuscation would lead to service disruption (e.g., the device would not receive incoming calls). Further, the regular usage of cellular networks (e.g., incoming/outgoing phone calls), as well as heartbeat messages exchanged with the network infrastructure, irremediably reveals the device’s location. To provide Report Unlinkability/Location Privacy with respect to other parties, we need to trust the Network Operator (who routes Mobile Nodes’ reports to Service Providers) not to forward any information identifying the Mobile Nodes (e.g., the identifier, the cell from which the report was originated, etc.).
### PEPSI Construction
One of the main goals of PEPSI is to hide reports and queries from unintended parties. Thus, those cannot be transmitted _in-the-clear_, but need to be encrypted. In this section, we discuss how to achieve, at the same time, (1) secure encryption of reports and queries, and (2) efficient and oblivious matching by the Service Provider. Due to space limitations and to ease presentation, we only provide an overview of our construction (with no technical details). We refer interested readers to the extended version of the paper (available on project page [4]) for a complete description of our techniques, as well as formal cryptographic proofs.
**A naïve solution.** Traditional confidentiality means are not suited for PS applications. Recall that in our context, Mobile Nodes and Queriers have no mutual knowledge or common history: that is, Mobile Nodes provide reports oblivious of (any) potential receiver, while Queriers subscribe to data reports not knowing who (if any) will ever provide measurements of interest. Hence, we cannot assume that each Mobile Node shares a unique pairwise secret key with each Querier and that reports are encrypted under that key via a symmetric-key cipher (e.g., AES). Even if we were to allow interactions between Mobile Nodes and Queriers, we would still need the former to encrypt reports under each key shared with Queriers. This would generate a number of ciphertexts quadratic in the number of measurements. Alternatively, we could use a public key encryption scheme and provide Mobile Nodes with the public keys of Queriers. Still, scalability would be an issue as each report would be encrypted under the public key of each Querier. In general, because of scalability and loose coupling between data producers and consumers, Mobile Nodes cannot provide measurements intended for a specific Querier and the latter cannot ask for data from a given Mobile Node.
Our main building block is Identity-Based Encryption (IBE) — a cryptographic primitive, based on bilinear map pairings, that enables asymmetric encryption using any string (“identity”) as a public key. In IBE, anyone can derive public keys from some unique information about the recipient’s identity. Private decryption keys are generated by a third-party, called the Private Key Generator (PKG). _Our intuition is to use a tagging mechanism on top of IBE._
**Report Encryption.** We assume that each report or subscription is identified by a set of labels, or keywords. These are used as “identities” in an IBE scheme. For example, labels “Temperature” and “Central Park, NY” can be used to derive a unique public encryption key, associated to a secret decryption key. Thus, Mobile Nodes can encrypt sensed data using report’s labels as the (public) encryption key. Queriers should then obtain the private decryption keys corresponding to the labels of interest. Those are obtained, upon query registration, from the Registration Authority – which, in practice, acts like a PKG.
**Efficient Matching using Cryptographic Tags.** After enabling encryption/decryption of reports, we need to allow the Service Provider to efficiently match them against queries. In fact, the application of IBE to PS settings is not trivial: with a straightforward use of IBE, oblivious matching of queries and reports would be impossible. In other words, the Service Provider would forward _all_ (encrypted) reports to all Queriers; each of them will only be able to decrypt reports of interest, i.e., the ones for which they hold the decryption keys. However, given the large amount of reports produced by Mobile Nodes, this would incur a considerable overhead for the Querier, who must try to decrypt all reports using each of her decryption keys. To address this problem, we propose an efficient tagging mechanism: Mobile Nodes _tag_ each report with a cryptographic token that identifies the nature of the report only to authorized Queriers, but does not leak any information about the report itself. Tags are computed using the same labels used to derive encryption keys. Similarly, Queriers compute tags for the labels defining their interests (using the corresponding decryption keys) and provide them to the Service Provider at query subscription.
Our main contribution, in this context, is to exploit the mathematical properties of bilinear map pairings: we ensure that, whenever a report matches a query, corresponding tags also match. In other words, a tag computed by John using the encryption key derived from label “temperature in Central Park, New York”, is equal to the tag computed by Bob using the decryption key computed over the same label. Specifically, Mobile Nodes upload reports along with the respective tags, while Queriers define their subscriptions uploading the tags they compute at the Service Provider. The latter can find matches (i.e., a tag related to a report equals the tag related to a subscription) without learning any information about underlying queries/reports.
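To make the matching property concrete, the following sketch reproduces the underlying bilinear-map identity using the bn128 pairing shipped with the Python `py_ecc` library. It is only an illustration under assumed key shapes (a group element handed to the Mobile Node at registration and a per-label key handed to the Querier); the function names are hypothetical, and the actual PEPSI algorithms are those described in the extended version of the paper [4].

```python
# Illustration only (not the PEPSI specification): a bilinear pairing lets two
# parties holding *different* secrets derive the same tag for the same label.
import hashlib
from py_ecc.bn128 import G1, G2, multiply, pairing, curve_order

def label_to_scalar(label: str) -> int:
    # Simplified stand-in for hashing a label into the group.
    return int.from_bytes(hashlib.sha256(label.encode()).digest(), "big") % curve_order

master_secret = 123456789012345                    # held by the Registration Authority
node_secret = multiply(G2, master_secret)          # handed to the Mobile Node offline

def querier_key(label: str):
    # Per-label (IBE-style) decryption key handed to an authorized Querier offline.
    return multiply(G1, (label_to_scalar(label) * master_secret) % curve_order)

def node_tag(label: str):
    # Mobile Node: computes the tag from the label and its registration secret.
    return pairing(node_secret, multiply(G1, label_to_scalar(label)))

def querier_tag(label: str):
    # Querier: computes the tag from its per-label key; no key is shared with the node.
    return pairing(G2, querier_key(label))

label = "Temp|Irvine, CA"
assert node_tag(label) == querier_tag(label)          # same label -> tags match
assert node_tag(label) != querier_tag("Temp|NYC")     # different labels -> no match
```

In practice the pairing value would be hashed before upload, consistent with the hash-sized tags discussed in the overhead section below, so that the Service Provider only ever compares opaque digests.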
<figure><img src="content_image/1201.4376/x2.png"><figcaption>Figure 2: PEPSI operations.</figcaption></figure>
### PEPSI Operations
Figure 2 shows how PEPSI works. The upper part of the figure depicts the offline operations where the Registration Authority is involved to register both Mobile Nodes and Queriers.
**Querier Registration.** In the example, Querier \(\mathcal{Q}\) (the laptop on the right side) picks “Temp” among the list of available queries and obtains the corresponding decryption key (yellow key).
**Mobile Node Registration.** Similarly, Mobile Node \(\mathcal{M}\) (the mobile phone on the left side) decides to report about temperature in its location and obtains the corresponding secret used for tagging (grey key).
The bottom part of Figure 2 shows the online operations where the Service Provider is involved.
**Querier Subscription.** \(\mathcal{Q}\) subscribes to queries of type “Temp” in “Irvine, CA” using these keywords and the decryption key acquired offline, to compute a (green) tag; the algorithm is referred to as \(TAG(\cdot)\). The tag leaks no information about \(\mathcal{Q}\)’s interest and is uploaded to the Service Provider.
**Data Report.** Any time \(\mathcal{M}\) wants to report about temperature, it derives the public encryption key (red key) for reports of type “Temp” (via the \(IBE(\cdot)\) algorithm) and encrypts the measurement; encrypted data is pictured as a vault. \(\mathcal{M}\) also tags the report using the secret acquired offline and a list of keywords characterizing the report; in the example \(\mathcal{M}\) uses keywords “Temp” and “Irvine, CA”. Our tagging mechanism leverages the properties of bilinear maps to make sure that, if \(\mathcal{M}\) and \(\mathcal{Q}\) use the same keywords, they will compute the same tag, even though each of them uses a different secret (\(\mathcal{M}\) is using the grey key while \(\mathcal{Q}\) is using the yellow one). As before, the tag and the encrypted report leak no information about the nature of the report or the nominal value of the measurement. Both tag and encrypted data are forwarded to the Service Provider.
**Report Delivery.** The Service Provider only needs to match tags sent by Mobile Nodes with the ones uploaded by Queriers. If the tags match, the corresponding encrypted report is forwarded to the Querier. In the example of Figure 2 the green tag matches the blue one, so the encrypted report (the vault) is forwarded to \(\mathcal{Q}\). Finally, \(\mathcal{Q}\) can decrypt the report using the decryption key and recover the temperature measurement.
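Operationally, the Service Provider’s side of Report Delivery reduces to equality checks on opaque tags. A minimal sketch of that matching step (data structures and names are ours, not the paper’s):

```python
# Minimal sketch of the Service Provider's matching step: it only compares opaque
# tags and forwards encrypted blobs, never seeing plaintext reports or interests.
from collections import defaultdict

subscriptions = defaultdict(list)          # tag -> list of subscribed Querier identifiers

def subscribe(querier_id: str, tag: bytes) -> None:
    subscriptions[tag].append(querier_id)

def handle_report(tag: bytes, encrypted_report: bytes) -> list:
    # Returns (querier, ciphertext) pairs to deliver; no decryption happens here.
    return [(q, encrypted_report) for q in subscriptions.get(tag, [])]

subscribe("Bob", b"tag(Temp|Central Park, NY)")
assert handle_report(b"tag(Temp|Central Park, NY)", b"<encrypted 74F>") == [("Bob", b"<encrypted 74F>")]
assert handle_report(b"tag(Parking|W 16th St, NY)", b"<encrypted 3 spots>") == []
```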
### PEPSI overhead
Resources in PS are not as constrained as in WSNs; nonetheless, the overhead incurred at Mobile Nodes should still be minimized. To foster the adoption of our solution in current PS applications we provide an experimental evaluation of the cost of the cryptographic operations used to achieve the intended privacy features. We implemented the protocol operations executed by Mobile Nodes on a Nokia N900 (equipped with a 600 MHz ARM processor and 256 MB RAM). Computation overhead, for every report, is due to the computation of the tag and the encryption of the measurement. In our experiments, we measure an average time (over \(100\) trials) of \(93.47\) ms to perform these operations.
Communication overhead is merely due to the transmission of the tag, which is the output of a hash function (e.g., SHA-1); thus, it is relatively small (\(160\) bits). The encryption of the measurement generates almost no overhead since, using state-of-the-art symmetric-key ciphers (e.g., AES), the ciphertext length is almost the same as the plaintext length.
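As a rough illustration of these size claims (the cipher choice below, AES-GCM via the third-party `cryptography` package, is ours and need not coincide with the one used in the PEPSI prototype):

```python
# Rough size check for the overhead figures quoted above. Library and inputs are
# illustrative choices, not the ones used in the PEPSI implementation.
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

tag = hashlib.sha1(b"Temp|Irvine, CA" + b"<tagging secret>").digest()
print(len(tag) * 8)                       # 160 bits, as quoted for the tag

key = AESGCM.generate_key(bit_length=128)
report = b"74F in Central Park, New York"
ciphertext = AESGCM(key).encrypt(os.urandom(12), report, None)
print(len(report), len(ciphertext))       # the ciphertext adds only a small constant (16 bytes here)
```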
Tag computation by Queriers is performed only once, during query subscription. Upon reception of measurements of interest, Queriers perform symmetric-key decryption, which incurs a negligible overhead.
Finally, note that the Service Provider incurs neither communication nor computational overhead: its task is limited to comparing outputs of hash functions (i.e., tags) and forwarding reports. From a functional point of view, the work of the Service Provider is no different from that in a non-privacy-preserving solution. Thus, privacy protection incurs no overhead at the Service Provider and enjoys scalability to large-scale scenarios. We conclude that our architecture is practical enough, today, to be deployed for real-world PS applications.
## 5 Related Work
**Participatory Sensing Projects.** In the last few years, Participatory Sensing initiatives have multiplied, ranging from research prototypes to deployed systems. Due to space limitations we briefly review some PS applications that potentially expose participants’ privacy (e.g., location, habits, etc.). Each of them can be easily enhanced with our privacy-protecting layer. Interested readers may find a larger list of PS applications at [4]. Quake-Catcher [1] aims at building the world’s largest, low-cost strong-motion seismic network by utilizing accelerometers embedded in any internet-connected device. Kim et al. [9] use the power of PS for the discovery of meaningful places (e.g., home, office, etc.). PS has been shown to be an effective means to monitor levels of air pollution [13], noise pollution [12] and water quality [10]. The use of PS to aid health care providers in patient monitoring has been investigated in [11].
**Privacy.** Little attention has been paid to the privacy issues arising in PS [15]. The authors of [2] study privacy in participatory sensing relying on weak assumptions: they attempted to protect the _anonymity_ of Mobile Nodes through the use of Mix Networks. (A Mix Network is a statistics-based anonymizing infrastructure that provides \(k\)-anonymity – i.e., an adversary cannot single out a user within a set of \(k\) users). However, Mix Networks are unsuitable for many PS settings. They do not attain provable privacy guarantees and assume the presence of a ubiquitous WiFi infrastructure used by Mobile Nodes, whereas PS applications leverage the increasing use of broadband 3G/4G connectivity. In fact, a ubiquitous presence of open WiFi networks is neither realistic today nor anticipated in the near future. By contrast, our work aims at identifying a minimal set of realistic assumptions and clear privacy guarantees to be achieved with provable security.
The work in [14] studies privacy-preserving data aggregation (e.g., computation of sum, average, variance, etc.). Similarly, [6] presents a solution for community statistics on time-series data, while protecting anonymity (using data perturbation in a closed community with a known empirical data distribution). Finally, [7] aims at guaranteeing integrity and authenticity of user-generated contents, by employing Trusted Platform Modules (TPMs).
The main technical challenge in providing provable privacy in participatory sensing infrastructure stems from the simultaneous presence of several mutually untrusted (and potentially unknown) entities, including data producers, data consumers, and Service Providers. A similar scenario arises in the context of _Publish-Subscribe_ networks [5], which face similar privacy concerns. However, state-of-the-art solutions (e.g., [8]) assume an a-priori knowledge (and key exchange) between publishers and subscribers, while PS applications require loose coupling between Mobile Nodes and Queriers. This makes it impossible to apply them to the PS scenario, where data producers and consumers may not know each other. Our solution protects their privacy while requiring no direct interaction between the two parties.
## 6 Conclusion & Open Problems
Participatory Sensing is a novel computing paradigm that bears a great potential. If users are incentivized to contribute personal device resources, a number of novel applications and business models will arise. In this article we discussed the problem of protecting privacy in Participatory Sensing. We claim that user participation cannot be achieved without protecting the privacy of both data consumers and data producers. We also proposed the architecture of a privacy-preserving Participatory Sensing infrastructure and introduced an efficient cryptographic solution that achieves privacy with provable security. Our solution can be adopted by current Participatory Sensing applications to enforce privacy and enhance user participation, with little overhead.
This work represents an initial foray into robust privacy guarantees in PS; thus, much remains to be done. Items for future work include (but are not limited to):
1. Protecting query privacy with respect to the Registration Authority. Recall, in fact, that Querier Alice needs to obtain the IBE decryption keys from the Registration Authority, which would then learn Alice’s query interests.
2. Protecting node privacy with respect to the Network Operator. Current technology does not allow users’ locations and identities to be hidden from the Network Operator. Hence, it is an interesting challenge to guarantee node anonymity in broadband networks.
3. Addressing collusion attacks, where multiple entities might collaborate in order to violate the privacy of Mobile Nodes or Queriers.
4. Improving the syntax of supported query types. In fact, PEPSI so far allows query/report matching based on the tags provided by both Mobile Nodes and Queriers. However, PS applications might require more complex queries where Queriers are interested in an aggregate of the reports (e.g., average or sum), or even complex query predicates (e.g., comparisons). While simple aggregate function evaluation over encrypted data is viable with available cryptographic techniques (e.g., homomorphic encryption), enabling _efficient_ evaluation of complex predicates remains an open challenge.
## References
* [1] E.S. Cochran and J.F. Lawrence and C. Christensen and R.S. Jakka, _The QuakeCatcher Network: Citizen science expanding seismic horizons,_ Seismological Research Letters, vol. 80, 2009, pp. 26-30
* [2] C. Cornelius and A. Kapadia and D. Kotz and D. Peebles and M. Shin and N. Triandopoulos, _AnonySense: Privacy-aware people-centric sensing,_ 6th International Conference on Mobile Systems, Applications, and Services (MobiSys), 2008, pp. 211-224.
* [3] D Cuff and M.H. Hansen and J. Kang, _Urban sensing: out of the woods,_ Commun. ACM, vol. 51, no. 3, 2008, pp. 24-33.
* [4] E. De Cristofaro and C. Soriente, _Privacy-Preserving Participatory Sensing Infrastructure,_ http://www.emilianodc.com/PEPSI/.
* [5] P.T. Eugster and P.A. Felber and R. Guerraoui and A.M. Kermarrec, _The many faces of publish/subscribe,_ ACM Computing Surveys, vol. 35, no. 2, 2003, pp. 114-131.
* [6] R.K. Ganti and N. Pham and Y.E. Tsai and T.F. Abdelzaher, _PoolView: stream privacy for grassroots participatory sensing,_ 6th International Conference on Embedded Networked Sensor Systems (SenSys) 2008, pp. 281-294.
* [7] P. Gilbert and L.P. Cox and J. Jung and D. Wetherall, _Toward trustworthy mobile sensing,_ 11th Workshop on Mobile Computing Systems and Applications (HotMobile), 2010, pp. 31-36.
* [8] M. Ion and G. Russello and B. Crispo, _Supporting Publication and Subscription Confidentiality in Pub/Sub Networks,_ 6th International ICST Conference on Security and Privacy in Communication Networks (SecureComm), 2010, pp. 272-289.
* [9] D.H. Kim and J. Hightower and R. Govindan and D. Estrin, _Discovering semantically meaningful places from pervasive RF-beacons,_ 11th International Conference on Ubiquitous Computing (UbiComp), 2009, pp. 21-30.
* [10] S. Kuznetsov and E. Paulos, _Participatory sensing in public spaces: activating urban surfaces with sensor probes,_ ACM Conference on Designing Interactive Systems (DIS), 2010, pp. 21-30.
* [11] B. Longstaff and S. Reddy and D. Estrin, _Improving activity classification for health applications on mobile devices using active and semi-supervised learning,_ 4th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2010, pp. 1-7.
* [12] N. Maisonneuve and M. Stevens and M.E. Niessen and L. Steels, _NoiseTube: Measuring and mapping noise pollution with mobile phones,_ 4th International ICSC Symposium on Information Technologies in Environmental Engineering (ITEE), 2009, pp. 215-228.
* [13] E. Paulos and R.J. Honicky and E. Goodman, _Sensing Atmosphere,_ Sensing on Everyday Mobile Phones in Support of Participatory Research (SenSys workshop), 2007, pp. 1-3.
* [14] J. Shi and R. Zhang and Y. Liu and Y. Zhang, _PriSense: Privacy-Preserving Data Aggregation in People-Centric Urban Sensing Systems,_ 29th IEEE International Conference on Computer Communications (INFOCOM), 2010, pp. 758-766.
* [15] K. Shilton, _Four billion little brothers?: Privacy, mobile phones, and ubiquitous data collection,_ Communications of the ACM, vol. 52, no. 11, 2009, pp 48-53.
|
0803.3280 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 6470,
"num_imgs": 0,
"llama3_tokens_count": 2644
} | [] | ## Appendix. Derivation of the modified data equations.
It suffices to consider the contributions of the s-mode (\({\bf\Psi}_{0}\)) to the fluxes of \({\cal M}\) and \(Q^{2}\) because only the s-mode contributes to the flux of \(Q^{2}\) [1], and only the flux of \(Q^{2}\) is significant when \(\kappa=\O\). For these contributions one has [1]
\[-\partial_{u}{\cal M} \!\!= \langle\,(\partial_{u}{\bf\Psi}_{0})^{2}\rangle\biggl{|}_{\I}-\xi \,\partial^{2}_{uu}\langle\,{\bf\Psi}_{0}{}^{2}\rangle\biggl{|}_{\I}\;,\] (A.1)
\[\partial_{u}Q^{2} \!\!=\] (A.2)
Here \(\xi\) is the parameter of the scalar-field equation [1] which in 4 dimensions is \(1/6\), and in 2 dimensions is zero. Both expectation values in (A.1), (A.2) are expressed through the same spectral function [1]:
\[\langle\,(\partial_{u}{\bf\Psi}_{0})^{2}\rangle\biggl{|}_{\I}=\frac{2}{(4\pi)^ {2}}\int\limits_{0}^{\infty}d\eout\,I_{0}(\eout,u)+\mbox{c.c.}\;,\] (A.3)
\[\langle\,{\bf\Psi}_{0}{}^{2}\rangle\biggl{|}_{\I}=\frac{2}{(4\pi)^{2}}\int \limits_{0}^{\infty}\frac{d\eout}{{\rm i}\eout}\int\limits_{-\infty}^{u}d{\bar {u}}\,I_{0}(\eout,{\bar{u}})\exp\left({\rm i}\eout(u-{\bar{u}})\right)+\mbox{c .c.}\;,\] (A.4)
\[I_{0}(\eout,u)=\int\limits_{0}^{\infty}d\ein\,{\dot{U}}(u)\int\limits_{-\infty }^{u+0}du^{\prime}\,\ein{\dot{U}}(u^{\prime})\exp\Bigl{(}{\rm i}(\Omega-\Omega ^{\prime})\Bigr{)}\;,\] (A.5)
\[\Omega-\Omega^{\prime}=\ein\left(U(u)-U(u^{\prime})\right)+\eout(u-u^{\prime})\;,\] (A.6)
\[{\dot{U}}(u)\equiv\frac{dU(u)}{du}=\exp\left(-\int\limits_{-\infty}^{u}du\, \kappa\right)\;.\] (A.7)
The calculation needs to be done under condition (26). In (A.5) make the replacement of the integration variables
\[y=\ein{\dot{U}}(u)\frac{1}{\wkappa(u)}\;,\qquad x=\ein{\dot{U}}(u^{\prime}) \frac{1}{\wkappa(u^{\prime})}\;.\] (A.8)
For the needed Jacobian to emerge, \(\wkappa(u)\) should satisfy Eq. (30), and, in order that the lower limit in \(x\) could be set equal to zero, \(\wkappa(u)\) should satisfy the boundary condition (31). With the function \(\wkappa(u)\) thus defined, one obtains
\[I_{0}(\eout,u)=\wkappa(u)\left[1+P\left(u,{\rm i}z,\frac{d}{d{\rm i}z}\right) \right]F(z)\] (A.9)
where
\[z=\frac{\eout}{\wkappa(u)}\;,\qquad F(z)=\frac{2\pi z{\rm e}^{-\pi z }}{{\rm e}^{\pi z}-{\rm e}^{-\pi z}}\;,\] (A.10)
and the function \(P\) is defined as follows. The equation
\[\ln\frac{x}{y}=\int\limits^{u}_{u^{\prime}}du^{\prime\prime}\,\wkappa(u^{ \prime\prime})\] (A.11)
following from (A.8) and (30) should be solved with respect to the quantity
\[\wkappa(u)(u-u^{\prime})=\ln\frac{x}{y}+f\Bigl{(}u,\ln\frac{x}{y}\Bigr{)}\;.\] (A.12)
The function \(P\) is expressed through \(f\) in (A.12) as
\[P\left(u,{\rm i}z,\ln\frac{x}{y}\right) \!\!= \exp\left({\rm i}zf\Bigl{(}u,\ln\frac{x}{y}\Bigr{)}\right)-1\] (A.13)
\[\!\!= {}\sum_{k,p}h_{k,p}(u)({\rm i}z)^{k}\left(\ln\frac{x}{y}\right)^{ k+p}\;,\qquad k\geq 1\;,\quad p\geq 1\]
and is a series of the form (A.13). Only the real part of \(I_{0}(\eout,u)\), and, therefore, only the even \(p\) in the series (A.13) contribute to the expectation value (A.3). Upon the insertion of (A.9) and (A.13) with \(p=2n\) in the spectral integral (A.3), this integral boils down to
\[\int\limits_{0}^{\infty}dz\,({\rm i}z)^{k}\left(\frac{d}{d{\rm i}z}\right)^{k+ 2n}F(z)={\rm i}(-1)^{k}k!\left(\frac{d}{d{\rm i}z}\right)^{2n-1}F(z)\Biggr{|}_ { z=0}\;.\] (A.14)
The even powers of \(z\) in \(F(z)\) drop out of this expression. Only the odd part of \(F(z)\) contributes, and this odd part is
\[\frac{1}{2}\left(F(z)-F(-z)\right)=-\pi z\;.\] (A.15)
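This can be seen directly from the definition of \(F(z)\) in (A.10), since
\[F(-z)=\frac{-2\pi z\,{\rm e}^{\pi z}}{{\rm e}^{-\pi z}-{\rm e}^{\pi z}}=\frac{2\pi z\,{\rm e}^{\pi z}}{{\rm e}^{\pi z}-{\rm e}^{-\pi z}}\;,\qquad\frac{1}{2}\Bigl{(}F(z)-F(-z)\Bigr{)}=\frac{\pi z\left({\rm e}^{-\pi z}-{\rm e}^{\pi z}\right)}{{\rm e}^{\pi z}-{\rm e}^{-\pi z}}=-\pi z\;.\]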
It follows that only the terms with \(p=2\) in the series (A.13) contribute to the expectation value (A.3), and this contribution can be calculated:
\[\langle\,(\partial_{u}{\bf\Psi}_{0})^{2}\rangle\biggl{|}_{\I}=\frac{1}{48\pi} \left(\wkappa^{2}-\frac{2}{\wkappa}\,\frac{d^{2}\wkappa}{du^{2}}+\frac{3}{ \wkappa^{2}}\,\left(\frac{d\wkappa}{du}\right)^{2}\right)\;.\] (A.16)
Eq. (30) for \(\wkappa\) can now be used to obtain finally
\[\langle\,(\partial_{u}{\bf\Psi}_{0})^{2}\rangle\biggl{|}_{\I}=\frac{1}{48\pi} \left(\kappa^{2}+2\,\frac{d\kappa}{du}\right)\;.\] (A.17)
If this expression is inserted in (A.1) with \(\xi=0\), the result for \(\partial_{u}{\cal M}\) will be precisely the one that the 2-dimensional effective action gives. This is a good check. For the calculation of the expectation value (A.4), introduce in (A.4) the new integration variables
\[\gamma=\eout(u-{\bar{u}})\;,\qquad\sigma=\int\limits^{u}_{\bar{u}}du^{\prime \prime}\,\wkappa(u^{\prime\prime})\;,\] (A.18)
and, as discussed in Ref. [1], set \(\wkappa(u)=\kappa(u)=0\) for \(u<u_{0}\). Then (A.4) will take the form
\[\langle\,{\bf\Psi}_{0}{}^{2}\rangle\biggl{|}_{\I}=\frac{2}{(4\pi)^{2}}\int \limits_{0}^{\infty}\frac{d\gamma}{{\rm i}\gamma}\,{\rm e}^{{\rm i} \gamma}\int\limits_{0}^{\Gamma}d\sigma\left(1+P\left({\bar{u}},{\rm i}{\bar{z} },\frac{d}{d{\rm i}{\bar{z}}}\right)\right)F({\bar{z}})+\mbox{c.c.}\] (A.19)
where
\[\Gamma=\int\limits^{u}_{u_{0}}du\,\wkappa\;,\qquad{\bar{z}}=\frac{\gamma}{ \wkappa({\bar{u}})(u-{\bar{u}})}\;,\] (A.20)
and the denominator of \({\bar{z}}\) should be expressed through \(\sigma\). This expression (with the obvious change of notation) is just (A.12). It is only important that
\[{\bar{z}}=O\left(\frac{\gamma}{\sigma}\right)\to 0\quad\;\mbox{as}\quad\sigma\to\infty\] (A.21)
since one only needs the asymptotics of (A.19) at \(\Gamma\to\infty\). From the leading asymptotics, the contribution of \(P\) drops out entirely, and one obtains
\[\langle\,{\bf\Psi}_{0}{}^{2}\rangle\biggl{|}_{\I}=\frac{4}{(4\pi)^{2}}\left( \int\limits_{0}^{\infty}d\gamma\,\frac{\sin\gamma}{\gamma}\right)F(0)\,\Gamma= \frac{1}{8\pi}\Gamma\;,\qquad\Gamma\to\infty\] (A.22)
\[\partial_{u}\langle\,{\bf\Psi}_{0}{}^{2}\rangle\biggl{|}_{\I}=\frac{1}{8\pi}\, \wkappa(u)\;,\qquad\int\limits^{u}_{u_{0}}du\,\wkappa\to\infty\;.\] (A.23)
That \(\Gamma\to\infty\) is the needed limit, i.e., that \(\Gamma\to\infty\) follows from (26), can be seen from the integrated Eq. (30):
\[\exp\left(-\int\limits^{u}_{u_{0}}du\,\wkappa\right)=\frac{\wkappa(u_{0})}{ \wkappa(u)}\exp\left(-\int\limits^{u}_{u_{0}}du\,\kappa\right)\;.\] (A.24)
Since \(\wkappa(u_{0})=O(1)\), this quantity vanishes in the limit (26) by virtue of the boundary condition (31). Eqs. (A.1), (A.2) with \(\xi=1/6\) and the expectation values inserted from (A.17) and (A.23) are the equations presented in the text.
|
1705.09257 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 41598,
"num_imgs": 5,
"llama3_tokens_count": 14561
} | [
"content_image/1705.09257/x1.png",
"content_image/1705.09257/x2.png",
"content_image/1705.09257/x3.png",
"content_image/1705.09257/x5.png",
"content_image/1705.09257/x6.png"
] | # Study of the \(Dkk\) and \(DK\bar{K}\) systems
V. R. Debastiani
vinicius.rodrigues@ific.uv.es
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain
J. M. Dias
jorgivan.morais@ific.uv.es
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain
Instituto de Física, Universidade de São Paulo, C.P. 66318, 05389-970 São Paulo, SP, Brazil
E. Oset
oset@ific.uv.es
Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia-CSIC Institutos de Investigación de Paterna, Aptdo. 22085, 46071 Valencia, Spain
February 29, 2024
###### Abstract
Using the Fixed Center Approximation to Faddeev equations we have investigated the \(DKK\) and \(DK\bar{K}\) three-body systems, considering that the \(DK\) interaction dynamically generates, through its \(I=0\) component, the \(D^{*}_{s0}(2317)\) molecule. According to our findings, for the \(DK\bar{K}\) interaction we have found evidence of a state \(I(J^{P})=1/2(0^{-})\) just above the \(D^{*}_{s0}(2317)\bar{K}\) threshold and around the \(Df_{0}(980)\) threshold, with mass of about \(2833-2858\) MeV, made mostly of \(Df_{0}(980)\). On the other hand, no evidence of a state from the \(DKK\) interaction is found. The state found could be seen in the \(\pi\pi D\) invariant mass.
## I Introduction
The study of three-body systems is one of the starting points in the study of nuclei and nuclear dynamics. The traditional Quantum Mechanical approach to this problem is based on the Faddeev equations [1] and the main application was done for three-nucleon systems. The simplicity of the Faddeev equations is deceiving, since in practice their evaluation is very involved and one approximation or another is done to solve them. One popular choice is the use of separable potentials to construct the two-body scattering amplitudes via the Alt-Grassberger-Sandhas (AGS) form of the Faddeev equations [2]. Incorporation of chiral symmetry into the scheme has led to interesting developments [3]. Another way to tackle these three-body systems is using a variational method [4; 5; 6]. Gradually, other systems involving not only nucleons or hyperons but mesons were tackled. The interaction of \(K^{-}d\) at threshold was thoroughly investigated using Faddeev equations [7; 8], or approximations to it, basically the Fixed Center Approximation (FCA) [9]. The investigation of a possible state of \(K^{-}pp\) nature has also received much attention [10; 11; 12; 13; 14; 15; 16] and, according to the calculations done in Ref. [17], the recent J-PARC experiment [18] has found support for this state.
Another step in this direction was the investigation of systems with two mesons and one baryon. Surprisingly, it was found in Refs. [19; 20; 21] that with such systems one could obtain the low-energy baryon states of \(J^{P}=1/2^{+}\). Work in this direction with different methods was also done in Ref. [22] for the \(\bar{K}\bar{K}N\) system and in Ref. [23] for the \(K\bar{K}N\) system. In this latter case a bound system developed, giving rise to an \(N^{*}\) state around 1920 MeV, mostly made of \(Na_{0}(980)\), which was also predicted in Ref. [21].
Systems of three mesons also followed, and in Ref. [24] the \(\phi K\bar{K}\) system was studied and shown to reproduce the properties of the \(\phi(2170)\). Similarly, in Ref. [25] the \(KK\bar{K}\) system is studied and the bound cluster found is associated with the \(K(1460)\). Another similar system, the \(\pi K\bar{K}\), is studied in Ref. [26] and the state found is associated with the \(\pi(1300)\). The \(\eta K\bar{K}\) and \(\eta^{\prime}K\bar{K}\) systems are also studied in Refs. [26; 27; 28] and they are revisited in Ref. [29] with the full Faddeev equations and more solid results.
An important result was found in Refs. [19; 20; 21; 24]. In the Faddeev equations one uses input from the two-body amplitudes of the different components and the off-shell part of the amplitudes appears in the calculations. This off-shell part is unphysical and observables cannot depend upon it. The finding in those works was that the use of chiral Lagrangians provides three-body contact terms that cancel the off-shell two-body contributions. In other calculations empirical three-body forces are introduced which might have some genuine part, but an important part of them serves the purpose of effectively cancelling these unphysical off-shell contributions. Rather than introducing these terms empirically and fitting them to some data, the message of those works is that, to make predictions, it is safer to use as input only on-shell two-body amplitudes, without extra three-body terms; an example of this is given in Ref. [21].
Extensions to the charm sector were also made. The \(DNN\) system, analogous to the \(\bar{K}NN\) system, is studied in Ref. [30], and the \(NDK\), \(\bar{K}DN\) and \(ND\bar{D}\) molecules are studied in Ref. [31]. In the hidden charm sector a resonance is found for the \(J/\psi K\bar{K}\) system which is associated with the \(Y(4260)\) in Ref. [32]. Closer to our work is that of Ref. [33], where the \(DK\bar{K}\) system is studied using QCD sum rules and Faddeev equations, and in both methods a state coupling strongly to \(Df_{0}(980)\) is found. We will study this system with a different method, and in addition the \(DKK\) system.
The foregoing review shows a recurring feature: systems that add \(K\bar{K}\) to another particle turn out to generate states in which the \(K\bar{K}\) pair clusters around the \(f_{0}(980)\) or the \(a_{0}(980)\). The \(DKK\) system benefits from the \(DK\) attraction that forms the \(D_{s0}^{*}(2317)\) according to works using chiral Lagrangians and the unitary approach [34; 35; 36; 37; 38; 39]. It is also supported by analysis of lattice QCD data [42]. However, the \(KK\) interaction is repulsive and the system might not bind. On the other hand, the \(DK\bar{K}\) system has repulsion for \(D\bar{K}\) in \(I=1\), and attraction for \(I=0\), and the \(DK\) interaction is attractive, as is also the \(K\bar{K}\) one. Altogether this latter system could have more chances to bind than the \(DKK\) system, but a detailed calculation is called for to find the answer, and this is the purpose of the present work.
The starting point of our approach is to use the FCA with a preexisting molecule, which is the \(D_{s0}^{*}(2317)\), formed from the \(DK\) interaction. On top of that, another \(K\) (or \(\bar{K}\)) is introduced which is allowed to undergo multiple scattering with the \(D\) and \(K\) components of the molecule. The result, as we shall see, is that in the \(DKK\) system we do not see a signal of a three-body bound state, but in the \(DK\bar{K}\) system we find a peak which we interpret as the \(K\bar{K}\) fusing to produce the \(f_{0}(980)\), which then gets bound to the \(D\) meson, and a narrow peak appears at an energy below the \(Df_{0}(980)\) threshold. Such a state could be seen in the \(\pi\pi D\) invariant mass.
## II Formalism
The Fixed Center Approximation (FCA) to Faddeev equations is useful when a light hadron \(H_{3}\) interacts with a cluster \(H\) composed of two other hadrons \(H_{1}\) and \(H_{2}\), \(H[H_{1}\,H_{2}]\), which are heavier than the first one, i.e. \(M_{(H[H_{1}\,H_{2}])}>M_{H_{3}}\). This cluster comes out from the two-body interaction between the hadrons \(H_{1}\) and \(H_{2}\) that can be described using a chiral unitary approach in coupled channels. Hence, the Faddeev equations in this approximation have as an input the two-body \(t\) matrices for the different pairs of mesons which form the system and, in this way, the dynamically generated bound states and resonances are encoded in them. In our case, we have \(H_{1}=D\) and \(H_{2}=K\) while \(H_{3}=\bar{K}\) if we consider the \(DK\bar{K}\) interaction or \(H_{3}=K\) for the \(DKK\) system. Both three-body interactions involve the \(D^{*}_{s0}(2317)\) and \(f_{0}(980)/a_{0}(980)\) molecules that, according to Refs. [37; 43], are dynamically generated through \(DK\) and \(K\bar{K}\) interactions, respectively, taking into account their associated coupled channels. Therefore, we shall have the following channels contributing to the three-body systems we are concerned with: (1) \(K^{-}[D^{+}K^{0}]\), (2) \(K^{-}[D^{0}K^{+}]\), (3) \(\bar{K}^{0}[D^{0}K^{0}]\), (4) \([D^{+}K^{0}]K^{-}\), (5) \([D^{0}K^{+}]K^{-}\) and (6) \([D^{0}K^{0}]\bar{K}^{0}\) for the \(DK\bar{K}\) interaction and (1) \(K^{+}[D^{+}K^{0}]\), (2) \(K^{+}[D^{0}K^{+}]\), (3) \(K^{0}[D^{+}K^{+}]\), (4) \([D^{+}K^{0}]K^{+}\), (5) \([D^{0}K^{+}]K^{+}\) and (6) \([D^{+}K^{+}]K^{0}\) for the \(DKK\) system. Note that the states (1), (2) and (3) are the same as (4), (5) and (6), respectively. Their distinction is to signify that the interaction in the FCA formalism occurs with the particle outside the cluster, which is represented by the brackets \([\,.\,.\,.]\), and the particle of the cluster next to it. This allows for a compact formulation that describes all the charge exchange steps and distinguishes the interaction with the right or left component of the cluster [17]. These channels will contribute to the \(T_{DK\bar{K}}\) and \(T_{DKK}\) three-body scattering matrices and, if those interactions generate bound states or resonances, they will manifest themselves as poles in the solutions of the Faddeev equations. In what follows we shall discuss how to construct these three-body scattering matrices and their solutions for both the \(DK\bar{K}\) and \(DKK\) systems.
<figure><img src="content_image/1705.09257/x1.png"><figcaption>Figure 1: Feynman diagrams for the K− multiple scattering of the processK−D+K0. The white circle indicates the D¯K→D¯K scattering amplitude while thegray bubble is associated with the one for DK¯K.</figcaption></figure>
### \(DK\bar{K}\) and \(DKK\) three-body systems
In order to write the contributions to the Faddeev equations of all the channels mentioned previously, we shall adopt the following procedure to construct the relevant amplitudes: for each channel the anti-kaon (kaon) to the left side in (1), (2) and (3) interacts with the hadron to its right side. Similarly, for (4), (5) and (6) the \(K\) or \(\bar{K}\) to the right interacts with the particle to its left. In doing so, we can keep track of which of the two cluster mesons the anti-kaon (kaon) interacts with first and last. This procedure is similar to that used in Ref. [17] to study the \(\bar{K}NN\) interaction. For instance, in the \(DK\bar{K}\) system, the channel (1) \(K^{-}[D^{+}K^{0}]\) in the initial state means that the \(K^{-}\) interacts with the \(D^{+}\) meson to its right. The channel (4) \([D^{+}K^{0}]K^{-}\) indicates that the \(K^{-}\) interacts with the \(K^{0}\) to its left. This procedure allows us to divide the multiple anti-kaon (kaon) scattering process in such a way that the formulation of the multiple scattering becomes easier.
In order to illustrate the structure of the multiple scattering in the fixed center approximation we define the partition functions \(T^{\rm FCA}_{ij}\), which contain all possible intermediate multiple steps, where the first index refers to the initial \(\bar{K}[DK]\), (1), (2) and (3), or \([DK]\bar{K}\), (4), (5) and (6), states and the second index to the final state. If we consider the \(K^{-}[D^{+}K^{0}]\to K^{-}[D^{+}K^{0}]\) amplitude denoted by \(T^{\rm FCA}_{11}\), which is diagrammatically represented in Fig. 1, it is given by the following expression [45; 17]
\[T^{\rm FCA}_{11}(s)=t_{1}+t_{1}\,G_{0}\,T^{\rm FCA}_{41}+t_{2}\, G_{0}\,T^{\rm FCA}_{61}\,,\] (1)
which tells us that the transition from \(K^{-}[D^{+}K^{0}]\) to itself is given in terms of single and double scattering, coupled to the amplitudes \(T^{\rm FCA}_{ij}\) related to the other channels. As a result, the three-body problem is given in terms of the \(T^{\rm FCA}_{ij}\) partitions, where the \(i,\,j\) indices run from 1 to 6 and stand for the initial and final channels, respectively, and, as we will discuss later, can be displayed in matrix form.
In Eq. (1), \(s\) is the Mandelstam variable equal to the square of the total three-body energy, while \(t_{1}\) and \(t_{2}\) are, respectively, the \(D^{+}K^{-}\to D^{+}K^{-}\) and \(D^{+}K^{-}\to D^{0}\bar{K}^{0}\) two-body scattering amplitudes studied in Ref. [37], in which the authors have applied the chiral unitary approach in coupled channels to investigate the \(D\bar{K}\) and \(DK\) two-body interactions. \(G_{0}\) is the kaon propagator [40] between the particles of the cluster, which is evaluated through the equation below
\[G_{0}(s)=\frac{1}{2M_{D^{*}_{s0}}}\int\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac{F_{ R}({\bf q})}{(q^{0})^{2}-\omega^{2}_{K}({\bf q})+i\epsilon}\,,\] (2)
with \(\omega^{2}_{K}({\bf q})\equiv{\bf q}^{2}+m^{2}_{K}\) and \(q^{0}\) the energy carried by the kaon in the cluster rest frame (the frame in which \(F_{R}({\bf q})\) is calculated), which corresponds to the following expression
\[q^{0}(s)=\frac{s-m^{2}_{K}-M^{2}_{D^{*}_{s0}}}{2M_{D^{*}_{s0}}}\,.\] (3)
In this work, we are using isospin-symmetric masses, such that \(m_{D}\) and \(m_{K}\) are the average masses of the \(D\) and \(K\) mesons, respectively, while \(M_{D^{*}_{s0}}\) is the \(D^{*}_{s0}\) molecule mass. The dynamics of this molecule does not come into play explicitly in our formalism. The information on the molecule is encoded in the function \(F_{R}({\bf q})\) appearing in Eq. (2), the form factor, which is related to the cluster wave function by a Fourier transform, as discussed in Refs. [45; 46]. According to these works, for the form factor to be used consistently, the theory that generates the bound states and resonances (clusters), the chiral unitary approach, which is developed for scattering amplitudes, has to be extended to wave functions. This was done in those references for \(s\)-wave bound states and \(s\)-wave resonant states, as well as for states with arbitrary angular momentum [41]. In our work we need the form factor expression only for \(s\)-wave bound states, which is given by [45]
\[F_{R}({\bf q})=\frac{1}{N}\int\limits_{|{\bf p}|,|{\bf p}-{\bf q }|<\Lambda}\,d^{3}{\bf p}\,\,\frac{1}{M_{D^{*}_{s0}}-\omega_{D}({\bf p})- \omega_{K}({\bf p})}\,\frac{1}{M_{D^{*}_{s0}}-\omega_{D}({\bf p}-{\bf q})- \omega_{K}({\bf p}-{\bf q})}\,,\] (4)
where \(\omega_{D}({\bf p})\equiv\sqrt{{\bf p}^{2}+m^{2}_{D}}\) and the normalization factor \(N\) is
\[N=\int\limits_{|{\bf p}|<\Lambda}\,d^{3}{\bf p}\,\Big{(}\,\frac{ 1}{M_{D^{*}_{s0}}-\omega_{D}({\bf p})-\omega_{K}({\bf p})}\Big{)}^{2}\,.\] (5)
The upper integration limit \(\Lambda\) has the same value as the cut-off used to regularize the \(DK\) loop, adjusted in order to obtain the \(D^{*}_{s0}(2317)\) molecule from the \(DK\) interaction.
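A simple way to evaluate the form factor of Eqs. (4) and (5) numerically is by Monte Carlo sampling of the constrained momentum integrals. The sketch below is only indicative: the masses are approximate isospin-averaged values, the cut-off is the \(650\) MeV quoted above, the units are MeV, and the sample size is an arbitrary choice.

```python
# Indicative Monte Carlo evaluation of the form factor F_R(q), Eqs. (4)-(5).
# Units: MeV. Masses are approximate isospin averages; Lambda follows the text.
import numpy as np

m_D, m_K, M_Ds0, Lam = 1867.2, 495.6, 2317.0, 650.0

def prop(k2):
    # 1 / (M_Ds0 - omega_D(k) - omega_K(k)), with k2 = |k|^2 (never singular here).
    return 1.0 / (M_Ds0 - np.sqrt(k2 + m_D**2) - np.sqrt(k2 + m_K**2))

rng = np.random.default_rng(0)
p = rng.uniform(-Lam, Lam, size=(1_000_000, 3))
p = p[np.einsum("ij,ij->i", p, p) < Lam**2]        # uniform points in the ball |p| < Lambda
fp = prop(np.einsum("ij,ij->i", p, p))

def form_factor(q_mod):
    q = np.array([0.0, 0.0, q_mod])
    pq2 = np.einsum("ij,ij->i", p - q, p - q)
    num = np.where(pq2 < Lam**2, fp * prop(pq2), 0.0)   # constraint |p - q| < Lambda
    return num.mean() / (fp**2).mean()                  # the ball volume cancels in the ratio

print(form_factor(0.0))      # -> 1 by construction (F_R(0) = 1)
print(form_factor(300.0))    # falls off as |q| grows
```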
Analogously to \(T^{\rm FCA}_{11}\) expressed in Eq. (1), we can calculate all the relevant multiple scattering amplitudes, the partitions \(T^{\rm FCA}_{ij}\), using diagrams similar to the one in Fig. 1. As a result, they can be written as
\[T^{\rm FCA}_{ij}(s)=V^{\rm FCA}_{ij}(s)+\sum\limits_{l=1}^{6} \tilde{V}^{\rm FCA}_{il}(s)\,G_{0}(s)\,T^{\rm FCA}_{lj}(s)\,,\] (6)
where \(V_{ij}\) and \(\tilde{V}_{il}\) are the elements of the matrices below
\[V^{\rm FCA}=\left(\begin{array}[]{@{\,}cccccc@{\,}}\,t_{1}&0&t_{2}&0&0&0\,\\ \,0&t_{3}&0&0&0&0\,\\ \,t_{2}&0&t_{4}&0&0&0\,\\ \,0&0&0&t_{5}&0&0\,\\ \,0&0&0&0&t_{6}&t_{7}\,\\ \,0&0&0&0&t_{7}&t_{8}\,\\ \end{array}\right),\quad\tilde{V}^{\rm FCA}=\left(\begin{array}[]{@{\,}cccccc@ {\,}}\,0&0&0&t_{1}&0&t_{2}\,\\ \,0&0&0&0&t_{3}&0\,\\ \,0&0&0&t_{2}&0&t_{4}\,\\ \,t_{5}&0&0&0&0&0\,\\ \,0&t_{6}&t_{7}&0&0&0\,\\ \,0&t_{7}&t_{8}&0&0&0\,\\ \end{array}\right).\] (7)
Therefore, according to Eq. (6), in our case we can solve the three-body problem in terms of the multiple scattering amplitudes given by partitions \(T^{\rm FCA}_{ij}\), which contain only the \(D\bar{K}\) and \(K\bar{K}\) two-body amplitudes. Thus, for the \(DK\bar{K}\) system the solution of the scattering equation, Eq. (6), will be
\[T^{\rm FCA}_{ij}(s)=\sum\limits_{l=1}^{6}\Big{[}\,1-\tilde{V}^{ \rm FCA}(s)\,G_{0}(s)\,\Big{]}^{-1}_{il}\,V_{lj}^{\rm FCA}(s)\,.\] (8)
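Before turning to the \(DKK\) case, note that, once the two-body amplitudes and \(G_{0}\) are known at a given \(\sqrt{s}\), Eq. (8) is just a \(6\times 6\) linear system. The schematic numpy sketch below illustrates this structure with placeholder complex numbers for \(t_{1},\ldots,t_{8}\) and \(G_{0}\); in the actual calculation these come from the unitarized two-body amplitudes and from Eq. (2).

```python
# Schematic solution of Eq. (8), T = (1 - Vtilde*G0)^(-1) V, for the D K Kbar case.
# The t_i and G0 below are placeholders, not physical values.
import numpy as np

t1, t2, t3, t4, t5, t6, t7, t8 = (0.1 + 0.02j,) * 8   # placeholder two-body amplitudes
G0 = -0.05 + 0.0j                                     # placeholder kaon propagator, Eq. (2)

V = np.array([[t1, 0,  t2, 0,  0,  0 ],
              [0,  t3, 0,  0,  0,  0 ],
              [t2, 0,  t4, 0,  0,  0 ],
              [0,  0,  0,  t5, 0,  0 ],
              [0,  0,  0,  0,  t6, t7],
              [0,  0,  0,  0,  t7, t8]], dtype=complex)

Vt = np.array([[0,  0,  0,  t1, 0,  t2],
               [0,  0,  0,  0,  t3, 0 ],
               [0,  0,  0,  t2, 0,  t4],
               [t5, 0,  0,  0,  0,  0 ],
               [0,  t6, t7, 0,  0,  0 ],
               [0,  t7, t8, 0,  0,  0 ]], dtype=complex)

T = np.linalg.solve(np.eye(6) - Vt * G0, V)           # partitions T^FCA_ij of Eq. (8)

# Kbar-D*_s0(2317) amplitude: the combination over channels 1, 2, 4, 5 defined
# later in Eq. (12) (0-based indices here).
idx = [0, 1, 3, 4]
T_X_Ds0 = 0.5 * sum(T[i, j] for i in idx for j in idx)
print(T_X_Ds0)
```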
Analogously, for the \(DKK\) system, we will have the same solution as in Eq. (8). However, in this case, the \(\tilde{V}^{\rm FCA}\) and \(V^{\rm FCA}\) matrices, in terms of the \(DK\) and \(KK\) two-body amplitudes, are now given by
\[V^{\rm FCA}=\left(\begin{array}[]{@{\,}cccccc@{\,}}\,\bar{t}_{1}&0&0&0&0&0\,\\ \,0&\bar{t}_{2}&\bar{t}_{3}&0&0&0\,\\ \,0&\bar{t}_{3}&\bar{t}_{4}&0&0&0\,\\ \,0&0&0&\bar{t}_{5}&0&\bar{t}_{5}\,\\ \,0&0&0&0&\bar{t}_{6}&0\,\\ \,0&0&0&\bar{t}_{5}&0&\bar{t}_{5}\,\\ \end{array}\right),\quad\tilde{V}^{\rm FCA}=\left(\begin{array}[]{@{\,}cccccc@ {\,}}\,0&0&0&\bar{t}_{1}&0&0\,\\ \,0&0&0&0&\bar{t}_{2}&\bar{t}_{3}\,\\ \,0&0&0&0&\bar{t}_{3}&\bar{t}_{4}\,\\ \,\bar{t}_{5}&0&\bar{t}_{5}&0&0&0\,\\ \,0&\bar{t}_{6}&0&0&0&0\,\\ \,\bar{t}_{5}&0&\bar{t}_{5}&0&0&0\,\\ \end{array}\right).\] (9)
The elements of the matrices in Eqs. (7) and (9), i.e. \(t_{1},\,t_{2},\,.\,.\,.\,,t_{8}\) and \(\bar{t}_{1},\,.\,.\,.,\,\bar{t}_{6}\), related to the three-body \(DK\bar{K}\) and \(DKK\) systems, are the two-body scattering matrix elements, respectively given by
\[\begin{array}[]{@{\,}cc@{\,}}\,t_{1}=t_{D^{+}K^{-}\to D^{+}K^{-}}\,;&t_{4}=t_{ D^{0}\bar{K}^{0}\to D^{0}\bar{K}^{0}}\,;\,\\ \,t_{2}=t_{D^{+}K^{-}\to D^{0}\bar{K}^{0}}\,;&t_{5}=t_{K^{0}K^{-}\to K^{0}K^{- }}\,;\,\\ \,t_{3}=t_{D^{0}K^{-}\to D^{0}K^{-}}\,;&t_{6}=t_{K^{+}K^{-}\to K^{+}K^{-}}\,; \,\\ \end{array}\quad\begin{array}[]{@{\,}c@{\,}}\,t_{7}=t_{K^{+}K^{-}\to K^{0}\bar {K}^{0}}\,;\,\\ \,t_{8}=t_{K^{0}\bar{K}^{0}\to K^{0}\bar{K}^{0}}\,,\,\\ \,\end{array}\] (10)
and
\[\begin{array}[]{@{\,}cc@{\,}}\,\bar{t}_{1}=t_{D^{+}K^{+}\to D^{+}K^{+}}\,;& \bar{t}_{4}=t_{D^{+}K^{0}\to D^{+}K^{0}}\,;\,\\ \,\bar{t}_{2}=t_{D^{0}K^{+}\to D^{0}K^{+}}\,;&\bar{t}_{5}=t_{K^{+}K^{0}\to K^{ +}K^{0}}\,;\,\\ \,\bar{t}_{3}=t_{D^{0}K^{+}\to D^{+}K^{0}}\,;&\bar{t}_{6}=t_{K^{+}K^{+}\to K^{ +}K^{+}}\,,\,\\ \end{array}\] (11)
which we shall discuss in the next subsection.
It is important to mention that, in this work, we are using the Mandl and Shaw normalization, which has different weight factors for the particle fields. In order to use these factors in a consistent manner in our problem, we should take into account how they appear in the single-scattering and double-scattering contributions as well as in the full amplitude. The detailed calculation on how to do this can be found in Refs. [45; 46; 40]. According to these works, this is done by multiplying the two-body amplitudes by the factor \(M_{c}/M_{1(2)}\), where \(M_{c}\) is the cluster mass while \(M_{1(2)}\) is associated with the mass of the hadrons \(H_{1}\) and \(H_{2}\). In our case, we have \(M_{c}/M_{D}\) for the two-body amplitudes related to the \(D\bar{K}(DK)\) and \(M_{c}/M_{K}\) for the one related to the \(K\bar{K}(KK)\) appearing in Eqs. (10) and (11).
Once we solve the Faddeev equations for the systems we are concerned with, we have to write this solution in such a way that it represents the amplitude of a \(\bar{K}\,(K)\) meson interacting with the \(D^{*}_{s0}\) molecule, which is the \(DK\) cluster written in an \(I=0\) combination. Taking into account that \(|DK(I=0)\,\rangle=(1/\sqrt{2})\,|\,D^{+}K^{0}+D^{0}K^{+}\,\rangle\) (recall \((D^{+},-D^{0})\) is the isospin doublet), and summing the cases where the odd \(\bar{K}\,(K)\) interacts first to the left (right) of the cluster and finishes interacting at the left (right), we obtain the following combination for both the \(DK\bar{K}\) and \(DKK\) systems,
\[T_{X-D^{*}_{s0}} = \frac{1}{2}\Big{(}T^{\rm FCA}_{11}+T^{\rm FCA}_{12}+T^{\rm FCA}_{ 14}+T^{\rm FCA}_{15}+T^{\rm FCA}_{21}+T^{\rm FCA}_{22}+T^{\rm FCA}_{24}+T^{\rm FCA }_{25}+T^{\rm FCA}_{41}+T^{\rm FCA}_{42}+T^{\rm FCA}_{44}+T^{\rm FCA}_{45}\] (12)
\[+ T^{\rm FCA}_{51}+T^{\rm FCA}_{52}+T^{\rm FCA}_{54}+T^{\rm FCA}_{ 55}\Big{)}\,,\]
where \(X\) denotes a \(\bar{K}\) in the \(DK\bar{K}\) case and a \(K\) meson for \(DKK\) interaction.
### Two-body amplitudes
In order to solve the Faddeev equations using the FCA for the systems we are concerned with, we need to know the two-body scattering amplitudes appearing in Eqs. (10) and (11). They were studied in Refs. [37; 43]. These amplitudes are calculated using the chiral unitary approach (for a review see [47]). In this model, the transition amplitudes between the different pairs of mesons are extracted from Lagrangians based on symmetries such as chiral and heavy-quark symmetry. They are then unitarized by using them as the kernels of the Bethe-Salpeter equation, which in its on-shell factorization form is given by
\[t=(1-v\,G)^{-1}\,v\,,\] (13)
where \(G\) is the two meson loop function and its expression in dimensional regularization method is
\[G(s_{i}) = \frac{1}{16\pi^{2}}\Big{\{}\alpha_{i}(\mu)+\log\frac{m^{2}_{1}}{\mu^{2}}+\frac{m^{2}_{2}-m^{2}_{1}+s_{i}}{2s_{i}}\log\frac{m^{2}_{2}}{m^{2}_{1}}+\frac{p}{\sqrt{s_{i}}}\Big{[}\,\log(s_{i}-m^{2}_{2}+m^{2}_{1}+2p\sqrt{s_{i}})\] (14)
\[- \log(-s_{i}+m^{2}_{2}-m^{2}_{1}+2p\sqrt{s_{i}})+\log(s_{i}+m^{2}_{2}-m^{2}_{1}+2p\sqrt{s_{i}})-\log(-s_{i}-m^{2}_{2}+m^{2}_{1}+2p\sqrt{s_{i}})\Big{]}\Big{\}}\,,\]
with \(m_{1}\) and \(m_{2}\) standing for the meson masses in the loop of channel \(i\), and \(p\) the three-momentum in the two-meson center-of-mass frame at energy \(\sqrt{s_{i}}\). In Eq. (14) \(\mu\) is a scale fixed a priori and the subtraction constant \(\alpha(\mu)\) is a free parameter. In Ref. [37], \(\mu\) is considered to be equal to \(1500\) MeV for the \(D\bar{K}\) system, corresponding to \(\alpha_{D\bar{K}}=-1.15\). On the other hand, since the amount of \(DK\) content in \(D^{*}_{s0}(2317)\) is about \(70\%\)[42], we consider just one channel, with \(\alpha_{DK}=-0.925\), adjusted to reproduce the \(D^{*}_{s0}(2317)\) peak, corresponding to a cut-off value equal to \(650\) MeV. This value also has to be used as the upper limit in the integrals given by Eqs. (4) and (5). For the \(f_{0}(980)/a_{0}(980)\) we consider the same channels as in Refs. [49; 50], where a cut-off equal to \(600\) MeV was used to regularize the loops, given by
\[G(s_{l})=\int\frac{d^{3}{\bf q}}{(2\pi)^{3}}\frac{\omega_{1}({\bf q})+\omega_{ 2}({\bf q})}{2\omega_{1}({\bf q})\omega_{2}({\bf q})}\frac{1}{(P^{0})^{2}-[ \omega_{1}({\bf q})+\omega_{2}({\bf q})]^{2}+i\epsilon}\,,\] (15)
where \((P^{0})^{2}=s_{l}\), the two-body center-of-mass energy squared. The index \(l\) stands for the following channels: 1) \(\pi^{+}\pi^{-}\), 2) \(\pi^{0}\pi^{0}\), 3) \(K^{+}K^{-}\), 4) \(K^{0}\bar{K}^{0}\), 5) \(\eta\eta\) and 6) \(\pi\eta\). In each channel \(\omega_{1(2)}({\bf q})=\sqrt{{\bf q}^{2}+m_{1(2)}^{2}}\,\), where \(m_{1(2)}\) is the mass of the mesons inside the loop.
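For reference, after performing the angular integration the cut-off loop function in Eq. (15) reduces to a one-dimensional radial integral that is straightforward to evaluate numerically. A minimal sketch (written for energies below threshold, where the integrand has no pole; masses and the \(600\) MeV cut-off are the values quoted above, in MeV) could be:

```python
# Minimal numerical sketch of the cut-off loop function of Eq. (15), reduced to a
# 1D radial integral; valid as written only below the two-meson threshold.
import numpy as np
from scipy.integrate import quad

def G_cutoff(sqrt_s, m1, m2, qmax=600.0):
    s = sqrt_s**2
    def integrand(q):
        w1, w2 = np.sqrt(q**2 + m1**2), np.sqrt(q**2 + m2**2)
        return q**2 * (w1 + w2) / (2.0 * w1 * w2 * (s - (w1 + w2)**2))
    val, _ = quad(integrand, 0.0, qmax)
    return val / (2.0 * np.pi**2)

# Example: the K+K- loop slightly below the K Kbar threshold (units: MeV).
print(G_cutoff(980.0, 493.7, 493.7))   # a small negative real number
```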
In order to get the scattering amplitude for the \(KK\) interaction, we follow Ref. [43]. First, we have to find the kernel \(v\) to be used in Eq. (13). This kernel is the lowest order amplitude describing the \(KK\) interaction and it is calculated using the chiral Lagrangian
\[\mathcal{L}_{2}=\frac{1}{12\,f_{\pi}^{2}}\langle\,\,(\partial_{ \mu}\Phi\,\Phi-\Phi\,\partial_{\mu}\Phi)^{2}+M\Phi^{4}\,\,\rangle\,,\] (16)
where \(\langle\,.\,.\,.\,\rangle\) means the trace in the flavour space of the \(SU(3)\) matrices appearing in \(\Phi\) and \(M\) while \(f_{\pi}\) is the pion decay constant. The matrices \(\Phi\) and \(M\) are given by
\[\Phi=\left(\begin{array}[]{@{\,}ccc@{\,}}\,\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta _{8}}{\sqrt{6}}&\pi^{+}&K^{+}\,\\ \,\pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_{8}}{\sqrt{6}}&K^{0}\,\\ \,K^{-}&\bar{K}^{0}&-\frac{2\,\eta_{8}}{\sqrt{6}}\,\\ \end{array}\right)\,;\quad M=\left(\begin{array}[]{@{\,}ccc@{\,}}\,m^{2}_{\pi} &0&0\,\\ \,0&m^{2}_{\pi}&0\,\\ \,0&0&2\,m^{2}_{K}-m^{2}_{\pi}\,\\ \end{array}\right)\,,\] (17)
where in \(M\) we have taken the isospin limit (\(m_{u}=m_{d}\)). Hence, from Eqs. (16) and (17) we can calculate the tree level amplitudes for \(K^{+}K^{0}\) and \(K^{+}K^{+}\), which after projection in \(s\)-wave read as
(18)
where \(s_{KK}\) is the Mandelstam variable \(s\) in the \(KK\) center-of-mass frame. From these equations one finds that \(v^{I=0}_{KK}=0\) (and hence \(t^{I=0}_{KK}=0\)). Taking the unitary normalization appropriate for identical particles, \(|K^{+}K^{+},I=1\rangle=|K^{+}K^{+}\rangle/\sqrt{2}\), we find \(v^{I=1}_{KK}=\frac{1}{2}v_{K^{+}K^{+}\,,\,K^{+}K^{+}}\). The \(t\) matrix is then \(t^{I=1}_{KK}=(1-v^{I=1}_{KK}\,G_{KK})^{-1}\,v^{I=1}_{KK}\), and \(t^{I=1}_{KK}\) has to be multiplied by two to restore the standard normalization. Using these expressions we obtain the \(KK\) scattering amplitudes \(\bar{t}_{5}\) and \(\bar{t}_{6}\) appearing in Eq. (11) (\(\bar{t}_{6}=t^{I=1}_{KK}\) and \(\bar{t}_{5}=\frac{1}{2}t^{I=1}_{KK}\), with \(t^{I=1}_{KK}\) in the standard normalization). A cut-off of \(600\) MeV is used to regularize the \(KK\) loops, the same as in the \(K\bar{K}\) and coupled-channels system. After these considerations we are able to determine all the two-body amplitudes in Eqs. (10) and (11).
It is worth mentioning that the arguments of the partitions \(T^{\rm FCA}_{ij}(s)\) and of the two-body amplitudes \(t_{i}(s_{i})\) are different. While the former are functions of the three-body center-of-mass energy \(\sqrt{s}\), the latter depend on the two-body one. In order to write the \(\sqrt{s_{i}}\)'s in terms of \(\sqrt{s}\), we use the same transformations as in Refs. [51; 40], which are
\[s_{DK(D\bar{K})}=m^{2}_{K}+m^{2}_{D}+\frac{1}{2M^{2}_{D^{*}_{s0} }}(s-m^{2}_{K}-M^{2}_{D^{*}_{s0}})\,(M^{2}_{D^{*}_{s0}}+m^{2}_{D}-m^{2}_{K}),\] (19)
where the subscript \(DK(D\bar{K})\) stands for the two-body channels associated with the energy in the center-of-mass of \(DK(D\bar{K})\). Analogously, for the energy in the \(KK(K\bar{K}\)) center-of-mass, we have
\[s_{KK(K\bar{K})}=2\,m^{2}_{K}+\frac{1}{2M^{2}_{D^{*}_{s0}}}(s-m^ {2}_{K}-M^{2}_{D^{*}_{s0}})\,(M^{2}_{D^{*}_{s0}}+m^{2}_{K}-m^{2}_{D})\,.\] (20)
In this work, we call this set of transformations Prescription I. In order to estimate the uncertainties in our calculations, we will also use another set of transformations, which we call Prescription II, given by
\[s_{DK(D\bar{K})}=\Big{(}\frac{\sqrt{s}}{M_{D^{*}_{s0}}+m_{K}} \Big{)}^{2}\,\Big{(}m_{K}+\frac{m_{D}\,M_{D^{*}_{s0}}}{m_{D}+m_{K}}\Big{)}^{2} -{\bf P}^{2}_{2}\,,\] (21)
and
\[s_{KK(K\bar{K})}=\Big{(}\frac{\sqrt{s}}{M_{D^{*}_{s0}}+m_{K}} \Big{)}^{2}\,\Big{(}m_{K}+\frac{m_{K}\,M_{D^{*}_{s0}}}{m_{D}+m_{K}}\Big{)}^{2} -{\bf P}^{2}_{1}\,,\] (22)
where \({\bf P}_{1}\) and \({\bf P}_{2}\) stand for the momenta of the \(D\) and \(K\) mesons in the cluster, which we take equal and such that the kinetic energy in the \(DK\) cluster is of the order of the binding energy, hence \({\bf P}^{2}_{1}={\bf P}^{2}_{2}=2\tilde{\mu}B_{D^{*}_{s0}}=2\tilde{\mu}\,(m_{D}+m_{K}-M_{D^{*}_{s0}})\), with \(\tilde{\mu}\) the reduced mass of the \(DK\) system. This prescription is based on the one discussed in Refs. [51; 44], which shares the binding energy among the three particles proportionally to their respective masses.
## III Results
In all our calculations we use \(m_{K}=495\) MeV, \(m_{D}=1865\) MeV, \(m_{D_{s0}^{*}(2317)}=2317\) MeV, \(m_{\pi}=138\) MeV, \(m_{\eta}=548\) MeV and \(f_{\pi}=93\) MeV. In Fig. 2 we plot the energies in the center-of-mass of each of the two-body systems as a function of the energy of the center-of-mass of the three-body system, according to Eqs. (19), (20), (21) and (22). Both prescriptions map the energy range around 2812 MeV, the threshold of \(D_{s0}^{*}(2317)K\) (or \(D_{s0}^{*}(2317)\bar{K}\)), to an energy range around each of the thresholds of the two-body interactions, _i.e._ the \(KK\) (or \(K\bar{K}\)) pair interacts in an energy range around 990 MeV in its center-of-mass frame, which corresponds to \(2\,m_{K}\), while the \(DK\) (or \(D\bar{K}\)) pair interacts in an energy range around 2360 MeV, which corresponds to \(m_{K}+m_{D}\).
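As a side illustration (not part of the original paper), Prescriptions I and II can be evaluated directly from Eqs. (19)-(22) with the masses quoted above; the sketch below does so at the \(D_{s0}^{*}(2317)K\) threshold, and the printed numbers are purely illustrative.

```python
# Side illustration (not part of the original paper): evaluating Prescriptions I
# and II, Eqs. (19)-(20) and (21)-(22), with the masses quoted above (in MeV).
import numpy as np

mK, mD, MDs0 = 495.0, 1865.0, 2317.0

def prescription_I(s):
    sDK = mK**2 + mD**2 + (s - mK**2 - MDs0**2) * (MDs0**2 + mD**2 - mK**2) / (2 * MDs0**2)
    sKK = 2 * mK**2 + (s - mK**2 - MDs0**2) * (MDs0**2 + mK**2 - mD**2) / (2 * MDs0**2)
    return np.sqrt(sDK), np.sqrt(sKK)

def prescription_II(s):
    mu = mD * mK / (mD + mK)                  # reduced mass of DK
    P2 = 2.0 * mu * (mD + mK - MDs0)          # P1^2 = P2^2 = 2*mu*B
    scale = np.sqrt(s) / (MDs0 + mK)
    sDK = (scale * (mK + mD * MDs0 / (mD + mK)))**2 - P2
    sKK = (scale * (mK + mK * MDs0 / (mD + mK)))**2 - P2
    return np.sqrt(sDK), np.sqrt(sKK)

s = (MDs0 + mK)**2                            # the D*_{s0}(2317) K threshold, 2812 MeV
print("Prescription I :", prescription_I(s))
print("Prescription II:", prescription_II(s))
```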
The main uncertainty in our calculation is the difference between these two ways of mapping the total energy into the center-of-mass of each two-body system. This feature was also found in other works using FCA, for instance in Ref. [51].
<figure><img src="content_image/1705.09257/x2.png"><figcaption>Figure 2: Energy distribution in the center-of-mass of each two-body system asa function of the total energy of the three-body system, using prescriptions Iand II. Here s1=sDK(D¯K) and s2=sKK(K¯K). The lower curves are for KK or K¯Kand the upper curves are for DK or D¯K.</figcaption></figure>
### The \(DK\bar{K}\) system
In Fig. 3a we show the result for the total Faddeev amplitude squared from Eq. (12) using Prescription I. We can see a strong peak around 2833 MeV, which could be interpreted as a \(D[f_{0}(980)/a_{0}(980)]\) bound state, since it is below the \(D[f_{0}(980)/a_{0}(980)]\) threshold of \(2855\) MeV. On the other hand, using Prescription II we observe a peak around 2858 MeV, as can be seen in Fig. 3b, which could now be interpreted as a \(D[f_{0}(980)/a_{0}(980)]\) resonance since it is above its threshold.
<figure><img src="content_image/1705.09257/x3.png"><figcaption>Figure 3: Results for the total DK¯K amplitude squared from Eq. (12): (a) Prescription I; (b) Prescription II.</figcaption></figure>
In order to investigate whether this strong peak in the \(DK\bar{K}\) system comes mostly from \(K\bar{K}\) merging into the \(a_{0}(980)\) or the \(f_{0}(980)\), we have separated the \(K\bar{K}\) amplitudes that enter the Faddeev equations in the isospin basis and selected only one contribution at a time. In Fig. 4 we show the results when the \(I=0\) component of \(K\bar{K}\) is removed, so that there is no \(f_{0}(980)\) contribution. In this figure we can clearly see the shape of the \(a_{0}(980)\) in the three-body amplitude, which peaks around 2842 MeV in Prescription I (and 2886 MeV in Prescription II); according to Fig. 2, this corresponds to 990 MeV in the \(K\bar{K}\) center-of-mass frame, exactly where the \(a_{0}(980)\) peak appears in the \(I=1\) \(K\bar{K}\) two-body amplitude. Notice that when the \(I=0\) isospin component is removed from the \(K\bar{K}\) amplitude, the strength of the peaks in \(|T_{DK\bar{K}}|^{2}\) decreases by more than two orders of magnitude in both prescriptions, showing that the \(f_{0}(980)\) is indeed the most important contribution coming from \(K\bar{K}\). It is interesting to recall that the same conclusion was obtained in [33], where no apparent signal for \(Da_{0}(980)\) was found. Furthermore, the small cusps seen in both prescriptions at 2812 MeV in Fig. 4 correspond to the \(D^{*}_{s0}(2317)\bar{K}\) threshold. In Table 1 we compile the results of both prescriptions.
<figure><img src="content_image/1705.09257/x5.png"><figcaption>Figure 4: Results for the DK¯K amplitude squared after removing the f0(980)contribution, using prescriptions I and II.</figcaption></figure>
The results for the \(DK\bar{K}\) system point to the formation of a three-body state, the \(D[f_{0}(980)/a_{0}(980)]\), in which \(Df_{0}(980)\) is the strongest contribution in both prescriptions. Specifically, in Prescription I the \(Df_{0}(980)\) state would be bound by about \(20\) MeV, while in Prescription II it would correspond to a resonance. This latter result would be similar to the findings of Ref. [33], where a peak is seen at higher energy, forming a \(Df_{0}(980)\) resonant state at 2890 MeV.
As mentioned previously, the difference between the results of prescriptions I and II should be interpreted as the main uncertainty in our approach, but what emerges from both pictures is that a \(Df_{0}(980)\) state is formed, slightly bound or unbound.
 | Prescription I: √s (MeV) | Prescription I: |T|^2 | Prescription II: √s (MeV) | Prescription II: |T|^2
---|---|---|---|---
Total | 2833 | 6.8×10^6 | 2858 | 1.8×10^7
I=1 only | 2842 | 7.7×10^4 | 2886 | 7.8×10^4
Table 1: Comparison between the position and intensity of the peaks found in the DK¯K amplitude.
We would like to note that the theoretical uncertainty of the present method is of the order of \(25\) MeV. To put this number in a proper context we can recall that the uncertainty in the QCD sum rules method in Ref. [33] is far larger, with a mass given by \(m_{Df_{0}}=(2926\pm 237)\) MeV (the uncertainty for the mass in the Faddeev method of Ref. [33] is not given).
### The \(DKK\) system
In Fig. 5 we show the \(DKK\) total amplitude squared from Eq. (12) using prescriptions I and II. We can see that in both prescriptions the amplitude decreases around 2812 MeV, which corresponds to the \(D_{s0}^{*}(2317)K\) threshold, and both have a maximum below this threshold, while Prescription II also develops a broad structure above threshold. However, there is no clear peak that would indicate the formation of a bound state or a resonance.
<figure><img src="content_image/1705.09257/x6.png"><figcaption>Figure 5: Results for the total DKK amplitude squared using prescriptions Iand II.</figcaption></figure>
As a physical interpretation we could say that, even though the \(DK\) interaction is attractive and responsible for the strong binding that generates the \(D_{s0}^{*}(2317)\), the \(KK\) repulsion seems to be of the same magnitude and prevents the \(DKK\) system from forming a bound state.
One might be tempted to associate the peak below threshold with a physical state, but this is not the case. Indeed, one should note that the strength of \(|T_{DKK}|^{2}\) in Fig. 5 is about three orders of magnitude smaller than that of \(|T_{DK\bar{K}}|^{2}\) in Fig. 3b, which simply indicates that no special hadron structure has been formed in this case.
## IV Conclusions
In this work, we have used the FCA to the Faddeev equations in order to look for bound states or resonances generated by the \(DK\bar{K}\) and \(DKK\) three-body interactions. The \(DK\) cluster in its \(I=0\) component is the well-known \(D^{*}_{s0}(2317)\) bound state studied by means of the chiral unitary approach. From the \(DK\bar{K}\) interaction we found an \(I(J^{P})=1/2(0^{-})\) state with a mass of about \(2833-2858\) MeV, where the uncertainties were estimated taking into account two different prescriptions to obtain \(\sqrt{s_{DK}}\) and \(\sqrt{s_{KK}}\) from the total energy of the system \(\sqrt{s}\). Our findings corroborate those of Ref. [33], where the authors studied the \(DK\bar{K}\) interaction using two different nonperturbative tools, QCD sum rules and the Faddeev equations without the FCA. They found a state around \(2890\) MeV, which is above the \(Df_{0}(980)\) threshold. As we have pointed out before, this state could be seen in the \(\pi\,\pi\,D\) invariant mass distribution. Therefore, as in Ref. [33], we also suggest the search for such a state in future experiments. On the other hand, for the \(DKK\) system we found an enhancement effect, but with a very small strength compared to the \(DK\bar{K}\) system, which should not be associated with a physical bound state. In this case, the repulsion between the two kaons seems to be of the same magnitude as the attraction of the \(DK\) interaction, preventing the formation of a three-body molecular state.
## Acknowledgments
V. R. Debastiani wishes to acknowledge the support from the Programa Santiago Grisolia of Generalitat Valenciana (Exp. GRISOLIA/2015/005). J. M. Dias would like to thank the Brazilian funding agency FAPESP for the financial support. This work is also partly supported by the Spanish Ministerio de Economia y Competitividad and European FEDER funds under the contract numbers FIS2014-57026-REDT, FIS2014-51948-C2-1-P, and FIS2014-51948-C2-2-P, and by the Generalitat Valenciana in the program Prometeo II-2014/068.
## References
* [1] L. D. Faddeev, Sov. Phys. JETP 12, 1014 (1961) [Zh. Eksp. Teor. Fiz. 39, 1459 (1960)].
* [2] E. O. Alt, P. Grassberger and W. Sandhas, Nucl. Phys. B **2**, 167 (1967).
* [3] E. Epelbaum, Prog. Part. Nucl. Phys. **57**, 654 (2006), [nucl-th/0509032].
* [4] E. Hiyama, Y. Kino and M. Kamimura, Prog. Part. Nucl. Phys. **51**, 223 (2003).
* [5] A. Dote, T. Hyodo and W. Weise, Phys. Rev. C **79**, 014003 (2009), [arXiv:0806.4917 [nucl-th]].
* [6] Y. Kanada-En’yo and D. Jido, Phys. Rev. C **78**, 025212 (2008), [arXiv:0804.3124 [nucl-th]].
* [7] G. Toker, A. Gal and J. M. Eisenberg, Nucl. Phys. A **362**, 405 (1981).
* [8] M. Torres, R. H. Dalitz and A. Deloff, Phys. Lett. B **174**, 213 (1986).
* [9] S. S. Kamalov, E. Oset and A. Ramos, Nucl. Phys. A **690**, 494 (2001), [nucl-th/0010054].
* [10] A. Dote, T. Hyodo and W. Weise, Nucl. Phys. A **804**, 197 (2008), [arXiv:0802.0238 [nucl-th]].
* [11] N. V. Shevchenko, A. Gal and J. Mares, Phys. Rev. Lett. **98**, 082301 (2007), [nucl-th/0610022].
* [12] Y. Ikeda and T. Sato, Phys. Rev. C **76**, 035203 (2007), [arXiv:0704.1978 [nucl-th]].
* [13] M. Bayar, J. Yamagata-Sekihara and E. Oset, Phys. Rev. C **84**, 015209 (2011), [arXiv:1102.2854 [hep-ph]].
* [14] M. Bayar and E. Oset, Phys. Rev. C **88**, no. 4, 044003 (2013), [arXiv:1207.1661 [hep-ph]].
* [15] T. Uchino, T. Hyodo and M. Oka, Nucl. Phys. A **868-869**, 53 (2011), [arXiv:1106.0095 [nucl-th]].
* [16] P. Bicudo, Phys. Rev. D **76**, 031502 (2007) [hep-ph/0701008].
* [17] T. Sekihara, E. Oset and A. Ramos, PTEP **2016**, no. 12, 123D03 (2016), [arXiv:1607.02058 [hep-ph]].
* [18] Y. Sada _et al._ [J-PARC E15 Collaboration], PTEP **2016**, no. 5, 051D01 (2016), [arXiv:1601.06876 [nucl-ex]].
* [19] A. Martinez Torres, K. P. Khemchandani and E. Oset, Phys. Rev. C **77**, 042203 (2008), [arXiv:0706.2330 [nucl-th]].
* [20] K. P. Khemchandani, A. Martinez Torres and E. Oset, Eur. Phys. J. A **37**, 233 (2008), [arXiv:0804.4670 [nucl-th]].
* [21] A. Martinez Torres, K. P. Khemchandani and E. Oset, Phys. Rev. C **79**, 065207 (2009), [arXiv:0812.2235 [nucl-th]].
* [22] Y. Kanada-En’yo and D. Jido, Phys. Rev. C **78**, 025212 (2008), [arXiv:0804.3124 [nucl-th]].
* [23] A. Martinez Torres and D. Jido, Phys. Rev. C **82**, 038202 (2010), [arXiv:1008.0457 [nucl-th]].
* [24] A. Martinez Torres, K. P. Khemchandani, L. S. Geng, M. Napsuciale and E. Oset, Phys. Rev. D **78**, 074031 (2008), [arXiv:0801.3635 [nucl-th]].
* [25] A. Martinez Torres, D. Jido and Y. Kanada-En’yo, Phys. Rev. C **83**, 065205 (2011), [arXiv:1102.1505 [nucl-th]].
* [26] A. Martinez Torres, K. P. Khemchandani, D. Jido and A. Hosaka, Phys. Rev. D **84**, 074027 (2011), [arXiv:1106.6101 [nucl-th]].
* [27] W. Liang, C. W. Xiao and E. Oset, Phys. Rev. D **88**, no. 11, 114024 (2013), [arXiv:1309.7310 [hep-ph]].
* [28] M. Albaladejo, J. A. Oller and L. Roca, Phys. Rev. D **82**, 094019 (2010), [arXiv:1011.1434 [hep-ph]].
* [29] A. Martinez Torres and K. P. Khemchandani, Phys. Rev. D **94**, 076007 (2016), [arXiv:1607.02102 [hep-ph]].
* [30] M. Bayar, C. W. Xiao, T. Hyodo, A. Dote, M. Oka and E. Oset, Phys. Rev. C **86**, 044004 (2012), [arXiv:1205.2275 [hep-ph]].
* [31] C. W. Xiao, M. Bayar and E. Oset, Phys. Rev. D **84**, 034037 (2011), [arXiv:1106.0459 [hep-ph]].
* [32] A. Martinez Torres, K. P. Khemchandani, D. Gamermann and E. Oset, Phys. Rev. D **80**, 094012 (2009), [arXiv:0906.5333 [nucl-th]].
* [33] A. Martinez Torres, K. P. Khemchandani, M. Nielsen and F. S. Navarra, Phys. Rev. D **87**, no. 3, 034025 (2013), [arXiv:1209.5992 [hep-ph]].
* [34] E. E. Kolomeitsev and M. F. M. Lutz, Phys. Lett. B **582**, 39 (2004), [hep-ph/0307133].
* [35] J. Hofmann and M. F. M. Lutz, Nucl. Phys. A **733**, 142 (2004), [hep-ph/0308263].
* [36] F. K. Guo, P. N. Shen, H. C. Chiang, R. G. Ping and B. S. Zou, Phys. Lett. B **641**, 278 (2006), [hep-ph/0603072].
* [37] D. Gamermann, E. Oset, D. Strottman and M. J. Vicente Vacas, Phys. Rev. D **76**, 074016 (2007), [hep-ph/0612179].
* [38] F. K. Guo, C. Hanhart, S. Krewald and U. G. Meißner, Phys. Lett. B **666**, 251 (2008), [arXiv:0806.3374 [hep-ph]].
* [39] F. K. Guo, C. Hanhart and U. G. Meißner, Eur. Phys. J. A **40**, 171 (2009), [arXiv:0901.1597 [hep-ph]].
* [40] J. Yamagata-Sekihara, L. Roca and E. Oset, Phys. Rev. D **82**, 094017 (2010), Erratum: [Phys. Rev. D **85**, 119905 (2012)], [arXiv:1010.0525 [hep-ph]].
* [41] F. Aceti and E. Oset, Phys. Rev. D **86**, 014012 (2012), [arXiv:1202.4607 [hep-ph]].
* [42] A. Martinez Torres, E. Oset, S. Prelovsek and A. Ramos, JHEP **1505**, 153 (2015), [arXiv:1412.1706 [hep-lat]].
* [43] J. A. Oller and E. Oset, Nucl. Phys. A **620**, 438 (1997), Erratum: [Nucl. Phys. A **652**, 407 (1999)], [hep-ph/9702314].
* [44] M. Bayar, X. L. Ren and E. Oset, Eur. Phys. J. A **51**, no. 5, 61 (2015), [arXiv:1501.02962 [hep-ph]].
* [45] L. Roca and E. Oset, Phys. Rev. D **82**, 054013 (2010), [arXiv:1005.0283 [hep-ph]].
* [46] J. Yamagata-Sekihara, J. Nieves and E. Oset, Phys. Rev. D **83**, 014003 (2011), [arXiv:1007.3923 [hep-ph]].
* [47] J. A. Oller, E. Oset and A. Ramos, Prog. Part. Nucl. Phys. **45**, 157 (2000), [hep-ph/0002193].
* [48] L. Liu, K. Orginos, F. K. Guo, C. Hanhart and U. G. Meißner, Phys. Rev. D **87**, no. 1, 014508 (2013), [arXiv:1208.4535 [hep-lat]].
* [49] J. J. Xie, L. R. Dai and E. Oset, Phys. Lett. B **742**, 363 (2015), [arXiv:1409.0401 [hep-ph]].
* [50] J. M. Dias, F. S. Navarra, M. Nielsen and E. Oset, Phys. Rev. D **94**, no. 9, 096002 (2016), [arXiv:1601.04635 [hep-ph]].
* [51] M. Bayar, P. Fernandez-Soler, Z. F. Sun and E. Oset, Eur. Phys. J. A **52**, no. 4, 106 (2016), [arXiv:1510.06570 [hep-ph]].
|
0808.3089 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 17066,
"num_imgs": 0,
"llama3_tokens_count": 5588
} | [] | # Survey of Hopf Fibrations and Rotation Conventions in Mathematics and Physics
David W. Lyons
Mathematical Sciences
Lebanon Valley College
lyons@lvc.edu
###### Abstract
We present a unifying framework for understanding several different versions of the Hopf fibration, and use this framework to reconcile two methods of representing rotations of 3-space by unitary matrices—the mathematician’s convention based on quaternion algebra, and the physicist’s convention based on the Bloch sphere.
## 1 Introduction
Sophus Lie made a profound contribution to mathematics and physics in the late 19th century by developing a theory based on his observation that solutions to certain problems in mechanics must be invariant under rigid motions of space, and that the structure in symmetry groups can be exploited to solve differential equations [1]. Although Lie theory is a rare find in the undergraduate curriculum, one of its topics—the special orthogonal group \(SO(3)\) of rotations of space—is impossible to miss in courses such as linear algebra, differential equations and quantum mechanics.
In theory and in practical computations, mathematicians and physicists use \(2\times 2\) unitary matrices as a replacement for \(3\times 3\) real orthogonal matrices. How this is done, and more important, why this is natural, are main points of this article. The explanation rests on the _Hopf fibration_. Our secondary aim is to reconcile the differences between math and physics conventions in the use of unitary matrices to represent rotations. This is accomplished by comparing different versions of the Hopf fibration.
The exposition presented here requires no special background beyond university level vector calculus, linear algebra, and an introduction to group theory. Definitions for those few objects which may exceed this minimum background—projective space, higher dimensional spheres, commutative diagrams and quaternions—are given in the Appendix.
## 2 A Survey of Hopf Fibrations
Heinz Hopf defined a mapping in his 1931 paper [2] that we now call the Hopf fibration. It was a landmark discovery in the young subject of algebraic topology that has since been recognized in many guises in mathematics and physics with applications including magnetic monopoles, rigid body mechanics, and quantum information theory [6].
The heart of the Hopf map is the canonical projection
\[{\mathbb{C}}^{2}\setminus\{{\mathbf{0}}\}\stackrel{{\pi}}{{\to}}{ \mathbb{P}}^{1}\] (1)
that sends the complex vector \((z,w)\) to its equivalence class \([z,w]\) in projective space. We interpret this as a map \(S^{3}\to S^{2}\) by identifying \(S^{3}\) as the subset of norm 1 vectors in \({\mathbb{R}}^{4}={\mathbb{C}}^{2}\), and by identifying \({\mathbb{P}}^{1}\) with \(S^{2}\). The latter identification is a two-step procedure. First identify \({\mathbb{P}}^{1}\) with the extended complex plane \({\mathbb{C}}^{+}={\mathbb{C}}\cup\{\infty\}\). One way to do this is the map
\[\mbox{\rm chart}\colon{\mathbb{P}}^{1}\to{\mathbb{C}}^{+}\]
given by \([z_{0},z_{1}]\mapsto z_{0}/z_{1}\) (“chart” is for “coordinate chart”). Second, identify \({\mathbb{C}}^{+}\) with \(S^{2}\) using some version of stereographic projection. We will use two stereographic projections, \(\mbox{\rm stereo}_{j}\colon S^{2}\to{\mathbb{C}}^{+}\) for \(j=1,3\), given by
\[\mbox{\rm stereo}_{1}(x,y,z) = \frac{y}{1-x}+i\frac{z}{1-x}\]
\[\mbox{\rm stereo}_{3}(x,y,z) = \frac{x}{1-z}+i\frac{y}{1-z}\]
and \(\mbox{\rm stereo}_{1}(1,0,0)=\infty=\mbox{\rm stereo}_{3}(0,0,1)\). We put these maps together to form a template for the generic Hopf map. Here and in diagrams that follow, we highlight the core map (1) with a frame.
(2)
H. Hopf’s original map [2] arises from this template by choosing \(j=3\). One obtains variations by altering the identifications with \(S^{3}\) on the left and with \(S^{2}\) on the right, for example, by using alternative coordinate charts on \({\mathbb{P}}^{1}\) and by choosing different basepoints for stereographic projection. These variations are motivated by the desire to adapt coordinates to fit particular interpretations.
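The classic choice \(j=3\) can also be written down and checked numerically. The following Python sketch is not part of the original article; it composes the coordinate chart with the inverse of \(\mbox{\rm stereo}_{3}\) and verifies that the image lies on \(S^{2}\) and that a whole fibre \(\lambda(z_{0},z_{1})\), \(|\lambda|=1\), is sent to a single point.

```python
# A small numerical sketch (not part of the original article) of Hopf's original
# map, obtained from the template with j = 3:
# (z0, z1) on S^3  ->  chart: zeta = z0/z1  ->  stereo_3^{-1}(zeta) on S^2.
import numpy as np

def stereo3_inverse(zeta):
    u, v, r2 = zeta.real, zeta.imag, abs(zeta)**2
    return np.array([2*u, 2*v, r2 - 1]) / (r2 + 1)

def hopf_classic(z0, z1):
    # assumes z1 != 0; the class [1, 0] is sent to the north pole (0, 0, 1)
    return stereo3_inverse(z0 / z1)

rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
z /= np.linalg.norm(z)                    # a random point of S^3 inside C^2

point = hopf_classic(*z)
print(np.linalg.norm(point))              # 1.0: the image lies on S^2

lam = np.exp(1j * 0.7)                    # any unit complex number
print(np.allclose(point, hopf_classic(*(lam * z))))   # True: a fibre maps to one point
```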
The projection (1) comes to life when we view it in terms of group action. In general, when a group \(G\) acts on a set \(X\), we have a bijection¹
\[G/I_{x}\leftrightarrow{\cal O}_{x}\] (3)
given by \(gI_{x}\leftrightarrow gx\) for each \(x\in X\), where \(I_{x}=\{g\in G\colon gx=x\}\) is the isotropy subgroup for \(x\) and \({\cal O}_{x}=\{gx\colon g\in G\}\) is the orbit of \(x\). We now apply this fact twice, where the group is \(G=SU(2)\) and the actions arise from the natural action of \(G\) on \({\mathbb{C}}^{2}\). First, let \(X\) be the set of norm 1 vectors in \({\mathbb{C}}^{2}\). The action on \(X\) is transitive (\({\cal O}_{x}=X\) for all \(x\)) and the isotropy subgroup of every point is trivial, so we have the identification
\[SU(2)\widetilde{\longrightarrow}S^{3}\] (4)
given by \(g\leftrightarrow g(1,0)\). Second, let \(X={\mathbb{P}}^{1}\). The action of \(G\) on \(X\) is transitive, and the isotropy subgroup of the point \([1,0]\) is the torus
\[T=\left\{\left[\begin{array}[]{cc}e^{i\theta}&0\\ 0&e^{-i\theta}\end{array}\right]\colon\theta\in{\mathbb{R}}\right\},\]
so we have the identification
\[SU(2)/T\widetilde{\longrightarrow}{\mathbb{P}}^{1}\] (5)
given by \(gT\leftrightarrow g[1,0]\). Now we can rephrase the heart of the Hopf map (1) as the map
\[SU(2)\to{\mathbb{P}}^{1}\] (6)
given by \(g\mapsto g[1,0]\), where “rephrase” means the following diagram commutes.
Now we are ready to define and compare several versions of the Hopf fibration in terms of (1) and (6). We begin with a Hopf fibration expressed in the language of quaternion algebra.
We identify the quaternions \({\mathbb{H}}\) with \({\mathbb{R}}^{4}\) and \({\mathbb{C}}^{2}\) via
\[x_{0}+x_{1}{\mathbf{i}}+x_{2}{\mathbf{j}}+x_{3}{\mathbf{k}}\leftrightarrow(x_{ 0},x_{1},x_{2},x_{3})\leftrightarrow(x_{0}+ix_{1},x_{2}+ix_{3})\] (7)
and regard \({\mathbb{H}}\) as a real vector space with canonical basis \(\{1,{\mathbf{i}},{\mathbf{j}},{\mathbf{k}}\}\) and also as a complex vector space with canonical basis \(\{1,{\mathbf{j}}\}\). We identify \({\mathbb{R}}^{3}\) with the pure quaternions, that is, the subspace of \({\mathbb{R}}^{4}\) consisting of points with zero in the first coordinate. Under this identification, the name \(p\) for point \(p=(x,y,z)\) in \({\mathbb{R}}^{3}\) shall also denote the quaternion \(p=x{\mathbf{i}}+y{\mathbf{j}}+z{\mathbf{k}}\). We identify the unit length quaternions with \(S^{3}\subset{\mathbb{R}}^{4}\). The 2-sphere \(S^{2}\subset{\mathbb{R}}^{3}\) is identified with the “equator” of \(S^{3}\) which is the set of unit length pure quaternions.
The group \(SU(2)\) is isomorphic with the group of unit quaternions via the map
\[\left[\begin{array}[]{cc}z&w\\ -\overline{w}&\overline{z}\end{array}\right]\leftrightarrow z+w{\mathbf{j}}\] (8)
where \((z,w)\) is a unit length vector in \({\mathbb{C}}^{2}\) [3]. The group of unit quaternions is also naturally identified with \(S^{3}\) via (7). It is important to note that we now have two distinct identifications of \(SU(2)\) with \(S^{3}\subset{\mathbb{C}}^{2}\). The matrix \(\left[\begin{array}[]{cc}z&w\\ -\overline{w}&\overline{z}\end{array}\right]\) identifies with \((z,w)\) by (8) and identifies with \((z,-\overline{w})\) by (4). We will denote by \(T\) the map
\[T\colon S^{3}\stackrel{{(\ref{quatmatident})}}{{\to}}SU(2) \stackrel{{(\ref{su2tos3byaction})}}{{\to}}S^{3}\]
given by \((z,w)\mapsto(z,-\overline{w})\) that arises from combining these identifications. We call it \(T\) for “transpose” because this is the map you get when you interpret the quaternion as a matrix by (8), transpose it, then reinterpret as a point in \({\mathbb{C}}^{2}\) by (8). In real coordinates, transpose is given by \((a,b,c,d)^{T}=(a,b,-c,d)\).
The group of unit quaternions acts naturally on the subspace of pure quaternions (where we interpret the pure quaternions as \({\mathbb{R}}^{3}\), see [5] and [6] for details) via
\[S^{3}\times{\mathbb{R}}^{3}\to{\mathbb{R}}^{3}\] (9)
given by \((g,p)\mapsto gpg^{\ast}\), where \(p\) is a pure quaternion, \(g\) is a unit quaternion and \(g^{\ast}\) is the conjugate of \(g\) (the conjugate of \(x_{0}+x_{1}{\mathbf{i}}+x_{2}{\mathbf{j}}+x_{3}{\mathbf{k}}\) is \(x_{0}-x_{1}{\mathbf{i}}-x_{2}{\mathbf{j}}-x_{3}{\mathbf{k}}\) and is what you get if you take the hermitian (conjugate transpose) of \(g\) viewed as a matrix via (8)). This action preserves the Euclidean length of \(p\), and so restricts to an action on \(S^{2}\).
\[S^{3}\times S^{2}\to S^{2}\] (10)
We choose the basepoint \(p_{0}=(1,0,0)={\mathbf{i}}\) and define a map \(S^{3}\to S^{2}\) by
\[g\mapsto g{\mathbf{i}}g^{\ast}.\] (11)
The action (10) is transitive and the isotropy subgroup of \(p_{0}\) is \(\{e^{i\theta}\}\). As matrices, this isotropy subgroup is the same as the torus \(T\). Thus the map (11) identifies with the Hopf fibration (6) as shown in the following commutative diagram. The correspondence on the bottom row of the diagram is given by \(g[1,0]\leftrightarrow g{\mathbf{i}}g^{\ast}\).
Another Hopf map (although it is rarely if ever identified as such) arises from a coordinate system on \(S^{2}\) called the Bloch sphere². It is defined as follows: given \((a,b)\) in \({\mathbb{C}}^{2}\) with \(a\) real, the equations \(a=\cos\theta/2\) and \(b=e^{i\phi}\sin\theta/2\) determine spherical coordinates \((\theta,\phi)\) for the point
\[(\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta)\]
on \(S^{2}\). This is equivalent to the following.
\[\mbox{\rm Bloch}(a,b)=\mbox{\rm stereo}_{3}^{-1}(\overline{a/b})\] (12)
We will take the map "Bloch" to be given by (12) whether or not \(a\) is real. Here is a comparison diagram that shows how the quaternion action and the Bloch coordinate projection fit into the generic scheme (2). From now on, we will use the labels "HopfClassic", "QuatHopf", and "Bloch" to refer to Hopf's original map, the map (11), and the map (12), respectively.
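A quick numerical check (again not part of the original article) confirms that Eq. (12) reproduces the spherical-coordinate description of the Bloch sphere given above:

```python
# Quick numerical check (not from the original article) that Eq. (12),
# Bloch(a, b) = stereo_3^{-1}(conjugate(a/b)), reproduces the spherical-coordinate
# description given above for (a, b) = (cos(theta/2), e^{i phi} sin(theta/2)).
import numpy as np

def stereo3_inverse(zeta):
    u, v, r2 = zeta.real, zeta.imag, abs(zeta)**2
    return np.array([2*u, 2*v, r2 - 1]) / (r2 + 1)

def bloch(a, b):
    return stereo3_inverse(np.conj(a / b))   # assumes b != 0

theta, phi = 1.1, 2.3                         # arbitrary test angles
a, b = np.cos(theta/2), np.exp(1j*phi) * np.sin(theta/2)
expected = np.array([np.cos(phi)*np.sin(theta),
                     np.sin(phi)*np.sin(theta),
                     np.cos(theta)])
print(np.allclose(bloch(a, b), expected))     # True
```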
The following commutative diagram demonstrates identifications among Hopf fibrations appearing vertically in dotted line frames. Hopf’s original map is the second column from the left.
We conclude with one more comparison (by commutative diagram) of Bloch and QuatHopf. The label “reverse” denotes the reflection of \({\mathbb{R}}^{3}\) that sends \((x,y,z)\) to \((z,y,x)\).
(13)
## 3 Rotations by Hopf Actions
In the action (9) of the unit quaternions on \({\mathbb{R}}^{3}\), the quaternion \(g=a+b{\mathbf{i}}+c{\mathbf{j}}+d{\mathbf{k}}\) acts as a rotation by \(\theta\) radians about the axis specified by the unit length vector \(\hat{n}=(n_{1},n_{2},n_{3})\), where \(\theta\) and \(\hat{n}\) are given by the following equations [5, 6].
\[a = \cos\theta/2\]
\[(b,c,d) = \sin\theta/2\;\hat{n}\]
Given a real number \(\theta\) and a point \(\hat{n}\) on \(S^{2}\), let
\[g_{Q}=g_{Q}(\theta,\hat{n})=\cos\theta/2+\sin\theta/2(n_{1}{\mathbf{i}}+n_{2}{ \mathbf{j}}+n_{3}{\mathbf{k}}).\]
We view \(g_{Q}\) both as a quaternion and as the matrix
\[g_{Q}=\left[\begin{array}[]{cc}\cos\theta/2+in_{1}\sin\theta/2&\sin\theta/2(n_ {2}+in_{3})\\ \sin\theta/2(-n_{2}+in_{3})&\cos\theta/2-in_{1}\sin\theta/2\end{array}\right]\]
associated via (8). Let us denote by \(R(\theta,\hat{n},p)\) the image of \(p\) under the rotation by \(\theta\) radians about the axis specified by \(\hat{n}\). Then we have
\[g_{Q}pg_{Q}^{\ast}=R(\theta,\hat{n},p).\]
We can also write \(R(\theta,\hat{n},p)\) in terms of the Hopf fibration in the following way. Let \(h_{Q}\) be any preimage of \(p\) under QuatHopf. Then we have
\[\mbox{\rm QuatHopf}(g_{Q}h_{Q})=R(\theta,\hat{n},p).\]
Here is the one-line proof.
\[\mbox{\rm QuatHopf}(g_{Q}h_{Q})=(g_{Q}h_{Q})i(g_{Q}h_{Q})^{\ast}=g_{Q}(\mbox{ \rm QuatHopf}(h_{Q}))g_{Q}^{\ast}=g_{Q}pg_{Q}^{\ast}.\]
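The rotation formula can also be checked numerically. The sketch below (not from the original article) implements the Hamilton product, builds \(g_{Q}(\theta,\hat{n})\), and verifies that \(g_{Q}pg_{Q}^{\ast}\) agrees with the Rodrigues rotation formula, which is one standard way to write \(R(\theta,\hat{n},p)\).

```python
# Numerical cross-check (not from the original article): the quaternion action
# gQ p gQ* with gQ = cos(theta/2) + sin(theta/2)(n1 i + n2 j + n3 k) agrees with
# the Rodrigues rotation formula for a rotation by theta about the unit axis n.
import numpy as np

def qmul(p, q):                      # Hamilton product, components (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate_by_quaternion(theta, n, p):
    g = np.concatenate(([np.cos(theta/2)], np.sin(theta/2) * n))
    return qmul(qmul(g, np.concatenate(([0.0], p))), qconj(g))[1:]

def rodrigues(theta, n, p):
    return (p*np.cos(theta) + np.cross(n, p)*np.sin(theta)
            + n*np.dot(n, p)*(1 - np.cos(theta)))

theta = 0.8
n = np.array([1.0, 2.0, 2.0]) / 3.0          # unit axis
p = np.array([0.3, -0.5, 0.7])
print(np.allclose(rotate_by_quaternion(theta, n, p), rodrigues(theta, n, p)))  # True
```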
There is a corresponding expression in terms of Bloch [4]. Given a real number \(\theta\) and a point \(\hat{n}\) on \(S^{2}\), let
\[g_{B}=g_{B}(\theta,\hat{n})=\left[\begin{array}[]{cc}\cos\theta/2-in_{3}\sin \theta/2&\sin\theta/2(-n_{2}-in_{1})\\ \sin\theta/2(n_{2}-in_{1})&\cos\theta/2+in_{3}\sin\theta/2\end{array}\right].\]
Let \(h_{B}\) be any preimage of \(p\) under Bloch. Then we have
\[\mbox{\rm Bloch}(g_{B}h_{B})=R(\theta,\hat{n},p).\]
The purpose of the remainder of this section is to explain the equality
\[\mbox{\rm QuatHopf}(g_{Q}h_{Q})=\mbox{\rm Bloch}(g_{B}h_{B}).\] (14)
First observe that the multiplications \(g_{Q}h_{Q}\) and \(g_{B}h_{B}\) are _different_ operations. The binary operation in the expression \(g_{Q}h_{Q}\) is quaternion multiplication or matrix multiplication, depending on whether you view \(g_{Q},h_{Q}\) as quaternions or matrices. The binary operation in \(g_{B}h_{B}\) is the multiplication of the \(2\times 2\) matrix \(g_{B}\) by the \(2\times 1\) vector \(h_{B}\). To keep track of this distinction, we will write \(g_{B}\odot h_{B}\) to denote the latter operation. Having pointed out the difference, we now relate the two operations. Let \(\tilde{h_{B}}\) denote the quaternion associated to \(h_{B}\) by (7), that is, if \(h_{B}=(z,w)\), then \(\tilde{h_{B}}=z+w{\mathbf{j}}\). Then we have
\[g_{B}\odot h_{B}=\tilde{h_{B}}g_{B}^{T}\] (15)
where the operation on the right-hand side is quaternion multiplication and we view \(g_{B}\) as a quaternion by (8), or the operation is matrix multiplication where we view \(\tilde{h_{B}}\) as a \(2\times 2\) matrix by (8).
Now we can derive (14). We have
\[\mbox{\rm Bloch}(g_{B}\odot h_{B}) = \mbox{\rm Bloch}(\tilde{h_{B}}g_{B}^{T})\] (16)
\[= \mbox{\rm reverse}(\mbox{\rm QuatHopf}(g_{B}\tilde{h_{B}}^{T}))\] (17)
\[= g_{Q}pg_{Q}^{\ast}.\] (18)
The first equality (16) is by (15). The second equality (17) is by (13). Here is a geometric explanation for the final equality (18). Interpret \(h_{B}^{T}\) as a QuatHopf lift of \((z,y,x)\) (by virtue of (13)) and interpret \(g_{B}\) as a ("quat") rotation by \(-\theta\) around \((n_{3},n_{2},n_{1})\), so \(\mbox{\rm QuatHopf}(g_{B}\tilde{h_{B}}^{T})\) calculates \(R(-\theta,\mbox{\rm reverse}(\hat{n}),\mbox{\rm reverse}(p))\). So reversing this result is the same as rotating \(p\) by \(g_{Q}\). Thus we have completed our goal of reconciling Bloch sphere rotation conventions with the standard quaternion approach. We conclude with a commutative diagram that expresses (14).
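As a sanity check (not part of the original article), Eq. (14) can be verified numerically: the quaternion side and the Bloch side are computed independently below for a generic rotation and a generic point \(p\), using one arbitrary preimage on each side.

```python
# Numerical check (not from the original article) of Eq. (14):
# QuatHopf(gQ * hQ) = Bloch(gB . hB), with hQ and hB arbitrary preimages of the
# same point p under QuatHopf and Bloch, respectively.
import numpy as np

def qmul(p, q):                                   # Hamilton product, (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def quat_hopf(g):                                 # g -> g i g*, Eq. (11)
    qi = np.array([0.0, 1.0, 0.0, 0.0])
    return qmul(qmul(g, qi), qconj(g))[1:]

def bloch(a, b):                                  # Eq. (12), assumes b != 0
    zeta = np.conj(a / b)
    u, v, r2 = zeta.real, zeta.imag, abs(zeta)**2
    return np.array([2*u, 2*v, r2 - 1]) / (r2 + 1)

theta = 0.9                                       # rotation angle
n = np.array([2.0, -1.0, 2.0]) / 3.0              # unit rotation axis
p = np.array([0.6, 0.48, 0.64])                   # unit vector to be rotated
c, s = np.cos(theta/2), np.sin(theta/2)

# quaternion side: gQ from Section 3, hQ = a unit quaternion rotating i onto p
gQ = np.array([c, s*n[0], s*n[1], s*n[2]])
hQ = np.array([1.0, 0.0, 0.0, 0.0]) - qmul(np.concatenate(([0.0], p)),
                                           np.array([0.0, 1.0, 0.0, 0.0]))
hQ /= np.linalg.norm(hQ)
lhs = quat_hopf(qmul(gQ, hQ))

# Bloch side: gB as given above, hB built from the spherical angles of p
gB = np.array([[c - 1j*s*n[2],      s*(-n[1] - 1j*n[0])],
               [s*(n[1] - 1j*n[0]), c + 1j*s*n[2]     ]])
th, ph = np.arccos(p[2]), np.arctan2(p[1], p[0])
hB = np.array([np.cos(th/2), np.exp(1j*ph)*np.sin(th/2)])
rhs = bloch(*(gB @ hB))

print(np.allclose(lhs, rhs))                      # True: both sides give R(theta, n, p)
```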
## 4 Appendix
The set \({\mathbb{P}}^{1}\), called _1-dimensional complex projective space_, is the set of equivalence classes in \({\mathbb{C}}^{2}\setminus\{{\bf 0}\}\), where \({\bf 0}\) denotes the zero vector \({\bf 0}=(0,0)\), with respect to the equivalence relation \(\sim\) defined by \((z,w)\sim(z^{\prime},w^{\prime})\) if and only if \((z,w)=\lambda(z^{\prime},w^{\prime})\) for some nonzero complex scalar \(\lambda\).
The set \(S^{n}\), called the \(n\)_-dimensional sphere_, or simply the \(n\)_-sphere_, is the set of points \((x_{0},x_{1},\ldots,x_{n})\) in \({\mathbb{R}}^{n+1}\) that satisfy
\[x_{0}^{2}+x_{1}^{2}+\cdots+x_{n}^{2}=1.\]
To say that a diagram of sets and functions _commutes_ means that if there are two different function compositions that start at set \(A\) and end at set \(B\), then those compositions must be equal as functions. For example, to say the following diagram commutes means that \(r\mathbin{\circ}t=b\mathbin{\circ}\ell\).
The _quaternions_ are the set \({\mathbb{R}}^{4}\) endowed with a noncommutative multiplication operation, given below. The standard basis vectors
\[(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,1)\]
are denoted \(1,{\mathbf{i}},{\mathbf{j}},{\mathbf{k}}\), respectively, so that the vector \((x_{0},x_{1},x_{2},x_{3})\) in \({\mathbb{R}}^{4}\) is written \(x_{0}+x_{1}{\mathbf{i}}+x_{2}{\mathbf{j}}+x_{3}{\mathbf{k}}\) as a quaternion. The multiplication is determined by the relations
\[{\mathbf{i}}^{2}={\mathbf{j}}^{2}={\mathbf{k}}^{2}=-1\]
\[{\mathbf{i}}{\mathbf{j}}={\mathbf{k}}\hskip 18.0675pt{\mathbf{j}}{\mathbf{k}}= {\mathbf{i}}\hskip 18.0675pt{\mathbf{k}}{\mathbf{i}}={\mathbf{j}}\]
\[{\mathbf{j}}{\mathbf{i}}=-{\mathbf{k}}\hskip 18.0675pt{\mathbf{k}}{\mathbf{j}} =-{\mathbf{i}}\hskip 18.0675pt{\mathbf{i}}{\mathbf{k}}=-{\mathbf{j}}\]
and extending linearly.
## References
* [1] M. Ackerman et al., editors. _Lie Groups: History, Frontiers and Applications, Vol. 1._ Math Sci Press, Brookline, MA, 1975.
* [2] H. Hopf. Über die Abbildungen der dreidimensionalen Sphäre auf die Kugelfläche. _Math. Ann._ 104:637–665, 1931.
* [3] Theodor Bröcker and Tammo tom Dieck. Representations of Compact Lie Groups. Springer-Verlag, 1985.
* [4] Michael A. Nielsen and Isaac L. Chuang. _Quantum Computation and Quantum Information_. Cambridge University Press, 2000.
* [5] Michael Henle. _Modern Geometries, 2nd edition_. Prentice Hall, 2001.
* [6] David W. Lyons. An Elementary Introduction to the Hopf Fibration. _Mathematics Magazine_ 76(2):87–98, 2003.
|
1705.04075 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 48916,
"num_imgs": 8,
"llama3_tokens_count": 13002
} | [
"content_image/1705.04075/x1.png",
"content_image/1705.04075/x3.png",
"content_image/1705.04075/x6.png",
"content_image/1705.04075/x15.png",
"content_image/1705.04075/x23.png",
"content_image/1705.04075/x26.png",
"content_image/1705.04075/x27.png",
"content_image/1705.04075/x36.png"
] | # Oxygen - dislocation interaction in zirconium from first principles
Nermine Chaari¹
David Rodney
Emmanuel Clouet
emmanuel.clouet@cea.fr
DEN-Service de Recherches de Métallurgie Physique, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France
Institut Lumière Matière, CNRS-Université Claude Bernard Lyon 1, F-69622 Villeurbanne, France
###### Abstract
Plasticity in zirconium alloys is mainly controlled by the interaction of \(1/3\,\hkl<1-210>\) screw dislocations with oxygen atoms in interstitial octahedral sites of the hexagonal close-packed lattice. This process is studied here using _ab initio_ calculations based on the density functional theory. The atomic simulations show that a strong repulsion exists only when the O atoms lie in the dislocation core and belong to the prismatic dislocation habit plane. This is a consequence of the destruction of the octahedral sites by the stacking fault arising from the dislocation dissociation. Because of the repulsion, the dislocation partially cross-slips to an adjacent prismatic plane, in agreement with experiments where the lattice friction on screw dislocations in Zr-O alloys has been attributed to the presence of jogs on the dislocations due to local cross-slip.
keywords: Dislocation, Plasticity, Zirconium, Oxygen, Density Functional Theory, Hardening †
## 1 Introduction
Zirconium is a hexagonal close-packed (hcp) transition metal, which deforms mainly by glide of \(\hkl<a>=1/3\,\hkl<1-210>\) dislocations in (10-10) prismatic planes Rapperport (1959); Rapperport and Hartley (1960); Caillard and Martin (2003). All experimental studies Tyson (1967); Das Gupta and Arunachalam (1968); Levine (1968); Mills and Craig (1968); Baldwin and Reedhill (1968); Soo and Higgins (1968); Akhtar and Teghtsoonian (1971); Sastry et al. (1971); Akhtar (12, 13); Caillard and Martin (2003) point to a low-temperature behavior controlled by lattice friction acting on screw dislocations. This lattice friction is extrinsic in nature and arises due to interactions with interstitial solute atoms like oxygen, as indicated by the rapid increase of the critical resolved shear stress for prismatic slip in single crystals when the oxygen content increases Mills and Craig (1968); Baldwin and Reedhill (1968); Soo and Higgins (1968); Akhtar and Teghtsoonian (1971). Moreover, microscopic observations of deformed Zr alloys Bailey (1962); Akhtar and Teghtsoonian (1971); Ferrer et al. (2002) show long rectilinear screw dislocations, whose mobility is therefore limited compared to other orientations.
By way of contrast, in pure zirconium, dislocations of all characters have similar mobilities Clouet et al. (2015) and the plastic behavior becomes almost athermal like in fcc metals for very low O contents (\(\lesssim 100\) ppm in weight) Mills and Craig (1968); Soo and Higgins (1968). The easy prismatic glide of screw dislocations in pure Zr is consistent with _ab initio_ calculations, which show that screw dislocations have a ground state dissociated in a prismatic plane Ferrer et al. (2002); Domain and Legris (2000); Clouet (2012); Clouet et al. (2015), with a small energy barrier opposing prismatic glide and a corresponding Peierls stress below 21 MPa Clouet (2012). Screw dislocations in Zr can adopt other configurations with a core partly or totally spread in a first-order pyramidal plane Chaari et al. (19, 20); Clouet et al. (2015). These configurations however are of higher energy and contribute only to secondary slip, which is activated above room temperature.
Oxygen is therefore added to zirconium alloys for its strengthening effect at an average content between 600 and 1200 wppm Zr (0.3-0.7 at.%) and with a maximum accepted content of 2000 wppm (1.2 at.%) Lemaignan (2012). The same hardening of prismatic slip by O solutes has been reported in titanium Churchman (1954); Tyson (1967); Akhtar and Teghtsoonian (1975); Conrad (1981); Naka et al. (1988); Biget and Saada (1989); Caillard and Martin (2003). However, unlike in Zr, an intrinsic lattice friction exists even in pure Ti, because in this metal, the screw dislocations glide by a locking-unlocking mechanism due to the fact that their ground state is dissociated in a first-order pyramidal plane Clouet et al. (2015).
The origin of hardening caused by an O addition in Zr and Ti is not clearly understood. A control of dislocation glide through the interaction of dislocations with interstitial solute atoms, as described by the Fleischer model Fleischer (1962), has been proposed Levine (1968); Tanaka and Conrad (1972); Akhtar and Teghtsoonian (1975), but it is not clear why such an interaction would mainly affect the screw dislocations. Moreover, this interaction must involve dislocation cores Tiwari et al. (1972); Naka et al. (1988), since no elastic interaction exists in an hcp crystal between an \(\hkl<a>\) screw dislocation and an interstitial solute atom in an octahedral interstitial site Tyson (1967). Other authors attributed the lattice friction to the presence of jogs on the screw dislocations, with an increase of the jog density with the O content Bailey (1962); Mills and Craig (1968); Soo and Higgins (1968); Akhtar (12). _Ab initio_ calculations in Ti Yu et al. (2015) have evidenced a short-range repulsive interaction between screw dislocations and oxygen atoms, which can lead to solid-solution hardening and to the creation of jogs by local dislocation cross-slip.
The aim of the present article is to characterize the short-range interactions between \(\hkl<a>\) screw dislocations and O atoms in Zr in order to better understand the effect of O addition on the lattice friction in this metal. Since core effects are expected, we use an electronic-structure description of atomic interactions based on _ab initio_ calculations. We first analyse the interaction of O atoms with the prismatic and pyramidal stacking faults involved in the various potential dissociations of the screw dislocation. The interaction of an oxygen atom with the three different stable and metastable configurations of an \(\hkl<a>\) screw dislocation is then investigated. Consequences of the repulsive interaction evidenced by these calculations are discussed in the last section.
## 2 Methods
_Ab initio_ calculations are performed with the Pwscf code Giannozzi et al. (2009). We use the generalized gradient approximation with the exchange-correlation functional of Perdew, Burke and Ernzerhof Perdew et al. (1996). Zr and O atoms are modelled with an ultrasoft Vanderbilt pseudo-potential, with 4s and 4p semi-core electrons included in the valence states of Zr. Electronic wave functions are described with plane waves with an energy cutoff of 28 Ry. A regular grid is used for the integration in reciprocal space, with \(14\times 14\times 8\) k-points for the primitive hcp cell and an equivalent k-point density for larger supercells. The electronic density of states is broadened with the Methfessel-Paxton function, with a broadening of 0.3 eV. Atoms are relaxed until all atomic force components are below 10 meV/Å. The same set of _ab initio_ parameters has already been shown to predict Zr lattice parameters and elastic constants in good agreement with experimental data Clouet (2012) and to provide an accurate description of \(\hkl<a>\) screw dislocations in pure zirconium Clouet (2012); Chaari et al. (19); Clouet et al. (2015).
cell | Nsites | ΔE(O) (eV) | ΔE(T) (eV) | ΔE(BT) (eV) | ΔE(C) (eV)
---|---|---|---|---|---
3×3×2 hcp | 36 | 0. | 0.90 | 0.93 | 1.82
4×4×3 hcp | 96 | 0. | 0.87 | 0.91 | 1.90
5×5×4 hcp | 200 | 0. | 0.84 | 0.87 | 1.91
8×5×2 dislo | 320 | 0. | 0.90 | 0.95 | -
Table 1: Energy ΔE of an O atom in a perfect hcp lattice for different configurations: octahedral (O), taken as the reference, tetrahedral (T), basal tetrahedral (BT) and crowdion (C). Results are given for different supercells defined by their number Nsites of lattice sites. The supercell is based either on the hcp primitive cell or corresponds to the cell used for dislocation modeling (cf. §4.1).
We checked that the most stable position for O atoms in the hcp Zr lattice is the octahedral interstitial site. Other interstitial positions – tetrahedral, basal tetrahedral and crowdion sites – have a higher energy (Tab. 1). The energy difference between these various interstitial configurations varies only slightly with the size of the supercell, showing that there is no complex long-range interaction between O atoms and their periodic images.
## 3 Oxygen interaction with stacking faults
\(\hkl<a>\) screw dislocations in Zr can adopt several configurations with either a planar or a non-planar dissociation in either a prismatic or a pyramidal-I plane, or a combination of both Clouet et al. (2015). We therefore study the interaction of O atoms with the corresponding prismatic and pyramidal stacking faults Clouet (2012); Chaari et al. (19, 20), as a first step towards modeling the interaction of O atoms with screw dislocations.
### Simulation setup
The interaction energy of an O atom with a stacking fault is defined as
\[E^{\rm int}=E_{\textrm{SF-O}}-E_{\rm O}-E_{\rm SF}+E_{\rm bulk},\]
where \(E_{\textrm{SF-O}}\), \(E_{\rm O}\), \(E_{\rm SF}\), and \(E_{\rm bulk}\) are the energies respectively of the cell containing both the O atom and the stacking fault, the cell with only the O atom, the cell with only the stacking fault, and the perfect hcp bulk cell. The same tri-periodic simulation cell is used for all four calculations, with only a shift equal to the fault vector added to the periodicity vector perpendicular to the fault plane for \(E_{\rm SF-O}\) and \(E_{\rm SF}\)Clouet (2012). Only one fault and one O atom are introduced in the simulation cells. The O atom is placed in the already relaxed stacking fault and full atomic relaxations, without constraint, are then performed.
The lengths of the periodicity vectors are defined by three integers, \(n\), \(m\), and \(p\). For the prismatic stacking fault, these vectors are \(n/3\,\hkl[1-210]\), \(m\,\hkl[0001]\), and \(p\,\hkl[10-10]\), and for the pyramidal fault \(n/3\,\hkl[1-210]\), \(m/3\,\hkl[2-1-13]\), and \(p/3\,\hkl[-1-123]\). In both cases, the number of atoms in the simulation cell is \(4nmp\). The integers \(n\) and \(m\) fix the surface of the stacking fault, and thus the fraction of solute atom per fault area, whereas \(p\) controls the distance between the fault plane and its periodic images. We have checked the convergence of the calculations with respect to these cell dimensions in Tables 2 and 3.
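The cell geometry fixes both the fault area per solute and the atom count. The following sketch is not part of the article and uses typical experimental lattice parameters for Zr (a ≈ 3.23 Å, c ≈ 5.15 Å, which are assumptions, not values quoted in this section); it only illustrates that the prismatic-fault supercell built from the periodicity vectors above contains \(4nmp\) atoms.

```python
# Geometry bookkeeping for the prismatic-fault supercell of this section
# (a minimal sketch, not from the article).  The lattice parameters are typical
# experimental values for Zr and are assumptions, not values quoted here.
import numpy as np

a, c = 3.23, 5.15                     # angstrom (assumed)
n, m, p = 4, 4, 3                     # cell repetitions, as in Table 2

# Cartesian periodicity vectors for the prismatic setup (mutually orthogonal):
# n/3*[1-210] (length n*a), m*[0001] (length m*c), p*[10-10] (length p*sqrt(3)*a)
A1 = np.array([0.0, 0.0, n * a])
A2 = np.array([0.0, m * c, 0.0])
A3 = np.array([p * np.sqrt(3.0) * a, 0.0, 0.0])

volume = abs(np.dot(A1, np.cross(A2, A3)))
v_primitive = np.sqrt(3.0) / 2.0 * a**2 * c     # hcp primitive cell, 2 atoms
n_atoms = 2.0 * volume / v_primitive
print(int(round(n_atoms)), "atoms; expected 4*n*m*p =", 4 * n * m * p)
```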
### Prismatic stacking fault
<figure><img src="content_image/1705.04075/x1.png"><figcaption>Figure 1: (a) (10-10) prismatic stacking fault and (b) first-order pyramidal stacking fault, with the octahedral insertion sites considered for the O atom.</figcaption></figure>
site | 4×4×3 | 4×2×4 | 2×4×4 | 4×4×4 | 5×5×4
---|---|---|---|---|---
OP | 164 | 188 | 166 | 161 | -
O1 | 33 | - | - | 26 | 27
O2 | 44 | - | - | 40 | 40
Table 2: Interaction energies (meV) of an oxygen atom with a prismatic
stacking fault for different positions of the oxygen interstitial (see Fig.
1a). Calculations are performed for different n×m×p cell sizes corresponding
to the periodicity vectors n/3\hkl[1−210], m\hkl[0001], and p\hkl[10−10].
The octahedral sites located in the fault plane (sites \(O_{0}\) in Fig. 1a) are destroyed by the shear associated with the stacking fault. These sites become unstable positions for the O atom, which migrates during atomic relaxation to a nearby new octahedral site created by the fault. This site, labelled \(O_{\rm P}\) in Fig. 1a, has a geometry close to an octahedral site in the body centered cubic (bcc) structure Ghazisaeidi and Trinkle (2014). It is a stable position for the O atom, without any reconstruction of the fault, but the interaction energy is positive, and therefore repulsive (Tab. 2). This is in agreement with oxygen being an \(\alpha\)-stabilizer since O is then expected to have a higher energy in the bcc than in the hcp structure. A similar result has been reported in Ti Ghazisaeidi and Trinkle (2014); Yu et al. (2015); Kwaśniak et al. (2016).
A slightly repulsive energy is also obtained for the octahedral sites located in the immediate vicinity of the fault plane (sites \(O_{1}\) and \(O_{2}\) in Fig. 1a) (see Tab. 2). There is therefore no attractive site and oxygen atoms can only increase the prismatic stacking fault energy in Zr.
### First-order pyramidal stacking fault
site | 4×2×4 | 4×3×4 | 6×3×4 | 4×3×5
---|---|---|---|---
OP | 130 | 129 | 131 | 123
Oπ | 30 | 22 | 29 | -
O′1 | 75 | 72 | 77 | 34
O′2 | 0 | 0 | 0 | 10
O′3 | 0 | 0 | 0 | 0
Table 3: Interaction energies (meV) of an oxygen atom with a pyramidal
stacking fault for different positions of the oxygen interstitial (see Fig.
1b). Calculations are performed for different n×m×p cell sizes corresponding
to the periodicity vectors n/3\hkl[1−210], m/3\hkl[2−1−13], and
p/3\hkl[−1−123].
The pyramidal fault also destroys the octahedral sites located in the fault plane (sites \(O_{0}\) in Fig. 1b) and creates new bcc-like octahedral sites (sites \(O_{\rm P}\) in Fig. 1b) that are stable insertion sites but with a repulsive interaction energy (Tab. 3), although slightly less than with the prismatic fault (Tab. 2).
Since the pyramidal stacking fault corresponds to a two-layer twin in the pyramidal stacking Chaari et al. (19, 20), new octahedral sites, labelled \(O_{\pi}\) in Fig. 1b, are created and correspond to octahedral sites in the twinned hcp crystal. These are stable insertion sites for the O atom, although still with a slightly repulsive interaction energy (Tab. 3).
Finally, a repulsive interaction energy is also obtained for the octahedral interstitial sites in the close vicinity of the fault (sites labelled \(O^{\prime}_{1}\), \(O^{\prime}_{2}\), and \(O^{\prime}_{3}\) in Fig. 1b), but this interaction energy rapidly vanishes away from the fault (Tab. 3).
In conclusion, no attractive insertion site exists for O atoms inside or in the vicinity of neither a prismatic nor a pyramidal-I stacking fault in Zr.
## 4 Oxygen interaction with dislocations
### Simulation setup
Dislocations are modeled in simulation cells with full periodic boundary conditions, which requires to introduce a dipole of dislocations of opposite Burgers vectors Rodney et al. (2017). A quadrupolar periodic array of dislocations, described as an S arrangement in Ref. Clouet (2012) is used. The periodicity vectors of the simulation cell, before introduction of the dislocation dipole, are \(\vec{a}_{x}=n\,a[10\bar{1}0]\), \(\vec{a}_{y}=m\,c[0001]\) and \(\vec{a}_{z}=p\,\vec{b}=p/3\ a[1\bar{2}10]\), where \(n\), \(m\) and \(p\) are three integers. Most of the calculations were performed with \(n=5\), \(m=8\) and \(p=2\), corresponding to a simulation cell with 320 Zr atoms. A \(2\times 1\times 7\) k-point grid was used for this simulation cell.
We considered the three different configurations that a screw dislocation can adopt in Zr Clouet et al. (2015): the ground state dissociated in a prismatic plane (Figs. 2a and 3) and two metastable configurations with either a nonplanar core spread in a combination of prismatic and pyramidal planes Chaari et al. (19); Clouet et al. (2015) (Figs. 2b and 4) or a planar core spread in a pyramidal plane Clouet et al. (2015) (Fig. 5). In a \(5\times 8\times p\) simulation cell, the relative energies of the metastable configurations are \(\Delta E_{1}=1.7\) and \(\Delta E_{2}=12.1\) meV Å\({}^{-1}\), respectively.
One oxygen atom is introduced in the simulation cell in an interstitial site close to one of the dislocations of the dipole. Introducing two oxygen atoms in equivalent positions near both dislocations is also obviously possible but leads to slower convergence when the dislocation cores reconstruct during relaxation. Moreover, in section 4.4, we will check that introducing one or two solutes leads to similar energies. Also, most of the calculations were performed with a dislocation length \(l_{z}=2b\) with \(p=2\). There is therefore one O atom every other repeating distance along the dislocation line. This corresponds to a very high concentration of O atoms, but in section 4.4, we will also check that the O-dislocation interaction energy does not depend on the distance between O atoms along the dislocation line as long as \(p\geq 2\).
The interaction energy is defined as
\[E^{\rm int}=E_{\rm Dislo-O}-E_{\rm O}-E_{\rm Dislo}+E_{\rm bulk},\] (1)
where \(E_{\rm Dislo-O}\), \(E_{\rm O}\), \(E_{\rm Dislo}\), and \(E_{\rm bulk}\) are the energies of the same \(5\times 8\times 2\) cell containing respectively both the dislocation and the O atom, only the O atom, only the dislocation, and no defect nor solute. The reference energy for the dislocation, \(E_{\rm Dislo}\), is the energy of the initial dislocation configuration before introduction of the O atom. This definition of the interaction energy assumes that the elastic energy of the dislocation periodic array is the same with or without the O atom, which is a valid approximation when no core reconstruction is induced by the solute (_cf_. §4.4).
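Eq. (1) is simple bookkeeping over four total energies computed in the same supercell; the sketch below (not from the article) only shows the sign convention, with placeholder energies chosen so that the result has the magnitude of the repulsion discussed later in the text.

```python
# Bookkeeping of Eq. (1): the O-dislocation interaction energy from four total
# energies computed in the same supercell.  The energies below are placeholders
# (not values from the article), chosen so that the result is +0.12 eV, i.e. a
# repulsion of the magnitude discussed later for O in the prismatic core.
def interaction_energy(E_dislo_O, E_O, E_dislo, E_bulk):
    """Positive value = repulsive O-dislocation interaction."""
    return E_dislo_O - E_O - E_dislo + E_bulk

print(interaction_energy(-106.88, -105.00, -102.00, -100.00))   # about 0.12 eV
```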
<figure><img src="content_image/1705.04075/x3.png"><figcaption>Figure 2: Interaction energies of an O atom with the screw dislocation for the different insertion sites: (a) prismatic ground-state configuration; (b) configuration partly spread in a pyramidal plane and two adjacent prismatic planes.</figcaption></figure>
<figure><img src="content_image/1705.04075/x6.png"><figcaption>Figure 3: Core structure of a screw dislocation initially dissociated in aprismatic plane in presence of an O impurity. The initial configuration of thedislocation, before introduction of the O atom, is shown in (a) and (f) withthe various insertion sites for the O atom. The relaxed structures are shownin (b-e) for an O atom lying in an octahedral site destroyed by the prismaticspreading of the dislocation and in (g-i) for an O atom in an octahedral sitecreated by the prismatic fault. The position of the O atom is shown with agreen symbol (see Fig. 1 for a definition of the different symbols). Thecontour map shows the dislocation density according to the Nye tensor.</figcaption></figure>
The relaxed dislocation core structures are visualized using both differential displacements and dislocation density contour maps in Figs. 3, 4 and 5. In these projections in the plane perpendicular to the dislocation line, Zr atoms are sketched by empty and filled symbols depending on their (1-210) plane in the initial perfect crystal. Different symbols are used depending on the neighbourhood of the Zr atom. Differential displacement maps Vitek et al. (1970) show the strain created by the screw dislocation. In these maps, arrows between neighbouring [1-210] atomic columns have an amplitude proportional to the [1-210] component of the differential displacement between the two columns. Displacements smaller than \(0.1b\) are not shown for clarity. Contour maps of the dislocation density, deduced from the screw component along the [1-210] direction of the Nye tensor, are also shown. The Nye tensor is extracted from the relaxed atomic structures using the method developed by Hartley and Mishin Hartley and Mishin (2005). For post-processing, the neighbourhood of an atom is defined by a sphere of radius 1.15 times the lattice parameter. This neighbourhood is identified with one of the following perfect patterns: same neighbourhood as in a perfect hcp lattice (circles), in an unrelaxed prismatic stacking fault (squares), and in a perfect hcp lattice after a (-1011) or a (10-11) mirror symmetry to correspond with the two variants of the unrelaxed pyramidal stacking fault Chaari et al. (19, 20) (upward and downward triangles). The angle threshold on atomic bonds was set to 10\({}^{\circ}\).
### Dislocation prismatic configuration
<figure><img src="content_image/1705.04075/x15.png"><figcaption>Figure 4: Core structure of a screw dislocation initially partly spread in apyramidal plane and partly in two adjacent prismatic planes, in presence of anO impurity. The initial configuration of the dislocation, before introductionof the O atom, is shown in (a) and (d) with the various insertion sites forthe O atom. The relaxed structures are shown in (b-c) for octahedral sitesdestroyed by the prismatic spreading of the dislocation, in (e) for theoctahedral site created by the pyramidal fault and in (f-h) for octahedralsites created by the prismatic faults.</figcaption></figure>
When the dislocation is in its ground state, _i.e._ dissociated in a prismatic plane, a repulsive interaction with the O atom is obtained for all potential insertion sites of the solute (Fig. 2a). The interaction is only slightly repulsive when the O atom sits in an octahedral site away from the dislocation habit plane. On the other hand, when the octahedral site belongs to the prismatic plane of dissociation, the interaction increases up to 0.12 eV as the oxygen gets closer to the dislocation center. This repulsive interaction in the dislocation core is due to the prismatic stacking fault created by the dislocation dissociation, which destroys the octahedral sites of the hcp lattice, as discussed in section 3.2 and shown as green diamonds in Fig. 3a-e. During relaxation, the O atom remains immobile but as seen in Fig. 3c-e, the screw dislocation cross-slips to restore the initial octahedral site around the O atom. Such a cross-slip corresponds to a conservative motion of one half of the stacking fault perpendicular to itself, as already evidenced in pure Zr Chaari et al. (19). The relaxed dislocation is then partly spread in the initial prismatic plane and partly in a cross-slipped pyramidal plane.
The prismatic stacking fault also creates new octahedral insertion sites (sites labelled \(O_{\rm P}\) in Fig. 1a), which can also be found in the dislocation core (green squares in Fig. 3f-i). Upon relaxation, they all lead to almost identical configurations, with repulsive energies between 0.15 and 0.20 eV (Fig. 2a), close to the value obtained with a perfect stacking fault (Tab. 2). In all cases, the dislocation remains fully dissociated in its prismatic habit plane and glides during relaxation such that the O atom is positioned near the inner side of the closest Shockley partial. One can only see a slight variation of the dislocation prismatic spreading in Figs. 3g-i.
### Dislocation pyramidal configurations
We now examine the interaction of an O atom with the pyramidal configurations of the screw dislocation, starting with the configuration of lower energy, which exhibits a non-planar dissociation in a pyramidal plane and two adjacent prismatic planes Chaari et al. (19). When the O atom sits in an octahedral site outside the stacking faults, the interaction is small, between -0.02 and 0.05 eV (Fig. 2b). Contrary to the prismatic configuration, at least one attractive position exists, located right above the pyramidal stacking fault created by the dislocation dissociation. This slight attraction (\(-0.02\) eV) may result from the elastic interaction of the O atom with the two edge disconnections, which connect the pyramidal stacking fault to the two prismatic faults Chaari et al. (19).
When the O atom lies in the prismatic parts of the stacking fault ribbon, the same repulsive interaction is observed as with the prismatic configuration. For octahedral sites destroyed by the prismatic stacking fault, the O atom repels the dislocation, which cross-slips back in its prismatic ground state (Fig. 4b-c). For octahedral sites created by the prismatic stacking fault (Fig. 4f-h), no core reconstruction nor solute migration occurs and the interaction is repulsive (between 0.14 and 0.26 eV).
The octahedral site created by the pyramidal stacking fault (sites \(O_{\pi}\) in Fig. 1b) is also found in the dislocation core (upward triangle in Fig. 2b and Fig. 4d,e). This site, located exactly at the dislocation center, leads to an attractive interaction (\(-0.06\) eV), without modification of the dislocation configuration apart from a slight increase of the spreading (Fig. 4e). The interaction of the O atom in this \(O_{\pi}\) site with the perfect stacking fault was found slightly repulsive (Tab. 3). This example shows that although calculations with stacking faults are a reasonable surrogate to rationalize solute interaction with dislocation cores, they cannot fully substitute for a real atomistic modeling of the dislocation core.
<figure><img src="content_image/1705.04075/x23.png"><figcaption>Figure 5: Core structure of a screw dislocation initially dissociated in thepyramidal plane in presence of an O impurity. The initial configuration of thedislocation, before introduction of the O atom, is shown in (a) with thevarious insertion sites. The relaxed structures are shown in (b) for anoctahedral site destroyed by the pyramidal spreading of the dislocation and in(c) for an octahedral site created by the pyramidal fault.</figcaption></figure>
The same attractive configuration is also obtained when an O atom interacts with the metastable core of higher energy, which is planar and fully dissociated in a pyramidal plane (Fig. 5). We only considered insertion sites in the stacking fault ribbon arising from the pyramidal dissociation. The position corresponding to the dislocation center is an octahedral site destroyed by the pyramidal stacking fault (green diamond in Fig. 5a,b). When an O atom is inserted at this site, the dislocation relaxes to its metastable nonplanar configuration of lower energy and the O atom shuffles to the attractive position seen above (Fig. 5b). If inserted directly in this new octahedral site, the O atom does not move but the dislocation glides in its pyramidal habit plane to position its center at the O position (Fig. 5c) and the dislocation core again reconstructs to adopt the metastable nonplanar configuration of lower energy.
### Oxygen segregation
<figure><img src="content_image/1705.04075/x26.png"><figcaption>Figure 6: Interaction energy Eint of an oxygen atom with a screw dislocationwhen the oxygen is inserted at the center of the pyramidal configuration (siteOπ, Fig. 4e) as a function of the atomic fraction xO=b/lz of solute atoms onthe dislocation lines. Calculations were performed using either two soluteatoms, one on each dislocation of the dipole, or a single solute on one of thedislocations.</figcaption></figure>
In the previous subsection, we showed that when the dislocation is in its nonplanar metastable configuration, the insertion site \(O_{\pi}\) at its center (Fig. 4e) is attractive and can therefore lead to segregation. We now look at how this attractive interaction varies for various oxygen contents along the dislocation line, using \(5\times 8\times p\) simulation cells with different dislocation lengths \(l_{z}=p\,b\). As only one insertion site \(O_{\pi}\) exists per periodicity length \(b\), the atomic fraction of oxygen segregated on the line is \(x_{\rm O}=b/l_{z}=1/p\). Calculations were performed inserting either one solute atom on one of the dislocations of the dipole, or one solute on each dislocation of the dipole. For the atomic fraction \(x_{\rm O}=1/2\) (\(5\times 8\times 2\) supercell), using one or two O atoms leads only to a small difference (8 meV) in the interaction energy (Fig. 6). This shows that, in the absence of core reconstruction, the variation of the elastic interaction between the two dislocations has only a small impact on the value of the interaction energy between the O atom and the dislocation.
To obtain the energy associated with the segregation of oxygen on the dislocation, we need to consider that the O atom is coming from a dilute solid solution. The interaction energy (Eq. 1) is therefore calculated, for all O atomic fractions, by considering the same reference for the O solute energy, \(E_{\rm O}-E_{\rm bulk}\), obtained with the \(5\times 5\times 4\) hcp supercell where the O atom can be considered isolated (_cf_. §2).
As seen in Fig. 6, the interaction energy is rather constant for cell heights \(l_{z}\geq 2b\), _i.e._ for atomic fraction \(x_{\rm O}\leq 1/2\). This validates our initial choice of a supercell of height \(l_{z}=2b\). On the other hand, the interaction energy varies rapidly and becomes positive when the supercell height is \(l_{z}=b\). This situation, which corresponds to a full saturation of the dislocation line by O atoms, is therefore not possible thermodynamically and reflects a strong repulsion between O atoms in first neighbor positions. Assuming that the difference in energy between \(l_{z}=b\) and \(l_{z}=2b\) is due only to the repulsion between O atoms, we obtain a repulsion energy of \(98\) meV, which compares well with the nearest neighbor interaction energy, 120 meV, computed in a perfect \(5\times 5\times 4\) hcp supercell. This is consistent with the fact that the octahedral insertion sites are similar whether they are located at the dislocation center or in a perfect hcp crystal.
We see from Fig. 6 that the binding energy of the O atom to the screw dislocation does not exceed 63 meV. As a comparison, the activation energy for O diffusion in bulk Zr is 2.08 eV Bakker et al. (1990). The binding energy is therefore too low to lead to any relevant segregation at temperatures where O solutes are mobile. Considering a simple mean-field thermodynamic model Ventelon et al. (2015) with a constant segregation energy \(E^{\rm seg}=63\) meV, the atomic fraction of oxygen segregated in the dislocation core is given by \(x_{\rm O}^{\rm d}=x_{\rm O}^{0}\exp{\left(E^{\rm seg}/kT\right)}/\left[1+x_{ \rm O}^{0}\exp{\left(E^{\rm seg}/kT\right)}\right]\). For an oxygen nominal concentration \(x_{\rm O}^{0}=0.01\) typical of zirconium industrial alloys, the concentration of O in the dislocation core is only 0.09 at 300 K and 0.03 at 600 K. These values should be considered as upper approximations as the repulsive interaction energy between first nearest neighbour O atoms has been neglected in the segregation energy.
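For illustration, the mean-field isotherm quoted above can be evaluated directly. The short sketch below is our own (kB in eV/K, constant segregation energy of 63 meV, nominal fraction 0.01); it reproduces core occupancies of the order quoted in the text.

```python
# Numerical evaluation of the mean-field segregation isotherm
# x_d = x0 * exp(E_seg / kT) / (1 + x0 * exp(E_seg / kT)).
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def core_fraction(x0, e_seg, temperature):
    """Atomic fraction of solute segregated in the core (upper bound)."""
    w = x0 * math.exp(e_seg / (K_B * temperature))
    return w / (1.0 + w)

for T in (300.0, 600.0):
    print(f"T = {T:.0f} K : x_O(core) = {core_fraction(0.01, 0.063, T):.2f}")
```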
## 5 Oxygen hardening
<figure><img src="content_image/1705.04075/x27.png"><figcaption>Figure 7: Dislocation cross-slip induced by an oxygen atom. The oxygenposition is indicated by a green circle. The O-dislocation interaction energyEint, considering the dislocation prismatic configuration as a reference, isgiven below each configuration. For configurations c and d, because of thecore reconstruction, part of this interaction energy arises from a variationof the elastic interaction between the two dislocations composing the dipole.The drawing below illustrates the creation of the two jogs on the screwdislocation induced by the local cross-slip event.</figcaption></figure>
We have shown above that the interaction of oxygen atoms with a screw dislocation is mainly repulsive. When the O atom is located out of the dislocation glide plane, the repulsion is small (\(E^{\rm int}\lesssim 60\) meV) and can only lead to a marginal contribution to the hardening observed experimentally. On the other hand, when the O atom belongs to the dislocation glide plane, the repulsion is stronger and may induce a cross-slip of the dislocation, as seen in Fig. 3. Using the relaxed configurations obtained for the different positions of the O solute atom, we have reconstructed in Fig. 7 the path that a screw dislocation would follow upon crossing an O atom. The most striking result is that the dislocation bypasses the solute atom by double cross-slip through the pyramidal plane (Fig. 7d-e). The ease of cross-slip arises from the small energy barrier between the dislocation prismatic ground state and its nonplanar metastable state Chaari et al. (19, 20); Clouet et al. (2015). This solute bypass mechanism by dislocation cross-slip is consistent with the experimental observation that oxygen addition favors cross-slip and promotes non-prismatic slip Baldwin and Reedhill (1968).
Along the bypass mechanism, the interaction energy with the O atom does not exceed 80 meV, a value just slightly higher than when the O atom is not in the glide plane. However we should keep in mind that the double cross-slip seen here will be localized along the dislocation line and will therefore lead to the creation of a pair of jogs as illustrated in the lower part of Fig. 7. DFT calculations are too expensive computationally to include such jogs, but we know from crystallography that they will be of edge character and will therefore move conservatively only along the dislocation line and not in the gliding direction, thus producing a lattice friction. Such jogs can also interact with the surrounding solute atoms, thus leading to an elastic interaction between the jogged screw dislocation and the O atoms. We believe that the resistance due to these jogs on the glide of screw dislocations is at the origin of the extrinsic lattice friction measured experimentally in zirconium alloys containing oxygen. This scenario has also been proposed to explain the O hardening in Ti Yu et al. (2015). In the case of Zr alloys, it agrees with the analysis of the critical resolved shear stresses and activation volumes measured for various O contents Mills and Craig (1968); Akhtar (12); Soo and Higgins (1968); Ardell (1966); Moon et al. (2006); Morrow et al. (2013, 2016).
Both Mills and Craig Mills and Craig (1968) and Akhtar Akhtar (12) observed during tensile experiments on Zr single crystals an athermal regime between 600 and 800 K, followed by a thermal activation of the yield stress with an activation energy corresponding to self diffusion. Mills and Craig Mills and Craig (1968) applied the model of Hirsch and Warrington Hirsch and Warrington (1961) and attributed the athermal regime to the drag of sessile jogs on screw dislocations. They assumed that these jogs resulted from the dislocation interaction with O atoms, an assumption in agreement with the cross-slip interaction mechanism evidenced in the present _ab initio_ calculations. When the temperature becomes high enough to allow for vacancy diffusion, the motion of the jogged screw dislocation is controlled by the vacancy flux at the jogs and the yield stress becomes thermally activated. Soo and Higgins Soo and Higgins (1968) also conclude from their tensile experiments on single crystals that slip is controlled by the non-conservative motion of jogged screw dislocations above 300 K, and thus at a lower temperature than in the previous works. The height of the superjogs estimated from the experimental data is between 1 and 2 atomic distances, consistent with the double cross-slip between adjacent prismatic planes observed here. We note that Ardell Ardell (1966) and Mills et al. Moon et al. (2006); Morrow et al. (2013, 2016) also proposed that creep in zirconium alloys is controlled by the motion of jogged screw dislocations through vacancy diffusion Barrett and Nix (1965).
In agreement with these models, Bailey Bailey (1962) observed heavily jogged screw dislocations with post-mortem transmission electron microscopy (TEM) in strained commercial grade zirconium alloys (0.1 wt.% O and N) at room temperature. The “waviness” of the screw dislocations, a feature of their jog content, increases with the plastic deformation. The same TEM observations performed in a pure Zr alloy (0.03 wt.% O) showed a tangled dislocation microstructure, with no preferential elongation of the dislocations along their screw orientation. Dislocation dipoles resulting from the motion of jogged screw dislocations with a jog height larger than an atomic distance were also observed. Bailey proposed that the jogs created on the screw dislocations were mobile and could therefore condense to form superjogs on the screw dislocations, but that O atoms decrease their mobility, thus explaining why heavily jogged screw dislocations are observed in the less pure Zr alloy. Such superjogs were observed by Caillard et al. Caillard et al. (2015) in in situ TEM straining experiments. These authors did not support jog drag controlled by vacancy diffusion, but proposed instead an unzipping process corresponding to a conservative glide of the superjogs along the dislocations.
<figure><img src="content_image/1705.04075/x36.png"><figcaption>Figure 8: Possible shuffling of the O atom when interacting with a screwdislocation. The O atom migrates to an octahedral site created by a) theprismatic and b) the pyramidal spreading of the dislocation.</figcaption></figure>
In the above scenario, the interstitial atom remains immobile. However, as illustrated in Fig. 8a, upon first contact with the screw dislocation, the O atom could possibly jump to one of the new octahedral sites \(O_{\rm P}\) created by the prismatic stacking fault. The dislocation would then glide forward to reach the configuration shown in Figs. 3g-i and reproduced on the right of Fig. 8a. This configuration is however of higher energy than any of the cross-slipped configurations and shuffling should therefore operate only if the corresponding excess energy is lower than the energy necessary to create the jog pair by double cross-slip. We should note however that bypass by cross-slip is only possible for screw dislocations. Non-screw dislocations, which cannot change their glide plane, will therefore probably interact with the O atoms in their prismatic dissociation plane through such a shuffling mechanism, thus leading to a localized repulsive interaction. But this shuffling mechanism does not a priori discriminate between screw and non-screw dislocations and can therefore not fully account for the lattice friction induced by O atoms primarily on the screw dislocations.
A second possible shuffling mechanism is available after the first cross-slip event, when the dislocation has a non-planar core spread in a pyramidal and two adjacent prismatic planes (Fig. 7d and e). The O atom can then migrate to the octahedral site \(O_{\pi}\) created in the pyramidal stacking fault (Fig. 8b). The final state corresponds to the attractive configuration discussed in the previous section and has a lower energy than any other configurations. However, preliminary NEB calculations (atomic forces converged only to 0.4 eV Å\({}^{-1}\)) indicate that the activation energy necessary for the O migration is high (\(\sim\)1 eV), thus excluding also this second shuffling mechanism.
## 6 Conclusion
_Ab initio_ calculations show that a repulsive interaction exists between O atoms and \(\langle a\rangle\) screw dislocations when the O atoms are located in the dislocation habit plane. This repulsion results from the destruction of the octahedral interstitial sites by the stacking fault associated with the dislocation dissociation. This is true both for the dislocation ground state dissociated in the prismatic plane and for the metastable states partially or totally dissociated in the pyramidal planes, since both the prismatic and the pyramidal stacking faults destroy the O insertion sites. As a consequence of this repulsive interaction, the screw dislocation cross-slips to restore the octahedral insertion site. This cross-slip event induced by the oxygen atom will create two jogs on the screw dislocation, which probably explains the lattice friction acting against screw dislocation glide in zirconium alloys containing oxygen. Such a scenario is in agreement with the hardening and creep models proposed to rationalize tensile tests and creep experiments in Zr alloys, although the dynamics of the jogs along the dislocation line and whether they will annihilate or coalesce after unpinning from O atoms remains to be explored. To this end, the bypass mechanism must be simulated in three dimensions with the dislocation in presence of a representative distribution of O atoms. DFT calculations are obviously too expensive to perform such calculations, but the present work can serve in the future as input to develop approximate but reliable models of atomic interactions.
**Acknowledgements - This work was performed using HPC resources from GENCI-CINES and -TGCC (Grants 2015-096847). The authors also acknowledge PRACE for access to the Curie resources based in France at TGCC (project PlasTitZir). DR acknowledges support from LABEX iMUST (ANR-10-LABX-0064) of Université de Lyon (program “Investissements d’Avenir”, ANR-11-IDEX-0007).**
## References
* Rapperport (1959) E. J. Rapperport, Room temperature deformation processes in zirconium, Acta Metall. 7 (1959) 254–260.
* Rapperport and Hartley (1960) E. J. Rapperport, C. S. Hartley, Deformation modes of zirconium at 77\({}^{\circ}\), 575\({}^{\circ}\), and 1075\({}^{\circ}\)K, Trans. AIME 218 (1960) 869–876.
* Caillard and Martin (2003) D. Caillard, J. L. Martin, Thermally activated mechanisms in crystal plasticity, Pergamon, Amsterdam, 2003.
* Tyson (1967) W. R. Tyson, Strengthening of hcp Zr, Ti and Hf by interstitial solutes – a review, Can. Metall. Q. 6 (1967) 301–332.
* Das Gupta and Arunachalam (1968) P. Das Gupta, V. S. Arunachalam, Thermally activated deformation in dilute zirconium/oxygen alloys, J. Mater. Sci. 3 (1968) 271–281.
* Levine (1968) E. D. Levine, Prismatic slip in zirconium, Trans. Jap. Inst. Metals 9, suppl. (1968) 832–836.
* Mills and Craig (1968) D. Mills, G. B. Craig, The plastic deformation of zirconium-oxygen alloy single crystals in the range 77 to 950 K, Trans. AIME 242 (1968) 1881–1890.
* Baldwin and Reedhill (1968) D. H. Baldwin, R. E. Reedhill, Some effects of oxygen on tensile deformation of polycrystalline zirconium, Trans. AIME 242 (1968) 661.
* Soo and Higgins (1968) P. Soo, G. T. Higgins, The deformation of zirconium-oxygen single crystals, Acta Metall. 16 (1968) 177–186.
* Akhtar and Teghtsoonian (1971) A. Akhtar, A. Teghtsoonian, Plastic deformation of zirconium single crystals, Acta Metall. 19 (1971) 655–663.
* Sastry et al. (1971) D. H. Sastry, Y. V. R. K. Prasad, K. I. Vasu, An evaluation of rate-controlling obstacles for low-temperature deformation of zirconium, J. Mater. Sci. 6 (1971) 332–341.
* Akhtar (1975a) A. Akhtar, Prismatic slip in zirconium single crystals at elevated temperatures, Metall. Mater. Trans. A 6 (1975a) 1217–1222.
* Akhtar (1975b) A. Akhtar, Schmid’s law and prismatic slip of zirconium, Scripta Metall. 9 (1975b) 859–861.
* Bailey (1962) J. Bailey, Electron microscope studies of dislocations in deformed zirconium, J. Nucl. Mater. 7 (1962) 300–310.
* Ferrer et al. (2002) F. Ferrer, A. Barbu, T. Bretheau, J. Crépin, F. Willaime, D. Charquet, The effect of small concentrations of sulfur on the plasticity of zirconium alloys at intermediate temperatures, in: G. D. Moan, P. Rudling (Eds.), Zirconium in the nuclear industry: thirteenth international symposium, volume 1423 of _American Society for Testing and Materials Special Technical Publication_, American Society Testing and Materials, W Conshohocken, USA, 2002, pp. 863–885. doi:10.1520/STP11420S, 13th International Symposium on Zirconium in the Nuclear Industry, Annecy, France, June 10-14, 2001.
* Clouet et al. (2015) E. Clouet, D. Caillard, N. Chaari, F. Onimus, D. Rodney, Dislocation locking versus easy glide in titanium and zirconium, Nat. Mater. 14 (2015) 931–936.
* Domain and Legris (2000) C. Domain, A. Legris, Atomic scale simulation of the effect of hydrogen on dislocations in Zr, in: Mat. Res. Soc. Symp. Proc., volume 653, 2000, p. Z3.8. doi:10.1557/PROC-653-Z3.8.1.
* Clouet (2012) E. Clouet, Screw dislocation in zirconium: An ab initio study, Phys. Rev. B 86 (2012) 144104.
* Chaari et al. (2014a) N. Chaari, E. Clouet, D. Rodney, First-principles study of secondary slip in zirconium, Phys. Rev. Lett. 112 (2014a) 075504.
* Chaari et al. (2014b) N. Chaari, E. Clouet, D. Rodney, First order pyramidal slip of \(1/3\ \langle 1\bar{2}10\rangle\) screw dislocations in zirconium, Metall. Mater. Trans. A 45 (2014b) 5898–5905.
* Lemaignan (2012) C. Lemaignan, Zirconium Alloys: Properties and Characteristics, Elsevier, 2012, pp. 217–232. doi:10.1016/B978-0-08-056033-5.00015-X.
* Churchman (1954) A. T. Churchman, The slip modes of titanium and the effect of purity on their occurrence during tensile deformation of single crystals, Proc. R. Soc. Lond. A 226 (1954) 216–226.
* Akhtar and Teghtsoonian (1975) A. Akhtar, E. Teghtsoonian, Prismatic slip in \(\alpha\)-titanium single crystals, Metall. Mater. Trans. A 6 (1975) 2201–2208.
* Conrad (1981) H. Conrad, Effect of interstitial solutes on the strength and ductility of titanium, Prog. Mater. Sci. 26 (1981) 123–403.
* Naka et al. (1988) S. Naka, A. Lasalmonie, P. Costa, L. P. Kubin, The low-temperature plastic deformation of \(\alpha\) titanium and the core structure of \(a\)-type screw dislocations, Philos. Mag. A 57 (1988) 717–740.
* Biget and Saada (1989) M. P. Biget, G. Saada, Low-temperature plasticity of high-purity \(\alpha\)-titanium single crystals, Philos. Mag. A 59 (1989) 747–757.
* Fleischer (1962) R. L. Fleischer, Rapid solution hardening, dislocation mobility, and the flow stress of crystals, J. Appl. Phys. 33 (1962) 3504–3508.
* Tanaka and Conrad (1972) T. Tanaka, H. Conrad, Deformation kinetics for \(\left\{10\bar{1}0\right\}\left<11\bar{2}0\right>\) slip in titanium single crystals below \(0.4{T}_{m}\), Acta Metall. 20 (1972) 1019–1029.
* Tiwari et al. (1972) S. N. Tiwari, D. J. Lloyd, K. Tangri, The deformation of polycrystalline zirconium, Metall. Trans. 3 (1972) 2605–2612.
* Yu et al. (2015) Q. Yu, L. Qi, T. Tsuru, R. Traylor, D. Rugg, J. W. Morris, M. Asta, D. C. Chrzan, A. M. Minor, Origin of dramatic oxygen solute strengthening effect in titanium, Science 347 (2015) 635–639.
* Giannozzi et al. (2009) P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys.: Condens. Matter 21 (2009) 395502.
* Perdew et al. (1996) J. P. Perdew, K. Burke, M. Ernzerhof, Generalized gradient approximation made simple, Phys. Rev. Lett. 77 (1996) 3865–3868.
* Ghazisaeidi and Trinkle (2014) M. Ghazisaeidi, D. Trinkle, Interaction of oxygen interstitials with lattice faults in Ti, Acta Mater. 76 (2014) 82–86.
* Kwaśniak et al. (2016) P. Kwaśniak, H. Garbacz, K. Kurzydlowski, Solid solution strengthening of hexagonal titanium alloys: Restoring forces and stacking faults calculated from first principles, Acta Mater. 102 (2016) 304–314.
* Rodney et al. (2017) D. Rodney, L. Ventelon, E. Clouet, L. Pizzagalli, F. Willaime, Ab initio modeling of dislocation core properties in metals and semiconductors, Acta Mater. 124 (2017) 633–659.
* Vitek et al. (1970) V. Vitek, R. C. Perrin, D. K. Bowen, The core structure of \(1/2\langle 111\rangle\) screw dislocations in b.c.c. crystals, Philos. Mag. 21 (1970) 1049–1073.
* Hartley and Mishin (2005) C. S. Hartley, Y. Mishin, Characterization and visualization of the lattice misfit associated with dislocation cores, Acta Mater. 53 (2005) 1313–1321.
* Bakker et al. (1990) H. Bakker, H. P. Bonzel, C. M. Bruff, M. A. Dayananda, W. Gust, J. Horváth, I. Kaur, G. Kidson, A. D. LeClaire, H. Mehrer, G. Murch, G. Neumann, N. Stolica, N. A. Stolwijk, Diffusion in solid metals and alloys, in: H. Mehrer (Ed.), Landolt-Börnstein, New Series, Group III, volume 26, Springer-Verlag, Berlin, 1990.
* Ventelon et al. (2015) L. Ventelon, B. Lüthi, E. Clouet, L. Proville, B. Legrand, D. Rodney, F. Willaime, Dislocation core reconstruction induced by carbon segregation in bcc iron, Phys. Rev. B 91 (2015) 220102.
* Ardell (1966) A. J. Ardell, Dislocation mobility and the steady-state creep of crystals with special reference to \(\alpha\) zirconium, J. Appl. Phys. 37 (1966) 2910–2911.
* Moon et al. (2006) J. H. Moon, P. E. Cantonwine, K. R. Anderson, S. Karthikeyan, M. J. Mills, Characterization and modeling of creep mechanisms in zircaloy-4, J. Nucl. Mater. 353 (2006) 177–189.
* Morrow et al. (2013) B. M. Morrow, R. W. Kozar, K. R. Anderson, M. J. Mills, An examination of the use of the modified jogged-screw model for predicting creep behavior in Zircaloy-4, Acta Mater. 61 (2013) 4452–4460.
* Morrow et al. (2016) B. Morrow, R. Kozar, K. Anderson, M. Mills, Substructure evolution of Zircaloy-4 during creep and implications for the modified jogged-screw model, Mater. Sci. Eng. A 665 (2016) 90–97.
* Hirsch and Warrington (1961) P. B. Hirsch, D. H. Warrington, The flow stress of aluminium and copper at high temperatures, Philos. Mag. 6 (1961) 735–768.
* Barrett and Nix (1965) C. R. Barrett, W. D. Nix, A model for steady state creep based on the motion of jogged screw dislocations, Acta Metall. 13 (1965) 1247–1258.
* Caillard et al. (2015) D. Caillard, M. Rautenberg, X. Feaugas, Dislocation mechanisms in a zirconium alloy in the high-temperature regime: An in situ TEM investigation, Acta Mater. 87 (2015) 283–292.
|
1710.00987 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 25360,
"num_imgs": 0,
"llama3_tokens_count": 6256
} | [] | # Annotation and Detection of Emotion in Text-based Dialogue Systems with CNN
Jialiang Zhao
Qi Gao
School of Automation, Beijing Institute of Technology, Beijing 100081, P. R. China
alanzjl@foxmail.com
gaoqi@bit.edu.cn
###### Abstract
Knowledge of users’ emotion states helps improve human-computer interaction. In this work, we present EmoNet, an emotion detector for Chinese daily dialogues based on deep convolutional neural networks. In order to maintain the original linguistic features, such as word order, commonly used methods like segmentation and keyword extraction were not adopted; instead, we increased the depth of the CNN and let it learn the inner linguistic relationships itself. Our main contribution is a new model and a new pipeline which can be used in multi-language environments to solve sentiment-analysis problems. Experimental results show that EmoNet has a great capacity for learning the emotion of dialogues and achieves better results than other state-of-the-art detectors.
emotion detection, CNN, human-computer interaction, natural language processing
## 1 Introduction
Emotion detection is potentially important in voice recognition systems, as it provides further improvements to human-computer interaction.

Emotion detection can be achieved through various modalities, including text, gestures, facial expressions, and electrocardiographs[1, 2, 3, 4]. Much work based on these features has been introduced over the last decade[5]. However, text is the most common way of communicating on the Internet. Text-based emotion detection has multiple applications in human-computer interaction, especially in search engines and chatting robots. With knowledge of users’ emotions, search engines could give more specific results, while chatting robots could reply in a more natural way.

Methods for text-based emotion detection can be divided into three categories: keyword-based detection, learning-based detection, and hybrid detection[5]. CNNs (convolutional neural networks) have shown a great capacity for learning the nature of images and speech over the past few years. With a deep configuration and a large training dataset, keyword extraction might no longer be necessary, which would significantly simplify the classification process.

In this work, we present and test an emotion detector for Chinese daily dialogues based on a deep convolutional neural network, without segmentation or keyword extraction. This paper is divided into 7 sections. Section 2 introduces the pre-processing step. Section 3 gives the CNN configuration of EmoNet. Section 4 gives details of the training process. In Section 5, we test different configurations and hyper-parameters and give evaluations based on top-1 accuracy and comparisons with other state-of-the-art detectors at different scales.
## 2 Dialogue Pre-processing
We manually labeled over \(12,000\) dialogues for training and evaluation. They are divided into 4 categories: positive, negative, wondering, and neutral. Most of them come from scripts of TV shows, movies and books. Numbers of dialogues in different categories are listed in Tab.1
Overall | Positive | Negative | Wondering | Neutral
---|---|---|---|---
12186 | 3679 | 4205 | 1747 | 2555

Table 1: Numbers of dialogues in different categories
### Typical Routine
Before diving into the training process, normally several pre-processing steps should be done in order to improve overall efficiency. For English speech classification work, a typical routine is as follows:
* Spell check: A widely used library for spell check is _PyEnchant_. This step is optional.
* Stemming or Lemmatization[6]: Both of them are used to find the original form of words, e.g. ’talk’ is the original form of ’talking’ or ’talked’.
* Case normalization: Convert input to uniform uppercase or lowercase. For example, ”Today I’m happy.” might be converted to ”today i’m happy.”.
However, for Chinese classification or recognition systems, a special step, segmentation, generally needs to be done. Normally, researchers build a LUT (look-up table) containing thousands of commonly used Chinese words before training, and then the frequency of appearance of each LUT word in the target dialogue is counted. A number of segmentation algorithms have been developed over the last decade[7, 8, 9]. The resulting frequency vector is finally used as training material.
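For concreteness, a bare-bones version of this conventional pipeline might look as follows; this is our own illustrative sketch (jieba and the keyword list are example choices on our part, not part of EmoNet):

```python
# Conventional segmentation + LUT frequency-vector pipeline (for contrast).
import jieba  # a widely used Chinese word-segmentation library

def frequency_vector(dialogue, lut):
    """Count occurrences of each LUT keyword in the segmented dialogue."""
    words = jieba.lcut(dialogue)                 # word segmentation
    counts = {w: 0 for w in lut}
    for w in words:
        if w in counts:
            counts[w] += 1
    return [counts[w] for w in lut]              # fixed-length vector; word order is lost

# Hypothetical usage with a tiny keyword list:
# lut = ["开心", "难过", "为什么"]
# vec = frequency_vector("我今天不开心", lut)
```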
One of the most appealing advantages of this method is that the outputs of segmentation are of uniform length (equal to the length of the pre-built LUT), so that most classification algorithms, e.g. KNN and SVM, can achieve better results. Another important advantage is that segmentation with pre-selected keywords saves much time for researchers and improves efficiency[10].
However, drawbacks with this algorithm shouldn’t be ignored.
* Over-segmentation: Similar to image processing, over-segmentation may significantly decrease the accuracy of our classifier[11]. For example, ’not happy’ may be segmented into ’not’ and ’happy’. Obviously, a single ’not’ is meaningless while ’happy’ suggests a positive emotion, so this sentence will be classified as positive because of over-segmentation, while the original emotion should be negative.
* Loss of linguistic features: The order of words is lost after segmentation. The same set of words in different orders sometimes gives completely opposite emotions. For example, as demonstrated in Fig.1, ”He’s happy while I’m sad.” and ”I’m happy while he’s sad” share the same vocabulary but convey opposite emotions.
[Figure 1: two dialogues sharing the same vocabulary but with different word order convey opposite emotions]
### Pre-pocessing in EmoNet
In order to avoid the problems listed above, we did not adopt segmentation in EmoNet. Because our net is tested on Chinese dialogues, stemming and lemmatization are not necessary.
#### 2.2.1 Removal of Stop Words
Many words or expressions do not contribute to emotion or have the same likelihood of occurring in sentences that are not relevant to the target one. Such words are called stop words[12]. In order to save storage space and improve classification efficiency[13], we need to remove them before further training. A list of Chinese stop words from _Data Hall_ is used[14].
[Figure 2: removal of stop words from a source dialogue, keeping the original order]
As shown in Fig.2, we searched for stop words in every source dialogue and removed them in this step. Notice that the order is kept.
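A minimal, order-preserving removal step could look like the sketch below (our own; the stop-word file name is a placeholder, while the list itself is the one from Data Hall [14]):

```python
# Order-preserving stop-word removal: stop words are deleted as substrings,
# and the remaining characters keep their original order.
def remove_stop_words(dialogue, stop_words):
    for w in sorted(stop_words, key=len, reverse=True):  # longest stop words first
        dialogue = dialogue.replace(w, "")
    return dialogue

# Hypothetical usage:
# with open("chinese_stop_words.txt", encoding="utf-8") as f:
#     stop_words = [line.strip() for line in f if line.strip()]
# cleaned = remove_stop_words(raw_dialogue, stop_words)
```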
#### 2.2.2 Re-encode Input Characters
UTF-8 (8-bit Unicode Transformation Format) is a large and powerful character encoding which encompasses most of the world’s writing systems[15]. With UTF-8, EmoNet gains the ability to perform in multi-language dialogue systems. Normally, one UTF-8 character is encoded with 1-6 bytes, and Chinese characters use 3-byte UTF-8 encodings.

However, because of UTF-8’s large range, most of this encoding space is not needed here. Moreover, because of the great number of Chinese characters, it is unnecessary to express each character precisely. The next step is therefore to re-sample the encoding of the input characters.

Firstly, because our net currently only handles Chinese characters, English letters and Arabic numerals, we can truncate UTF-8 to a new set that only contains the characters of these vocabularies. Tab.2 gives the encoding of Chinese characters, full-width English characters and full-width Arabic numerals in UTF-8.
Alphabet | UTF-8 range | Number of characters
---|---|---
Chinese | u4e00 – u9fa5 | 20902
Uppercase English | uff21 – uff3a | 26
Lowercase English | uff41 – uff5a | 26
Numerals | uff10 – uff19 | 10

Table 2: UTF-8 encoding of full-width alphabets
After this procedure, a total of \(20964\) characters remain, most of which are Chinese characters.
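A direct way to apply this truncation is to test each character’s code point against the ranges of Tab. 2; the helper below is a sketch of that filter (the function names are ours).

```python
# Keep only characters inside the truncated set of Tab. 2:
# 20902 Chinese characters + 26 + 26 full-width letters + 10 full-width digits.
def in_truncated_set(ch):
    cp = ord(ch)
    return (0x4E00 <= cp <= 0x9FA5      # Chinese characters
            or 0xFF21 <= cp <= 0xFF3A   # full-width uppercase English
            or 0xFF41 <= cp <= 0xFF5A   # full-width lowercase English
            or 0xFF10 <= cp <= 0xFF19)  # full-width Arabic numerals

def truncate(dialogue):
    return "".join(ch for ch in dialogue if in_truncated_set(ch))
```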
#### 2.2.3 Encoding and Re-sampling
Unlike in image processing systems, the encoding of characters is not “continuous”: adjacent codes do not necessarily have similar meanings, whereas pixel values in imaging systems have a continuous effect on hue, saturation, etc. Thus we do not need to represent each character precisely. In EmoNet, we re-sample the \(20964\) codes into \(256\) codes with Eq. 1.
\[\lambda_{new}=\lambda_{ori}\ \%\ 256\] (1)
After this procedure, every character is represented with one byte. We set the maximum length of one dialogue to 144 in EmoNet, thus a total of \(144\times 1=144\) bytes is used to represent one dialogue.
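Putting Eq. 1 and the fixed dialogue length together, the encoding step can be sketched as below (our own sketch; zero-padding of short dialogues is an assumption, as the paper only fixes the maximum length of 144):

```python
MAX_LEN = 144

def encode_dialogue(dialogue):
    """Map an already filtered dialogue to a fixed-length vector of byte codes."""
    codes = [ord(ch) % 256 for ch in dialogue]     # Eq. (1): lambda_new = lambda_ori % 256
    codes = codes[:MAX_LEN]                        # truncate overly long dialogues
    codes += [0] * (MAX_LEN - len(codes))          # pad short dialogues with zeros (assumed)
    return codes
```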
## 3 Convolutional Neural Network in EmoNet
We adopted a deep CNN to perform the classification in EmoNet. A CNN is a powerful tool whose capacity can be adjusted by changing its depth and breadth. With large datasets, CNNs can achieve high performance in learning the nature of speech[16].

In EmoNet, we adopted a CNN with a configuration similar to, but deeper than, LeNet-5[17], consisting of convolutional, pooling, and fully-connected layers.
Fig.3 gives the overall structure of EmoNet.
[Figure 3: overall structure of EmoNet]
The input layer is a \(144\times 1\) vector. The augmentation layer is used for feature expansion and outputs a matrix of \(32\times 32\times 3\). Five convolutional layers are used, and most of them are followed by a max pooling layer for sub-sampling and dimension reduction. The last convolutional layer gives an output of \(6\times 6\times 256\). Fully-connected layers with dropout and a _softmax_ classifier are used for classification. The output is 4-dimensional: positive, negative, wondering, and neutral.

The following sections introduce the function and configuration of these layers in detail.
### Augmentation Layer
We applied an augmentation layer to expand the features of the input and scale it for the following layers. Because every convolutional layer generally outputs a matrix of smaller size but larger dimension than its input matrix, we need to expand the input matrix to a larger square matrix in order to apply a sufficient number of convolutional layers.

In every convolutional layer, we chose to use a filter of size \(5\times 5\) and stride \(1\). In order to output a \(12\times 12\times n\) matrix at the last convolutional step while applying 5 convolutional layers, we need a square output of size 32 from the augmentation step, which can be calculated with Eq. 2.
\[\begin{split} Size&=S_{output}+N_{layers}\times(S_{ filter}-Stride)\\ &=12+5\times(5-1)\\ &=32\end{split}\] (2)
The affine augmentation layer follows Eq.3, where the weight matrix \(W\) maps the 144-dimensional input to \(32\times 32\times 3=3072\) outputs, and the bias vector \(B\) is of size \([3072,1]\).
\[Output=W\cdot Input+B\] (3)
### Convolutional and Pooling Layer
The convolutional part consists of 5 convolutional layers and 4 pooling layers.
#### 3.2.1 Convolutional Layer
The convolutional layer applies a filter to compute convolutions and generates a higher-dimensional matrix. In EmoNet, the filter size of all 5 convolutional layers is chosen as \(5\times 5\). The filter dimension (number of channels) varies between layers.
Typically, a sigmoid nonlinearity (or activation function), which is given in Eq.4, is used after every convolutional layer.
\[S(x)=\frac{1}{1+e^{-x}}\] (4)
However, because the sigmoid saturates easily, we instead use Rectified Linear Units (ReLUs)[19], a non-saturating nonlinearity, following Nair and Hinton. ReLUs follow Eq.5.
\[S(x)=\left\{\begin{aligned} x,\ x\geq 0\\ 0,\ x<0\end{aligned}\right.\] (5)
#### 3.2.2 Pooling Layer
Pooling layers introduce invariance, reduce dimension and prevent overfitting in CNNs[18].
We adopted overlapping pooling layers in this network. Overlapping pooling means that adjacent pooling regions overlap with each other. For example, let the pooling region size be denoted as \(s\times s\) and the stride as \(z\). If \(s\leq z\), which is the case in most CNNs, the pooling is non-overlapping. If \(s>z\), it becomes overlapping pooling. According to Krizhevsky, Sutskever, and Hinton, an overlapping scheme reduces the error rate and decreases the probability of overfitting[20]. We adopted \(s=5,z=1\) in EmoNet.
Also, many pooling methods have been developed, such as max pooling, average pooling, chunk pooling, etc. We used max pooling in EmoNet. Its principle is illustrated in Fig.4.
[Figure 4: principle of max pooling]
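The overlapping max pooling used here (pooling region \(s=5\), stride \(z=1\)) can be written explicitly, as in the plain NumPy sketch below (ours, for illustration only):

```python
import numpy as np

def overlapping_max_pool2d(x, s=5, z=1):
    """Max pooling with an s x s window moved with stride z (s > z means overlapping)."""
    h, w = x.shape
    out_h = (h - s) // z + 1
    out_w = (w - s) // z + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * z:i * z + s, j * z:j * z + s].max()
    return out
```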
### Fully Connected Neural Network with Dropout
#### 3.3.1 Fully Connected Layers
A 3-layer fully connected neural network is connected to the last convolutional layer. Fully connected layers are affine layers, and every neuron in one layer is connected to each neuron in its adjacent layer. Each neuron of the first two layers comes with a ReLU activation. The detailed configuration is listed in Tab.3.
Layer | Layer 1 | Layer 2 | Layer 3
---|---|---|---
Number of neurons | 1024 | 1024 | 5
Number of inputs | 9216 | 1024 | 1024
Number of outputs | 1024 | 1024 | 5

Table 3: Structure of Fully Connected Layers
The final layer outputs a \(5\times 1\) vector. This vector is then mapped by softmax classification to 5 categories: positive, negative, wondering, neutral, and meaningless (which means the input should be discarded).
#### 3.3.2 Overfitting
Overfitting is a common problem in neural networks, especially when a large enough dataset is not available. Nowadays there are two popular methods to prevent overfitting: batch normalization and dropout.
We tested EmoNet with batch normalization first. During training, the inputs of every layer keep changing because the parameters in the preceding layers are changing at the same time. This phenomenon, which is called internal covariate shift[21], slows down the training process and requires careful initialization. Batch normalization solves this problem by normalizing layer inputs. However, in EmoNet, it reduced the error rate only by 2.5% to 3.8% compared with EmoNet without batch normalization for the same amount of training. We think that because our inputs are characters, which are discrete (while the inputs of imaging systems are continuous), normalizing them might be unhelpful. Thus we tried dropout instead.
The core principle of dropout is to randomly disable some neurons, along with their connections, during training[22], while doing nothing at test time. According to Srivastava et al., the optimal retention probability \(p\) of the input layer should be close to 100%, while for hidden layers it should be close to 50%[22]. We set \(p_{input}\) to \(1.0\) and \(p_{hidden}\) to \(0.3\), and we saw a reduction of the error rate of around 6.5%.
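To tie the layers of this section together, the sketch below builds a configuration-B-like model in Keras. It is a hypothetical reconstruction: the padding scheme and the resulting flattened size are our assumptions and are not guaranteed to reproduce the feature-map sizes quoted above, while the layer sequence and the retention probability of 0.3 follow the text and Tab. 4.

```python
# A minimal Keras sketch of an EmoNet-like configuration B (assumptions noted above).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emonet(num_classes=5, max_len=144):
    inp = layers.Input(shape=(max_len,))
    x = layers.Dense(32 * 32 * 3)(inp)                    # augmentation (affine) layer
    x = layers.Reshape((32, 32, 3))(x)
    x = layers.Conv2D(32, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=5, strides=1)(x)
    x = layers.Conv2D(64, 5, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=5, strides=1)(x)
    x = layers.Conv2D(128, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=5, strides=1)(x)
    x = layers.Conv2D(256, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=5, strides=1)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dropout(0.7)(x)                            # retention probability p_hidden = 0.3
    x = layers.Dense(1024, activation="relu")(x)
    x = layers.Dropout(0.7)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)
```

The nine weighted layers (one augmentation affine layer, five convolutions and three fully connected layers) match the layer count of Tab. 4.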
## 4 Training Process
Parameters need to be updated during training, which is normally done with back-propagation[24]. Many parameter-updating methods have been developed for this procedure, such as SGD, momentum, RMSprop, and Adam. Adam is a first-order gradient-based optimization method based on adaptive estimates of lower-order moments, which can achieve faster convergence than methods like SGD, momentum and RMSprop[25].
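For reference, one Adam update step in the textbook form of Kingma and Ba [25] is shown below; the hyper-parameter defaults are the usual ones, not necessarily those used for EmoNet.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m, v are the running first/second moment estimates, t >= 1."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)          # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```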
We chose Adam as EmoNet’s parameter updating method. We divided the dataset into 2 parts, one of which was used for training and the other for evaluation. Cross validation[26] is used during the parameter updating process, and the training data was further divided into 32 mini-batches for cross validation.
## 5 Optimization
### Test of Different Configurations
The configuration of EmoNet is modified from VGGNet[23]. We tested different configurations of EmoNet. Validation error and training speed within one epoch using the same amount of data are used as evaluation criteria. Tab.4 gives the configurations we tested, and Fig.5 gives a comparison between them. Matplotlib[27], a Python plotting package, is used for drawing.
[Figure 5: comparison of the tested configurations]
We chose configuration B, which achieved relatively high accuracy and fast performance at the same time, as the final structure of EmoNet.
### Test of Different Parameters
Learning rate, regularization strength in the affine layers, and the mean of the initialization are also important hyper-parameters that have a large influence on the final results. Because dropout makes careful initialization less important[22], we only tested different sets of learning rate \(\gamma\) and regularization strength \(L\). Results were collected with the same amount of input data and the same number of training iterations, and are shown in Fig.6.

The group with the highest accuracy, group \(C\), is chosen, where \(\gamma=5\times 10^{-6}\) and \(L=1.5\times 10^{-4}\). The loss vs. time curve of the first epoch is shown in Fig.7.
### Evaluation
With optimized parameters, EmoNet was trained with TensorFlow on an Nvidia GPU. After 100 epochs of training, an overall top-1 accuracy of 72.8% was achieved. Tab.5 gives the accuracy of each category separately.

According to the results, neutral and wondering states are relatively easier to detect than positive and negative states.
Wondering states are the easiest to detect, which is also easy to interpret. Generally dialogues of this emotion state come with question marks, or some specific words, like ”what”, ”why”, ”how”, etc.
No. | A | B | C | D
---|---|---|---|---
Number of layers | 9 weighted layers | 9 weighted layers | 9 weighted layers | 9 weighted layers
Input 144×1 vector | | | |
Augmentation 3072 affine layer | | | |
ConvLayers 32-D | Conv32, Conv32 | Conv32 | Conv32 | Conv32
Maxpool 5×5, stride 1 | | | |
ConvLayers 64-D | Conv64 | Conv64, Conv64 | Conv64 | Conv64
Maxpool 5×5, stride 1 | | | |
ConvLayers 128-D | Conv128 | Conv128 | Conv128, Conv128 | Conv128
Maxpool 5×5, stride 1 | | | |
ConvLayers 256-D | Conv256 | Conv256 | Conv256 | Conv256, Conv256
Maxpool 5×5, stride 1 | | | |
Fully connected layer 1024 | | | |
Dropout | | | |
Fully connected layer 1024 | | | |
Dropout | | | |
Fully connected layer 5 | | | |
Softmax | | | |

Table 4: Configurations of EmoNet. Rows without per-column entries are shared by all four configurations.
[Figure 6: results for different sets of learning rate and regularization strength]
Although the accuracy for the neutral state is high, we found that most falsely detected dialogues were classified as neutral. This phenomenon may result from two facts: first, the neutral state is too vague and general; second, unlike other states for which we can find specific emotional words (for example, ”why” for wondering states and ”happy” for positive states), there are no specific words for neutral states.

Detection of positive and negative states is similar. However, as we can see, the accuracy for positive states is low. This may result from the quality of the training materials: irony is very common in our sources, and it is sometimes hard even for us to distinguish whether a dialogue is positive or negative.
Comparisons between EmoNet and state-of-the-art Chinese text-based emotion detectors are given in Tab.6, Tab.7, and Tab.8.

_Multi-Modal Net_ was given by Ze-Jing Chuang and Chung-Hsien Wu[28], and _ESiN_ was given by Jianhua Tao[29].
[Figure 7: loss vs. time curve of the first epoch]
Class | neutral | positive | wondering | negative
---|---|---|---|---
top-1 accuracy | 0.73 | 0.57 | 0.85 | 0.69

Table 5: Evaluation
Detector | EmoNet | Multi-Modal Net | ESiN
---|---|---|---
top-1 accuracy | 0.72 | 0.6548 | not given

Table 6: Comparison of overall accuracy
Detector | EmoNet | Multi-Modal Net | ESiN
---|---|---|---
top-1 accuracy | 0.70 | 0.6466 | over 0.7

Table 7: Comparison of accuracy of emotional states
Detector | EmoNet | Multi-Modal Net | ESiN
---|---|---|---
top-1 accuracy | 0.73 | 0.7137 | peak over 0.8

Table 8: Comparison of accuracy of neutral states
## 6 Conclusion
In this paper, we presented EmoNet, an emotion detection system based on deep convolutional neural networks. We analyzed the features of Chinese encoding and adopted a pre-processing step without segmentation, stemming or lemmatization, which introduces difficulties but addresses the problem of the loss of linguistic features. In EmoNet, a simple re-sampling step is used to replace these steps. In the future, we will try other algorithms; we are currently developing a linear mapping system which can map arbitrary-length dialogues into equal-length outputs.

Different CNN configurations and different hyper-parameters were tested. An overall accuracy of 0.72 was achieved with 12,000 training dialogues, 100 epochs of training and the optimized model. The top-1 accuracy of EmoNet is higher than that of other Chinese text-based emotion detectors. It is foreseeable that with more training material, EmoNet has the capacity to achieve even better performance.
## References
* [1] Cohn J F, Katz G S. Bimodal expression of emotion by face and voice[C]//Proceedings of the sixth ACM international conference on Multimedia: Face/gesture recognition and their applications. ACM, 1998: 41-44.
* [2] Devillers L, Lamel L, Vasilescu I. Emotion detection in task-oriented spoken dialogues[C]//Multimedia and Expo, 2003. ICME’03. Proceedings. 2003 International Conference on. IEEE, 2003, 3: III-549.
* [3] Agrafioti F, Hatzinakos D, Anderson A K. ECG pattern analysis for emotion detection[J]. IEEE Transactions on Affective Computing, 2012, 3(1): 102-115.
* [4] Gunes H, Piccardi M. Bi-modal emotion recognition from expressive face and body gestures[J]. Journal of Network and Computer Applications, 2007, 30(4): 1334-1345.
* [5] Tinghao Y, Hsieh C T, Soo V W. Towards Text-based Emotion Detection[C]//Proc. of International Conference on Information Management and Engineering.[S. l.]: IEEE Press. 2009: 70-74.
* [6] Korenius T, Laurikkala J, Järvelin K, et al. Stemming and lemmatization in the clustering of Finnish text documents[C]//Proceedings of the thirteenth ACM international conference on Information and knowledge management. ACM, 2004: 625-633.
* [7] Liu K Y, Zheng J H. Research of automatic Chinese word segmentation[C]//Machine Learning and Cybernetics, 2002. Proceedings. 2002 International Conference on. IEEE, 2002, 2: 805-809.
* [8] Peng F, Feng F, McCallum A. Chinese segmentation and new word detection using conditional random fields[C]//Proceedings of the 20th international conference on Computational Linguistics. Association for Computational Linguistics, 2004: 562.
* [9] Sproat R, Gale W, Shih C, et al. A stochastic finite-state word-segmentation algorithm for Chinese[J]. Computational linguistics, 1996, 22(3): 377-404.
* [10] Huang C, Zhao H. Chinese word segmentation: A decade review[J]. Journal of Chinese Information Processing, 2007, 21(3): 8-20.
* [11] Patino L. Fuzzy relations applied to minimize over segmentation in watershed algorithms[J]. Pattern Recognition Letters, 2005, 26(6): 819-828.
* [12] Wilbur W J, Sirotkin K. The automatic identification of stop words[J]. Journal of information science, 1992, 18(1): 45-55.
* [13] Hao L, Hao L. Automatic identification of stop words in Chinese text classification[C]//Computer Science and Software Engineering, 2008 International Conference on. IEEE, 2008, 1: 718-722.
* [14] Data Hall. List of Chinese Stop Words (Stop Words Set). http://www.datatang.com/data/19300/, accessed 2016-07-05.
* [15] Yergeau F. UTF-8, a transformation format of Unicode and ISO 10646[R]. 1996.
* [16] Krizhevsky A, Sutskever I, Hinton G E. Imagenet classification with deep convolutional neural networks[C]//Advances in neural information processing systems. 2012: 1097-1105.
* [17] LeCun Y, Jackel L D, Bottou L, et al. Learning algorithms for classification: A comparison on handwritten digit recognition[J]. Neural networks: the statistical mechanics perspective, 1995, 261: 276.
* [18] Scherer D, Müller A, Behnke S. Evaluation of pooling operations in convolutional architectures for object recognition[J]. Artificial Neural Networks – ICANN 2010, 2010: 92-101.
* [19] Nair V, Hinton G E. Rectified linear units improve restricted boltzmann machines[C]//Proceedings of the 27th international conference on machine learning (ICML-10). 2010: 807-814.
* [20] Krizhevsky A, Sutskever I, Hinton G E. Imagenet classification with deep convolutional neural networks[C]//Advances in neural information processing systems. 2012: 1097-1105.
* [21] Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[C]//International Conference on Machine Learning. 2015: 448-456.
* [22] Srivastava N, Hinton G E, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
* [23] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
* [24] Hecht-Nielsen R. Theory of the backpropagation neural network[J]. Neural Networks, 1988, 1(Supplement-1): 445-448.
* [25] Kingma D, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
* [26] Kohavi R. A study of cross-validation and bootstrap for accuracy estimation and model selection[C]//Ijcai. 1995, 14(2): 1137-1145.
* [27] Barrett P, Hunter J, Miller J T, et al. matplotlib–A Portable Python Plotting Package[C]//Astronomical Data Analysis Software and Systems XIV. 2005, 347: 91.
* [28] Chuang Z J, Wu C H. Multi-modal emotion recognition from speech and text[J]. Computational Linguistics and Chinese Language Processing, 2004, 9(2): 45-62.
* [29] Tao J. Context based emotion detection from text input[C]//Interspeech. 2004.
|
1804.07026 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 26504,
"num_imgs": 3,
"llama3_tokens_count": 7484
} | [
"content_image/1804.07026/x1.png",
"content_image/1804.07026/x2.png",
"content_image/1804.07026/x3.png"
] | # Constraints on a Spin-Dependent Exotic Interaction between Electrons with Single Electron Spin Quantum Sensors
Xing Rong
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Man Jiao
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Jianpei Geng
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
School of Electronic Science and Applied Physics, Hefei University of Technology, Hefei 230009, China
Bo Zhang
bz8810@ustc.edu.cn
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Tianyu Xie
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Fazhan Shi
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Chang-Kui Duan
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
Yi-Fu Cai
CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei 230026, China
School of Astronomy and Space Science, University of Science and Technology of China, Hefei 230026, China
Jiangfeng Du
djf@ustc.edu.cn
CAS Key Laboratory of Microscale Magnetic Resonance and Department of Modern Physics, University of Science and Technology of China, Hefei 230026, China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China
Hefei National Laboratory for Physical Sciences at the Microscale, University of Science and Technology of China, Hefei 230026, China
March 2, 2024
###### Abstract
A new laboratory bound on the axial-vector mediated interaction between electron spins at micrometer scale is established with single nitrogen-vacancy (NV) centers in diamond. A single crystal of \(p\)-terphenyl doped pentacene-\(d_{14}\) under laser pumping provides the source of polarized electron spins. Based on the measurement of polarization signal via nitrogen-vacancy centers, we set a constraint for the exotic electron-electron coupling \(g_{A}^{e}g_{A}^{e}\), within the force range from 10 to 900 \(\mu\)m. The obtained upper bound of the coupling at 500 \(\mu\)m is \(|g_{A}^{e}g_{A}^{e}/4\pi\hbar c|\leq 1.8\times 10^{-19}\), which is one order of magnitude more stringent than a previous experiment. Our result shows that the NV center can be a promising platform for searching for new particles predicted by theories beyond the standard model.
Given our ignorance of the ultraviolet completion of particle physics, it is of great importance to search for new particles beyond the standard model [1]. Theoretically predicted particles, such as pseudoscalar fields (axions and axionlike particles [3; 2]) and axial-vector fields (paraphotons [4] and extra \(Z\) bosons [5; 6]), have attracted attention in a wide variety of research areas [7]. The motivation has been strong for decades, coming from cosmology [8], namely the candidates for dark matter [9] and dark energy [10], from the understanding of the charge-conjugation and parity symmetries in quantum chromodynamics (QCD) [11], as well as from predictions of string theory [1]. The exchange of these hypothetical particles results in spin-dependent forces [6], which were originally discussed by Moody and Wilczek [2]. Various laboratory searches focus on the detection of macroscopic axial-vector dipole-dipole forces between polarized electrons, described by the \(V_{2}\) potential in Ref. [6], ranging from the atomic scale to the radius of the Earth [7]. A series of stringent constraints on this coupling has been set by torsion-pendulum experiments [12; 13], a trapped-ion experiment [14], positronium hyperfine spectroscopy [15; 16; 7], helium fine-structure spectroscopy [17], and by using polarized electrons within the Earth [18]. Recently, data from STM-ESR experiments [19; 20] have been used to constrain exotic dipole-dipole interactions between electrons at the nanometer scale [21].
In this Letter, we establish a new constraint on the exotic dipole-dipole interaction between electrons at the micrometer scale using single nitrogen-vacancy (NV) centers in diamond. The source of polarized electrons is a single crystal of \(p\)-terphenyl doped with pentacene-\(d_{14}\) under laser pumping [22], and the sensor can be engineered to be sensitive to the signal from these polarized electrons [23]. Based on our recent measurement of polarized electrons with NV centers, the resulting bound on the axial-vector mediated interaction considerably improves on previous experimental limits.
<figure><img src="content_image/1804.07026/x1.png"><figcaption>Figure 1: (a) Schematic experimental setup. An NV center in diamond, labeled S, is used to search for the exotic dipole-dipole interaction between electrons. The source of polarized electrons is provided by laser-pumped pentacene doped in a p-terphenyl single crystal of thickness h=15 μm. The radius of the laser beam for pumping pentacene is r0=35 μm. The distance between the NV center and the surface of the single crystal is labeled d. An external magnetic field, B0=512 G, is applied along the NV's axis (labeled the z axis). (b) The electronic energy diagram of pentacene within a p-terphenyl host lattice under the external magnetic field B0. A laser with a 520-nm wavelength pumps pentacene from the singlet ground state S0 to the first excited singlet state S1. The molecules then decay quickly into the triplet state TE through spin-selective intersystem crossing (ISC). The population in the state |0⟩ is much larger than that in |±⟩. A radio-frequency pulse, labeled rf, is used to engineer the populations of the states |0⟩ and |+⟩. The spin polarization relaxes back to the ground singlet state by phosphorescence or nonradiative decay. The decay time of the states |±⟩ is t1=7±1 μs [23].</figcaption></figure>
Single NV centers in diamond, which are defects composed of a substitutional nitrogen atom and a neighboring vacancy [24], have been proposed as quantum sensors for detecting weak magnetic signals at the nanoscale [25; 26]. The size of this quantum sensor can be made small compared to the micrometer force range, and the geometry enables close proximity between the sensor and the source. Furthermore, magnetic noise can be isolated well by dynamical decoupling techniques [27; 26]. Recently, this type of quantum sensor has been proposed and utilized to explore the electron-nucleon monopole-dipole interaction [28].
Herein, we focus on searching for an exotic dipole-dipole interaction mediated by axial-vector fields between electrons. Figure 1(a) shows a schematic of the setup. A single crystal of \(p\)-terphenyl doped with pentacene-\(d_{14}\) (\(0.05\) mol\(\%\)) is placed on the surface of the diamond. The spin density of the sample is estimated to be \(\rho=1.62\times 10^{-3}~{}\)nm\({}^{-3}\), and the thickness of the single crystal is \(h=15~{}\mu\)m. The long axis of the pentacene molecule is nearly along the [111]-NV axis. A 520-nm laser pulse with a beam intensity of about \(10^{7}~{}\)W/m\({}^{2}\) is applied to the single crystal to generate polarized electrons [22]. The NV center, labeled \(S\), is a few micrometers below the surface of the diamond. The ground state of the NV center is an electron spin triplet \({}^{3}A_{2}\) with substates \(\left|m_{S}=0\right\rangle\) and \(\left|m_{S}=\pm 1\right\rangle\)[24]. A static magnetic field \(B_{0}\) of about 512 G is applied along the NV symmetry axis to remove the degeneracy of the \(|m_{S}=\pm 1\rangle\) spin states. The spin states \(\left|m_{S}=0\right\rangle\) and \(\left|m_{S}=-1\right\rangle\) are utilized as the quantum sensor [26]. The state of \(S\) is manipulated by microwave pulses, labeled MW in Fig. 2(a), which are delivered by a copper microwave wire; the \(\left|m_{S}=+1\right\rangle\) state remains idle due to the large detuning. Laser pulses with a wavelength of \(532~{}\)nm are applied to initialize and read out the state of \(S\)[28]. Two layers, a 150-nm silver layer and a 100-nm PMMA layer, are placed between the single crystal and the diamond to isolate the two laser beams as well as the fluorescence from \(S\) and from the single crystal.
The first step is to prepare polarized electrons. The electronic energy level diagram of electron spins in a pentacene molecule [29] is shown in Fig. 1(b). After excitation by a 520-nm laser pulse, pentacene is pumped from the singlet state \(S_{0}\) to the triplet manifold \(T_{E}\) via spin-selective intersystem crossing [30; 31]. The population of the state \(\left|0\right\rangle\) of the triplet sublevels is much greater than that of the states \(\left|\pm\right\rangle\), while the populations of \(\left|\pm\right\rangle\) are equal [22]. In our experiment, a \(1.5~{}\mu\)s laser pulse from a Gaussian beam with a radius of \(35~{}\mu\)m was applied. A radio-frequency (rf) pulse with its frequency resonant with the transition between \(\left|0\right\rangle\) and \(\left|+\right\rangle\) is applied after the laser pumping. The frequency of the rf pulse is set to 820 MHz, and its duration is \(80~{}\)ns. After this rf pulse, a nonzero population difference between the Zeeman eigenstates of the external magnetic field (\(\left|\pm 1\right\rangle_{p}\)), with \(P_{0}\) being about \(0.5\%\), is generated [23]. After the polarization procedure, the electron spins relax back to the singlet ground state \(S_{0}\), which is silent in magnetic resonance. This results in a decay of the polarization, \(P(t)=P_{0}\exp(-t/t_{1})\), with decay time \(t_{1}=7\pm 1~\mu\)s.
Now, we consider the interactions between the polarized electrons of pentacene and \(S\). The magnetic dipole-dipole interaction between a single electron spin and \(S\) is
\[H_{1}=-\frac{\mu_{0}\gamma_{e}\gamma_{e}\hbar^{2}}{16\pi r^{3}}[3(\vec{\sigma_ {1}}\cdot\hat{r})(\vec{\sigma_{2}}\cdot\hat{r})-(\vec{\sigma_{1}}\cdot\vec{ \sigma_{2}})],\] (1)
where \(\vec{\sigma_{1}}\) and \(\vec{\sigma_{2}}\) stand for the Pauli vectors of the electron spin of pentacene and that of \(S\), respectively, and \(\gamma_{e}=2\pi\times 2.8~{}\)MHz/Gauss is the gyromagnetic ratio of the electron spin. The symbol \(\vec{r}\) denotes the displacement vector between the electrons, with \(r=|\vec{r}|\) the distance and \(\hat{r}=\vec{r}/r\) the unit displacement vector. The axial-vector dipole-dipole interaction mediated by hypothetical axial-vector bosons [6] can be written as
\[H_{2}=\frac{g_{A}^{e}g_{A}^{e}}{4\pi\hbar c}\frac{\hbar c}{r}(\vec{\sigma_{1}} \cdot\vec{\sigma_{2}})e^{{-\frac{r}{\lambda}}},\] (2)
where \(g_{A}^{e}g_{A}^{e}/4\pi\hbar c\) is the dimensionless axial-vector coupling constant between electrons, \(\lambda=\hbar/(mc)\) is the force range, \(m\) is the mass of the hypothetical particle, \(\hbar\) is Planck's constant divided by \(2\pi\), and \(c\) is the speed of light. When the electron spin of pentacene is in the state \(\left|+1\right\rangle_{p}\), the quantum sensor \(S\) experiences an effective magnetic field from the electron spin, which can be written as
\[b_{\textrm{eff}}(r,\theta)=-\frac{\mu_{0}\gamma_{e}\hbar}{8\pi r^{3}}(3\cos^{2 }\theta-1)+(\frac{g_{A}^{e}g_{A}^{e}}{4\pi\hbar c})\frac{2c}{\gamma_{e}}\frac{ e^{-\frac{r}{\lambda}}}{r},\] (3)
where \(\theta\) stands for the angle between the external magnetic field and \(\vec{r}\). The first term on the right-hand side of Eq. 3 is due to the magnetic dipole-dipole interaction, and the second term is from the axial-vector coupling between electrons. The effective magnetic field felt by \(S\) from a bulk of pentacene with electron spin density \(\rho\) and polarization \(P(t)\) is
\[b(t)=\rho P(t)\int_{V}b_{\textrm{eff}}(r,\theta)dV,\] (4)
where \(V\) stands for the cylinder of polarized electrons. The radius of the cylinder is equal to the radius of the laser beam, which is \(35\pm 5~{}\mu\)m. The thickness of the cylinder is \(15\pm 3~{}\mu\)m.
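To make the geometry of Eq. 4 concrete, a minimal numerical sketch is given below. It places the NV center on the axis of the polarized cylinder and takes the quantization axis along that axis, which is a simplification of the actual geometry; the polarization value `P0` and the trial coupling `gg` are placeholders, so the printed numbers are illustrative only and do not reproduce the analysis in the text.

```python
# A minimal numerical sketch of Eqs. (3) and (4): the effective field at the
# NV center produced by the polarized-electron cylinder.  The geometry is
# simplified (NV on the cylinder axis, quantization axis along that axis) and
# P0 is a placeholder, so the printed values are illustrative only.
import numpy as np

mu0     = 4e-7 * np.pi           # vacuum permeability, T m / A
hbar    = 1.0546e-34             # reduced Planck constant, J s
c       = 2.998e8                # speed of light, m / s
gamma_e = 2 * np.pi * 2.8e10     # electron gyromagnetic ratio, rad s^-1 T^-1

rho = 1.62e24                    # polarized-spin density, m^-3 (1.62e-3 nm^-3)
P0  = 0.047                      # assumed polarization (placeholder value)
d, h, R = 12e-6, 15e-6, 35e-6    # NV depth, crystal thickness, beam radius (m)
lam = 500e-6                     # force range lambda (m)
gg  = 1e-19                      # trial value of g_A^e g_A^e / (4 pi hbar c)

s = np.linspace(0.0, R, 400)             # radial coordinate in the slab plane
z = np.linspace(d, d + h, 400)           # distance along the cylinder axis
S, Z = np.meshgrid(s, z, indexing="ij")
r = np.hypot(S, Z)

# integrands of Eq. (3), weighted by the cylindrical volume element 2*pi*s
dip = -mu0 * gamma_e * hbar / (8 * np.pi * r**3) * (3 * (Z / r) ** 2 - 1)
exo = gg * (2 * c / gamma_e) * np.exp(-r / lam) / r
w = 2 * np.pi * S

b_dip = rho * P0 * np.trapz(np.trapz(dip * w, z, axis=1), s)
b_exo = rho * P0 * np.trapz(np.trapz(exo * w, z, axis=1), s)
print(f"dipolar contribution to b(0): {b_dip:.3e} T")
print(f"exotic contribution to b(0):  {b_exo:.3e} T (for the trial coupling)")
```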
<figure><img src="content_image/1804.07026/x2.png"><figcaption>Figure 2: (a) Pulse sequence for the measurement of the polarized electrons by S. For the rf pulse, the frequency is 820 MHz and the pulse length is 80 ns. For the MW pulses, the frequency is 1.43 GHz and the pulse length of π/2 (π) is about 90 ns (180 ns). The pulse lengths of all the laser pulses are 1.5 μs. The delay time between MW pulses is set to τ=30 μs. (b) Black circles with error bars are the experimentally accumulated ϕ due to the polarized electrons, for different distances d = 12, 23, 49 μm. The error bars of the data are due to photon statistics. The red line is the fit for ϕ with Eq. 4 when λ=500 μm.</figcaption></figure>
The experimental pulse sequence is shown in Fig. 2(a). The first \(\pi/2\) microwave pulse prepares \(S\) in the superposition state \((|0\rangle-i|1\rangle)/\sqrt{2}\). A spin echo sequence [32] is applied to \(S\) to cancel unwanted semistatic magnetic noise; the coherence time of \(S\) is about \(400~{}\mu\)s. The delay time \(\tau\) in the pulse sequence is fixed to \(\tau=30~{}\mu\)s, which is much shorter than \(T_{2}\) and much longer than the decay time \(t_{1}\) of the polarized electrons. A 520-nm laser pulse together with an rf pulse is applied to the bulk pentacene to prepare the polarized electrons. The polarized electrons generate an effective magnetic field \(b(t)\) at the NV center's electron spin via the coupling between them. This effective magnetic field \(b(t)\) causes a phase shift \(\phi=\int_{0}^{\tau}\gamma_{e}b(t)dt-\int_{\tau}^{2\tau}\gamma_{e}b(t)dt\) on the state of \(S\), and the final \(\pi/2\) pulse converts this phase shift into the population of the state \(\left|m_{S}=0\right\rangle\) of \(S\). The phase of the last \(\pi/2\) pulse is set to \(90^{\circ}\) relative to the first \(\pi/2\) pulse, so that the accumulated phase due to the coupling to the polarized electrons can be obtained as \(\phi=\arcsin(1-2P_{\left|m_{S}=0\right\rangle})\), where \(P_{\left|m_{S}=0\right\rangle}\) stands for the population in the state \(\left|m_{S}=0\right\rangle\) of the NV center.
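For orientation, if one assumes that the effective field simply follows the polarization decay, \(b(t)=b(0)e^{-t/t_{1}}\), starting at the beginning of the first free-evolution interval (a simplifying assumption, not the full treatment of the Supplemental Material [33]), the accumulated phase evaluates to
\[\phi=\gamma_{e}b(0)\biggl[\int_{0}^{\tau}e^{-t/t_{1}}\,dt-\int_{\tau}^{2\tau}e^{-t/t_{1}}\,dt\biggr]=\gamma_{e}\,b(0)\,t_{1}\bigl(1-e^{-\tau/t_{1}}\bigr)^{2}\approx\gamma_{e}\,b(0)\,t_{1}\qquad(\tau\gg t_{1}),\]
so with \(\tau=30~\mu\)s and \(t_{1}=7~\mu\)s about \(97\%\) of the maximum possible phase \(\gamma_{e}b(0)t_{1}\) is collected in the first half of the echo.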
The experimental data are presented in Fig. 2(b). Three NV centers with different depths were chosen to measure the signal from the polarized electrons; their depths are \(d\) = 12, 23, and 49\(~{}\mu\)m [33]. The experimental phases acquired by the NV centers are shown as black circles with error bars in Fig. 2(b), where the error bars are due to photon statistics. The red line is the fit to the experimental data using Eq. 4 with both interactions included, for a force range \(\lambda=500~{}\mu\)m. The initial polarization obtained from this fit is \(P_{0}=4.7\pm 0.1\%\). For \(\lambda=500~{}\mu\)m, the fit gives \(g_{A}^{e}g_{A}^{e}/4\pi\hbar c=(-0.78\pm 1.46)\times 10^{-20}\). The value of the axial-vector induced interaction is smaller than its standard deviation, showing no evidence of the exotic interaction in our experiment. The upper limit on this interaction at \(\lambda=500~{}\mu\)m due to the statistical errors alone is \(g_{A}^{e}g_{A}^{e}/4\pi\hbar c\leq 3.64\times 10^{-20}\) at the 95\(\%\) confidence level. The constraint due to the statistical errors can be obtained for any given force range \(\lambda\) with the same procedure.
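For reference, the quoted statistical limit follows from the fit result if the 95% bound is formed as \(|\mathrm{central\;value}|+1.96\,\sigma\); this construction is an assumption on how the number was obtained, but it is consistent with the values quoted above.

```python
# Assumed construction of the statistical 95% C.L. bound: |central| + 1.96*sigma.
central, sigma = -0.78e-20, 1.46e-20
print(abs(central) + 1.96 * sigma)   # ~3.64e-20, matching the quoted limit
```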
Systematic error | Size of effect | Corrections
---|---|---
Deviation in x-y plane | 0±10 μm | (−0.6±1.3)×10^{-20}
Distance | 12±1.3 μm | (1±80)×10^{-22}
Decoherence of S | 405±23 μs | (−55±6)×10^{-22}
Decay time | 7±1 μs | (−5±36)×10^{-21}
Radius | 35±5 μm | (−3±7)×10^{-21}
Thickness | 15±3 μm | (−9±45)×10^{-21}
Polarization | 4.7±0.1% | (−1±52)×10^{-22}
Total | | (−2.9±6.0)×10^{-20}
Table 1: Summary of the systematic errors in our experiment. The corrections to g_A^e g_A^e/4πℏc at λ=500 μm are listed.
We examined systematic errors and analyzed the corresponding corrections to \(g_{A}^{e}g_{A}^{e}/4\pi\hbar c\). We take \(\lambda=500~{}\mu\)m as an example; the corrections due to these systematic errors are listed in Table 1. The main systematic errors in our experiment are those of the magnetic dipole-dipole interaction between electrons due to the uncertainties of the experimental parameters. For example, the distance between \(S\) and the bottom of the pentacene bulk is \(12\pm 1.3~{}\mu\)m, from which the shift of the magnetic field felt by \(S\) due to the dipole-dipole interaction is estimated to be 1.0(115)\(\times 10^{-10}~{}\)T. The corresponding correction to \(g_{A}^{e}g_{A}^{e}/4\pi\hbar c\) at \(500~{}\mu\)m is \(1(80)\times 10^{-22}\). The deviation in the \(x\)-\(y\) plane is mainly due to the long-time drift of our optical system, which was observed to be less than \(10~{}\mu\)m during the experiment; this effect causes a correction to the coupling of \(-0.6(1.3)\times 10^{-20}\). The systematic errors due to the uncertainties of the radius, the thickness of the single crystal, and the relaxation time of the polarized electrons are analyzed in the Supplemental Material [33], and the correction due to the decoherence time of \(S\) is also examined. The detailed analysis of the systematic errors is summarized in Table 1. The total correction to the interaction at \(500~{}\mu\)m is \(-2.9(6.0)\times 10^{-20}\). The bound on the exotic interaction with force range \(\lambda=500~{}\mu\)m is derived to be \(|g_{A}^{e}g_{A}^{e}/4\pi\hbar c|\leq 1.8\times 10^{-19}\) at the 95\(\%\) confidence level, when both statistical and systematic errors are taken into account. The upper limits for different values of the force range, shown in Fig. 3, are obtained with the same method.
Figure 3 shows the new constraint set by this work together with recent constraints from experimental searches for axial-vector-mediated dipole-dipole interactions. Filled areas correspond to excluded values. For force ranges \(\lambda>900~{}\mu\)m, the constraint was established by Ritter \(et~{}al.\) [12; 7]; for \(\lambda<10~{}\mu\)m, the upper limit was set by Kotler \(et~{}al.\) [14]. The red line is the constraint established by our experimental observation, which is more stringent in the range from 10 to 900\(~{}\mu\)m. Specifically, the obtained upper limit on the exotic dipole-dipole interaction at 500 \(\mu\)m is about a factor of 50 more stringent than the one set by Ref. [14]. The constraint may be further improved by several strategies in the future. By enhancing the power of the excitation laser, the polarization of the electron spins can be increased, and multiple laser pumping pulses can be employed together with a multipulse dynamical decoupling sequence, so that the accumulated phase due to the polarized electrons is enhanced. To reduce the systematic errors, the single crystal of pentacene can be fabricated with higher precision, and the location of the NV center can be determined more precisely by high-resolution imaging techniques such as stimulated emission depletion microscopy [34].
<figure><img src="content_image/1804.07026/x3.png"><figcaption>Figure 3: Upper limit on the axial-vector-mediated dipole-dipole interactions between electrons g_A^e g_A^e/4πℏc as a function of the force range λ and the mass of the axial-vector bosons m. The black solid lines represent upper bounds from Refs. [14; 12]. Our work (the red line) establishes a new laboratory bound in the force range from 10 to 900 μm. The obtained upper bound of the interaction at 500 μm is |g_A^e g_A^e/4πℏc|≤1.8×10^{-19}, which is one order of magnitude more stringent than a previous experiment.</figcaption></figure>
_Conclusion_. We present an experimental platform to constrain an exotic dipole-dipole interaction between electrons. Our method benefits from the high controllability of the quantum states of NV centers [35], which have been employed as sensitive magnetometers. Our recent work shows that NV centers can be utilized as a quantum sensor to detect the monopole-dipole interaction between an electron spin and nucleons at micrometer scale [28]. In the present study, a new constraint on an axial-vector mediated interaction between electrons for the force range 10-900\(~{}\mu\)m has been established. In the future, we expect that other types of spin-dependent forces [6] might be investigated by the NV-center quantum sensor. NV centers will not only be an important quantum sensor for physics within the standard model, but will also be a platform for probing hypothetical particles beyond the standard model.
This work was supported by the NSFC (Grants No. 81788101, No. 11227901, No. 11722327, No. 11653002, No. 11421303, and No. J1310021), the CAS (Grants No. GJJSTD20170001, No. QYZDY-SSW-SLH004, and No. QYZDB-SSW-SLH005), the 973 Program (Grants No. 2013CB921800 and No. 2016YFB0501603), and Anhui Initiative in Quantum Information Technologies (Grant No. AHY050000). X. R and F. S. thank the Youth Innovation Promotion Association of Chinese Academy of Sciences for the support. Y. F. C. is supported in part by the CAST Young Elite Scientists Sponsorship Program (2016QNRC001) and by the Fundamental Research Funds for the Central Universities.
## References
* (1) J. H. Schwarz and N. Seiberg, Rev. Mod. Phys. **71**, S112 (1999).
* (2) J. E. Moody and F. Wilczek, Phys. Rev. D **30**, 130 (1984).
* (3) P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner, and K. A. van Bibber, Annu. Rev. Nucl. Part. Sci. **65**, 485 (2015).
* (4) B. A. Dobrescu, Phys. Rev. Lett. **94**, 151802 (2005).
* (5) T. Appelquist, B. A. Dobrescu, and A. R. Hopper, Phys. Rev. D **68**, 035012 (2003).
* (6) B. A. Dobrescu and I. Mocioiu, J. High Energy Phys. **11** (2006) 005.
* (7) M.S. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko, and C. W. Clark, Rev. Mod. Phys. **90**, 025008 (2018).
* (8) D. J. Marsh, Phys. Rep. **643**, 1 (2016).
* (9) G. Bertone, D. Hooper, and J. Silk, Phys. Rep. **405**, 279 (2005).
* (10) M. Kamionkowski, J. Pradler, and D. G. E. Walker, Phys. Rev. Lett. **113**, 251302 (2014).
* (11) R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. **38**, 1440 (1977).
* (12) R. C. Ritter, C. E. Goldblum, W. T. Ni, G. T. Gillies and C. C. Speake, Phys. Rev. D **42**, 977 (1990).
* (13) B. R. Heckel, E. G. Adelberger, C. E. Cramer, T. S. Cook, S. Schlamminger, U. Schmidt, Phys. Rev. D **78**, 092006 (2008).
* (14) S. Kotler, R. Ozeri, and D. F. Jackson Kimball, Phys. Rev. Lett. **115**, 081801 (2015).
* (15) S. G. Karshenboim, Phys. Rev. D **82**, 113013 (2010).
* (16) T. M. Leslie, E. Weisman, R. Khatiwada, and J. C. Long, Phys. Rev. D **89**, 114022 (2014).
* (17) F. Ficek, D. F. Jackson Kimball, M. G. Kozlov, N. Leefer, S. Pustelny, and D. Budker, Phys. Rev. A **95**, 032505 (2017).
* (18) L. Hunter _et al._, Science **339**, 928 (2013).
* (19) S. Baumann _et al._, Science **350**, 417 (2015).
* (20) T. Choi _et al._, Nat. Nanotechnol. **12**, 420 (2017).
* (21) P. Luo, J. Ding, J. Wang and X. Ren, Phys. Rev. D **96**, 055028 (2017).
* (22) D. J. Sloop _et al._, J. Chem. Phys. **75**, 3746–3757 (1981).
* (23) T. Xie _et al._, Phys. Rev. Appl. **9**, 064003 (2018).
* (24) M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. L. Hollenberg, Phys. Rep. **528**, 1 (2013).
* (25) J. M. Taylor _et al._, Nat. Phys. **4**, 810 (2008).
* (26) C. L. Degen _et al._, Rev. Mod. Phys. **89**, 035002 (2017).
* (27) J. Du _et al._, Nature **461**, 1265 (2009).
* (28) X. Rong _et al._, Nat. Commun. **9**, 739 (2018).
* (29) J. L. Ong _et al._, J. Phys. Chem. **97**, 7833 (1993).
* (30) F. G. Patterson, H. W. H. Lee, W. L. Wilson, and M. D. Fayer, Chem. Phys. **84**, 51 (1984).
* (31) J. Köehler _et al._, Chem. Phys. Lett. **250**, 137 (1996).
* (32) E. L. Hahn, Phys. Rev. **80**, 580 (1950).
* (33) See Supplemental Material for details of the experimental parameters and the statistical and systematic error analysis, which includes Ref. [36].
* (34) D. Wildanger, J. R. Maze, and S. W. Hell, Phys. Rev. Lett. **107**, 017601 (2011).
* (35) X. Rong _et al._, Nat. Commun. **6**, 8748 (2015).
* (36) G. Balasubramanian _et al._, Nat. Mater. **8**, 383 (2009).
|
1708.04824 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 38895,
"num_imgs": 1,
"llama3_tokens_count": 15247
} | [
"content_image/1708.04824/x1.png"
] | # Some remarks on the theorems of Wright and Braaksma on the Wright function \({}_{p}\Psi_{q}(z)\)¹
R. B. Paris
_University of Abertay Dundee, Dundee DD1 1HG, UK_
###### Abstract
We carry out a numerical investigation of the asymptotic expansion of the so-called Wright function \({}_{p}\Psi_{q}(z)\) (a generalised hypergeometric function) in the case when exponentially small terms are present. This situation is covered by two theorems of Wright and Braaksma. We demonstrate that a more precise understanding of the behaviour of \({}_{p}\Psi_{q}(z)\) is obtained by taking into account the Stokes phenomenon.
**1. Introduction**
We consider the Wright function (a generalised hypergeometric function) defined by
\[{}_{p}\Psi_{q}(z)\equiv{}_{p}\Psi_{q}\biggl{(}\!\!\begin{array}[]{c}(\alpha_{1 },a_{1}),\ldots,(\alpha_{p},a_{p})\\ (\beta_{1},b_{1}),\ldots,(\beta_{q},b_{q})\end{array}\!\!;z\!\biggr{)}=\sum_{n =0}^{\infty}g(n)\,\frac{z^{n}}{n!},\] (1.1)
\[g(n)=\frac{\prod_{r=1}^{p}\Gamma(\alpha_{r}n+a_{r})}{\prod_{r=1}^{q}\Gamma( \beta_{r}n+b_{r})},\] (1.2)
where \(p\) and \(q\) are nonnegative integers, the parameters \(\alpha_{r}\) and \(\beta_{r}\) are real and positive and \(a_{r}\) and \(b_{r}\) are arbitrary complex numbers. We also assume that the \(\alpha_{r}\) and \(a_{r}\) are subject to the restriction
\[\alpha_{r}n+a_{r}\neq 0,-1,-2,\ldots\qquad(n=0,1,2,\ldots\ ;\,1\leq r\leq p)\] (1.3)
so that no gamma function in the numerator in (1.1) is singular. In the special case \(\alpha_{r}=\beta_{r}=1\), the function \({}_{p}\Psi_{q}(z)\) reduces to a multiple of the ordinary hypergeometric function
\[{}_{p}\Psi_{q}(z)=\frac{\prod_{r=1}^{p}\Gamma(a_{r})}{\prod_{r=1}^{q}\Gamma(b_ {r})}\,{}_{p}F_{q}\biggl{(}\begin{array}[]{c}a_{1},\ldots,a_{p}\\ b_{1},\ldots,b_{q}\end{array}\!\!;z\biggr{)};\]
see, for example, [13, p. 40].
We introduce the parameters associated² with \(g(n)\) given by
\[\kappa=1+\sum_{r=1}^{q}\beta_{r}-\sum_{r=1}^{p}\alpha_{r},\qquad h=\prod_{r=1} ^{p}\alpha_{r}^{\alpha_{r}}\prod_{r=1}^{q}\beta_{r}^{-\beta_{r}},\]
\[\vartheta=\sum_{r=1}^{p}a_{r}-\sum_{r=1}^{q}b_{r}+\leavevmode\hbox{${ \frac{1}{2}}$}(q-p),\qquad\vartheta^{\prime}=1-\vartheta.\] (1.4)
If it is supposed that \(\alpha_{r}\) and \(\beta_{r}\) are such that \(\kappa>0\) then \({}_{p}\Psi_{q}(z)\) is uniformly and absolutely convergent for all finite \(z\). If \(\kappa=0\), the sum in (1.1) has a finite radius of convergence equal to \(h^{-1}\), whereas for \(\kappa<0\) the sum is divergent for all nonzero values of \(z\). The parameter \(\kappa\) will be found to play a critical role in the asymptotic theory of \({}_{p}\Psi_{q}(z)\) by determining the sectors in the \(z\)-plane in which its behaviour is either exponentially large, algebraic or exponentially small in character as \(|z|\rightarrow\infty\).
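For readers who wish to experiment with particular parameter sets, the following short sketch (function name and parameter values are ours and purely illustrative) evaluates \(\kappa\), \(h\) and \(\vartheta\) from (1.4) and reports the corresponding convergence regime just described.

```python
# A small helper, sketched from (1.4), that evaluates kappa, h and vartheta
# for given parameter lists and reports the convergence regime of (1.1).
def wright_parameters(alphas, a_par, betas, b_par):
    p, q = len(alphas), len(betas)
    kappa = 1 + sum(betas) - sum(alphas)
    h = 1.0
    for al in alphas:
        h *= al ** al
    for be in betas:
        h /= be ** be
    theta = sum(a_par) - sum(b_par) + 0.5 * (q - p)
    if kappa > 0:
        regime = "entire function (converges for all finite z)"
    elif kappa == 0:
        regime = f"finite radius of convergence 1/h = {1.0 / h}"
    else:
        regime = "divergent for all nonzero z"
    return kappa, h, theta, regime

# e.g. the 1Psi1 case g(n) = Gamma(n/2 + 1/4)/Gamma(n + 3/4) of Example 3.2 below:
print(wright_parameters([0.5], [0.25], [1.0], [0.75]))
# kappa = 3/2, h = 2**(-1/2), vartheta = -1/2
```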
The determination of the asymptotic expansion of \({}_{p}\Psi_{q}(z)\) for \(|z|\rightarrow\infty\) and finite values of the parameters has a long history; for details, see [10, §2.3]. Detailed investigations were carried out by Wright [16, 17] and by Braaksma [2] for a more general class of integral functions than (1.1). We present a summary of their results related to the asymptotic expansion of \({}_{p}\Psi_{q}(z)\) for large \(|z|\) in Section 2. Our purpose here is to consider two of the expansion theorems involving the presence of exponentially small expansions valid in certain sectors of the \(z\)-plane. We demonstrate by numerical computation that a more precise understanding of the asymptotic structure of \({}_{p}\Psi_{q}(z)\) can be achieved by taking into account the Stokes phenomenon.
**2. Standard asymptotic theory for \(|z|\rightarrow\infty\)**
We first state the standard asymptotic expansion of the integral function \({}_{p}\Psi_{q}(z)\) as \(|z|\rightarrow\infty\) for \(\kappa>0\) and finite values of the parameters given in [17] and [2]; see also [11, §2.3]. To present this expansion we introduce the exponential expansion \(E_{p,q}(z)\) and the algebraic expansion \(H_{p,q}(z)\) associated with \({}_{p}\Psi_{q}(z)\).
The exponential expansion \(E_{p,q}(z)\) can be obtained from the Ford-Newsom theorem [3, 4]. A simpler derivation of this result in the case \({}_{p}\Psi_{q}(z)\) based on the Abel-Plana form of the well-known Euler-Maclaurin summation formula is given in [10, pp. 42–50]. We have the formal asymptotic sum
\[E_{p,q}(z):=Z^{\vartheta}e^{Z}\sum_{j=0}^{\infty}A_{j}Z^{-j},\qquad Z=\kappa( hz)^{1/\kappa},\] (2.1)
where the coefficients \(A_{j}\) are those appearing in the inverse factorial expansion of \(g(s)/s!\) given by
\[\frac{g(s)}{\Gamma(1+s)}=\kappa(h\kappa^{\kappa})^{s}\biggl{\{}\sum_{j=0}^{M-1 }\frac{A_{j}}{\Gamma(\kappa s+\vartheta^{\prime}+j)}+\frac{\rho_{M}(s)}{\Gamma (\kappa s+\vartheta^{\prime}+M)}\biggr{\}}.\] (2.2)
Here \(g(s)\) is defined in (1.2) with \(n\) replaced by \(s\), \(M\) is a positive integer and \(\rho_{M}(s)=O(1)\) for \(|s|\rightarrow\infty\) in \(|\arg\,s|<\pi\). The leading coefficient \(A_{0}\) is specified by
\[A_{0}=(2\pi)^{\frac{1}{2}(p-q)}\kappa^{-\frac{1}{2}-\vartheta}\prod_{r=1}^{p} \alpha_{r}^{a_{r}-\frac{1}{2}}\prod_{r=1}^{q}\beta_{r}^{\frac{1}{2}-b_{r}}.\] (2.3)
The coefficients \(A_{j}\) are independent of \(s\) and depend only on the parameters \(p\), \(q\), \(\alpha_{r}\), \(\beta_{r}\), \(a_{r}\) and \(b_{r}\). An algorithm for their evaluation is described in the appendix.
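A corresponding sketch of (2.3) for real parameters is given below (the function name and the chosen values are ours, for illustration only); as a check, for the \({}_{1}\Psi_{1}\) Mittag-Leffler case of Example 3.1 below it reproduces \(A_{0}=1/a\).

```python
# A sketch of (2.3), restricted to real parameters, for the leading coefficient A0.
import math

def leading_coefficient(alphas, a_par, betas, b_par):
    p, q = len(alphas), len(betas)
    kappa = 1 + sum(betas) - sum(alphas)
    theta = sum(a_par) - sum(b_par) + 0.5 * (q - p)
    A0 = (2 * math.pi) ** (0.5 * (p - q)) * kappa ** (-0.5 - theta)
    for al, a in zip(alphas, a_par):
        A0 *= al ** (a - 0.5)
    for be, b in zip(betas, b_par):
        A0 *= be ** (0.5 - b)
    return A0

# Mittag-Leffler case of Example 3.1: alpha=1, a=1, beta=a_ML, b=b_ML gives A0 = 1/a_ML
a_ML, b_ML = 0.75, 1.0
print(leading_coefficient([1.0], [1.0], [a_ML], [b_ML]), 1 / a_ML)  # the two values agree
```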
The algebraic expansion \(H_{p,q}(z)\) follows from the Mellin-Barnes integral representation [11, §2.4]
\[{}_{p}\Psi_{q}(z)=\frac{1}{2\pi i}\int_{-\infty i}^{\infty i}\Gamma(s)g(-s)(ze ^{\mp\pi i})^{-s}ds,\qquad|\arg(-z)|<\pi(1-\leavevmode\hbox{${\frac{ 1}{2}}$}\kappa),\] (2.4)
where the path of integration is indented near \(s=0\) to separate³ the poles of \(\Gamma(s)\) from those of \(g(-s)\) situated at
\[s=(a_{r}+k)/\alpha_{r},\qquad k=0,1,2,\dots\,\ (1\leq r\leq p).\] (2.5)
In general there will be \(p\) such sequences of simple poles though, depending on the values of \(\alpha_{r}\) and \(a_{r}\), some of these poles could be multiple poles or even ordinary points if any of the \(\Gamma(\beta_{r}s+b_{r})\) are singular there. Displacement of the contour to the right over the poles of \(g(-s)\) then yields the algebraic expansion of \({}_{p}\Psi_{q}(z)\) valid in the sector in (2.4).
If it is assumed that the parameters are such that the poles in (2.5) are all simple we obtain the algebraic expansion given by \(H_{p,q}(z)\), where
\[H_{p,q}(z):=\sum_{m=1}^{p}\alpha_{m}^{-1}z^{-a_{m}/\alpha_{m}}S_{p,q}(z;m)\] (2.6)
and \(S_{p,q}(z;m)\) denotes the formal asymptotic sum
\[S_{p,q}(z;m):=\sum_{k=0}^{\infty}\frac{(-)^{k}}{k!}\Gamma\left(\frac{k+a_{m}}{ \alpha_{m}}\right)\,\frac{\prod_{r=1}^{{}^{\prime}\,p}\Gamma(a_{r}-\alpha_{r}( k+a_{m})/\alpha_{m})}{\prod_{r=1}^{q}\Gamma(b_{r}-\beta_{r}(k+a_{m})/\alpha_{m })}z^{-k/\alpha_{m}},\] (2.7)
with the prime indicating the omission of the term corresponding to \(r=m\) in the product. This expression in (2.6) consists of (at most) \(p\) expansions each with the leading behaviour \(z^{-a_{m}/\alpha_{m}}\) (\(1\leq m\leq p\)). When the parameters \(\alpha_{r}\) and \(a_{r}\) are such that some of the poles are of higher order, the expansion (2.7) is invalid and the residues must then be evaluated according to the multiplicity of the poles concerned; this will lead to terms involving \(\log\,z\) in the algebraic expansion.
The three main expansion theorems are as follows. Throughout we let \(\epsilon\) denote an arbitrarily small positive quantity.
**Theorem 1.** _If \(0<\kappa<2\), then_
\[{}_{p}\Psi_{q}(z)\sim\left\{\begin{array}[]{lll}E_{p,q}(z)+H_{p,q}(ze^{\mp\pi i })&\leavevmode\hbox{in}&|\arg\,z|\leq\leavevmode\hbox{${\frac{1}{2}} $}\pi\kappa\\ \\ H_{p,q}(ze^{\mp\pi i})&\leavevmode\hbox{in}&\leavevmode\hbox{${\frac {1}{2}}$}\pi\kappa+\epsilon\leq|\arg\,z|\leq\pi\end{array}\right.\] (2.8)
_as \(|z|\rightarrow\infty\). The upper or lower sign in \(H_{p,q}(ze^{\mp\pi i})\) is chosen according as \(\arg\,z>0\) or \(\arg\,z<0\), respectively._
It is seen that the \(z\)-plane is divided into two sectors, with a common vertex at \(z=0\), by the rays \(\arg\,z=\pm\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa\). In the sector \(|\arg\,z|<\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa\), the asymptotic character of \({}_{p}\Psi_{q}(z)\) is exponentially large whereas in the complementary sector \(\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa<|\arg\,z|\leq\pi\), the dominant expansion of \({}_{p}\Psi_{q}(z)\) is algebraic in character. On the rays \(\arg\,z=\pm\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa\) the exponential expansion is oscillatory and is of a comparable magnitude to \(H_{p,q}(ze^{\mp\pi i})\).
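This sector structure can be summarised programmatically; the toy helper below (ours, purely illustrative) classifies a ray \(\arg z=\theta\) according to Theorem 1.

```python
# Classify a ray arg z = theta for 0 < kappa < 2, following Theorem 1.
import math

def regime(theta, kappa):
    t = abs(theta)
    if abs(t - 0.5 * math.pi * kappa) < 1e-12:
        return "anti-Stokes line: E oscillatory, comparable to H"
    if t < 0.5 * math.pi * kappa:
        return "exponentially large sector: E dominant"
    return "algebraic sector: H dominant (E subdominant)"

print(regime(0.7 * math.pi, 2 / 3))   # -> algebraic sector for Example 3.3 below
```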
**Theorem 2.** _If \(\kappa=2\) then_
\[{}_{p}\Psi_{q}(z)\sim E_{p,q}(z)+E_{p,q}(ze^{\mp 2\pi i})+H_{p,q}(ze^{\mp\pi i})\] (2.9)
_as \(|z|\to\infty\) in the sector \(|\arg\,z|\leq\pi\). The upper or lower signs are chosen according as \(\arg\,z>0\) or \(\arg\,z<0\), respectively._
The rays \(\arg\,z=\pm\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa\) now coincide with the negative real axis. It follows that \({}_{p}\Psi_{q}(z)\) is exponentially large in character as \(|z|\to\infty\) except in the neighbourhood of the negative real axis, where the algebraic expansion becomes asymptotically significant.
**Theorem 3.** _When \(\kappa>2\) we have⁴_
\[{}_{p}\Psi_{q}(z)\sim\sum_{n=-N}^{N}E_{p,q}(ze^{2\pi in})+H_{p,q}(ze^{\mp\pi i})\] (2.10)
_as \(|z|\to\infty\) in the sector \(|\arg\,z|\leq\pi\). The integer \(N\) is chosen such that it is the smallest integer satisfying \(2N+1>\leavevmode\hbox{${\frac{1}{2}}$}\kappa\) and the upper or lower is chosen according as \(\arg\,z>0\) or \(\arg\,z<0\), respectively._
_In this case the asymptotic behaviour of \({}_{p}\Psi_{q}(z)\) is exponentially large for all values of \(\arg\,z\) and, consequently, the algebraic expansion may be neglected. The sums \(E_{p,q}(ze^{2\pi in})\) are exponentially large (or oscillatory) as \(|z|\to\infty\) for values of \(\arg\,z\) satisfying \(|\arg\,z+2\pi n|\leq\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa\)._
The division of the \(z\)-plane into regions where \({}_{p}\Psi_{q}(z)\) possesses exponentially large or algebraic behaviour for large \(|z|\) is illustrated in Fig. 1. When \(0<\kappa<2\), the exponential expansion \(E_{p,q}(z)\) is still present in the sectors \(\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa<|\arg\,z|<\min\{\pi,\pi\kappa\}\), where it is subdominant. The rays \(\arg\,z=\pm\pi\kappa\) (\(0<\kappa<1\)), where \(E_{p,q}(z)\) is _maximally_ subdominant with respect to \(H_{p,q}(ze^{\mp\pi i})\), are called Stokes lines.⁵ As these rays are crossed (in the sense of increasing \(|\arg\,z|\)) the exponential expansion switches off according to Berry’s now familiar error-function smoothing law [1]; see [8] for details. The rays \(\arg\,z=\pm\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa\), where \(E_{p,q}(z)\) is oscillatory and comparable to \(H_{p,q}(ze^{\mp\pi i})\), are called anti-Stokes lines.
<figure><img src="content_image/1708.04824/x1.png"><figcaption>Figure 1: The exponentially large and algebraic sectors associated with pΨq(z) in the complex z-plane with θ=arg z when 0<κ<1. The Stokes and anti-Stokes lines are indicated.</figcaption></figure>
In view of the above interpretation of the Stokes phenomenon a more precise version of Theorem 1 is as follows:
**Theorem 4.** _When \(0<\kappa\leq 2\), then_
\[{}_{p}\Psi_{q}(z)\sim\left\{\begin{array}[]{lll}E_{p,q}(z)+H_{p,q}(ze^{\mp\pi i })&\leavevmode\hbox{in}&|\arg\,z|\leq\min\{\pi-\epsilon,\pi\kappa-\epsilon\}\\ \\ H_{p,q}(ze^{\mp\pi i})&\leavevmode\hbox{in}&\pi\kappa+\epsilon\leq|\arg\,z| \leq\pi\ \ (0<\kappa<1)\\ \\ E_{p,q}(z)+E_{p,q}(ze^{\mp 2\pi i})+H_{p,q}(ze^{\mp\pi i})&\leavevmode\hbox{in }&|\arg\,z|\leq\pi\ \ (1<\kappa\leq 2)\end{array}\right.\] (2.11)
_as \(|z|\rightarrow\infty\). The upper or lower signs are chosen according as \(\arg\,z>0\) or \(\arg\,z<0\), respectively._
We omit the expansion _on_ the Stokes lines \(\arg\,z=\pm\pi\kappa\); the details in the case \(p=1\), \(q\geq 0\) are discussed in [9]. The expansions in (2.11a) and (2.8a) were given by Wright [16, 17] in the sector \(|\arg\,z|\leq\min\{\pi,\leavevmode\hbox{${\frac{3}{2}}$}\pi\kappa-\epsilon\}\) as he did not take into account the Stokes phenomenon. Since \(E_{p,q}(z)\) is exponentially small in \(\leavevmode\hbox{${\frac{1}{2}}$}\pi\kappa<|\arg\,z|\leq\pi\), then in the sense of Poincaré, the expansion \(E_{p,q}(z)\) can be neglected and there is no inconsistency between Theorems 1 and 4. Similarly, \(E_{p,q}(ze^{-2\pi i})\) is exponentially small compared to \(E_{p,q}(z)\) in \(0\leq\arg\,z<\pi\) and there is no inconsistency between the expansions in (2.8a) and (2.11c) when \(1<\kappa<2\). However, in the vicinity of \(\arg\,z=\pi\), these last two expansions are of comparable magnitude and, for real parameters, they combine to generate a real result on this ray. A similar remark applies to \(E_{p,q}(ze^{2\pi i})\) in \(-\pi<\arg\,z\leq 0\).
The following theorem was given by Braaksma [2, p. 331].
**Theorem 5.** _If \(p=0\), so that \(g(s)\) has no poles and \(\kappa>1\), then \(H_{0,q}(z)\equiv 0\). When \(1<\kappa<2\), we have the expansion_
\[{}_{0}\Psi_{q}(z)\sim E_{0,q}(z)+E_{0,q}(ze^{\mp 2\pi i})\] (2.12)
_as \(|z|\to\infty\) in the sector \(|\arg\,z|\leq\pi\). The upper or lower sign is chosen according as \(\arg\,z>0\) or \(\arg\,z<0\), respectively. The dominant expansion \({}_{0}\Psi_{q}(z)\sim E_{0,q}(z)\) holds in the reduced sector \(|\arg\,z|\leq\pi-\epsilon\)._
It can be seen that (2.12) agrees with (2.11c) when \(H_{p,q}(z)\equiv 0\). Braaksma gave the result (2.12) valid in a sector straddling the negative real axis given by \(\pi-\delta\leq\arg\,z\leq\pi+\delta\), where \(0<\delta<\leavevmode\hbox{${\frac{1}{2}}$}\pi(1-\leavevmode\hbox{${ \frac{1}{2}}$}\kappa)\).
It is our purpose here to examine Theorems 4 and 5 in more detail by means of a series of examples. We carry out a numerical investigation to show that (2.11c) is valid when \(1<\kappa<2\) and, when \(0<\kappa<1\), that the exponential expansion \(E_{p,q}(z)\) in Theorem 4 switches off (as \(|\arg\,z|\) increases) across the Stokes lines \(\arg\,z=\pm\pi\kappa\), where \(E_{p,q}(z)\) is maximally subdominant with respect to \(H_{p,q}(ze^{\mp\pi i})\). Similarly in Theorem 5, we show that when \(1<\kappa<2\) the expansions \(E_{p,q}(ze^{\mp 2\pi i})\) switch on (as \(|\arg\,z|\) increases) across the Stokes lines \(\arg\,z=\pm\pi(1-\frac{1}{2}\kappa)\), where they are maximally subdominant with respect to \(E_{p,q}(z)\). Thus, although the expansions in (2.11a) and (2.12) are valid asymptotic descriptions, more accurate evaluation results from taking into account the Stokes phenomenon as the above-mentioned rays are crossed.
**3. Numerical examples**
**Example 3.1** Our first example is the Mittag-Leffler function \({\cal E}_{a,b}(z)\) defined by
\[{\cal E}_{a,b}(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(an+b)},\]
where we consider \(a>0\). This corresponds to a case of \({}_{1}\Psi_{1}(z)\) with the parameters \(\kappa=a\), \(h=a^{-a}\), \(\vartheta=1-b\) and \(g(s)=\Gamma(1+s)/\Gamma(as+b)\). Then from (2.1)–(2.3), we have \(Z=z^{1/a}\), \(A_{0}=1/a\) with \(A_{j}=0\) for \(j\geq 1\). The exponential and algebraic expansions are from (2.1), (2.6) and (2.7) given by
\[E_{1,1}(z)=\frac{1}{a}z^{(1-b)/a}\exp[z^{1/a}],\qquad H_{1,1}(ze^{\mp\pi i})=- \sum_{k=1}^{\infty}\frac{z^{-k}}{\Gamma(b-ak)}.\]
Then, from Theorems 2, 3 and 4 we obtain the following asymptotic expansions⁶ as \(|z|\to\infty\) (a short numerical check of case (i) is sketched below, after case (iv)).
(i) When \(0<a<1\)
\[{\cal E}_{a,b}(z)\sim\left\{\begin{array}[]{ll}{\frac{1}{a}}z^{(1 -b)/a}\exp[z^{1/a}]-\sum_{k=1}^{\infty}{\frac{z^{-k}}{\Gamma(b-ak )}}&(|\arg\,z|\leq\pi a-\epsilon)\\ -{\sum_{k=1}^{\infty}\frac{z^{-k}}{\Gamma(b-ak)}}&(\pi a+\epsilon \leq\arg\,z\leq\pi);\end{array}\right.\] (3.1)
(ii) when \(1<a<2\)
\[{\cal E}_{a,b}(z)\sim\left\{\begin{array}[]{ll}{\frac{1}{a}}z^{(1 -b)/a}\exp[z^{1/a}]-\sum_{k=1}^{\infty}{\frac{z^{-k}}{\Gamma(b-ak )}}&(|\arg\,z|\leq\pi-\epsilon)\\ {\frac{1}{a}}z^{(1-b)/a}\exp[z^{1/a}]+{\frac{1}{a}}( ze^{\mp 2\pi i})^{(1-b)/a}\exp[(ze^{\mp 2\pi i})^{1/a}]&\\ -{\sum_{k=1}^{\infty}\frac{z^{-k}}{\Gamma(b-ak)}}&(|\arg\,z|\leq \pi);\end{array}\right.\] (3.2)
(iii) when \(a=2\)
\[{\cal E}_{a,b}(z)\sim\frac{1}{a}z^{(1-b)/a}\exp[z^{1/a}]+\frac{1}{a}(ze^{\mp 2\pi i})^{(1-b)/a}\exp[(ze^{\mp 2\pi i})^{1/a}]-\sum_{k=1}^{\infty}\frac{z^{-k}}{\Gamma(b-ak)}\qquad(|\arg\,z|\leq\pi);\] (3.3)
(iv) when \(a>2\)
\[{\cal E}_{a,b}(z)\sim\frac{1}{a}\sum_{n=-N}^{N}(ze^{2\pi in})^{(1-b)/a}\exp[z^ {1/a}e^{2\pi in/a}]-\sum_{k=1}^{\infty}\frac{z^{-k}}{\Gamma(b-ak)}\qquad(|\arg \,z|\leq\pi),\] (3.4)
where \(N\) is the smallest integer⁷ satisfying \(2N+1>\leavevmode\hbox{${\frac{1}{2}}$}a\). The upper or lower signs are taken according as \(\arg\,z>0\) or \(\arg\,z<0\), respectively.
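As a quick numerical sanity check of (3.1a) on the positive real axis (well inside the exponentially large sector, where no Stokes subtlety arises), one may compare direct summation of the defining series with the expansion; the sketch below is ours, with illustrative parameter values.

```python
# Compare the Mittag-Leffler series with the expansion (3.1a) for real z = x > 0.
import math

def mittag_leffler(a, b, x, nmax=400):
    # direct summation of the defining series; terms via lgamma to avoid overflow
    return sum(math.exp(n * math.log(x) - math.lgamma(a * n + b))
               for n in range(nmax))

def asymptotic(a, b, x, kmax=6):
    lead = x ** ((1 - b) / a) / a * math.exp(x ** (1 / a))
    alg = 0.0
    for k in range(1, kmax + 1):
        g = b - a * k
        if g <= 0 and abs(g - round(g)) < 1e-12:
            continue                 # 1/Gamma vanishes at nonpositive integers
        alg += x ** (-k) / math.gamma(g)
    return lead - alg

a, b, x = 0.75, 1.0, 20.0            # illustrative values with 0 < a < 1
exact, approx = mittag_leffler(a, b, x), asymptotic(a, b, x)
print(exact, approx, abs(exact - approx) / exact)
```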
When \(0<a<1\), it is established in [6] (see also [15]) that the exponential term \(a^{-1}\exp[z^{1/a}]\) in (3.1a) is multiplied by the approximate factor involving the error function
\[\frac{1}{2}+\frac{1}{2}\leavevmode\hbox{erf}\biggl{[}\frac{(\pi a\mp\theta)}{a }\,\sqrt{\frac{|z|}{2}}\biggr{]}\]
as \(|z|\to\infty\) in the neighbourhood of the Stokes lines \(\theta=\arg\,z=\pm\pi a\), respectively, where it is maximally subdominant. This shows that the above exponential term indeed switches off in the familiar manner [1] as one crosses the Stokes lines in the sense of increasing \(|\theta|\) and that consequently the expansion in (3.1a) is valid in \(|\arg\,z|\leq\pi a-\epsilon\).
On the negative real axis we put \(z=-x\), with \(x>0\). From (3.2), we have when \(1<a<2\)
\[{\cal E}_{a,b}(-x)\sim\frac{1}{a}(xe^{\pi i})^{(1-b)/a}\exp[x^{1/a}e^{\pi i/a}]+\frac{1}{a}(xe^{-\pi i})^{(1-b)/a}\exp[x^{1/a}e^{-\pi i/a}]-\sum_{k=1}^{\infty}\frac{(-x)^{-k}}{\Gamma(b-ak)}\]
\[=F_{a,b}(x)-\sum_{k=1}^{\infty}\frac{(-x)^{-k}}{\Gamma(b-ak)},\] (3.5)
as \(x\to+\infty\), where the two conjugate exponential terms combine into the real quantity
\[F_{a,b}(x)=\frac{2}{a}\,x^{(1-b)/a}\exp[x^{1/a}\cos(\pi/a)]\,\cos\Bigl[x^{1/a}\sin(\pi/a)+\frac{\pi(1-b)}{a}\Bigr].\] (3.6)
The presence of the additional exponential expansion \(E_{1,1}(ze^{\mp 2\pi i})\) in (3.2) is seen to be essential in order to obtain a real result⁸ (when \(b\) is real) on the negative \(z\)-axis.
**Example 3.2** Our second example is the function
\[F_{1}(z)=\sum_{n=0}^{\infty}\frac{\Gamma(\leavevmode\hbox{${\frac{1} {2}}$}n+a)}{\Gamma(n+b)}\,\frac{z^{n}}{n!}\qquad(\kappa=\leavevmode\hbox{${ \frac{3}{2}}$}),\] (3.7)
where \(a\) and \(b\) are finite parameters, which corresponds to a case of \({}_{1}\Psi_{1}(z)\). The exponential expansion is
\[E_{1,1}(z)=Z^{\vartheta}e^{Z}\sum_{j=0}^{\infty}A_{j}Z^{-j},\qquad Z= \leavevmode\hbox{${\frac{3}{2}}$}(hz)^{2/3},\]
where, from (2.3),
\[A_{0}=(\leavevmode\hbox{${\frac{2}{3}}$})^{\vartheta+1/2}( \leavevmode\hbox{${\frac{1}{2}}$})^{a-1/2}\]
and \(\vartheta=a-b\), \(h=2^{-1/2}\). An algorithm for the computation of the normalised coefficients \(c_{j}=A_{j}/A_{0}\) is described in the appendix. In our computations we have employed \(0\leq j\leq 40\); the first ten coefficients \(c_{j}\) for \(F_{1}(z)\) are listed in Table 1 for the particular case \(a=\leavevmode\hbox{${\frac{1}{4}}$}\) and \(b=\leavevmode\hbox{${\frac{3}{4}}$}\). From (2.6), the algebraic expansion is
\[H_{1,1}(ze^{\mp\pi i})=2\sum_{k=0}^{\infty}\frac{(-)^{k}\Gamma(2k+2a)}{k! \Gamma(b-2a-2k)}\,(ze^{\mp\pi i})^{-2k-2a}.\]
j | cj | j | cj
---|---|---|---
1 | 61192 | 2 | 2316173728
3 | 2278328542467328 | 4 | 4460450942532614907904
5 | 303756381993056262062317568 | 6 | 1627218162507876057213895789838336
7 | 1800908305977032402151385067991648960512 | 8 | 18891994313891085902264752127464435172803346432
9 | 255994479105393966121728293753676258543978604182634496 | 10 | 867263223408091756761370100995751411683280887784006131646464
Table 1: The normalised coefficients c_j for 1≤j≤10 (with c_0=1) for the sum (3.7) when a=1/4 and b=3/4.
It is clearly sufficient for real parameters to consider values of \(z\) satisfying \(0\leq\arg\,z\leq\pi\) and this we do throughout this section. From Theorem 4, we obtain
\[F_{1}(z)=E_{1,1}(z)+E_{1,1}(ze^{-2\pi i})+H_{1,1}(ze^{-\pi i})\]
as \(|z|\to\infty\) in \(0\leq\arg\,z\leq\pi\), from which we see that \(F_{1}(z)\) is exponentially large in the sector \(|\arg\,z|<3\pi/4\). We have computed \(F_{1}(z)\) for a value of \(|z|\) and varying \(\theta=\arg\,z\) in the range \(0.7\pi\leq\theta\leq\pi\). In Table 2 we show the absolute values of
θ/π | |R_1(z)| | |E_{1,1}(ze^{-2πi})|
---|---|---
1.00 | 6.283513×10^{-7} | 6.283515×10^{-7}
0.95 | 6.605074×10^{-8} | 6.605098×10^{-8}
0.90 | 8.190985×10^{-9} | 8.190854×10^{-9}
0.85 | 1.226317×10^{-9} | 1.225981×10^{-9}
0.80 | 2.263874×10^{-10} | 2.261409×10^{-10}
0.75 | 5.240704×10^{-11} | 5.236698×10^{-11}
0.70 | 1.573812×10^{-11} | 1.546959×10^{-11}
Table 2: Values of the absolute error R_1(z) in the computation of F_1(z) using an optimal truncation of both E_{1,1}(z) and H_{1,1}(ze^{-πi}), compared with |E_{1,1}(ze^{-2πi})|, as a function of θ for z=100e^{iθ}, a=1/4 and b=3/4.
\[{\cal R}_{1}(z)\equiv F_{1}(z)-E_{1,1}^{opt}(z)-H_{1,1}^{opt}(ze^{-\pi i})\]
compared with \(|E_{1,1}(ze^{-2\pi i})|\) (which was computed for \(0\leq j\leq 5\)), where the superscript ‘opt’ denotes that both the asymptotic sums \(E_{1,1}(z)\) and \(H_{1,1}(ze^{-\pi i})\) are truncated at their respective optimal truncation points. The results clearly confirm that (i) the exponential expansion \(E_{1,1}(z)\) is present in the algebraic sector \(\leavevmode\hbox{${\frac{3}{4}}$}\pi<\arg\,z\leq\pi\) and (ii) the subdominant expansion \(E_{1,1}(ze^{-2\pi i})\) is present in (at least) the sector \(0.7\pi\leq\theta\leq\pi\). It was not possible to penetrate very far into the exponentially large sector \(|\arg\,z|<\leavevmode\hbox{${\frac{3}{4}}$}\pi\), since the error in the computation of \(E_{1,1}(z)\) — even at optimal truncation — swamps the algebraic and subdominant exponential expansions. Such a computation would require a hyperasymptotic evaluation of the dominant expansion on the lines of that described for the generalised Bessel function \({}_{0}\Psi_{1}(z)\) in Wong and Zhao [14].
**Example 3.3** Consider the function
\[F_{2}(z)=\sum_{n=0}^{\infty}\frac{\Gamma(\leavevmode\hbox{${\frac{2} {3}}$}n+a\,)z^{n}}{\Gamma(\leavevmode\hbox{${\frac{1}{3}}$}n+b)\,n!} \qquad(\kappa=\leavevmode\hbox{${\frac{2}{3}}$}).\]
According to Theorem 4, the expansion of \(F_{2}(z)\) for large \(|z|\) is
\[F_{2}(z)\sim E_{1,1}(z)+H_{1,1}(ze^{-\pi i})\qquad(0\leq\arg\,z\leq\leavevmode \hbox{${\frac{2}{3}}$}\pi-\epsilon).\]
The algebraic expansion is, from (2.6), given by
\[H_{1,1}(ze^{-\pi i})=\frac{3}{2}\sum_{k=0}^{\infty}\frac{(-)^{k}\Gamma( \leavevmode\hbox{${\frac{3}{2}}$}k+\leavevmode\hbox{${ \frac{3}{2}}$}a)}{k!\,\Gamma(b-\leavevmode\hbox{${\frac{1}{2}}$}a- \leavevmode\hbox{${\frac{1}{2}}$}k)}(ze^{-\pi i})^{-3(k+a)/2}\]
and the exponential expansion \(E_{1,1}(z)\) is obtained from (2.1) with the parameters \(\vartheta=a-b\), \(h=(\leavevmode\hbox{${\frac{2}{3}}$})^{\frac{2}{3}}(\leavevmode\hbox {${\frac{1}{3}}$})^{-\frac{1}{3}}\) and \(A_{0}=\kappa^{-\frac{1}{2}-\vartheta}(\leavevmode\hbox{${\frac{2}{3} }$})^{a-\frac{1}{2}}(\leavevmode\hbox{${\frac{1}{3}}$})^{\frac{1}{2} -b}\). The coefficients \(A_{j}\) are obtained as indicated in Example 3.2.
The function \(F_{2}(z)\) is exponentially large in the sector \(|\arg\,z|<\leavevmode\hbox{${\frac{1}{3}}$}\pi\), whereas in the sector \(\leavevmode\hbox{${\frac{1}{3}}$}\pi<\arg\,z\leq\pi\) the algebraic expansion \(H_{1,1}(ze^{-\pi i})\) is dominant. The expansion \(E_{1,1}(z)\) is maximally subdominant with respect to \(H_{1,1}(ze^{-\pi i})\) on the ray \(\arg\,z=\pi\kappa=\leavevmode\hbox{${\frac{2}{3}}$}\pi\). Consequently, as \(\arg\,z\) increases, the exponential expansion \(E_{1,1}(z)\) should switch off across the Stokes line \(\arg\,z=\leavevmode\hbox{${\frac{2}{3}}$}\pi\), to leave the algebraic expansion \(H_{1,1}(ze^{-\pi i})\) in the sector \(\leavevmode\hbox{${\frac{2}{3}}$}\pi<\arg\,z\leq\pi\). To demonstrate this, we define the Stokes multiplier \(S(\theta)\) by
\[F_{2}(z)=H_{1,1}^{opt}(ze^{-\pi i})+A_{0}Z^{\vartheta}e^{Z}\,S(\theta).\]
In Table 3 we show the absolute values of \({\cal R}_{2}(z):=F_{2}(z)-H_{1,1}^{opt}(ze^{-\pi i})\) and of the leading term of \(E_{1,1}(z)\) as a function of \(\theta=\arg\,z\). We also show the values⁹ of Re(\(S(\theta)\)) in the neighbourhood of the Stokes line \(\arg\,z=\leavevmode\hbox{${\frac{2}{3}}$}\pi\) for the case \(z=10e^{i\theta}\) and \(a=\leavevmode\hbox{${\frac{1}{3}}$}\), \(b=\leavevmode\hbox{${\frac{1}{4}}$}\). It is seen that the Stokes multiplier has the value \(\simeq 1\) when \(\theta=\leavevmode\hbox{${\frac{1}{2}}$}\pi\) (before the transition commences) and \(\simeq 0\) when \(\theta=\leavevmode\hbox{${\frac{3}{4}}$}\pi\) (after the transition is almost completed).
θ/π | |R_2(z)| | |A_0 Z^ϑ e^Z| | Re(S(θ))
---|---|---|---
0.50 | 4.4964×10^{-8} | 4.4947×10^{-8} | 1.0000
0.55 | 1.2980×10^{-9} | 1.3005×10^{-9} | 0.9981
0.60 | 1.1196×10^{-10} | 1.1848×10^{-10} | 0.9450
0.62 | 5.6361×10^{-11} | 6.4685×10^{-11} | 0.8713
0.64 | 3.2641×10^{-11} | 4.3607×10^{-11} | 0.7485
0.66 | 1.9737×10^{-11} | 3.6426×10^{-11} | 0.5418
0.68 | 1.3545×10^{-11} | 3.7762×10^{-11} | 0.3600
0.70 | 9.9952×10^{-12} | 4.8568×10^{-11} | 0.2058
0.72 | 9.1973×10^{-12} | 7.7328×10^{-11} | 0.1189
0.75 | 5.6314×10^{-12} | 2.2959×10^{-10} | 0.0237
Table 3: Values of the absolute error R_2(z)≡F_2(z)−H^{opt}_{1,1}(ze^{-πi}) in the computation of F_2(z) using an optimal truncation of the algebraic expansion, compared with the leading term of |E_{1,1}(z)|, as a function of θ for z=10e^{iθ}, a=1/3 and b=1/4. The final column shows the real part of the computed Stokes multiplier S(θ) for transition across the ray arg z=2π/3.
**Example 3.4** Our final example is the function of the type \({}_{0}\Psi_{2}(z)\) given by
\[F_{3}(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{n!\Gamma(cn+a)\Gamma(cn+b)}\qquad( \kappa=1+2c),\] (3.8)
where \(0<c\leq\leavevmode\hbox{${\frac{1}{2}}$}\). Since \(p=0\), the algebraic expansion \(H_{0,2}(z)\equiv 0\). From Theorem 5 we obtain the asymptotic expansion
\[F_{3}(z)\sim E_{0,2}(z)+E_{0,2}(ze^{\mp 2\pi i})\qquad(|\arg\,z|\leq\pi),\]
where the associated parameters are \(\vartheta=1-a-b\), \(h=c^{-2c}\) and
\[A_{0}=\frac{c^{\vartheta}\kappa^{-\vartheta-1/2}}{2\pi}~{}.\]
The function \(F_{3}(z)\) is exponentially large in the sector \(|\arg\,z|<\leavevmode\hbox{${\frac{1}{2}}$}\pi(1+2c)\). The other expansion \(E_{0,2}(ze^{-2\pi i})\) is subdominant in the upper half-plane but combines with \(E_{0,2}(z)\) on the negative real axis to produce (for real \(a\) and \(b\)) a real expansion.
Since the exponential factors associated with \(E_{0,2}(z)\) and \(E_{0,2}(ze^{-2\pi i})\) are \(\exp[|Z|e^{i\theta/\kappa}]\) and \(\exp[|Z|e^{i(\theta-2\pi)/\kappa}]\), where \(\theta=\arg\,z\) and we recall that \(Z\) is defined in (2.1), the greatest difference between these factors occurs when
\[\sin\biggl{(}\frac{\theta}{\kappa}\biggr{)}=\sin\biggl{(}\frac{\theta-2\pi}{ \kappa}\biggr{)};\]
that is, when \(\theta=\leavevmode\hbox{${\frac{1}{2}}$}\pi(2-\kappa)\). Consequently, as \(\arg\,z\) increases in the upper half-plane, we expect that the expansion \(E_{0,2}(ze^{-2\pi i})\) should switch on across the Stokes line \(\arg\,z=\leavevmode\hbox{${\frac{1}{2}}$}\pi(2-\kappa)\); similar considerations apply to \(E_{0,2}(ze^{2\pi i})\) and the Stokes line \(\arg\,z=-\leavevmode\hbox{${\frac{1}{2}}$}\pi(2-\kappa)\) in the lower half-plane.
To demonstrate the correctness of this claim, we choose \(c=\leavevmode\hbox{${\frac{1}{10}}$}\) (so that \(\kappa=\leavevmode\hbox{${\frac{6}{5}}$}\)) and \(a=\leavevmode\hbox{${\frac{1}{4}}$}\), \(b=\leavevmode\hbox{${\frac{3}{4}}$}\). The function \(F_{3}(z)\) is therefore exponentially large in the sector \(|\arg\,z|<\leavevmode\hbox{${\frac{3}{5}}$}\pi\) and the Stokes line in the upper half-plane is \(\arg\,z=\leavevmode\hbox{${\frac{2}{5}}$}\pi\). We have chosen \(a-b\) to have a half-integer value for a very specific reason. The more detailed treatment in [8] shows that there is a _third_ (_subdominant_) _exponential series_ present in the expansion of \(F_{3}(z)\) given by
\[2\cos\pi(a-b)\,X^{\vartheta}e^{-X}\sum_{j=0}^{\infty}A_{j}(-X)^{-j},\qquad X= \kappa(hze^{-\pi i})^{1/\kappa}.\]
Our present choice of \(a\) and \(b\) therefore eliminates this third expansion and enables us to deal with a case comprising only two exponential expansions.
In Table 4, we show for \(|z|=20\) and varying \(\theta=\arg\,z\) the values of \(|F_{3}(z)-E_{0,2}^{opt}(z)|\) and \(|E_{0,2}(ze^{-2\pi i})|\) together with the real part of the Stokes multiplier \(S(\theta)\) defined by
\[F_{3}(z)=E_{0,2}^{opt}(z)+A_{0}(Ze^{-2\pi i/\kappa})^{\vartheta}\,\exp[Ze^{-2 \pi i/\kappa}]\,S(\theta).\]
The results clearly demonstrate the switching-on of the subdominant expansion \(E_{0,2}(ze^{-2\pi i})\) across the Stokes line \(\arg\,z=\leavevmode\hbox{${\frac{2}{5}}$}\pi\) as \(\arg\,z\) increases in the upper half-plane.
θ/π | |R_3(z)| | |E_{0,2}(ze^{-2πi})| | Re(S(θ))
---|---|---|---
0.20 | 7.231938×10^{-4} | 1.452127×10^{-1} | 0.0020
0.25 | 2.204854×10^{-4} | 8.898995×10^{-3} | 0.0184
0.30 | 5.082653×10^{-5} | 5.720603×10^{-4} | 0.0797
0.35 | 9.416276×10^{-6} | 4.042959×10^{-5} | 0.2230
0.40 | 1.502207×10^{-6} | 3.287009×10^{-6} | 0.4477
0.45 | 2.239289×10^{-7} | 3.209167×10^{-7} | 0.6893
0.50 | 3.430029×10^{-8} | 3.915246×10^{-8} | 0.8679
0.55 | 5.977355×10^{-9} | 6.187722×10^{-9} | 0.9575
0.60 | 1.301304×10^{-9} | 1.307416×10^{-9} | 0.9862
1.00 | 1.307416×10^{-9} | 1.307416×10^{-9} | 0.9908
Table 4: Values of the absolute error R_3(z)≡F_3(z)−E^{opt}_{0,2}(z) in the computation of F_3(z) using an optimal truncation of E_{0,2}(z), compared with |E_{0,2}(ze^{-2πi})|, as a function of θ for z=20e^{iθ}, a=1/4 and b=3/4. The final column shows the real part of the computed Stokes multiplier S(θ) for transition across the ray arg z=2π/5.
**Appendix: An algorithm for the computation of the coefficients \(c_{j}=A_{j}/A_{0}\)**
We describe an algorithm for the computation of the normalised coefficients \(c_{j}=A_{j}/A_{0}\) appearing in the exponential expansion \(E_{p,q}(z)\) in (2.1). Methods of computing these coefficients by recursion in the case \(\alpha_{r}=\beta_{r}=1\) have been given by Riney [12] and Wright [18]; see [11, Section 2.2.2] for details. Here we describe an algebraic method for arbitrary \(\alpha_{r}>0\) and \(\beta_{r}>0\).
The inverse factorial expansion (2.2) can be re-written as
\[\frac{g(s)\Gamma(\kappa s+\vartheta^{\prime})}{\Gamma(1+s)}=\kappa A_{0}(h \kappa^{\kappa})^{s}\biggl{\{}\sum_{j=0}^{M-1}\frac{c_{j}}{(\kappa s+\vartheta ^{\prime})_{j}}+\frac{O(1)}{(\kappa s+\vartheta^{\prime})_{M}}\biggr{\}}\] (A.1)
for \(|s|\to\infty\) uniformly in \(|\arg\,s|\leq\pi-\epsilon\), where \(g(s)\) is defined in (1.2) with \(n\) replaced by \(s\). Introduction of the scaled gamma function \(\Gamma^{*}(z)=\Gamma(z)(2\pi)^{-\frac{1}{2}}e^{z}z^{\frac{1}{2}-z}\) leads to the representation
\[\Gamma(\alpha s+a)=(2\pi)^{\frac{1}{2}}e^{-\alpha s}(\alpha s)^{\alpha s+a- \frac{1}{2}}\,{\bf e}(\alpha s;a)\Gamma^{*}(\alpha s+a),\]
where
\[{\bf e}(\alpha s;a):=e^{-a}\biggl{(}1+\frac{a}{\alpha s}\biggr{)}^{\alpha s+a- \frac{1}{2}}=\exp\,\left[(\alpha s+a-\leavevmode\hbox{${\frac{1}{2}} $})\log\,\left(1+\frac{a}{\alpha s}\right)-a\right].\]
Then, after some routine algebra we find that the left-hand side of (A.1) can be written as
\[\frac{g(s)\Gamma(\kappa s+\vartheta^{\prime})}{\Gamma(1+s)}=\kappa A_{0}(h \kappa^{\kappa})^{s}\,R_{p,q}(s)\,\Upsilon_{p,q}(s),\] (A.2)
where
\[\Upsilon_{p,q}(s):=\frac{\prod_{r=1}^{p}\Gamma^{*}(\alpha_{r}s+a_{r})}{\prod_{ r=1}^{q}\Gamma^{*}(\beta_{r}s+b_{r})}\,\frac{\Gamma^{*}(\kappa s+\vartheta^{ \prime})}{\Gamma^{*}(1+s)},\qquad R_{p,q}(s):=\frac{\prod_{r=1}^{p}e(\alpha_{r }s;a_{r})}{\prod_{r=1}^{q}e(\beta_{r}s;b_{r})}\,\frac{e(\kappa s;\vartheta^{ \prime})}{e(s;1)}.\]
Substitution of (A.2) in (A.1) then yields the inverse factorial expansion in the form
\[R_{p,q}(s)\,\Upsilon_{p,q}(s)=\sum_{j=0}^{M-1}\frac{c_{j}}{(\kappa s+\vartheta ^{\prime})_{j}}+\frac{O(1)}{(\kappa s+\vartheta^{\prime})_{M}}\] (A.3)
as \(|s|\to\infty\) in \(|\arg\,s|\leq\pi-\epsilon\).
We now expand \(R_{p,q}(s)\) and \(\Upsilon_{p,q}(s)\) for \(s\to+\infty\) making use of the well-known expansion (see, for example, [11, p. 71])
\[\Gamma^{*}(z)\sim\sum_{k=0}^{\infty}(-)^{k}\gamma_{k}z^{-k}\qquad(|z| \rightarrow\infty;\ |\arg\,z|\leq\pi-\epsilon),\]
where \(\gamma_{k}\) are the Stirling coefficients, with
\[\gamma_{0}=1,\quad\gamma_{1}=-\leavevmode\hbox{${\frac{1}{12}}$}, \quad\gamma_{2}=\leavevmode\hbox{${\frac{1}{288}}$},\quad\gamma_{3}= \leavevmode\hbox{${\frac{139}{51840}}$},\quad\gamma_{4}=-\leavevmode \hbox{${\frac{571}{2488320}}$},\ldots\ .\]
Then we find
\[\Gamma^{*}(\alpha s+a)=1-\frac{\gamma_{1}}{\alpha s}+O(s^{-2}),\qquad e(\alpha s ;a)=1+\frac{a(a-1)}{2\alpha s}+O(s^{-2}),\]
whence
\[R_{p,q}(s)=1+\frac{{\cal A}}{2s}+O(s^{-2}),\qquad\Upsilon_{p,q}(s)=1+\frac{{ \cal B}}{12s}+O(s^{-2}),\]
where we have defined the quantities \({\cal A}\) and \({\cal B}\) by
\[{\cal A}=\sum_{r=1}^{p}\frac{a_{r}(a_{r}-1)}{\alpha_{r}}-\sum_{r=1}^{q}\frac{b _{r}(b_{r}-1)}{\beta_{r}}-\frac{\vartheta}{\kappa}(1-\vartheta),\qquad{\cal B} =\sum_{r=1}^{p}\frac{1}{\alpha_{r}}-\sum_{r=1}^{q}\frac{1}{\beta_{r}}+\frac{1} {\kappa}-1.\]
Upon equating coefficients of \(s^{-1}\) in (A.3) we then obtain
\[c_{1}=\leavevmode\hbox{${\frac{1}{2}}$}\kappa({\cal A}+\leavevmode \hbox{${\frac{1}{6}}$}{\cal B}).\] (A.4)
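For instance, evaluated exactly with rational arithmetic for the parameters of Example 3.2 (\(\alpha=\frac{1}{2}\), \(a=\frac{1}{4}\), \(\beta=1\), \(b=\frac{3}{4}\), so \(\kappa=\frac{3}{2}\) and \(\vartheta=-\frac{1}{2}\)), formula (A.4) gives \(c_{1}=61/192\); a minimal sketch follows, with the quantities \({\cal A}\) and \({\cal B}\) taken from the expressions below.

```python
# Evaluate c1 = (kappa/2)(A + B/6) from (A.4) with exact rational arithmetic.
from fractions import Fraction as F

alphas, a_par = [F(1, 2)], [F(1, 4)]     # parameters of Example 3.2
betas,  b_par = [F(1)],    [F(3, 4)]

p, q = len(alphas), len(betas)
kappa = 1 + sum(betas) - sum(alphas)
theta = sum(a_par) - sum(b_par) + F(q - p, 2)

A = (sum(a * (a - 1) / al for a, al in zip(a_par, alphas))
     - sum(b * (b - 1) / be for b, be in zip(b_par, betas))
     - theta / kappa * (1 - theta))
B = sum(1 / al for al in alphas) - sum(1 / be for be in betas) + 1 / kappa - 1

c1 = kappa / 2 * (A + B / 6)
print(c1)   # -> 61/192
```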
The higher coefficients are obtained by continuation of this expansion process in inverse powers of \(s\). We write the product on the left-hand side of (A.3) as an expansion in inverse powers of \(\kappa s\) in the form
\[R_{p,q}(s)\Upsilon_{p,q}(s)=1+\sum_{j=1}^{M-1}\frac{C_{j}}{(\kappa s)^{j}}+O(s ^{-M})\] (A.5)
as \(s\to+\infty\), where the coefficients \(C_{j}\) are determined with the aid of _Mathematica_. From the expansion of the ratio of two gamma functions in [5, (5.11.13)] we obtain
\[\frac{1}{(\kappa s+\vartheta^{\prime})_{j}}=\frac{1}{(\kappa s)^{j}}\biggl{\{}\sum_{k=0}^{M-1}\frac{(-)^{k}(j)_{k}}{(\kappa s)^{k}k!}\,B_{k}^{(1-j)}(\vartheta^{\prime})+O(s^{-M})\biggr{\}},\]
where \(B_{k}^{(\sigma)}(x)\) are the generalised Bernoulli polynomials defined by
\[\biggl{(}\frac{t}{e^{t}-1}\biggr{)}^{\sigma}e^{xt}=\sum_{k=0}^{\infty}\frac{B_ {k}^{(\sigma)}(x)}{k!}\,t^{k}\qquad(|t|<2\pi).\]
Here we have \(\sigma=1-j\leq 0\) and \(B_{0}^{(\sigma)}(x)=1\).
Then the right-hand side of (A.3) as \(s\to+\infty\) becomes
\[1+\sum_{j=1}^{M-1}\frac{c_{j}}{(\kappa s+\vartheta^{\prime})_{j}}+O(s^{-M})=1+ \sum_{j=1}^{M-1}\frac{c_{j}}{(\kappa s)^{j}}\sum_{k=0}^{M-1}\frac{(-)^{k}(j)_{ k}}{(\kappa s)^{k}k!}\,B_{k}^{(1-j)}(\vartheta^{\prime})+O(s^{-M})\]
\[=1+\sum_{j=1}^{M-1}\frac{D_{j}}{(\kappa s)^{j}}+O(s^{-M})\] (A.6)
with
\[D_{j}=\sum_{k=0}^{j-1}(-)^{k}\biggl{(}\!\!\!\begin{array}[]{c}j-1\\ k\end{array}\!\!\!\biggr{)}c_{j-k}\,B_{k}^{(k-j+1)}(\vartheta^{\prime}),\]
where we have made the change in index \(j+k\to j\) and used ‘triangular’ summation (see [13, p. 58]). Substituting (A.5) and (A.6) into (A.3) and equating the coefficients of like powers of \(\kappa s\), we then find \(C_{j}=D_{j}\) for \(1\leq j\leq M-1\), whence
\[c_{j}=C_{j}-\sum_{k=1}^{j-1}(-)^{k}\biggl{(}\!\!\!\begin{array}[]{c}j-1\\ k\end{array}\!\!\!\biggr{)}c_{j-k}\,B_{k}^{(k-j+1)}(\vartheta^{\prime}).\]
Thus we find
\[c_{1} = C_{1},\]
\[c_{2} = C_{2}+c_{1}B_{1}^{(0)}(\vartheta^{\prime}),\]
\[c_{3} = C_{3}+2c_{2}B_{1}^{(-1)}(\vartheta^{\prime})-c_{1}B_{2}^{(0)}(\vartheta^{\prime}),\ldots\]
and so on, from which the coefficients \(c_{j}\) can be obtained recursively. With the aid of _Mathematica_ this procedure is found to work well in specific cases when the various parameters have numerical values, where up to a maximum of 100 coefficients have been so calculated.
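The procedure lends itself to automation. The following is a minimal SymPy sketch of the recursion (the computations referred to above were performed in _Mathematica_); the helper names gen_bernoulli and c_coeffs are illustrative, and the generalised Bernoulli polynomials are read off directly from the generating function quoted above.

```python
# Minimal SymPy sketch of the recursion for the coefficients c_j (not the Mathematica
# code used above). B_k^{(sigma)}(x) is obtained from its generating function
# (t/(e^t - 1))^sigma e^{xt}; gen_bernoulli and c_coeffs are illustrative helper names.
import sympy as sp

t = sp.symbols('t')

def gen_bernoulli(k, sigma, x):
    """Generalised Bernoulli polynomial B_k^{(sigma)}(x) via a series expansion."""
    gf = (t / (sp.exp(t) - 1))**sigma * sp.exp(x * t)
    return sp.factorial(k) * sp.series(gf, t, 0, k + 1).removeO().coeff(t, k)

def c_coeffs(C, theta_prime, M):
    """c_1,...,c_{M-1} from the C_j via c_j = C_j - sum_k (-1)^k binom(j-1,k) c_{j-k} B_k^{(k-j+1)}."""
    c = {1: C[1]}
    for j in range(2, M):
        corr = sum((-1)**k * sp.binomial(j - 1, k) * c[j - k]
                   * gen_bernoulli(k, k - j + 1, theta_prime) for k in range(1, j))
        c[j] = sp.expand(C[j] - corr)
    return [c[j] for j in range(1, M)]

# Symbolic check of the first few coefficients:
vth = sp.Symbol('vartheta_prime')
C = {j: sp.Symbol(f'C_{j}') for j in range(1, 4)}
print(c_coeffs(C, vth, 4))      # [C_1, C_2 + C_1*vartheta_prime, C_3 + ...]
```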
## References
* [1] M.V. Berry, Uniform asymptotic smoothing of Stokes’s discontinuities, Proc. Roy. Soc. London **A422** (1989) 7–21.
* [2] B.L.J. Braaksma, Asymptotic expansions and analytic continuations for a class of Barnes-integrals, Compos. Math. **15** (1963) 239–341.
* [3] W.B. Ford, _The Asymptotic Developments of Functions Defined by Maclaurin Series_, University of Michigan, Science Series, Vol. II, 1936.
* [4] C.V. Newsom, On the character of certain entire functions in distant portions of the plane, Amer. J. Math. **60** (1938) 561–572.
* [5] F.W.J. Olver, D.W. Lozier, R.F. Boisvert and C.W. Clark (eds.), _NIST Handbook of Mathematical Functions_, Cambridge University Press, Cambridge, 2010.
* [6] R.B. Paris, Exponential asymptotics of the Mittag-Leffler function, Proc. Roy. Soc. London **458A** (2002) 3041–3052.
* [7] R.B. Paris, Some remarks on the theorems of Wright and Braaksma concerning the asymptotics of the generalised hypergeometric functions, Technical Report MS 09:01, Abertay University, 2009.
* [8] R.B. Paris, Exponentially small expansions in the asymptotics of the Wright function, J. Comput. Appl. Math. **234** (2010) 488–504.
* [9] R.B. Paris, Exponentially small expansions of the Wright function on the Stokes lines, Lithuanian Math. J. **54** (2014) 82–105.
* [10] R.B. Paris and A.D. Wood, _Asymptotics of High Order Differential Equations_, Pitman Research Notes in Mathematics, **129**, Longman Scientific and Technical, Harlow, 1986.
* [11] R.B. Paris and D. Kaminski, _Asymptotics and Mellin-Barnes Integrals_, Cambridge University Press, Cambridge, 2001.
* [12] T.D. Riney, On the coefficients in asymptotic factorial expansions, Proc. Amer. Math. Soc. **7** (1956) 245–249.
* [13] L.J. Slater, _Generalized Hypergeometric Functions_, Cambridge University Press, Cambridge, 1966.
* [14] R. Wong and Y.-Q. Zhao, Smoothing of Stokes’s discontinuity for the generalized Bessel function, Proc. Roy. Soc. London **A455** (1999) 1381–1400.
* [15] R. Wong and Y.-Q. Zhao, Exponential asymptotics of the Mittag-Leffler function, Constr. Approx. **18** (2002) 355–385.
* [16] E.M. Wright, The asymptotic expansion of the generalized hypergeometric function, J. Lond. Math. Soc. (Ser. 2) **10** (1935) 286–293.
* [17] E.M. Wright, The asymptotic expansion of the generalized hypergeometric function, Proc. Lond. Math. Soc. (Ser. 2) **46** (1940) 389–408.
* [18] E.M. Wright, A recursion formula for the coefficients in an asymptotic expansion, Proc. Glasgow Math. Assoc. **4** (1958) 38–41.
**Muon decays in the Earth’s atmosphere, differential aging and the paradox of the twins**
**J.H.Field**
Département de Physique Nucléaire et Corpusculaire, Université de Genève, 24 quai Ernest-Ansermet, CH-1211 Genève 4.
E-mail: john.field@cern.ch
Observation of the decay of muons produced in the Earth’s atmosphere by cosmic ray interactions provides a graphic illustration of the counter-intuitive space-time predictions of special relativity theory. Muons at rest in the atmosphere, decaying simultaneously, are subject to a universal time-dilatation effect when viewed from a moving frame and so are also observed to decay simultaneously in all such frames. The analysis of this example reveals the underlying physics of the differential aging effect in Langevin’s travelling-twin thought experiment.
PACS 03.30.+p
## 1 **Introduction**
The present paper is one in a recent series, devoted to space-time physics, by the present author. In Ref. [1] the classical ‘Rockets-and-String’ [2] and ‘Pole-and-Barn’ [3] paradoxes of special relativity were re-analysed taking into account the distinction between the real and apparent positions of uniformly moving objects. Different results were obtained from those of the usual text-book interpretations of these experiments and a new causality-violating paradox was pointed out. This paradox, as well as the related ‘backwards running clocks’ one of Soni [4], was resolved in Ref. [5] where, in order to avoid these paradoxes as well as manifest breakdown of translational invariance in some applications of the standard space-time Lorentz transformation, the use of a ‘local’ Lorentz transformation, i.e. one where the transformed event in the moving frame lies at the coordinate origin in this frame, was advocated. When this is done the closely correlated ‘relativity of simultaneity’ (RS) and ‘length contraction’ (LC) effects of conventional special relativity theory do not occur. The connection between these effects is explained in Ref. [1]. Ref. [5] also contains a ‘mini review’ of all experimental tests of special relativity where it is pointed out that, whereas time dilatation is well-confirmed experimentally, no experimental evidence exists for the RS and LC effects. In the later papers [6, 7, 8] it is explained how the spurious RS and LC effects result from a misuse of the time symbols in the standard space-time Lorentz transformation. Refs. [7, 8] present the argument in a pedagogical manner, whereas Ref. [6] contains a concise summary of it.
In the following section the necessary formulae for the analysis of the muon decay thought experiment —essentially the prediction of a universal time dilatation effect— are derived from first principles. Here there is considerable overlap with work presented in Refs. [6, 7, 8]. The analysis of the thought experiment presented in Section 3 shows the absence of the spurious text-book RS effect: muons which decay simultaneously in a common proper frame are observed to do so in all inertial frames. Finally, the results of the analysis presented in this paper are used to shed light on the physical basis of the differential aging effect in the travelling-twin thought experiment [9].
## 2 **Operational meaning of the space-time Lorentz transformation: Rates and spatial separations of moving clocks**
The Lorentz transformation (LT) relates observations (\(x\),\(y\),\(z\),\(t\)) of the coordinates of space-time events in one inertial frame S, to observations of the coordinates (\(x^{\prime}\),\(y^{\prime}\),\(z^{\prime}\),\(t^{\prime}\)) of the same events in another inertial frame S’. As is conventional, the Cartesian spatial coordinate axes of the two frames are parallel, and the origin of the frame S’ moves with constant speed, \(v\), along the \(x\)-axis. In any actual experiment, times are recorded by clocks, and positions specified by marks on fixed rulers (or their equivalent). Therefore, in order to relate the space-time coordinates appearing in the LT to actual physical measurements they must be identified with clock readings and length interval measurements [10]. This can be done in two distinct ways depending on whether the experimenter observing the clocks and performing the length measurements is at rest in S or in S’. In the former case (only events with spatial coordinates along the \(x\)-axis are considered so that \(y\) = \(y^{\prime}\) = \(z\) = \(z^{\prime}\) = \(0\)) the appropriate LT, for a clock, C’, situated at the origin of S’, is:
\[x^{\prime}({\rm C}^{\prime}) = \gamma_{v}[x({\rm C}^{\prime})-v\tau]=0\] (2.1)
\[t^{\prime} = \gamma_{v}[\tau-\frac{\beta_{v}x({\rm C}^{\prime})}{c}]\] (2.2)
and in the latter case, for a clock, C, situated at the origin of S, is:
\[x({\rm C}) = \gamma_{v}[x^{\prime}({\rm C})+v\tau^{\prime}]=0\] (2.3)
\[t = \gamma_{v}[\tau^{\prime}+\frac{\beta_{v}x^{\prime}({\rm C})}{c}]\] (2.4)
In these equations \(\beta_{v}\equiv v/c\), \(\gamma_{v}\equiv 1/\sqrt{1-\beta_{v}^{2}}\) and \(c\) is the speed of light in vacuum. The clocks in S and S’ are synchronised so that for (2.1) and (2.2), \(t^{\prime}=\tau=0\) when \(x=x^{\prime}=0\), and for (2.3) and (2.4), \(t=\tau^{\prime}=0\) when \(x=x^{\prime}=0\). In (2.1) and (2.2) the transformed events lie on the worldline of a clock, C’, at rest in S’, which is observed from S. The observed time in S registered by C’( which is in motion in this frame) is \(t^{\prime}\), while \(\tau\) is the time registered by the clock, C, identical to C’, but at rest in S. In contrast, in (2.3) and (2.4) the transformed events lie on the worldline of C, which is observed from S’. The time \(t\) is that registered by C as observed from S’ and \(\tau^{\prime}\) is the time registered by C’ as observed in its own proper frame. Thus two distinct experiments are possible involving one stationary and one moving clock, depending on whether the experimenter performing the space and time measurements is in the rest frame of one, or the other, of the two clocks. To describe both of these experiments, four different time symbols, \(\tau\), \(\tau^{\prime}\), \(t\) and \(t^{\prime}\), with different operational meanings, are required.
From (2.1), the equation of motion of C’ in S is:
\[x({\rm C}^{\prime})=v\tau\] (2.5)
while from (2.3) the equation of motion of C in S’ is:
\[x^{\prime}({\rm C})=-v\tau^{\prime}\] (2.6)
Using (2.5) to eliminate \(x\) from (2.2), and in view of the definition of \(\gamma_{v}\):
\[\tau=\gamma_{v}t^{\prime}\] (2.7)
Similarly, using (2.6) to eliminate \(x^{\prime}\) from (2.4) gives:
\[\tau^{\prime}=\gamma_{v}t\] (2.8)
(2.7) and (2.8) are expressions of the relativistic Time Dilatation (TD) effect in the two ‘reciprocal’ experiments that may be performed using the clocks C and C’. They show that, according to the LT, ‘moving clocks run slow’ in a universal manner (no spatial coordinates appear in (2.7) and (2.8)). In fact:
\[\frac{{\rm rate~{}of~{}moving~{}clock}}{{\rm rate~{}of~{}stationary~{}clock}}= \frac{t^{\prime}}{\tau}=\frac{t}{\tau^{\prime}}=\frac{1}{\gamma_{v}}\] (2.9)
To discuss measurements of the spatial separations of moving clocks, at least two clocks (say, \({\rm C}^{\prime}_{A}\) and \({\rm C}^{\prime}_{B}\), at rest in S’) must be considered. It is assumed that they lie along the \(x^{\prime}\)-axis separated by the distance \(L^{\prime}\), \({\rm C}^{\prime}_{A}\) being at the origin of S’ and \({\rm C}^{\prime}_{B}\) at \(x^{\prime}=L^{\prime}\). The space transformation equations analogous to (2.1) for \({\rm C}^{\prime}_{A}\) and \({\rm C}^{\prime}_{B}\) are then:
\[x^{\prime}({\rm C}^{\prime}_{A}) = \gamma_{v}[x({\rm C}^{\prime}_{A})-v\tau]=0\] (2.10)
\[x^{\prime}({\rm C}^{\prime}_{B})-L^{\prime} = \gamma_{v}[x({\rm C}^{\prime}_{B})-L-v\tau]=0\] (2.11)
Inspection of (2.11) shows that \(L=x({\rm C}^{\prime}_{B},\tau=0)\), a relation valid for all values of \(v\) for the choice of coordinate systems in (2.10) and (2.11). In particular, it is valid when \(v\to 0\), \(\gamma_{v}\to 1\), and \(x\to x^{\prime}\). Then for \(v=0\):
\[x^{\prime}({\rm C}^{\prime}_{B})-L^{\prime}=x^{\prime}({\rm C}^{\prime}_{B})-L\] (2.12)
so that
\[L^{\prime}=L\] (2.13)
The spatial separation of the clocks is therefore a Lorentz-invariant quantity.
Suppose now that two objects move with speeds \(u_{1}\), \(u_{2}\) (\(u_{1}>u_{2}\)) along the positive \(x\)-axis in S and that they are coincident with the origins of S and S’ at the time \(\tau=0\). At later times \(\tau\), \(t^{\prime}\), the separation of the objects in S is \((u_{1}-u_{2})\tau\) and in S’ is \((u^{\prime}_{1}-u^{\prime}_{2})t^{\prime}\), where \(u^{\prime}_{1}-u^{\prime}_{2}\) is the relative velocity of the objects in S’. In view of (2.13) and the time dilatation formula (2.7) it follows that
\[u^{\prime}_{1}-u^{\prime}_{2}=\gamma_{v}(u_{1}-u_{2})\] (2.14)
A particular case of (2.14), to be used in the following section, is \(u_{1}=v\), \(u_{2}=u^{\prime}_{1}=0\) giving
\[-u^{\prime}_{2}\equiv v^{\prime}=\gamma_{v}v\] (2.15)
where \(v^{\prime}\) is the observed speed of the origin of S along the negative \(x\)-axis in S’. The relation (2.14) is the transformation formula of the _relative velocity_ of two objects between two inertial frames, to be contrasted with the relativistic parallel velocity addition formula:
\[w=\frac{u-v}{1-\frac{uv}{c^{2}}}\] (2.16)
which relates kinematical configurations of a single moving object in the frames S and S’. In the case \(u=0\), (2.16) gives \(w=-v\) and this equation relates the kinematical configuration in S in the primary experiment described by (2.1) and (2.2) to that in S’ in the (physically independent) reciprocal experiment described by (2.3) and (2.4). In contrast (2.14) describes the observed _relative velocity transformation_ within the primary experiment. For further discussion of this important point see Refs. [11, 12].
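As a purely numerical illustration of the two formulas exactly as they are written above, the following short Python snippet evaluates (2.15) and (2.16) for the \(\beta_{v}=\sqrt{3}/2\) example used in Section 3; the variable names and the choice of units with \(c=1\) are illustrative.

```python
# Numerical illustration of Eqs. (2.15) and (2.16) exactly as written above,
# for the beta_v = sqrt(3)/2 example used in Section 3 (units with c = 1).
import math

beta_v = math.sqrt(3) / 2
gamma_v = 1.0 / math.sqrt(1.0 - beta_v**2)     # = 2

v_prime = gamma_v * beta_v                     # Eq. (2.15): v' = gamma_v * v = sqrt(3)
u = 0.0
w = (u - beta_v) / (1.0 - u * beta_v)          # Eq. (2.16) with u = 0, giving w = -v

print(f"gamma_v = {gamma_v:.3f}, v' from (2.15) = {v_prime:.3f} c, w from (2.16) = {w:.3f} c")
```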
## 3 **Muons are clocks that demonstrate time dilatation and differential aging**
Muon decays constitute an excellent laboratory for testing the predictions of special relativity. For example, the TD effect of Eqn(2.7) was experimentally confirmed at the per mille level of relative precision in the ultrarelativistic domain (\(\gamma_{v}\simeq 30\)) by observation of the decay of muons in the storage ring of the last CERN muon \(g-2\) experiment [13]. In the present paper, it is shown that thought experiments involving muons provide a graphic illustration of the predicted space-time behaviour, in special relativity, of clocks in different inertial frames.
Unlike most other unstable particles, muons are particularly suitable for precise tests of the TD effect because of the ease of their production from pion decay and their long mean lifetime of 2.2 \(\mu\)s. The former yields high event statistics and the latter the possibility of precise time interval measurements using accurate clocks in the laboratory frame [13].
The thought experiment developed in the present paper is an elaboration of the well-known demonstration that the very presence of cosmic muons at the Earth’s surface is, by itself, sufficient to demonstrate the existence of the TD effect [14, 15, 16, 17]. Muons are produced predominantly by the weak decay of charged pions \(\pi^{\pm}\rightarrow\mu^{\pm}\nu\). The velocity of the muon, \(v_{\mu}\), depends upon that of the parent pion, \(v_{\pi}\), and the centre-of-mass decay angle, \(\theta^{\star}\). If the pion has the same velocity, \(v_{\mu}^{\star}=c(m_{\pi}^{2}-m_{\mu}^{2})/(m_{\pi}^{2}+m_{\mu}^{2})\), as the muon in the pion rest frame (corresponding to a pion momentum of 49.5 MeV/c) and \(\cos\theta^{\star}=-1\), the muon is produced at rest in the laboratory system. The maximum muon decay energy \(E_{\mu}^{max}\) corresponds to \(\cos\theta^{\star}=1\) and is given, in terms of the parent pion energy \(E_{\pi}\), and the pion velocity \(v_{\pi}=c\beta_{\pi}\), by the relation:
\[E_{\mu}^{max}=E_{\pi}\frac{[m_{\pi}^{2}(1+\beta_{\pi})+m_{\mu}^{2}(1-\beta_{ \pi})]}{2m_{\pi}^{2}}\] (3.1)
For ultra-relativistic parent pions with \(\beta_{\pi}\simeq 1\), \(E_{\mu}^{max}\simeq E_{\pi}\).
Due to the thickness of the Earth’s atmosphere, the majority of interactions of primary cosmic protons, that produce the parent pions of cosmic muons, occur at high altitude, \(\simeq\) 20 km above the Earth’s surface. A muon with speed close to that of light then takes at least \(\simeq\) 700 \(\mu s\) to reach the surface of the Earth. This may be compared with the muon mean lifetime of 2.2 \(\mu s\). Without the TD effect, only a fraction \(\exp[-700/2.2]\simeq 10^{-138}\) of the muons produced at altitude would reach the Earth’s surface. However a 10 GeV muon, which has \(\gamma_{v}\simeq 94\), has a 3.5 \(\%\) probability to reach the Earth’s surface, before decaying, when the TD effect is taken into account.
In the thought experiment considered here it is assumed that two muons \(\mu_{\rm A}\) and \(\mu_{\rm A^{\prime}}\) are produced simultaneously at the same point A (see Fig.1a) by decay of pions from a primary cosmic ray interaction with the nucleus of a gas atom in the atmosphere. The muon \(\mu_{\rm A}\) is produced at rest in the atmosphere (inertial frame S) while \(\mu_{\rm A^{\prime}}\) (with proper frame S’) is produced with velocity \(v=c\beta_{v}\), \(\beta_{v}=\sqrt{3}/2\), so that \(\gamma_{v}=2\). It happens that both muons decay after time \(T\) in their proper frames. Because of the TD effect, the muon \(\mu_{\rm A^{\prime}}\) will then be seen by an observer at rest in the atmosphere to decay after time \(\tau=\gamma_{v}T=2T\) at a point B at a distance \(L=2Tv=2.28\)km from A. It is also supposed that at the same time, \(\tau=0\), at which \(\mu_{\rm A}\) and \(\mu_{\rm A^{\prime}}\) are created, another muon, \(\mu_{\rm B}\) (also with proper decay lifetime \(T\)), is created at rest in the atmosphere at the point B, by decay of a pion from another primary cosmic ray interaction. Since \(\mu_{\rm A}\) and \(\mu_{\rm B}\) are at rest in the atmosphere and have no TD effect, they will decay simultaneously at \(\tau=T\) (Fig.1b) in the frame S. At this instant the muon \(\mu_{\rm A^{\prime}}\) is still undecayed and is at the point M, midway between A and B. When \(\mu_{\rm A^{\prime}}\) decays (Fig.1c), \(\mu_{\rm A}\) and \(\mu_{\rm B}\) no longer exist; however, the centres of mass of their, by now distant, decay products \(e\), \(\nu\) and \(\bar{\nu}\), denoted as (\(\mu_{\rm A}\)) and (\(\mu_{\rm B}\)), and indicated in Figs. 1-3 as two concentric circles, still remain at the points A and B.
[Figure 1]
[Figure 2]
[Figure 3]
The sequence of events that would be seen by an observer in the rest frame, S’, of \(\mu_{\rm A^{\prime}}\), corresponding to those of Fig.1, is shown in Fig.2. According to the relative velocity transformation formula (2.15), \(\mu_{\rm A}\) and \(\mu_{\rm B}\) are observed to move to the left with velocity \(v\gamma_{v}\). The configuration at \(\tau=\tau^{\prime}=0\) is shown in Fig.2a. According to the time dilatation relation (2.7), appropriate to this case, \(\mu_{\rm A}\) and \(\mu_{\rm B}\) are seen to decay simultaneously at time \(\tau^{\prime}=T/\gamma_{v}=1.2\mu s\) (Fig.2b). As in Fig.1, it can be seen that \(\mu_{\rm A^{\prime}}\) is aligned with M, the mid point of the line segment AB, at the time of simultaneous decay of \(\mu_{\rm A}\) and \(\mu_{\rm B}\). As shown in Fig.2c, \(\mu_{\rm A^{\prime}}\) decays at time \(\tau^{\prime}=T=2.2\mu s\), when it is aligned with B, as is also the case for an observer in the frame S, as shown in Fig.1c. Since \(\mu_{\rm A}\) and \(\mu_{\rm B}\) are seen to decay simultaneously by observers at rest in both S and S’, there is no RS effect.
The sequence of events that would be observed in the frame S”, moving with velocity \(w\) in the positive \(x\)-direction, will now be considered. According to the relative velocity transformation formula (2.14), \(\mu_{\rm A^{\prime}}\) is observed to move with speed \(\gamma_{w}(v-w)\) in the positive \(x\)-direction in S” since the _relative velocity_ of S” and S’ in the frame S is \(v-w\). Also, according to (2.15), \(\mu_{\rm A}\) and \(\mu_{\rm B}\) are observed to move with speed \(w\gamma_{w}\) in the negative \(x\)-direction in S”. The velocity of \(\mu_{\rm A^{\prime}}\) relative to \(\mu_{\rm A}\) and \(\mu_{\rm B}\) in the frame S” is then \(v\gamma_{w}\). The sequence of events seen by an observer at rest in S” is illustrated, for the case \(w=v/2\), in Fig.3. Using the time dilatation relation (2.7), with \(v\) set equal to \(w\) in order to relate times in S and S”, \(\mu_{\rm A}\) and \(\mu_{\rm B}\) decay at time \(\tau^{\prime\prime}=T/\gamma_{w}\), when \(\mu_{\rm A^{\prime}}\) is aligned with M (see Fig.3b). Note that the time of this event in S” depends only on the value of \(w\), being independent of the value of \(v\). The muon \(\mu_{\rm A^{\prime}}\) decays at time \(\tau^{\prime\prime}=\gamma_{v}T/\gamma_{w}\) when it is aligned with B (see Fig.3c).
In all three frames, S, S’ and S”, \(\mu_{\rm A}\) and \(\mu_{\rm B}\) are observed to decay simultaneously and earlier than \(\mu_{\rm A^{\prime}}\) . These decay times, in each frame, are presented in Table 1. The entries in this table satisfy the following condition:
\[\frac{\tau_{D}(\mu_{\rm A^{\prime}})}{\tau_{D}(\mu_{\rm A})}=\frac{\tau^{ \prime}_{D}(\mu_{\rm A^{\prime}})}{\tau^{\prime}_{D}(\mu_{\rm A})}=\frac{\tau^ {\prime\prime}_{D}(\mu_{\rm A^{\prime}})}{\tau^{\prime\prime}_{D}(\mu_{\rm A}) }=\gamma_{v}\] (3.2)
Since the last member of this equation is independent of \(w\), it follows that the ratio of the decay times given by the time dilatation relation (2.7) is the same for all inertial observers, and so is an invariant, fixed by the velocity of \(\mu_{\rm A^{\prime}}\) in the rest frame of \(\mu_{\rm A}\) where the time dilatation effect is defined —i.e. observer at rest in S, observed muon at rest in S’.
Different (but reciprocal, i.e. related to those of Eqn(3.2) by exchange of primed and unprimed quantities) results would be obtained in the situation where the observer of the time dilatation effect is at rest in S’, while the observed muon is a rest in S, so that the reciprocal time dilatation relation (2.8) is applicable.
Inspection of, and reflection on, Figs.1 and 2 reveals the physical basis of the differential aging effect in the ‘twin paradox’ introduced by Langevin [9]. The ‘travelling twin’ can be identified with \(\mu_{\rm A^{\prime}}\), the ‘stay at home’ one with either \(\mu_{\rm A}\) or \(\mu_{\rm B}\) since their associated ‘clocks’ are synchronous. At the end of the outward journey, when \(\mu_{\rm A^{\prime}}\) arrives at B, the synchronous clocks at A and B were seen in S to run twice as fast as that of \(\mu_{\rm A^{\prime}}\) —in fact \(\mu_{\rm A}\) and \(\mu_{\rm B}\) have already decayed when \(\mu_{\rm A^{\prime}}\) arrives at B. In Fig.1, the observer in S sees \(\mu_{\rm A^{\prime}}\) aging less rapidly than \(\mu_{\rm A}\) or \(\mu_{\rm B}\). However, as required by the time dilatation relation (2.7), an observer in S’ (Fig.2) sees \(\mu_{\rm A}\) or \(\mu_{\rm B}\) _aging more rapidly_, by a factor two, than \(\mu_{\rm A^{\prime}}\). This is true even though these clocks are in motion relative to the observer in S’. Fig.2 also reveals that the physical basis of time dilatation, and differential aging, is not, as hitherto, to be found in ‘length contraction’ in the frame S’, but instead in the greater velocity of \(\mu_{\rm A}\) or \(\mu_{\rm B}\) relative to \(\mu_{\rm A^{\prime}}\) in the ‘travelling frame’ S’, than that of \(\mu_{\rm A^{\prime}}\) relative to \(\mu_{\rm A}\) or \(\mu_{\rm B}\) in the ‘base frame’ S from which the time dilatation effect is observed. For further discussion of base and travelling frames in relation to primary and reciprocal space-time experiments see Refs. [11, 12].
The much-discussed ‘twin paradox’ arises when it is attempted to describe the sequence of events shown in Fig.2 by use of the time dilatation relation (2.8) of the reciprocal experiment which requires clocks in S to run slower (not faster, as in Fig.2) than those in S’, when observed in the latter frame. This is a nonsensical interpretation since the time variables \(\tau\) and \(t^{\prime}\) appearing in the time dilatation relation (2.7) have a completely different operational meaning to those, \(\tau^{\prime}\) and \(t\) in the time dilatation relation of the reciprocal experiment. See Ref. [11] for a more detailed discussion of the standard and incorrect interpretation of the twin paradox based on the spurious LC effect, in contrast with its correct interpretation following from the relative velocity transformation formula (2.14).
Frame | \(\tilde{\tau}_{D}(\mu_{\rm A^{\prime}})\) | \(\tilde{\tau}_{D}(\mu_{\rm A})=\tilde{\tau}_{D}(\mu_{\rm B})\)
---|---|---
S | \(\gamma_{v}T\) | \(T\)
S’ | \(T\) | \(T/\gamma_{v}\)
S” | \(\gamma_{v}T/\gamma_{w}\) | \(T/\gamma_{w}\)
Table 1: Decay times of the muons \(\mu_{\rm A^{\prime}}\), \(\mu_{\rm A}\) and \(\mu_{\rm B}\) in the frames S, S’ and S”. \(\tilde{\tau}\) denotes the proper time in each frame.
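For reference, the entries of Table 1 can be evaluated numerically from the time dilatation relation (2.7) as it is applied in the text. The short sketch below does this in units of the proper lifetime \(T\) for \(\gamma_{v}=2\) and the \(w=v/2\) case of Fig.3; the variable names are illustrative.

```python
# Entries of Table 1 evaluated numerically in units of the proper lifetime T, using the
# time dilatation relation (2.7) as applied in the text, for gamma_v = 2 and w = v/2.
import math

beta_v = math.sqrt(3) / 2                       # gives gamma_v = 2
beta_w = beta_v / 2                             # observer frame S'' moving with w = v/2
gamma_v = 1.0 / math.sqrt(1.0 - beta_v**2)
gamma_w = 1.0 / math.sqrt(1.0 - beta_w**2)

rows = {
    "S  ": (gamma_v,           1.0),
    "S' ": (1.0,               1.0 / gamma_v),
    "S''": (gamma_v / gamma_w, 1.0 / gamma_w),
}
for frame, (tau_A_prime, tau_A) in rows.items():
    print(f"{frame}: tau_D(mu_A') = {tau_A_prime:.2f} T,  tau_D(mu_A) = tau_D(mu_B) = {tau_A:.2f} T")
```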
## References
* [1] J.H.Field, ‘On the Real and Apparent Positions of Moving Objects in Special Relativity: The Rockets-and-String and Pole-and-Barn Paradoxes Revisited and a New Paradox’,arXiv pre-print: http://xxx.lanl.gov/abs/physics/0403094v3. Cited 30 Nov 2007.
* [2] E.Dewan and M.Beran, Am J. Phys.**27** 517 (1959).
* [3] E.Dewan, Am J. Phys. **31** 383 (1963).
* [4] V.S.Soni, Eur. J. Phys. **23** 225 (2002).
* [5] J.H.Field, ‘The Local Space-Time Lorentz Transformation: a New Formulation of Special Relativity Compatible with Translational Invariance’, arXiv pre-print: http://xxx.lanl.gov/abs/ physics/0501043v3. Cited 30 Nov 2007.
* [6] J.H.Field, ‘Spatially-separated synchronised clocks in the same inertial frame: Time dilatation, but no relativity of simultaneity or length contraction’. arXiv pre-print: http://xxx.lanl.gov/abs/0802.3298v2. Cited 4 Mar 2008.
* [7] J.H.Field, ’Clock rates, clock settings and the physics of the space-time Lorentz transformation’, arXiv pre-print: http://xxx.lanl.gov/abs/physics/0606101v4. Cited 4 Dec 2007.
* [8] J.H.Field, ‘Translational invariance and the space-time Lorentz transformation with arbitrary spatial coordinates’, arXiv pre-print: http://xxx.lanl.gov/abs/physics/0703185v2. Cited 15 Feb 2008.
* [9] P.Langevin, Scientia, **10** 31 (1911).
* [10] J.H.Field, ‘The physics of space and time I: The description of rulers and clocks in uniform translational motion by Galilean or Lorentz transformations’, arXiv pre-print: http://xxx.lanl.gov/abs/physics/0612039v3. Cited 28 Mar 2008.
* [11] J.H.Field, ‘The physics of space and time III: Classification of space-time experiments and the twin paradox’. arXiv pre-print: http://xxx.lanl.gov/abs/0806.3671v1. Cited 23 Jun 2008.
* [12] J.H.Field, ‘Primary and reciprocal space-time experiments, relativistic reciprocity relations and Einstein’s train-embankment thought experiment’. arXiv pre-print: http://xxx.lanl.gov/abs/0807.0158v1. Cited 1 Jul 2008.
* [13] J.Bailey _et al_ Nature **268** 301 (1979).
* [14] R.P.Feynman, R.Leighton and M.Sands, The Feynman Lectures in Physics, Volume I (Addison-Wesley, Reading Massachusetts, 1966) Section 15-3.
* [15] E.F.Taylor and J.A.Wheeler, ‘Spacetime Physics’, W.H.Freeman and Company, San Francisco 1966, Section 42, P89.
* [16] P.A.Tipler and R.A.Lewellyn, ‘Modern Physics’ W.H.Freeman and Company, New York, Ch 1, P40.
* [17] A.Walker, CERN Courier, **46** Number 4 May 2006, P23.
# Intrinsic Motivation and Mental Replay enable
Efficient Online Adaptation in Stochastic Recurrent Networks
Daniel Tanneberg
daniel@robot-learning.de
Jan Peters
mail@jan-peters.net
Elmar Rueckert
rueckert@rob.uni-luebeck.de
Intelligent Autonomous Systems, Technische Universität Darmstadt,
Hochschulstr. 10, 64289 Darmstadt, Germany
Robot Learning Group, Max-Planck Institute for Intelligent Systems,
Max-Planck-Ring 4, 72076 Tübingen, Germany
Institute for Robotics and Cognitive Systems, Universität zu Lübeck,
Ratzeburger Allee 160, 23538 Lübeck, Germany
###### Abstract
Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Therefore, continuous online adaptation for lifelong-learning and the need for sample-efficient mechanisms to adapt to changes in the environment, the constraints, the tasks, or the robot itself are crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation based on a bio-inspired stochastic recurrent neural network. By using learning signals which mimic the intrinsic motivation signal _cognitive dissonance_, together with a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments in seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is shown by learning unknown workspace constraints sample-efficiently from few physical interactions while following given way points.
keywords: Intrinsic Motivation, Online Learning, Experience Replay, Autonomous Robots, Spiking Recurrent Networks, Neural Sampling
©2018. Licensed under the Creative Commons CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Figure 1: Conceptual sketch of the framework. A shows the online planning and adaptation concept of using short segments. On the upper part the idea of cognitive dissonance is illustrated with a planned and executed trajectory. The steps sampling and post-processing for a segment are timed such that they are performed during the end of the execution of the previous segment, whereas model adaptation is performed at the beginning of the segment execution. B shows the process with two segments in detail, including sampling of movements, decoding and averaging for creating the mental plan and the model update. The executed segment provides feedback for planning the next segment and the matching mental and executed trajectory pairs are used for updating the model based on their cognitive dissonance.
## 1 Introduction
One of the major challenges in robotics is the concept of developmental robots [1; 2; 3], i.e., robots that develop and adapt autonomously through lifelong-learning [4; 5; 6]. Although a lot of research has been done for learning tasks autonomously in recent years, experts with domain knowledge are still required in many setups to define and guide the learning problem, e.g., for reward shaping, for providing demonstrations or for defining the tasks that should be learned. In a fully autonomous self-adaptive robot however, these procedures should be carried out by the robot itself. In other words, the robot and especially its development should not be limited by the learning task specified by the expert, but should rather be able to develop _on its own_. Thus, the robot should be equipped with mechanisms enabling autonomous development to _understand_ and decide when, what, and how to learn [7; 8].
Furthermore, as almost all robotic tasks involve movements and therefore movement planning, this developing process should be continuous. In particular, planning a movement, executing it, and learning from the results should be integrated in a continuous online framework. This idea is investigated in iterative learning control approaches [9; 10], which can be seen as a simple adaptation mechanism that learns to track given repetitive reference trajectories. More complex adaptation strategies are investigated in model-predictive control approaches [11; 12; 13; 14] that simultaneously plan, execute and re-plan motor commands. However, the used models are fixed and cannot adapt straightforwardly to new challenges.
Online learning with real robots was investigated in [15], where multiple models were learned online for reaching tasks. Online learning of push recovery actions during walking in a humanoid robot was shown in [16], and in [17] a mechanism for online learning of the body structure of a humanoid robot was discussed. Recurrent neural networks were used to learn body mappings in a humanoid robot [18], and for efficient online learning of feedback controllers [19]. However, in all these online learning settings, the learning problem was designed and specified a priori by a human expert, providing extrinsic reward.
From autonomous mental development in humans however, it is known that intrinsic motivation is a strong factor for learning [20; 21]. Furthermore, intrinsically motivated behavior is crucial for gaining the competence, i.e., a set of reusable skills, to enable autonomy [22]. Therefore, the abstract concept of intrinsically motivated learning has inspired many studies in artificial and robotic systems, e.g. [23; 24; 25], which investigate intrinsically motivated learning in the reinforcement learning framework [26]. Typically, such systems learn the consequences of actions and choose the action that maximizes a novelty or prediction related reward signal [27; 28; 29].
Intrinsic motivation is used for self-generating reward signals that are able to guide the learning process without an extrinsic reward that has to be manually defined and provided by an expert. For the concept of lifelong-learning, intrinsic motivation signals are typically used for incremental learning within hierarchical reinforcement learning [30] and the options framework [31]. Starting with a developmental phase, the robots learn incrementally more complex tasks utilizing the previously and autonomously learned skills. Furthermore, the majority of related work on intrinsically motivated learning focuses on concepts and simulations, and only few applications to real robotic systems exist, for example [32; 33].
Figure 2: Experimental setup. A shows the KUKA LWR arm (left) and its realistic dynamic simulation (right). B shows the setup for online learning on the real robot. The model was initialized with one trial from the simulation of the robot (1st trial in Figure 4) and the new obstacle is learned additionally online on the real system. The overlay shows the mental plan over one trial of about 5:30 minutes. See Figure 5 for more details.
**Contribution.** The contribution of this work is a neural-based framework for robot control that enables efficient online adaptation during motion planning tasks. A novel intrinsically motivated _local_ learning signal is derived and combined with an experience replay strategy to enable efficient online adaptation. We implement the adaptation approach into a biologically inspired stochastic recurrent neural network for motion planning [34; 35]. This work builds on recent prior studies where a _global_ learning signal was investigated [36; 37]. These global and local learning signals enable efficient task-independent online adaptation without an explicitly specified objective or learning task. In robotic experiments we evaluate and compare these global and local learning signals and discuss their properties. This study shows that our framework is suitable for model-based robot control tasks where adaptation of the state transition model to dynamically changing environmental conditions is necessary.
The task-independent online adaptation is done by updating the recurrent synaptic weights encoding the state transition model. The proposed learning principle, therefore, can be applied to model-based (control) approaches with internal (transition) models, like, for example, (stochastic) optimal control [38; 39; 40] and model-predictive control [11; 12; 13; 14]. Furthermore, the method is embedded into a novel framework for continuous online motion planning and learning that combines the scheduling concept of model-predictive control with the adaptation idea of iterative learning control.
The online model adaptation mechanism uses a supervised learning approach and is modulated by intrinsic motivation signals that are inspired by cognitive dissonance [41; 42]. We use a knowledge-based model of intrinsic motivation [43] that describes the divergence of the expectation to the observation. This intrinsic motivation signal tells the agent where its model is _incorrect_ and guides the adaptation of the model with this mismatch. In our experiments, this dissonance signal relates to a tracking error, however, the proposed method is more general and can be used with various modalities like vision or touch. We derive two different mechanisms to compute the dissonance, a _global_ learning signal that captures the _distance_ between mental and executed trajectory, and a _local_ learning signal that takes the neurons _responsibilities_ for encoding these trajectories into account. These learning signals trigger the online adaptation when _necessary_ and guide the strength of the update.
Additionally, to intensify the effect of the experience, we use a mental replay mechanism, which has been proposed to be a fundamental concept in human learning [44]. This mental replay is implemented by exploiting the stochastic nature of the spiking neural network model and its spike encodings of trajectories to generate multiple sample encodings for every experienced situation.
We will show that the stochastic recurrent network can adapt efficiently to novel environments within seconds and from few interactions, without specifying a learning task, by using the proposed intrinsic motivation signals and a mental replay strategy on a simulated and real robotic system (shown in Figure 2).
### Related Work on Intrinsically Motivated Learning
In this subsection we discuss the related work for intrinsically motivated learning from practical and theoretical perspectives.
Early work on intrinsically motivated learning not using the typically reinforcement learning framework used the prediction error of sensory inputs for self-localization tasks [45]. In an online setup, the system explored novel and interesting stimuli to learn a representation of the environment. By using this intrinsic motivation signal, the system developed structures for perception, representation and actions in a neural network model. Actions were chosen such that the expected increase of knowledge was maximized. The approach was evaluated in a gridworld domain and on a simple mobile robot platform.
Intrinsic motivation signals prediction, familiarity (in terms of frequency of state transitions) and stability (in terms of sensor signals to its average) were investigated in [46] in task-independent online visual exploration problems in simulation and on a simple robot.
By using the hierarchical reinforcement learning framework and utilizing the intrinsic motivation signal novelty, autonomous learning of a hierarchical skill collection in a playroom simulation was shown in [23]. The novelty signal directed the agent to novel situations when it got _bored_. As already learned skills can be used as actions in new policies, the approach implements an incremental learning setup.
A similar approach was investigated in [33], where a framework for lifelong-learning was proposed. This framework learns hierarchical policies and has similarities to the options framework. By implementing a motivation signal based on affordance discovery, a repertoire of movement primitives for object detection and manipulation was learned on a platform with two robotic arms. The authors also showed that these primitives can be sequenced and generalized to enable more complex and robust behavior.
Another approach for lifelong-learning based on hierarchical reinforcement learning and the options framework is shown in [47]. The authors learn incrementally a collection of reusable skills in simulations, by implementing the motivation signals novelty for learning new skills and prediction error for updating existing skills.
A different approach based on competence improvement with hierarchical reinforcement learning is discussed in [48]. The agent is given a set of skills, or options as in the options framework, and needs to choose which skill to improve. The used motivation signal competence is implemented as the expected return of a skill to achieve a certain goal. Rewards are generated based on this competence progress and the approach is evaluated in a gridworld domain.
In [32], the _intelligent adaptive curiosity_ system is introduced and used to lead a robot to maximize its learning progress, i.e., guiding the robot to situations, that are neither too predictable nor too unpredictable. The reinforcement learning problem is simplified to only trying to maximize the expected reward at the next timestep and a positive reward is generated when the error of an internal predictive model decreases. Thus, the agent focuses on exploring situations whose complexity matches its current abilities. The mechanism is used on a robot that learns to manipulate objects. The idea is to equip agents with mechanisms _computing_ the degree of novelty, surprise, complexity or challenge from the robots point of view and use these signals for guiding the learning.
In [29] different prediction based signals are investigated within a reinforcement learning framework on a simulated robot arm learning reaching movements. The framework uses multiple expert neural networks, one for each task, and a selection mechanism that determines which expert to train. The motivation signals are implemented with learned predictors with varying input that learn to predict the achievement of the selected task. Predicting the achievement of the task once in the beginning of a trial produced the best results.
Recently, open-ended learning systems based on intrinsic motivation increasingly give importance to explicit goals – known from the idea of goal babbling for learning inverse kinematics [49] – for autonomous learning of skills to manipulate the robots environment [50].
Beside the aforementioned more practical research, also work on theoretical aspects of intrinsic motivated learning exists. For example, a coherent theory and fundamental investigation of using intrinsic motivation in machine learning over two decades is discussed in [51]. The authors state that the improvement of prediction errors can be used as an intrinsic reinforcement for efficient learning.
Another comprehensive overview of intrinsically motivated learning systems is given in [25]. The authors introduce three classes for clustering intrinsic motivation mechanisms. In particular, they divide these mechanisms into prediction based, novelty based and competence based approaches, and discuss their features in detail. Furthermore, that prediction based and novelty based intrinsic motivations are subject to distinct mechanisms was shown in [52].
In [43] a psychological view on intrinsic motivation is discussed and a formal typology of computational approaches for studying such learning systems is presented.
Typically intrinsic motivation signals have been used for incremental task learning, acquiring skill libraries, learning perceptual patterns and for object manipulation. For the goal of fully autonomous robots however, the ability to focus and guide learning independently from tasks, specified rewards and human input is crucial. The robot should be able to learn without _knowing_ what it is supposed to learn in the beginning. Furthermore, the robot should detect on its own if it needs to learn something new or adapt an existing ability if its internal model differs from the perceived reality. To achieve this, we equip the robot with a mechanism for task-independent online adaptation utilizing intrinsic motivation signals inspired by cognitive dissonance. For rapid online adaptation within seconds, we additionally employ a mental replay strategy to intensify experienced situations. Adaptation is done by updating the synaptic weights in the recurrent layer of the network that encodes the state transition model, and this learning is guided by the cognitive dissonance inspired signals.
## 2 Materials and Methods
In this section, we first summarize the challenge and goal we want to address with this paper. Afterwards, we describe the functionality and principles of the underlying bio-inspired stochastic recurrent neural network model, that samples movement trajectories by simulating its inherent dynamics. Next we introduce our novel framework, which enables this model to plan movements online and show how the model can adapt online utilizing intrinsic motivation signals within a supervised learning rule and a mental replay strategy.
### The Challenge of (Efficient) Online Adaptation in Stochastic Recurrent Networks
The main goal of the paper is to show that efficient online adaptation of stochastic recurrent networks can be achieved by using intrinsic motivation signals and mental replay. Efficiency is measured as the number of updates triggered, which is equal to the number of required samples, e.g., here the number of physical interactions of the robot with the environment. Additionally, we will show that using adaptive learning signals and only triggering learning when necessary are crucial mechanisms for updating such sensitive stochastic networks.
### Motion Planning with Stochastic Recurrent Neural Networks
The proposed framework builds on the model recently presented in [34], where it was shown that stochastic spiking networks can solve motion planning tasks optimally. Furthermore, in [35] an approach to scale these models to higher dimensional spaces by introducing a factorized population coding and that the model can be trained from demonstrations was shown.
Inspired by neuroscientific findings on the mental path planning of rodents [53], the model mimics the behavior of hippocampal place cells. It was shown that the neural activity of these cells is correlated not only with actual movements, but also with future mental plans. This bio-inspired motion planner consists of stochastic spiking neurons forming a multi-layer recurrent neural network. It was shown that spiking networks can encode arbitrary complex distributions [54] and learn temporal sequences [55; 56]. We utilize these properties for motion planning and learning as well as to encode multi-modal trajectory distributions that can represent multiple solutions to planning problems.
The basis model consists of two different types of neuron populations: a layer of \(K\)_state_ neurons and a layer of \(N\)_context_ neurons. The state neurons form a fully connected recurrent layer with synaptic weights \(w_{i,k}\), while the context neurons provide feedforward input via synaptic weights \(\theta_{j,k}\), with \(j\in N\) and \(k,i\in K\) with \(N\ll K\). There are no lateral connections between context neurons. Each constraint or any task-related information is modeled by a population of context neurons. While the state neurons are uniformly spaced within the modeled state space, the _task-dependent_ context neurons are Gaussian distributed _locally_ around the corresponding location they encode, i.e., there are only context neurons around the specific constraint they encode.
The state neurons can be seen as an abstract and simplified version of place cells and encode a cognitive map of the environment [57]. They are modeled by stochastic neurons which build up a membrane potential based on the weighted neural input. Context neurons have no afferent connections and spike with a fixed time-dependent probability. Operating in discrete time and using a fixed refractory period of \(\tau\) timesteps that decays linearly, the neurons spike in each time step with a probability based on their membrane potential. All spikes from presynaptic neurons get weighted by the corresponding synaptic weight and are integrated to an overall postsynaptic potential (PSP). Assuming linear dendritic dynamics, the membrane potential of the state neurons is given by
\[u_{t,k}=\sum_{i=1}^{K}w_{i,k}\tilde{v}_{i}(t)+\sum_{j=1}^{N} \theta_{j,k}\tilde{y}_{j}(t)\ ,\] (1)
where \(\tilde{v}_{i}(t)\) and \(\tilde{y}_{j}(t)\) denote the presynaptic input injected from neurons \(i\in K\) and \(j\in N\) at time \(t\) respectively. Depending on the used PSP kernel for integrating over time, this injected input can include spikes from multiple previous timesteps. This definition implements a simple stochastic spike response model [58]. Using this membrane potential, the probability to spike for the state neurons can be defined by \(\rho_{t,k}=p(v_{t,k}=1)=f(u_{t,k})\), where \(f(\cdot)\) denotes the activation function, that is required to be differentiable. The binary activity of the state neurons is denoted by \(\mathbf{v}_{t}=(v_{t,1},..,v_{t,K})\), where \(v_{t,k}=1\) if neuron \(k\) spikes at time t and \(v_{t,k}=0\) otherwise. Analogously, \(\mathbf{y}_{t}\) describes the activity of the context neurons. The synaptic weights \(\boldsymbol{\theta}\) which connect context neurons to state neurons provide task related information. By injecting this task related information, the context neurons modulate the random walk behavior of the state neurons towards goal directed movements. This input from the context neurons can also be learned [34] or can be used to, for example, include known dynamic constraints in the planning process [35].
We compared setting the feedforward context neuron input weights \(\boldsymbol{\theta}\) as in [35] – proportional to the Euclidean distance – to using Student’s t-distributions and generalized error distributions, where the latter produced the best results and was used in the experiments. At each context neuron position such a distribution is located and the weights to the state neurons are drawn from this distribution using the distance between the connected neurons as input. For way points, these context neurons install a _gradient_ towards the associated position such that the random walk samples are biased towards the active locations.
For planning, the stochastic network encodes a distribution
\[q(\mathbf{v}_{1:T}|\boldsymbol{\theta})=p(\mathbf{v}_{0})\prod_{ t=1}^{T}\mathcal{T}(\mathbf{v}_{t}|\mathbf{v}_{t-1})\phi_{t}(\mathbf{v}_{t}| \boldsymbol{\theta})\]
over state sequences (\(\mathbf{v}_{1:T}\)) of \(T\) timesteps, where \(\mathcal{T}(\mathbf{v}_{t}|\mathbf{v}_{t-1})\) denotes the transition model and \(\phi_{t}(\mathbf{v}_{t}|\boldsymbol{\theta})\) the task related input provided by the context neurons. Using the definition of the membrane potential from Equation (1), the state transition model is given by
\[\mathcal{T}(v_{t,i}|\mathbf{v}_{t-1}) =f\left(\sum_{k=1}^{K}w_{k,i}\tilde{v}_{k}(t)v_{t,i}\right)\ ,\] (2)
where a PSP kernel that covers multiple time steps includes information provided by spikes from multiple previous time steps. In particular, we use a rectangular PSP kernel of \(\tau\) timesteps, given by
\[\tilde{v}_{k}(t)=\begin{cases}1\text{ if }\exists l\in[t-\tau,t-1 ]:v_{l,k}=1\\ 0\text{ otherwise}\end{cases}\ ,\]
such that, if neuron \(k\) has spiked within the last \(\tau\) timesteps, the presynaptic input \(\tilde{v}_{k}(t)\) is set to \(1\). Movement trajectories can be sampled by simulating the dynamics of the stochastic recurrent network [54] where multiple samples are used to generate smooth trajectories.
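To make the sampling dynamics concrete, the following minimal NumPy sketch performs one sampling step of the network following Eqs. (1) and (2) with the rectangular PSP kernel; the array shapes, the random weight values and the choice of a sigmoid for \(f(\cdot)\) are assumptions made for the example and not the authors' implementation.

```python
# Minimal NumPy sketch of one sampling step of the stochastic recurrent network,
# following Eqs. (1)-(2) with the rectangular PSP kernel of tau timesteps. Shapes,
# random weights and the sigmoid choice for f are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K, N, tau = 100, 10, 5                      # state neurons, context neurons, PSP length
W = rng.normal(0.0, 0.1, (K, K))            # recurrent state-to-state weights w_{i,k}
Theta = rng.normal(0.0, 0.1, (N, K))        # feedforward context-to-state weights theta_{j,k}

def psp(spike_history):
    """Rectangular PSP: 1 if the neuron spiked within the last tau timesteps."""
    return (np.asarray(spike_history)[-tau:].sum(axis=0) > 0).astype(float)

def sample_step(state_history, context_history):
    """Sample the binary state activity v_t from the membrane potentials of Eq. (1)."""
    u = psp(state_history) @ W + psp(context_history) @ Theta   # membrane potential u_{t,k}
    rho = 1.0 / (1.0 + np.exp(-u))                              # spike probability f(u_{t,k})
    return (rng.random(K) < rho).astype(int)

# Example: roll the network forward for a few timesteps.
state_hist = [np.zeros(K, dtype=int) for _ in range(tau)]
context_hist = [rng.binomial(1, 0.05, N) for _ in range(tau)]
for _ in range(20):
    state_hist.append(sample_step(state_hist, context_hist))
    context_hist.append(rng.binomial(1, 0.05, N))
```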
**Encoding continuous domains with binary neurons.** All neurons have a preferred position in a specified coordinate system and encode binary random variables (spike \(=1\)/no spike \(=0\)). Thus, the solution sampled from the model for a planning problem is the spiketrain of the state neurons, i.e., a sequence of binary activity vectors. These binary neural activities encode the continuous system state \(\mathbf{x}_{t}\), e.g., end-effector position or joint angle values, using the decoding scheme
\[\mathbf{x}_{t}=\frac{1}{|\hat{\mathbf{v}}_{t}|}\sum_{k=1}^{K}\hat {v}_{t,k}\mathbf{p}_{k}\quad\text{with}\quad|\hat{\mathbf{v}}_{t}|=\sum_{k=1}^ {K}\hat{v}_{t,k}\ ,\]
where \(\mathbf{p}_{k}\) denotes the preferred position of neuron \(k\) and \(\hat{v}_{t,k}\) is the continuous activity of neuron \(k\) at time \(t\) calculated by filtering the binary activity \(v_{t,k}\) with a Gaussian window filter. Together with the dynamics of the network, that allows multiple state neurons being active at each timestep, this encoding enables the model to work in continuous domains. To find a movement trajectory from position \(\mathbf{a}\) to a target position \(\mathbf{b}\), the model generates a sequence of states encoding a task fulfilling trajectory.
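A minimal sketch of this decoding scheme is given below; the one-dimensional grid of preferred positions and the Gaussian filter width are illustrative assumptions.

```python
# Sketch of the decoding scheme: continuous system states x_t are obtained from the
# binary state-neuron activity by Gaussian filtering over time and a normalised
# population average of the preferred positions p_k. Grid and filter width are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def decode(spikes, preferred_pos, sigma=2.0):
    """spikes: (T, K) binary spike trains; preferred_pos: (K, D) preferred positions."""
    v_hat = gaussian_filter1d(spikes.astype(float), sigma=sigma, axis=0)  # continuous activity
    norm = v_hat.sum(axis=1, keepdims=True)                               # |v_hat_t|
    norm[norm == 0] = 1.0                                                 # guard against empty steps
    return (v_hat @ preferred_pos) / norm

# Example: K state neurons uniformly spaced on a one-dimensional interval.
K, T = 50, 100
rng = np.random.default_rng(1)
spikes = rng.binomial(1, 0.05, (T, K))
trajectory = decode(spikes, np.linspace(0.0, 1.0, K)[:, None])            # (T, 1) decoded positions
```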
### Online Motion Planning Framework
For efficient online adaptation, the model should be able to react during the execution of a planned trajectory. Therefore, we consider a short time horizon instead of planning complete movement trajectories over a long time horizon. This short time horizon sub-trajectory is called a _segment_. A trajectory \(\boldsymbol{\kappa}\) from position \(\mathbf{a}\) to position \(\mathbf{b}\) can thus consist of multiple segments. This movement planning segmentation has two major advantages. First, it enables the network to consider feedback of the movement execution in the planning process and, second, the network can react to changing contexts, e.g., a changing target position. Furthermore, it allows the network to update itself during planning, providing a mechanism for online model learning and adaptation to changing environments or constraints. The general idea of how we enable the model to plan and adapt online is illustrated in Figure 1.
To ensure a continuous execution of segments, the planning phase of the next segment needs to be finished before the execution of the current segment finishes. On the other hand, planning of the next segment should be started as late as possible to incorporate the most up-to-date feedback into the process. Thus, for estimating the starting point for planning the next segment, we calculate a running average over the required planning time and use the three sigma confidence interval compared to the expected execution time. The expected execution time is calculated from the distance the planned trajectory covers and a manually set velocity. The learning part can be done right after a segment execution is finished. The alignment of these processes is visualized in Figure 1A.
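The scheduling heuristic can be sketched as follows; the class and variable names are illustrative and the statistics are kept deliberately simple.

```python
# Sketch of the segment-scheduling heuristic described above: planning of the next
# segment starts once the remaining execution time drops below the running mean of
# the observed planning times plus three standard deviations. All names are illustrative.
import numpy as np

class SegmentScheduler:
    def __init__(self, velocity):
        self.velocity = velocity              # manually set execution speed [m/s]
        self.planning_times = []              # observed planning durations [s]

    def record_planning_time(self, duration):
        self.planning_times.append(duration)

    def planning_budget(self):
        """Three-sigma upper bound on the expected planning time."""
        if not self.planning_times:
            return float('inf')               # no statistics yet: plan immediately
        return np.mean(self.planning_times) + 3.0 * np.std(self.planning_times)

    def start_planning_now(self, remaining_path_length):
        """True once the expected remaining execution time falls below the budget."""
        expected_exec_time = remaining_path_length / self.velocity
        return expected_exec_time <= self.planning_budget()
```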
As the recurrent network consists of stochastic spiking neurons, the network models a distribution over movement trajectories rather than a single solution. In order to create a smooth movement trajectory, we average over multiple samples drawn from the model when planning each segment. Before the final mental movement trajectory is created by averaging over the drawn samples, we added a sample rejection mechanism. As spiking networks can encode arbitrary complex functions, the model can encode multi-modal movement distributions. Imagine that the model faces a known obstacle that can be avoided by going around either left or right. Drawn movement samples can contain both solutions and when averaging over the samples, the robot would crash into the obstacle. Thus, only samples that encode the same solution should be considered for averaging.
Clustering of samples could solve this problem, but as our framework has to run online, this approach is too expensive. Therefore, we implemented a heuristic based approach that uses the angle between approximated movement directions as distance. First a reference movement sample is chosen such that its average distance to the majority of the population is minimal, i.e., the sample that has the minimal mean distance to \(90\%\) of the population is chosen as the reference. Subsequently only movement samples with an approximated movement direction close to the reference sample are considered for averaging. As threshold for rejecting a sample, the three-sigma interval of the average distances of the reference sample to the closest \(90\%\) of the population is chosen.
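A compact sketch of this rejection heuristic is given below; how the movement direction of a sample is approximated (here simply from its endpoints) and all names are illustrative assumptions.

```python
# Sketch of the sample-rejection heuristic: the angle between approximated movement
# directions is used as the distance, the reference sample minimises its mean distance
# to the closest 90% of the population, and samples outside a three-sigma band around
# the reference are rejected before averaging. Direction estimation and names are
# illustrative assumptions.
import numpy as np

def direction(sample):
    """Approximate movement direction of a sampled segment (T, D) as a unit vector."""
    d = sample[-1] - sample[0]
    return d / (np.linalg.norm(d) + 1e-12)

def angle(u, v):
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def reject_outliers(samples, keep_fraction=0.9):
    if len(samples) < 3:
        return samples
    dirs = [direction(s) for s in samples]
    n_keep = max(1, int(keep_fraction * (len(samples) - 1)))
    per_sample = []                            # distances to the closest 90% of the population
    for i, di in enumerate(dirs):
        dists = np.sort([angle(di, dj) for j, dj in enumerate(dirs) if j != i])[:n_keep]
        per_sample.append(dists)
    ref = int(np.argmin([d.mean() for d in per_sample]))          # reference sample
    threshold = per_sample[ref].mean() + 3.0 * per_sample[ref].std()
    return [s for s, d in zip(samples, dirs) if angle(d, dirs[ref]) <= threshold]
```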
The feedback provided by the executed movement is incorporated before planning the next segment in two steps. First, the actual position of the robot is used to initialize the sampling of the next segment such that planning starts from where the robot actually is, not where the previous mental plan indicates, i.e., the refractory state of the state neurons is set accordingly. Second, the executed movement is used for updating the model based on the cognitive dissonance signal it generated. In Figure 1B this planning and adaptation process is sketched.
### Online Adaptation of the Recurrent Layer
The online update of the spiking network model is based on contrastive divergence (CD) [59] learning rules recently derived in [35]. CD draws multiple samples from the current model and uses them to approximate the likelihood gradient. The general CD update rule for learning the parameters \(\Theta\) of some function \(f(x;\Theta)\) is given by
\[\Delta\Theta =\left<\frac{\partial\log f(x;\Theta)}{\partial\Theta}\right>_{ \mathbf{X}^{0}}\hskip-7.0pt-\left<\frac{\partial\log f(x;\Theta)}{\partial \Theta}\right>_{\mathbf{X}^{1}}\ ,\] (3)
where \(\mathbf{X}^{0}\) and \(\mathbf{X}^{1}\) denote the state of the Markov chain after \(0\) and \(1\) cycles respectively, i.e., the data and the model distribution. We want to update the state transition function \(\mathcal{T}(\mathbf{v}_{t}|\mathbf{v}_{t-1})\), which is encoded in the synaptic weights \(\mathbf{w}\) between the state neurons (see Equation (2)). Thus, learning or adapting the transition model means to change these synaptic connections. The update rule for the synaptic connection \(w_{k,i}\) between neuron \(k\) and \(i\) is therefore given by
\[w_{k,i}\gets w_{k,i}+\alpha\Delta w_{k,i}\] (4)
\[\text{with}\quad\Delta w_{k,i}=\tilde{v}_{t-1,k}\tilde{v}_{t,i}- \tilde{v}_{t-1,k}v_{t,i}\ ,\]
where \(\tilde{v}\) denotes the spike encoding of the training data, \(v\) the sampled spiking activity, \(t\) the discrete timestep and \(\alpha\) the learning rate. Here, we consider a resetting rectangular PSP kernel of one time step (\(\tilde{v}_{t-1,k}\)); a PSP kernel of \(\tau\) time steps follows the same derivation and is used in the experiments. In summary, this learning rule changes the model distribution slowly towards the presented training data distribution. For a more detailed description of this spiking contrastive divergence learning rule, we refer to [35]. This learning scheme works for offline model learning, where previously gathered training data is replayed to a model initialized with inhibitory connections.
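The following sketch illustrates this update for binary spiketrains of shape `[T, n_neurons]`; accumulating the per-timestep terms over a whole trajectory before applying them, as well as the function name, is an assumption made only for illustration.

```python
import numpy as np

def cd_weight_update(W, v_data, v_model, alpha):
    """Sketch of the CD-style weight update from Equation (4); v_data and
    v_model are binary spiketrains of shape [T, n_neurons] for the training
    data and the model samples, respectively."""
    dW = np.zeros_like(W)
    for t in range(1, v_data.shape[0]):
        # positive phase (data-data correlations) minus negative phase (data-model)
        dW += np.outer(v_data[t - 1], v_data[t]) - np.outer(v_data[t - 1], v_model[t])
    return W + alpha * dW
```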
For using the derived model learning rule in the online scenario, we need to make several changes. In the original work, the model was initialized with inhibitory connections. Thus, no movement can be sampled from the model for exploration until the learning process has converged. This is not suitable in the online learning scenario, as a _working_ model for exploration is required, i.e., the model needs to be able to generate movements at any time. Therefore, we initialize the synaptic weights between the state neurons using Gaussian distributions [60], i.e., a Gaussian is placed at the preferred position of each state neuron and the synaptic weights are drawn from these distributions with an additive negative offset term that enables inhibitory connections. The synaptic weights are limited to \([-1,1]\).
This process initializes the transition model with a uniform prior, where for each position, transitions in all directions are equally likely. The variance of these basis functions and the offset term are chosen such that only close neighbors get excitatory connections, while distant neighbors get inhibitory connections, ensuring only small state changes within one timestep, i.e., a movement cannot _jump_ to the target immediately.
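A possible form of this initialization is sketched below; the width of the Gaussian and the negative offset are assumed values chosen only to illustrate the excitatory/inhibitory structure, not the values used in the experiments.

```python
import numpy as np

def init_transition_weights(positions, sigma=0.15, offset=-0.2):
    """Gaussian weight initialization sketch; sigma and offset are assumed
    values, positions holds the preferred positions, shape [n_neurons, dim]."""
    diffs = positions[:, None, :] - positions[None, :, :]
    sq_dist = np.sum(diffs ** 2, axis=-1)
    # excitatory for close neighbors, inhibitory for distant ones via the offset
    W = np.exp(-0.5 * sq_dist / sigma ** 2) + offset
    return np.clip(W, -1.0, 1.0)
```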
Furthermore, the learning rule has to be adapted, as we do not train an empty model from a given set of demonstrations but rather update a working model with online feedback. Therefore, we treat the perceived feedback in the form of the executed trajectory as a sample from the training data distribution and the mental trajectory as a sample from the model distribution in the supervised learning scheme presented in Equation (3).
_Spike Encoding of Trajectories._ For encoding the mental and executed trajectories into spiketrains, Poisson processes with the normalized Gaussian responsibilities of the state neurons at each timestep as time-varying input are used, as in [35]. These responsibilities are calculated using the same Gaussian basis functions, centered at the state neurons' preferred positions, as used for initializing the synaptic weights. More details on these responsibilities are given in Subsection 2.6, as they are also used for the local adaptation signals. To transform these continuous responsibilities of the state neurons into binary spiketrains, they are scaled by a factor of \(100\), clipped to \([0,10]\) and used as the mean input to a Poisson distribution for each neuron. The samples drawn from these Poisson distributions at each timestep are compared to a threshold of \(4\), and a neuron spikes at time \(t\) if this threshold is reached and the neuron has not spiked within its refractory period before. These parameters were chosen because they produce spiketrains similar to the ones sampled from the model.
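The following sketch illustrates this encoding with the parameters stated above (scaling factor 100, clipping to \([0,10]\), spike threshold 4, refractory period \(\tau\)); the exact form and normalization of the responsibilities and the array layout are assumptions.

```python
import numpy as np

def encode_trajectory(trajectory, positions, Sigma_inv, tau=10,
                      scale=100.0, clip_max=10.0, spike_threshold=4):
    """Poisson spike-encoding sketch; trajectory: [T, dim], positions:
    [n_neurons, dim], Sigma_inv: inverse covariance of the basis functions."""
    T, n = trajectory.shape[0], positions.shape[0]
    spikes = np.zeros((T, n), dtype=int)
    last_spike = np.full(n, -np.inf)
    for t in range(T):
        d = trajectory[t] - positions                               # [n, dim]
        resp = np.exp(-0.5 * np.einsum('nd,de,ne->n', d, Sigma_inv, d))
        resp /= resp.sum() + 1e-12                                  # normalized responsibilities
        rate = np.clip(scale * resp, 0.0, clip_max)                 # scaled and clipped mean input
        draws = np.random.poisson(rate)
        for i in range(n):
            if draws[i] >= spike_threshold and t - last_spike[i] > tau:
                spikes[t, i] = 1            # spike only if the threshold is reached
                last_spike[i] = t           # and the neuron is not refractory
    return spikes
```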
### Global Intrinsically Motivated Adaptation Signal
For online learning, the learning rate typically needs to be small to account for the noisy updates, inducing a long learning horizon and thus requiring a large number of samples. Especially for learning with robots this is a crucial limitation, as the number of experiments is limited. Furthermore, the model should only be updated if _necessary_. Therefore, we introduce a time-varying learning rate \(\alpha_{t}\) that controls the update step. This dynamic rate can, for example, encode uncertainty to update only reliable regions, emphasize updates in certain workspace or joint space areas, or encode intrinsic motivation signals.
In this work, we use an intrinsic motivation signal for \(\alpha_{t}\) that is inspired by cognitive dissonance [41; 42]. Concretely, the dissonance between the mental movement trajectory generated by the stochastic network and the actually executed movement is used. Thus, if the executed movement is similar to the generated mental movement, the update is small, while a stronger dissonance leads to a larger update. In other words, learning is guided by the mismatch between expectation and observation.
This cognitive dissonance signal is implemented as the timestep-wise distance between the mental movement plan \(\boldsymbol{\kappa}^{(m)}\) and the executed movement \(\boldsymbol{\kappa}^{(e)}\). Thus, the resulting learning factor is generated globally and is the same for all neurons. As distance metric we chose the squared \(L^{2}\) norm, but other metrics could be used as well depending on, for example, the modeled spaces or environment-specific features. Thus, for updating the synaptic connection \(w_{k,i}\) at time \(t\), we change Equation (4) to
\[w_{k,i}\gets w_{k,i}+\alpha_{t}\Delta w_{k,i}\] (5)
\[\text{with}\quad\alpha_{t}=\lVert\boldsymbol{\kappa}_{t}^{(m)}- \boldsymbol{\kappa}_{t}^{(e)}\rVert_{2}^{2}\]
\[\text{and}\quad\Delta w_{k,i}=\tilde{v}_{t-1,k}\tilde{v}_{t,i}- \tilde{v}_{t-1,k}v_{t,i}\quad,\]
where \(\tilde{v}_{t}\) is the spike encoding generated from the actual executed movement trajectory \(\boldsymbol{\kappa}^{(e)}_{t}\) and \(v_{t}\) the encoding from the mental trajectory \(\boldsymbol{\kappa}^{(m)}_{t}\) using the previously described Poisson process approach.
To stabilize the learning progress and for safety on the real system, we limit \(\alpha_{t}\) in our experiments to \(\alpha_{t}\in[0,0.3]\) and use a learning threshold of \(0.02\). Thereby, the model update is only triggered when the cognitive dissonance is larger than this threshold, which avoids unnecessary use of computational resources and makes the approach more robust against noisy observations. Note that during the experiments, \(\alpha_{t}\) never reached the safety limit and, therefore, the limit had no influence on the learning. With this intrinsically motivated learning factor and the threshold that triggers adaptation, the update is regulated according to the model error and _invalid_ parts of the model are updated accordingly.
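A minimal sketch of this global learning rate, using the squared \(L^{2}\) norm, the learning threshold of \(0.02\) and the safety limit of \(0.3\) stated above, could look as follows; the function and argument names are placeholders.

```python
import numpy as np

def global_learning_rate(kappa_mental_t, kappa_executed_t,
                         threshold=0.02, limit=0.3):
    """Global cognitive-dissonance learning rate for one timestep."""
    alpha_t = float(np.sum((kappa_mental_t - kappa_executed_t) ** 2))
    if alpha_t < threshold:
        return 0.0            # below the learning threshold: no update triggered
    return min(alpha_t, limit)
```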
### Local Intrinsically Motivated Adaptation Signals
In the previous subsection we discussed a mechanism for determining the cognitive dissonance signal that relies on the distance between the mental and the executed plan. The resulting \(\alpha_{t}\) is therefore the same for all neurons at each timestep \(t\), i.e., it is a _global_ adaptation signal. Furthermore, this adaptation signal is calculated without taking the model into account. To generate an adaptation signal that incorporates the model, we need a different mechanism which is already inherent to the model. Furthermore, we want individual learning signals for each neuron, leading to a more focused and flexible adaptation mechanism. Thus, the resulting learning signal should be _local_ and generated using the model. To fulfill these properties, we utilize the mechanism that is already used in the model to encode trajectories into spiketrains – the responsibilities of each neuron for a trajectory. Inserting these individual learning signals into the update rule from Equation (5) alters the update rule to
\[w_{k,i}\gets w_{k,i}+\alpha_{t,i}\Delta w_{k,i}\] (6)
\[\text{with}\quad\alpha_{t,i}=c(\omega_{t,i}^{(m)}-\omega_{t,i}^{( e)})^{2}\]
\[\text{and}\quad\Delta w_{k,i}=\tilde{v}_{t-1,k}\tilde{v}_{t,i}- \tilde{v}_{t-1,k}v_{t,i}\quad,\]
with an additional constant scaling factor \(c\). For each neuron \(i\), \(\alpha_{t,i}\) encodes the time-dependent adaptation signal. These local adaptation signals are calculated as the squared difference between the responsibilities \(\omega_{t,i}^{(m)}\) and \(\omega_{t,i}^{(e)}\) of each neuron \(i\) for the mental and the executed trajectory, respectively. These responsibilities emerge from the Gaussian basis functions centered at the state neurons' positions that are also used for initializing the state transition model and for the spike encoding of trajectories. Therefore, the responsibilities are given by \(\omega_{t,i}^{(m)}=b_{i}(\boldsymbol{\kappa}_{t}^{(m)})\) and \(\omega_{t,i}^{(e)}=b_{i}(\boldsymbol{\kappa}_{t}^{(e)})\) with
\[b_{i}(\mathbf{x})=\exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{p}_{i})^{\mathsf{T}}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{p}_{i})\right)\quad,\]
where \(\boldsymbol{p}_{i}\) is the preferred position of neuron \(i\). In the experiments we set \(c=3\), set the learning threshold for \(\alpha_{t,i}\) that triggers learning for each neuron to \(0.05\), and limit the signal, as in the global adaptation signal setting, to \(\alpha_{t,i}\in[0,0.3]\). Note that, as in the global adaptation experiments, this limit was never reached in the local experiments and thus had no influence on the results.
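A corresponding sketch of the per-neuron learning signals, using the constants stated above (\(c=3\), threshold \(0.05\), limit \(0.3\)), is given below; the Gaussian responsibility computation mirrors the basis functions defined above, and the array layout and function names are assumptions.

```python
import numpy as np

def local_learning_rates(kappa_mental_t, kappa_executed_t, positions, Sigma_inv,
                         c=3.0, threshold=0.05, limit=0.3):
    """Per-neuron cognitive-dissonance learning rates for one timestep."""
    def responsibilities(x):
        d = x - positions                                        # [n_neurons, dim]
        return np.exp(-0.5 * np.einsum('nd,de,ne->n', d, Sigma_inv, d))

    alpha = c * (responsibilities(kappa_mental_t) - responsibilities(kappa_executed_t)) ** 2
    alpha[alpha < threshold] = 0.0       # per-neuron learning trigger
    return np.clip(alpha, 0.0, limit)
```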
### Using Mental Replay Strategies to Intensify Experienced Situations
As the encoding of trajectories into spiketrains using Poisson processes is a stochastic operation, we can obtain a whole population of encodings from a single trajectory. Therefore, populations of training and model data pairs can be generated from one experience and used for learning. We utilize this feature to implement a mental replay strategy that intensifies experienced situations to speed up adaptation. In particular, we draw \(20\) trajectory encoding samples per observation in the adaptation experiments, where each sample is a different spike encoding of the trajectory, i.e., a mental replay of the experienced situation. Thus, by using such a mental replay approach, we can apply multiple updates from a single interaction with the environment. The two mechanisms, using intrinsic motivation signals for guiding the updates and mental replay strategies to intensify experiences, lower the required number of experienced situations, which is a crucial requirement for learning with real robotic systems.
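The mental replay loop can be sketched as follows, where `encode_fn` and `update_fn` are placeholders for the Poisson spike encoding and the weight update described above; the structure is illustrative, not the exact implementation.

```python
def mental_replay_updates(W, kappa_mental, kappa_executed, alphas,
                          encode_fn, update_fn, n_replays=20):
    """Mental replay sketch: each stochastic re-encoding of the same
    experience yields one additional weight update."""
    for _ in range(n_replays):
        v_data = encode_fn(kappa_executed)   # spike encoding of the executed movement
        v_model = encode_fn(kappa_mental)    # spike encoding of the mental plan
        W = update_fn(W, v_data, v_model, alphas)
    return W
```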
<figure><img src="content_image/1802.08013/x3.png"><figcaption>Figure 3: Adaptation results for three trials with the global learning signal.Each column in A shows one trial of the online adaptation with the globallearning signal, where the upper row shows the mental plan over time and thelower row depicts the adapted model. This change in the model is depicted withthe heatmap showing the average change of synaptic input each neuron receives.Similarly the average change of synaptic output each neuron sends is shownwith the scaled neuron sizes. B and C show the global learning signals αt forthe three trials over the planned segments.</figcaption></figure>
## 3 Results
We conducted four experiments to evaluate the proposed framework for online planning and learning based on intrinsic motivation and mental replay. In all experiments the framework had to follow a path given by way points that are activated successively. Each way point remains active until it is reached by the robot. In the first three experiments a realistic dynamic simulation of the KUKA LWR arm was used: first, the proposed framework had to adapt to an unknown obstacle that blocks the direct path between two way points using the global adaptation signal; second, by using the local adaptation signals; and third, by using constant learning rates (in combination with the global adaptation signal for triggering learning). In the fourth experiment, we used a pre-trained model from the simulation in a real robot experiment to show that it is possible to transfer knowledge from simulation to the real system. Additionally, the model had to adapt online to a new unknown obstacle, again using the local adaptation signals, to highlight online learning on the real system.
### Experimental Setup
For the simulation experiments, we used a realistic dynamic simulation of the KUKA LWR arm with a Cartesian tracking controller to follow the reference trajectories generated by our model. The tracking controller is equipped with a safety controller that stops the tracking when an obstacle is hit. The task was to follow a given sequence of way points, where, in the adaptation experiments, obstacles block the direct path between two way points. In the real robot experiment, the same tracking and safety controllers were used. Figure 2 shows the simulated and real robot as well as the experimental setup.
By activating the way points successively as target positions using appropriate context neurons, the model generates a trajectory online that tracks the given shape. The model has no knowledge about the task or the constraints, i.e., the target way points, their activation pattern and the obstacle. We considered a two-dimensional workspace that spans \([-1,1]\) in both dimensions – the neurons' coordinate system – encoding the \(60\times 60\) cm operational space of the robot. Each dimension is encoded by \(15\) state neurons, which results in \(225\) state neurons using full population coding as in [35]. The refractory period is set to \(\tau=10\), mimicking biologically realistic spiking behavior and introducing additional noise into the sampling process. The transition model is initialized by Gaussian basis functions centered at the preferred positions of the neurons (see _Materials and Methods_ for more details). For the mental replay we used \(20\) iterations, i.e., \(20\) pairs of training data were generated for each executed movement. All adaptation experiments were \(300\) segments long, \(40\) trajectory samples were drawn for each segment, and \(10\) trials were conducted for each experimental setting.
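For reference, a minimal sketch of the state-neuron layout described here (15 neurons per dimension on the \([-1,1]^2\) workspace, i.e., 225 preferred positions under full population coding):

```python
import numpy as np

# 15 state neurons per dimension on the [-1, 1]^2 workspace gives the
# 225 preferred positions used with full population coding.
grid = np.linspace(-1.0, 1.0, 15)
positions = np.array([[x, y] for x in grid for y in grid])   # shape (225, 2)
```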
| Setting | updates triggered (⇓) | update time (⇓) | planning time (⇓) | exec. time (⇑) | target reached (⇑) |
|---|---|---|---|---|---|
| global trigger, α=0.001 | 7.3±1.1 | 42.6±4.2 ms | 0.238±0.047 s | 0.898±0.656 s | 10.5±0.5 |
| global trigger, α=0.01 | 2.4±0.5 | 43.7±4.2 ms | 0.237±0.046 s | 0.806±0.652 s | 8.5±3.2 |
| global trigger, α=0.1 | 2.0±0.0 | 46.7±5.7 ms | 0.234±0.043 s | 0.771±0.670 s | 7.9±4.1 |
| global αt | 2.8±0.9 | 43.3±4.1 ms | 0.241±0.044 s | 0.928±0.658 s | 10.4±0.8 |
| local αt,i | 8.6±2.8 | 52.6±7.5 ms | 0.235±0.042 s | 1.11±0.679 s | 13.7±1.4 |

Table 1: Evaluation of the adaptation experiments for 10 trials each with the global, the local and the constant learning signals in simulation. The values denote the number of times learning was triggered by a segment (updates triggered = required samples = physical interactions), the time required per triggered update including the mental replay strategy (update time), the required time for planning a segment including sampling and post-processing (planning time), the planned execution time per segment (exec. time), and the number of times the blocked target was reached within the budget of 300 segments (target reached), i.e., the number of times all way points were visited. All values denote mean and standard deviation. The ⇓ and ⇑ symbols denote whether a lower or higher value is better, respectively. Note that the constant α settings use the global adaptation signal αt for triggering learning.
<figure><img src="content_image/1802.08013/x4.png"><figcaption>Figure 4: Adaptation results for three trials with the local learning signals.Each column in A shows one trial of the online adaptation with the locallearning signals, where the upper row shows the mental plan over time and thelower row depicts the changed model. This change in the model is depicted withthe heatmap showing the average change of synaptic input each neuron receives.Similarly the average change of synaptic output each neuron sends is shownwith the scaled neuron sizes. B and C show the local learning signals αt,i forthe three trials over the planned segments. Each color indicates the learningsignal for one neuron.</figcaption></figure>
### Rapid Online Model Adaptation using Global and Local Signals
In this experiment, we want to show the model’s ability to adapt continuously during the execution of the planned trajectory. A direct path between two successively activated way points is blocked by an unknown non-symmetric obstacle, which results in a discrepancy between the planned and executed trajectory due to the interrupted movement.
_Constant Learning Rates and the Importance of the Learning Threshold._ The main and starting motivation of the project was to enable online adaptation in the proposed stochastic recurrent network. Therefore, we first created the framework for online planning (and adaptation – see Figure 1). Afterwards, we started experiments with online adaptation using the original learning rule (see Equation (4)) and a constant learning rate \(\alpha\). We were not able to find a constant \(\alpha\) for which the online learning was successful and stable, i.e., learning to avoid the obstacles while generating valid movements throughout the whole experiment. With small learning rates, learning to avoid obstacles was successful; however, as the model is updated _permanently_ and in areas that are not affected by the environmental change, the model became unstable over time, resulting in a model that was not able to produce valid movements anymore. The effect of different constant learning rates on the transition model is shown in Figure 8, which shows the _unlearned_ transition model that cannot produce valid movements (compare to Figures 3 & 4).
These insights gave rise to the idea of using adaptive learning signals in combination with a learning threshold to trigger learning only when an unexpected change is perceived. With these mechanisms, successful and stable online adaptation of the stochastic recurrent network was possible.
Most closely related to our work are potential field methods for motion planning and their extensions to dynamic obstacle avoidance (see [61; 62; 63; 64; 65] for example). All these approaches are deterministic models that consider obstacles through fixed heuristics of repelling potential fields. In contrast, in our work we learn to avoid obstacles online through interaction, using the unexpectedly perceived feedback. Going beyond the gradient-based method in [61], we can learn to avoid obstacles with unknown shapes through the interactive online approach, and static obstacles do not need to be known a priori. To evaluate the benefit of the dynamic online adaptation signals, we additionally compare to a baseline of our model using constant learning rates (with the adaptive global signal as learning trigger). This baseline can be seen as an extension of [61] using stochastic neurons with the ability to adapt the potential field whenever an obstacle is hit.
_Online Adaptation Experimental Results._ The effect of the online learning process using intrinsically motivated signals is shown in Figure 3 and Figure 4 for the global and the local signals respectively, where the mental movement trajectories, the adapted models and the adaptation signals \(\alpha_{t}\) and \(\alpha_{t,i}\) for three trials are shown. Additionally, we compare against different constant learning rates \(\alpha\), which use the global adaptation signal and its learning threshold to trigger learning (see the previous paragraph for why this is important) but use the constant \(\alpha\) for the update.
With the proposed intrinsically motivated online learning, the model initially tries to enter the _invalid_ area but _recognizes_ the unexpected obstacle due to the perceived feedback of the interrupted movement encoded in the cognitive dissonance signals. As a result, the model adapts successfully and avoids the obstacle. This adaptation happens efficiently from only \(2.8\pm 0.9\) physical interactions – planned segments that hit the obstacle, which equals the number of samples required for learning – with the global learning signal, where the planned execution time of one segment is \(0.928\pm 0.658\) seconds. Moreover, the learning phase including the mental replay strategy takes only \(43.3\pm 4.1\) milliseconds per triggered segment.
Update and planning times with the local learning signals are similar, but adaptation is triggered \(8.6\pm 2.8\) times and the planned execution time is \(1.11\pm 0.679\) seconds. The increase in triggered updates is induced by the higher variability and noise in the individual learning signals, enabling more precise but also more _costly_ adaptation. Still, the number of samples – triggered updates – required for successful adaptation reflects a sample-efficient adaptation mechanism for a complex stochastic recurrent network. The longer execution time indicates that the local learning signals generate more efficient solutions, as every segment covers a larger part of the trajectory, i.e., fewer segments are required, resulting in a higher number of times the blocked target is reached. The local adaptation signals reached the blocked target \(13.7\pm 1.4\) times, which outperforms the other adaptation signals. These results are summarized in Table 1. Thus, during the adaptation the global learning signal needs fewer interactions, but the resulting solutions are less efficient. The different effects of the global and local learning signals are discussed in more detail in Section 4.
The results when using constant learning rates are summarized in Table 1 as well. The best result was achieved with a learning rate of \(\alpha=0.001\), resulting in a similar number of reached targets as the global adaptation signal (see also Figure 6), but requiring almost as many updates – i.e., samples – as the local adaptation signals. In addition to tuning this additional parameter, i.e., the constant learning rate, an adaptive signal for triggering learning is still required for successful and robust adaptation. Moreover, with the higher constant learning rates, the learning was unstable in some trials even with the adaptive trigger signals, i.e., after adaptation no valid movements could be sampled anymore.
By adapting online to the perceived cognitive dissonances, the model generates new valid solutions avoiding the obstacle within seconds from few physical interactions (samples) with both learning signals.
<figure><img src="content_image/1802.08013/x5.png"><figcaption>Figure 5: Adaptation results on the real robot. Online adaptation with thereal KUKA LWR arm using the local learning signals initialized with simulationresults, i.e., the right obstacles are already learned. The left obstacle isadded to the real environment (see Figure 2B). Each column in A shows themental plan and the model for the indicated time. The change in the model isdepicted with the heatmap showing the average change of synaptic input eachneuron receives compared to the pre-trained model. Similarly the averagechange of synaptic output each neuron sends is shown with the scaled neuronsizes. The mental plan demonstrates the rapid adaptation, as only a fewinteractions of the robot are necessary to adapt to the new environment. Thisefficiency is further highlighted in B and C, where the local learning signalsαt,i are shown over the execution time. Each color indicates the learningsignal for one neuron.</figcaption></figure>
### Transfer to and Learning on the Real Robot
With this experiment we show that the models learned in simulation can be transferred directly onto the real system and, furthermore, that the efficient online adaptation can be done on a real robotic system. Therefore, we adapted the simulated task of following the four given way points. In addition to the obstacles that were already present in simulation, we added a new unknown obstacle to the real environment. The setup is shown in Figure 2B. The framework parameters were the same as in the simulation experiments, except that the recurrent weights of the neural network were initialized with one trial of the simulation. For updating the model, the local learning signals were used and, therefore, the model was initialized with the first trial of the local-signals simulation experiments (first column in Figure 4A). On average, an experimental trial on the real robot took about \(5{:}30\) minutes (same as in simulation), and Figure 5 shows the execution and adaptation over time.
As we started with the network trained in simulation, the robot successfully avoids the first obstacles right away and no adaptation is triggered before approaching the new obstacle (Figure 5, first column).
After \(15\) segments, the robot collides with the new obstacle and adapts to it within \(7\) interactions (Figure 5 second and third column). The mismatch between the mental plan and the executed trajectory is above the learning threshold and the online adaptation is triggered and scaled with \(\alpha_{t,i}\) (Figure 5B).
To highlight the efficient adaptation on the real system, we depicted the mental plan after \(15\), \(18\) and \(300\) segments in Figure 5A. For the corresponding segments, the cognitive dissonance signals show a significant mismatch that leads to the fast adaptation, illustrated in Figure 5B-C. After the successful avoidance of the new obstacle, the robot performs the following task while avoiding both obstacles and no further updates are triggered.
## 4 Discussion
In this section we evaluate and compare the learning signals, the resulting models after the adaptation process, and the generated movements of the local and the global learning signals.
<figure><img src="content_image/1802.08013/x6.png"><figcaption>Figure 6: Efficiency of the learned solutions. A shows the mean and standarddeviation of the number of required segments to reach the blocked target overall 10 trials for each setting. B shows the cumulated required segments forreaching the blocked target for each trial and setting together with the meanand standard deviation of the trials of one setting. Note that due to thelimit of planning 300 segments in each trial, the number of times the blockedtarget is reached differs across trials. The constant learning rate (α=0.001)still uses the global adaptation signal for triggering learning.</figcaption></figure>
### Efficiency of the Learned Solutions
Comparing the generated movements in Figure 4A to the movements generated with the global adaptation signal in Figure 3A, the model using the local learning signals anticipates the learned obstacle earlier, resulting in more natural evasive movements, i.e., more efficient solutions. Here we define efficiency as the number of segments required to reach the blocked target. As shown in Figure 4B-C, each neuron has a different learning signal \(\alpha_{t,i}\) and therefore a different timing and scale for the adaptation, i.e., the neurons adapt independently, in contrast to the global signal. These individual updates enable a more flexible and finer adaptation, resulting in more efficient solutions. As a result, when using the local adaptation signals, the model favors the more efficient solution on the right and chooses the left solution only in some trials after adaptation. In contrast, this behavior never occurred in any of the ten trials with the global signal.
This efficiency can also be seen in Figure 6, where the required segments to reach the blocked target are shown for the local signals, the global signal, a constant learning rate \(\alpha=0.001\), and without any adaptation. Note that due to the stochasticity in the movement generation, the model can reach the blocked target without adaptation as well. However, without adaptation the obstacle is only avoided occasionally through the stochasticity in sampling the movements.
In Figure 6A the mean and standard deviation of the required segments for reaching the blocked target are shown for \(10\) trials of each setting over the complete \(300\) segments in each trial. All adaptation mechanisms outperform the model without adaptation, and the local signals perform better than the global signal and the constant learning rate. Similarly, in Figure 6B the cumulated required segments for consecutively reaching the blocked target are shown for each trial together with the mean and standard deviation over the trials. Note that, as all trials were limited to \(300\) segments, the number of times the blocked target was reached differs between the settings and trials (see Table 1), depending on the efficiency of the generated movements, i.e., the number of segments used.
<figure><img src="content_image/1802.08013/x7.png"><figcaption>Figure 7: Comparison of the learning signals. The magnitude of the generatedlearning signals over all 10 trials for each of the global and localmechanisms are shown with their respective frequencies. Update mass refers tothe sum of all generated learning signals weighted by their frequencies.</figcaption></figure>
### Comparison of the Learning Signals
To investigate the difference in the generated movements when using the global or local signals, we analyzed the corresponding learning signals \(\alpha_{t}\) and \(\alpha_{t,i}\). This evaluation is shown in Figure 7, where the magnitudes of the generated learning signals are plotted with their respective frequencies. When looking at the right half of the histograms – the \(\alpha_{t}\) and \(\alpha_{t,i}\) with lower magnitude – both learning mechanisms produce similar distributions of the learning signal magnitudes. The main difference is the range of the generated signals, i.e., the local mechanism is able to generate stronger learning signals. Even though the frequency of these bigger updates is low – about \(15\%\) of the total updates – they cover \(30\%\) of the total update mass, where the update mass is calculated as the sum over all generated learning signals weighted by their frequencies. In contrast, the biggest \(15\%\) of the global learning signals cover \(34\%\) of the update mass and are all smaller than the biggest \(15\%\) of the local signals.
The ability to generate stronger learning signals, in addition to the flexibility of individual signals, enables the local adaptation mechanism to learn models that generate more efficient solutions. The importance of the flexibility enabled by the individual learning signals is further discussed in the subsequent section.
### Spatial Adaptation
Investigating the structure of the changes induced by the different learning signals reveals a difference in the spatial adaptation and especially in the strength of the changes. In the lower rows of Figure 3A and Figure 4A the changes in the models are visualized with heatmaps showing the _average change_ of _synaptic input_ each state neuron receives, e.g., a value of \(-0.03\) indicates that the corresponding neuron receives more inhibitory signals after adaptation. Additionally, the _average change_ of _synaptic output_ of each state neuron is depicted by the scaled neuron sizes.
When the model adapts with the global signal (Figure 3), the incoming synaptic weights of neurons with preferred positions around the blocked area are decreased – the model only adapts in these areas. The neurons around the constraint are inhibited after adaptation and, therefore, state transitions to these neurons become less likely. This inhibition hinders the network from sampling mental movements in the affected areas, i.e., the model has learned to avoid these areas. Due to the global signal, the learning is coarse and the affected area spreads over a larger region than the actual obstacle.
In contrast, when adapting using the local signals (Figure 4), the structure of the changes in the model is more focused. The strongest inhibition is still around the obstacle – and stronger than with the global signal – but far fewer changes can be found _in front_ of the obstacle. This concentration of the adaptation can also be seen when comparing the changes in the synaptic input and output. Both learning mechanisms produce a similar change in the output, but very different changes in the input, i.e., the neurons adapted with the local signals _learned to focus_ their output more precisely.
These stronger and more focused adaptations seem to enable the models updated with the local learning signals to generate more efficient solutions and favor the simpler path.
### Learning Multiple Solutions
Even though during the adaptation phase the model only experienced one successful strategy to avoid the obstacle, it is able to generate different solutions, i.e., bypassing the obstacle on the left or on the right, with both adaptation mechanisms. Depending on the individual adaptation in each trial, however, the ratio between the different generated solutions differs. Especially when using the local signals, the frequency of the more efficient solution is higher, reflecting the efficiency comparison in Figure 6.
The feature of generating different solutions is enabled by the model’s intrinsic stochasticity, the ability of spiking neural networks to encode arbitrary complex functions, the planning as inference approach and the task-independent adaptation of the state transition model.
## 5 Conclusion
In this work, we introduced a novel framework for probabilistic online motion planning with an efficient online adaptation mechanism. This framework is based on a recent bio-inspired stochastic recurrent neural network that mimics the behavior of hippocampal place cells [34; 35]. The online adaptation is modulated by intrinsic motivation signals inspired by _cognitive dissonance_, which encode the mismatch between mental expectation and observation. Based on our prior work on the global intrinsic motivation signal [36; 37], we developed in this work a more flexible local intrinsic motivation signal for guiding the online adaptation. Additionally, we compared and discussed the properties of these two intrinsically motivated learning signals. By combining these learning signals with a mental replay strategy that intensifies experienced situations, sample-efficient online adaptation within seconds is achieved. This rapid adaptation is highlighted in simulated and real robotic experiments, where the model adapts to an unknown environment within seconds from few interactions with unknown obstacles, without a specified learning task or other human input. Although requiring a few more interactions, the local learning signals adapt in a more focused way and are able to generate more efficient solutions – fewer segments to reach the blocked target – due to the high flexibility of the individual learning signals.
In contrast to [34], where the _task-dependent context neuron input_ was learned in a reinforcement learning setup, we update the state transition model, encoded in the recurrent state neurons connections, to adapt _task-independently_ with a supervised learning approach. This sample-efficient and task-independent adaptation lowers the required expert knowledge and makes the approach promising for learning on robotic systems, for reusability and for adding online adaptation to (motion) planning methods.
Learning to avoid unknown obstacles by updating the state transition model encoded in the recurrent synaptic weights is a step towards the goal of recovering from failures. One limitation to overcome before that is the curse of dimensionality of the full population coding used by the uniformly distributed state neurons when scaling the model to higher dimensional spaces. In future work, we therefore want to combine this approach with the factorized population coding from [35] – where the model's ability to scale to higher dimensional spaces and settings with different modalities was shown – and with learning the state neuron population [62], in order to apply the framework to recovering-from-failure tasks with broken joints [66; 67], investigating an intrinsic motivation signal mimicking the avoidance of arthritic pain [68; 69].
With the presented intrinsic motivation signals, the agent can adapt to novel environments by reacting to the perceived feedback. For active exploration, and thereby _forgetting_ or finding novel solutions after failures, we additionally plan to investigate intrinsic motivation signals mimicking curiosity [70].
As robots should not be limited in their development by the learning tasks specified by the human experts, equipping robots with such task-independent adaptation mechanisms is an important step towards autonomously developing and lifelong-learning systems.
## Acknowledgments
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No #713010 (GOAL-Robots) and No #640554 (SKILLS4ROBOTS).
<figure><img src="content_image/1802.08013/x8.png"><figcaption>Figure 8: Adaptation results with constant learning rates. Each column showsone trial of the online adaptation with different constant learning rates α,where the upper row shows the mental plan over time and the lower row depictsthe adapted model. This change in the model is depicted with the heatmapshowing the average change of synaptic input each neuron receives. Similarlythe average change of synaptic output each neuron sends is shown with thescaled neuron sizes. With constant learning rates and no adaptive signal fortriggering learning, the model is updated constantly, by what the statetransition model gets destroyed and no correct movement can be sampledanymore.</figcaption></figure>
## References
* (1) M. Lungarella, G. Metta, R. Pfeifer, G. Sandini, Developmental robotics: a survey, Connection Science 15 (4) (2003) 151–190.
* (2) J. Schmidhuber, Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts, Connection Science 18 (2) (2006) 173–187.
* (3) M. Asada, K. Hosoda, Y. Kuniyoshi, H. Ishiguro, T. Inui, Y. Yoshikawa, M. Ogino, C. Yoshida, Cognitive developmental robotics: A survey, IEEE Transactions on Autonomous Mental Development 1 (1) (2009) 12–34.
* (4) S. Thrun, T. M. Mitchell, Lifelong robot learning, Robotics and autonomous systems 15 (1-2) (1995) 25–46.
* (5) M. B. Ring, Child: A first step towards continual learning, Machine Learning 28 (1) (1997) 77–104.
* (6) P. Ruvolo, E. Eaton, Ella: An efficient lifelong learning algorithm, in: International Conference on Machine Learning, 2013, pp. 507–515.
* (7) J. Weng, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur, E. Thelen, Autonomous mental development by robots and animals, Science 291 (5504) (2001) 599–600.
* (8) J. Weng, Developmental robotics: Theory and experiments, International Journal of Humanoid Robotics 1 (02) (2004) 199–236.
* (9) A. Tayebi, Adaptive iterative learning control for robot manipulators, Automatica 40 (7) (2004) 1195–1203.
* (10) D. A. Bristow, M. Tharayil, A. G. Alleyne, A survey of iterative learning control, IEEE Control Systems 26 (3) (2006) 96–114.
* (11) D. Gu, H. Hu, Neural predictive control for a car-like mobile robot, Robotics and Autonomous Systems 39 (2) (2002) 73–86.
* (12) M. Krause, J. Englsberger, P.-B. Wieber, C. Ott, Stabilization of the capture point dynamics for bipedal walking based on model predictive control, IFAC Proceedings Volumes 45 (22) (2012) 165–171.
* (13) E. F. Camacho, C. B. Alba, Model predictive control, Springer Science & Business Media, 2013.
* (14) A. Ibanez, P. Bidaud, V. Padois, Emergence of humanoid walking behaviors from mixed-integer model predictive control, in: Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, IEEE, 2014, pp. 4014–4021.
* (15) L. Jamone, L. Natale, F. Nori, G. Metta, G. Sandini, Autonomous online learning of reaching behavior in a humanoid robot, International Journal of Humanoid Robotics 9 (03) (2012) 1250017.
* (16) S.-J. Yi, B.-T. Zhang, D. Hong, D. D. Lee, Online learning of a full body push recovery controller for omnidirectional walking, in: Humanoid Robots (Humanoids), 2011 11th IEEE-RAS International Conference on, IEEE, 2011, pp. 1–6.
* (17) M. Hersch, E. Sauser, A. Billard, Online learning of the body schema, International Journal of Humanoid Robotics 5 (02) (2008) 161–181.
* (18) R. F. Reinhart, J. J. Steil, Neural learning and dynamical selection of redundant solutions for inverse kinematic control, in: Humanoid Robots (Humanoids), 2011 11th IEEE-RAS International Conference On, IEEE, 2011, pp. 564–569.
* (19) T. Waegeman, F. Wyffels, B. Schrauwen, Feedback control by online learning an inverse model, IEEE transactions on neural networks and learning systems 23 (10) (2012) 1637–1648.
* (20) R. M. Ryan, E. L. Deci, Intrinsic and extrinsic motivations: Classic definitions and new directions, Contemporary educational psychology 25 (1) (2000) 54–67.
* (21) R. M. Ryan, E. L. Deci, Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being., American psychologist 55 (1) (2000) 68.
* (22) R. W. White, Motivation reconsidered: The concept of competence., Psychological review 66 (5) (1959) 297.
* (23) A. G. Barto, S. Singh, N. Chentanez, Intrinsically motivated learning of hierarchical collections of skills, in: Proceedings of the 3rd International Conference on Development and Learning, 2004, pp. 112–19.
* (24) G. Baldassarre, What are intrinsic motivations? a biological perspective, in: A. Cangelosi, J. Triesch, I. Fasel, K. Rohlfing, F. Nori, P.-Y. Oudeyer, M. Schlesinger, Y. Nagai (Eds.), Proceedings of the International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob-2011), IEEE, New York, NY, 2011, pp. E1–8, frankfurt am Main, Germany, 24–27/08/11.
* (25) G. Baldassarre, M. Mirolli (Eds.), Intrinsically motivated learning in natural and artificial systems, Springer, Berlin, 2013. doi:10.1007/978-3-642-32375-1.
* (26) R. S. Sutton, A. G. Barto, Reinforcement learning: An introduction, Vol. 1, MIT press Cambridge, 1998.
* (27) A. Stout, G. D. Konidaris, A. G. Barto, Intrinsically motivated reinforcement learning: A promising framework for developmental robot learning, Tech. rep., University of Massachusetts Amherst, Department of Computer Science (2005).
* (28) U. Nehmzow, Y. Gatsoulis, E. Kerr, J. Condell, N. Siddique, T. M. McGuinnity, Novelty detection as an intrinsic motivation for cumulative learning robots, in: Intrinsically Motivated Learning in Natural and Artificial Systems, Springer, 2013, pp. 185–207.
* (29) V. G. Santucci, G. Baldassarre, M. Mirolli, Intrinsic motivation signals for driving the acquisition of multiple tasks: a simulated robotic study, in: Proceedings of the 12th International Conference on Cognitive Modelling (ICCM), 2013, pp. 1–6.
* (30) A. G. Barto, S. Mahadevan, Recent advances in hierarchical reinforcement learning, Discrete Event Dynamic Systems 13 (4) (2003) 341–379.
* (31) R. S. Sutton, D. Precup, S. Singh, Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning, Artificial intelligence 112 (1-2) (1999) 181–211.
* (32) P.-Y. Oudeyer, F. Kaplan, V. V. Hafner, Intrinsic motivation systems for autonomous mental development, IEEE transactions on evolutionary computation 11 (2) (2007) 265–286.
* (33) S. Hart, R. Grupen, Learning generalizable control programs, IEEE Transactions on Autonomous Mental Development 3 (3) (2011) 216–231.
* (34) E. Rueckert, D. Kappel, D. Tanneberg, D. Pecevski, J. Peters, Recurrent spiking networks solve planning tasks, Scientific reports 6 (2016) 21142.
* (35) D. Tanneberg, A. Paraschos, J. Peters, E. Rueckert, Deep spiking networks for model-based planning in humanoids, in: IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 2016.
* (36) D. Tanneberg, J. Peters, E. Rueckert, Online learning with stochastic recurrent neural networks using intrinsic motivation signals, in: Conference on Robot Learning, 2017.
* (37) D. Tanneberg, J. Peters, E. Rueckert, Efficient online adaptation with stochastic recurrent neural networks, in: IEEE-RAS 17th International Conference on Humanoid Robots (Humanoids), 2017.
* (38) H. J. Kappen, V. Gómez, M. Opper, Optimal control as a graphical model inference problem, Machine learning 87 (2) (2012) 159–182.
* (39) M. Botvinick, M. Toussaint, Planning as inference, Trends in cognitive sciences 16 (10) (2012) 485–488.
* (40) E. Rueckert, G. Neumann, M. Toussaint, W. Maass, Learned graphical models for probabilistic planning provide a new class of movement primitives, Frontiers in Computational Neuroscience 6 (97).
* (41) L. Festinger, Cognitive dissonance., Scientific American.
* (42) J. Kagan, Motives and development., Journal of personality and social psychology 22 (1) (1972) 51.
* (43) P.-Y. Oudeyer, F. Kaplan, What is intrinsic motivation? a typology of computational approaches, Frontiers in neurorobotics 1.
* (44) D. J. Foster, M. A. Wilson, Reverse replay of behavioural sequences in hippocampal place cells during the awake state, Nature 440 (7084) (2006) 680.
* (45) J. M. Herrmann, K. Pawelzik, T. Geisel, Learning predictive representations, Neurocomputing 32 (2000) 785–791.
* (46) F. Kaplan, P.-Y. Oudeyer, Motivational principles for visual know-how development, in: Proceedings of the Third International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, 2003, pp. 73–80.
* (47) J. H. Metzen, F. Kirchner, Incremental learning of skill collections based on intrinsic motivation, Frontiers in neurorobotics 7.
* (48) A. Stout, A. G. Barto, Competence progress intrinsic motivation, in: Development and Learning (ICDL), 2010 IEEE 9th International Conference on, IEEE, 2010, pp. 257–262.
* (49) M. Rolf, J. J. Steil, M. Gienger, Goal babbling permits direct learning of inverse kinematics, IEEE Transactions on Autonomous Mental Development 2 (3) (2010) 216–229.
* (50) V. G. Santucci, G. Baldassarre, M. Mirolli, Grail: a goal-discovering robotic architecture for intrinsically-motivated learning, IEEE Transactions on Cognitive and Developmental Systems 8 (3) (2016) 214–231. doi:10.1109/TCDS.2016.2538961. URL http://ieeexplore.ieee.org/document/7470616/
* (51) J. Schmidhuber, Formal theory of creativity, fun, and intrinsic motivation (1990–2010), IEEE Transactions on Autonomous Mental Development 2 (3) (2010) 230–247.
* (52) A. Barto, M. Mirolli, G. Baldassarre, Novelty or surprise?, Frontiers in Psychology – Cognitive Science 4 (907) (2013) e1–15. doi:10.3389/fpsyg.2013.00907.
* (53) B. E. Pfeiffer, D. J. Foster, Hippocampal place cell sequences depict future paths to remembered goals, Nature 497 (7447) (2013) 74.
* (54) L. Buesing, J. Bill, B. Nessler, W. Maass, Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons, PLoS computational biology 7 (11) (2011) e1002211.
* (55) J. Brea, W. Senn, J.-P. Pfister, Sequence learning with hidden units in spiking neural networks, in: Advances in neural information processing systems, 2011, pp. 1422–1430.
* (56) D. Kappel, B. Nessler, W. Maass, STDP installs in winner-take-all circuits an online approximation to hidden markov model learning, PLoS computational biology 10 (3) (2014) e1003511.
* (57) K. L. Stachenfeld, M. Botvinick, S. J. Gershman, Design principles of the hippocampal cognitive map, in: Advances in neural information processing systems, 2014, pp. 2528–2536.
* (58) W. Gerstner, W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, 2002.
* (59) G. E. Hinton, Training products of experts by minimizing contrastive divergence, Neural computation 14 (8) (2002) 1771–1800.
* (60) S. Stringer, T. Trappenberg, E. Rolls, I. Araujo, Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells, Network: Computation in Neural Systems 13 (2) (2002) 217–242.
* (61) H.-T. Chiang, N. Malone, K. Lesser, M. Oishi, L. Tapia, Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments, in: 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2015, pp. 2347–2354.
* (62) U. M. Erdem, M. Hasselmo, A goal-directed spatial navigation model using forward trajectory planning based on grid cells, European Journal of Neuroscience 35 (6) (2012) 916–931.
* (63) M. C. Lee, M. G. Park, Artificial potential field based path planning for mobile robots using a virtual obstacle concept, in: 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Vol. 2, IEEE, 2003, pp. 735–740.
* (64) J. Barraquand, B. Langlois, J.-C. Latombe, Numerical potential field techniques for robot path planning, IEEE transactions on systems, man, and cybernetics 22 (2) (1992) 224–241.
* (65) S. S. Ge, Y. J. Cui, Dynamic motion planning for mobile robots using potential field method, Autonomous robots 13 (3) (2002) 207–222.
* (66) D. J. Christensen, U. P. Schultz, K. Stoy, A distributed and morphology-independent strategy for adaptive locomotion in self-reconfigurable modular robots, Robotics and Autonomous Systems 61 (9) (2013) 1021–1035.
* (67) A. Cully, J. Clune, D. Tarapore, J.-B. Mouret, Robots that can adapt like animals, Nature 521 (7553) (2015) 503–507.
* (68) B. Kulkarni, D. Bentley, R. Elliott, P. J. Julyan, E. Boger, A. Watson, Y. Boyle, W. El-Deredy, A. K. P. Jones, Arthritic pain is processed in brain areas concerned with emotions and fear, Arthritis & Rheumatology 56 (4) (2007) 1345–1354.
* (69) M. Leeuw, M. E. Goossens, S. J. Linton, G. Crombez, K. Boersma, J. W. Vlaeyen, The fear-avoidance model of musculoskeletal pain: current state of scientific evidence, Journal of behavioral medicine 30 (1) (2007) 77–94.
* (70) P.-Y. Oudeyer, J. Gottlieb, M. Lopes, Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies, Progress in brain research 229 (2016) 257–284.
|
1301.6122 | {
"language": "en",
"source": "Arxiv",
"date_download": "2024-12-03T00:00:00"
} | {
"doc_length": 144440,
"num_imgs": 25,
"llama3_tokens_count": 45769
} | [
"content_image/1301.6122/x1.png",
"content_image/1301.6122/x3.png",
"content_image/1301.6122/x4.png",
"content_image/1301.6122/x7.png",
"content_image/1301.6122/x10.png",
"content_image/1301.6122/x13.png",
"content_image/1301.6122/x16.png",
"content_image/1301.6122/x18.png",
"content_image/1301.6122/x22.png",
"content_image/1301.6122/x26.png",
"content_image/1301.6122/x30.png",
"content_image/1301.6122/x34.png",
"content_image/1301.6122/x37.png",
"content_image/1301.6122/x40.png",
"content_image/1301.6122/x43.png",
"content_image/1301.6122/x46.png",
"content_image/1301.6122/x49.png",
"content_image/1301.6122/x52.png",
"content_image/1301.6122/x54.png",
"content_image/1301.6122/x56.png",
"content_image/1301.6122/x57.png",
"content_image/1301.6122/x61.png",
"content_image/1301.6122/x62.png",
"content_image/1301.6122/x66.png",
"content_image/1301.6122/x67.png"
] | # Search for the standard model Higgs boson in \(\bm{\ell\nu}\)+jets final states in 9.7 fb\(\bm{{}^{-1}}\) of \(\bm{p\bar{p}}\) collisions with the D0 detector
V.M. Abazov
Joint Institute for Nuclear Research, Dubna, Russia
A. Abbinante
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
B. Abbott
University of Oklahoma, Norman, Oklahoma 73019, USA
B.S. Acharya
Tata Institute of Fundamental Research, Mumbai, India
M. Adams
University of Illinois at Chicago, Chicago, Illinois 60607, USA
T. Adams
Florida State University, Tallahassee, Florida 32306, USA
G.D. Alexeev
Joint Institute for Nuclear Research, Dubna, Russia
G. Alkhazov
Petersburg Nuclear Physics Institute, St. Petersburg, Russia
A. Alton\({}^{a}\)
University of Michigan, Ann Arbor, Michigan 48109, USA
A. Askew
Florida State University, Tallahassee, Florida 32306, USA
S. Atkins
Louisiana Tech University, Ruston, Louisiana 71272, USA
K. Augsten
Czech Technical University in Prague, Prague, Czech Republic
C. Avila
Universidad de los Andes, Bogotá, Colombia
F. Badaud
LPC, Université Blaise Pascal, CNRS/IN2P3, Clermont, France
L. Bagby
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
B. Baldin
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
D.V. Bandurin
Florida State University, Tallahassee, Florida 32306, USA
S. Banerjee
Tata Institute of Fundamental Research, Mumbai, India
E. Barberis
Northeastern University, Boston, Massachusetts 02115, USA
P. Baringer
University of Kansas, Lawrence, Kansas 66045, USA
J.F. Bartlett
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
U. Bassler
CEA, Irfu, SPP, Saclay, France
V. Bazterra
University of Illinois at Chicago, Chicago, Illinois 60607, USA
A. Bean
University of Kansas, Lawrence, Kansas 66045, USA
M. Begalli
Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
L. Bellantoni
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
S.B. Beri
Panjab University, Chandigarh, India
G. Bernardi
LPNHE, Universités Paris VI and VII, CNRS/IN2P3, Paris, France
R. Bernhard
Physikalisches Institut, Universität Freiburg, Freiburg, Germany
I. Bertram
Lancaster University, Lancaster LA1 4YB, United Kingdom
M. Besançon
CEA, Irfu, SPP, Saclay, France
R. Beuselinck
Imperial College London, London SW7 2AZ, United Kingdom
P.C. Bhat
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
S. Bhatia
University of Mississippi, University, Mississippi 38677, USA
V. Bhatnagar
Panjab University, Chandigarh, India
G. Blazey
Northern Illinois University, DeKalb, Illinois 60115, USA
S. Blessing
Florida State University, Tallahassee, Florida 32306, USA
K. Bloom
University of Nebraska, Lincoln, Nebraska 68588, USA
A. Boehnlein
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
D. Boline
State University of New York, Stony Brook, New York 11794, USA
E.E. Boos
Moscow State University, Moscow, Russia
G. Borissov
Lancaster University, Lancaster LA1 4YB, United Kingdom
A. Brandt
University of Texas, Arlington, Texas 76019, USA
O. Brandt
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
R. Brock
Michigan State University, East Lansing, Michigan 48824, USA
A. Bross
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
D. Brown
LPNHE, Universités Paris VI and VII, CNRS/IN2P3, Paris, France
X.B. Bu
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Buehler
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
V. Buescher
Institut für Physik, Universität Mainz, Mainz, Germany
V. Bunichev
Moscow State University, Moscow, Russia
S. Burdin\({}^{b}\)
Lancaster University, Lancaster LA1 4YB, United Kingdom
C.P. Buszello
Uppsala University, Uppsala, Sweden
E. Camacho-Pérez
CINVESTAV, Mexico City, Mexico
B.C.K. Casey
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
H. Castilla-Valdez
CINVESTAV, Mexico City, Mexico
S. Caughron
Michigan State University, East Lansing, Michigan 48824, USA
S. Chakrabarti
State University of New York, Stony Brook, New York 11794, USA
D. Chakraborty
Northern Illinois University, DeKalb, Illinois 60115, USA
K.M. Chan
University of Notre Dame, Notre Dame, Indiana 46556, USA
A. Chandra
Rice University, Houston, Texas 77005, USA
E. Chapon
CEA, Irfu, SPP, Saclay, France
G. Chen
University of Kansas, Lawrence, Kansas 66045, USA
S.W. Cho
Korea Detector Laboratory, Korea University, Seoul, Korea
S. Choi
Korea Detector Laboratory, Korea University, Seoul, Korea
B. Choudhary
Delhi University, Delhi, India
S. Cihangir
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
D. Claes
University of Nebraska, Lincoln, Nebraska 68588, USA
J. Clutter
University of Kansas, Lawrence, Kansas 66045, USA
M. Cooke
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
W.E. Cooper
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Corcoran
Rice University, Houston, Texas 77005, USA
F. Couderc
CEA, Irfu, SPP, Saclay, France
M.-C. Cousinou
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
D. Cutts
Brown University, Providence, Rhode Island 02912, USA
A. Das
University of Arizona, Tucson, Arizona 85721, USA
G. Davies
Imperial College London, London SW7 2AZ, United Kingdom
S.J. de Jong
Nikhef, Science Park, Amsterdam, the Netherlands
Radboud University Nijmegen, Nijmegen, the Netherlands
E. De La Cruz-Burelo
CINVESTAV, Mexico City, Mexico
F. Déliot
CEA, Irfu, SPP, Saclay, France
R. Demina
University of Rochester, Rochester, New York 14627, USA
D. Denisov
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
S.P. Denisov
Institute for High Energy Physics, Protvino, Russia
S. Desai
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
C. Deterre\({}^{d}\)
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
K. DeVaughan
University of Nebraska, Lincoln, Nebraska 68588, USA
H.T. Diehl
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Diesburg
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
P.F. Ding
The University of Manchester, Manchester M13 9PL, United Kingdom
A. Dominguez
University of Nebraska, Lincoln, Nebraska 68588, USA
A. Dubey
Delhi University, Delhi, India
L.V. Dudko
Moscow State University, Moscow, Russia
A. Duperrin
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
S. Dutt
Panjab University, Chandigarh, India
A. Dyshkant
Northern Illinois University, DeKalb, Illinois 60115, USA
M. Eads
Northern Illinois University, DeKalb, Illinois 60115, USA
D. Edmunds
Michigan State University, East Lansing, Michigan 48824, USA
J. Ellison
University of California Riverside, Riverside, California 92521, USA
V.D. Elvira
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
Y. Enari
LPNHE, Universités Paris VI and VII, CNRS/IN2P3, Paris, France
H. Evans
Indiana University, Bloomington, Indiana 47405, USA
V.N. Evdokimov
Institute for High Energy Physics, Protvino, Russia
L. Feng
Northern Illinois University, DeKalb, Illinois 60115, USA
T. Ferbel
University of Rochester, Rochester, New York 14627, USA
F. Fiedler
Institut für Physik, Universität Mainz, Mainz, Germany
F. Filthaut
Nikhef, Science Park, Amsterdam, the Netherlands
Radboud University Nijmegen, Nijmegen, the Netherlands
W. Fisher
Michigan State University, East Lansing, Michigan 48824, USA
H.E. Fisk
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Fortner
Northern Illinois University, DeKalb, Illinois 60115, USA
H. Fox
Lancaster University, Lancaster LA1 4YB, United Kingdom
S. Fuess
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A. Garcia-Bellido
University of Rochester, Rochester, New York 14627, USA
J.A. García-González
CINVESTAV, Mexico City, Mexico
G.A. García-Guerra\({}^{c}\)
CINVESTAV, Mexico City, Mexico
V. Gavrilov
Institute for Theoretical and Experimental Physics, Moscow, Russia
W. Geng
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
Michigan State University, East Lansing, Michigan 48824, USA
C.E. Gerber
University of Illinois at Chicago, Chicago, Illinois 60607, USA
Y. Gershtein
Rutgers University, Piscataway, New Jersey 08855, USA
G. Ginther
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
University of Rochester, Rochester, New York 14627, USA
G. Golovanov
Joint Institute for Nuclear Research, Dubna, Russia
P.D. Grannis
State University of New York, Stony Brook, New York 11794, USA
S. Greder
IPHC, Université de Strasbourg, CNRS/IN2P3, Strasbourg, France
H. Greenlee
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
G. Grenier
IPNL, Université Lyon 1, CNRS/IN2P3, Villeurbanne, France and Université de Lyon, Lyon, France
Ph. Gris
LPC, Université Blaise Pascal, CNRS/IN2P3, Clermont, France
J.-F. Grivaz
LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
A. Grohsjean\({}^{d}\)
CEA, Irfu, SPP, Saclay, France
S. Grünendahl
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M.W. Grünewald
University College Dublin, Dublin, Ireland
T. Guillemin
LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
G. Gutierrez
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
P. Gutierrez
University of Oklahoma, Norman, Oklahoma 73019, USA
J. Haley
Northeastern University, Boston, Massachusetts 02115, USA
L. Han
University of Science and Technology of China, Hefei, People’s Republic of China
K. Harder
The University of Manchester, Manchester M13 9PL, United Kingdom
A. Harel
University of Rochester, Rochester, New York 14627, USA
J.M. Hauptman
Iowa State University, Ames, Iowa 50011, USA
J. Hays
Imperial College London, London SW7 2AZ, United Kingdom
T. Head
The University of Manchester, Manchester M13 9PL, United Kingdom
T. Hebbeker
III. Physikalisches Institut A, RWTH Aachen University, Aachen, Germany
D. Hedin
Northern Illinois University, DeKalb, Illinois 60115, USA
H. Hegab
Oklahoma State University, Stillwater, Oklahoma 74078, USA
A.P. Heinson
University of California Riverside, Riverside, California 92521, USA
U. Heintz
Brown University, Providence, Rhode Island 02912, USA
C. Hensel
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
I. Heredia-De La Cruz
CINVESTAV, Mexico City, Mexico
K. Herner
University of Michigan, Ann Arbor, Michigan 48109, USA
G. Hesketh\({}^{f}\)
The University of Manchester, Manchester M13 9PL, United Kingdom
M.D. Hildreth
University of Notre Dame, Notre Dame, Indiana 46556, USA
R. Hirosky
University of Virginia, Charlottesville, Virginia 22904, USA
T. Hoang
Florida State University, Tallahassee, Florida 32306, USA
J.D. Hobbs
State University of New York, Stony Brook, New York 11794, USA
B. Hoeneisen
Universidad San Francisco de Quito, Quito, Ecuador
J. Hogan
Rice University, Houston, Texas 77005, USA
M. Hohlfeld
Institut für Physik, Universität Mainz, Mainz, Germany
I. Howley
University of Texas, Arlington, Texas 76019, USA
Z. Hubacek
Czech Technical University in Prague, Prague, Czech Republic
CEA, Irfu, SPP, Saclay, France
V. Hynek
Czech Technical University in Prague, Prague, Czech Republic
I. Iashvili
State University of New York, Buffalo, New York 14260, USA
Y. Ilchenko
Southern Methodist University, Dallas, Texas 75275, USA
R. Illingworth
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A.S. Ito
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
S. Jabeen
Brown University, Providence, Rhode Island 02912, USA
M. Jaffré
LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
A. Jayasinghe
University of Oklahoma, Norman, Oklahoma 73019, USA
M.S. Jeong
Korea Detector Laboratory, Korea University, Seoul, Korea
R. Jesik
Imperial College London, London SW7 2AZ, United Kingdom
P. Jiang
University of Science and Technology of China, Hefei, People’s Republic of China
K. Johns
University of Arizona, Tucson, Arizona 85721, USA
E. Johnson
Michigan State University, East Lansing, Michigan 48824, USA
M. Johnson
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A. Jonckheere
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
P. Jonsson
Imperial College London, London SW7 2AZ, United Kingdom
J. Joshi
University of California Riverside, Riverside, California 92521, USA
A.W. Jung
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A. Juste
Institució Catalana de Recerca i Estudis Avançats (ICREA) and Institut de Física d’Altes Energies (IFAE), Barcelona, Spain
E. Kajfasz
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
D. Karmanov
Moscow State University, Moscow, Russia
I. Katsanos
University of Nebraska, Lincoln, Nebraska 68588, USA
R. Kehoe
Southern Methodist University, Dallas, Texas 75275, USA
S. Kermiche
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
N. Khalatyan
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A. Khanov
Oklahoma State University, Stillwater, Oklahoma 74078, USA
A. Kharchilava
State University of New York, Buffalo, New York 14260, USA
Y.N. Kharzheev
Joint Institute for Nuclear Research, Dubna, Russia
I. Kiselevich
Institute for Theoretical and Experimental Physics, Moscow, Russia
J.M. Kohli
Panjab University, Chandigarh, India
A.V. Kozelov
Institute for High Energy Physics, Protvino, Russia
J. Kraus
University of Mississippi, University, Mississippi 38677, USA
A. Kumar
State University of New York, Buffalo, New York 14260, USA
A. Kupco
Center for Particle Physics, Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic
T. Kurča
IPNL, Université Lyon 1, CNRS/IN2P3, Villeurbanne, France and Université de Lyon, Lyon, France
V.A. Kuzmin
Moscow State University, Moscow, Russia
S. Lammers
Indiana University, Bloomington, Indiana 47405, USA
P. Lebrun
IPNL, Université Lyon 1, CNRS/IN2P3, Villeurbanne, France and Université de Lyon, Lyon, France
H.S. Lee
Korea Detector Laboratory, Korea University, Seoul, Korea
S.W. Lee
Iowa State University, Ames, Iowa 50011, USA
W.M. Lee
Florida State University, Tallahassee, Florida 32306, USA
X. Lei
University of Arizona, Tucson, Arizona 85721, USA
J. Lellouch
LPNHE, Universités Paris VI and VII, CNRS/IN2P3, Paris, France
D. Li
LPNHE, Universités Paris VI and VII, CNRS/IN2P3, Paris, France
H. Li
University of Virginia, Charlottesville, Virginia 22904, USA
L. Li
University of California Riverside, Riverside, California 92521, USA
Q.Z. Li
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
J.K. Lim
Korea Detector Laboratory, Korea University, Seoul, Korea
D. Lincoln
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
J. Linnemann
Michigan State University, East Lansing, Michigan 48824, USA
V.V. Lipaev
Institute for High Energy Physics, Protvino, Russia
R. Lipton
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
H. Liu
Southern Methodist University, Dallas, Texas 75275, USA
Y. Liu
University of Science and Technology of China, Hefei, People’s Republic of China
A. Lobodenko
Petersburg Nuclear Physics Institute, St. Petersburg, Russia
M. Lokajicek
Center for Particle Physics, Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic
R. Lopes de Sa
State University of New York, Stony Brook, New York 11794, USA
R. Luna-Garcia\({}^{g}\)
CINVESTAV, Mexico City, Mexico
A.L. Lyon
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A.K.A. Maciel
LAFEX, Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, Brazil
R. Magaña-Villalba
CINVESTAV, Mexico City, Mexico
S. Malik
University of Nebraska, Lincoln, Nebraska 68588, USA
V.L. Malyshev
Joint Institute for Nuclear Research, Dubna, Russia
J. Mansour
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
J. Martínez-Ortega
CINVESTAV, Mexico City, Mexico
R. McCarthy
State University of New York, Stony Brook, New York 11794, USA
C.L. McGivern
The University of Manchester, Manchester M13 9PL, United Kingdom
M.M. Meijer
Nikhef, Science Park, Amsterdam, the Netherlands
Radboud University Nijmegen, Nijmegen, the Netherlands
A. Melnitchouk
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
D. Menezes
Northern Illinois University, DeKalb, Illinois 60115, USA
P.G. Mercadante
Universidade Federal do ABC, Santo André, Brazil
M. Merkin
Moscow State University, Moscow, Russia
A. Meyer
III. Physikalisches Institut A, RWTH Aachen University, Aachen, Germany
J. Meyer\({}^{j}\)
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
F. Miconi
IPHC, Université de Strasbourg, CNRS/IN2P3, Strasbourg, France
N.K. Mondal
Tata Institute of Fundamental Research, Mumbai, India
M. Mulhearn
University of Virginia, Charlottesville, Virginia 22904, USA
E. Nagy
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
M. Naimuddin
Delhi University, Delhi, India
M. Narain
Brown University, Providence, Rhode Island 02912, USA
R. Nayyar
University of Arizona, Tucson, Arizona 85721, USA
H.A. Neal
University of Michigan, Ann Arbor, Michigan 48109, USA
J.P. Negret
Universidad de los Andes, Bogotá, Colombia
P. Neustroev
Petersburg Nuclear Physics Institute, St. Petersburg, Russia
H.T. Nguyen
University of Virginia, Charlottesville, Virginia 22904, USA
T. Nunnemann
Ludwig-Maximilians-Universität München, München, Germany
J. Orduna
Rice University, Houston, Texas 77005, USA
N. Osman
CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
J. Osta
University of Notre Dame, Notre Dame, Indiana 46556, USA
M. Padilla
University of California Riverside, Riverside, California 92521, USA
A. Pal
University of Texas, Arlington, Texas 76019, USA
N. Parashar
Purdue University Calumet, Hammond, Indiana 46323, USA
V. Parihar
Brown University, Providence, Rhode Island 02912, USA
S.K. Park
Korea Detector Laboratory, Korea University, Seoul, Korea
R. Partridge\({}^{e}\)
Brown University, Providence, Rhode Island 02912, USA
N. Parua
Indiana University, Bloomington, Indiana 47405, USA
A. Patwa\({}^{k}\)
Brookhaven National Laboratory, Upton, New York 11973, USA
B. Penning
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Perfilov
Moscow State University, Moscow, Russia
Y. Peters
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
K. Petridis
The University of Manchester, Manchester M13 9PL, United Kingdom
G. Petrillo
University of Rochester, Rochester, New York 14627, USA
P. Pétroff
LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
M.-A. Pleier
Brookhaven National Laboratory, Upton, New York 11973, USA
P.L.M. Podesta-Lerma\({}^{h}\)
CINVESTAV, Mexico City, Mexico
A. Podkowa\({}^{l}\)
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
V.M. Podstavkov
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A.V. Popov
Institute for High Energy Physics, Protvino, Russia
M. Prewitt
Rice University, Houston, Texas 77005, USA
D. Price
Indiana University, Bloomington, Indiana 47405, USA
N. Prokopenko
Institute for High Energy Physics, Protvino, Russia
J. Qian
University of Michigan, Ann Arbor, Michigan 48109, USA
A. Quadt
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
B. Quinn
University of Mississippi, University, Mississippi 38677, USA
M.S. Rangel
LAFEX, Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, Brazil
P.N. Ratoff
Lancaster University, Lancaster LA1 4YB, United Kingdom
I. Razumov
Institute for High Energy Physics, Protvino, Russia
I. Ripp-Baudot
IPHC, Université de Strasbourg, CNRS/IN2P3, Strasbourg, France
F. Rizatdinova
Oklahoma State University, Stillwater, Oklahoma 74078, USA
M. Rominsky
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
A. Ross
Lancaster University, Lancaster LA1 4YB, United Kingdom
C. Royon
CEA, Irfu, SPP, Saclay, France
P. Rubinov
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
R. Ruchti
University of Notre Dame, Notre Dame, Indiana 46556, USA
G. Sajot
LPSC, Université Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut National Polytechnique de Grenoble, Grenoble, France
P. Salcido
Northern Illinois University, DeKalb, Illinois 60115, USA
A. Sánchez-Hernández
CINVESTAV, Mexico City, Mexico
M.P. Sanders
Ludwig-Maximilians-Universität München, München, Germany
A.S. Santos\({}^{i}\)
LAFEX, Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, Brazil
G. Savage
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
L. Sawyer
Louisiana Tech University, Ruston, Louisiana 71272, USA
T. Scanlon
Imperial College London, London SW7 2AZ, United Kingdom
R.D. Schamberger
State University of New York, Stony Brook, New York 11794, USA
Y. Scheglov
Petersburg Nuclear Physics Institute, St. Petersburg, Russia
H. Schellman
Northwestern University, Evanston, Illinois 60208, USA
C. Schwanenberger
The University of Manchester, Manchester M13 9PL, United Kingdom
R. Schwienhorst
Michigan State University, East Lansing, Michigan 48824, USA
J. Sekaric
University of Kansas, Lawrence, Kansas 66045, USA
H. Severini
University of Oklahoma, Norman, Oklahoma 73019, USA
E. Shabalina
II. Physikalisches Institut, Georg-August-Universität Göttingen, Göttingen, Germany
V. Shary
CEA, Irfu, SPP, Saclay, France
S. Shaw
Michigan State University, East Lansing, Michigan 48824, USA
A.A. Shchukin
Institute for High Energy Physics, Protvino, Russia
R.K. Shivpuri
Delhi University, Delhi, India
V. Simak
Czech Technical University in Prague, Prague, Czech Republic
P. Skubic
University of Oklahoma, Norman, Oklahoma 73019, USA
P. Slattery
University of Rochester, Rochester, New York 14627, USA
D. Smirnov
University of Notre Dame, Notre Dame, Indiana 46556, USA
K.J. Smith
State University of New York, Buffalo, New York 14260, USA
G.R. Snow
University of Nebraska, Lincoln, Nebraska 68588, USA
J. Snow
Langston University, Langston, Oklahoma 73050, USA
S. Snyder
Brookhaven National Laboratory, Upton, New York 11973, USA
S. Söldner-Rembold
The University of Manchester, Manchester M13 9PL, United Kingdom
L. Sonnenschein
III. Physikalisches Institut A, RWTH Aachen University, Aachen, Germany
K. Soustruznik
Charles University, Faculty of Mathematics and Physics, Center for Particle Physics, Prague, Czech Republic
J. Stark
LPSC, Université Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut National Polytechnique de Grenoble, Grenoble, France
D.A. Stoyanova
Institute for High Energy Physics, Protvino, Russia
M. Strauss
University of Oklahoma, Norman, Oklahoma 73019, USA
L. Suter
The University of Manchester, Manchester M13 9PL, United Kingdom
P. Svoisky
University of Oklahoma, Norman, Oklahoma 73019, USA
M. Titov
CEA, Irfu, SPP, Saclay, France
V.V. Tokmenin
Joint Institute for Nuclear Research, Dubna, Russia
Y.-T. Tsai
University of Rochester, Rochester, New York 14627, USA
D. Tsybychev
State University of New York, Stony Brook, New York 11794, USA
B. Tuchming
CEA, Irfu, SPP, Saclay, France
C. Tully
Princeton University, Princeton, New Jersey 08544, USA
L. Uvarov
Petersburg Nuclear Physics Institute, St. Petersburg, Russia
S. Uvarov
Petersburg Nuclear Physics Institute, St. Petersburg, Russia
S. Uzunyan
Northern Illinois University, DeKalb, Illinois 60115, USA
R. Van Kooten
Indiana University, Bloomington, Indiana 47405, USA
W.M. van Leeuwen
Nikhef, Science Park, Amsterdam, the Netherlands
N. Varelas
University of Illinois at Chicago, Chicago, Illinois 60607, USA
E.W. Varnes
University of Arizona, Tucson, Arizona 85721, USA
I.A. Vasilyev
Institute for High Energy Physics, Protvino, Russia
A.Y. Verkheev
Joint Institute for Nuclear Research, Dubna, Russia
L.S. Vertogradov
Joint Institute for Nuclear Research, Dubna, Russia
M. Verzocchi
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
M. Vesterinen
The University of Manchester, Manchester M13 9PL, United Kingdom
D. Vilanova
CEA, Irfu, SPP, Saclay, France
P. Vokac
Czech Technical University in Prague, Prague, Czech Republic
H.D. Wahl
Florida State University, Tallahassee, Florida 32306, USA
M.H.L.S. Wang
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
J. Warchol
University of Notre Dame, Notre Dame, Indiana 46556, USA
G. Watts
University of Washington, Seattle, Washington 98195, USA
M. Wayne
University of Notre Dame, Notre Dame, Indiana 46556, USA
J. Weichert
Institut für Physik, Universität Mainz, Mainz, Germany
L. Welty-Rieger
Northwestern University, Evanston, Illinois 60208, USA
A. White
University of Texas, Arlington, Texas 76019, USA
D. Wicke
Fachbereich Physik, Bergische Universität Wuppertal, Wuppertal, Germany
M.R.J. Williams
Lancaster University, Lancaster LA1 4YB, United Kingdom
G.W. Wilson
University of Kansas, Lawrence, Kansas 66045, USA
M. Wobisch
Louisiana Tech University, Ruston, Louisiana 71272, USA
D.R. Wood
Northeastern University, Boston, Massachusetts 02115, USA
T.R. Wyatt
The University of Manchester, Manchester M13 9PL, United Kingdom
Y. Xie
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
R. Yamada
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
S. Yang
University of Science and Technology of China, Hefei, People’s Republic of China
T. Yasuda
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
Y.A. Yatsunenko
Joint Institute for Nuclear Research, Dubna, Russia
W. Ye
State University of New York, Stony Brook, New York 11794, USA
Z. Ye
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
H. Yin
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
K. Yip
Brookhaven National Laboratory, Upton, New York 11973, USA
S.W. Youn
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
J.M. Yu
University of Michigan, Ann Arbor, Michigan 48109, USA
J. Zennamo
State University of New York, Buffalo, New York 14260, USA
T.G. Zhao
The University of Manchester, Manchester M13 9PL, United Kingdom
B. Zhou
University of Michigan, Ann Arbor, Michigan 48109, USA
J. Zhu
University of Michigan, Ann Arbor, Michigan 48109, USA
M. Zielinski
University of Rochester, Rochester, New York 14627, USA
D. Zieminska
Indiana University, Bloomington, Indiana 47405, USA
L. Zivkovic
LPNHE, Universités Paris VI and VII, CNRS/IN2P3, Paris, France
January 25, 2013
###### Abstract
We present, in detail, a search for the standard model Higgs boson, \(H\), in final states with a charged lepton (electron or muon), missing energy, and two or more jets in data corresponding to 9.7 fb\({}^{-1}\) of integrated luminosity collected at a center of mass energy of \(\sqrt{s}\) = 1.96 TeV with the D0 detector at the Fermilab Tevatron \(p\bar{p}\) Collider. The search uses \(b\)-jet identification to categorize events for improved signal versus background separation and is sensitive to associated production of the \(H\) with a \(W\) boson, \(WH\to\ell\nu b\bar{b}\); gluon fusion with the Higgs decaying to \(W\) boson pairs, \(H\to WW\to\ell\nu jj\); and associated production with a vector boson where the Higgs decays to \(W\) boson pairs, \(VH\to VWW\to\ell\nu jjjj\) production (where \(V=W\) or \(Z\)). We observe good agreement between data and expected background. We test our method by measuring \(WZ\) and \(ZZ\) production with \(Z\to b\bar{b}\) and find production rates consistent with the standard model prediction. For a Higgs boson mass of 125 GeV, we set a 95% C.L. upper limit on the production of a standard model Higgs boson of 5.8\(\times\sigma_{\rm SM}\), where \(\sigma_{\rm SM}\) is the standard model Higgs boson production cross section, while the expected limit is 4.7\(\times\sigma_{\rm SM}\). We also interpret the data considering models with fourth generation fermions, or a fermiophobic Higgs boson.
pacs: 14.80.Bn, 13.85.Rm
FERMILAB-PUB-13-030-E
The D0 Collaboration
## I Introduction
The Higgs boson is the massive physical state that emerges from electroweak symmetry breaking in the Higgs mechanism [1; 2; 3]. This mechanism generates the masses of the weak gauge bosons and explains the fermion masses through their Yukawa couplings to the Higgs boson field. The mass of the Higgs boson (\(M_{H}\)) is a free parameter in the standard model (SM). Precision measurements of various SM electroweak parameters constrain \(M_{H}\) to be less than \(152\) GeV at the 95% C.L. [4; 5; 6]. Direct searches at the CERN \(e^{+}e^{-}\) Collider (LEP) [7] exclude \(M_{H}<114.4\) GeV at the 95% C.L. The ATLAS and CMS Collaborations, using \(pp\) collisions at the CERN LHC, exclude the mass range \(110<M_{H}<600\) GeV, except for a narrow region between 122 and 127 GeV [8; 9]. Both experiments observe a resonance at a mass of \(\approx 125\) GeV, primarily in the \(\gamma\gamma\) and \(ZZ\) final states, with a significance greater than 5 standard deviations (s.d.) that is consistent with SM Higgs boson production [10; 11]. The CDF and D0 Collaborations at the Fermilab Tevatron Collider report a combined analysis that excludes the region \(147<M_{H}<179\) GeV [12] and shows evidence at the 3 s.d. level for a particle decaying to \(b\bar{b}\), produced in association with a \(W\) or \(Z\) boson, consistent with SM \(WH/ZH\) production [13]. Demonstrating that the observed resonance is the SM Higgs boson requires also observing it at the predicted rate in the \(b\bar{b}\) final state, which is the dominant decay mode for \(M_{H}\lesssim 135\) GeV.
The dominant production process for the Higgs boson at the Tevatron Collider is gluon fusion (\(gg\to H\)), followed by the associated production of a Higgs boson with a vector boson (\(VH\)), then via vector boson fusion (\(VVqq^{\prime}\to Hqq^{\prime}\)). At masses below \(M_{H}\approx 135\) GeV, the Higgs boson mainly decays to a pair of \(b\) quarks, while for larger masses, the dominant decay is to a pair of \(W\) bosons. Because the \(H\to b\bar{b}\) process is difficult to distinguish from background at hadron colliders, it is more effective to search for the Higgs boson produced in association with a vector boson for this decay channel.
This Article presents a search by the D0 collaboration for the SM Higgs boson using events containing one isolated charged lepton (\(\ell=e\) or \(\mu\)), a significant imbalance in transverse energy (\(\not\!\!E_{T}\)), and two or more jets. It includes a detailed description of the \(WH\to\ell\nu b\bar{b}\) search, initially presented in Ref. [14] and used as an input to the result presented in Ref. [13], differing from and superseding that result due to an updated treatment of some systematic uncertainties as described in Sec. X below. The complete analysis comprises searches for the production and decay channels: \(WH\rightarrow\ell\nu b\bar{b}\), \(H\to WW^{*}\rightarrow\ell\nu jj\) (where \(j=u,d,s,c\)), and \(VH\to VWW^{*}\rightarrow\ell\nu jjjj\) (where \(V=W\) or \(Z\)). This search also considers contributions from \(ZH\) production and from the decay \(H\to ZZ\) when one of the charged leptons from \(Z\to\ell\ell\) decay is not identified in the detector. We optimize the analysis by subdividing data into mutually exclusive subchannels based on charged lepton flavor, jet multiplicity, and the number and quality of candidate \(b\) quark jets. This search also extends the most recent D0 \(WH\to\ell\nu b\bar{b}\) search [14] by adding subchannels with looser \(b\)-quark jet identification requirements and subchannels with four or more jets. These additional subchannels are primarily sensitive to \(H\to WW^{*}\rightarrow\ell\nu jj\) and \(VH\to VWW^{*}\rightarrow\ell\nu jjjj\) production and extend the reach of our search to \(M_{H}=200\) GeV. We present a measurement of \(VZ\) production with \(Z\to b\bar{b}\) as a cross check on our methodology in Sec. XI. In addition to our standard model interpretation, we consider interpretations of our result in models with a fourth generation of fermions, and models with a fermiophobic Higgs as described in Sec. XIII.
Several other searches for \(WH\to\ell\nu b\bar{b}\) production have been reported at a \(p\bar{p}\) center-of-mass energy of \(\sqrt{s}=1.96\) TeV, most recently by the CDF Collaboration [15]. The results presented here supersede previous searches by the D0 Collaboration, presented in Refs. [16; 17; 18; 19; 20], which used subsamples of the data presented in this Article. They also supersede a previous search for Higgs boson production in the \(\ell\nu jj\) final state by the D0 Collaboration [21].
## II The D0 Detector
This analysis relies on all major components of the D0 detector: the tracking detectors, calorimeters, and muon identification system. These systems are described in detail in Refs. [22; 23; 24; 25].
Closest to the interaction point is the silicon microstrip tracker (SMT) followed by the central scintillating fiber tracker (CFT). These detector subsystems are located inside a 2 T magnetic field provided by a superconducting solenoid. They track charged particles and are used to reconstruct primary and secondary vertices for pseudorapidities [26] of \(|\eta|<3\). Outside the solenoid is the liquid argon/uranium calorimeter consisting of one central calorimeter (CC) covering \(|\eta|\lesssim 1\) and two end calorimeters (EC) extending coverage to \(|\eta|\approx 4\). Each calorimeter contains an innermost finely segmented electromagnetic layer followed by two hadronic layers, with fine and coarse segmentation, respectively. The main functions of the calorimeters are to measure energies and help identify electrons, photons, and jets using coordinate information of significant energy clusters. They also give a measure of the \(\not\!\!E_{T}\). A preshower detector between the solenoidal magnet and central calorimeter consists of a cylindrical radiator and three layers of scintillator strips covering the region \(|\eta|<1.3\). The outermost system provides muon identification. It is divided into a central section that covers \(|\eta|<1\) and forward sections that extend coverage out to \(|\eta|\approx 2\). The muon system is composed of three layers of drift tubes and scintillation counters, one layer before and two layers after a 1.8 T toroidal magnet.
## III Event Trigger
Events in the electron channel are triggered by a logical OR of triggers that require an electromagnetic object and jets, as described in Ref. [20]. Trigger efficiencies are modeled in the Monte Carlo (MC) simulation by applying the trigger efficiency, measured in data, as an event weight. This efficiency is parametrized as a function of electron \(\eta\), azimuthal angle \(\phi\) [27], and transverse momentum \(p_{T}\). For the events selected in our analysis, these triggers have an efficiency of \((90-100)\%\) depending on the trigger and the region of the detector.
The muon channel uses an inclusive trigger approach, based on the logical OR of all available triggers, except those containing lifetime-based requirements that can bias the performance of \(b\)-jet identification. To determine the trigger efficiency, we compare data events selected with a well-modeled logical OR of the single muon and muon+jets triggers (\(T_{\mu\text{OR}}\)), which are about 70% efficient, to events selected using all triggers. The increase in event yield in the inclusive trigger sample is used to determine an inclusive trigger correction for the MC trigger efficiency, \(P_{\mathrm{corr}}\), relative to the \(T_{\mu\text{OR}}\) trigger ensemble:
\[P_{\mathrm{corr}}=\frac{\left(N_{\text{incl}}^{\text{data}}-N_{\text{incl}}^{\text{MJ}}\right)-\left(N_{T_{\mu\text{OR}}}^{\text{data}}-N_{T_{\mu\text{OR}}}^{\text{MJ}}\right)}{N_{\varepsilon=1}^{\text{MC}}},\] (1)
where the numerator is the difference between the number of data events in the inclusive trigger sample and the \(T_{\mu\text{OR}}\) trigger sample, after subtracting instrumental multijet (MJ) backgrounds, and the denominator is the number of MC events (after applying the event selection and normalization to data described in Sec. VIII and the MC corrections described in Sec. VI.1) with the trigger efficiency set to 1. The total trigger efficiency estimate for events in the muon channel is the \(T_{\mu\text{OR}}\) efficiency plus \(P_{\mathrm{corr}}\), limited to be \(\leq 1\).
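For concreteness, the following minimal Python sketch reproduces the arithmetic of Eq. (1) and the capping of the total efficiency at unity; the function and variable names (e.g. `inclusive_trigger_correction`, `n_mc_eff1`) are illustrative assumptions and are not taken from the D0 analysis software.

```python
# Illustrative sketch (not the D0 analysis code) of the inclusive-trigger
# correction described in Eq. (1) and the capped total trigger efficiency.

def inclusive_trigger_correction(n_data_incl, n_mj_incl,
                                 n_data_muor, n_mj_muor,
                                 n_mc_eff1):
    """P_corr: MJ-subtracted gain of the inclusive trigger set over T_muOR,
    normalized to the MC yield evaluated with trigger efficiency set to 1."""
    numerator = (n_data_incl - n_mj_incl) - (n_data_muor - n_mj_muor)
    return numerator / n_mc_eff1

def total_muon_trigger_efficiency(eff_muor, p_corr):
    """Total per-event trigger efficiency, limited to be <= 1."""
    return min(eff_muor + p_corr, 1.0)

# Example with made-up yields:
p_corr = inclusive_trigger_correction(1300.0, 150.0, 1000.0, 120.0, 2000.0)
print(total_muon_trigger_efficiency(0.70, p_corr))
```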
Triggers based on jets and \(\not\!\!E_{T}\) make the most significant contributions to the inclusive set of triggers beyond those included in the well-modeled \(T_{\mu\text{OR}}\) trigger set. To account for these contributions, the correction from \(T_{\mu\text{OR}}\) triggers to the inclusive set of triggers is parametrized as a function of the scalar sum of the transverse momenta of all jets, \(H_{T}\), and the \(\not\!\!E_{T}\), and is derived for separate regions in muon \(\eta\).
For \(|\eta|<1.0\), events are dominantly triggered by single muon triggers, while for \(|\eta|>1.6\), triggers based on the logical OR of muon+jets triggers prevail. The third region, \(1.0<|\eta|<1.6\), is a mixture of single muon and muon+jets triggers. In the \(|\eta|<1.0\) and \(1.0<|\eta|<1.6\) regions, the detector support structure allows only partial coverage by the muon system. This impacts the muon trigger efficiency in the region \(-2<\phi<-1.2\). In these regions, we therefore derive separate corrections. The inclusive trigger approach results in a gain of about 30% in efficiency over using only the single muon and muon+jets triggers. Examples of these corrections, \(P_{\mathrm{corr}}\), are shown in Fig. 1.
Figure 1: (color online) Data-derived muon trigger correction to account for the resulting efficiency gain in moving from single muon and muon+jets triggers to inclusive triggers, as a function of \(H_{T}\) for \(|\eta|<1.0\), shown (a) for events with \(\not\!\!E_{T}<50\) GeV and (b) for events with \(\not\!\!E_{T}\geq 50\) GeV. The black circles show the correction when the muon is in the region of \(\phi\) (\(-2<\phi<-1.2\)) where there is a gap in the muon coverage for detector supports, and the red triangles show the correction elsewhere in \(\phi\).
## IV Identification of Leptons, Jets, and \(\not\!\!E_{T}\)
To reconstruct the candidate \(W(\to\ell\nu)\) boson, our selected events are required to contain a single identified electron or muon together with significant \(\not\!\!E_{T}\). To ensure statistical independence from channels that contain more than one lepton, we do not consider events with more than one electron or muon. Two or more jets are also required in order to study \(WH\to\ell\nu b\bar{b}\), \(H\to WW\to\ell\nu jj\), and \(VH\to VWW\to\ell\nu jjjj\) production. Two sets of lepton identification criteria are applied for each lepton channel in order to form a “loose” sample, used to estimate the multijet background from data as described in Sec. VII, and a “tight” sample used to perform the search. The event selection procedure, prior to \(b\)-jet categorization, is similar to that described in Ref. [20] and is detailed below.
Electrons with \(p_{T}>15\) GeV are selected in the pseudorapidity regions \(|\eta|<1.1\) and \(1.5<|\eta|<2.5\), corresponding to the CC and EC, respectively. A multivariate discriminant is used to identify electrons. The discriminant is based on a boosted decision tree [28; 29; 30; 31; 32] (BDT) as implemented in the tmva package [33] with input variables that are listed below. The BDTs are discussed in more detail in Sec. IX. The loose and tight electron samples are defined by different requirements on the response of this multivariate discriminant that are chosen to retain high electron selection efficiencies while suppressing backgrounds at differing rates.
Leptons coming from the leptonic decays of \(W\) bosons tend to be isolated from jets. Isolated electromagnetic showers are identified within a cone in \(\eta\)-\(\phi\) space of \(\mbox{$\Delta\mathcal{R}$}=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}<0.4\) [34]. In the CC (EC), an electromagnetic shower is required to deposit \(97\%\)\((90\%)\) of its total energy within a cone of radius \(\mbox{$\Delta\mathcal{R}$}=0.2\) in the electromagnetic calorimeter. The showers must have transverse and longitudinal distributions that are consistent with those expected from electrons. In the CC region, a reconstructed track, isolated from other tracks, is required to have a trajectory that extrapolates to the electromagnetic (EM) shower. The isolation criteria restrict the sum of the scalar \(p_{T}\) of tracks with \(p_{T}>0.5\) GeV within a hollow cone of radius \(0.05<\mbox{$\Delta\mathcal{R}$}<0.4\) surrounding the electron candidate to be less than 2.5 GeV. The BDT is constructed using additional information such as: the number and scalar \(p_{T}\) sum of tracks in the cone of radius \(\mbox{$\Delta\mathcal{R}$}<0.4\) surrounding the candidate cluster, track-to-cluster-matching probability, the ratio of the transverse energy of the cluster to the transverse momentum of the track associated with the shower, the EM energy fraction, lateral and longitudinal shower shape characteristics, as well as the number of hits in the various layers of the tracking detector, and information from the central preshower detector. The discriminants are trained using \(Z/\gamma^{*}\to ee\) data events.
We select muons with \(p_{T}>15\) GeV and \(|\eta|<2.0\). They are required to have reconstructed track segments in layers of the muon system both before and after the toroidal magnet, except where detector support structure limits muon system coverage, for which the presence of track segments in any layer is sufficient. The local muon system track must be spatially matched to a track in the central tracker.
Muons originating from semi-leptonic decays of heavy flavored hadrons are typically not isolated due to jet fragmentation and secondary particles from the partial hadronic decays. We employ a loose muon definition, requiring minimal separation of \(\mbox{$\Delta\mathcal{R}$}(\mu,j)>0.5\) between the muon and any jet, while the tight identification has additional isolation requirements. For tight muons, the scalar sum of the \(p_{T}\) of tracks with \(\mbox{$\Delta\mathcal{R}$}<0.5\) around the muon candidate is required to be less than \(0.4\times p_{T}^{\mu}\). Furthermore, the transverse energy deposits in the calorimeter in a hollow cone of \(0.1<\mbox{$\Delta\mathcal{R}$}<0.4\) around the muon must be less than \(0.12\times p_{T}^{\mu}\). To suppress cosmic ray muons, scintillator timing information is used to require hits in the detector to coincide with a beam crossing.
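As an illustration of the isolation logic described above, the sketch below encodes the loose and tight muon requirements; the event-record field names (e.g. `track_pt_sum_dr05`, `calo_et_hollow_01_04`) are assumptions made for the example rather than the actual analysis data format.

```python
# Illustrative sketch (not the D0 analysis code) of the loose and tight muon
# isolation requirements quoted above; the isolation sums are assumed to be
# precomputed and attached to the muon record.
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def is_loose_muon(mu, jets):
    """Loose: muon separated from every jet by dR(mu, jet) > 0.5."""
    return all(delta_r(mu["eta"], mu["phi"], j["eta"], j["phi"]) > 0.5
               for j in jets)

def is_tight_muon(mu, jets):
    """Tight: loose + track-sum and hollow-cone calorimeter isolation."""
    track_iso_ok = mu["track_pt_sum_dr05"] < 0.4 * mu["pt"]
    calo_iso_ok = mu["calo_et_hollow_01_04"] < 0.12 * mu["pt"]
    return is_loose_muon(mu, jets) and track_iso_ok and calo_iso_ok

mu = {"pt": 35.0, "eta": 0.4, "phi": 1.1,
      "track_pt_sum_dr05": 3.0, "calo_et_hollow_01_04": 2.5}
jets = [{"eta": -1.2, "phi": -2.0}]
print(is_loose_muon(mu, jets), is_tight_muon(mu, jets))
```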
To reduce backgrounds from \(Z/\gamma^{*}\to\ell\ell+\)jets and \(t\bar{t}\) production, we reject events containing more than one tight-isolated charged lepton.
Jets are reconstructed in the calorimeters in the region \(|\eta|<2.5\) using an iterative midpoint cone algorithm, with a cone size of \(\mbox{$\Delta\mathcal{R}$}=0.5\) [35]. To minimize the possibility that jets are caused by noise or spurious energy deposits, the fraction of the total jet energy contained in the electromagnetic layers of the calorimeter is required to be between 5% and 95%, and the energy fraction in the coarse hadronic layers of the calorimeter is required to be less than 40%. To suppress noise, different energy thresholds are also applied to clustered and to isolated cells [36]. The energy of the jets is scaled by applying a correction determined from \(\gamma+\)jet events using the same jet-finding algorithm. This scale correction accounts for additional energy (e.g., residual energy from previous bunch crossings and energy from multiple \(p\mbox{$\bar{p}$}\) interactions) that is sampled within the finite cone size, the calorimeter energy response to particles produced within the jet cone, and energy flowing outside the cone or moving into the cone via detector effects [36]. We also apply an additional correction that accounts for the flavor composition of jets [37].
Jet energy calibration and resolution are adjusted in simulated events to match those measured in data. This correction is derived from \(Z(\to ee)+\)jet events from the \(p_{T}\) imbalance between the \(Z\) boson and the recoiling jet in MC simulation when compared to that observed in data, and applied to jet samples in MC events. Differences in reconstruction thresholds in simulation and data are also taken into account, and the jet identification efficiency and jet resolution are adjusted in the simulation to match those measured in data. All selected jets are required to have \(p_{T}>20\) GeV and \(|\eta|<2.5\). We require that jets originate from the primary \(p\bar{p}\) vertex (PV), such that each selected jet is matched to at least two tracks with \(p_{T}>0.5\) GeV that have at least one hit in the SMT detector and a distance of closest approach with respect to the PV of less than 0.5 cm in the transverse plane and less than 1 cm along the beam axis (\(z\)). Interaction vertices are reconstructed from tracks that have \(p_{T}>0.5\) GeV with at least two hits in the SMT. The primary vertex is the reconstructed vertex with the highest average \(p_{T}\) of its tracks. Vertex reconstruction is described in more detail in Ref. [38]. We also require that the PV be reconstructed within \(z_{PV}=\pm 60~{}\rm cm\) of the center of the detector.
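A minimal sketch of the jet quality, kinematic, and vertex-confirmation requirements quoted above is given below; the jet and track field names are illustrative assumptions, not the analysis data format.

```python
# Illustrative sketch (not the D0 analysis code) of the jet quality, kinematic,
# and vertex-confirmation requirements described above.

def passes_jet_selection(jet):
    quality = (0.05 < jet["em_fraction"] < 0.95 and
               jet["coarse_hadronic_fraction"] < 0.40)
    kinematics = jet["pt"] > 20.0 and abs(jet["eta"]) < 2.5
    # Vertex confirmation: at least two tracks with pT > 0.5 GeV, at least one
    # SMT hit, and small distance of closest approach to the primary vertex
    # (< 0.5 cm transverse, < 1 cm along the beam axis).
    good_tracks = [t for t in jet["tracks"]
                   if t["pt"] > 0.5 and t["smt_hits"] >= 1
                   and abs(t["dca_xy"]) < 0.5 and abs(t["dca_z"]) < 1.0]
    return quality and kinematics and len(good_tracks) >= 2

jet = {"pt": 42.0, "eta": 1.3, "em_fraction": 0.6,
       "coarse_hadronic_fraction": 0.1,
       "tracks": [{"pt": 1.2, "smt_hits": 3, "dca_xy": 0.01, "dca_z": 0.1},
                  {"pt": 0.8, "smt_hits": 2, "dca_xy": 0.02, "dca_z": 0.2}]}
print(passes_jet_selection(jet))
```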
The \(\not\!\!E_{T}\) is calculated from individual calorimeter cell energies in the electromagnetic and fine hadronic sections of the calorimeter and is required to satisfy \(\mbox{$\not\!\!E_{T}$}>15\) GeV for the electron channel and \(\mbox{$\not\!\!E_{T}$}>20\) GeV for the muon channel. Energy from the coarse hadronic layers that is contained within a jet is also included in the \(\not\!\!E_{T}\) calculation. A correction for the presence of any muons and all energy corrections applied to electrons and jets are propagated to the value of \(\not\!\!E_{T}\).
## V Tagging of \({\bm{b}}\)-Quark Jets
The \(b\)-tagging algorithm for identifying jets originating from \(b\) quarks is based on a multivariate discriminant using a combination of variables sensitive to the presence of tracks or secondary vertices displaced significantly in the \(x\)-\(y\) plane from the \(p\mbox{$\bar{p}$}\) interaction vertex. This algorithm provides improved performance over the neural network algorithm described in Ref. [38].
Jets considered by the \(b\)-tagging algorithm are required to be “taggable,” i.e., contain at least two tracks with each having at least one hit in the SMT. The efficiency of this requirement accounts for variations in detector acceptance and track reconstruction efficiencies at different locations of the PV prior to the application of the \(b\)-tagging algorithm, and depends on the \(z\) position of the PV and the \(p_{T}\) and \(\eta\) of the jet. For jets that pass through the geometrical acceptance of the tracking system, this efficiency is typically about 97%. The efficiency for \(b\)-tagging is determined with respect to taggable jets. The correction for taggability is measured in the selected data sample, while the corrections for \(b\)-tagging are determined in an independent heavy-flavor jet enriched sample of events that include a jet containing a muon, as described in Ref. [38]. The efficiency for jets to be taggable and to satisfy \(b\)-tagging requirements in the simulation is corrected to reproduce the respective efficiencies in data.
We define six independent tagging samples with zero, one loose, one tight, two loose, two medium, or two tight \(b\)-tagged jets. An inclusive “pretag” sample is also considered for parts of this analysis. Events with no jets satisfying the \(b\)-tagging criteria are included in the zero \(b\)-tag sample. If exactly one jet is \(b\)-tagged, and the \(b\)-identification discriminant output for that jet, \(b_{\text{ID}}^{j_{i}}\), satisfies the tight selection threshold (\(b_{\text{ID}}^{j_{i}}>0.15\)), that event is considered part of the one tight \(b\)-tag sample. Events with exactly one \(b\)-tagged jet that fails the tight selection threshold, but passes the loose selection threshold (\(b_{\text{ID}}^{j_{i}}>0.02\)) are included in the one loose \(b\)-tag sample. Events with two or more \(b\)-tagged jets are assigned to either the two loose \(b\)-tags, two medium \(b\)-tags, or two tight \(b\)-tags category, depending on the value of the average \(b\)-identification discriminant of the two jets with the highest discriminant values, i.e., the double tight category is required to satisfy \((b_{\text{ID}}^{j_{1}}+b_{\text{ID}}^{j_{2}})/2>0.55\); the medium category is \(0.35<(b_{\text{ID}}^{j_{1}}+b_{\text{ID}}^{j_{2}})/2\leq 0.55\); and the loose category is \(0.02<(b_{\text{ID}}^{j_{1}}+b_{\text{ID}}^{j_{2}})/2\leq 0.35\) (see Fig. 2). The operating point for the loose (medium, tight) threshold has an identification efficiency of 79% (57%, 47%) for individual \(b\) jets, averaged over selected jet \(p_{T}\) and \(\eta\) distributions, with a \(b\)-tagging misidentification rate of 11% (0.6%, 0.15%) for light quark jets (\(lf\)), calculated by the method described in Ref. [38].
Figure 2: (color online) Average of the \(b\)-identification discriminant outputs of each jet in events with two jets.
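The assignment of events to the six \(b\)-tag categories can be summarized in a few lines; the following sketch uses the thresholds quoted above and is an illustration only, not the analysis code.

```python
# Illustrative sketch (not the D0 analysis code) of the assignment of an event
# to one of the six b-tag categories from the per-jet b-identification
# discriminant outputs, using the loose/tight thresholds quoted in the text.

LOOSE, TIGHT = 0.02, 0.15

def btag_category(b_id_outputs):
    """b_id_outputs: b-ID discriminant values of the taggable jets in the event."""
    tagged = sorted((b for b in b_id_outputs if b > LOOSE), reverse=True)
    if len(tagged) == 0:
        return "0 b-tags"
    if len(tagged) == 1:
        return "1 tight b-tag" if tagged[0] > TIGHT else "1 loose b-tag"
    avg = (tagged[0] + tagged[1]) / 2.0  # two highest-discriminant jets
    if avg > 0.55:
        return "2 tight b-tags"
    if avg > 0.35:
        return "2 medium b-tags"
    return "2 loose b-tags"

print(btag_category([0.01, 0.9, 0.4]))   # -> "2 tight b-tags"
print(btag_category([0.10]))             # -> "1 loose b-tag"
```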
## VI Monte Carlo Simulation
We account for all Higgs boson production and decay processes that can lead to a final state containing exactly one charged well isolated lepton, \(\not\!\!E_{T}\), and two or more jets. The signal processes considered are:
* Associated production of a Higgs boson with a vector boson where the Higgs boson decays to \(b\bar{b}\), \(c\bar{c}\), \(\tau\tau\), or \(VV\). The associated weak vector boson decays leptonically in the case of \(H\to b\bar{b}\) and either leptonically or hadronically in the case of \(H\to WW\). Contributions from \(Z(\to\ell\ell)H(\to b\bar{b})\) production arise from identifying only one charged lepton in the detector, with the other contributing to the \(\not\!\!E_{T}\).
* Higgs boson production via gluon fusion with the subsequent decay \(H\to VV\), where one weak vector boson decays leptonically (with exactly one identified lepton).
* Higgs boson production via vector boson fusion with the subsequent decay \(H\to VV\), where one weak vector boson decays leptonically (with exactly one identified lepton).
Various SM processes can mimic expected signal signatures, including \(V+\)jets, diboson (\(VV\)), MJ, \(t\bar{t}\), and single top quark production.
All signal processes and most of the background processes are estimated from MC simulation, while the MJ background is evaluated from data, as described in Sec. VII. We use pythia [39] to simulate all signal processes and diboson processes. The \(V+\)jets and \(t\bar{t}\) samples are simulated with the alpgen [40] MC generator interfaced to pythia for parton showering and hadronization, while the singletop event generator [41; 42] interfaced to pythia is used for single top quark events. To avoid overestimating the probability of further partonic emissions in pythia, the MLM factorization (“matching”) scheme [43] is used. All of these simulations use CTEQ6L1 [44; 45] parton distribution functions (PDF).
A full geant-based [46] detector simulation is used to process signal and background events. To account for residual activity from previous beam crossings and contributions from the presence of additional \(p\bar{p}\) interactions, events from randomly selected beam crossings with the same instantaneous luminosity profile as the data are overlaid on the simulated events. All events are then reconstructed using the same software as used for data.
The signal cross sections and branching fractions are normalized to the SM predictions [12]. The \(WH\) and \(ZH\) cross sections are calculated at next-to-next-to-leading order (NNLO) [47], with MSTW2008 NNLO PDFs [48]. The gluon fusion process uses the NNLO+next-to-next-to-leading log (NNLL) calculation [49], and the vector boson fusion process is calculated at NNLO in QCD [50]. The Higgs boson decay branching fractions are obtained with hdecay [51; 52]. We use NLO cross sections to normalize single top quark [53] and diboson [54; 55] production, while we use an approximate NNLO calculation for \(t\bar{t}\) production [56]. The \(p_{T}\) of the \(Z\) boson in \(Z+\)jets events is corrected to match that observed in data [57]. The \(p_{T}\) of the \(W\) boson in \(W\)+jets events is corrected using the same dependence but taking into account the differences between the \(p_{T}\) spectra of the \(Z\) and \(W\) bosons in NNLO QCD [58]. Additional scale factors to account for higher order terms in the alpgen MC for the \(V\)+heavy flavor jets, \(V+hf\), are obtained from mcfm [59; 55]. The \(V\)+jets processes are then normalized to data for each lepton flavor and jet multiplicity separately as described in Sec. VIII.
### MC Reweighting
Motivated by previous comparisons of alpgen with data [60] and with other event generators [43], we develop corrections to \(W+\text{jets}\) and \(Z+\text{jets}\) MC samples to correct for the shape discrepancies in kinematic distributions between data and simulation. The corrections are derived based on the direct comparison between data and MC samples prior to the application of \(b\)-tagging, where any contamination from signal is very small.
To improve the description of jet directions, we correct the \(\eta\) distributions of the leading and second leading jets in \(W/Z\)+jets events. The correction function is a fourth-order polynomial determined from the ratio of the \(V\)+jets events in MC and data minus non-\(V\)+jets backgrounds. The modeling of the lepton \(\eta\) in \(W\)+jets events is adjusted by applying a second-order polynomial correction. Correlated discrepancies observed in the leptonically decaying \(W\) boson transverse momentum, \(p_{T}^{W}\), and the jet angular separation, \(\mbox{$\Delta\mathcal{R}$}(j_{1},j_{2})\), are corrected through two reweighting functions in the two-dimensional \(\Delta\mathcal{R}\)-\(p_{T}^{W}\) plane [20]. The \(p_{T}^{W}\) reweighting is applied only to \(W+\text{jets}\) events, while the \(\Delta\mathcal{R}\) reweighting is applied to both \(W+\text{jets}\) and \(Z+\text{jets}\) events. Each of these corrections is designed to change differential distributions, but to preserve normalization. Corrections are on the order of a few percent in the highly populated region of each distribution and may exceed 10% for extreme values of each distribution.
All corrections are derived in events selected with muon+jets triggers to minimize uncertainties due to contamination from MJ events, and are applied to both the electron and muon channels. Additional \(p_{T}^{W}\), \(\Delta\mathcal{R}\), and lepton \(\eta\) corrections and corresponding systematic uncertainties are determined from events selected with inclusive muon triggers and are applied to events containing muons, accounting for variations in modeling distributions of the inclusively triggered events.
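The reweighting strategy can be illustrated with a short sketch that applies a polynomial shape weight and then rescales so the normalization is preserved; the polynomial coefficients below are placeholders, not the fitted D0 corrections.

```python
# Illustrative sketch (not the D0 analysis code) of a normalization-preserving
# shape correction of the kind described above: a polynomial weight in a
# kinematic variable (here, a stand-in for the leading-jet eta), rescaled so
# that the total MC yield is unchanged.
import numpy as np

def reweight_preserving_norm(values, weights, poly_coeffs):
    """Multiply event weights by a polynomial of `values`, then rescale so the
    sum of weights (the normalization) is preserved."""
    shape_w = np.polyval(poly_coeffs, values)
    new_weights = weights * shape_w
    new_weights *= weights.sum() / new_weights.sum()
    return new_weights

rng = np.random.default_rng(0)
jet_eta = rng.uniform(-2.5, 2.5, size=10000)
weights = np.ones_like(jet_eta)
# Placeholder 4th-order polynomial (a data/MC ratio fit would supply this).
coeffs = [0.003, 0.0, -0.02, 0.0, 1.02]
new_w = reweight_preserving_norm(jet_eta, weights, coeffs)
print(weights.sum(), new_w.sum())  # equal up to floating-point precision
```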
## VII Multijet Background
The MJ background, events where a jet is misidentified as a lepton, is determined from the data prior to the application of \(b\)-tagging, using a method similar to the one used in Ref. [20]. This method involves applying event weights that depend on the relative efficiency, \(\varepsilon_{\text{LT}}^{\ell}\), for a lepton passing loose requirements to subsequently pass the tight requirements, and on a similar relative probability, \(P_{\text{LT}}^{\text{MJ}}\), for an MJ event to pass these sequential selections. An MJ template is constructed by selecting events from data in which the lepton passes the loose isolation requirement, but fails the tight requirement, as described in Sec. IV. Each event in the MJ template is weighted by
\[w_{\text{MJ}}=\frac{P_{\text{LT}}^{\text{MJ}}}{1-P_{\text{LT}}^{\text{MJ}}},\] (2)
where \(P^{\text{MJ}}_{\text{LT}}\) is a function of the event kinematics. Since the MJ template contains a contribution from events with real leptons originating from leptonic decays of \(W/Z\) bosons, we correct the normalization of the \(V\)+jets MC using the event weight
\[w_{\text{VJ}}=1-\frac{P_{\text{LT}}^{\text{MJ}}\left(1-\varepsilon_{\text{LT}} ^{\ell}\right)}{\varepsilon_{\text{LT}}^{\ell}\left(1-P_{\text{LT}}^{\text{MJ} }\right)},\] (3)
where \(\varepsilon^{\ell}_{\text{LT}}\) and \(P^{\text{MJ}}_{\text{LT}}\) are functions of event kinematics. The efficiencies \(\varepsilon^{\ell}_{\text{LT}}\) are functions of lepton \(p_{T}\), and they are determined from \(Z/\gamma^{*}\rightarrow\ell\ell\) events. The probabilities \(P^{\text{MJ}}_{\text{LT}}\) are determined in the region \(5<\mbox{$\not\!\!E_{T}$}<15\) GeV from the measured ratio of the number of events with tight leptons and those with loose leptons, after correcting each sample for the expected MC contribution from real leptons in the specific kinematic interval. Electron channel probabilities are parametrized in \(p_{T}\), calorimeter detector \(\eta\), and \(\min\Delta\phi(\mbox{$\not\!\!E_{T}$},j)\), while probabilities in the muon channel are parametrized in \(p_{T}\) for different regions in muon detector \(\eta\) and \(\Delta\phi(\mbox{$\not\!\!E_{T}$},\mu)\).
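A minimal sketch of the weights of Eqs. (2) and (3) is shown below, with illustrative values of \(P^{\text{MJ}}_{\text{LT}}\) and \(\varepsilon^{\ell}_{\text{LT}}\); it is not the analysis implementation.

```python
# Illustrative sketch (not the D0 analysis code) of the event weights in
# Eqs. (2) and (3), used to build the multijet template from loose-not-tight
# data events and to correct the V+jets MC normalization.

def w_mj(p_lt_mj):
    """Weight for a loose-not-tight data event entering the MJ template."""
    return p_lt_mj / (1.0 - p_lt_mj)

def w_vj(p_lt_mj, eps_lt_lepton):
    """Weight correcting V+jets MC for real leptons in the MJ template."""
    return 1.0 - (p_lt_mj * (1.0 - eps_lt_lepton)) / (
        eps_lt_lepton * (1.0 - p_lt_mj))

# Example with illustrative probabilities:
print(w_mj(0.3))        # ~0.429
print(w_vj(0.3, 0.85))  # ~0.924
```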
## VIII Event Selection
Events are required to have one isolated charged lepton, large \(\not\!\!E_{T}\), and two or more jets, as described in Sec. IV. To suppress MJ backgrounds, events must satisfy the additional requirement that \(M_{T}^{W}>40\ \mathrm{GeV}-0.5~{}\times\mbox{$\not\!\!E_{T}$}\), where \(M_{T}^{W}\) is the transverse mass [61] of the \(W\) boson. We then perform the final normalization of the \(V\)+jets and MJ backgrounds via a simultaneous fit to data in the \(M_{T}^{W}\) distribution after subtracting the other SM background predictions from the data as described in Refs. [14; 19; 20]. The distribution of \(M_{T}^{W}\) after this normalization procedure is shown in Fig. 3(a). We perform separate fits for each lepton flavor and jet multiplicity category before dividing events into categories based on the number and quality of identified \(b\) jets, as described in Sec. V. All events passing these selection criteria constitute the pretag sample, and each pretag event also belongs to exactly one of the six independent \(b\)-tag categories. Only the zero and one-loose \(b\)-tag categories are considered when searching for the signal in events with four or more jets because \(t\bar{t}\) production dominates the small amount of signal present in higher \(b\)-tag categories.
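The kinematic preselection, including the triangular \(M_{T}^{W}\) requirement, can be sketched as follows; the thresholds are taken from the text and the helper function names are illustrative.

```python
# Illustrative sketch (not the D0 analysis code) of the preselection described
# above: one lepton, missing transverse energy above the channel-dependent
# threshold, at least two jets, and the cut M_T^W > 40 GeV - 0.5 * MET.
import math

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(lep_phi - met_phi)))

def passes_preselection(channel, lep_pt, lep_phi, met, met_phi, n_jets):
    met_cut = 15.0 if channel == "electron" else 20.0
    if met <= met_cut or n_jets < 2:
        return False
    mtw = transverse_mass(lep_pt, lep_phi, met, met_phi)
    return mtw > 40.0 - 0.5 * met

print(passes_preselection("muon", 35.0, 0.2, 30.0, 2.8, 2))
```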
The expected number of events from each signal and background category is compared to the observed data for each \(b\)-jet identification category for events with two jets, three jets, and four or more jets in Tables 1, 2, and 3, respectively. Selected kinematic distributions are shown for all selected events in Figs. 3 and 4, and the dijet invariant mass for events with two jets is shown for all \(b\)-tag categories in Figs. 5 and 6. In all plots, data points are shown with error bars that reflect the statistical uncertainty only. Discrepancies in data-MC agreement are within our systematic uncertainties described in Sec. X.
Figure 3: (color online) Distributions for all selected events with two jets of (a) transverse mass of the lepton-\(\not\!\!E_{T}\) system, (b) charged lepton \(p_{T}\), and (c) \(\not\!\!E_{T}\). The signal is multiplied by 1000. Overflow events are added to the last bin.
Figure 4: (color online) Distributions for all selected events with two jets of (a) leading jet \(p_{T}\), (b) second-leading jet \(p_{T}\), and (c) \(\Delta\mathcal{R}\) between the leading and second-leading jets. The signal is multiplied by 1000. Overflow events are added to the last bin.
Figure 5: (color online) Invariant mass of the leading and second-leading jets in events with two jets and (a) zero \(b\)-tags, (b) one loose \(b\)-tag, and (c) one tight \(b\)-tag. The signal is multiplied by 1000, 500, and 200, respectively. Overflow events are added to the last bin.
Figure 6: (color online) Invariant mass of the leading and second-leading jets in events with two jets and (a) two loose \(b\)-tags, (b) two medium \(b\)-tags, and (c) two tight \(b\)-tags. The signal is multiplied by 200, 50, and 50, respectively. Overflow events are added to the last bin.
| Pretag | 0 b-tags | 1 loose b-tag | 1 tight b-tag | 2 loose b-tags | 2 med. b-tags | 2 tight b-tags
---|---|---|---|---|---|---|---
VH→ℓνb¯b | 37.3 | 6.4 | 4.0 | 11.6 | 3.2 | 4.6 | 7.7
H→VV→ℓνjj | 24.7 | 18.8 | 3.9 | 1.8 | 0.3 | 0.07 | 0
VH→VVV→ℓνjjjj | 13.0 | 9.3 | 2.3 | 1.2 | 0.3 | 0.04 | 0.01
Diboson | 5686 | 4035 | 968 | 535 | 109 | 42 | 38
V+(g,u,d,s)-jets | 182 271 | 148 686 | 26 421 | 6174 | 1762 | 132 | 13
V+(b¯b/c¯c) | 27 443 | 15 089 | 4872 | 5236 | 978 | 691 | 691
top (t¯t \+ single top) | 3528 | 758 | 455 | 1289 | 247 | 333 | 462
Multijet | 58 002 | 43 546 | 9316 | 3700 | 946 | 298 | 195
Total expectation | 276 930 | 212 114 | 42 032 | 16 935 | 4043 | 1496 | 1400
Total uncertainty | ± 14 998 | ± 11 352 | ± 2438 | ± 1696 | ± 362 | ± 117 | ± 175
Observed events | 276 929 | 211 169 | 42 774 | 16 406 | 4057 | 1358 | 1165
Table 1: Observed number of events in data and expected number of events from
each signal and background source (where V=W,Z) for events with exactly two
jets. The expected signal is quoted at MH=125 GeV. The total background
uncertainty includes all sources of systematic uncertainty added in
quadrature.
| Pretag | 0 b-tags | 1 loose b-tag | 1 tight b-tag | 2 loose b-tags | 2 med. b-tags | 2 tight b-tags
---|---|---|---|---|---|---|---
VH→ℓνb¯b | 8.6 | 1.3 | 1.0 | 2.4 | 0.9 | 1.1 | 1.7
H→VV→ℓνjj | 8.8 | 6.0 | 1.7 | 0.8 | 0.3 | 0.07 | 0.01
VH→VVV→ℓνjjjj | 7.3 | 4.5 | 1.6 | 0.9 | 0.3 | 0.05 | 0.01
Diboson | 1138 | 727 | 238 | 113 | 42 | 14 | 10
V+(g,u,d,s)-jets | 24 086 | 18 078 | 4577 | 976 | 582 | 34 | 3
V+(b¯b/c¯c) | 6625 | 3213 | 1349 | 1250 | 411 | 228 | 164
top (t¯t \+ single top) | 3695 | 563 | 419 | 1123 | 365 | 460 | 570
Multijet | 10 364 | 6629 | 2162 | 933 | 367 | 130 | 82
Total expectation | 45 908 | 29 209 | 8746 | 4395 | 1768 | 867 | 830
Total uncertainty | ± 2582 | ± 1619 | ± 587 | ± 528 | ± 209 | ± 118 | ± 113
Observed events | 45 907 | 28 924 | 8814 | 4278 | 1815 | 879 | 797
Table 2: Observed number of events in data and expected number of events from
each signal and background source (where V=W,Z) for events with exactly three
jets. The expected signal is quoted at MH=125 GeV. The total background
uncertainty includes all sources of systematic uncertainty added in
quadrature.
| Pretag | 0 b-tags | 1 loose b-tag
---|---|---|---
VH→ℓνb¯b | 1.4 | 0.2 | 0.2
H→VV→ℓνjj | 2.4 | 1.4 | 0.6
VH→VVV→ℓνjjjj | 3.6 | 2.0 | 0.8
Diboson | 199 | 112 | 46
V+(g,u,d,s)-jets | 3055 | 2143 | 679
V+(b¯b/c¯c) | 1280 | 542 | 286
top (t¯t \+ single top) | 2889 | 311 | 268
Multijet | 2092 | 1110 | 450
Total expectation | 9516 | 4217 | 1729
Total uncertainty | ± 530 | ± 231 | ± 144
Observed events | 9685 | 3915 | 1786
Table 3: Observed number of events in data and expected number of events from
each signal and background source (where V=W,Z) for events with four or more
jets. The expected signal is quoted at MH=125 GeV. The total background
uncertainty includes all sources of systematic uncertainty added in
quadrature.
## IX Multivariate Signal Discriminants
We employ multivariate analysis (MVA) techniques to separate signal from background events. To separate signal from the MJ events, we use a boosted decision tree (BDT) implemented with the tmva package [33]. This multivariate analysis is described in Sec. IX.1. A BDT is also used to separate signal from other specific background sources in events with four or more jets (see Sec. IX.4). For the final multivariate analysis, we use a BDT in the one tight \(b\)-tag channel and all three two \(b\)-tag channels, and we use a random forest (RF) of decision trees [62] implemented in the statpatternrecognition (SPR) package [63; 28] for events in the zero and one loose \(b\)-tag channels.
The BDT and the RF are forms of machine learning known as decision trees. A decision tree applies a series of yes/no splits to a training sample of events known to be either signal or background, with each split chosen to maximally separate the two classes. The resulting nodes are split further until either a minimum number of events per node is reached or a node contains only signal or only background events. Boosting, used in the BDT, builds a series of trees in which each tree is retrained with increased weights for the events misclassified in the previous training. The RF technique instead creates a collection of decision trees, each trained on a randomly sampled subset of the training data.
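The sketch below illustrates the two techniques on a toy labeled sample. The actual analysis uses the tmva BDT [33] and the SPR random forest [62; 63]; scikit-learn classifiers serve here only as generic stand-ins, and the variable names and settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training sample: rows are events, columns are discriminating variables;
# y = 1 labels signal events, y = 0 labels background events.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)
weights = rng.uniform(0.5, 1.5, size=2000)   # per-event weights

# Boosted decision tree: each successive tree boosts the weights of events
# misclassified by the previous trees.
bdt = AdaBoostClassifier(n_estimators=200)
bdt.fit(X, y, sample_weight=weights)

# Random forest: each tree is trained on a randomly sampled subset of the data.
rf = RandomForestClassifier(n_estimators=200, bootstrap=True)
rf.fit(X, y, sample_weight=weights)

# Continuous discriminant outputs used downstream to separate signal from background.
bdt_score = bdt.predict_proba(X)[:, 1]
rf_score = rf.predict_proba(X)[:, 1]
```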
We train separate BDTs and RFs for each lepton flavor, jet multiplicity, and tagging category, and for each hypothesized Higgs boson mass in steps of 5 GeV. Since the branching fraction for the Higgs boson decay to \(b\) quarks is significant only over the mass range 90–150 GeV, we restrict the search in the one tight and two \(b\)-tag channels to this range of \(M_{H}\). In the zero and one loose \(b\)-tag channels, the primary signal contribution is from Higgs boson decays to vector bosons, so the search is performed over the mass range 100–200 GeV.
Each of the final BDTs and RFs is trained to distinguish the signal from all of the backgrounds. The input variables are chosen to show good agreement between data and the background simulation (since the expected contribution from signal events is small) and to provide good separation between the signal and at least one background. The background and signal samples are each split into three independent subsamples used for training, for testing, and for the final statistical analysis with each multivariate discriminant. We ensure that the discriminant is not biased towards statistical fluctuations in the training sample by comparing the training output to the testing sample. The independent sample used for the limit setting procedure ensures that any optimizations based on the output of the training and testing samples do not bias the final limits.
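A minimal sketch of the three-way sample splitting and the overtraining check described above. The comparison of training and testing outputs is implemented here as a Kolmogorov–Smirnov test, which is an assumption about the exact compatibility criterion; the function names are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def three_way_split(n_events, rng=np.random.default_rng(1)):
    """Return index arrays for three independent subsamples: training, testing, limit setting."""
    idx = rng.permutation(n_events)
    third = n_events // 3
    return idx[:third], idx[third:2 * third], idx[2 * third:]

def overtraining_check(scores_train, scores_test):
    """Compare discriminant outputs on the training and testing samples.
    A small p-value would indicate a bias towards fluctuations in the training sample."""
    _, p_value = ks_2samp(scores_train, scores_test)
    return p_value
```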
### Multivariate multijet discriminators
We train two separate BDTs to separate the MJ background from signal events: one for \(VH(\to b\bar{b},c\bar{c},\tau\tau)\) signals, \(\mathrm{MVA_{MJ}}(VH)\), and one for \(H\to VV\) signals, \(\mathrm{MVA_{MJ}}(H\to VV)\). The variables used in training these BDTs are chosen to exploit kinematic differences between the MJ and signal events, and are documented in Appendix A. To improve the training statistics, we combine signal events for \(M_{H}=120\), 125, and 130 GeV in training. We find that a BDT trained on this combination of Higgs boson masses has a similar performance when applied to other masses, eliminating the need for a mass dependent MJ discriminant. The BDT outputs \(\mathrm{MVA_{MJ}}(VH)\) and \(\mathrm{MVA_{MJ}}(H\to VV)\) are shown in Fig. 7. The \(\mathrm{MVA_{MJ}}(VH)\) and \(\mathrm{MVA_{MJ}}(H\to VV)\) discriminant outputs are used as input variables to the final MVAs, as detailed in Appendix A.
<figure><img src="content_image/1301.6122/x16.png"><figcaption>Figure 7: (color online) The multivariate discriminant output for (a) MVAMJ(VH) and (b) MVAMJ(H→VV), for all events. The signal is multiplied by 1000.</figcaption></figure>
### Final \(\bm{WH\to\ell\nu b\bar{b}}\) MVA analysis
In events with two or three jets and one tight \(b\)-tag or two \(b\)-tags, the \(WH\to\ell\nu b\bar{b}\) process provides the dominant signal contribution. To separate signal from background, we train a BDT on the \(WH\to\ell\nu b\bar{b}\) signal and all backgrounds. The lists of input variables to the MVA and their descriptions are included in Appendix A. Figures 8 and 9 show examples of some of the most effective discriminating variables used in our BDTs for the two-jet and three-jet channels, respectively, in the one tight \(b\)-tag and all two \(b\)-tags channels. Figures 10 and 11 show the BDT output for the two and three-jet channels, respectively, in the one tight \(b\)-tag and all the two \(b\)-tag channels.
<figure><img src="content_image/1301.6122/x18.png"><figcaption>Figure 8: (color online) Distributions of some of the most significant inputs to the final discriminant in events with exactly two jets and either one tight b-tag, two loose b-tags, two medium b-tags, or two tight b-tags: (a) pWT/(pℓT+⧸ET), shown for events with one tight b-tag; (b) max|Δη(ℓ,{j1 or j2})|, shown for events with two loose b-tags; (c) qℓ×ηℓ, shown for events with two medium b-tags; (d) ∑(pT)VIS, shown for events with two tight b-tags. The signal is multiplied by 200, 200, 50, and 50, respectively. Overflow events are added to the last bin.</figcaption></figure>
<figure><img src="content_image/1301.6122/x22.png"><figcaption>Figure 9: (color online) Distributions of some of the most significant inputs to the final discriminant in events with exactly three jets and either one tight b-tag, two loose b-tags, two medium b-tags, or two tight b-tags: (a) max|Δη(ℓ,{j1 or j2})|, shown for events with one tight b-tag; (b) qℓ×ηℓ, shown for events with two loose b-tags; (c) aplanarity, shown for events with two medium b-tags; (d) mℓνj2, shown for events with two tight b-tags. The signal is multiplied by 200, 50, 50, and 50, respectively. Overflow events are added to the last bin.</figcaption></figure>
<figure><img src="content_image/1301.6122/x26.png"><figcaption>Figure 10: (color online) Distributions of the final discriminant output, after the maximum likelihood fit (described in Sec. XII), in events with exactly two jets and: (a) one tight b-tag, (b) two loose b-tags, (c) two medium b-tags, and (d) two tight b-tags. The signal is multiplied by 100, 100, 20, and 20, respectively.</figcaption></figure>
<figure><img src="content_image/1301.6122/x30.png"><figcaption>Figure 11: (color online) Distributions of the final discriminant output, after the maximum likelihood fit (described in Sec. XII), in events with exactly three jets and: (a) one tight b-tag, (b) two loose b-tags, (c) two medium b-tags, and (d) two tight b-tags. The signal is multiplied by 100, 100, 20, and 20, respectively.</figcaption></figure>
### Final \(\bm{H\to WW\to\ell\nu jj}\) MVA analysis
The \(H\to WW\to\ell\nu jj\) process provides the dominant signal in events with two or three jets and zero \(b\)-tags or one loose \(b\)-tag, since the \(W\) boson decays producing a \(b\) quark are rare. For signal searches in these channels, we apply a multivariate technique based on the RF discriminant. Events in the above tagging categories are examined for \(100\leq M_{H}\leq 150\) GeV. Since we do not perform the search in the one tight and two \(b\)-tag channels for \(M_{H}>150\) GeV, events having exactly two or three jets in all \(b\)-tagging categories (i.e. pretag events) are used in the search for \(155\leq M_{H}\leq 200\) GeV.
To suppress the MJ background in the electron channel in these subchannels, we select events with \(\mathrm{MVA_{MJ}}(H\to VV)>-0.4\) for \(M_{H}\leq 150\) GeV in events with zero or one loose tag, and \(\mathrm{MVA_{MJ}}(VH)>-0.5\) for \(M_{H}\geq 155\) GeV in all events. These requirements are optimized to maximize the ratio of the number of signal events to the square root of the number of background events, as sketched below. The MJ contribution in the zero and one loose \(b\)-tag muon channels is small, so no requirement is applied to the MJ MVA outputs there.
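The optimization of these requirements can be pictured with the following sketch, which scans candidate thresholds on the MJ-discriminant output and retains the one maximizing \(S/\sqrt{B}\); the scan range and step size are illustrative assumptions.

```python
import numpy as np

def optimize_mj_cut(sig_scores, sig_w, bkg_scores, bkg_w,
                    scan=np.arange(-1.0, 1.0, 0.05)):
    """Return the MVA_MJ threshold that maximizes S/sqrt(B) for weighted events."""
    best_cut, best_fom = None, -np.inf
    for cut in scan:
        s = sig_w[sig_scores > cut].sum()   # expected signal above the threshold
        b = bkg_w[bkg_scores > cut].sum()   # expected background above the threshold
        if b > 0 and s / np.sqrt(b) > best_fom:
            best_cut, best_fom = cut, s / np.sqrt(b)
    return best_cut, best_fom
```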
We train a RF on the total signal and background from all considered physics processes. We optimize the RF independently in the electron and muon channels for each \(b\)-tag and jet multiplicity category. As the signal shape is strongly driven by the signal mass hypothesis, we optimize the MVA variable list at two different mass points: at \(M_{H}=125\) GeV for masses below 150 GeV and at \(M_{H}=165\) GeV for masses above 150 GeV. Because the resolution of the reconstructed Higgs boson mass is about 20 GeV for channels presented in this Article, optimizing the input variable list at only these mass points is sufficient. Each RF is trained using between 14 and 30 well modeled discriminating variables formed from kinematic properties of either elementary objects like jets or leptons, or composite objects, such as reconstructed \(W\) boson candidates (see Figs. 12 and 13). The lists of input variables and their descriptions are included in Appendix A. The final RF discriminants for the electron and muon channels are shown in Figs. 14 and 15.
<figure><img src="content_image/1301.6122/x34.png"><figcaption>Figure 12: (color online) Distributions of the most significant inputs to the final multivariate discriminants for the two-jets zero and one loose b-tag channels: (a) ΔpT(j1,j2), shown for events with zero b-tags for MH=125 GeV; (b) ∑i=1,2 pT(ji), shown for events with one loose b-tag for MH=125 GeV; (c) (Mℓν−Mj12)/(Mℓν+Mj12), shown for all tags for MH=165 GeV. The signal is multiplied by 1000, 500, and 100, respectively. Overflow events are added to the last bin.</figcaption></figure>
<figure><img src="content_image/1301.6122/x37.png"><figcaption>Figure 13: (color online) Distributions of the most significant inputs to the final multivariate discriminants for the three-jets zero and one loose b-tag channels: (a) |Δη(W,ℓ)|, shown for events with zero b-tags for MH=125 GeV; (b) MWT, shown for events with one loose b-tag for MH=125 GeV; (c) ∑(pT)VIS, shown for all tags for MH=165 GeV. The signal is multiplied by 1000, 500, and 100, respectively. Overflow events are added to the last bin.</figcaption></figure>
<figure><img src="content_image/1301.6122/x40.png"><figcaption>Figure 14: (color online) Distributions of the final discriminant output, after the maximum likelihood fit (described in Sec. XII), for events in the following channels: (a) two jets, zero b-tags for MH=125 GeV, (b) two jets, one loose b-tag for MH=125 GeV, (c) two jets, all tags for MH=165 GeV. The signal is multiplied by 500, 500, and 200, respectively.</figcaption></figure>
<figure><img src="content_image/1301.6122/x43.png"><figcaption>Figure 15: (color online) Distributions of the final discriminant output, after the maximum likelihood fit (described in Sec. XII), for events in the following channels: (a) three jets, zero b-tags for MH=125 GeV, (b) three jets, one loose b-tag for MH=125 GeV, (c) three jets, all tags for MH=165 GeV. The signal is multiplied by 500, 500, and 200, respectively.</figcaption></figure>
### Final \(\bm{VH\to VWW\to\ell\nu jjjj}\) MVA analysis
The majority of signal events with four or more jets and zero \(b\)-tags or one loose \(b\)-tag are from the \(VH\to VWW\to\ell\nu jjjj\) process, but there are significant contributions from direct production via gluon fusion and vector boson fusion. Identification of the Higgs boson decay products in \(VH\to VWW\) events is complicated by the combinatorics of pairing four jets into two hadronically decaying vector boson candidates and then two of the three total vector boson candidates into the Higgs boson candidate. The discriminating variables are different for fully hadronic and semileptonic Higgs boson decays, and determining the Higgs boson candidate for an event also determines which of these two decay scenarios is considered. Variables unique to a particular decay scenario are set to a default value outside of the physical range of that variable in events reconstructed under the alternate decay scenario. To reconstruct the two hadronically decaying vector boson candidates, we examine the leading four jets in an event and choose the jet pairings that minimize:
\[E_{ab,cd}=|m_{ab}-M_{W}|+|m_{cd}-M_{W}|,\] (4)
where \(m_{ab}\) (\(m_{cd}\)) is the invariant mass of the \(a^{\rm th}\) and \(b^{\rm th}\) (\(c^{\rm th}\) and \(d^{\rm th}\)) jets, and \(M_{W}=80.4\) GeV [71]. The Higgs boson candidate is then determined by considering the semileptonically decaying \(W\) boson and the two hadronically decaying vector bosons and selecting the vector boson candidate pair with the minimum \(\Delta\mathcal{R}\) separation in an event, out of the three possible pairings.
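A minimal sketch of the jet-pairing step in Eq. (4): the three possible ways of grouping the four leading jets into two pairs are compared, and the combination closest to two on-shell \(W\) bosons is kept. Jets are represented here as \((E,p_{x},p_{y},p_{z})\) four-vectors, which is an illustrative choice of representation.

```python
import numpy as np

M_W = 80.4  # GeV, Ref. [71]

def dijet_mass(j1, j2):
    """Invariant mass of two jets given as (E, px, py, pz) four-vectors."""
    e, px, py, pz = (j1[k] + j2[k] for k in range(4))
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def best_w_pairing(jets):
    """Choose the pairing of the four leading jets that minimizes Eq. (4)."""
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    best, best_e = None, np.inf
    for (a, b), (c, d) in pairings:
        e_abcd = abs(dijet_mass(jets[a], jets[b]) - M_W) \
               + abs(dijet_mass(jets[c], jets[d]) - M_W)
        if e_abcd < best_e:
            best, best_e = ((a, b), (c, d)), e_abcd
    return best, best_e
```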
Diverse signal processes contribute to the inclusive four-jet channel with relative contributions varying with \(M_{H}\). To help mitigate the effect of having many signal and background contributions to this search channel, we use two layers of multivariate discriminants to improve the separation of signal from background. The first layer of training focuses on separating the sum of all signal processes from specific sets of backgrounds. Input variables for each background-specific discriminant are selected based on the separation power between the total signal and the backgrounds being considered. Background-specific discriminants are trained to separate the sum of all Higgs boson signal processes from three specific background categories: \(t\bar{t}\) and single top quark production, \(V\)+jets production, and diboson production. The input variables and their descriptions are listed in Appendix A. Separate background-specific discriminants are trained for each Higgs boson mass point considered. Sample inputs and output responses of the background-specific discriminants are shown in Figs. 16 and 17, respectively, for \(M_{H}=125\) GeV.
<figure><img src="content_image/1301.6122/x46.png"><figcaption>Figure 16: (color online) Distributions of the most significant inputs to background-specific multivariate discriminants for the ≥4-jet subchannels: (a) cosθ(ℓ)ℓνCM, input to the discriminant against V+jets backgrounds, shown for events with zero b-tags; (b) SIGjets(ℓ), input to the discriminant against diboson backgrounds, shown for events with zero b-tags; (c) ∑pT(ℓ,⧸ET,j1,j2,j3,j4), input to the discriminant against top quark backgrounds, shown for events with one loose b-tag. The MH=125 GeV signal is multiplied by 250 in (c) and by 500 in (a) and (b). Overflow events are added to the last bin.</figcaption></figure>
<figure><img src="content_image/1301.6122/x49.png"><figcaption>Figure 17: (color online) Distributions of the output of background-specific multivariate discriminants for the ≥4-jet subchannels: (a) discriminant against V+jets backgrounds, shown for events with zero b-tags; (b) discriminant against diboson backgrounds, shown for events with zero b-tags; (c) discriminant against top quark backgrounds, shown for events with one loose b-tag. The MH=125 GeV signal is multiplied by 250 in (c) and by 500 in (a) and (b).</figcaption></figure>
The background-specific discriminants are used as inputs to the final RF discriminant that is trained to discriminate all signal processes from the total background contributions. Additional input variables for the final discriminant are selected based on their separation power between the total signal and the total background, and are required to be well modeled. The input variables for each lepton and \(b\)-tag category are listed in Appendix A. Sample inputs and output responses of the final discriminants are shown in Figs. 18 and 19, respectively, for \(M_{H}=125\) GeV.
<figure><img src="content_image/1301.6122/x52.png"><figcaption>Figure 18: (color online) Distributions of the most significant inputs, other than background-specific multivariate discriminants, to the final multivariate discriminants for the ≥4-jet subchannels: (a) mj12ℓ, shown for events with zero b-tags; (b) ΔR(ℓ,j3), shown for events with one loose b-tag. The MH=125 GeV signal is multiplied by 500 in (a) and 250 in (b). Overflow events are added to the last bin.</figcaption></figure>
<figure><img src="content_image/1301.6122/x54.png"><figcaption>Figure 19: (color online) Distributions of the final discriminant output, after the maximum likelihood fit (described in Sec. XII), at MH=125 GeV for the four or more jets channels with: (a) zero b-tags and (b) one loose b-tag. The MH=125 GeV signal is multiplied by 500.</figcaption></figure>
## X Systematic Uncertainties
We assess systematic uncertainties on signals and backgrounds for each of the jet multiplicity and \(b\)-tag channels by repeating the full analysis after varying each source of uncertainty by \(\pm 1\) s.d. We consider uncertainties that affect both the normalizations and the shapes of our MVA outputs.
We include theoretical uncertainties on the \(t\bar{t}\) and single top quark production cross sections (7% [56; 53]), diboson production cross section (6% [54]), \(V+lf\) production (6%), and \(V+hf\) production (20%, estimated from mcfm [59; 55]). Since the \(V\)+jets experimental scaling factors for the three- and four-jet channels are different from unity, we apply an additional systematic uncertainty on the \(V\)+jets samples that is uncorrelated across jet multiplicity and lepton flavor bins. The size of this uncertainty is taken as the uncertainty from the \(V\)+jets fit to data, described in Sec. VII.
An uncertainty on the integrated luminosity (6.1% [74]) affects the normalization of the expected signal and simulated backgrounds. Uncertainties that affect the final MVA distribution shapes include jet taggability (3% per jet), \(b\)-tagging efficiency (2.5%–3% per heavy-quark jet), the light-quark jet misidentification rate (10% per jet), jet identification efficiency (5%), and jet energy calibration and resolution (varying between 5% and 15%, depending on the process and channel) as described in Ref. [20]. We also include uncertainties from modeling that affect both the shapes and normalizations of the final MVA distributions. These include an uncertainty on the trigger efficiency in the muon channel as derived from the data (3%–5%), lepton identification and reconstruction efficiency (5%–6%), the MLM matching [40] applied to \(V\)+light-flavor events (\(\approx 0.5\)%), the alpgen renormalization and factorization scales, and the choice of parton distribution functions (2%) as described in Ref. [20]. The trigger uncertainty in the muon channel is calculated as the difference between applying a trigger correction calculated using the alpgen reweightings derived on the \(T_{\mu\text{OR}}\) trigger sample and applying the nominal trigger correction. Since we reweight our alpgen samples, we include separate uncertainties on each of the five functions used to apply the reweighting. The adjusted functions are calculated by shifting the parameter responsible for the largest shape variation of the fit by \(\pm 1\) s.d. then calculating the remaining parameters for the function using the covariance matrix obtained from the functional fit.
We determine the uncertainty on the MJ background shape by relaxing the requirement from Sec. VIII on \(M_{T}^{W}\) to \(M_{T}^{W}>30\ \mathrm{GeV}-0.5~{}\times\mbox{$\not\!\!E_{T}$}\) and repeating the analysis with this selection in place. The positive and negative variations are taken to be symmetric. The uncertainty on the MJ rate is \(15\%\) (\(20\%\)) for the electron (muon) channel. Since our MJ sample is statistically limited, we do not correlate the uncertainties on the rate and shape across the subchannels. Since we simultaneously fit MJ and \(V\)+jets to match data, we apply a normalization uncertainty to the \(V\)+jets samples that is anticorrelated with the MJ normalization systematics and scales as the relative MJ to \(V\)+jets normalization.
## XI \(\bm{WZ}\) and \(\bm{ZZ}\) Production with \(\bm{Z\to b\bar{b}}\)
The SM processes \(W(\rightarrow\ell\nu)Z(\to b\bar{b})\) and \(Z(\rightarrow\ell\ell)Z(\to b\bar{b})\) where one of the leptons from the \(Z\to\ell\ell\) decay is not reconstructed, result in the same final state signature as the Higgs boson in this search. Therefore, we search for these processes to validate our analysis methodology. The only change in the analysis is in the training of the final discriminant in events with two or three jets with one tight \(b\)-tag or two \(b\)-tags. We train using the \(WZ\) and \(ZZ\) diboson processes as signal while leaving the \(WW\) process as a background. The output of this discriminant is used to measure the combined \(WZ\) and \(ZZ\) cross section by performing a maximum likelihood fit to data using signal plus background models, with maximization over the systematic uncertainties as described in detail in Sec. XII. The expected significance of the measurement using the MVA output is 1.8 s.d. We measure a cross section of 0.50 \(\pm\) 0.34 (stat.) \(\pm\) 0.36 (syst.) times the expected SM cross section of 4.4 \(\pm\) 0.3 pb. Figure 20 shows the MVA discriminant output for the diboson cross section (\(WZ+ZZ\)) with background-subtracted data and signal scaled to the best fit value.
<figure><img src="content_image/1301.6122/x56.png"><figcaption>Figure 20: (color online) Final MVA discriminant output shown for the expected diboson signal and background-subtracted data rebinned as a function of log(S/B), after the maximum likelihood fit, summed over b-tag channels. The error bars on data points represent the statistical uncertainty only. The post-fit systematic uncertainties are represented by the solid lines. The signal expectation is shown scaled to the best fit value. The inset gives an expanded view of the high log(S/B) region.</figcaption></figure>
## XII Upper Limits on the Higgs Boson Production Cross Section
We derive upper limits on the Higgs boson production cross section multiplied by the corresponding branching fraction in units of the SM prediction. The limits are calculated using the modified frequentist \(CL_{s}\) approach [75; 76; 77], and the procedure is repeated for each assumed value of \(M_{H}\).
Two hypotheses are considered: the background-only hypothesis (B), in which only background contributions are present, and the signal-plus-background (S+B) hypothesis in which both signal and background contributions are present.
The limits are determined using the MVA output distributions, together with their associated uncertainties, as inputs to the limit setting procedure. To preserve the stability of the limit derivation in regions of small background statistics in the one tight \(b\)-tag and all two \(b\)-tags categories, the width of the bin at the largest MVA output value is adjusted, by comparing the total background and signal+background expectations, until the statistical significances for the B and S+B expectations are greater than 3.6 and 5.0 s.d. from zero, respectively. The remaining part of the distribution is then divided into equally sized bins. In the zero \(b\)-tags and one loose \(b\)-tag categories, the width of the bin at the largest MVA output is set such that the relative statistical uncertainty on the signal plus background entries is less than 0.15. The remaining bins are distributed uniformly. The rebinning procedure is checked for potential biases in the determination of the final limits, and no such bias is observed.
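The adjustment of the highest-output bin can be sketched as follows. The statistical significance of a bin content is taken here to be its yield divided by the square root of the sum of squared Monte Carlo weights, which is an assumption about the exact criterion; the thresholds of 3.6 and 5.0 s.d. are those quoted above.

```python
import numpy as np

def last_bin_lower_edge(bkg_scores, bkg_w, sig_scores, sig_w,
                        candidate_edges, n_b=3.6, n_sb=5.0):
    """Lower the edge of the highest MVA-output bin until the background-only (B)
    and signal+background (S+B) contents are significant at n_b and n_sb s.d."""
    for edge in sorted(candidate_edges, reverse=True):
        b = bkg_w[bkg_scores >= edge]
        sb = np.concatenate([b, sig_w[sig_scores >= edge]])
        b_sig = b.sum() / np.sqrt((b**2).sum()) if b.size else 0.0
        sb_sig = sb.sum() / np.sqrt((sb**2).sum()) if sb.size else 0.0
        if b_sig > n_b and sb_sig > n_sb:
            return edge          # the range below this edge is divided into equal bins
    return min(candidate_edges)
```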
We evaluate the compatibility of the data with the background-only and signal+background hypotheses. This is done using the log likelihood ratio (\(LLR\)), which is twice the negative logarithm of the ratio of the Poisson likelihoods, \(L\), of the signal+background hypothesis to the background only hypothesis, \(LLR=-2\ln(L_{S+B}/L_{B})\).
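For a single subchannel with fixed predictions, the \(LLR\) of the binned Poisson likelihoods takes the simple form sketched below; the bin contents in the example call are purely illustrative, and in the analysis the predictions are additionally profiled over the nuisance parameters as described next.

```python
import numpy as np

def llr(data, bkg, sig):
    """LLR = -2 ln(L_{S+B}/L_B) with per-bin Poisson likelihoods;
    the factorial terms cancel in the ratio."""
    data, bkg, sig = (np.asarray(x, dtype=float) for x in (data, bkg, sig))
    mu_b = np.clip(bkg, 1e-9, None)          # protect against empty bins
    mu_sb = np.clip(bkg + sig, 1e-9, None)
    ln_l_sb = np.sum(data * np.log(mu_sb) - mu_sb)
    ln_l_b = np.sum(data * np.log(mu_b) - mu_b)
    return -2.0 * (ln_l_sb - ln_l_b)

# Illustrative three-bin MVA output distribution of one subchannel.
print(llr(data=[120, 45, 9], bkg=[118.0, 47.0, 6.5], sig=[0.4, 1.1, 2.0]))
```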
Systematic uncertainties are included through nuisance parameters that are assigned Gaussian probability distributions (priors). The signal and background predictions are functions of the nuisance parameters. Each common source of systematic uncertainty (such as the uncertainties on predicted SM cross sections, identification efficiencies, and energy calibration, as described in Sec. X) is taken to be correlated across all channels except as otherwise noted in Sec. X.
The inclusion of systematic uncertainties in the generation of pseudoexperiments has the effect of broadening the expected \(LLR\) distributions and, thus, reducing the ability to resolve signal-like excesses. This degradation can be partially reduced by performing a maximum likelihood fit to each pseudoexperiment (and data), once for the B hypothesis and once for the S+B hypothesis. The maximization is performed over the systematic uncertainties. The \(LLR\) is evaluated for each outcome using the ratio of maximum likelihoods for the fit to each hypothesis. The resulting degradation of the limits due to systematic uncertainties is \(\sim 30\%\) for searches in the vicinity of \(M_{H}=125\) GeV.
The medians of the obtained \(LLR\) distributions for the B and S+B hypotheses for each tested mass are presented in Fig. 21. The corresponding \(\pm 1\) s.d. and \(\pm 2\) s.d. values for the background-only hypothesis at each mass point are represented by the shaded regions in the figure. The \(LLR\) values obtained from the data are also presented in the figure.
<figure><img src="content_image/1301.6122/x57.png"><figcaption>Figure 21: (color online) The expected and observed log likelihood ratios as functions of the hypothesized Higgs boson mass MH for the (a) electron and muon, two and three jets, one tight and two b-tag channels; (b) electron and muon, two and three jets, zero and one loose b-tag channels; (c) electron and muon, four or more jets, zero and one loose b-tag channels; (d) combination of all channels. The dashed red and black lines correspond to the median LLR of the signal+background and background-only hypotheses, respectively. The solid line corresponds to the LLR obtained from the data, and the shaded regions are the ±1 s.d. and ±2 s.d. values for the background-only hypothesis.</figcaption></figure>
<figure><img src="content_image/1301.6122/x61.png"><figcaption>Figure 22: (color online) The MVA discriminant output distribution minus the total background expectation for MH=125 GeV rebinned as a function of log(S/B). The post-fit uncertainties are represented by the solid lines. The signal expectation is shown scaled to the best fit value. The inset gives an expanded view of the high log(S/B) region.</figcaption></figure>
<figure><img src="content_image/1301.6122/x62.png"><figcaption>Figure 23: (color online) The expected and observed 95% C.L. upper limits on SM Higgs boson production for the (a) electron and muon, two and three jets, one tight and two b-tag channels; (b) electron and muon, two and three jets, zero and one loose b-tag channels (MH≤150 GeV) and pretag channels (MH≥155 GeV); (c) electron and muon, four or more jets, zero and one loose b-tag channels; (d) combination of all channels. The limits are presented as ratios to the expected SM prediction. The dashed line corresponds to the expected limit, and the solid line corresponds to the limit observed in data. The shaded regions are the ±1 s.d. and ±2 s.d. values for the expected limit.</figcaption></figure>
The MVA discriminant distributions, for the Higgs boson mass point \(M_{H}=125\) GeV, after subtracting the total posterior background expectation are shown in Fig. 22. The signal expectation is shown scaled to the observed upper limit (described later) and the uncertainties in the background after the constrained fit are shown by the solid lines.
| Combined 95% C.L. Limit /σSM
---|---
MH (GeV) | 90 | 95 | 100 | 105 | 110 | 115 | 120 | 125 | 130 | 135 | 140 | 145 | 150 | 155 | 160 | 165 | 170 | 175 | 180 | 185 | 190 | 195 | 200
| 2 or 3 jets with one tight b-tag or two b-tags
Expected | 1.8 | 1.9 | 2.2 | 2.5 | 2.9 | 3.4 | 3.8 | 4.7 | 5.8 | 7.9 | 11.1 | 16.7 | 20.8 | – | – | – | – | – | – | – | – | – | –
Observed | 1.6 | 1.3 | 2.2 | 2.0 | 2.1 | 2.9 | 3.4 | 4.8 | 6.6 | 10.1 | 13.6 | 18.8 | 18.5 | – | – | – | – | – | – | – | – | – | –
| 2 or 3 jets with zero b-tags or one loose b-tag
Expected | – | – | 29.8 | 30.0 | 32.6 | 34.0 | 32.5 | 27.5 | 21.6 | 16.2 | 13.3 | 10.3 | 9.1 | 5.7 | 4.2 | 4.0 | 5.0 | 6.1 | 6.8 | 7.9 | 7.8 | 9.0 | 9.7
Observed | – | – | 34.4 | 24.9 | 41.4 | 31.4 | 40.3 | 43.5 | 32.3 | 19.1 | 17.0 | 7.3 | 3.3 | 4.5 | 3.3 | 2.8 | 3.5 | 3.2 | 4.4 | 4.5 | 4.8 | 7.0 | 12.2
| 4 or more jets with zero b-tags or one loose b-tag
Expected | – | – | 357 | 316 | 224 | 139 | 68.6 | 41.2 | 26.2 | 19.4 | 15.5 | 13.7 | 11.3 | 9.7 | 8.3 | 7.3 | 8.5 | 10.0 | 11.4 | 13.7 | 15.6 | 17.3 | 18.8
Observed | – | – | 365 | 331 | 369 | 182 | 149 | 71.2 | 63.4 | 31.8 | 28.3 | 24.9 | 21.9 | 14.6 | 10.9 | 8.5 | 8.7 | 9.5 | 8.8 | 11.2 | 15.7 | 19.2 | 19.8
| All channels combined
Expected | 1.8 | 1.9 | 2.2 | 2.5 | 2.9 | 3.4 | 3.8 | 4.7 | 5.0 | 6.7 | 7.8 | 7.9 | 5.7 | 5.2 | 3.8 | 3.7 | 4.4 | 5.4 | 5.9 | 7.0 | 7.2 | 8.3 | 8.9
Observed | 1.6 | 1.3 | 2.3 | 1.7 | 2.9 | 4.6 | 5.3 | 5.8 | 8.5 | 9.9 | 10.7 | 9.6 | 6.1 | 4.6 | 4.0 | 2.8 | 2.8 | 3.4 | 4.2 | 5.7 | 8.4 | 6.9 | 11.4
Table 4: The expected and observed 95% C.L. limits, as a function of the Higgs
boson mass MH, presented as ratios of production cross section times branching
fraction to the expected SM prediction.
Upper limits are calculated at 23 discrete values of the Higgs boson mass, spanning the range 90–200 GeV in increments of 5 GeV, by scaling the expected signal contribution to the value at which it can be excluded at the 95% C.L. The expected limits are calculated from the background-only \(LLR\) distribution, whereas the observed limits are quoted with respect to the \(LLR\) values measured in data. The expected and observed 95% C.L. upper limits on the Higgs boson production cross section multiplied by the decay branching fraction are shown, as a function of the Higgs boson mass \(M_{H}\), in units of the SM prediction in Fig. 23. The expected and observed limit-to-SM ratios at each mass point are listed in Table 4 for all one-tight, two-loose, two-medium, and two-tight \(b\)-tag subchannels together; for the two-jet and three-jet, zero and one loose \(b\)-tag subchannels (all \(b\)-tag categories for \(M_{H}>150\) GeV) together; for the \(\geq 4\)-jet subchannels; and for the combination of all subchannels.
## XIII Interpretations in fourth generation and fermiophobic Higgs models
Extensions of the minimal electroweak symmetry breaking mechanism of the SM remain possible, including models with a fourth generation of fermions or with a Higgs boson that has modified couplings to fermions, as in fermiophobic Higgs models (FHM). We interpret our results in these scenarios using the subchannels that are sensitive to \(H\to WW\) decays: events with two or more jets and zero or one loose \(b\)-tag for \(M_{H}\leq 150\) GeV, extended to include pretag two- and three-jet events for \(M_{H}\geq 155\) GeV. These are the first results for these models in the \(\ell\nu+\)jets final state.
Previous results from the Tevatron Collider experiments in the context of a fourth generation of fermions exclude the range \(131<M_{H}<207\) GeV [78]. The ATLAS [79] and CMS [80] collaborations exclude \(140<M_{H}<185\) GeV and \(144<M_{H}<207\) GeV, respectively. Previous searches for the fermiophobic Higgs boson in \(H\to\gamma\gamma\) and \(H\to VV\) channels, with two leptons in the final state, were carried out at the LEP \(e^{+}e^{-}\) Collider [81; 82; 83; 84], by the CDF [85] and D0 [86] Collaborations, and by the ATLAS [87] and CMS [88] Collaborations, with the most stringent limits set by the CMS experiment, which excludes \(110<M_{H}<194\) GeV.
The \(Hgg\) coupling is enhanced in fourth-generation models, which leads to a higher rate of \(gg\to H\) production and a larger decay width of \(H\to gg\) than in the SM [89; 90; 91; 92]. However, since \(H\to gg\) is loop-mediated, the \(H\to WW^{*}\) decay mode dominates for \(M_{H}>135\) GeV, as in the SM. We consider two scenarios for the presence of a fourth generation. In the “low-mass” scenario, we assume a fourth-generation neutrino mass of \(m_{\nu 4}=80\) GeV and a value for the fourth-generation charged lepton mass of \(m_{\ell 4}=100\) GeV, while in the “high-mass” scenario, we assume values for the fourth-generation neutrino and lepton masses of \(m_{\nu 4}=m_{\ell 4}=1\) TeV. Both scenarios set the fourth-generation quark masses to the values in Ref. [92]. After applying our selection criteria, the total expected signal for \(gg\to H\) production in the low-mass (high-mass) fourth-generation model is enhanced by a factor of 7.2 (7.5) over the SM production rate for \(M_{H}=125\) GeV. We only consider gluon fusion Higgs boson production, and we set limits on \(\sigma(gg\to H)\times\mathcal{B}(H\to WW^{*})\). These limits are compared with the predicted \(gg\to H\) production cross section results from hdecay [51], as shown in Fig. 24. We exclude the “low-mass” scenario for \(150<M_{H}<188\) GeV, and the “high-mass” scenario for \(150<M_{H}<190\) GeV.
In the FHM, the Higgs boson does not couple to fermions at tree level but is otherwise SM-like. This suppresses production via gluon fusion to a negligible rate and forbids direct decay to fermions. Production in association with a vector boson or via vector boson fusion is allowed. For this interpretation, we set the contribution from \(gg\to H\) production to zero and scale the contributions from other production and decay mechanisms to reflect the predicted rate in the FHM. After applying our selection criteria, the total expected signal for vector boson fusion and \(VH\to VWW\) production in the FHM is enhanced by a factor of 4.2 over the SM production rate for \(M_{H}=125\) GeV. The expected and observed cross section times branching fraction limits are compared to the FHM predictions in Fig. 25.
<figure><img src="content_image/1301.6122/x66.png"><figcaption>Figure 24: (color online) The expected and observed 95% C.L. upper limits on σ(gg→H)×B(H→WW) compared to the prediction from the fourth-generation fermion model.</figcaption></figure>
<figure><img src="content_image/1301.6122/x67.png"><figcaption>Figure 25: (color online) The expected and observed 95% C.L. upper limits on fermiophobic Higgs boson production.</figcaption></figure>
## XIV Summary
We have presented a search for SM Higgs boson production in lepton + \(\not\!\!E_{T}\) + jets final states with a dataset corresponding to 9.7 fb\({}^{-1}\) of integrated luminosity collected with the D0 detector. The search is sensitive to \(VH\to Vb\bar{b}\), \(H\to WW^{*}\rightarrow\ell\nu jj\), and \(WH\to WWW^{*}\rightarrow\ell\nu jjjj\) production and decay, and supersedes previous \(VH\to Vb\bar{b}\) and \(H\to WW^{*}\rightarrow\ell\nu jj\) searches published by D0. To maximize the signal sensitivity, we subdivide the dataset into 36 independent subchannels according to lepton flavor, jet multiplicity, and the number and quality of \(b\)-tagged jets, and apply multivariate analysis techniques to further discriminate between signal and background. We test our method by examining SM \(WZ\) and \(ZZ\) production with \(Z\to b\bar{b}\) decay and find production rates consistent with the SM prediction. We observe no significant excess over the background prediction, as expected for a 125 GeV SM Higgs boson signal given the sensitivity of this single channel. Significance is achieved by combining this channel with the other low-mass channels analyzed at the Tevatron [13]; here we set 95% C.L. upper limits on the Higgs boson production cross section for masses between 90 and 200 GeV. For \(M_{H}=125\) GeV, the observed (expected) upper limit is 5.8 (4.7) times the SM prediction. We also interpret the data in models with a fourth generation of fermions or a fermiophobic Higgs boson. In these interpretations, we exclude \(150<M_{H}<188\,(190)\) GeV in the “low-mass” (“high-mass”) fourth-generation fermion scenario, and provide 95% C.L. limits on the production cross section in the fermiophobic model.
## XV Acknowledgments
We thank the staffs at Fermilab and collaborating institutions, and acknowledge support from the DOE and NSF (USA); CEA and CNRS/IN2P3 (France); MON, NRC KI and RFBR (Russia); CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil); DAE and DST (India); Colciencias (Colombia); CONACyT (Mexico); NRF (Korea); FOM (The Netherlands); STFC and the Royal Society (United Kingdom); MSMT and GACR (Czech Republic); BMBF and DFG (Germany); SFI (Ireland); The Swedish Research Council (Sweden); and CAS and CNSF (China).
## References
* (1) F. Englert and R. Brout, Phys. Rev. Lett. **13**, 321 (1964).
* (2) P. W. Higgs, Phys. Rev. Lett. **13**, 508 (1964).
* (3) G. S. Guralnik, C. R. Hagen, and T. W. B. Kibble, Phys. Rev. Lett. **13**, 585 (1964).
* (4) T. Aaltonen _et al._, (CDF Collaboration), Phys. Rev. Lett. **108**, 151803 (2012).
* (5) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. Lett. **108**, 151804 (2012).
* (6) LEP Electroweak Working Group, http://lepewwg.web.cern.ch/LEPEWWG/.
* (7) R. Barate _et al._, (LEP Working Group for Higgs boson searches), Phys. Lett. B **565**, 61 (2003).
* (8) G. Aad _et al._, (ATLAS Collaboration), Phys. Rev. D **86**, 032003 (2012).
* (9) S. Chatrchyan _et al._, (CMS Collaboration), Phys. Lett. B **710**, 26 (2012).
* (10) G. Aad _et al._, (ATLAS Collaboration), Phys. Lett. B **716**, 1 (2012).
* (11) S. Chatrchyan _et al._, (CMS Collaboration), Phys. Lett. B **716**, 30 (2012).
* (12) TEVNPH (Tevatron New Phenomena and Higgs Working Group), arXiv:1203.3774.
* (13) T. Aaltonen _et al._, (CDF and D0 Collaborations), Phys. Rev. Lett. **109**, 071804 (2012).
* (14) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. Lett. **109**, 121804 (2012).
* (15) T. Aaltonen _et al._, (CDF Collaboration), Phys. Rev. Lett. **109**, 111804 (2012).
* (16) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. Lett. **94**, 091802 (2005).
* (17) V. M. Abazov _et al._, (D0 Collaboration), Phys. Lett. B **663**, 26 (2008).
* (18) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. Lett. **102**, 051803 (2009).
* (19) V. M. Abazov _et al._, (D0 Collaboration), Phys. Lett. B **698**, 6 (2011).
* (20) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. D **86**, 032005 (2012).
* (21) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. Lett. **106**, 171802 (2011).
* (22) S. Abachi _et al._, (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A **338**, 185 (1994).
* (23) V. M. Abazov _et al._, (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A **565**, 463 (2006).
* (24) M. Abolins _et al._, Nucl. Instrum. Methods Phys. Res. A **584**, 75 (2008).
* (25) R. Angstadt _et al._, Nucl. Instrum. Methods Phys. Res. A **622**, 298 (2010).
* (26) The pseudorapidity \(\eta=-\ln\left[\tan\frac{\theta}{2}\right]\), where \(\theta\) is the polar angle as measured from the proton beam axis.
* (27) The azimuthal angle, \(\phi\), is defined as the opening angle with respect to the +\(x\) direction in a right-handed coordinate system defined by +\(y\) as up and +\(z\) as the proton beam direction.
* (28) I. Narsky, arXiv:physics/0507157, (2005).
* (29) L. Breiman, J. Friedman, R. Olshen, and C. Stone, _Classification and Regression Trees_ (Wadsworth & Brooks/Cole Advanced Books and Software, Pacific Grove, CA, 1984).
* (30) R. E. Schapire, The Boosting Approach to Machine Learning: An Overview, MSRI Workshop on Nonlinear Estimation and Classification, Berkeley, CA, USA, 2001.
* (31) Y. Freund and R. E. Schapire, J. Japanese Society for Artificial Intelligence **14**, 771 (1999), (in Japanese, translation by Naoki Abe).
* (32) J. H. Friedman, eConf **C030908**, WEAT003 (2003).
* (33) A. Hoecker _et al._, PoS **ACAT**, 040 (2007), we use version 4.1.0.
* (34)\(\Delta\mathcal{R}=\sqrt{\left(\Delta\eta\right)^{2}+\left(\Delta\phi\right)^{2}}\) is the separation between two objects in (\(\eta,\phi\)) space, where \(\phi\) is the azimuthal angle.
* (35) G. C. Blazey _et al._, arXiv:hep-ex/0005012.
* (36) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. D **85**, 052006 (2012).
* (37) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. D **84**, 032004 (2011).
* (38) V. M. Abazov _et al._, (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A **620**, 490 (2010).
* (39) T. Sjöstrand, S. Mrenna, and P. Z. Skands, J. High Energy Phys. **05**, 026 (2006).
* (40) M. L. Mangano, M. Moretti, F. Piccinini, R. Pittau, and A. D. Polosa, J. High Energy Phys. **07**, 001 (2003).
* (41) E. Boos _et al._, Nucl. Instrum. Methods Phys. Res. A **534**, 250 (2004).
* (42) E. Boos, V. Bunichev, L. Dudko, V. Savrin, and V. Sherstnev, Phys. Atom. Nucl. **69**, 1317 (2006).
* (43) J. Alwall _et al._, Eur. Phys. J. C **53**, 473 (2007).
* (44) H. L. Lai _et al._, Phys. Rev. D **55**, 1280 (1997).
* (45) J. Pumplin, D. R. Stump, J. Huston, H.-L. Lai, P. M. Nadolsky, and W.-K. Tung, J. High Energy Phys. **07**, 012 (2002).
* (46) R. Brun and F. Carminati, GEANT Detector Description and Simulation Tool, CERN Program Library Long Writeup W5013, 1993 (unpublished).
* (47) J. Baglio and A. Djouadi, J. High Energy Phys. **10**, 064 (2010).
* (48) A. Martin, W. Stirling, R. Thorne, and G. Watt, Eur. Phys. J. C **63**, 189 (2009).
* (49) D. de Florian and M. Grazzini, Phys. Lett. B **674**, 291 (2009).
* (50) P. Bolzoni, F. Maltoni, S.-O. Moch, and M. Zaro, Phys. Rev. D **85**, 035002 (2012).
* (51) A. Djouadi, J. Kalinowski, and M. Spira, Comput. Phys. Commun. **108**, 56 (1998).
* (52) J. Butterworth _et al._, arXiv:1003.1643, (2010).
* (53) N. Kidonakis, Phys. Rev. D **74**, 114012 (2006).
* (54) J. M. Campbell and R. K. Ellis, Phys. Rev. D **60**, 113006 (1999).
* (55) J. M. Campbell, R. K. Ellis, and C. Williams, MCFM - Monte Carlo for FeMtobarn processes, http://mcfm.fnal.gov/.
* (56) U. Langenfeld, S. Moch, and P. Uwer, Phys. Rev. D **80**, 054009 (2009).
* (57) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. D **76**, 012003 (2007).
* (58) K. Melnikov and F. Petriello, Phys. Rev. D **74**, 114017 (2006).
* (59) J. M. Campbell, arXiv:hep-ph/0105226, (2001).
* (60) V. M. Abazov _et al._, (D0 Collaboration), Phys. Lett. B **669**, 278 (2008).
* (61) Transverse mass of a leptonically decaying \(W\) is defined as \(M_{T}^{2}\equiv 2\,p_{T}^{\ell}\,\)\(\not\!\!E_{T}\)\([1-\cos\Delta\phi(\ell,\)\(\not\!\!E_{T}\)\()]\).
* (62) L. Breiman, Mach. Learn. **45**, 5 (2001).
* (63) I. Narsky, arXiv:physics/0507143, (2005).
* (64) Maximum \(\Delta\eta\) between the charged lepton and the leading or second leading jet.
* (65) Product of the lepton charge and its pseudorapidity.
* (66) Scalar sum of the \(p_{T}\) of the visible particles, including the charged lepton and jets.
* (67) Aplanarity is defined as \(3\lambda_{3}/2\), where \(\lambda_{3}\) is the smallest eigenvalue of the normalized momentum tensor \(S^{\alpha\beta}=(\sum_{i}p_{i}^{\alpha}p_{i}^{\beta})/(\sum_{i}|\vec{p_{i}}|^{ 2})\) , where \(\alpha,\beta=1,2,3\) correspond to the \(x,y,z\) momentum components, and \(i\) runs over all visible objects.
* (68) Invariant mass of the system consisting of the charged lepton, reconstructed neutrino, and second leading jet.
* (69) The \(p_{Z}\) of the neutrino candidate is estimated by constraining the charged lepton and the neutrino system to the mass of the \(W\) boson and choosing the lowest magnitude solution.
* (70)\(\Delta\eta\) between the \(\ell\nu\) system and the charged lepton.
* (71) J. Beringer _et al._, Particle Data Group, Phys. Rev. D **86**, 010001 (2012).
* (72) Cosine of the angle between the charged lepton and the proton beam axis in the center of mass of the \(\ell\nu\) system.
* (73)\({\cal{SIG}}_{\text{jets}}\) for \(\ell\) is defined with respect to all jets in an event as \(\sum_{\text{jets}}[p_{T}^{\text{jet}}\times\Delta\mathcal{R}(\ell,{\text{jet}} )]/\sum_{\text{jets}}(p_{T}^{\text{jet}})\).
* (74) T. Andeen _et al._, FERMILAB-TM-2365 (2007).
* (75) T. Junk, Nucl. Instrum. Methods Phys. Res. A **434**, 435 (1999).
* (76) A. L. Read, J. Phys. G **28**, 2693 (2002).
* (77) W. Fisher, FERMILAB-TM-2386-E (2007).
* (78) T. Aaltonen _et al._, (CDF Collaboration), Phys. Rev. D **82**, 011102 (2010).
* (79) G. Aad _et al._, (ATLAS Collaboration), Eur. Phys. J. C **71**, 1728 (2011).
* (80) S. Chatrchyan _et al._, (CMS Collaboration), Phys. Lett. B **699**, 25 (2011).
* (81) A. Heister _et al._, (ALEPH Collaboration), Phys. Lett. B **544**, 16 (2002).
* (82) J. Abdallah _et al._, (DELPHI Collaboration), Eur. Phys. J. C **35**, 313 (2004).
* (83) P. Achard _et al._, (L3 Collaboration), Phys. Lett. B **568**, 191 (2003).
* (84) G. Abbiendi _et al._, (OPAL Collaboration), Phys. Lett. B **544**, 259 (2002).
* (85) T. Aaltonen _et al._, (CDF Collaboration), Phys. Lett. B **717**, 173 (2012).
* (86) V. M. Abazov _et al._, (D0 Collaboration), Phys. Rev. Lett. **107**, 151801 (2011).
* (87) G. Aad _et al._, (ATLAS Collaboration), Eur. Phys. J. C **72**, 2157 (2012).
* (88) S. Chatrchyan _et al._, (CMS Collaboration), J. High Energy Phys. **09**, 111 (2012).
* (89) B. Holdom _et al._, PMC Phys. **A3**, 4 (2009).
* (90) G. D. Kribs, T. Plehn, M. Spannowsky, and T. M. P. Tait, Phys. Rev. D **76**, 075016 (2007).
* (91) E. Arik, O. Cakir, S. A. Cetin, and S. Sultansoy, Acta Phys. Polon. B **37**, 2839 (2006).
* (92) C. Anastasiou, R. Boughezal, and E. Furlan, J. High Energy Phys. **06**, 101 (2010).
* (93) S. J. Parke and S. Veseli, Phys. Rev. D **60**, 093003 (1999).
* (94) K. Black _et al._, arXiv:1010.3698, (2011).
## Appendix A Multivariate Discriminator Input Variables
The multivariate discriminators used in this search use input variables from five general categories: final state particle information, as measured in the D0 detector; kinematics of reconstructed objects, such as \(W\) boson candidates reconstructed from the leptonic or hadronic decay products; angular distributions between final state particles and reconstructed objects; topological variables that examine the net properties of all final state particles in an event; and special variables focused on discriminating Higgs boson candidate events from specific backgrounds. Certain multivariate discriminants trained to separate a Higgs boson signal from a specific background are also used as inputs for a final discriminant that is trained to separate the Higgs boson signal from all backgrounds.
Individual input variables are described in detail below. In the descriptions, \(\ell\) refers to the electron or muon in a selected event, \(\nu\) refers to the neutrino candidate, and \(j_{n}\) refers to jets as ordered by \(p_{T}\) where \(j_{1}\) is the jet with highest \(p_{T}\). The \(p_{Z}\) of the neutrino candidate is estimated by constraining the charged lepton and the neutrino system to the mass of the \(W\) boson and choosing the lowest magnitude solution.
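A minimal sketch of the neutrino \(p_{Z}\) reconstruction used in these definitions: the \(W\)-boson mass constraint yields a quadratic equation whose lowest-\(|p_{Z}|\) solution defines \(p_{Z}^{\nu_{1}}\) and whose other solution defines \(p_{Z}^{\nu_{2}}\). The lepton is treated as massless and a negative discriminant is set to zero; both are assumptions about details not specified in the text.

```python
import numpy as np

M_W = 80.4  # GeV

def neutrino_pz(lep_px, lep_py, lep_pz, met_x, met_y, m_w=M_W):
    """Solve M_W^2 = (p_lep + p_nu)^2 for p_Z of the neutrino (massless lepton/neutrino).
    Returns [p_Z^nu1, p_Z^nu2], ordered by increasing |p_Z|."""
    pt_l2 = lep_px**2 + lep_py**2
    e_l = np.sqrt(pt_l2 + lep_pz**2)
    mu = 0.5 * m_w**2 + lep_px * met_x + lep_py * met_y
    disc = mu**2 - pt_l2 * (met_x**2 + met_y**2)
    root = e_l * np.sqrt(max(disc, 0.0))     # negative discriminant -> take the real part
    solutions = [(mu * lep_pz + root) / pt_l2, (mu * lep_pz - root) / pt_l2]
    return sorted(solutions, key=abs)
```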
Input variable lists for each multivariate discriminant appear in Tables 5–18. The ranking by importance of the variables is determined in the BDT by counting how often the variables are used to split decision tree nodes, and by weighting each split occurrence by the separation gain-squared it has achieved and by the number of events in the node [33]. In the RF, the importance of variables are estimated after training in an independent sample of validation events. These events are run through the RF, once for each variable used. On each pass the class of each event is randomized whenever the variable under test is encountered and the change in the quadratic loss figure of merit is estimated:
\[FOM=\frac{\sum_{i=1}^{N_{\text{events}}}wgt_{i}\left(event^{class}_{i}-RF(event_{i})\right)^{2}}{\sum_{i=1}^{N_{\text{events}}}wgt_{i}}\] (5)
where \(wgt_{i}\) is an event weight, \(event_{i}^{class}\)=1 for signal, 0 for background, and \(RF(event_{i})\) is the output of the RF classifier for a given event. Whenever the RF makes an incorrect assignment for an event the FOM increases in value. In this test the assignments are randomized for one variable at a time, effectively removing the predictive power of that variable, and the FOM will increase more when more powerful variables are removed in this manner.
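The sketch below illustrates the spirit of this procedure with a commonly used variant: instead of randomizing the class whenever the variable is encountered inside the RF (as done in SPR), the values of the variable under test are permuted in the validation sample and the resulting increase of the quadratic loss of Eq. (5) is recorded. The classifier interface follows the scikit-learn convention and is an illustrative assumption.

```python
import numpy as np

def quadratic_loss(y_true, scores, weights):
    """Weighted quadratic loss of Eq. (5) between the true class (1/0) and the RF output."""
    return np.sum(weights * (y_true - scores)**2) / np.sum(weights)

def variable_importance(model, X_val, y_val, w_val, seed=0):
    """Increase of the figure of merit when each input variable is scrambled in turn."""
    rng = np.random.default_rng(seed)
    base = quadratic_loss(y_val, model.predict_proba(X_val)[:, 1], w_val)
    importances = []
    for j in range(X_val.shape[1]):
        X_scrambled = X_val.copy()
        X_scrambled[:, j] = rng.permutation(X_scrambled[:, j])  # remove variable j's predictive power
        fom = quadratic_loss(y_val, model.predict_proba(X_scrambled)[:, 1], w_val)
        importances.append(fom - base)   # larger increase = more important variable
    return np.array(importances)
```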
The input variable distributions are defined as follows:
### Final State Particle Information
* \(E(j_{1})\): Energy of the leading jet
* \(p_{T}^{j_{1}}\): \(p_{T}\) of the leading jet
* \(p_{T}^{j_{2}}\): \(p_{T}\) of the second leading jet
* \(p_{T}^{j_{3}}\): \(p_{T}\) of the third leading jet
* \(q^{\ell}\times\eta^{\ell}\): Product of the lepton charge and its pseudorapidity
* \(p_{Z}^{\nu_{1}}\): Smaller absolute value solution for \(p_{Z}\) of the reconstructed neutrino, assuming that all \(\not\!\!E_{T}\) originates from the \(W\) boson decay
* \(p_{Z}^{\nu_{2}}\): Larger absolute value solution for \(p_{Z}\) of the reconstructed neutrino
* \(\not\!p_{T}\): Missing \(p_{T}\) as determined from charged particle tracks in central tracking detector
* \(\mbox{$\not\!\!E_{T}$}^{\text{SC}}\): Scaled \(\not\!\!E_{T}\) is defined as \(\sum^{\text{jets}}\{E(j_{i})\times\{\overrightarrow{\mbox{$\not\!\!E_{T}$}} \cdot\overrightarrow{p(j_{i})}/[\mbox{$\not\!\!E_{T}$}\times|\overrightarrow{p (j_{i})}|]\}^{2}\}\)
* \(\not\!\!E_{T}^{\text{Sig}}\): \(\not\!\!E_{T}\) significance, a measure of the consistency of the observed \(\not\!\!E_{T}\) with respect to zero \(\not\!\!E_{T}\), accounting for the uncertainty in the calorimeter objects that contribute to \(\not\!\!E_{T}\)
* \(\mbox{$\not\!p_{T}$}^{\text{Sig}}\): \(\not\!p_{T}\) significance, a measure of the consistency of the observed \(\not\!p_{T}\) with respect to zero \(\not\!p_{T}\), accounting for the uncertainty in the charged particle tracks that contribute to \(\not\!p_{T}\)
* \(q^{\ell}\times\eta^{j_{2}}\): Product of the lepton charge and the pseudorapidity of the second leading jet
* \(q^{\ell}\times\eta^{j_{3}}\): Product of the lepton charge and the pseudorapidity of the third leading jet
* \(b_{\text{ID}}^{j_{12}}\): Averaged \(b\)-jet identification output for the highest energy \(b\)-tagged jets
### Kinematics of Reconstructed Objects
* \(m_{j_{12}}\): Invariant mass of the leading and second leading jets
* \(m_{T}^{j_{12}}\): Transverse mass of the leading and second leading jets
* \(m_{j_{123}}\): Invariant mass of the leading, second leading, and third leading jets
* \(m_{j_{1234}}\): Invariant mass of the leading, second leading, third leading, and fourth leading jets
* \(\Delta p_{T}(\ell,\mbox{$\not\!\!E_{T}$})\): Scalar difference: \(|p_{T}^{\ell}-\mbox{$\not\!\!E_{T}$}|\)
* \(\Delta p_{T}(j_{1},j_{2})\): scalar difference, \(p_{T}^{j_{1}}-p_{T}^{j_{2}}\)
* \(\sum p_{T}(j_{1},j_{2},\ell)\): Scalar sum of the \(p_{T}\) of the two leading jets and the lepton
* \(\Delta p_{T}(\ell,\mbox{$\not\!\!E_{T}$})/p_{T}^{W}\): Ratio of the scalar difference between \(p_{T}^{\ell}\) and the \(\not\!\!E_{T}\), to \(p_{T}^{W}\)
* \(\max(p_{T}^{\ell},\mbox{$\not\!\!E_{T}$})/p_{T}^{W}\): Ratio of the \(\max(p_{T}^{\ell},\mbox{$\not\!\!E_{T}$})\) to \(p_{T}^{W}\)
* \(\min(p_{T}^{\ell},\mbox{$\not\!\!E_{T}$})/p_{T}^{W}\): Ratio of the \(\min(p_{T}^{\ell},\mbox{$\not\!\!E_{T}$})\) to \(p_{T}^{W}\)
* \(p_{T}^{W}/(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$})\): Ratio of the \(p_{T}^{W}\) to \(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}\)
* \(\Delta p_{T}(W,\ell)\): \(|p_{T}^{W}-p_{T}^{\ell}|\)
* \(\Delta p_{T}(W,\mbox{$\not\!\!E_{T}$})\): \(|p_{T}^{W}-\mbox{$\not\!\!E_{T}$}|\)
* \(p_{T}^{j_{123}}\): \(p_{T}\) of the system consisting of the leading, second leading, and third leading jets
* \(\Delta p_{T}(j_{2},j_{23})\): scalar \(\Delta p_{T}\) between the second leading jet and the system consisting of the second leading and third leading jets
* \(\sum\limits_{i=1}^{2}p_{T}^{j_{i}}\): Scalar sum of the \(p_{T}\) of the two leading jets, \(p_{T}^{j_{1}}+p_{T}^{j_{2}}\)
* \(\sum\limits_{i=1}^{4}p_{T}^{j_{i}}\): Scalar sum of the \(p_{T}\) of the leading, second leading, third leading and fourth leading jets
* \(p_{T}^{j_{12}}/\sum\limits_{i=1}^{2}p_{T}^{j_{i}}\): Ratio of the \(p_{T}\) of the leading and second leading jet system to the scalar sum of the \(p_{T}\) of the two leading jets
* \(p_{T}^{j_{23}}/\sum\limits_{i=2}^{3}p_{T}^{j_{i}}\): Ratio of the \(p_{T}\) of the system consisting of the second leading and third leading jets to the scalar sum of the \(p_{T}\) of the second leading and third leading jets
* Recoil\((p_{T}^{j_{12}})\): Recoil \(p_{T}\) of the first and second leading jet system
* \(m_{j_{12}\ell}\): Invariant mass of the dijet system and the lepton
* \(M_{T}^{W}\): Transverse mass of the \(\ell\nu\) system
* \(p_{T}^{W}\): \(p_{T}\) of the \(\ell\nu\) system
* \(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}\): Scalar sum of \(p_{T}^{\ell}\) and \(\not\!\!E_{T}\)
* Recoil\((p_{T}^{W})\): \(p_{T}\) of the \(\ell\nu\) system with respect to the thrust vector, \(\vec{\ell}-\vec{\nu}\)
* \(m_{\ell\nu j_{1}}\): Invariant mass of the system consisting of the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and leading jet
* \(m_{\ell\nu j_{2}}\): Invariant mass of the system consisting of the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and second leading jet
* \(p_{T}^{\ell\nu j_{2}}\): \(p_{T}\) of the system consisting of the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and second leading jet
* \(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}+p_{T}^{j_{2}}\): Scalar sum of the \(p_{T}\) of the charged lepton, \(\not\!\!E_{T}\), and second leading jet
* \(m_{\ell\nu j_{12}}\): Invariant mass of the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and two leading jets
* \(m_{T}^{\ell\nu j_{12}}\): Transverse mass of the charged lepton, \(\not\!\!E_{T}\), and two leading jets
* \(m_{\ell\nu j_{12}}(p_{Z}(\nu)=0)\): Invariant mass of the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and two leading jets, with the assumption that \(p_{Z}^{\nu}=0\)
* \(\sum{p_{T}(\ell,\mbox{$\not\!\!E_{T}$},j_{1},j_{2})}\): Scalar sum, \(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}+\sum\limits_{i=1}^{2}p_{T}^{j_{i}}\)
* \(\sum p_{T}(\ell,\mbox{$\not\!\!E_{T}$},j_{1},j_{2},j_{3},j_{4})\): Scalar sum, \(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}+\sum\limits_{i=1}^{4}p_{T}^{j_{i}}\)
* \(\cal{H}\): Helicity is defined for an object \(A\), coming from the decay of object \(C\) via \(C\to AB\), as the angle \(\theta_{AC}=\arccos[(\overrightarrow{C}\cdot\overrightarrow{A})/(|C|\times|A|)]\)
* \({\cal{H}}(j_{12},j_{1})\): Helicity of the leading jet in the dijet system, calculated in the laboratory frame
* \(\cal{V}\): Velocity is defined for an object \(C\to AB\) as \(-\ln\{1-\{1-4\times[(m_{A}^{2}+m_{B}^{2})/m_{C}^{2}]^{1/2}\}^{1/2}\}\)
* \({\cal{V}}(j_{12})\): Velocity of the dijet system
* \({\cal{V}}_{j13}\): Velocity of the system consisting of the leading and third leading jets
* \(\cal{T}\): Twist is \(\arctan(\Delta\phi/\Delta\eta)\)
* \({\cal{T}}(j_{12})\): Twist of the dijet system
* \({\cal{T}}_{j23}\): Twist of the system consisting of the second leading and third leading jets
* \({\cal{T}}_{W\to\ell\nu}\): Twist of the \(\ell\nu\) system
* \(\cal{W}\): Width of a jet in \((\eta,\phi)\) space defined as \(\sqrt{\eta_{w}^{2}+\phi_{w}^{2}}\), where \(\eta_{w}\) and \(\phi_{w}\) are the \(p_{T}\) weighted RMS \(\eta\) and \(\phi\) of energy deposits around the jet centroid.
* \({\cal{W}}_{j3}\): Width of the third leading jet
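For concreteness, the following is a minimal NumPy sketch (not the analysis code) of three of the shape quantities defined above: the helicity angle \(\cal{H}\), the twist \(\cal{T}\), and the jet width \(\cal{W}\). All function and argument names are illustrative.

```python
# Illustrative NumPy sketch of the helicity angle H, twist T, and jet width W defined above.
# These are generic reimplementations of the stated formulas, not the experiment's code.
import numpy as np

def helicity_angle(p_parent, p_daughter):
    """H: arccos of the normalized dot product of the parent (C) and daughter (A) 3-momenta."""
    cos_theta = np.dot(p_parent, p_daughter) / (np.linalg.norm(p_parent) * np.linalg.norm(p_daughter))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards against rounding just outside [-1, 1]

def twist(delta_phi, delta_eta):
    """T: arctan(delta_phi / delta_eta); arctan2 of the magnitudes avoids division by zero."""
    return np.arctan2(abs(delta_phi), abs(delta_eta))

def jet_width(pt_dep, eta_dep, phi_dep, eta_jet, phi_jet):
    """W: sqrt(eta_w^2 + phi_w^2), with eta_w and phi_w the pT-weighted RMS spreads of the
    energy deposits around the jet centroid (phi differences assumed already wrapped)."""
    w = np.asarray(pt_dep, dtype=float)
    w = w / w.sum()
    eta_w = np.sqrt(np.sum(w * (np.asarray(eta_dep) - eta_jet) ** 2))
    phi_w = np.sqrt(np.sum(w * (np.asarray(phi_dep) - phi_jet) ** 2))
    return np.hypot(eta_w, phi_w)
```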
### Angular Distributions
* \(\Delta\eta(j_{1},j_{2})\): Separation in \(\eta\) between the two leading jets, \(|\eta_{j_{1}}-\eta_{j_{2}}|\)
* \(\max|\Delta\eta(j_{12},\{j_{1}\text{~{}or~{}}j_{2}\})|\): Maximum \(\Delta\eta\) between the dijet system and the leading or second leading jet
* \(\Delta\phi(j_{1},j_{2})\): Separation in \(\phi\) between the two leading jets, \(|\phi_{j_{1}}-\phi_{j_{2}}|\)
* \(\mbox{$\Delta\mathcal{R}$}(j_{1},j_{2})\): Angular separation in \((\eta,\phi)\) space between the two leading jets (see the sketch following this list)
* \(\min\mbox{$\Delta\mathcal{R}$}(j_{12},\{j_{1}\text{~{}or~{}}j_{2}\})\): Minimum angular separation in \((\eta,\phi)\) space between the dijet system and the leading or second leading jet
* \(\Delta\phi(j_{12},j_{1})\): \(|\phi_{j_{12}}-\phi_{j_{1}}|\), where \(\phi_{j_{12}}\) is the \(\phi\) of the dijet system
* \(\Delta\phi(j_{1},j_{3})\): Separation in \(\phi\) between the first and third leading jets, \(|\phi_{j_{1}}-\phi_{j_{3}}|\)
* \(\mbox{$\Delta\mathcal{R}$}(j_{3},j_{13})\): \(\Delta\mathcal{R}\) between the third leading jet and the system consisting of the leading and third leading jets
* \(\Delta\phi(j_{3},j_{23})\): \(\Delta\phi\) between the third leading jet and the system consisting of the second leading and third leading jets
* \(\angle(j_{1},j_{2})\): 3D angle between the two leading jets
* \(\cos\theta(j_{1},j_{2})_{\text{CM}}\): Cosine of the angle between the two leading jets in the center of mass (CM) of the dijet system
* \(\angle[\ell,\text{bis}(j_{1},j_{2})]\): 3D angle between the charged lepton and the bisector of the dijet system
* \(\Delta\phi[W,\text{bis}(j_{1},j_{2})]\): Signed \(\Delta\phi\) between the \(\ell\nu\) system and the bisector of the dijet system
* \(|\Delta\eta(\ell,j_{1})|\): Separation in \(\eta\) between the lepton and the leading jet, \(|\eta^{\ell}-\eta^{j_{1}}|\)
* \(|\Delta\eta(\ell,j_{2})|\): Separation in \(\eta\) between the lepton and the second leading jet, \(|\eta^{\ell}-\eta^{j_{2}}|\)
* \(|\Delta\eta(\ell,j_{3})|\): Separation in \(\eta\) between the lepton and the third leading jet, \(|\eta^{\ell}-\eta^{j_{3}}|\)
* \(\max|\Delta\eta(\ell,\{j_{1}\text{~{}or~{}}j_{2}\})|\): Maximum \(\Delta\eta\) between the charged lepton and the leading or second leading jet
* \(\mbox{$\Delta\mathcal{R}$}(\ell,j_{1})\): \(\Delta\mathcal{R}\) between the charged lepton and the leading jet
* \(\mbox{$\Delta\mathcal{R}$}(\ell,j_{2})\): \(\Delta\mathcal{R}\) between the charged lepton and the second leading jet
* \(\mbox{$\Delta\mathcal{R}$}(\ell,j_{3})\): \(\Delta\mathcal{R}\) between the charged lepton and the third leading jet
* \(\Delta\phi(\mbox{$\not\!\!E_{T}$},j_{1})\): \(\Delta\phi\) between the \(\not\!\!E_{T}\) and the leading jet
* \(\mbox{$\Delta\mathcal{R}$}(\nu,j_{1})\): \(\Delta\mathcal{R}\) between the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and the leading jet
* \(\min[\mbox{$\Delta\mathcal{R}$}(\nu,\{j_{1}~{}\text{or}~{}j_{2}\})]\): Minimum \(\Delta\mathcal{R}\) between the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and the leading or second leading jet
* \(\mbox{$\Delta\mathcal{R}$}(\ell,\text{light~{}jet})\): \(\Delta\mathcal{R}\) between the charged lepton and the leading non-\(b\)-tagged jet
* \(\angle(\ell,j_{12})\): 3D angle between the charged lepton and the dijet system
* \(\Delta\eta(\ell,\nu)\): Separation in \(\eta\) between the lepton and the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), \(|\eta^{\ell}-\eta^{\nu}|\)
* \(\max|\Delta\eta(W,\{\ell\text{~{}or~{}}\nu\})|\): Maximum \(\Delta\eta\) between the \(\ell\nu\) system and charged lepton or reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
* \(\Delta\phi(\ell,\mbox{$\not\!\!E_{T}$})\): \(\phi\) angle between the lepton and \(\not\!\!E_{T}\).
* \(\max|\Delta\phi(W,\{\ell\text{~{}or~{}}\mbox{$\not\!\!E_{T}$}\})|\): Maximum \(\Delta\phi\) between the \(\ell\nu\) system and the charged lepton or \(\not\!\!E_{T}\)
* \(\min|\Delta\phi(W,\{\ell\text{~{}or~{}}\mbox{$\not\!\!E_{T}$}\})|\): Minimum \(\Delta\phi\) between the \(\ell\nu\) system and the charged lepton or \(\not\!\!E_{T}\)
* \(\mbox{$\Delta\mathcal{R}$}(\ell,\nu)\): \(\Delta\mathcal{R}\) between the charged lepton and the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
* \(\max[\mbox{$\Delta\mathcal{R}$}(W,\{\ell\text{~{}or~{}}\nu\})]\): Maximum \(\Delta\mathcal{R}\) between the \(\ell\nu\) system and the charged lepton or reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
* \(\min[\mbox{$\Delta\mathcal{R}$}(W,\{\ell\text{~{}or~{}}\nu\})]\): Minimum \(\Delta\mathcal{R}\) between the \(\ell\nu\) system and the charged lepton or reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
* \(|\Delta\eta(W,\ell)|\): \(\Delta\eta\) between the \(\ell\nu\) system and the charged lepton
* \(\mbox{$\Delta\mathcal{R}$}(W,\ell)\): \(\Delta\mathcal{R}\) between the \(\ell\nu\) system and the charged lepton
* \(\mbox{$\Delta\mathcal{R}$}(W,\nu)\): \(\Delta\mathcal{R}\) between the \(\ell\nu\) system and the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
* \(\angle(\ell,\nu)\): 3D angle between the charged lepton and the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
* \(\cos\theta(\ell)_{\ell\nu\text{CM}}\): Cosine of the angle between the charged lepton and the proton beam axis in the CM of \(\ell\nu\) system
* \(\cos\theta(\ell)\): Cosine of the angle between the charged lepton and the proton beam axis in the detector
* \(|\Delta\eta(W,j_{2})|\): \(\Delta\eta\) between \(\ell\nu\) system and the second leading jet
* \(\Delta\phi(W,j_{2})\): \(\Delta\phi\) between \(\ell\nu\) system and the second leading jet
* \(\mbox{$\Delta\mathcal{R}$}(W,j_{2})\): \(\Delta\mathcal{R}\) between \(\ell\nu\) system and the second leading jet
* \(|\Delta\eta(W,j_{12})|\): \(\Delta\eta\) between \(\ell\nu\) system and the dijet system
* \(\angle(j_{1},j_{2})_{\text{HCM}}\): 3D angle between the two leading jets in the \(H\to WW\to\ell\nu jj\) CM frame (HCM)
* \(\cos[\angle(j_{1},\ell\nu)]_{\text{HCM}}\): Cosine of the 3D angle between the leading jet and \(\ell\nu\) system in the HCM
* \(\cos[\angle(j_{1},\ell)]_{\text{HCM}}\): Cosine of the 3D angle between the charged lepton and the leading jet in the HCM
* \(\cos[\angle(j_{1}^{j_{12}\text{CM}},(W\to\ell\nu)^{\text{HCM}})]\): Cosine of the 3D angle between the leading jet in energy in the CM of the dijet system and \(\ell\nu\) system in the \(H\to WW\to\ell\nu jj\) CM frame; jet energy is calculated in the \(H\to WW\to\ell\nu jj\) CM frame
* \(\cos[\angle(\ell_{\ell\nu CM},(W\to\ell\nu)_{\text{HCM}})]\): Cosine of the 3D angle between the charged lepton in the \(\ell\nu\) system CM and \(\ell\nu\) system in the \(H\to WW\to\ell\nu jj\) CM frame
* \(\cos[\angle(\ell_{\ell\nu CM;4j},(W\to\ell\nu)_{\ell\nu jjCM;4j})]\): Cosine of the 3D angle between the charged lepton in the \(\ell\nu\) system CM and \(\ell\nu\) system in the \(H\to WW\to\ell\nu jj\) CM frame for \(V(\to jj)H(\to WW\to\ell\nu jj)\) candidate events; jet energy is calculated in the \(H\to WW\to\ell\nu jj\) CM frame
* \(\cos(\angle(j_{1},\ell\nu))_{\text{HCM};4j}\): Cosine of the 3D angle between the leading jet and \(\ell\nu\) system in the \(H\to WW\to\ell\nu jj\) CM frame for \(V(\to jj)H(\to WW\to\ell\nu jj)\) candidate events
* \(\cos(\theta^{*})\): \(\theta^{*}=\angle(W,\text{incoming}~{}u\text{-type quark})\) in HCM frame [93]
* \(\cos(\chi^{*})\): \(\chi^{*}=\angle(\ell,\text{spin}_{W})\) in \(\ell\nu\) system CM frame [93]
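Since nearly all of the variables in this subsection reduce to the separations \(\Delta\eta\), \(\Delta\phi\), and \(\Delta\mathcal{R}\), a short illustrative sketch is given below; the \(\phi\) difference must be wrapped into \([-\pi,\pi]\) before being combined. The helper names are placeholders, not the analysis code.

```python
# Illustrative helpers (not the analysis code) for the angular separations used above.
import numpy as np

def delta_phi(phi1, phi2):
    """Signed phi separation wrapped into [-pi, pi]."""
    return (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi

def delta_eta(eta1, eta2):
    """Pseudorapidity separation."""
    return eta1 - eta2

def delta_r(eta1, phi1, eta2, phi2):
    """Delta-R: angular separation in (eta, phi) space."""
    return np.hypot(delta_eta(eta1, eta2), delta_phi(phi1, phi2))

# Example: Delta-R between two jets sitting on either side of phi = +/- pi
print(round(delta_r(eta1=0.4, phi1=3.0, eta2=-0.3, phi2=-3.0), 3))  # wrap-around handled
```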
### Topological Variables
* \(\cal{A}\): Aplanarity is \(3\lambda_{3}/2\), where \(\lambda_{3}\) is the smallest eigenvalue of the normalized momentum tensor \(S^{\alpha\beta}=(\sum_{i}p_{i}^{\alpha}p_{i}^{\beta})/(\sum_{i}|\vec{p_{i}}|^{2})\); here \(\alpha,\beta=1,2,3\) correspond to the \(x,y,z\) momentum components, and \(i\) runs over the selected objects. Without arguments, it is calculated for all visible objects (see the sketch following this list)
* \({\cal{A}}(\ell j_{1}j_{2})\): \(\cal{A}\) calculated for the charged lepton, and leading and second leading jets
* \({\cal{A}}(\nu_{1})\): \(\cal{A}\) calculated for the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and all selected jets
* \(\cal{C}\): Centrality is \((\sum_{i}p_{T}^{i})/(\sum_{i}|\vec{p_{i}}|)\), where \(i\) runs over \(\ell\) and all jets
* \(\cal{S}\): Sphericity is \(3(\lambda_{2}+\lambda_{3})/2\) where \(\lambda_{3}\) (\(\lambda_{2}\)) is the smallest (second-smallest) eigenvalue of the normalized momentum tensor described under \(\cal{A}\). Without arguments, it is calculated for all visible objects
* \({\cal{S}}(\ell j_{1}j_{2})\): \(\cal{S}\) calculated for the charged lepton, and leading and second leading jets
* \({\cal{S}}(\ell\nu_{2}j_{1}j_{2})\): \(\cal{S}\) calculated for the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{2}}\)), and leading and second leading jets
* \({\cal{S}}(\nu_{1})\): \(\cal{S}\) calculated for the charged lepton, reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\)), and all selected jets
* \(p_{T}^{\text{VIS}}\): Magnitude of the vector sum of the \(\vec{p}_{T}\) of the visible particles
* \(\sum(p_{T})^{\text{VIS}}\): Scalar sum of the \(p_{T}\) of the visible particles
* \(K_{T}^{\min}\): \(\mbox{$\Delta\mathcal{R}$}(j_{1},j_{2})\times E_{T}^{j_{2}}/(\mbox{$\not\!\!E_ {T}$}+E_{T}^{\ell})\), where \(E_{T}\) is the transverse energy
* \(\text{bis}(j_{12},\nu)\): Scalar product of the bisector of the dijet system and the \(\not\!\!E_{T}\) vector, i.e. \(\overrightarrow{\text{bis}(j_{1},j_{2})}\cdot\overrightarrow{\mbox{$\not\!\!E_ {T}$}}\)
* \(m^{\text{Asym}}\): Mass asymmetry between \(\ell\nu\) system and the dijet system: \((m_{\ell\nu}-m_{j_{12}})/(m_{\ell\nu}+m_{j_{12}})\)
* \((p_{T}^{W}-p_{T}^{j_{12}})/(p_{T}^{W}+p_{T}^{j_{12}})\): \(p_{T}\) asymmetry between \(\ell\nu\) system and the dijet system
* \(p_{T}^{W}/p_{T}^{j_{12}}\): Ratio of \(p_{T}^{W}\) to \(p_{T}^{j_{12}}\)
* \((p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}-\sum\limits_{i=1}^{2}p_{T}^{j_{i}})/\sum{ p_{T}(\ell,\mbox{$\not\!\!E_{T}$},j_{1},j_{2})}\): \(\sum p_{T}\) asymmetry between \(\ell\nu\) system and the dijet system
* \((p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$})/\sum\limits_{i=1}^{2}p_{T}^{j_{i}}\): Ratio of \(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$}\) to \(\sum\limits_{i=1}^{2}p_{T}^{j_{i}}\)
* \(p_{T}^{\ell\nu j_{12}}/\sum{p_{T}(\ell,\mbox{$\not\!\!E_{T}$},j_{1},j_{2})}\): Ratio of the \(p_{T}^{\ell\nu j_{12}}\) to \(\sum{p_{T}(\ell,\mbox{$\not\!\!E_{T}$},j_{1},j_{2})}\)
* \({{\cal{SIG}}}(j_{12},j_{1})\): Based on the pull variables described in Ref. [94]. Sigma, \(\cal{SIG}\), of the dijet system with respect to the leading jet defined as \(p_{T}^{j_{2}}\mbox{$\Delta\mathcal{R}$}(j_{1},j_{2})/\sum\limits_{i=1}^{2}p_{T }^{j_{i}}\)
* \(\max[{\cal{SIG}}(j_{12},\{j_{1}\text{~{}or~{}}j_{2}\})]\): Maximum \(\cal{SIG}\) of the leading or second leading jet defined as \(p_{T}^{\max}(j_{1},j_{2})\mbox{$\Delta\mathcal{R}$}(j_{1},j_{2})/\sum\limits_{ i=1}^{2}p_{T}^{j_{i}}\) with respect to the dijet system
* \(\min[{\cal{SIG}}(j_{12},\{j_{1}\text{~{}or~{}}j_{2}\})]\): Minimum \(\cal{SIG}\) of the leading or second leading jet defined as \(p_{T}^{\min}(j_{1},j_{2})\mbox{$\Delta\mathcal{R}$}(j_{1},j_{2})/\sum\limits_{ i=1}^{2}p_{T}^{j_{i}}\) with respect to the dijet system
* \({\cal{SIG}}(W,\ell)\): \(\cal{SIG}\) of the \(\ell\nu\) system with respect to the lepton, defined as \((\mbox{$\Delta\mathcal{R}$}(\ell,\nu)\times\mbox{$\not\!\!E_{T}$})/(p_{T}^{ \ell}+\mbox{$\not\!\!E_{T}$})\)
* \(\max[{\cal{SIG}}(W,\{\ell\text{~{}or~{}}\nu\})]\): Maximum \(\cal{SIG}\) of the lepton or \(\not\!\!E_{T}\) defined as \(\max(p_{T}^{\ell},\mbox{$\not\!\!E_{T}$})\times\mbox{$\Delta\mathcal{R}$}(\ell ,\nu)/(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$})\) with respect to the \(\ell\nu\) system
* \(\min[{\cal{SIG}}(W,\{\ell\text{~{}or~{}}\nu\})]\): Minimum \(\cal{SIG}\) of the lepton or \(\not\!\!E_{T}\) defined as \(\min(p_{T}^{\ell},\mbox{$\not\!\!E_{T}$})\times\mbox{$\Delta\mathcal{R}$}(\ell ,\nu)/(p_{T}^{\ell}+\mbox{$\not\!\!E_{T}$})\) with respect to the \(\ell\nu\) system
* \({{\cal{SIG}}_{\text{jets}}}(A)\): \(\cal{SIG}\) defined for an object \(A\), with respect to all jets in an event, as \(\sum_{\text{jets}}[p_{T}^{\text{jet}}\times\mbox{$\Delta\mathcal{R}$}(A,{\text{jet}})]/\sum_{\text{jets}}(p_{T}^{\text{jet}})\)
* \({{\cal{SIG}}_{\text{jets}}}(j_{123})\): \({{\cal{SIG}}_{\text{jets}}}\) of the system consisting of the leading, second leading and third leading jets
* \({{\cal{SIG}}_{\text{jets}}}(j_{1234})\): \({{\cal{SIG}}_{\text{jets}}}\) of the system consisting of the leading, second leading, third leading and fourth leading jets
* \({{\cal{SIG}}_{\text{jets}}}(\ell)\): \({{\cal{SIG}}_{\text{jets}}}\) of the charged lepton
* \(\cal{SIM}\): Similarity is defined for two objects, \(A\) and \(B\), as \(\min(p_{T})^{2}\times\mbox{$\Delta\mathcal{R}$}^{2}/(\sum p_{T})^{2}\), where \(\min(p_{T})\) is the minimum \(p_{T}\) of the objects \(A\) and \(B\), \(\Delta\mathcal{R}\) is the angular separation in \((\eta,\phi)\) space between objects \(A\) and \(B\), and \(\sum p_{T}\) is the scalar sum of the \(p_{T}\)s of \(A\) and \(B\)
* \(\cal{SIM}(\ell,\nu)\): Similarity of the charged lepton and the reconstructed neutrino (assuming \(p_{Z}^{\nu_{1}}\))
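A minimal sketch of the momentum-tensor-based topological variables (\(\cal{A}\), \(\cal{S}\), and \(\cal{C}\)) is given below, assuming the selected objects are supplied as an array of Cartesian 3-momenta; it is illustrative only.

```python
# Illustrative sketch (not the analysis code) of aplanarity A, sphericity S, and centrality C
# from the normalized momentum tensor S^{ab} = sum_i p_i^a p_i^b / sum_i |p_i|^2.
import numpy as np

def momentum_tensor(momenta):
    """momenta: (N, 3) array of (px, py, pz) of the selected objects."""
    p = np.asarray(momenta, dtype=float)
    return (p.T @ p) / np.sum(p ** 2)

def aplanarity_sphericity(momenta):
    """Returns (A, S) = (3*lambda3/2, 3*(lambda2 + lambda3)/2), lambda3 being the smallest eigenvalue."""
    lam = np.linalg.eigvalsh(momentum_tensor(momenta))  # eigenvalues in ascending order
    return 1.5 * lam[0], 1.5 * (lam[0] + lam[1])

def centrality(momenta):
    """C = (sum_i pT_i) / (sum_i |p_i|) over the charged lepton and all jets."""
    p = np.asarray(momenta, dtype=float)
    return np.sum(np.hypot(p[:, 0], p[:, 1])) / np.sum(np.linalg.norm(p, axis=1))

# Example: three objects (lepton + two jets) given as (px, py, pz) in GeV
objs = np.array([[40.0, 10.0, 5.0], [-30.0, 20.0, -15.0], [-5.0, -25.0, 40.0]])
print(aplanarity_sphericity(objs), centrality(objs))
```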
### Discriminants to Separate Higgs Boson Events from a Specific Background
* \(\mathrm{MVA_{tt}}\): Output of the multivariate discriminant trained against \(t\bar{t}\) and single top quark backgrounds (an illustrative training sketch follows this list)
* \(\mathrm{MVA_{VJ}}\): Output of the multivariate discriminant trained against \(V+\)jets backgrounds
* \(\mathrm{MVA_{VV}}\): Output of the multivariate discriminant trained against diboson backgrounds
* \(\mathrm{MVA_{MJ}}(H\to VV)\): Output of the multivariate discriminant trained to distinguish \(H\to WW\to\ell\nu jj\) from the MJ background
* \(\mathrm{MVA_{MJ}}(VH)\): Output of the multivariate discriminant trained to distinguish \(WH\to\ell\nu b\bar{b}\) from the MJ background
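The discriminants above are not reproduced here, but the following hypothetical sketch (scikit-learn gradient boosting on toy data, used only as a stand-in for the analysis discriminants) illustrates the general pattern: train against one specific background class and rank the inputs by importance, which is the ordering used in the tables below. All variable names and numbers are placeholders.

```python
# Hypothetical sketch: train a discriminant against a single background class and rank
# its inputs by importance. Toy data and placeholder variable names only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
names = ["m_jj", "pTW_over_lep_plus_met", "dR_lep_j1"]                 # placeholder inputs
sig = rng.normal([85.0, 0.9, 1.8], [15.0, 0.3, 0.6], size=(2000, 3))   # toy "signal"
bkg = rng.normal([60.0, 0.6, 2.6], [25.0, 0.3, 0.9], size=(2000, 3))   # toy background class
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(len(sig)), np.zeros(len(bkg))])

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")        # inputs ranked by importance, as in the tables below
scores = clf.predict_proba(X)[:, 1]    # discriminant output in [0, 1]
```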
\(\mathrm{MVA_{MJ}}\) Input Variables
---
\(\eta^{\nu}\)
\(\not\!\!E_{T}^{\text{Sig}}\)
\(\Delta\eta(\ell,\nu)\)
\(\mathcal{T}_{W\to\ell\nu}\)
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\)
\(\mathcal{V}(j_{12})\)
\(m^{\text{Asym}}\)
\(\mathcal{C}\)
\(\not\!\!E_{T}\)
\(p_{T}^{\text{VIS}}\)
\(\max|\Delta\eta(\ell,\{j_{1}\text{ or }j_{2}\})|\)
Table 5: Input variables for the \(\mathrm{MVA_{MJ}}(VH)\) and \(\mathrm{MVA_{MJ}}(H\to VV)\) discriminants. Two discriminants were trained to reject MJ events, one trained using \(VH\to\ell\nu b\bar{b}\) events as signal and the second using \(H\to VV\to\ell\nu jj\) events as signal. Both discriminants use the same list of inputs. Variables are listed by their importance in the MVA.
Variable | 2T | 2M | 2L | 1T
---|---|---|---|---
\(\mathrm{MVA_{MJ}}(VH)\) | 1 | 1 | |
\(m_{b_{1}b_{2}}\) | 2 | 4 | 3 | 1
\(p_{T}^{W}/(p_{T}^{\ell}+\not\!\!E_{T})\) | 3 | 6 | 4 | 2
\(b_{j_{12}}^{\text{ID}}\) | 4 | 13 | 1 | 4
\(\cos(\chi^{*})\) | 5 | 3 | |
\(\max|\Delta\eta(\ell,\{j_{1}\text{ or }j_{2}\})|\) | 6 | 11 | 2 | 3
\(q^{\ell}\times\eta^{\ell}\) | 7 | 2 | 6 | 6
\(\Delta\mathcal{R}(\ell,j_{1})\) | 8 | 5 | |
\(\min[\mathcal{SIG}(j_{12},\{j_{1}\text{ or }j_{2}\})]\) | 9 | 15 | 9 | 5
\(q^{\ell}\times\eta^{j_{1}}\) | 10 | 7 | 11 | 9
\(\mathcal{V}(j_{12})\) | 11 | 12 | 7 | 11
\(\cos(\theta^{*})\) | 12 | 10 | |
\(m_{\ell\nu j_{2}}\) | 13 | 16 | 12 | 13
\(m_{T}^{j_{12}}\) | 14 | 14 | |
\(\mathcal{C}\) | 15 | 8 | 8 | 10
\(\sum(p_{T})^{\text{VIS}}\) | 16 | 9 | |
\(m^{\text{Asym}}\) | | | 5 | 8
\(\mathcal{A}\) | | | 10 | 12
\(p_{T}^{j_{2}}\) | | | 13 | 7
Table 6: Table of input variables for the final signal discriminant for the \(WH\to\ell\nu b\bar{b}\) channel. Variables are listed by their rank of importance when used in the two tight \(b\)-tagged (2T), two medium \(b\)-tagged (2M), two loose \(b\)-tagged (2L), and one tight \(b\)-tagged (1T) categories.
Variable | 0T | 1T
---|---|---
\((p_{T}^{\ell}+\not\!\!E_{T})/\sum_{i=1}^{2}p_{T}^{j_{i}}\) | 1 | 1
\(|\Delta\eta(W,\ell)|\) | 2 |
\(m_{j_{12}}\) | 3 | 4
\(\mathrm{MVA_{MJ}}(VH)\) | 4 | 6
\(|\Delta\eta(W,j_{2})|\) | 5 |
\(\max|\Delta\eta(j_{12},\{j_{1}\text{ or }j_{2}\})|\) | 6 | 10
\(\mathcal{A}(\ell j_{1}j_{2})\) | 7 |
\(\Delta\mathcal{R}(\ell,j_{1})\) | 8 |
\(\sum(p_{T})^{\text{VIS}}\) | 9 | 15
\(\mathcal{H}(j_{12},j_{1})\) | 10 |
\(\max|\Delta\phi(W,\{\ell\text{ or }\not\!\!E_{T}\})|\) | 11 |
\(\Delta\phi(\ell,\not\!\!E_{T})\) | 12 |
\(\Delta p_{T}(j_{1},j_{2})\) | 13 |
\(p_{T}^{j_{1}}\) | 14 | 7
\(|\Delta\eta(\ell,j_{1})|\) | | 2
\(\mathrm{MVA_{MJ}}(H\to VV)\) | | 3
\(|\Delta\eta(\ell,j_{2})|\) | | 5
\(\mathcal{V}(j_{12})\) | | 8
\(M_{T}^{W}\) | | 9
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\) | | 11
\(p_{T}^{\ell\nu j_{2}}\) | | 12
\(\Delta p_{T}(W,\not\!\!E_{T})\) | | 13
\(p_{T}^{\ell}+\not\!\!E_{T}\) | | 14
\(\sum p_{T}(j_{1},j_{2},\ell)\) | | 16
\(\sum_{i=1}^{2}p_{T}^{j_{i}}\) | | 17
Table 7: Table of input variables for the final signal discriminant for the \(H\to WW\to e\nu jj\) channel for two-jet events. Variables are listed by their rank of importance when used in the zero \(b\)-tags (0T) and one loose \(b\)-tag (1T) categories for \(M_{H}\leq 150\) GeV.
Variable | 0T | 1T
---|---|---
\(\mathrm{MVA_{MJ}}(VH)\) | 1 | 6
\(m_{j_{12}}\) | 2 | 15
\((p_{T}^{\ell}+\not\!\!E_{T})/\sum_{i=1}^{2}p_{T}^{j_{i}}\) | 3 | 2
\(\Delta\mathcal{R}(\ell,j_{1})\) | 4 |
\(|\Delta\eta(W,j_{2})|\) | 5 |
\(|\Delta\eta(W,\ell)|\) | 6 |
\(\max|\Delta\eta(j_{12},\{j_{1}\text{ or }j_{2}\})|\) | 7 | 8
\(\mathcal{A}(\ell j_{1}j_{2})\) | 8 |
\(\sum(p_{T})^{\text{VIS}}\) | 9 | 16
\(\max|\Delta\phi(W,\{\ell\text{ or }\not\!\!E_{T}\})|\) | 10 |
\(\Delta p_{T}(j_{1},j_{2})\) | 11 |
\(\mathcal{H}(j_{12},j_{1})\) | 12 |
\(\Delta\phi(\ell,\not\!\!E_{T})\) | 13 |
\(p_{T}^{j_{1}}\) | 14 |
\(\mathrm{MVA_{MJ}}(H\to VV)\) | | 1
\(|\Delta\eta(\ell,j_{2})|\) | | 3
\(|\Delta\eta(\ell,j_{1})|\) | | 4
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\) | | 5
\(p_{T}^{j_{2}}\) | | 7
\(\mathcal{V}(j_{12})\) | | 9
\(M_{T}^{W}\) | | 10
\(p_{T}^{\ell\nu j_{2}}\) | | 11
\(\Delta p_{T}(W,\not\!\!E_{T})\) | | 12
\(p_{T}^{\ell}+\not\!\!E_{T}\) | | 13
\(\sum p_{T}(j_{1},j_{2},\ell)\) | | 14
\(\sum_{i=1}^{2}p_{T}^{j_{i}}\) | | 17
Table 8: Table of input variables for the final signal discriminant for the \(H\to WW\to e\nu jj\) channel for three-jet events. Variables are listed by their rank of importance when used in the zero \(b\)-tags (0T) and one loose \(b\)-tag (1T) categories for \(M_{H}\leq 150\) GeV.
Variable | 0T | 1T
---|---|---
\(\Delta\phi[W,\text{bis}(j_{1},j_{2})]\) | 1 |
\(|\Delta\eta(W,\ell)|\) | 2 |
\(m_{j_{12}}\) | 3 | 7
\(p_{T}^{\ell}+\not\!\!E_{T}+p_{T}^{j_{2}}\) | 4 | 16
\(\min[\Delta\mathcal{R}(W,\{\ell\text{ or }\nu\})]\) | 5 |
\(\Delta\mathcal{R}(j_{1},j_{2})\) | 6 | 14
\(\mathcal{SIM}(\ell,\nu)\) | 7 |
\(\sum_{i=1}^{2}p_{T}^{j_{i}}\) | 8 | 4
\(\min(p_{T}^{\ell},\not\!\!E_{T})/p_{T}^{W}\) | 9 |
\(\Delta p_{T}(\ell,\not\!\!E_{T})\) | 10 |
\(\Delta\mathcal{R}(\nu,j_{1})\) | 11 |
\(\Delta\phi(\not\!\!E_{T},j_{1})\) | 12 |
\(\mathcal{SIG}(j_{12},j_{1})\) | 13 |
\(\sum(p_{T})^{\text{VIS}}\) | 14 |
\(\Delta p_{T}(j_{1},j_{2})\) | 15 |
\(p_{T}^{j_{1}}\) | 16 | 6
\(|\Delta\eta(\ell,j_{1})|\) | | 1
\(\mathcal{H}(j_{12},j_{1})\) | | 2
\(|\Delta\eta(\ell,j_{2})|\) | | 3
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\) | | 5
\(\Delta\mathcal{R}(W,j_{2})\) | | 6
\(|\Delta\eta(W,j_{12})|\) | | 8
\(m^{\text{Asym}}\) | | 9
\(\text{bis}(j_{12},\nu)\) | | 10
\(\Delta p_{T}(W,\ell)\) | | 11
\(m_{\ell\nu j_{2}}\) | | 12
\(\mathcal{T}_{W\to\ell\nu}\) | | 13
\(\Delta\phi(j_{1},j_{2})\) | | 15
\(m_{T}^{\ell\nu j_{12}}\) | | 17
\(\sum p_{T}(j_{1},j_{2},\ell)\) | | 18
\(\Delta\mathcal{R}(\ell,\nu)\) | | 19
\(\Delta\phi(\ell,\not\!\!E_{T})\) | | 20
\(p_{T}^{W}\) | | 21
Table 9: Table of input variables for the final signal discriminant for the \(H\to WW\to\mu\nu jj\) channel for two-jet events. Variables are listed by their rank of importance when used in the zero \(b\)-tags (0T) and one loose \(b\)-tag (1T) categories for \(M_{H}\leq 150\) GeV.
Variable | 0T | 1T
---|---|---
\(p_{T}^{j_{3}}\) | 1 |
\(\Delta\mathcal{R}(\ell,j_{2})\) | 2 |
\(|\Delta\eta(W,\ell)|\) | 3 |
\(|\Delta\eta(\ell,j_{1})|\) | 4 | 2
\(\cos\theta(j_{1},j_{2})_{\text{CM}}\) | 5 | 22
\(\Delta\phi(j_{1},j_{3})\) | 6 |
\(\mathcal{V}(j_{12})\) | 7 | 21
Recoil\((p_{T}^{W})\) | 8 | 16
\(\mathcal{A}\) | 9 |
\(m_{\ell\nu j_{1}}\) | 10 |
\(m_{j_{12}\ell}\) | 11 | 18
\(K_{T}^{\min}\) | 12 |
\(\cos[\angle(j_{1},\ell\nu)]_{\text{HCM}}\) | 13 |
\(m_{\ell\nu j_{12}}\) | 14 |
\(\Delta\phi(j_{3},j_{23})\) | | 1
\(q^{\ell}\times\eta^{\ell}\) | | 3
\(\Delta\phi(W,j_{2})\) | | 4
\(\not\!\!E_{T}^{\text{SC}}\) | | 5
\(\mathcal{S}(\ell j_{1}j_{2})\) | | 6
\(m_{j_{123}}\) | | 7
\(\Delta\mathcal{R}(\nu,j_{1})\) | | 8
\(\Delta\mathcal{R}(\ell,\text{light jet})\) | | 9
\(\Delta\eta(j_{1},j_{2})\) | | 10
\(m^{\text{Asym}}\) | | 11
\(p_{T}^{W}/(p_{T}^{\ell}+\not\!\!E_{T})\) | | 12
\(\mathcal{SIG}_{\text{jets}}(j_{123})\) | | 13
\(\mathcal{C}\) | | 14
\(\mathcal{SIG}(j_{12},j_{1})\) | | 15
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\) | | 17
\(M_{T}^{W}\) | | 19
\(m_{\ell\nu j_{12}}(p_{Z}(\nu)=0)\) | | 20
\(m_{j_{12}}\) | | 23
\(\mathcal{T}_{W\to\ell\nu}\) | | 24
\(\Delta\eta(\ell,\nu)\) | | 25
Table 10: Table of input variables for the final signal discriminant for the \(H\to WW\to\mu\nu jj\) channel for three-jet events. Variables are listed by their rank of importance when used in the zero \(b\)-tags (0T) and one loose \(b\)-tag (1T) categories for \(M_{H}\leq 150\) GeV.
MVA Input Variables
---
\(\sum(p_{T})^{\text{VIS}}\)
Recoil\((p_{T}^{W})\)
\(m_{j_{12}}\)
\(m^{\text{Asym}}\)
\(\mathcal{SIG}(W,\ell)\)
\(\mathrm{MVA_{MJ}}(VH)\)
\(|\Delta\eta(W,\ell)|\)
\(E(j_{1})\)
\(\Delta\mathcal{R}(\ell,j_{1})\)
\(\max|\Delta\phi(W,\{\ell\text{ or }\not\!\!E_{T}\})|\)
\(\angle(j_{1},j_{2})\)
\(\Delta\mathcal{R}(W,\ell)\)
Recoil\((p_{T}^{j_{12}})\)
\(\cos[\angle(j_{1},\ell\nu)]_{\text{HCM}}\)
\(\mathcal{H}(j_{12},j_{1})\)
\(\Delta\mathcal{R}(\ell,\text{light jet})\)
Table 11: Table of input variables for the final signal discriminant for the \(H\to WW\to e\nu jj\) channel for two-jet events in the pretag category for \(M_{H}\geq 155\) GeV. Variables are listed by their importance in the MVA.
MVA Input Variables
---
\(\sum(p_{T})^{\text{VIS}}\)
Recoil\((p_{T}^{W})\)
\(\mathrm{MVA_{MJ}}(VH)\)
\(\max|\Delta\phi(W,\{\ell\text{ or }\not\!\!E_{T}\})|\)
\(\mathcal{SIG}(W,\ell)\)
\(m^{\text{Asym}}\)
\(|\Delta\eta(W,\ell)|\)
\(m_{j_{12}}\)
\(\Delta\mathcal{R}(\ell,j_{1})\)
\(E(j_{1})\)
\(\angle(j_{1},j_{2})\)
\(\cos[\angle(j_{1},\ell\nu)]_{\text{HCM}}\)
\(\mathcal{H}(j_{12},j_{1})\)
Recoil\((p_{T}^{j_{12}})\)
\(\Delta\mathcal{R}(\ell,\text{light jet})\)
Table 12: Table of input variables for the final signal discriminant for the \(H\to WW\to e\nu jj\) channel for three-jet events in the pretag category for \(M_{H}\geq 155\) GeV. Variables are listed by their importance in the MVA.
MVA Input Variables
---
\(\mathcal{V}(j_{12})\)
\(\angle(\ell,\nu)\)
\(\mathcal{S}(\ell j_{1}j_{2})\)
\(\Delta\phi[W,\text{bis}(j_{1},j_{2})]\)
\(p_{T}^{j_{1}}\)
\(\min[\Delta\mathcal{R}(\nu,\{j_{1}\text{ or }j_{2}\})]\)
\(\cos[\angle(j_{1},\ell)]_{\text{HCM}}\)
\(\not\!\!E_{T}\)
\(\Delta\mathcal{R}(j_{1},j_{2})\)
\(p_{T}^{j_{12}}/\sum_{i=1}^{2}p_{T}^{j_{i}}\)
\(\min\Delta\mathcal{R}(j_{12},\{j_{1}\text{ or }j_{2}\})\)
\(p_{T}^{W}\)
\(\sum p_{T}(\ell,\not\!\!E_{T},j_{1},j_{2})\)
\(\min|\Delta\phi(W,\{\ell\text{ or }\not\!\!E_{T}\})|\)
\(p_{T}^{W}/(p_{T}^{\ell}+\not\!\!E_{T})\)
\(\min[\Delta\mathcal{R}(W,\{\ell\text{ or }\nu\})]\)
\(\Delta\phi(j_{12},j_{1})\)
\(\max[\mathcal{SIG}(W,\{\ell\text{ or }\nu\})]\)
\(\Delta p_{T}(j_{1},j_{2})\)
\(\max(p_{T}^{\ell},\not\!\!E_{T})/p_{T}^{W}\)
\(\mathcal{SIG}(j_{12},j_{1})\)
\(\min(p_{T}^{\ell},\not\!\!E_{T})/p_{T}^{W}\)
Table 13: Table of input variables for the final signal discriminant for the \(H\to WW\to\mu\nu jj\) channel for two-jet events in the pretag category for \(M_{H}\geq 155\) GeV. Variables are listed by their importance in the MVA.
MVA Input Variables
---
\(\sum(p_{T})^{\text{VIS}}\)
\(E(\ell)\)
\(\Delta\mathcal{R}(j_{1},j_{2})\)
\((p_{T}^{W}-p_{T}^{j_{12}})/(p_{T}^{W}+p_{T}^{j_{12}})\)
\(m_{j_{12}\ell}\)
\(\angle(j_{1},j_{2})_{\text{HCM}}\)
\(m_{T}^{j_{12}}\)
\(\Delta\phi[W,\text{bis}(j_{1},j_{2})]\)
\(\Delta\phi(W,j_{2})\)
\(K_{T}^{\min}\)
\(m_{\ell\nu j_{2}}\)
\(\Delta\mathcal{R}(\ell,j_{1})\)
\(\mathcal{S}(\nu_{1})\)
\(m_{\ell\nu j_{12}}(p_{Z}(\nu)=0)\)
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\)
\(\cos\theta(j_{1},j_{2})_{\text{CM}}\)
\(\min[\mathcal{SIG}(W,\{\ell\text{ or }\nu\})]\)
\(p_{T}^{W}/(p_{T}^{\ell}+\not\!\!E_{T})\)
\(\mathcal{SIG}(j_{12},j_{1})\)
\(\mathcal{SIG}_{\text{jets}}(j_{123})\)
\(p_{T}^{j_{12}}/\sum_{i=1}^{2}p_{T}^{j_{i}}\)
\(\mathcal{T}(j_{12})\)
\(\Delta\mathcal{R}(\ell,\nu)\)
\(p_{T}^{j_{123}}\)
Table 14: Table of input variables for the final signal discriminant for the \(H\to WW\to\mu\nu jj\) channel for three-jet events in the pretag category for \(M_{H}\geq 155\) GeV. Variables are listed by their importance in the MVA.
MVA Input Variables
---
\(\sum p_{T}(\ell,\not\!\!E_{T},j_{1},j_{2},j_{3},j_{4})\)
\(m_{j_{1234}}\)
\(p_{T}^{\ell}+\not\!\!E_{T}\)
\(\not\!\!E_{T}^{\text{SC}}\)
\(\Delta\mathcal{R}(\ell,j_{3})\)
\(\angle(j_{1},j_{2})_{\text{HCM}}\)
\((p_{T}^{W}-p_{T}^{j_{12}})/(p_{T}^{W}+p_{T}^{j_{12}})\)
\(\mathcal{V}_{j13}\)
\(E(\ell)\)
\(M_{T}^{W}\)
\(m^{\text{Asym}}\)
\(p_{T}^{\ell\nu j_{12}}/\sum p_{T}(\ell,\not\!\!E_{T},j_{1},j_{2})\)
\(K_{T}^{\min}\)
\(\mathcal{S}\)
\(|\Delta\eta(W,\ell)|\)
\(\cos(\angle(j_{1},\ell\nu))_{\text{HCM};4j}\)
\(\cos[\angle(j_{1}^{j_{12}\text{CM}},(W\to\ell\nu)^{\text{HCM}})]\)
\(p_{Z}^{\nu_{1}}\)
\(\Delta\phi[W,\text{bis}(j_{1},j_{2})]\)
\(\Delta p_{T}(j_{2},j_{23})\)
\(\Delta\mathcal{R}(j_{3},j_{13})\)
\(\Delta\mathcal{R}(W,\ell)\)
\(\angle(\ell,j_{12})\)
\(\angle(\ell,\nu)\)
\(p_{Z}^{\nu_{2}}\)
Recoil\((p_{T}^{W})\)
\(\not\!p_{T}^{\text{Sig}}\)
\(\eta^{\nu}\)
\(p_{T}^{W}/p_{T}^{j_{12}}\)
\(\min|\Delta\phi(W,\{\ell\text{ or }\not\!\!E_{T}\})|\)
Table 15: Table of input variables for the \(\mathrm{MVA_{tt}}\) discriminant for the \(H\to WW\to\ell\nu jjjj\) channel. Variables are listed by their importance in the MVA.
MVA Input Variables
---
\(\mathcal{W}_{j3}\)
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\)
\(\Delta\mathcal{R}(\ell,j_{3})\)
\(\mathcal{T}_{j23}\)
\(q^{\ell}\times\eta^{\ell}\)
\(\max|\Delta\eta(\ell,\{j_{1}\text{ or }j_{2}\})|\)
\(m_{j_{12}\ell}\)
\((p_{T}^{\ell}+\not\!\!E_{T}-\sum_{i=1}^{2}p_{T}^{j_{i}})/\sum p_{T}(\ell,\not\!\!E_{T},j_{1},j_{2})\)
\(\max|\Delta\eta(W,\{\ell\text{ or }\nu\})|\)
\(\Delta\mathcal{R}(\ell,\nu)\)
\(\Delta\eta(j_{1},j_{2})\)
\(\Delta\mathcal{R}(W,\ell)\)
\(\cos[\angle(\ell_{\ell\nu CM;4j},(W\to\ell\nu)_{\ell\nu jjCM;4j})]\)
\(\mathcal{SIG}_{\text{jets}}(\ell)\)
\(\mathcal{C}\)
\(\Delta\mathcal{R}(\ell,j_{2})\)
\((p_{T}^{W}-p_{T}^{j_{12}})/(p_{T}^{W}+p_{T}^{j_{12}})\)
\(\mathcal{A}\)
\(\cos[\angle(\ell_{\ell\nu CM},(W\to\ell\nu)_{\text{HCM}})]\)
\(\sum_{i=1}^{4}p_{T}^{j_{i}}\)
\(\mathcal{SIM}(\ell,\nu)\)
\(|\Delta\eta(W,\ell)|\)
\(p_{T}^{W}\)
\(\mathcal{A}(\nu_{1})\)
\(\angle(\ell,\nu)\)
\(\eta^{\nu}\)
\(E(\ell)\)
Recoil\((p_{T}^{W})\)
\(p_{T}^{W}/p_{T}^{j_{12}}\)
Table 16: Table of input variables for the \(\mathrm{MVA_{VJ}}\) discriminant for the \(H\to WW\to\ell\nu jjjj\) channel. Variables are listed by their importance in the MVA.
MVA Input Variables
---
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\)
\(\mathcal{SIG}_{\text{jets}}(\ell)\)
\(q^{\ell}\times\eta^{j_{2}}\)
\(\cos[\angle(j_{1}^{j_{12}\text{CM}},(W\to\ell\nu)^{\text{HCM}})]\)
\((p_{T}^{\ell}+\not\!\!E_{T}-\sum_{i=1}^{2}p_{T}^{j_{i}})/\sum p_{T}(\ell,\not\!\!E_{T},j_{1},j_{2})\)
\(\max|\Delta\eta(\ell,\{j_{1}\text{ or }j_{2}\})|\)
\(\mathcal{S}(\ell\nu_{2}j_{1}j_{2})\)
\(q^{\ell}\times\eta^{\ell}\)
\(\angle(\ell,\nu)\)
\(q^{\ell}\times\eta^{j_{3}}\)
\(\eta^{\nu}\)
\(|\Delta\eta(W,\ell)|\)
\((p_{T}^{W}-p_{T}^{j_{12}})/(p_{T}^{W}+p_{T}^{j_{12}})\)
\(\angle(j_{1},j_{2})_{\text{HCM}}\)
\(\Delta\mathcal{R}(W,\ell)\)
\(p_{T}^{\ell\nu j_{12}}/\sum p_{T}(\ell,\not\!\!E_{T},j_{1},j_{2})\)
\(\mathcal{SIM}(\ell,\nu)\)
\(\Delta\mathcal{R}(\ell,j_{2})\)
\(m_{j_{12}\ell}\)
\(\cos[\angle(\ell_{\ell\nu CM;4j},(W\to\ell\nu)_{\ell\nu jjCM;4j})]\)
\(\mathcal{SIG}_{\text{jets}}(j_{123})\)
\(\mathcal{SIG}_{\text{jets}}(j_{1234})\)
\(\angle[\ell,\text{bis}(j_{1},j_{2})]\)
\(\Delta\mathcal{R}(j_{3},j_{13})\)
\(\sum_{i=1}^{4}p_{T}^{j_{i}}\)
\(\max[\Delta\mathcal{R}(W,\{\ell\text{ or }\nu\})]\)
\(p_{T}^{W}/p_{T}^{j_{12}}\)
Table 17: Table of input variables for the \(\mathrm{MVA_{VV}}\) discriminant for the \(H\to WW\to\ell\nu jjjj\) channel. Variables are listed by their importance in the MVA.
Variable | \(e\) 0T | \(e\) 1L | \(\mu\) 0T | \(\mu\) 1L
---|---|---|---|---
\(\mathrm{MVA_{tt}}\) | 1 | 5 | 5 | 3
\(\mathrm{MVA_{MJ}}(H\to VV)\) | 2 | 7 | 4 | 9
\(\mathrm{MVA_{VJ}}\) | 3 | 1 | 1 | 1
\(\mathrm{MVA_{VV}}\) | 4 | 8 | 7 | 2
\(\mathcal{SIG}_{\text{jets}}(\ell)\) | 5 | 12 | 14 |
\(m_{j_{12}\ell}\) | 6 | | |
\(p_{T}^{j_{23}}/\sum_{i=2}^{3}p_{T}^{j_{i}}\) | 7 | | 9 | 5
\(\not\!\!E_{T}^{\text{Sig}}\) | 8 | 14 | |
\(\mathrm{MVA_{MJ}}(VH)\) | 9 | 3 | 12 | 14
\(E(\ell)\) | 10 | 6 | | 16
\(p_{T}^{j_{12}}/\sum_{i=1}^{2}p_{T}^{j_{i}}\) | 11 | | |
\(q^{\ell}\times\eta^{\ell}\) | 12 | | 3 | 8
\(\cos[\angle(\ell_{\ell\nu CM},(W\to\ell\nu)_{\text{HCM}})]\) | 13 | | |
\(\mathcal{A}(\nu_{1})\) | 14 | | |
\((p_{T}^{W}-p_{T}^{j_{12}})/(p_{T}^{W}+p_{T}^{j_{12}})\) | 15 | | |
\(\Delta\mathcal{R}(W,\ell)\) | 16 | | | 13
\(\Delta p_{T}(\ell,\not\!\!E_{T})/p_{T}^{W}\) | 17 | 11 | |
\(\cos\theta(\ell)\) | 18 | 9 | |
\(p_{T}^{\text{VIS}}\) | | 2 | |
\(\Delta\mathcal{R}(\nu,j_{1})\) | | 4 | |
\(\mathcal{SIG}_{\text{jets}}(j_{123})\) | | 10 | |
\(q^{\ell}\times\eta^{j_{1}}\) | | 13 | |
\(|\Delta\eta(W,\ell)|\) | | 15 | 13 |
\(\max|\Delta\eta(W,\{\ell\text{ or }\nu\})|\) | | 16 | |
\(\cos\theta(j_{1},j_{2})_{\text{CM}}\) | | | 2 | 12
\(\mathcal{C}\) | | | 6 |
\(|\Delta\eta(\ell,j_{3})|\) | | | 8 |
\(\cos[\angle(j_{1},\ell\nu)]_{\text{HCM}}\) | | | 10 |
\(\mathcal{A}\) | | | 11 |
\(p_{T}^{\ell}+\not\!\!E_{T}\) | | | 15 |
\(\angle[\ell,\text{bis}(j_{1},j_{2})]\) | | | 16 |
\(\cos\theta(\ell)_{\ell\nu\text{CM}}\) | | | 17 |
\(\mathcal{T}_{W\to\ell\nu}\) | | | 18 |
\(\Delta\eta(\ell,\nu)\) | | | 19 |
\(M_{T}^{W}\) | | | 20 |
\(\Delta\mathcal{R}(\ell,j_{3})\) | | | | 4
\(\cos[\angle(j_{1},\ell)]_{\text{HCM}}\) | | | | 6
\(m_{\ell\nu j_{12}}(p_{Z}(\nu)=0)\) | | | | 7
\(\mathcal{S}(\ell j_{1}j_{2})\) | | | | 10
\(\max[\Delta\mathcal{R}(W,\{\ell\text{ or }\nu\})]\) | | | | 11
\(\max[\mathcal{SIG}(j_{12},\{j_{1}\text{ or }j_{2}\})]\) | | | | 15
Recoil\((p_{T}^{W})\) | | | | 17
Table 18: Table of input variables for the final signal discriminant for the \(H\to WW\to\ell\nu jjjj\) channel. Variables are listed by their rank of importance when used in the zero \(b\)-tags (0T) and one loose \(b\)-tag (1L) categories for each charged lepton type.